Configuration Management in Docker

26/04/2017

This is a guest post from Michael Jones, originally published on his blog.

Running a microservice-oriented, containerised environment can be challenging. Alongside security, monitoring and support, one of those challenges is configuration management. When we talk about configuration management, big names like Chef and Puppet usually come up. However, these solutions may be overkill for some needs, as they were originally designed to manage configuration on physical machines.

There are several approaches to managing configuration for Docker, and you will need to decide which method to use on a case-by-case basis. Here we will discuss the most common methods, as well as an approach that I have found very useful.

Baking or frying?

When it comes down to it, configuration is usually baked or fried. Baking a Docker image is a fairly straightforward concept: all the configuration is passed in at the build stage, for example using COPY or ADD instructions in the Dockerfile to place files in specific directories. You then have a preconfigured image that you can ship anywhere, knowing the configuration won’t change. The downside is that if any of the configuration needs to change, the image must be rebuilt.
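
As a minimal sketch of baking (the image and file names here are just for illustration), the Dockerfile copies the configuration in at build time:

# Bake the config into the image at build time; changing it means rebuilding
FROM nginx:latest
COPY myconfig.conf /etc/nginx/conf.d/default.conf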

With frying, the image is built to take environment variables and pass them into the configuration when the container launches. This is done with ‘docker run -e foo=bar someimage’. It allows containers to be launched dynamically without modifying the Docker image. The downside is that you lose parity across dev and prod environments, as the same image can easily be launched with different configuration each time.
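
One common way to implement frying (a sketch; the template file, entrypoint and use of envsubst are my assumptions, not prescribed by Docker) is an entrypoint script that renders the config from environment variables at startup:

#!/bin/sh
# Substitute environment variables such as $foo into the config template at launch
envsubst < /etc/myapp/myconfig.conf.template > /etc/myapp/myconfig.conf
# Hand over to the main process so it receives signals properly
exec myapp --config /etc/myapp/myconfig.conf

Launching with ‘docker run -e foo=bar someimage’ then produces a different rendered config each time, without touching the image.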

There are pros and cons to both methods, but it really depends on what you’re trying to achieve and how dynamic your containers need to be. If you expect to make constant changes to configuration, the frying method is ideal; baking your configuration is a better choice if you want a consistent environment and don’t need to be dynamic.

Using Docker volumes

In a lot of cases, configuration is passed into a container by attaching a volume at launch, with a command like this:

docker run -v /path/to/config/myconfig.conf:/etc/myconfig.conf someimage

(see Docker volumes)

This works well but presents two problems. The first is that you need a solid solution for persisting and storing these configuration files, or a tool such as Chef or Puppet to place them on the host. The second is how to update this configuration dynamically should one of your configuration values change; the answer is: not very easily. Last time I checked, a volume can only be mounted at launch, so to update something you would have to restart the container. That might be acceptable for some workloads, but I’ve found it inconvenient.

The approach that I have used

I needed the ability to update configuration on the go, without having to rebuild images or engineer a method to update environment variables across hundreds of running containers.

A solution that made sense for a project I was working on was Confd.

In short, Confd is an open source, lightweight configuration management tool that focuses on keeping local files up to date based on values stored in a key-value store (KVS) such as DynamoDB, Etcd, Consul, Redis and a few others. It can also reload a program or run a task when it detects a change.

Confd can run in two modes: interval and onetime. In interval mode, Confd checks the KVS for changes at regular intervals; in onetime mode, it processes the templates once and exits.
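
For example, assuming an Etcd backend on its default port, the two modes are invoked like this:

# onetime: render the templates once and exit
confd -onetime -backend etcd -node http://127.0.0.1:2379

# interval: poll the KVS for changes every 30 seconds
confd -interval 30 -backend etcd -node http://127.0.0.1:2379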

Essentially this is an extension of the frying method, but with added features: the config is not only applied on launch of the container, it can also be updated while the container is running, triggering a task such as reloading Nginx. In simplified terms, this is what I wanted to happen in my containers: Confd watches the KVS, rewrites the local config file when a value changes and reloads the service.

To use Confd you still need to get the config templates into the image, either at build time or by pulling them down dynamically when the container launches.
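
Confd looks for template resource configs in /etc/confd/conf.d and source templates in /etc/confd/templates by default, so baking them in takes two lines in the Dockerfile (file names hypothetical):

# Place the Confd config and template in their default locations
COPY myconfig.toml /etc/confd/conf.d/myconfig.toml
COPY myconfig.conf.tmpl /etc/confd/templates/myconfig.conf.tmpl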

Confd needs three things:

  • A backend KVS such as Etcd, Consul, Redis etc. (I used DynamoDB).
  • A template resource config, which defines which keys to watch, where the source template lives and where the rendered config should be written.
  • A source template (see the sketches after this list).
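
As a sketch of the last two items (the key name and paths are made up for illustration), the template resource config is a TOML file tying the keys, source template and destination together, along with a command to run on change:

# /etc/confd/conf.d/myconfig.toml
[template]
src = "myconfig.conf.tmpl"
dest = "/etc/myconfig.conf"
keys = [
    "/myapp/upstream",
]
reload_cmd = "nginx -s reload"

The source template itself uses Go's text/template syntax to pull values out of the KVS:

# /etc/confd/templates/myconfig.conf.tmpl
upstream myapp {
    server {{getv "/myapp/upstream"}};
}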

I designed the Dockerfile so that the entrypoint command called a simple shell script, following the workflow above, that:

  1. Started Confd in interval mode, redirecting its output to stdout:
    confd -interval 15 -backend dynamodb -table test 2>&1 &
  2. Ran the service the container is intended for in the foreground, e.g.
     nginx -g 'daemon off;'
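
Put together, a minimal version of that entrypoint script (reusing the DynamoDB table name from above) looks like this:

#!/bin/sh
# 1. Start Confd in the background, polling DynamoDB every 15 seconds
confd -interval 15 -backend dynamodb -table test 2>&1 &

# 2. Run the container's main service in the foreground
exec nginx -g 'daemon off;'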

With this setup, I could update a value in the database and then watch all my containers detect the change and reload their config.
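
With the DynamoDB backend, Confd expects items with a key and a value attribute, so an update (the key name here is hypothetical) is a single put-item call:

# Every container polling this table picks the change up within one interval
aws dynamodb put-item --table-name test \
    --item '{"key": {"S": "/myapp/upstream"}, "value": {"S": "10.0.0.2:8080"}}'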

Take a look at the Confd Quick Start Guide to learn how to implement this.

In conclusion

All these methods have pros and cons, but ultimately, to have complete end-to-end automation, you need to choose what is right for your application. You also need to take your environment into consideration: how important is parity between different stages? Do you want to use lots of storage holding many different preconfigured images, or one image that you can configure dynamically? Does it matter that things might take longer because you are running scripts in a container after it has been provisioned?

Answering these questions will point you to the approach you should take.


About Michael Jones

Michael has been exploring the huge world of DevOps for just over a year. He blogs about DevOps, automation and technology to share his knowledge so that others can benefit from his experience, and to give himself a place to refer back to anything useful he might otherwise forget.