Docker is a great tool for deploying your application to staging and production environments, but the current ecosystem doesn’t make it particularly pleasant to use in development. Docker Compose does help a lot, but things could be more efficient: Even for the most common tasks, you still have to pass additional options to docker and docker-compose, and resort to utilities like awk and xargs.
This is where docker-dev comes into play: It offers a simple interface to perform the most common tasks locally, and it does so by wrapping docker and docker-compose without locking you in (you can still use those commands directly).
This article explains how you can take advantage of it in a Python project, and assumes that the application is already dockerized. This specific example involves running a web application, but you can also use docker-dev in other situations, such as running one-off commands.
Example Docker Compose Project
Say you have a Python web application that uses PostgreSQL. Your docker-compose.yml file probably looks like this:
version: "2.1"
services:
app:
build: "."
env_file: "runtime/dev-env-vars"
command: "gunicorn --reload"
restart: "always"
volumes:
- ".:/opt/app:ro"
links:
- "db"
ports:
- "127.0.0.1:8080:80"
db:
image: "postgres:9.5-alpine"
environment:
POSTGRES_PASSWORD: "password"
volumes:
- "db:/var/lib/postgresql/data"
volumes:
db:
With the configuration above, your Dockerfile will install the Python distribution (and its dependencies), and the data in your PostgreSQL database will persist even if the container is terminated along with its unnamed volumes. The services are stateless because the only data that’s persisted and shared by their containers is kept outside (in the named volume “db” in this case). The current directory is also mounted in the container as a read-only volume, so that you always run the latest code.
Performing Common Tasks
docker-dev simplifies the following tasks:
Building Images
This is by far the most painful aspect of using Docker with a Python application in development: You want to install your distribution with “pip install -e .” or “python setup.py develop”, but that requires creating a *.egg-info directory in the same directory as the distribution.
It may be tempting to allow the container to write to the current directory on the host, but those files will most likely be owned by a user other than you, so this option will cause some hassle down the line. Ideally, you’d just configure the build to output all artefacts outside your project directory, which I’ve achieved in environments like NodeJS, but that’s just not an option in Python: *.egg-info must be generated in that directory.
To work around this issue, docker-dev allows you to run commands on the host before running “docker-compose build”. In our case, we’ll use that to generate the *.egg-info directory so that the container won’t attempt to create it. You just have to create a file called .dockerdev-prebuild, which in our case will contain the following:
#!/bin/bash
# Make Bash not tolerate errors
set -o nounset
set -o errexit
set -o pipefail
# Generate the *.egg-info directory without actually installing anything
rm -rf *.egg-info
python setup.py develop --editable --build-directory . --no-deps --dry-run
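For docker-dev to run this hook as a program (which the shebang suggests it does), the file will likely also need to be executable:
chmod +x .dockerdev-prebuild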
So, without further ado: The command to build the image becomes “docker-dev build” instead of “docker-compose build”. In addition to running the pre-build script (and thus generating the *.egg-info directory), it will also remove intermediate build-time containers and pull the latest version of the parent image.
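If you’re curious what that maps to, the equivalent docker-compose invocation is presumably along these lines (my assumption rather than docker-dev’s documented behaviour, though both flags are standard docker-compose options):
./.dockerdev-prebuild
docker-compose build --force-rm --pull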
Starting Long-Running Processes
“docker-compose up” is what you’d use to start a long-running process, like a web server. In many cases it will be enough, but things could still be better in development.
One minor issue is that, by default, “docker-compose up” will reuse stopped containers (which may have stale data/code), will keep running even after a container terminates, and will leave orphan containers behind as you remove or rename services. To avoid this, “docker-dev up” runs “docker-compose up” with the right arguments.
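Those arguments are presumably something like the following (again, my guess; the three flags themselves are standard docker-compose options):
docker-compose up --force-recreate --abort-on-container-exit --remove-orphans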
But things actually get much worse if you want to start one or more specific services explicitly. Imagine that you’re working on a single-page application whose backend is powered by your Python application: You’d have separate services for the client- and server-side applications, plus a reverse proxy (e.g., Nginx) to serve the static files or pass requests on to the web application server. Running “docker-compose up reverse-proxy” has the following problems:
- The client- and server-side applications would be started, but you wouldn’t see any logs from them; you’d only see logs from the reverse proxy service.
- Pressing Ctrl+C would only terminate the reverse proxy.
- If any of the dependent services died, the other services would continue to run, and because you can’t see their logs, you wouldn’t know when or why it happened.
To work around these issues, run “docker-dev up2 reverse-proxy” instead. This sub-command has to do quite a lot behind the scenes to avoid the problems above, and it’s expected to replace “docker-dev up” eventually.
Running One-Off Commands
“docker-dev run” wraps “docker-compose run” and makes sure that the containers are removed once they terminate. This is the sub-command that you’d use to run tests or Django management commands. For example:
docker-dev run app nosetests
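This is presumably equivalent to passing docker-compose’s standard “--rm” flag yourself:
docker-compose run --rm app nosetests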
Having said this, if you’re going to be running one-off commands, you might actually want to create a separate Docker Compose service, as in the example below:
version: "2.1"
services:
app-base:
build: "."
command: "false"
env_file: "runtime/dev-env-vars"
volumes:
- ".:/opt/app:ro"
app:
extends:
service: "app-base"
command: "gunicorn --reload"
restart: "always"
links:
- "db"
ports:
- "127.0.0.1:8080:80"
app-task:
extends:
service: "app-base"
links:
- "db"
stdin_open: true
db:
image: "postgres:9.5-alpine"
environment:
POSTGRES_PASSWORD: "password"
volumes:
- "db:/var/lib/postgresql/data"
volumes:
db:
You can think of “app-base” as an abstract base class, as it’s meant to be extended rather than used directly. “app” is the web server, which you’d start with “docker-dev up app” (you could no longer run “docker-dev up” without arguments, because there are now services that you wouldn’t want started). Finally, “app-task” is the service you’d use to run one-off commands, like “docker-dev run app-task nosetests”.
Removing Docker Resources
The Docker Compose project file above will have resulted in Docker images, networks and volumes. “docker-compose down” is what you’d use to have them removed when you no longer need them, but its default arguments wouldn’t remove them all. So, “docker-dev down” is what you’d run to have all the resources in that project removed.
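For comparison, a full clean-up with docker-compose alone would look something like this (my approximation of what docker-dev does; the flags themselves are standard):
docker-compose down --rmi all --volumes --remove-orphans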
Cleaning Up Your Local Docker Installation
It’s pretty easy to end up with obsolete images, containers, volumes and networks over time, and docker-clean can help with that: It’ll remove all your Docker resources, except for images created in the past week.
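If you’d rather stick to plain Docker, recent versions offer a roughly comparable (though not identical) clean-up via the prune command; note that, unlike the above, this one leaves volumes alone:
docker system prune --all --filter "until=168h"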
Conclusion
docker-dev aims to do just one thing and do it well: Take the pain out of building applications with Docker locally. We built it at 2degrees to make ourselves more efficient and to work around the *.egg-info limitation described above, and it has worked well for us.
Got any comments? Please let us know on GitHub or use the form below!