Distributing Command Line Tools with Docker

Last time, I covered some of the basics of using Docker for isolated local development environments. This time, I’d like to talk about how Docker can be used to distribute command line tools with complex dependencies in a portable way.

Before I go any further, I want to point out that I am not the first person to use Docker in this way. For another example, see the command line interface for Code Climate’s new platform.


Why would you want to distribute a command line application with a container instead of running it directly on your host? One reason could be that your application has a complicated setup and installation process. For example, your application might require many additional libraries to be installed. Or, your language of choice might not provide a good means of distributing applications without first installing all of the developer tools (e.g. Ruby). There are often language-specific alternatives to this approach, but using Docker as a distribution mechanism can work for almost anything you can install within a Linux container.

Simple Example: GNU Date

For a contrived example, let’s say you want to make use of the version of date(1) distributed with Ubuntu instead of the version available on OS X. (Yes, you can get GNU coreutils from Homebrew; this is a contrived example!) Let’s say we want to use date to get an ISO 8601-formatted date from a relative date, say “next Friday.” We can do that using docker run like so:

$ docker run --rm -ti ubuntu:12.04 date -d "next Friday" -I

As you can see, we can directly invoke a command contained in a specific image, and pass it arguments. Let’s take this a step further and make a wrapper script:

#!/bin/sh
# gnu-date - a wrapper script for invoking `date(1)` from within a Docker image
docker run --rm -ti ubuntu:12.04 date "$@"

If we save this as gnu-date, mark it as executable, and put it somewhere in our $PATH, we can invoke it like so:

$ gnu-date -d "next Friday" -I

Using a wrapper script like this to invoke docker run allows us to distribute our own applications.
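One gotcha with this simple wrapper: the -t flag requests a TTY, and docker run refuses to allocate one when stdin is not a terminal, so the script as written breaks in pipelines like gnu-date -I | sort. Here is a sketch of a more pipeline-friendly variant (the tty_flags helper name is my own):

```shell
#!/bin/sh
# gnu-date, revisited: only request a TTY when we actually have one,
# so the wrapper also works inside pipelines and scripts.

tty_flags() {
  if [ -t 0 ] && [ -t 1 ]; then
    echo "-ti"  # terminal attached: interactive with a TTY
  else
    echo "-i"   # pipeline or script: keep stdin open, skip the TTY
  fi
}

docker run --rm $(tty_flags) ubuntu:12.04 date "$@"
```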

Custom Images

As a more realistic example, let’s assume we have a GLI-based Ruby command line app we’d like to distribute to users who are not Ruby developers, but do have Docker Toolbox installed. We can write a Dockerfile to build an image based on the ruby:2.2 image like so:

FROM ruby:2.2

COPY ./ruby-cli-app /app
RUN cd /app \
 && bundle install

ENTRYPOINT ["ruby-cli-app"]

And we can build our image:

$ docker build -t ruby-cli-app .

And run it:

$ docker run --rm -ti ruby-cli-app help
NAME
    ruby-cli-app - Describe your application here

SYNOPSIS
    ruby-cli-app [global options] command [command options] [arguments...]

GLOBAL OPTIONS
    -f, --flagname=The name of the argument - Describe some flag here (default: the default)
    --help - Show this message
    -s, --[no-]switch - Describe some switch here
    --version - Display the program version

COMMANDS
    help - Shows a list of commands or help for one command

By using an ENTRYPOINT, all of the arguments to docker run following our image name are passed as arguments to our application.
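Because the ENTRYPOINT above uses the exec (JSON array) form, the arguments are appended directly, without a shell in between. If you also want a sensible default when no arguments are given, CMD can supply fallback arguments; a sketch extending the Dockerfile above:

```dockerfile
FROM ruby:2.2

COPY ./ruby-cli-app /app
RUN cd /app \
 && bundle install

ENTRYPOINT ["ruby-cli-app"]
# CMD provides default arguments for the ENTRYPOINT; anything the user
# passes after the image name replaces them entirely
CMD ["help"]
```

With this, a bare `docker run --rm -ti ruby-cli-app` prints the help text instead of an error.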

Distributing via Docker Hub

To actually distribute our application in this way, we can publish our custom image on Docker Hub. Here’s a Makefile and a more advanced wrapper script:


PREFIX ?= /usr/local
VERSION = "v0.0.1"

all: install

install:
	mkdir -p $(DESTDIR)$(PREFIX)/bin
	install -m 0755 ruby-cli-app-wrapper $(DESTDIR)$(PREFIX)/bin/ruby-cli-app

uninstall:
	@$(RM) $(DESTDIR)$(PREFIX)/bin/ruby-cli-app
	@docker rmi atomicobject/ruby-cli-app:$(VERSION)
	@docker rmi atomicobject/ruby-cli-app:latest

build:
	@docker build -t atomicobject/ruby-cli-app:$(VERSION) . \
	&& docker tag -f atomicobject/ruby-cli-app:$(VERSION) atomicobject/ruby-cli-app:latest

publish: build
	@docker push atomicobject/ruby-cli-app:$(VERSION) \
	&& docker push atomicobject/ruby-cli-app:latest

.PHONY: all install uninstall build publish
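With the Makefile and wrapper in place, everyday use might look like this (a sketch; the sudo and PREFIX details depend on where you choose to install):

```
$ make build                          # build the image and tag it :latest
$ sudo make install                   # put the wrapper in /usr/local/bin
$ make PREFIX="$HOME/.local" install  # or install somewhere that needs no sudo
$ make publish                        # push both tags to Docker Hub
```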



#!/bin/bash
# ruby-cli-app
# A wrapper script for invoking ruby-cli-app with docker
# Put this script in $PATH as `ruby-cli-app`

VERSION="v0.0.1"
PROGNAME="$(basename "$0")"

# Helper functions for guards
error(){
  error_code=$1
  echo "ERROR: $2" >&2
  echo "($PROGNAME wrapper version: $VERSION, error code: $error_code )" >&2
  exit $1
}
check_cmd_in_path(){
  cmd=$1
  which "$cmd" > /dev/null 2>&1 || error 1 "$cmd not found!"
}

# Guards (checks for dependencies)
check_cmd_in_path docker
check_cmd_in_path docker-machine
docker-machine active > /dev/null 2>&1 || error 2 "No active docker-machine VM found."

# Set up mounted volumes, environment, and run our containerized command
exec docker run \
  --interactive --tty --rm \
  --volume "$PWD":/wd \
  --workdir /wd \
  "atomicobject/ruby-cli-app:$VERSION" "$@"

Now that we have a container-based distribution mechanism for our application, we’re free to make use of whatever dependencies we need within the Linux container. We can use mounted volumes to allow our application to access files and even sockets from the host. We could even go as far as the Code Climate CLI does, and take control of Docker within our container to download and run additional images.
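A sketch of what that last idea could look like in the wrapper: mounting the host’s Docker socket so the containerized tool can start sibling containers. The socket path assumes a standard Linux or boot2docker host, and the image name is the one used above:

```shell
#!/bin/sh
# Sketch: share the host's Docker socket with the container, alongside
# the usual working-directory mount, so the tool can drive Docker itself.
DOCKER_SOCK="/var/run/docker.sock"

# Build up the volume flags for docker run
VOLUME_FLAGS="--volume ${DOCKER_SOCK}:${DOCKER_SOCK} --volume ${PWD}:/wd"

# The eventual invocation (requires a running Docker daemon):
#   exec docker run --rm -ti $VOLUME_FLAGS --workdir /wd \
#     "atomicobject/ruby-cli-app:$VERSION" "$@"
echo "$VOLUME_FLAGS"
```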


The biggest downside of this approach is that it requires users to have Docker installed first. Depending on your application, however, having a single dependency on Docker may be much simpler to support. Imagine, for example, having dependencies on multiple libraries across multiple platforms and dealing with other unexpected interactions with your users’ system configurations; this would be a great situation in which to choose Docker.

There’s another gotcha to watch out for when running more complex setups: It can be confusing to keep track of which files are and are not accessible via mounted volumes.
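For example, since only $PWD is mounted (as /wd), a path argument pointing outside the current directory names a file the container simply cannot see. One way to soften this is a guard in the wrapper; warn_if_unmounted is a hypothetical helper of my own, not part of the wrapper above:

```shell
#!/bin/sh
# Hypothetical guard: warn when a file argument lies outside the mounted
# $PWD, since the container only sees $PWD (mounted as /wd).
warn_if_unmounted() {
  for arg in "$@"; do
    [ -e "$arg" ] || continue  # only check arguments that name real files
    dir="$(cd "$(dirname "$arg")" && pwd)"
    case "$dir/" in
      "$PWD"/*) ;;  # inside the mounted tree: the container can see it
      *) echo "WARNING: $arg is outside $PWD and will not be visible in the container" >&2 ;;
    esac
  done
}
```

It could be called near the top of the wrapper, as warn_if_unmounted "$@", before the exec docker run line.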


All of the examples above can also be found on our GitHub.


I am actively using this approach on an internal tool (to build and deploy Craft CMS-based websites) right now. If you also try out this approach, I’d love to hear about it! Please leave questions or comments below. Thanks!