Distributing Command Line Tools with Docker

Last time, I covered some of the basics of using Docker for isolated local development environments. This time, I’d like to talk about how Docker can be used to distribute command line tools with complex dependencies in a portable way.

Before I go any further, I want to point out that I am not the first person to use Docker in this way. For another example, see the command line interface for Code Climate’s new platform.


Why would you want to distribute a command line application with a container instead of running it directly on your host? One reason could be that your application has a complicated setup and installation process. For example, it might require many additional libraries to be installed. Or your language of choice might not provide a good means of distributing applications without first installing all of the developer tools. There are language-specific alternatives to this approach, but using Docker as a distribution mechanism works for almost anything you can install within a Linux container.

Simple Example: GNU Date

For a contrived example, let’s say you want to make use of the version of date(1) distributed with Ubuntu instead of the version available on OS X. (Yes, you can get GNU coreutils from Homebrew; this is a contrived example!) Suppose we want to use date to get an ISO 8601-formatted date from a relative date, say “next Friday.” We can do that using docker run like so:

$ docker run --rm -ti ubuntu:12.04 date -d "next Friday" -I

As you can see, we can directly invoke a command contained in a specific image, and pass it arguments. Let’s take this a step further and make a wrapper script:

#!/bin/sh
# gnu-date - a wrapper script for invoking `date(1)` from within a Docker image
docker run --rm -ti ubuntu:12.04 date "$@"

If we save this as gnu-date, mark it as executable, and put it somewhere in our $PATH, we can invoke it like so:

$ gnu-date -d "next Friday" -I

Using a wrapper script like this to invoke docker run allows us to distribute our own applications.

Custom Images

As a more realistic example, let’s assume we have a GLI-based Ruby command line app we’d like to distribute to users who are not Ruby developers, but do have Docker Toolbox installed. We can write a Dockerfile to build an image based on the ruby:2.2 image like so:

FROM ruby:2.2

COPY ./ruby-cli-app /app
RUN cd /app \
 && bundle install

ENTRYPOINT ["ruby-cli-app"]

And we can build our image:

$ docker build -t ruby-cli-app .

And run it:

$ docker run --rm -ti ruby-cli-app help
NAME
    ruby-cli-app - Describe your application here

SYNOPSIS
    ruby-cli-app [global options] command [command options] [arguments...]

GLOBAL OPTIONS
    -f, --flagname=The name of the argument - Describe some flag here (default: the default)
    --help - Show this message
    -s, --[no-]switch - Describe some switch here
    --version - Display the program version

COMMANDS
    help - Shows a list of commands or help for one command

By using an ENTRYPOINT, all of the arguments to docker run following our image name are passed as arguments to our application.
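Conceptually, an exec-form ENTRYPOINT makes the container behave like a single binary. Here’s a plain-shell sketch of that behavior (this emulates the idea, it is not Docker internals; `entrypoint` and `run_image` are illustrative names):

```shell
# Stand-in for ENTRYPOINT ["ruby-cli-app"]; `echo` is used here so the
# sketch runs anywhere.
entrypoint="echo"

# docker run <image> ARGS...  → ENTRYPOINT + ARGS..., exec'd as one command.
run_image() {
  "$entrypoint" "$@"
}

run_image help   # prints "help", just as `docker run ruby-cli-app help` runs `ruby-cli-app help`
```

Because the arguments are appended verbatim, flags and quoted strings pass through to the containerized application untouched.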

Distributing via Docker Hub

To actually distribute our application in this way, we can publish our custom image on Docker Hub. Here’s a Makefile and a more advanced wrapper script:


PREFIX ?= /usr/local
VERSION = "v0.0.1"

all: install

	mkdir -p $(DESTDIR)$(PREFIX)/bin
	install -m 0755 ruby-cli-app-wrapper $(DESTDIR)$(PREFIX)/bin/ruby-cli-app

	@$(RM) $(DESTDIR)$(PREFIX)/bin/ruby-cli-app
	@docker rmi atomicobject/ruby-cli-app:$(VERSION)
	@docker rmi atomicobject/ruby-cli-app:latest

	@docker build -t atomicobject/ruby-cli-app:$(VERSION) . \
	&& docker tag -f atomicobject/ruby-cli-app:$(VERSION) atomicobject/ruby-cli-app:latest

publish: build
	@docker push atomicobject/ruby-cli-app:$(VERSION) \
	&& docker push atomicobject/ruby-cli-app:latest

.PHONY: all install uninstall build publish



#!/bin/sh
# ruby-cli-app
# A wrapper script for invoking ruby-cli-app with docker
# Put this script in $PATH as `ruby-cli-app`

PROGNAME="$(basename "$0")"
VERSION="v0.0.1"

# Helper functions for guards
error(){
  error_code=$1
  echo "ERROR: $2" >&2
  echo "($PROGNAME wrapper version: $VERSION, error code: $error_code)" >&2
  exit $1
}

check_cmd_in_path(){
  cmd=$1
  which $cmd > /dev/null 2>&1 || error 1 "$cmd not found!"
}

# Guards (checks for dependencies)
check_cmd_in_path docker
check_cmd_in_path docker-machine
docker-machine active > /dev/null 2>&1 || error 2 "No active docker-machine VM found."

# Set up mounted volumes, environment, and run our containerized command
exec docker run \
  --interactive --tty --rm \
  --volume "$PWD":/wd \
  --workdir /wd \
  "atomicobject/ruby-cli-app:$VERSION" "$@"

Now that we have a container-based distribution mechanism for our application, we’re free to make use of whatever dependencies we need within the Linux container. We can use mounted volumes to allow our application to access files and even sockets from the host. We could even go as far as the Code Climate CLI does, and take control of Docker within our container to download and run additional images.


The biggest downside of this approach is that it requires users to first have Docker installed. Depending on your application, however, having a single dependency on Docker may be much simpler to support. Imagine, for example, having dependencies on multiple libraries across multiple platforms and dealing with other unexpected interactions with your users’ system configurations; this would be a great situation in which to choose Docker.

There’s another gotcha to watch out for when running more complex setups: it can be confusing to keep track of which files are and are not accessible via mounted volumes.
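The rule of thumb: with `--volume "$PWD":/wd`, only paths under the host’s working directory exist inside the container. The helper below is a hypothetical illustration of that mapping (`host_to_container_path` is not a Docker feature, just a sketch of the mental model):

```shell
# Given a host path, print its in-container equivalent under the /wd
# mount, or fail if the path lies outside $PWD and is therefore
# invisible to the container.
host_to_container_path() {
  case "$1" in
    "$PWD")   echo "/wd" ;;
    "$PWD"/*) echo "/wd${1#$PWD}" ;;
    *)        return 1 ;;   # outside the mount: the container cannot see it
  esac
}
```

For example, `host_to_container_path "$PWD/Gemfile"` prints `/wd/Gemfile`, while a path like `/etc/hosts` has no mapping at all, which is exactly the kind of surprise users hit when they pass absolute paths to a containerized tool.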


All of the examples above can also be found on our GitHub.


I am actively using this approach on an internal tool (to build and deploy Craft CMS-based websites) right now. If you also try out this approach, I’d love to hear about it! Please leave questions or comments below. Thanks!

  • Dave Franco says:

    This is a really nice approach to what I’m looking for. I’m creating a command line tool to manage an infrastructure, and for development purposes this is perfect. For example, I’m using Python setuptools, and if in the future I want to uninstall or update, it’s a mess. So thanks :)

    • Mike English says:

      Thanks, Dave! I’m glad to hear this was helpful.
      Let me know if you end up writing about your tool, too.

  • jensvdh says:

    Hi! This is great.

    I want to build a CLI tool with docker that runs Node/Grunt internally. My CLI will be used to modify files.

    Is there a way for my app inside of my docker container to modify files in the PWD of the host machine? Or files passed in.

  • Joshua says:

    I spent a lot of time recently developing a command line tool written in Python and planned to distribute it in a Docker container. During the initial development, I was only running the Python scripts from my local command line, and everything was looking great. Once I had the basics ready to go, I put them into a Docker container and gave it a try, but the output of my scripts was being drawn incorrectly in the “pseudo-TTY” that Docker provides using the -t option.

    In the end, I found that this was because a TTY converts a newline (like \n) into a carriage return plus newline (like \r\n). Basically, anything that outputs back to the Docker TTY gets the carriage return added to it, and there doesn’t seem to be a really good way to prevent that. I found this discussed in Docker issue 8513 on GitHub, with the solution being to not use the -t option unless you want a TTY that adds carriage returns to your output.

    If anyone is going to use Docker with command line tools, be aware that the added carriage returns might cause issues. If the command line tool is extremely simple, there probably won’t be any trouble.

  • Martin d'Anjou says:

    For command line applications that write to the mounted volume, the output files are owned by root and can’t be removed unless the user has sudo. I have to pass "--user $(id -u):$(id -g)" to "docker run" so the user owns the files after docker is done, but I am not sure that it is the right solution.

  • Alex Morais says:

    I used this guide to build a command line tool written in Python, and your instructions work really well.

    I noticed that when I use `docker-compose run my-project etc` that it takes about 30-45 seconds to see the first interaction. I’m not sure what is causing that delay as it wouldn’t seem to be my application. However, my question is, should I start the containers in some kind of waiting state and use `docker-compose exec` instead to run my command against them after they are started? My other concern is that the delay would occur on every `run`.

    Otherwise, thanks for the great guide!
