Using Docker as a Dev Environment with VS Code: Part 2

In my last post, I covered the basic process of setting up VS Code to allow you to use Docker as a development container. This is sufficient for a simple project. However, your average full-stack web project may require several containers running together. Today, I’m going to explore a strategy for using VS Code Remote-Containers and Docker Compose to develop a complex project.

Setting Up Our Project with Docker Compose

Let’s start with a typical JavaScript-based web app. It includes a React frontend, an Express backend, and a PostgreSQL database. Our app is in a monorepo with appropriately named frontend and backend folders. Let’s set up this app to run in a set of Docker containers using this docker-compose.yml configuration file:


version: '3.8'
services:
  my-app-frontend:
    build: ./frontend
    ports:
      - 3000:3000
    # Needed to keep the container from exiting
    stdin_open: true

  my-app-backend:
    depends_on:
      - my-app-db
    build: ./backend
    ports:
      - 8080:8080

  my-app-db:
    image: postgres:13-alpine
    restart: always
    environment:
      POSTGRES_USER: example
      POSTGRES_PASSWORD: example
      POSTGRES_DB: mydb
    volumes:
      - postgresdata:/var/lib/postgresql/data
    ports:
      - 5432:5432

We can launch this app from the command line using docker-compose up. We can also use the VS Code Remote-Containers extension to launch and then connect to one of the containers for local development. Great, sounds like we’re all done here!


Editing Multiple Containers Simultaneously

Of course, it’s not that simple. A single VS Code window can only connect to one container at a time. If you want to do simultaneous development of both the frontend and backend code, you must open two separate VS Code windows and switch between them for editing. You can read more about this approach in the VS Code documentation under Advanced Container Configuration.
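For reference, a minimal .devcontainer/devcontainer.json that attaches a window to just the frontend service might look like the snippet below. The workspaceFolder path is an assumption about where the frontend image puts its code; adjust it to match your Dockerfile.

```json
{
  "name": "My App Frontend",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "my-app-frontend",
  "workspaceFolder": "/app",
  "shutdownAction": "none"
}
```

The "shutdownAction": "none" keeps the rest of the compose stack running when you close this window, which matters when a second window is attached to the backend.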

This is satisfactory for some projects, but it gets unwieldy when the number of individual service containers grows beyond two. Instead, let’s set up a way to edit the whole project at once, while still mimicking the app’s production deployment by running each service in its own container.

Using a Custom Devcontainer for Development

In my last post, I noted that the Remote-Containers extension, by default, stores its configuration inside a .devcontainer folder. We can take advantage of this setup to create a separate development container independent of our actual app. The config folder will look like this:


$ tree .devcontainer/
.devcontainer/
├── Dockerfile.dev
├── devcontainer.json
└── docker-compose.dev-container.yml

Specifying a Development Container

First, the Dockerfile.dev should set up an environment that’s convenient for developing the whole project. Generally, this means using a more fully-featured Linux distribution than you would for production containers. We can create our own custom image for this, but Microsoft provides a lot of convenient sample images. For our React project, the typescript-node container is perfect:


# [Choice] Node.js version: 16, 14, 12
ARG VARIANT=16
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:${VARIANT}

# Copy the full repository into the /workspace directory
WORKDIR /workspace
COPY . .

Note that the COPY command copies the current directory, which you might assume is the .devcontainer folder. What we actually want is to copy the entire code repository into our container volume. I’ll clarify this below.

Using docker-compose for Your Development Container

The next step is to set up our compose file. I’ve named it docker-compose.dev-container.yml to clearly distinguish it from the project’s main compose file. Even though it lives in a subdirectory, trust my firsthand experience: the distinct name keeps you from editing the wrong file.


version: "3.8"
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile.dev
    init: true
    volumes:
      # Use a named volume for the source code
      - my-app:/workspace
      # Persist VS Code extensions between rebuilds
      - vscode-extensions:/root/.vscode-server/extensions
      # And/or for VS Code Insiders
      - vscode-extensions-insiders:/root/.vscode-server-insiders/extensions

    container_name: my-app-development
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity

volumes:
  my-app:
  vscode-extensions:
  vscode-extensions-insiders:

Setting the build context to .. ensures that, when the Dockerfile executes, our COPY command copies the whole repository rather than just the .devcontainer folder. We’re also using a few named Docker volumes to persist this data. Check out my colleague Patrick Bacon’s post about slow-mounted volumes on Mac for why we benefit from storing our code in a named volume.

Tying It All Together with Remote Containers Configuration

Finally, we can use the configuration devcontainer.json file to tell VS Code how to start up and connect to our development container:


// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.112.0/containers/typescript-node-12
{
  "name": "My App",
  "dockerComposeFile": "docker-compose.dev-container.yml",
  "service": "app",
  "shutdownAction": "stopCompose",
  // Open the sub-folder with the source code
  "workspaceFolder": "/workspace",
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.defaultProfile.linux": "bash"
  },
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // Left empty because this is handled by docker-compose for individual services,
  // use this only for additional ports not needed by those services (e.g. ssh)
  "forwardPorts": [],
  // Install npm packages to the shared volume before running docker-compose in the container
  "postCreateCommand": "cd frontend && yarn install && cd ../backend && yarn install"
}

With these files in place, run Remote-Containers: Open Folder in Container... from the VS Code command palette. The new dev container will be started and attached to the VS Code window, ready to work!

Except, what about our original docker-compose.yml and our separate app containers? How do we actually run and test this app?

If you’re not using a named volume and instead you’re bind-mounting your project code from your host machine, you can use the docker-compose CLI on your host to run the app. However, if we want to stick with a named volume, we need a way to run our app from inside our development container.
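If you prefer the bind-mount route, the dev container’s volume entry might look something like this fragment (the "cached" flag is an assumption that you care about macOS mount performance); it trades the named volume’s speed for transparent access to the files from the host:

```yaml
    volumes:
      # Alternative to the named volume: bind-mount the repo from the host.
      # "cached" relaxes consistency guarantees to speed up macOS bind mounts.
      - ..:/workspace:cached
```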

Going Deeper: Docker-from-docker-compose

That’s right, we’re going to launch a container from within a container. You can read more details about this technique in the vscode-dev-containers example. First, we need to install the Docker CLI and docker-compose inside our development container. Edit the Dockerfile.dev:


# [Choice] Node.js version: 16, 14, 12
ARG VARIANT=16
FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:${VARIANT}

# Install Docker CE CLI
RUN apt-get update \
  && apt-get install -y apt-transport-https ca-certificates curl gnupg2 lsb-release \
  && curl -fsSL https://download.docker.com/linux/$(lsb_release -is | tr '[:upper:]' '[:lower:]')/gpg | apt-key add - 2>/dev/null \
  && echo "deb [arch=amd64] https://download.docker.com/linux/$(lsb_release -is | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list \
  && apt-get update \
  && apt-get install -y docker-ce-cli

# Install Docker Compose
RUN export LATEST_COMPOSE_VERSION=$(curl -sSL "https://api.github.com/repos/docker/compose/releases/latest" | grep -o -P '(?<="tag_name": ").+(?=")') \
  && curl -sSL "https://github.com/docker/compose/releases/download/${LATEST_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
  && chmod +x /usr/local/bin/docker-compose

We want to make sure that the version of VS Code attached to our dev container has the Docker extension installed. This is done through the devcontainer.json config file. It’s a great time to add other extensions we know every developer on the project will want to have installed to provide a truly consistent environment.


  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "ms-azuretools.vscode-docker"
  ]

We also need to modify our docker-compose.dev-container.yml file to expose our host machine’s Docker socket to the container:


    volumes:
      # Forwards the local Docker socket to the container.
      - /var/run/docker.sock:/var/run/docker.sock

Here’s what the entire docker-compose.dev-container.yml file now looks like:


version: "3.8"
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile.dev
    init: true
    volumes:
      # Forwards the local Docker socket to the container.
      - /var/run/docker.sock:/var/run/docker.sock
      # Use a named volume for the source code
      - my-app:/workspace
      # Persist VS Code extensions between rebuilds
      - vscode-extensions:/root/.vscode-server/extensions
      # And/or for VS Code Insiders
      - vscode-extensions-insiders:/root/.vscode-server-insiders/extensions
    container_name: my-app-dev-container
    # Overrides default command so things don't shut down after the process ends.
    command: sleep infinity

volumes:
  my-app:
  vscode-extensions:
  vscode-extensions-insiders:

Fixing Our Volume and Network Names

We need to make sure that Docker containers launched from our development container are sharing code and can all communicate with each other. Unfortunately, the Remote-Containers extension is a bit liberal in its naming conventions. If you started your remote container from the folder “my-app-dir”, your Docker might report something like:


$ docker volume ls
DRIVER    VOLUME NAME
local     my-app-dir_devcontainer_my-app
local     my-app-dir_devcontainer_vscode-extensions
local     my-app-dir_devcontainer_vscode-extensions-insiders
$ docker network ls
NETWORK ID     NAME                               DRIVER    SCOPE
442e510e0a06   my-app-dir_devcontainer_default    bridge    local
23a0ed5765bd   bridge                             bridge    local
163e2538d9d2   host                               host      local
efc775ca507e   none                               null      local

As you’ll see a bit further down, we don’t want to rely on these dynamically generated names for all our containers. And because we’re launching these containers from a container, but onto our host machine, we need to be explicit about both the volumes and networks they should use. The strategy I’ve discovered is to use “external” volumes and networks.


# This external network is shared with the main docker-compose.yml config
# Ensures that services started from inside this dev container can all communicate
networks:
  my-app-network:
    external: true

Annoyingly, docker-compose won’t automatically create external volumes or networks for us. The external designation tells Compose that these resources are managed outside of the compose file, so you need to create them yourself before your containers start up. To do this, I use a bash script to create my volume and network if they don’t already exist. Then, I tell VS Code to run that script via the devcontainer.json file:


#!/usr/bin/env bash

if docker volume create my-app &> /dev/null; then
  echo "Created volume my-app"
else
  echo "Failed to create volume my-app"
fi

docker network create my-app-network &> /dev/null
if [ "$?" -ne "0" ]; then
  echo "Network my-app-network already exists"
else
  echo "Created docker network my-app-network"
fi

  // Make sure the required docker network and volume exist before creating containers
  "initializeCommand": ".devcontainer/docker-init.sh",

The Final Puzzle Piece

At last, we need to modify both our dev container’s docker-compose.dev-container.yml and the app’s docker-compose.yml to take advantage of the custom named volumes and networks we’re using. I’ve stripped some of the configuration here to focus on the important bits.


services:
  my-app-frontend:
    build: ./frontend
    volumes:
      - my-app:/home/app
    networks:
      - my-app-network
    ports:
      - 3000:3000

  my-app-backend:
    build: ./backend
    volumes:
      - my-app:/home/app
    networks:
      - my-app-network
    ports:
      - 8080:8080

  my-app-db:
    image: postgres:13-alpine
    volumes:
      - postgresdata:/var/lib/postgresql/data
    networks:
      - my-app-network
    ports:
      - 5432:5432

volumes:
  my-app:
    external: true
  postgresdata:

networks:
  my-app-network:
    external: true

services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile.dev
    init: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - my-app:/workspace
      - vscode-extensions:/root/.vscode-server/extensions
      - vscode-extensions-insiders:/root/.vscode-server-insiders/extensions
    networks:
      - my-app-network
    command: sleep infinity

volumes:
  my-app:
    external: true
  vscode-extensions:
  vscode-extensions-insiders:

networks:
  my-app-network:
    external: true

Fire It Up!

With all of this in place, we can spin up the development container inside VS Code. Then, from the VS Code terminal in our Docker container environment, we can run docker-compose up in the root of our project. This will start up our individual service containers on the host machine. But, thanks to the shared network and volumes, we can continue to edit code in our development container and see the changes immediately show up on our services!
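The shared network also means the services can find each other by name: on a user-defined Docker network, a compose service name doubles as a DNS name. As a sketch, here is a hypothetical helper (not from the repo) showing how the backend might build its Postgres connection string. The standard PG* environment variable names are an assumption, and the fallback credentials mirror the environment block in docker-compose.yml.

```typescript
// Hypothetical helper: build the backend's Postgres connection string.
// The default host "my-app-db" resolves because both containers sit on
// the shared my-app-network; defaults mirror docker-compose.yml.
function databaseUrl(env: Record<string, string | undefined>): string {
  const host = env.PGHOST ?? "my-app-db";
  const port = env.PGPORT ?? "5432";
  const user = env.PGUSER ?? "example";
  const password = env.PGPASSWORD ?? "example";
  const db = env.PGDATABASE ?? "mydb";
  return `postgres://${user}:${password}@${host}:${port}/${db}`;
}
```

In the backend you would call databaseUrl(process.env), so a deployed environment can override any piece without code changes.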

It’s also possible to use the Docker CLI to shut down or restart services as needed in development. You can even open another VS Code window and use the Remote-Containers extension to connect to one of the running service containers to directly inspect logs, edit code within that constrained environment, etc. This is a generic setup that can work for any project. Onboarding new developers requires just installing VS Code and Docker and then cloning the repository and opening it in a container. The initial setup is a lot of work, but I think this is the cleanest development experience you can get with Docker right now.

Check out the repository I’ve created for this post to try it for yourself, and let me know in the comments (or open an issue on the repo) if you have any suggestions for improvement!

Conversation
  • BLove says:

    great writeup!

    Missing “container_name:” in the final docker-compose.dev-container.yml. Is that important?

    • Dylan Goings says:

      Missing on purpose to keep the file length shorter and highlight the important changes for readability. A dev container name isn’t necessary, but I do find them extremely useful!

  • Hi. I have a question about the decision to use named volumes.
    Using named volumes means data is persistent across devcontainer rebuilds. That means the devcontainer won’t pick up new packages added to package.json, for example; it won’t even detect changes to the .devcontainer folder, because all of that is persisted across rebuilds. The solution for making changes in the source tree and doing devcontainer rebuilds is deleting the named volume and creating it again, but that just deletes all your data.
    It’s a tricky setup.
