Using Docker Containers for End-to-End Testing in GitLab

End-to-end integration testing often involves a delicate orchestration of many parts. Fortunately, Docker has simplified this process or at least made it easier to reason about. Testing the integration of components hosted in different repositories adds another wrinkle, but Docker containers are a really great solution for this as well.

Setup

Imagine that you have the following project structure:

  • my-group – a private GitLab group containing multiple projects
  • my-service – a microservice that depends on a database (in this case I’ll use a Gremlin graph database)
  • my-web-app – a web app that depends on both its own database (we’ll go with Postgres) and my-service

We want to set up a hosted GitLab continuous integration pipeline that fires up my-web-app with all of its dependencies and runs end-to-end tests.
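
Since the image gets pushed to my-service’s registry and the end-to-end suite drives my-web-app, the natural split is one job in each project’s .gitlab-ci.yml, each declaring its own stage. A minimal sketch of the two skeletons (the job details are filled in below):

# my-service/.gitlab-ci.yml – builds and publishes the test image
stages:
  - build

# my-web-app/.gitlab-ci.yml – pulls in that image and runs the e2e tests
stages:
  - test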

Building a Container for my-service

The first step is to build an image of my-service. Let’s say this is an Express-powered REST server. It exposes additional endpoints when running in a test environment to enable seeding the database. The following Dockerfile describes a simple build-and-run setup for a project using Yarn.

### BUILD CONTAINER ###
FROM node:14.17.0 AS build

WORKDIR /source

# Assumes package.json and yarn.lock live alongside the sources in ./src
COPY ./src/ .

RUN yarn install
RUN yarn run build


### RUN CONTAINER ###
FROM node:14.17.0

WORKDIR /app

# Test mode enables the extra database-seeding endpoints mentioned above
ENV NODE_ENV=test

COPY --from=build /source/dist/ ./dist
COPY --from=build /source/node_modules/ ./node_modules

EXPOSE 9000

CMD ["node", "./dist/src/entry-point.js"]

Then we just need to configure GitLab CI to build an image using the above Dockerfile. The tricky part about doing this on a GitLab-hosted CI runner is that you don’t have direct access to the Docker host. (Since GitLab’s CI runners are themselves running under Docker, allowing arbitrary containers to run alongside them would be a security risk.)

Fortunately, Docker provides a somewhat mind-bending workaround: Docker-in-Docker, a Docker host that is itself running under Docker. This lets our familiar Docker commands run safely isolated from the GitLab CI runners.

build-docker-express:
  stage: build
  image: docker:20.10
  variables:
    DOCKER_HOST: tcp://docker:2375/
    # Docker 19.03+ turns TLS on by default; disable it so the job can
    # reach the dind service on the plain port above
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:20.10-dind
  script: |
    echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    docker build --pull -t registry.gitlab.com/my-group/my-service/my-test-image -f Dockerfile .
    docker push registry.gitlab.com/my-group/my-service/my-test-image

GitLab CI provides the $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD variables automatically, but we still need to log in to the Docker registry with them. After that, it’s a simple matter of building the test image and pushing it to the registry.
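
One caveat: as written, every pipeline pushes the same my-test-image tag, so concurrent branches can overwrite each other’s test images. A variation that tags per branch, using GitLab’s predefined $CI_REGISTRY_IMAGE and $CI_COMMIT_REF_SLUG variables (IMAGE_TAG is a name introduced here, not anything GitLab defines):

build-docker-express:
  stage: build
  image: docker:20.10
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_TLS_CERTDIR: ""
    # $CI_REGISTRY_IMAGE is this project's registry path; $CI_COMMIT_REF_SLUG
    # is the branch name, sanitized for use in an image tag
    IMAGE_TAG: $CI_REGISTRY_IMAGE/my-test-image:$CI_COMMIT_REF_SLUG
  services:
    - docker:20.10-dind
  script: |
    echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    docker build --pull -t "$IMAGE_TAG" -f Dockerfile .
    docker push "$IMAGE_TAG"

The service entry in the test job below would then reference the matching tag.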

Testing the Application

Now that my-service is available as a container image, we just need to pull it into my-web-app’s pipeline. Since GitLab CI already supports service containers, this seems pretty straightforward. But by default, GitLab service containers are linked (via virtual networking) to the main job container only. Getting those containers to talk to each other requires just a little more configuration.

The following GitLab CI step sets up three service containers:

  • A Postgres database, used by my-web-app
  • A Gremlin graph database, used by my-service
  • my-service itself

test_e2e:
  stage: test
  image: cypress/browsers:node14.16.0-chrome90-ff88
  variables:
    FF_NETWORK_PER_BUILD: "true"
    NODE_ENV: test
    MY_SERVICE_URL: http://my-service:9000/
    GREMLIN_HOST: ws://gremlin-database:8182/gremlin
    DB_HOST: pg-database
    # The official postgres image refuses to start without a superuser password
    POSTGRES_PASSWORD: test-password
  services:
    - name: postgres:12.3
      alias: pg-database
    - name: tinkerpop/gremlin-server:3.5.0
      alias: gremlin-database
    - name: registry.gitlab.com/my-group/my-service/my-test-image
      alias: my-service
  script: |
    # Install my-web-app's own dependencies before driving the suite
    yarn install --frozen-lockfile
    yarn run test-e2e

The magical bit of network configuration is the FF_NETWORK_PER_BUILD feature flag. It gives the whole build its own Docker network, so service containers can talk not only to the main job container but also to each other (which is critical for my-service to connect to the Gremlin database we’ve set up).

Each of the variables in that config is exported as an environment variable to all of the containers, and each service is given an alias that the other containers can resolve as a hostname.
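
In practice the service containers can take a few seconds to become ready (my-service, for one, has to reach its Gremlin database first), so it helps to wait for application-level readiness before kicking off the tests. A sketch of a before_script for the test_e2e job; the /health and /test/seed paths are hypothetical endpoints on my-service, not anything GitLab provides:

test_e2e:
  # ...same image, variables, and services as above...
  before_script: |
    # Poll my-service through its alias until it responds, for up to ~60s
    for i in $(seq 1 30); do
      curl --silent --fail http://my-service:9000/health && break
      sleep 2
    done
    # Seed the graph database through the service's test-only endpoint
    curl --silent --fail -X POST http://my-service:9000/test/seed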

When pulling a service container image, GitLab CI will usually look in the default public Docker Hub registry (which is what happens for Postgres and the Gremlin server). But we can also specify a full path to a container registry. In this case, it’s the GitLab container registry for the same private group (“my-group”), so GitLab is able to log in to it automatically.
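
If the image instead lived in a registry GitLab couldn’t authenticate to on its own, you could supply pull credentials through the DOCKER_AUTH_CONFIG variable. A sketch, with registry.example.com and the encoded user:password standing in for real values (in practice this belongs in a masked CI/CD settings variable rather than in the file):

test_e2e:
  variables:
    # "auth" is base64("user:password") for the external registry
    DOCKER_AUTH_CONFIG: '{"auths":{"registry.example.com":{"auth":"dXNlcjpwYXNzd29yZA=="}}}'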


Although there are still a lot of moving parts behind the scenes, this relatively small amount of configuration is all it takes to reliably connect multiple isolated components for integration testing.