So far, the focus of our series on adding end-to-end Cypress to a web app has been on getting tests running locally. In this post, we’ll walk through how we got those tests running in a Google Cloud Build test pipeline.
Running Your App and Cypress in a Container
With the testing infrastructure we introduced in part one of this series, the services the app depended on (DB, Redis, etc.) were running in Docker containers, but the web app and Cypress were not. To get e2e Cypress tests running in a CI pipeline, we’ll need to update our configuration so that our app and Cypress run in a container as well.
Adding a Dockerfile
To do so, we first create a Dockerfile defining the dependencies required by the container our tests will run in. The one we created (saved as Dockerfile.test) looks roughly like this:
FROM cypress/base:14.16.0

# Our seed scripts call psql directly, so install the Postgres client.
RUN apt-get update && apt-get install -y postgresql-client

# Token for installing private npm packages.
ARG NPM_TOKEN

WORKDIR /app
COPY . .
RUN yarn install

# Ensure a .env file exists for the app to load at startup.
RUN touch .env
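If you want to verify the image builds before wiring it into Compose, it can be built by hand. This assumes the file is saved as app/Dockerfile.test (the name our Compose file will reference) and that NPM_TOKEN is exported in your shell:

```shell
# Build the test image directly, outside of Docker Compose.
# NPM_TOKEN is forwarded as a build argument so private
# packages can be fetched during `yarn install`.
docker build \
  -f app/Dockerfile.test \
  --build-arg NPM_TOKEN="$NPM_TOKEN" \
  -t app-test \
  app
```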
Here’s a brief overview of what the above does:
We use an image with Cypress dependencies as our base, then add the psql client on top of that. This is necessary because our seed DB scripts run psql commands directly.
We then declare the build arguments the image accepts (here, NPM_TOKEN). Next, we copy over the files from the app directory and install dependencies. Finally, we create an empty .env file.
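To make the psql dependency concrete, a seed script along these lines is what the client is needed for. The connection details and file names here are illustrative, not the app's actual scripts:

```shell
# Hypothetical seed script -- database name and file names are
# illustrative. ON_ERROR_STOP makes psql fail fast in CI.
psql "postgresql://postgres@${DATABASE_HOST}:${DATABASE_PORT}/app_test" \
  -v ON_ERROR_STOP=1 \
  -f seeds/schema.sql \
  -f seeds/test-data.sql
```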
Changes to the Docker Compose File
Next, we’ll need to create a new docker-compose.ci.yaml file.
Just like in part one, we’ll add the test DB, Redis, and PostgREST services. The new additions to this file are as follows:
services:
  tests:
    container_name: app-test
    build:
      context: ./app
      dockerfile: Dockerfile.test
      args:
        NPM_TOKEN: "${NPM_TOKEN}"
    command: tail -f /dev/null
    environment:
      DB_API_HOST: postgrest-test
      DB_API_PORT: 3001
      REDIS_HOST: redis-test
      REDIS_PORT: 6379
      DATABASE_HOST: db-test
      DATABASE_PORT: 5432
    ports:
      - "8082:8082"
    links:
      - redis
      - postgrest
    depends_on:
      - postgrest
      - redis

networks:
  default:
    external:
      name: cloudbuild
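Note that the cloudbuild network is created automatically inside Google Cloud Build, but it won't exist on a development machine. To run this Compose file locally, you'd first need to create the network yourself:

```shell
# The compose file marks the network as external, so Compose
# will not create it for us; do it manually for local runs.
docker network create cloudbuild

# Then start the test stack in the background.
docker-compose -f docker-compose.ci.yaml up -d
```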
We add a new tests service referencing the Dockerfile we created above. The tail -f /dev/null command keeps the container alive so that later CI steps can exec commands inside it. We also set environment variables here, pointing the app at the container hostnames and ports of the other services. Any secrets needed could also be provided here.
We’ll also specify the Cloud Build external network to allow our containers to communicate with one another when running in CI. This addition is one of the main reasons we needed to create a separate docker-compose.ci.yaml instead of extending the existing one.
Setting Up the CI Pipeline
Along with any other standalone steps of your test pipeline (such as running formatters, linters, or unit tests), we then need to add the following steps for running our e2e Cypress tests in Google Cloud Build.
steps:
  - name: docker/compose:1.23.1
    id: start-containers
    args: ['-f', 'docker-compose.ci.yaml', 'up', '-d']
  - name: sqitch/sqitch
    id: migrate-app
    waitFor: ['start-containers']
    args:
      - deploy
      - ci-test
    dir: postgres
  - name: gcr.io/cloud-builders/docker
    id: run-cypress
    waitFor: ['migrate-app']
    args:
      - exec
      - app-test
      - yarn
      - cy:test:headless:ci
You might find the Google Cloud Build docs helpful in unpacking the specifics of what the above accomplishes.
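If you'd like to exercise the pipeline while iterating on it, a build can also be submitted manually. This assumes the steps live in a cloudbuild.yaml at the repository root; how NPM_TOKEN reaches the build (substitution, secret manager, etc.) will depend on your setup:

```shell
# Kick off the pipeline by hand rather than waiting for a trigger.
gcloud builds submit --config=cloudbuild.yaml .
```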
Running End to End Cypress in a Google Cloud Build Pipeline
At a high level, there are three steps that must run sequentially. First, we start up the containers. Next, we run our DB migrations (using the sqitch Docker Hub image). And finally, we run our Cypress tests!
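One caveat worth calling out: docker-compose up -d returns as soon as the containers are created, not when the services inside them are ready, so the migration step can race Postgres startup. A small wait step between start-containers and migrate-app, sketched here with pg_isready, is one way to guard against that (the host and port match the compose file above):

```shell
# Poll the test database until it accepts connections,
# giving up after roughly 60 seconds.
for i in $(seq 1 30); do
  if pg_isready -h db-test -p 5432 > /dev/null 2>&1; then
    echo "Postgres is ready"
    exit 0
  fi
  sleep 2
done
echo "Timed out waiting for Postgres" >&2
exit 1
```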
If you’ve been following our series from the beginning, we’ve gone from a project with no e2e tests to a project that runs e2e tests regularly as part of a CI process. While the specific infrastructure, services, or CI systems that you are using might be different, we hope this series will help guide you through the steps involved in introducing e2e Cypress to an existing web app.
Previous installments in this series:
Part 1: Setting Up the Infrastructure
Part 2: Seeding the Test Data
Part 3: Getting Set Up with Cypress & Seeding the Database Between Tests