Are you working on a project with a local development environment and a Postgres database? Running the database in a container can make your life easier.
Simplicity is one big advantage of using a Docker container for your database. Are you jumping between multiple projects with different Postgres versions and configurations? Having a separate Docker container for each project makes transitioning between them easier. All you need to do is run `docker-compose up` and your database is good to go. You can make switching between projects even easier with Docker dev environments. However, here I'll focus on using Docker to run a Postgres container for local development.
Another plus of running Postgres in Docker for local development is that it better replicates a production-like environment. Your database container won't depend on any local configuration; you have to be explicit about environment variables and other settings. This can help prevent issues down the line when, say, deploying changes to production.
Spinning Up a Postgres Container with the Docker Command Line
To start a Postgres container with the Docker CLI, follow these steps:
- Choose a Postgres Docker image from Docker Hub and the version of Postgres you would like to run. For context, Docker Hub is a centralized repository where you can find pre-built images for platforms and applications. To run a Postgres database in a container, choose a Postgres image that matches your desired version and requirements. For example, you might choose the `postgres:14.1-alpine` image if you want to run Postgres version 14.1 on an Alpine Linux-based container.
- Run the container using the Postgres image and the following command:

```shell
docker run --name postgres --rm \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v data:/var/lib/postgresql/data \
  -p 5432:5432 postgres:14.1-alpine
```
Here's what each part of the command does:

- The `docker run` command starts a container process with its own file system, networking, and process space, isolated from the host machine.
- The `postgres:14.1-alpine` image is used as is, without any customization.
- The `--rm` option cleans up the container and removes its file system when the container exits.
- The `-e` options set environment variables for the container: the Postgres user (`POSTGRES_USER`), password (`POSTGRES_PASSWORD`), and data directory (`PGDATA`).
- The `-v` option sets up a named volume called `data`, mounted at the `/var/lib/postgresql/data` directory within the container. The volume allows data to be persisted outside of the Docker container's file system, even after the container is stopped and its file system is cleaned up. This means that databases, tables, rows, etc. you created in your container will still be available if you stop and restart it.
- The `-p` option maps port 5432 in the container to port 5432 on the Docker host. This allows the host machine to communicate with the Postgres container. To check if the container is running, you can run `docker ps -a` in another terminal and look at the `PORTS` column for your container.
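Beyond `docker ps`, you can also confirm that the published port is actually reachable from the host with a plain TCP check. Here's a minimal sketch using only Python's standard library (`is_port_open` is a hypothetical helper, not part of any Docker or Postgres tooling; any port-checking utility works just as well):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# With the container running, the published Postgres port should be reachable:
print(is_port_open("localhost", 5432))
```

This only proves the port is open, not that Postgres is ready to accept queries, but it's a quick first check when a connection attempt hangs.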
Connecting to your Postgres Database
Now that your Postgres database is up and running in a container, you’ll probably want to connect to it. There are several ways to do this. I often use a database client such as TablePlus. For simplicity’s sake, I’ll cover how to connect to the database using the psql CLI tool.
To connect to the Postgres instance running in a Docker container, you'll need to specify the hostname, port, user name, password, and database. In this case, the container's port 5432 is mapped to port 5432 on the host, so the port is 5432 and the host is localhost. When starting the container, we set the Postgres user to "postgres" and the password to "password." I'll connect to the "postgres" database, as that database is created by default.
Here’s the command to connect to your database using psql:
```shell
psql -h localhost -p 5432 -U postgres postgres
```
Once you enter this command, you’ll see a prompt for your password. Enter the password you specified when starting the container, and you’ll connect to your Postgres database.
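Application code usually takes the same connection parameters as a single connection URL. Here's a small sketch of how the pieces fit together, using only Python's standard library (the `postgresql://` URL scheme is what most Postgres drivers, such as psycopg2 or SQLAlchemy, accept):

```python
from urllib.parse import urlparse, quote

# The settings we passed when starting the container
user, password, host, port, db = "postgres", "password", "localhost", 5432, "postgres"

# Percent-encode the password in case it contains URL-special characters
dsn = f"postgresql://{user}:{quote(password)}@{host}:{port}/{db}"
print(dsn)  # postgresql://postgres:password@localhost:5432/postgres

# Round-trip check: a driver will parse the URL back into the same parts
parts = urlparse(dsn)
assert (parts.username, parts.hostname, parts.port) == (user, host, port)
```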
Using Docker Compose to Start a Postgres Container
The `docker run` command is a quick way to start up a Postgres container. However, it can get tedious if you have lots of containers or want to keep infrastructure consistent between developers. That's where `docker-compose` comes in handy. `docker-compose` is a tool for defining and running multi-container Docker applications, allowing you to define your application's services, networks, and volumes in a single YAML file.
Here's an example `docker-compose.yml` file for starting up a Postgres container:

```yaml
version: '3'
services:
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_DB: development
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
```
I’ll go through what’s in this file:
- `restart: always` tells Docker to always restart the container if it stops.
- `environment` sets environment variables for the container, just like the `-e` flags in the `docker run` command. Here I added an additional variable, `POSTGRES_DB`, to set the default database name to "development" instead of "postgres" (the name of the user).
- `ports` maps port 5432 in the container to port 5432 on the host, just like the `-p` flag in the `docker run` command.
- `volumes` creates a named volume called `data`, just like the `-v` flag in the `docker run` command.
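If other services in your compose file depend on the database, you can optionally add a healthcheck so they wait until Postgres actually accepts connections. Here's a sketch using the `pg_isready` utility that ships inside the Postgres image (the indentation assumes it sits under the `db` service, and the interval values are just reasonable starting points):

```yaml
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
```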
Now that we have our `docker-compose.yml` file set up, we can start up our Postgres container using the following command:

```shell
docker-compose up -d
```

The `-d` flag starts the container in the background, and we can check the status of our container using the following command:

```shell
docker-compose ps
```

To stop and remove our container, we can use the following command:

```shell
docker-compose down
```
That's it! Using `docker-compose` makes it simple to start a Postgres database with all the right configurations. And, by committing the `docker-compose.yml` file to the repository, all developers will be able to use the same Postgres version and configuration.