Running NodeJS Scripts in AWS Fargate

On my project, I’m dealing with a lot of data-intensive tasks. Thankfully, importing data from multiple sources and processing the data can be automated easily. I’ve written quite a few NodeJS scripts to take care of a lot of that manual labor.

However, some of these scripts take minutes or hours to run. Naturally, I wanted to push these long-running processes from my machine and into AWS.

Recently, I’ve been using AWS Fargate to run these longer-running NodeJS scripts in the cloud. By creating an AWS Elastic Container Service (ECS) task that uses a Docker image deployed to AWS Elastic Container Registry (ECR), I can run these data processing tasks in the cloud. Here’s what I did to set it all up.

In a previous post, I created a few scripts that migrate DynamoDB structures. I’ll be using the scripts found in that repository for my examples in this post.

Creating a Cluster

First, I need to create a cluster in AWS ECS. When creating the cluster, I want to use the “Networking only” cluster template. That template uses AWS Fargate, allowing me to run containers without provisioning my own EC2 instances.
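If you prefer the command line, the same thing can be done with the AWS CLI. This is a minimal sketch; the cluster name “data-scripts” is just a placeholder, and associating the FARGATE capacity provider is optional here, since the task will specify the Fargate launch type when it runs:

aws ecs create-cluster \
  --cluster-name data-scripts \
  --capacity-providers FARGATE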

I also need a spot to store my container that will be run in the cluster. In AWS ECR, I create a new repository (either public or private). After creating a new repository, make note of the URI. You’ll need that later in this setup.
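The repository can also be created from the CLI. Here “my-scripts” is a placeholder name, and the --query flag prints just the URI you’ll want to note down:

aws ecr create-repository \
  --repository-name my-scripts \
  --query 'repository.repositoryUri' \
  --output text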

Creating the Container

Next, I need a Docker container to run in the cluster. This container needs to hold whatever scripts I want to run in ECS. I created a Node container using this Dockerfile:


FROM node:alpine

RUN mkdir -p /usr/src/app
RUN chown -R node: /usr/src/app

USER node
WORKDIR /usr/src/app

COPY package.json yarn.lock /usr/src/app/
RUN yarn install --production

# For my project, all my scripts are located in the "scripts" directory.
RUN mkdir -p /usr/src/app/scripts
COPY scripts /usr/src/app/scripts
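# Note: no CMD or ENTRYPOINT is set here. The command to run
# (e.g. "node scripts/my-script.js") is supplied by the ECS task definition.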

I also created a bash script to build the container and deploy it to the ECR repository I just created:


#!/usr/bin/env bash

echo "Logging into Amazon ECR..."
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin "${AWS_ECR_REGISTRY}"

# echo "Building app..."
# yarn install --production=false
# yarn build:pre
# yarn build:server

ECR_TAG=test

echo "Building docker image..."
docker build --build-arg DOCKER_REGISTRY=$AWS_ECR_REGISTRY -t "$ECR_REPO_NAME:$ECR_TAG" -f Dockerfile .
echo "Built image \"$ECR_REPO_NAME:$ECR_TAG\""

echo "Tagging image..."
docker tag "$ECR_REPO_NAME:$ECR_TAG" "$AWS_ECR_REGISTRY/$ECR_REPO_NAME:$ECR_TAG"

echo "Pushing the Docker image..."
docker push "$AWS_ECR_REGISTRY/$ECR_REPO_NAME:$ECR_TAG"
echo "Done"

Note the three environment variables that need to be set here. “AWS_REGION” is the region the registry lives in, “AWS_ECR_REGISTRY” needs to be the ECR repository URI without the repository name (i.e., the registry hostname), and “ECR_REPO_NAME” is just the name.
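For example, if the repository URI were 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-scripts (the account ID, region, and name are all placeholders here), the variables would split like this:

export AWS_REGION=us-east-1
export AWS_ECR_REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
export ECR_REPO_NAME=my-scripts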

After running that bash script, the container image that can run the NodeJS scripts should be in ECR. With the image in AWS, I can now create an ECS task definition that points to it.

When creating this task definition, I used the Fargate launch type. When adding a container to the definition, I used the ECR URI from the image I pushed earlier. Also, under the “Environment” section, I set the command to “node,path/to/my-script.js” (the console uses commas to separate the command’s arguments). That will run the NodeJS script specified.
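The same task definition can be registered from the CLI. This is only a sketch: the family name, account ID, image tag, and role names are placeholders (the task role is covered next), and the execution role is the standard one ECS uses to pull images from ECR:

aws ecs register-task-definition \
  --family run-node-script \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 \
  --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --task-role-arn arn:aws:iam::123456789012:role/my-script-task-role \
  --container-definitions '[{
      "name": "scripts",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-scripts:test",
      "command": ["node", "scripts/my-script.js"],
      "essential": true
    }]'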

For the rest of the settings, I either left them untouched or chose the lowest CPU and memory options for now. One exception, however, is the “Task Role.” This role is passed to the container and gives it permission to access other AWS services. In my case, I needed to create a role in IAM that allows the task full access to DynamoDB.
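Creating that role from the CLI looks roughly like this; the role name is a placeholder, the trust policy lets ECS tasks assume the role, and the AWS-managed DynamoDB policy is attached for brevity (a scoped-down policy would be safer in production):

aws iam create-role \
  --role-name my-script-task-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy \
  --role-name my-script-task-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess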

Running the Task

With the task definition now created, I can run this task in my cluster. Navigating back to the cluster I created earlier, I can go to the “Tasks” tab and click the button “Run new task.” By filling out the form with the proper task definition, launch type, and VPC configuration, I can start the task.
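That console form maps onto a single CLI call. Again a sketch, with placeholder cluster, subnet, and security group IDs. Note that assignPublicIp=ENABLED matters on Fargate: without a public IP (or a NAT gateway), the task can’t reach ECR to pull the image:

aws ecs run-task \
  --cluster data-scripts \
  --launch-type FARGATE \
  --task-definition run-node-script \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123abcd],securityGroups=[sg-0123abcd],assignPublicIp=ENABLED}'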

If you’re unsure about the VPC configuration, be sure to select all three subnets that are present by default after selecting the cluster VPC. If everything is set up correctly, I should be able to see the task provisioning.

Clicking on the task will allow me to see more details about the task running, including logs to make sure the process is running properly.
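Assuming the task definition was created in the console with the default awslogs configuration, the same logs can be followed from the CLI. The log group name below follows the console’s default “/ecs/<family>” convention, so adjust it to match your task definition:

aws logs tail /ecs/run-node-script --follow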


And that’s it! I can now run whatever NodeJS scripts I want in ECS using AWS Fargate.

I’ve found this set of tools incredibly helpful for running my longer-running scripts. Before hooking these AWS services together, I tried accomplishing the same thing with AWS Lambda. However, Lambdas are meant for short tasks and time out after 15 minutes at most. With this approach, I don’t have to worry about the length of the process.