Modern web apps often have very rich frontends and lean backends, which pushes infrastructure toward more dynamic, developer-friendly tools. SST v3 is one such tool: it lets you write Typescript code to define AWS infrastructure for web apps.
In this post, I’ll walk through a common full-stack React app setup in a monorepo, using AWS as the cloud provider and SST v3 as the infrastructure layer — which builds on top of Pulumi.
Note: SST v2 used CDK under the hood, but this post focuses solely on the latest version (v3), and won’t cover the migration path between versions.
This isn’t a comprehensive guide.
SST is evolving quickly. It’s what I’d consider bleeding edge, so if you choose to adopt it, plan to stay close to its community on GitHub, Discord, YouTube, and its docs.
SST: A High-level Overview
SST adds its own high-level constructs (like Vpc, StaticSite, Function, etc.) for common web patterns. This contrasts with older IaC tools like CloudFormation and Terraform, which use a declarative approach: you describe what you want, typically in YAML/JSON or HCL, rather than writing imperative Typescript code.
Here’s when each of these IaC tools was released:
| Tool | Initial Release Year |
|---|---|
| CloudFormation | 2011 |
| Terraform | 2014 |
| Pulumi | 2018 |
| SST (v3 w/ Pulumi) | 2023 (originally in 2021) |
Imperative vs. Declarative IaC
CloudFormation and Terraform are declarative. CloudFormation stacks are defined in YAML/JSON templates, and Terraform uses its HashiCorp Configuration Language (HCL).
In both cases you write “this is the end state I want” and the tool figures out how to achieve it. By contrast, SST lets you use imperative programming via Typescript.
You write code with loops, conditionals, functions, etc., which ultimately generate the cloud resources. In practice that means you can use all the features of a programming language: import libraries, use variables and loops, handle logic, and so on. This proves extremely convenient and makes it easy to get started on a Typescript project.
SST’s imperative approach can make complex setups or DRY patterns easier to implement, especially if you peruse the rich examples in their GitHub repo.
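To make that concrete, here’s a tiny hypothetical sketch (the component names and the list are made up for illustration) of plain Typescript deciding how many resources get created inside run() of sst.config.ts:

```ts
// Hypothetical sketch: ordinary Typescript driving resource creation.
const environments = ["Preview", "Staging"];

// A loop: one bucket per entry in the list.
const buckets = environments.map((env) => new sst.aws.Bucket(`Uploads${env}`));

// A conditional: only create the audit bucket in production.
const audit =
  $app.stage === "production" ? new sst.aws.Bucket("AuditLogs") : undefined;
```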
In short, SST operates at a higher abstraction: you write Typescript to describe resources, whereas Terraform and CloudFormation use declarative DSLs.
SST abstracts the AWS APIs in Typescript by leveraging Pulumi, Terraform abstracts them in HCL via its AWS provider, and CloudFormation is AWS’s native deployment language.
Next, I’ll show you details to be aware of when using SST.
Rollback Behavior
The three tools handle failures differently:
CloudFormation automatically rolls back stack updates on failure by default, unless disabled. This means if a deployment fails mid-stack, it will undo the partial changes to return to the last known good state. This built-in rollback is the capability I miss most, since it makes it easy to recover from failed stack updates.
Terraform does not automatically undo changes if terraform apply fails halfway.
In fact, Terraform’s design is to stop on error and leave any partial changes in place.
You then typically fix the issue in the code and re-run terraform apply. State versions can be manually restored, but there is no auto-rollback feature, which is not as nice as CloudFormation's behavior.
SST similarly does not have automatic rollback, and I’d argue it’s harder to roll back than the other tools mentioned.
If a Pulumi deployment errors, Pulumi will finish any in-flight operations and exit with an error.
To rollback, you must revert or fix your code and run the deployment again.
In other words, like Terraform, SST requires manual intervention to undo bad changes. I’ve found this to be a big headache and had to destroy stages and manually delete resources using the AWS console in order to start with a clean slate.
Layers of Abstraction: CloudFormation, Terraform, and SST
You can think of these tools as layers:
| Tool | Abstraction layers between your code and AWS |
|---|---|
| CloudFormation (single layer) | Your IaC code (YAML/JSON) → AWS internal systems (resource provisioning & management) |
| Terraform (two layers) | Your IaC code (HCL) → Terraform core engine → Terraform AWS provider → AWS REST API / SDK |
| SST v3 with Pulumi (three layers) | Your IaC code (TypeScript SST constructs) → SST npm package → SST platform wrapper → Pulumi engine → Pulumi AWS provider (SDK) → AWS REST API / SDK |
CloudFormation is AWS’s native IaC. It talks directly to AWS’s control plane.
You write YAML/JSON, and AWS handles the rest. Its state is managed inside AWS (no separate state file).
Terraform sits on top of AWS and supports other providers.
You write HCL, and Terraform’s AWS provider then makes API calls, via AWS SDK, not CloudFormation templates, to create resources.
Terraform must manage its own state file, typically in S3/DynamoDB for AWS deployments.
SST sits on top of AWS as well, but it’s imperative code. SST uses Pulumi’s AWS provider and leverages Pulumi’s engine.
In fact, SST’s documentation explicitly notes that it uses Pulumi behind the scenes for the providers and the deployment engine. Like Terraform, it also supports providers beyond AWS.
SST adds yet another abstraction layer: it provides higher-level “components” that simplify common AWS setups. But underneath, SST v3 + Pulumi still creates real AWS resources, through the AWS SDK.
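To make the layering concrete, here’s a hedged sketch of two of those levels side by side inside run() (the resource names are illustrative, not from the example later in this post): an SST component on top, and a raw Pulumi AWS resource one layer down.

```ts
// High-level SST component: one line, sensible defaults, linkable.
const assets = new sst.aws.Bucket("Assets");

// One layer down: a raw Pulumi AWS resource from the same config file,
// where you spell out details SST would otherwise default for you.
const rawBucket = new aws.s3.BucketV2("raw-assets", {
  forceDestroy: true,
});
```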
State Storage and Security
Each tool stores state differently, with security implications.
CloudFormation
This does not expose a state file you manage. The stack state is kept internally by AWS. In effect, your AWS account knows what was deployed in each stack, and CloudFormation can track and roll back changes. You don’t need to handle a separate state file, but you also don’t see or control it directly.
Terraform
Terraform explicitly uses a state file. By default, it’s local (terraform.tfstate), but in team settings, you almost always configure a remote backend (like an S3 bucket + DynamoDB) to share state. That state is a JSON document of everything Terraform has created. It often contains resource IDs and outputs, and may include sensitive values in plaintext.
You must protect this state file (for example, enable S3 encryption and restrict access). That’s because it can contain secrets or keys in plaintext by default. It’s effectively a “source of truth” of your infrastructure, but you are responsible for managing and securing it.
SST
This uses Pulumi under the hood. By default, SST creates a local JSON state file and then backs it up to your AWS account. Specifically, SST will provision an S3 bucket named like sst-state- in your AWS region and store the state JSON there.
It also creates an SSM Parameter to hold a passphrase (used to encrypt secrets in that state).
In practice, this means secrets in your SST state are encrypted with a key whose passphrase lives in SSM, and the state itself is stored in S3. Secrets in SST are encrypted in state by design. Pulumi itself encrypts secrets in state when using its managed backends, and SST’s method is similar. SST’s state persists locally and in your AWS S3 bucket, where it is encrypted at rest by default.
In summary, CloudFormation keeps state hidden in AWS, Terraform uses an exposed state file you must secure, and SST uses a state file that’s encrypted and stored in your AWS account by default.
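To tie that back to code, here’s what declaring a secret looks like; the same pattern appears in the full config below. The value itself is set out-of-band with the SST CLI and lands encrypted in the state described above.

```ts
// Declared in sst.config.ts; the actual value never lives in source code.
// It's set per stage via the SST CLI (`sst secret set DbPassword <value>`)
// and stored encrypted in state, using the SSM-held passphrase.
const dbPassword = new sst.Secret("DbPassword");
```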
Networking Cost: Bastion EC2 vs NAT Gateway
SST’s Vpc component can greatly reduce networking costs: it can be configured with a bastion host and an EC2-based NAT instance instead of AWS’s managed NAT Gateways.
For example, new sst.aws.Vpc("Vpc", { bastion: true, nat: "ec2" }) will launch a tiny t4g.nano EC2 instance (about $3/month) that acts as both a bastion and a NAT, which is a secure cost-saving trick. In contrast, a managed NAT Gateway costs around $30–65 per month.
The “ec2” option uses fck-nat and is 10x cheaper than the “managed” NAT Gateway.
In practice, for small workloads this single nano instance handles all outbound traffic from private subnets. The savings can be on the order of dozens to hundreds of dollars per month for multi-AZ deployments. Of course, you lose the automatic scaling of a real NAT gateway, but for many apps the traffic is low enough that the t4g.nano suffices.
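As a rough sketch of that trade-off (component names are illustrative and pricing is approximate):

```ts
// The same VPC two ways. "managed" provisions AWS NAT Gateways (roughly $30+ per
// month each, plus data processing), while "ec2" runs a single fck-nat t4g.nano
// instance (around $3/month) for outbound traffic from the private subnets.
const managedVpc = new sst.aws.Vpc("ManagedVpc", { nat: "managed" });
const cheapVpc = new sst.aws.Vpc("CheapVpc", { nat: "ec2", bastion: true });
```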
SST v3 Configuration Example
Here’s an sst.config.ts example tying it all together. It uses SST secrets, a VPC, an Aurora PostgreSQL database, a single mono Lambda Function, and a StaticSite for the frontend. It also enables the bastion/NAT EC2 instance to save cost, as discussed above. It’s an extraction of a common SST configuration for a thick React app that bootstraps itself with dynamic data from a relational database.
/// <reference path="./.sst/platform/config.d.ts" />
export default $config({
app(input) {
return {
name: "archie-api",
removal: input?.stage === "production" ? "retain" : "remove",
home: "aws",
providers: {
aws: {
// Region belongs in the provider config; pick it per stage.
region: input?.stage === "production" ? "us-east-1" : "us-east-2",
// Locally use a named AWS profile; in CI rely on the ambient credentials.
profile: process.env.CI
? undefined
: input?.stage === "production"
? "acme-production"
: "acme-dev",
},
},
};
},
async run() {
// Create a VPC with EC2-based NAT and bastion host
const vpc = new sst.aws.Vpc("Vpc", {
bastion: true,
nat: "ec2",
});
// Define a secret (set via `sst secret set`)
const dbPassword = new sst.Secret("DbPassword");
// REPLACE: simple basic-auth example; be sure to create your own username and password.
// In production you'd want stronger security, either via IPsec or an OAuth provider.
const username = new sst.Secret("USERNAME");
const password = new sst.Secret("PASSWORD");
const basicAuth = $resolve([username.value, password.value]).apply(
([username, password]) =>
Buffer.from(`${username}:${password}`).toString("base64")
);
// Create an Aurora PostgreSQL DB inside the VPC
const database = new sst.aws.Aurora("Database", {
engine: "postgres",
vpc,
});
// Create a mono Lambda function, e.g. using Hono
const api = new sst.aws.Function("ApiFunction", {
handler: "src/api.handler",
runtime: "nodejs20.x",
vpc,
url: true,
link: [database],
environment: {
DB_PASSWORD: dbPassword.value,
DB_HOST: database.host,
DB_NAME: database.database,
// Outputs can't be stringified directly; interpolate to get an Output<string>.
DB_PORT: $interpolate`${database.port}`,
NODE_ENV: "production",
NODE_OPTIONS: "--enable-source-maps --experimental-modules",
},
});
// Deploy the frontend as a static site on S3 + CloudFront; the edge functions below
// are an example of imperative programming using string interpolation.
const site = new sst.aws.StaticSite("Web", {
path: "frontend",
build: {
output: "build",
command: "npm run build",
},
edge: {
viewerRequest: {
injection: $interpolate`
if (!event.request.headers.authorization || event.request.headers.authorization.value !== "Basic ${basicAuth}" ) {
return {
statusCode: 401,
headers: {
"www-authenticate": { value: "Basic" }
}
};
}`,
},
viewerResponse: {
injection: $interpolate`
// Check if the request path matches any of our static asset patterns
const path = event.request.uri;
if (path.match(/\.(js|css|svg|geojson)$/)) {
// Add cache control headers for static assets
event.response.headers["cache-control"] = {
value: "public, max-age=31536000, immutable"
};
}`,
},
},
environment: {
API_URL: api.url,
},
});
// Return the endpoints so `sst deploy` prints them as outputs.
// (api.url and site.url are Outputs, so console.log would not show the resolved values.)
return {
api: api.url,
site: site.url,
};
},
});
Using SST Resources in a Hono Monorepo
In the above example, the Lambda handler (src/api.handler) might use Hono (a lightweight web framework). Inside that code, you can import SST-managed resource values, like the database credentials, via the Resource object that SST provides at runtime. For example, a simple Hono app connecting to the Aurora DB might look like this:
import { Hono } from "hono";
import { handle } from "hono/aws-lambda";
import postgres from "postgres";
import { Resource } from "sst";
const app = new Hono();
app.get("/", async (c) => {
// Create a Postgres client using SST-provided resource values
const sql = postgres({
user: Resource.Database.username,
password: Resource.Database.password,
host: Resource.Database.host,
port: Resource.Database.port,
database: Resource.Database.database,
});
// Run a query
const [result] = await sql`SELECT NOW() AS now`;
return c.json({ time: result.now });
});
// Export a Lambda handler; this matches the "src/api.handler" entry in sst.config.ts.
export const handler = handle(app);
In this snippet, Resource.Database refers to the Aurora component named “Database” in sst.config.ts. SST automatically injects the runtime values (host, port, username, etc.) so your code can connect to the DB without hardcoding credentials.
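It’s also worth noting the two ways values reach the function in the config above: linked resources surface on the typed Resource object, while plain environment entries surface on process.env. A small hedged sketch:

```ts
import { Resource } from "sst";

// Available because the function declares `link: [database]` in sst.config.ts.
const dbHost = Resource.Database.host;

// Available because it was passed through the `environment` block, not via linking.
const dbPassword = process.env.DB_PASSWORD;
```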
By combining imperative Typescript, simple high-level components, and smart defaults (like a cheap bastion/NAT setup), SST v3 aims to make AWS infrastructure more accessible to web developers. It still relies on AWS best practices under the hood (security groups, VPCs, etc.), but the developer experience feels closer to writing application code than crafting low-level templates.
When comparing SST to Terraform or CloudFormation, remember these key differences: SST is code-first, CloudFormation is AWS-native declarative, and Terraform is a cloud-agnostic DSL.
Rollback and state handling also vary: CloudFormation manages state internally and auto-rolls-back, whereas Terraform and SST leave state management to you and require manual fixes on failure. In exchange, SST provides conveniences like live dev mode and built-in components that streamline common serverless patterns, making it a pragmatic choice for developer-centric AWS apps.
Recap & General Recommendation
Choosing the right Infrastructure-as-Code tool depends largely on your goals, team structure, and cloud footprint. Here’s how I currently think about it:
Use SST v3 if you:
- Are building a greenfield Typescript web app with a heavy frontend and minimal backend logic.
- Want tight coupling between application logic and infrastructure.
- Are leaning serverless-first and want fast local iteration with sst dev.
- Are comfortable writing infrastructure imperatively in code (like you already do in your app).
- Want excellent ergonomics for fullstack Typescript teams and a thoughtful abstraction over AWS. It also gives you clever cost-saving defaults like EC2-based NAT, and access to your secrets and VPCs from a single mono Lambda — all with minimal code.
Use Terraform if you:
- Are managing infrastructure across multiple clouds or want to stay cloud-neutral.
- Are working on a large team with centralized platform engineering and long-lived infra.
- Need mature support for policy enforcement, drift detection, and state diffs.
- Are instituting multi-cloud infra, or are a larger org where infrastructure is its own product.
Use CloudFormation if you:
- Are operating in a strictly AWS-governed environment with deep integrations.
- Need to leverage AWS-native features like StackSets or service-linked roles.
- Are building infrastructure that must be AWS-first, deeply secure, and change-controlled.
- Are all-in on AWS. It’s verbose and rigid, but rock-solid when working within the AWS ecosystem at scale, especially in regulated or enterprise environments.
| IaC tool | Serverless Fit | Container Fit |
|---|---|---|
| CloudFormation | Medium | High |
| Terraform | High | High |
| SST (Pulumi) | Very High | Medium |
How to continue exploring SST
I will continue to experiment with SST. It’s being rapidly developed on GitHub!
If you want to try it yourself, I highly recommend looking through the /examples directory in the repo and following their HOW TO for the prerequisites on setting up your AWS accounts.
I look forward to what this productive team builds throughout the year, and to how AWS expands its new global offerings.
What’s your take… is imperative IaC here to stay, and is it a good idea?