3 Best Examples of Deploying a Node.js API to Heroku

If you’re building APIs with Node, you’ve probably hit the same wall many developers do: “Okay, it works locally. Now how do I actually ship this thing?” That’s where seeing real examples of deploying a Node.js API to Heroku becomes incredibly helpful. Instead of vague theory, you want to see how people actually wire up `Procfile`s, environment variables, and CI so you can copy, adapt, and move on with your life. In this guide, we’ll walk through three of the best examples of deploying a Node.js API to Heroku, then expand with several more real examples and patterns you’ll run into in 2024–2025. We’ll cover a basic Express API, a JWT-secured API with a database, and a production-style setup with CI/CD and environment-specific configs. Along the way, we’ll talk about why Heroku is still relevant in the age of serverless, and how to avoid the common gotchas that slow teams down.
Written by Jamie
Example 1: A Basic Express API – The Simplest Deploy

When people search for examples of deploying a Node.js API to Heroku, this is usually the first scenario they mean: a simple Express server, one or two routes, no database, just something live on the internet.

Here’s a realistic setup for a basic API that returns JSON.

Project structure and config

You might start with something like:

my-api/
  package.json
  server.js
  Procfile

server.js:

const express = require('express');
const app = express();

const PORT = process.env.PORT || 3000;

app.get('/health', (req, res) => {
  res.json({ status: 'ok', timestamp: new Date().toISOString() });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Two details matter for Heroku:

  • You must listen on process.env.PORT.
  • You should not hardcode hostnames or ports.

Procfile:

web: node server.js

This tells Heroku to spin up a web dyno and run node server.js.

package.json (minimal):

{
  "name": "my-basic-api",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "engines": {
    "node": "20.x"
  },
  "dependencies": {
    "express": "^4.19.0"
  }
}

Specifying a Node version with "engines" keeps your local and Heroku environments aligned, which is even more important in 2024–2025 as Node 18 and 20 features become standard.

Deploying from Git

Once you’ve committed your files:

heroku create my-basic-node-api
heroku git:remote -a my-basic-node-api

git push heroku main  # or master

Heroku detects Node, runs npm install, then runs the web process from your Procfile. Your API is now live at something like:

https://my-basic-node-api.herokuapp.com/health

This is the first and simplest example of deploying a Node.js API to Heroku: a single-process Express app with no external dependencies. Many of the best examples you’ll find online are just variations on this pattern.


Example 2: API + Database + Env Vars – A Real-World Pattern

The next level up from a basic deploy is what most teams actually ship: an API that talks to a database, uses environment variables, and handles authentication.

Let’s say you’re building a simple task API:

  • Node + Express
  • PostgreSQL via Heroku Postgres
  • JWT-based authentication

Environment variables and security

Instead of hardcoding secrets, you wire them through env vars:

require('dotenv').config();
const express = require('express');
const jwt = require('jsonwebtoken');
const { Pool } = require('pg');

const app = express();
app.use(express.json());

const PORT = process.env.PORT || 3000;
const JWT_SECRET = process.env.JWT_SECRET;

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false
});

function authMiddleware(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader) return res.status(401).json({ error: 'Missing Authorization header' });

  const token = authHeader.replace('Bearer ', '');

  try {
    req.user = jwt.verify(token, JWT_SECRET);
    next();
  } catch (err) {
    return res.status(401).json({ error: 'Invalid token' });
  }
}

app.get('/tasks', authMiddleware, async (req, res) => {
  const { rows } = await pool.query('SELECT * FROM tasks WHERE user_id = $1', [req.user.id]);
  res.json(rows);
});

app.listen(PORT, () => console.log(`API on ${PORT}`));

On Heroku you configure these values without checking them into Git:

heroku addons:create heroku-postgresql:essential-0
heroku config:set JWT_SECRET="super-long-random-string" \
  NODE_ENV=production

This example of deploying a Node.js API to Heroku highlights a pattern you’ll reuse constantly: let Heroku manage the database URL, and keep secrets in config vars.

Handling migrations

You can integrate a migration tool like Knex or Prisma. Here’s a Knex-style workflow:

npm install knex pg
npx knex migrate:make create_tasks
npx knex migrate:latest
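These commands assume a knexfile.js at the project root. Here’s a minimal sketch; the local fallback URL, pool sizes, and migrations directory are assumptions you’d tune for your own setup:

```javascript
// knexfile.js — a minimal sketch. The local fallback URL is an assumption.
const config = {
  client: 'pg',
  // Heroku Postgres injects DATABASE_URL; fall back to a local database in dev.
  connection: process.env.DATABASE_URL || 'postgres://localhost:5432/tasks_dev',
  // Keep the pool small: entry-level Heroku Postgres plans cap connections low.
  pool: { min: 0, max: 5 },
  migrations: { directory: './migrations' },
};

module.exports = config;
```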

Then, in Heroku:

heroku run npx knex migrate:latest

This is where Heroku’s one-off dynos shine: you run migrations in the same environment where your app will run, which reduces the “works on my machine” problem.

Among the best examples of deploying a Node.js API to Heroku in real teams, this pattern shows up constantly: environment-variable-driven config, managed Postgres, and a lightweight migration story.


Example 3: Production-Style Deploy with CI/CD and Review Apps

The third example in our set is closer to what a serious product team would use in 2024–2025:

  • GitHub integration
  • Automated tests on each push
  • Auto-deploy to staging
  • Manual promotion to production
  • Optional Review Apps per pull request

GitHub integration and pipelines

You can set this up entirely through the Heroku dashboard:

  • Create two apps: my-api-staging and my-api-prod.
  • Create a pipeline and add both apps.
  • Connect the pipeline to your GitHub repo.
  • Enable automatic deploys to staging from the main branch.
  • Require manual promotion from staging to production.
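If you prefer scripting it, the app-and-pipeline steps can also be done with the Heroku CLI (pipeline and app names match the list above; connecting GitHub still happens in the dashboard):

```shell
heroku pipelines:create my-api -a my-api-staging -s staging
heroku pipelines:add my-api -a my-api-prod -s production
```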

Now your workflow looks like this:

  • Open a pull request.
  • Tests run via GitHub Actions.
  • Once merged, Heroku auto-deploys to staging.
  • A human verifies, then promotes to production.

This is one of the best examples of deploying a Node.js API to Heroku in a way that scales with a team: it bakes in review, testing, and promotion instead of relying on git push heroku main from someone’s laptop.

Example GitHub Actions workflow

Here’s a trimmed-down CI config:

name: CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:

      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - run: npm ci
      - run: npm test

Heroku handles the deployment side after tests pass, while GitHub Actions focuses on keeping your Node.js API healthy. This combination is very common in modern examples of deploying a Node.js API to Heroku because it’s simple to reason about and doesn’t lock you into a single vendor for CI.


More Real Examples of Deploying a Node.js API to Heroku in 2024–2025

Those three are the headline examples, but real projects rarely stop there. Let’s walk through several more real patterns you’re likely to hit.

Example 4: Rate-Limited Public API with Caching

Imagine you’re exposing a public API for a mobile app. You don’t want to melt your database, and you need to protect against abuse.

A common pattern:

  • Use express-rate-limit to control request volume.
  • Use Redis (via Heroku Redis) for caching expensive responses.
  • Keep configuration in env vars for tuning in production.

Key code ideas:

const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: Number(process.env.RATE_LIMIT_MAX) || 100 // env vars are strings
});

app.use('/api/', limiter);

Then configure Redis via REDIS_URL from Heroku. This example of deploying a Node.js API to Heroku is popular for public-facing APIs that need some basic protection but don’t yet justify a separate API gateway.
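The caching half can be sketched as a small Express-style middleware. This is an illustration, not any particular library’s API: `cache` stands in for whatever client you use (Heroku Redis via a client library in production), and the key scheme and TTL handling are assumptions.

```javascript
// A sketch of a response-cache middleware. `cache` is any object with
// async get(key) and set(key, value, ttlSeconds) — backed by Heroku Redis
// in production. Key scheme and TTL handling here are illustrative.
function cacheMiddleware(cache, ttlSeconds) {
  return async (req, res, next) => {
    const key = `cache:${req.originalUrl}`;
    const hit = await cache.get(key);
    if (hit) {
      // Serve the cached body without touching the database.
      return res.json(JSON.parse(hit));
    }
    // Wrap res.json so the response body is cached on the way out.
    const originalJson = res.json.bind(res);
    res.json = (body) => {
      cache.set(key, JSON.stringify(body), ttlSeconds);
      return originalJson(body);
    };
    next();
  };
}

module.exports = cacheMiddleware;
```

You’d mount it per route, e.g. app.get('/api/popular', cacheMiddleware(redisClient, 60), handler), so only the expensive endpoints carry the caching logic.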

Example 5: Background Jobs with a Worker Dyno

APIs that send emails, process images, or integrate with third-party services shouldn’t do everything inside the request/response cycle. One of the best examples of deploying a Node.js API to Heroku for this use case is splitting your work into:

  • A web dyno that handles HTTP.
  • A worker dyno that processes jobs from a queue (e.g., Redis + BullMQ).

Procfile:

web: node server.js
worker: node worker.js

Then scale independently:

heroku ps:scale web=1 worker=1

This pattern shows up constantly in real examples of deploying a Node.js API to Heroku for production applications: it keeps requests fast while background work happens asynchronously.

Example 6: Multi-Environment Config with Feature Flags

By 2024–2025, even small teams are using feature flags and multiple environments (dev, staging, prod). Heroku’s config vars make this straightforward:

  • NODE_ENV=development locally.
  • NODE_ENV=staging on your staging app.
  • NODE_ENV=production on your production app.

You might have flags like:

heroku config:set FEATURE_NEW_CHECKOUT=true -a my-api-staging
heroku config:set FEATURE_NEW_CHECKOUT=false -a my-api-prod

Then in Node:

const featureNewCheckout = process.env.FEATURE_NEW_CHECKOUT === 'true';

This example of deploying a Node.js API to Heroku with feature flags lets you test new behavior safely before exposing it to real users.
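The string comparison above works, but it’s easy to get subtly wrong elsewhere in a codebase (the string 'false' is truthy in JavaScript), so a tiny helper can centralize the parsing. The helper name and accepted truthy values are illustrative conventions, not a standard:

```javascript
// A tiny helper for boolean feature flags stored in env vars. The accepted
// truthy values ('true', '1') are a convention assumed for this sketch.
function flag(name, fallback = false) {
  const raw = process.env[name];
  if (raw === undefined) return fallback;
  return raw === 'true' || raw === '1';
}

module.exports = { flag };
```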

Example 7: Scheduled Jobs with Heroku Scheduler

Some APIs need periodic work: cleanup tasks, sending daily summaries, or refreshing cached data. Another real example of deploying a Node.js API to Heroku uses the Heroku Scheduler add-on:

  • Install Heroku Scheduler.
  • Add a script in package.json, like "scripts": { "cron:cleanup": "node scripts/cleanup.js" }.
  • In the Scheduler UI, run npm run cron:cleanup every 10 minutes or daily.

This keeps your API codebase in one place while still handling scheduled tasks without managing a separate cron server.
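The scripts/cleanup.js referenced above might look like the sketch below. The table name, created_at column, and 30-day retention window are assumptions; the date math is split into a pure helper so it can be tested without a database.

```javascript
// scripts/cleanup.js — a sketch; the table name, column, and retention
// window are assumptions, not part of the original example.
const RETENTION_DAYS = Number(process.env.RETENTION_DAYS) || 30;

// Pure helper so the date math is testable without a database.
function cutoffDate(now = new Date(), days = RETENTION_DAYS) {
  return new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
}

// Heroku Scheduler would invoke `node scripts/cleanup.js`, whose entry point
// calls run(). It needs `pg` and DATABASE_URL, so the call itself is omitted
// here to keep the sketch runnable on its own.
async function run() {
  const { Pool } = require('pg');
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  await pool.query('DELETE FROM tasks WHERE created_at < $1', [cutoffDate()]);
  await pool.end();
}

module.exports = { cutoffDate, run };
```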

Example 8: Logging and Observability in Production

In 2024–2025, logging and monitoring are non-negotiable. A realistic example of deploying a Node.js API to Heroku includes:

  • Structured logs (JSON) from Node.
  • Log drains or add-ons like Papertrail or LogDNA.
  • Health checks and uptime monitoring.

Basic structured logging:

app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;
    console.log(JSON.stringify({
      method: req.method,
      path: req.path,
      status: res.statusCode,
      duration
    }));
  });
  next();
});

Heroku aggregates these logs and can forward them to external systems. While this may not be the flashiest of the best examples, it’s the one that saves you when something breaks at 3 a.m.


Why Heroku Still Makes Sense for Node.js APIs

You’ll see a lot of debate in 2024–2025 about whether to use Heroku, serverless platforms, or container-based solutions. There’s no one right answer, but Heroku still wins in a few scenarios:

  • You want fast, low-friction deployments for small teams.
  • You value built-in Postgres, Redis, and logging over managing infrastructure.
  • You prefer Git-based deploys and simple pipelines.

Many of the best examples of deploying a Node.js API to Heroku are from teams that care more about shipping features than tuning Kubernetes clusters.

If you want a deeper grounding in reliability practices, the general principles of uptime, error budgets, and monitoring from Google’s Site Reliability Engineering book map cleanly to Heroku-style deployments, even though that book isn’t Heroku-specific.


FAQ: Real-World Questions About Deploying Node.js APIs to Heroku

What are some real examples of deploying a Node.js API to Heroku in 2024?

Real examples include a basic Express health-check API, a JWT-secured API backed by Heroku Postgres, a rate-limited public API with Redis caching, a background-job setup with worker dynos, and a CI/CD pipeline that auto-deploys to staging and promotes to production.

Do I need a Procfile for every example of deploying a Node.js API to Heroku?

You don’t always need a Procfile for very simple apps, because Heroku can infer web: npm start. That said, most best examples use a Procfile explicitly, especially once you add worker dynos or nonstandard start commands. It makes your process model clear and avoids surprises.

How do I handle environment variables safely on Heroku?

You set them via heroku config:set and read them with process.env in Node. This matches the twelve-factor app guidelines used widely in industry: keep configuration out of the codebase, never commit secrets to Git, and load them from the environment at runtime.

Can I use Node 20 or later when deploying to Heroku?

Yes. In your package.json, specify:

"engines": {
  "node": "20.x"
}

Heroku’s Node buildpack is updated regularly, and using a supported LTS version like Node 18 or 20 is recommended to stay aligned with current security and performance improvements.

Are these examples of deploying a Node.js API to Heroku suitable for production?

The patterns themselves are production-ready, but production quality depends on how you implement them. Add proper error handling, logging, rate limiting, authentication, and monitoring. Many of the best examples from real teams evolve from the simple three examples we covered into more mature setups with background jobs, feature flags, and strong observability.


Heroku may not be the flashiest platform in 2025, but when you look at the practical examples and extended patterns above, you see why it still shows up in so many real-world stacks: it lets you focus on your API, not your infrastructure.
