Docker for Beginners: Containers Explained with Simple Projects

Project: API and Database with Multi-Container Orchestration

Chapter 9

Estimated reading time: 11 minutes

What you will build: an API + database as one application

In this project you will run an API service and a database service together as a single “application” using multi-container orchestration. The goal is not just to start two containers, but to manage them as a unit: start/stop together, share configuration, connect over an internal network, and keep the database data persistent across restarts.

You will use Docker Compose to describe the whole stack in one file. Compose is ideal for beginner-friendly orchestration on a single machine because it lets you declare services, networks, volumes, environment variables, and startup order in a readable format. You will also learn the practical realities of multi-container apps: service discovery by name, health checks, initialization scripts, and safe handling of secrets during local development.

Project overview

  • API service: a small web API (example: Node.js + Express) that reads/writes items in a database.
  • Database service: PostgreSQL (you can swap for MySQL later, but the patterns are the same).
  • Orchestration: Docker Compose file that defines both services, a private network, and a named volume for database storage.

Key concept: orchestration is about declaring relationships

When you orchestrate multiple containers, you are declaring how services relate to each other rather than manually wiring them every time. In a multi-container app, the API depends on the database being reachable, and both services need consistent configuration. Compose provides:

  • Service definitions: each container is described as a service with image/build, ports, environment, and more.
  • Service discovery: services can reach each other by service name (for example, the API connects to db as a hostname).
  • Shared networks: Compose creates a default network so services can communicate privately without exposing the database to your host.
  • Volumes: persistent storage for the database so data survives container recreation.
  • Lifecycle management: start, stop, rebuild, and view logs for the whole stack with a few commands.

A common beginner mistake is to treat containers like lightweight VMs and hardcode IP addresses or rely on “start order” alone. In Compose, you should rely on service names and health checks, and you should design the API to retry connections because databases may take a few seconds to become ready.
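
Later, once the stack from this chapter is running, you can check service discovery for yourself. A quick sketch, assuming the busybox networking tools included in alpine-based images:

docker compose exec api nslookup db    # the service name resolves via Compose's internal DNS
docker compose exec api ping -c 1 db   # and the db container is reachable on the shared network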

Folder structure for the project

Create a new folder for this project. A clean structure helps Compose find your API build context and keeps configuration organized.

multi-container-api-db/
  docker-compose.yml
  api/
    Dockerfile
    package.json
    package-lock.json
    src/
      server.js
  db/
    init.sql
  .env

This chapter focuses on orchestration. The API code is intentionally small so you can focus on how the services work together.

Step 1: Create a minimal API that uses PostgreSQL

The API will expose endpoints to create and list “notes”. It will connect to PostgreSQL using environment variables provided by Compose.

api/package.json

{  "name": "notes-api",  "version": "1.0.0",  "main": "src/server.js",  "type": "commonjs",  "scripts": {    "start": "node src/server.js"  },  "dependencies": {    "express": "^4.18.2",    "pg": "^8.11.3"  }}

api/src/server.js

const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json());

const pool = new Pool({
  host: process.env.DB_HOST,
  port: Number(process.env.DB_PORT || 5432),
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

async function ensureTable() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS notes (
      id SERIAL PRIMARY KEY,
      text TEXT NOT NULL,
      created_at TIMESTAMP NOT NULL DEFAULT NOW()
    );
  `);
}

app.get('/health', async (req, res) => {
  try {
    await pool.query('SELECT 1');
    res.json({ ok: true });
  } catch (err) {
    res.status(500).json({ ok: false, error: err.message });
  }
});

app.get('/notes', async (req, res) => {
  const result = await pool.query('SELECT id, text, created_at FROM notes ORDER BY id DESC');
  res.json(result.rows);
});

app.post('/notes', async (req, res) => {
  const { text } = req.body;
  if (!text) return res.status(400).json({ error: 'text is required' });
  const result = await pool.query('INSERT INTO notes(text) VALUES($1) RETURNING id, text, created_at', [text]);
  res.status(201).json(result.rows[0]);
});

const port = Number(process.env.PORT || 3000);

(async () => {
  // The database might not be ready immediately; do a simple retry loop.
  const maxAttempts = 20;
  const delayMs = 1000;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await pool.query('SELECT 1');
      await ensureTable();
      break;
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      await new Promise(r => setTimeout(r, delayMs));
    }
  }
  app.listen(port, () => {
    console.log(`API listening on port ${port}`);
  });
})().catch(err => {
  console.error('Failed to start API:', err);
  process.exit(1);
});

Notice two important orchestration-friendly behaviors:

  • The API reads database connection settings from environment variables, so Compose can inject them.
  • The API retries the database connection on startup. This reduces “race conditions” where the API starts before the database is ready.
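
Once the stack from Step 6 is up, you can confirm that Compose actually injected these variables into the API container:

docker compose exec api env | grep DB_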

Step 2: Dockerfile for the API service

This Dockerfile builds a small image for the API. Keep it straightforward for learning purposes.

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY src ./src
EXPOSE 3000
CMD ["npm", "start"]

The API will be built by Compose using the api/ folder as the build context. Note that npm ci requires a package-lock.json, so run npm install once inside api/ to generate it before the first build.
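
For contrast, here is roughly the manual equivalent of what Compose will automate. This is a sketch only; notes-net and the image tag notes-api are arbitrary names:

docker network create notes-net
docker build -t notes-api ./api
docker run -d --name db --network notes-net \
  -e POSTGRES_USER=notes_user -e POSTGRES_PASSWORD=notes_password -e POSTGRES_DB=notes_db \
  -v db_data:/var/lib/postgresql/data \
  postgres:16-alpine
docker run -d --name api --network notes-net -p 3000:3000 \
  -e DB_HOST=db -e DB_PORT=5432 -e DB_USER=notes_user \
  -e DB_PASSWORD=notes_password -e DB_NAME=notes_db \
  notes-api

Two docker run commands plus a network and a volume to keep in sync by hand; the Compose file in Step 4 replaces all of this with one declarative file.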

Step 3: Database initialization script

PostgreSQL images support running initialization scripts placed in a special directory. This is useful for creating a database schema or seed data automatically the first time the database volume is created.

db/init.sql

CREATE TABLE IF NOT EXISTS notes (
  id SERIAL PRIMARY KEY,
  text TEXT NOT NULL,
  created_at TIMESTAMP NOT NULL DEFAULT NOW()
);

INSERT INTO notes(text) VALUES ('Hello from init.sql') ON CONFLICT DO NOTHING;

In real projects you would manage migrations more carefully, but this shows the pattern: the database container can bootstrap itself without manual steps.
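
After the stack is up (Step 6), you can verify that the script ran by querying the seeded table, using the credentials from the .env file created in Step 5:

docker compose exec db psql -U notes_user -d notes_db -c "SELECT * FROM notes;"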

Step 4: Create a Compose file that defines the whole stack

Now you will write docker-compose.yml to orchestrate both services. Compose will create a default network so the API can reach the database by service name.

services:
  db:
    image: postgres:16-alpine
    container_name: notes-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 5s
      timeout: 3s
      retries: 10
    restart: unless-stopped

  api:
    build:
      context: ./api
    container_name: notes-api
    environment:
      PORT: 3000
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_NAME: ${POSTGRES_DB}
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped

volumes:
  db_data:

What to notice in this Compose file

  • Service name as hostname: the API uses DB_HOST: db. In Compose, db becomes a DNS name on the project network.
  • Database not exposed to host: there is no ports mapping for db. The database is reachable from the API container but not directly from your machine. This is a common secure default.
  • Named volume: db_data stores PostgreSQL data outside the container filesystem so it persists.
  • Initialization script: ./db/init.sql is mounted read-only into the init directory.
  • Health check + depends_on condition: depends_on with service_healthy ensures Compose waits for the DB health check to pass before starting the API. This is more reliable than simple start order.
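
You can inspect the resources Compose creates. By default it prefixes them with the project name, which is the folder name (multi-container-api-db, if you kept the structure above):

docker network ls | grep multi-container-api-db
docker volume inspect multi-container-api-db_db_data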

Step 5: Add an environment file for local development

Compose can read variables from a .env file in the same directory as docker-compose.yml. This keeps credentials out of the Compose file and makes it easier to change values without editing YAML.

POSTGRES_USER=notes_user
POSTGRES_PASSWORD=notes_password
POSTGRES_DB=notes_db

For local learning, this is fine. For production, you would use a secrets manager or platform-specific secret injection rather than committing passwords to source control.
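
If you want one step more rigor locally, Compose also supports file-based secrets, and the official postgres image can read the password from a file through POSTGRES_PASSWORD_FILE. A minimal sketch, where ./secrets/db_password.txt is an example path:

services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt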

Step 6: Start the stack and verify behavior

From the project root (where docker-compose.yml lives), run:

docker compose up --build

Compose will build the API image, pull the PostgreSQL image, create the network and volume, and start both services. Keep the terminal open to watch logs. In another terminal, test the API:

curl http://localhost:3000/health
curl http://localhost:3000/notes
curl -X POST http://localhost:3000/notes -H "Content-Type: application/json" -d '{"text":"First note from curl"}'

If everything is wired correctly, /notes should show the seeded note from init.sql plus the note you posted.

Step 7: Understand persistence by recreating containers

A major reason to orchestrate with volumes is to keep stateful data safe when containers are replaced. Try this sequence:

docker compose down

This stops and removes containers and the default network, but it does not remove named volumes by default. Now start again:

docker compose up

Check the notes again:

curl http://localhost:3000/notes

Your previously inserted notes should still exist because db_data persisted them. If you want to remove the volume (and therefore erase the database), you must explicitly do so:

docker compose down -v

This is a practical mental model: containers are disposable, volumes are durable. Orchestration makes it easy to rebuild the disposable parts without losing the durable parts.
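
You can watch this distinction directly: the volume outlives down, and only the -v flag deletes it:

docker volume ls          # db_data (with the project-name prefix) still exists after "down"
docker compose down -v    # removes it, erasing the database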

Step 8: Day-to-day orchestration commands you will actually use

Once you have a Compose setup, you will typically manage the app with a small set of commands:

  • Start in background: docker compose up -d
  • View logs: docker compose logs -f or docker compose logs -f api
  • Restart one service: docker compose restart api
  • Rebuild after code changes: docker compose up --build -d
  • Stop and remove containers: docker compose down

For debugging, it is often useful to run a one-off command inside a running service container. For example, to open a shell in the API container:

docker compose exec api sh

Or to run psql inside the database container. Note that a bare $POSTGRES_USER would be expanded by your host shell, which may not have these variables set; the single quotes below defer expansion to the container's own shell, which does have the environment Compose injected:

docker compose exec db sh -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB"'

These commands are part of the orchestration workflow: you treat the stack as a managed environment where tools and services are already in the right place.

Step 9: Common orchestration pitfalls and how to avoid them

Pitfall 1: Confusing host ports with container ports

The API is exposed with "3000:3000", meaning your host’s port 3000 forwards to the container’s port 3000. The database has no port mapping, so you cannot connect to it from your host at localhost:5432. This is intentional for isolation. If you do want local access for a GUI client, you can temporarily add:

ports:  - "5432:5432"

to the db service, but understand that this exposes the database to your host network.
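
A middle ground, if you want a local GUI client without exposing the port to other machines on your network, is to bind the mapping to the loopback interface only:

ports:
  - "127.0.0.1:5432:5432"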

Pitfall 2: Assuming depends_on guarantees readiness

Without health checks, depends_on only ensures start order, not readiness. In this project you used a DB health check and also added retry logic in the API. This “belt and suspenders” approach is common: orchestration helps, but applications should still handle transient failures.
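
The same pattern extends to the API itself. A sketch of a health check you could add to the api service, using the /health endpoint and the wget bundled with alpine-based images:

api:
  healthcheck:
    test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
    interval: 5s
    timeout: 3s
    retries: 10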

Pitfall 3: Re-running init scripts unexpectedly

PostgreSQL init scripts in /docker-entrypoint-initdb.d run only when the data directory is empty (typically the first time the volume is created). If you change init.sql and expect it to re-run, it will not unless you remove the volume with docker compose down -v. For evolving schemas, use migrations rather than relying on init scripts.
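
If you deliberately want init.sql to run again during development, recreate the volume:

docker compose down -v      # removes db_data, so the data directory starts empty
docker compose up --build   # the entrypoint runs init.sql again on first start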

Pitfall 4: Hardcoding credentials in the Compose file

It is tempting to write passwords directly in YAML. Using a .env file is a step forward for local development, but still treat it carefully. Add .env to .gitignore if you plan to publish the project, and consider using separate values per environment.
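
A common convention is to ignore the real .env and commit a placeholder template instead:

echo ".env" >> .gitignore
cp .env .env.example   # replace the real values with placeholders before committing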

Step 10: Add a dedicated internal network (optional but instructive)

Compose already creates a default network, but defining your own network makes the architecture explicit and prepares you for more complex stacks (for example, adding a reverse proxy or a worker service).

services:
  db:
    image: postgres:16-alpine
    networks:
      - backend

  api:
    build:
      context: ./api
    networks:
      - backend
    ports:
      - "3000:3000"

networks:
  backend:

This doesn’t change behavior much in a two-service project, but it reinforces the idea that orchestration is a declarative map of how services connect.
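
To see why explicit networks matter, here is a sketch with a hypothetical reverse proxy added to the stack: only the API sits on both networks, so the database stays unreachable from the proxy:

services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - frontend

  api:
    build:
      context: ./api
    networks:
      - frontend
      - backend

  db:
    image: postgres:16-alpine
    networks:
      - backend

networks:
  frontend:
  backend: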

Step 11: Extend the stack with an admin tool (optional extension)

Multi-container orchestration becomes more valuable as you add supporting services. A common addition is a database admin UI. For PostgreSQL, adminer is a lightweight option. Add a third service:

services:
  db:
    image: postgres:16-alpine

  api:
    build:
      context: ./api

  adminer:
    image: adminer:4
    container_name: notes-adminer
    ports:
      - "8080:8080"
    depends_on:
      - db

Then visit http://localhost:8080 and connect using:

  • System: PostgreSQL
  • Server: db
  • Username: value of POSTGRES_USER
  • Password: value of POSTGRES_PASSWORD
  • Database: value of POSTGRES_DB

This demonstrates a key orchestration advantage: you can add tools that live “next to” your app without installing them on your host machine, and they can use the same internal network and service discovery.

Step 12: A practical checklist for multi-container projects

When you create your own API+DB stacks, use this checklist to avoid fragile setups:

  • Use service names for connections (no IP addresses).
  • Keep state in volumes and understand when init scripts run.
  • Prefer health checks for dependencies and add retry logic in the app.
  • Expose only what you need with ports; keep databases internal by default.
  • Centralize configuration with environment variables and a local .env file.
  • Make it reproducible: one command should bring the whole stack up.

Now answer the exercise about the content:

In a Docker Compose API plus PostgreSQL setup, which approach best prevents the API from failing when the database takes time to become ready?

Answer: combine a database health check with retry logic in the API. Health checks let Compose wait until the database is reachable, and the retry logic handles brief delays anyway; together they reduce the race condition where the API starts before the database is ready.

Next chapter

Defining Multi-Service Setups with Docker Compose
