Docker Compose Tutorial: Multi-Container Apps Made Easy (2026)
Running a single container with docker run gets you started, but real applications involve a web server, a database, a cache, and a message queue - all wired together. Docker Compose is the tool that makes orchestrating multi-container applications on a single host simple, repeatable, and version-controlled.
Why Docker Compose Exists
Before Docker Compose, spinning up a multi-container app meant writing long shell scripts full of docker run commands, manually creating networks, and remembering the exact port mappings every time. It worked, but it was fragile. A new developer joining the team had to read the script line by line to understand the topology.
Docker Compose solves this by letting you describe your entire application stack in a single docker-compose.yml file. One command - docker compose up - starts everything. One command - docker compose down - tears it all down. The file can be committed to git, reviewed in pull requests, and shared with the team.
Common problems Docker Compose solves:
- "Works on my machine" syndrome: everyone runs the same container versions and configuration
- Service startup ordering: depends_on with health checks ensures the database is ready before the app starts
- Networking: services discover each other by name without manual --link flags
- Environment management: .env files keep secrets out of the Compose file itself
- Volume management: named volumes persist database data across container restarts
Installing Docker Compose
Docker Compose V2 ships as a plugin with Docker Desktop on Mac and Windows. On Linux, install the plugin alongside Docker Engine:
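On Debian or Ubuntu, assuming Docker's official apt repository is already configured, installing the plugin typically looks like this:

```shell
# Install the Compose V2 plugin alongside an existing Docker Engine
sudo apt-get update
sudo apt-get install -y docker-compose-plugin
```

On RHEL-family distributions the equivalent package, docker-compose-plugin, is available from Docker's yum/dnf repository.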
# Verify you have Compose V2 (the plugin, not the legacy Python binary)
docker compose version
# Docker Compose version v2.24.0
If you see docker-compose: command not found, you have the legacy V1 binary. The V2 plugin uses docker compose (no hyphen). This tutorial uses V2 syntax throughout.
Your First docker-compose.yml
Start with the simplest possible example: a Node.js app backed by PostgreSQL.
services:
app:
build: .
ports:
- "3000:3000"
environment:
DATABASE_URL: postgres://postgres:secret@db:5432/myapp
depends_on:
db:
condition: service_healthy
db:
image: postgres:16-alpine
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- db_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
db_data:
What each section does:
- services: each key is a container. app and db are the two services.
- build: . tells Compose to build the image from the Dockerfile in the current directory.
- ports: maps host port 3000 to container port 3000. The format is "HOST:CONTAINER".
- depends_on / condition: service_healthy: Compose waits until the db healthcheck passes before starting app. Without this, the app often crashes on startup because the database is not ready yet.
- volumes: db_data: a named volume that persists PostgreSQL data even when the container is removed.
Core Commands
# Start all services in the background
docker compose up -d
# View running services
docker compose ps
# Follow logs for all services
docker compose logs -f
# Follow logs for a specific service
docker compose logs -f app
# Stop and remove containers (keeps volumes)
docker compose down
# Stop, remove containers AND volumes (wipes the database)
docker compose down -v
# Rebuild images and restart
docker compose up -d --build
# Run a one-off command in a service container
docker compose exec app sh
# Scale a stateless service to 3 replicas
# (remove any fixed host port mapping first, or the replicas will collide on it)
docker compose up -d --scale app=3
Environment Variables and .env Files
Hard-coding passwords in docker-compose.yml is a bad practice. Use .env files to separate configuration from the Compose definition:
# .env (never commit this file to git)
POSTGRES_PASSWORD=supersecretpassword
POSTGRES_DB=myapp
APP_PORT=3000
# docker-compose.yml
services:
app:
build: .
ports:
- "${APP_PORT}:3000"
environment:
DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
db:
image: postgres:16-alpine
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
volumes:
- db_data:/var/lib/postgresql/data
volumes:
db_data:
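Compose's variable substitution also supports shell-style defaults and required-variable errors, which keeps the stack usable (or failing loudly) when a variable is missing from .env. A sketch:

```yaml
services:
  app:
    build: .
    ports:
      # Falls back to 3000 if APP_PORT is not set in .env or the shell
      - "${APP_PORT:-3000}:3000"
    environment:
      # ${VAR:?message} aborts startup with an error if the variable is unset
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}@db:5432/${POSTGRES_DB:-myapp}
```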
Compose automatically loads .env from the same directory as the Compose file. You can also use env_file: to load an environment file directly into the container:
services:
app:
build: .
env_file:
- .env.app
Add .env to your .gitignore. Commit a .env.example with placeholder values so team members know what variables are required.
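A matching .env.example might look like this (placeholder values only, safe to commit):

```
# .env.example - copy to .env and fill in real values
POSTGRES_PASSWORD=changeme
POSTGRES_DB=myapp
APP_PORT=3000
```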
Networks
By default, Compose creates a single network for all services in a file. Services discover each other using their service name as the hostname - db resolves to the database container's IP automatically.
For more complex apps, define explicit networks to isolate concerns:
services:
nginx:
image: nginx:alpine
ports:
- "80:80"
networks:
- frontend
app:
build: .
networks:
- frontend
- backend
db:
image: postgres:16-alpine
networks:
- backend # db is NOT reachable from nginx directly
networks:
frontend:
backend:
In this setup, nginx can reach app, and app can reach db, but nginx cannot reach db directly. This mirrors a proper three-tier architecture.
Volumes: Persistent and Bind Mounts
Docker Compose supports two types of mounts:
Named volumes (recommended for databases)
services:
db:
image: postgres:16-alpine
volumes:
- db_data:/var/lib/postgresql/data
volumes:
db_data: # Docker manages the storage location
Bind mounts (recommended for development)
services:
app:
build: .
volumes:
- .:/app # Mount current directory into container
- /app/node_modules # Exclude node_modules from the bind mount
Bind mounts let you edit code on your host and see changes in the container instantly - essential for a fast development workflow. Named volumes are better for databases because Docker manages the storage and you avoid permission issues.
A Real-World Example: Node.js + PostgreSQL + Redis + Nginx
Here is a production-realistic Compose file for a Node.js API with a PostgreSQL database, Redis cache, and Nginx reverse proxy:
services:
nginx:
image: nginx:1.25-alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
- app
networks:
- frontend
restart: unless-stopped
app:
build:
context: .
target: production
env_file: .env
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
networks:
- frontend
- backend
restart: unless-stopped
deploy:
resources:
limits:
memory: 512M
db:
image: postgres:16-alpine
env_file: .env
volumes:
- db_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
networks:
- backend
restart: unless-stopped
redis:
image: redis:7-alpine
command: redis-server --save 60 1 --loglevel warning
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
networks:
- backend
restart: unless-stopped
networks:
frontend:
backend:
volumes:
db_data:
redis_data:
Multiple Compose Files: Override Pattern
A powerful pattern for managing dev/staging/prod differences is using a base Compose file with override files:
# docker-compose.yml (base - shared config)
services:
app:
build: .
environment:
NODE_ENV: production
db:
image: postgres:16-alpine
# docker-compose.override.yml (auto-loaded in development)
services:
app:
build:
target: development
volumes:
- .:/app
environment:
NODE_ENV: development
ports:
- "9229:9229" # Node debugger port
# docker-compose.prod.yml (explicit for production)
services:
app:
restart: always
deploy:
replicas: 2
# Development (auto-merges override file)
docker compose up
# Production (explicit merge)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Health Checks in Depth
Health checks are critical for production reliability. Without them, depends_on only waits for the container to start - not for the service inside to be ready. The format is:
healthcheck:
test: ["CMD", "pg_isready", "-U", "postgres"]
interval: 10s # How often to run the check
timeout: 5s # How long to wait for a response
retries: 5 # Failures before marking unhealthy
start_period: 30s # Grace period after container starts
Common health check commands by service type:
- PostgreSQL: pg_isready -U postgres
- MySQL/MariaDB: mysqladmin ping -h localhost
- Redis: redis-cli ping
- HTTP services: curl -f http://localhost:3000/health || exit 1
- MongoDB: mongosh --eval "db.adminCommand('ping')"
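For an HTTP service, the check plugs into the healthcheck block like this (a sketch assuming a /health endpoint on port 3000; wget is used because slim and alpine base images often ship it but not curl):

```yaml
services:
  app:
    build: .
    healthcheck:
      # --spider makes a HEAD-style request without downloading the body
      test: ["CMD-SHELL", "wget -q --spider http://localhost:3000/health || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 30s
```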
Step-by-Step: Migrating a docker run Command to Compose
If you have been running containers manually, here is how to migrate:
1. List all your docker run commands and note the flags: -p, -e, -v, --network, --name
2. Create a docker-compose.yml with a services: block
3. Map each -p HOST:CONTAINER to ports:
4. Map each -e KEY=VALUE to environment: or move to .env
5. Map each -v to volumes:
6. Replace --link references with service names on a shared network
7. Add depends_on with health checks for startup ordering
8. Run docker compose up -d and verify with docker compose ps
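As a worked example, here is a hypothetical docker run invocation and the Compose service it maps to (the names and values are illustrative):

```yaml
# Original command:
#   docker run -d --name web -p 8080:80 \
#     -e NGINX_HOST=example.com -v web_data:/usr/share/nginx/html nginx:alpine
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"                         # from -p 8080:80
    environment:
      NGINX_HOST: example.com             # from -e NGINX_HOST=...
    volumes:
      - web_data:/usr/share/nginx/html    # from -v web_data:/...
volumes:
  web_data:
```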
Our Docker to Compose converter automates steps 2–6 for you.
FAQ
What is the difference between Docker Compose V1 and V2?
V1 (docker-compose) was a standalone Python binary. V2 (docker compose) is a Go plugin that ships with Docker Engine and Docker Desktop. V2 is significantly faster, supports BuildKit by default, and is the maintained version. V1 reached end-of-life in July 2023. Use docker compose (no hyphen) for all new work.
Does docker-compose.yml work in production?
Docker Compose works well for single-host deployments and is commonly used for staging environments. For multi-host production deployments with high availability, Kubernetes or Docker Swarm is more appropriate. However, many small-to-medium applications run successfully in production on a single host with Docker Compose and a process supervisor like systemd managing the docker compose up invocation.
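As a sketch of the systemd approach, a unit that supervises a Compose stack might look like this (the project path /opt/myapp and the unit name are illustrative assumptions):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp Docker Compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now myapp so the stack comes back up after a reboot.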
How do I run database migrations automatically on startup?
Use a separate migrate service that depends on the database health check and runs before the app:
services:
migrate:
build: .
command: npm run migrate
depends_on:
db:
condition: service_healthy
restart: on-failure
app:
build: .
depends_on:
migrate:
condition: service_completed_successfully
How do I handle secrets securely in Docker Compose?
Use Docker secrets (Swarm mode) for true secrets management, or use environment variables loaded from a .env file that is excluded from git. Never hard-code passwords or API keys directly in docker-compose.yml. For production, consider using a secrets manager like HashiCorp Vault or AWS Secrets Manager and injecting values at runtime.
Why does my app container start before the database is ready?
depends_on without a condition only waits for the container to exist, not for the service inside it to be ready. Always use condition: service_healthy combined with a healthcheck on the dependency. Alternatively, add retry logic in your application startup code using a library like wait-for-it or dockerize.
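If you prefer application-side retries, here is a minimal POSIX-shell sketch of the wait-and-retry idea (the helper name wait_for and the pg_isready example are illustrative, not a standard tool):

```shell
#!/bin/sh
# wait_for: poll a command until it succeeds, or give up after N attempts.
wait_for() {
  cmd="$1"; retries="${2:-30}"; delay="${3:-1}"
  i=0
  until sh -c "$cmd" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "timed out waiting for: $cmd" >&2
      return 1
    fi
    sleep "$delay"
  done
}

# Example entrypoint usage: block until PostgreSQL answers, then start the app.
# wait_for "pg_isready -h db -U postgres" 30 2 && exec node server.js
```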
How do I view which containers are running and their status?
docker compose ps shows all services defined in the current Compose file, their state (running, exited, unhealthy), and port mappings. Use docker compose ps --all to include stopped containers. For logs, docker compose logs -f --tail=100 follows the last 100 lines of all service logs in real time.
The Bottom Line
Docker Compose turns a multi-container application from a collection of docker run commands in a readme into a version-controlled, reproducible, one-command stack. Start with the basics - services, ports, volumes, and depends_on - and layer in networks, override files, and resource limits as your stack grows.
Use our free Docker to Compose Converter to turn existing docker run commands into a proper docker-compose.yml instantly.
Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.