
Docker Commands Cheat Sheet 2026: 50+ Essential Commands with Examples

Docker is the foundation of modern software delivery. Whether you are containerizing a Node.js API for the first time or managing a production multi-service stack, this cheat sheet covers every command and pattern you will reach for repeatedly - organized by workflow, with real examples.

Why Docker Still Matters in 2026

Containers have become the universal unit of software deployment. Docker standardized how applications are packaged, and despite the rise of Kubernetes and container orchestration platforms, Docker remains the dominant tool for building images and running containers locally. Understanding Docker deeply is a prerequisite for Kubernetes, ECS, Cloud Run, and every other container platform because they all run the same OCI-compatible images Docker produces.

The pain points that bring people to a cheat sheet are predictable: forgetting the exact flag to follow logs, not remembering how to get a shell into a running container, or trying to work out why a Compose service will not start. This guide covers all of that.

Container Lifecycle

The core Docker workflow is: pull or build an image, run a container from it, inspect and debug the container, then stop and remove it. These are the commands you will use every day.

# Run a container
docker run -d --name myapp -p 8080:3000 myimage:latest
docker run -it --rm ubuntu:22.04 bash        # interactive, auto-remove on exit
docker run --env-file .env -d myimage        # inject environment variables from file
docker run --memory 512m --cpus 1 myimage    # resource limits

# Lifecycle
docker start myapp
docker stop myapp           # graceful (SIGTERM then SIGKILL after 10s)
docker restart myapp
docker kill myapp           # immediate SIGKILL
docker rm myapp             # remove stopped container
docker rm -f myapp          # force remove running container

# List containers
docker ps                   # running
docker ps -a                # all (including stopped)
docker ps -q                # IDs only (useful for scripting)
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Cleanup
docker container prune      # remove all stopped containers
docker system prune -a      # remove all unused containers, images, networks, cache
docker system df            # show disk usage by Docker components
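
Because docker ps -q prints bare container IDs, it composes cleanly with other commands. Two common one-liners (sketches - they will complain harmlessly if nothing matches):

# Stop every running container
docker stop $(docker ps -q)
# Remove every exited container (same effect as docker container prune)
docker rm $(docker ps -aq --filter status=exited)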

Image Management

Images are the blueprints for containers. Managing them well keeps your disk clean and your builds fast.

# Build
docker build -t myapp:1.0 .
docker build -t myapp:1.0 -f docker/Dockerfile.prod .  # custom Dockerfile path
docker build --no-cache -t myapp:1.0 .                  # bypass layer cache
docker build --build-arg NODE_ENV=production -t myapp . # pass build args
docker build --platform linux/amd64 -t myapp .          # cross-platform build

# List and inspect
docker images
docker image inspect myapp:1.0
docker history myapp:1.0             # see each layer: size, command, timestamp

# Pull and push
docker pull node:20-alpine
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
docker login registry.example.com    # authenticate to private registry

# Save/load (for air-gapped transfers)
docker save myapp:1.0 | gzip > myapp.tar.gz
docker load < myapp.tar.gz

# Cleanup
docker image prune           # remove dangling images (untagged)
docker image prune -a        # remove all images not used by any container

Writing an Efficient Dockerfile

A poorly written Dockerfile produces bloated images, slow rebuilds, and security vulnerabilities. These principles apply to every language and framework.

Multi-Stage Build: The Production Pattern

Multi-stage builds are the single most impactful improvement you can make. They separate the build environment (compilers, dev dependencies, build tools) from the runtime image. The result: a final image that contains only what is needed to run the application.

# Stage 1: Build (full toolchain, can be large)
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependency manifests first - Docker caches this layer separately
# so npm ci only re-runs when package.json changes
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies now that the build is done, so only
# production modules get copied into the runtime stage
RUN npm prune --omit=dev

# Stage 2: Production runtime (minimal, no build tools)
FROM node:20-alpine
WORKDIR /app
# Create non-root user before copying files
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/sh -D appuser

# Copy only the build output and production dependencies
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]

Key Dockerfile Rules

  • Order layers by change frequency. Dependencies change less often than source code. Put COPY package*.json and RUN npm ci before COPY . . to keep the dependency install layer cached across most rebuilds.
  • Use a .dockerignore file. Exclude node_modules, .git, .env, test fixtures, and local build output (a starter example follows this list). Without this, Docker can send gigabytes of unnecessary build context to the daemon on every build.
  • Use npm ci not npm install. ci is deterministic (respects lockfile exactly), faster in CI, and fails loudly on lockfile inconsistencies.
  • Always run as non-root. Create a dedicated user and switch with USER. Running as root in a container is a significant security risk if the container is ever compromised.
  • Pin base image versions. Use node:20.11-alpine3.19 not node:latest. latest is unpredictable and can break builds silently when the upstream tag moves.
  • Use Alpine or distroless images. node:20-alpine is ~60MB; node:20 (Debian) is ~350MB. Smaller images mean faster pulls and smaller attack surfaces.
  • Add a HEALTHCHECK. Docker and orchestrators (ECS, Kubernetes) use health checks to detect when a container is running but not healthy. Without one, an app that hangs or stops serving while its process keeps the container alive will never be restarted.
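
As a starting point for the .dockerignore rule above, here is a typical Node.js layout - adjust the entries to match your repo:

# .dockerignore
node_modules
.git
.env
.env.*
dist
coverage
*.log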

Docker Compose

Compose is the standard way to run multi-container applications locally and in simple deployments. The V2 CLI (docker compose, with a space) ships with Docker Desktop and as a plugin for Docker Engine; the legacy docker-compose V1 binary is deprecated.

# docker-compose.yml - a complete example with best practices
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/mydb
      REDIS_URL: redis://cache:6379
    env_file:
      - .env.local          # load additional env vars from file
    depends_on:
      db:
        condition: service_healthy   # wait for health check, not just container start
      cache:
        condition: service_started
    restart: unless-stopped
    volumes:
      - ./src:/app/src      # bind mount for local development hot-reload

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru

volumes:
  pgdata:     # named volume, managed by Docker, persists across restarts

# Compose CLI commands
docker compose up -d               # start all services in background
docker compose up --build -d       # rebuild images before starting
docker compose down                # stop and remove containers + networks
docker compose down -v             # also remove named volumes (wipes data)
docker compose logs -f app         # follow logs for the 'app' service
docker compose logs -f --tail=50   # last 50 lines then follow
docker compose exec app sh         # open shell in running container
docker compose run --rm app npm test  # run a one-off command in a new container
docker compose build --no-cache    # rebuild all images ignoring cache
docker compose ps                  # list services and their status
docker compose restart app         # restart a single service
docker compose pull                # pull latest images for all services
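
Compose can also merge multiple files, the usual way to keep one base file plus per-environment overrides. Assuming a hypothetical docker-compose.prod.yml with production-only settings:

# Later files override earlier ones (ports, env vars, volumes, ...)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# Print the fully merged configuration to verify what will actually run
docker compose -f docker-compose.yml -f docker-compose.prod.yml config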

Debugging Containers

When something goes wrong in a container, these commands will get you to the answer in minutes.

# View logs
docker logs myapp                  # all logs (stdout + stderr)
docker logs -f myapp               # follow (streaming tail)
docker logs --tail 100 myapp       # last 100 lines
docker logs --since 1h myapp       # logs from the last hour
docker logs --since "2026-03-26T10:00:00" myapp  # logs since specific time

# Execute commands in a running container
docker exec -it myapp sh           # interactive shell (Alpine)
docker exec -it myapp bash         # bash (Debian/Ubuntu)
docker exec myapp env              # see all environment variables
docker exec myapp cat /etc/hosts   # run a single command

# Inspect container metadata
docker inspect myapp                              # full JSON: networks, mounts, config
docker inspect --format='{{.State.Health.Status}}' myapp  # just health status
docker inspect --format='{{.NetworkSettings.IPAddress}}' myapp  # container IP (default bridge; custom networks sit under .NetworkSettings.Networks)
docker stats                       # live CPU, memory, network, disk I/O
docker stats --no-stream           # snapshot (non-streaming)
docker top myapp                   # list processes running inside container

# Copy files between host and container
docker cp myapp:/app/logs/error.log ./error.log
docker cp ./config.json myapp:/app/config.json

Pro tip: if a container crashes immediately on start and you cannot get a shell into it, override the entrypoint to drop into a shell: docker run -it --entrypoint sh myimage. This lets you inspect the filesystem, test commands, and identify the issue without modifying the Dockerfile.

Networking

Docker networking enables containers to communicate. The default bridge network works for simple cases, but custom networks are better: they give containers built-in DNS, so services can reach each other by container name.

# List networks
docker network ls

# Create a custom bridge network
docker network create mynet

# Attach containers to the network - they resolve each other by container name
docker run -d --name api --network mynet myapp
docker run -d --name db --network mynet postgres:16

# From inside 'api', connect to 'db' by hostname 'db':
# psql -h db -U postgres mydb

# Inspect a network
docker network inspect mynet

# Remove unused networks
docker network prune
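
You can also attach or detach a running container without recreating it ('existing-container' is a placeholder name):

# Attach a running container to the network - it gets a DNS entry immediately
docker network connect mynet existing-container
# Detach it again
docker network disconnect mynet existing-container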

Volumes and Data Persistence

# Named volume - managed by Docker, survives container removal, easy to back up
docker volume create mydata
docker run -v mydata:/app/data myimage
docker volume inspect mydata       # see where data lives on host
docker volume ls
docker volume prune                # remove all unused volumes

# Bind mount - maps a host directory into the container
# Ideal for local development (changes on host appear instantly in container)
docker run -v $(pwd)/src:/app/src myimage

# tmpfs mount - in-memory, not persisted, fast for temporary data
docker run --tmpfs /app/tmp:size=100m myimage
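
The "easy to back up" part of named volumes deserves an example. A named volume has no host path you should rely on directly, so the portable pattern is a throwaway container that mounts both the volume and a host directory:

# Back up the 'mydata' volume to a tarball in the current directory
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/mydata.tar.gz -C /data .

# Restore the tarball into a volume (new or existing)
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine \
  tar xzf /backup/mydata.tar.gz -C /data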

Production Security Checklist

  • Run as non-root: Every production container should have a USER directive with a non-root UID.
  • Read-only filesystem: Add --read-only to docker run or read_only: true in Compose. Mount writable paths explicitly with tmpfs or named volumes.
  • Drop capabilities: Add --cap-drop ALL and --cap-add only what is specifically needed.
  • No privileged mode: Never use --privileged in production. It gives the container nearly full host access.
  • Scan images: Use docker scout cves myimage or Trivy (trivy image myimage) to find known CVEs before deploying.
  • Pin digest not just tag: Tags are mutable. Pin images by digest in production: node:20-alpine@sha256:abc123....
  • Limit resources: Always set --memory and --cpus limits to prevent a runaway container from starving the host. A run command combining several of these flags follows this list.
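
Put together, a hardened run for the earlier example app might look like this (image name and paths are illustrative; non-root comes from the USER directive baked into the image):

docker run -d --name myapp \
  --read-only --tmpfs /app/tmp \
  --cap-drop ALL \
  --memory 512m --cpus 1 \
  -p 8080:3000 myimage:1.0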

Frequently Asked Questions

What is the difference between a Docker image and a container?

An image is a read-only template - a layered filesystem snapshot built by a Dockerfile. A container is a running instance of an image. The relationship is like a class and an object: one image can run as many containers simultaneously. When you run docker run, Docker creates a writable container layer on top of the image's read-only layers. The image itself is never modified.
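
To see the one-image, many-containers relationship in action:

docker pull nginx:alpine
docker run -d --name web1 nginx:alpine
docker run -d --name web2 nginx:alpine   # second container, same image
docker ps --filter ancestor=nginx:alpine # both containers, one shared image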

Why is my Docker image so large?

The most common culprits: using a full OS base image instead of Alpine or distroless, not cleaning up package manager caches in the same RUN layer that installs packages (each RUN creates a layer, so cleanup in a separate layer does not shrink the image), and not using multi-stage builds (so build tools and dev dependencies end up in the final image). Check layer sizes with docker history myimage to identify what is taking up space.
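
The "same RUN layer" point is easiest to see in a Dockerfile. On a Debian-based image, this installs and cleans up in a single layer; moving the rm into its own RUN would leave the apt cache permanently baked into the earlier layer:

# Install and clean up in one layer so the cache never lands in the image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*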

How do I pass environment variables to a container securely?

For local development: use an .env file with --env-file .env (and add .env to .gitignore). For production: use your platform's secrets management (AWS Secrets Manager, Kubernetes Secrets, Docker Swarm secrets) and inject values as environment variables at runtime. Never bake secrets into the image with ENV directives in the Dockerfile - they are visible in docker history and in the image metadata.
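
As a sketch of runtime injection - assuming the AWS CLI is configured and a secret named prod/db-password exists - the value never touches the image or the repo:

# Fetch the secret at run time and pass it as an env var
docker run -d \
  -e DB_PASSWORD="$(aws secretsmanager get-secret-value \
       --secret-id prod/db-password --query SecretString --output text)" \
  myimage:1.0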

What is the difference between CMD and ENTRYPOINT?

ENTRYPOINT defines the executable that always runs. CMD provides default arguments to the entrypoint. When you run docker run myimage bash, bash overrides CMD but not ENTRYPOINT. Best practice: use ENTRYPOINT for the main process, CMD for default arguments. Use the exec form (["node", "server.js"]) not the shell form (node server.js) - the exec form makes the process PID 1 directly, which handles signals correctly.
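
A minimal illustration:

# Dockerfile
ENTRYPOINT ["node", "server.js"]
CMD ["--port", "3000"]

# docker run myimage              -> node server.js --port 3000
# docker run myimage --port 8080  -> node server.js --port 8080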

When should I use Docker Compose vs Kubernetes?

Docker Compose is ideal for local development and simple single-host deployments. It has almost no operational overhead and is straightforward to understand. Kubernetes is designed for multi-host production workloads that need auto-scaling, self-healing, rolling deployments, and sophisticated networking. For most teams, the right answer is: Compose locally, Kubernetes (or a managed equivalent like ECS or Cloud Run) in production. The images are the same - the orchestration layer differs.

How do I make containers start in the correct order in Compose?

depends_on alone only waits for the container to start, not for the application inside it to be ready. Use depends_on with condition: service_healthy combined with a healthcheck on the dependency service. The healthcheck runs a command inside the container (pg_isready for Postgres, redis-cli ping for Redis) and Compose waits until it passes before starting dependent services.
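
For the Redis service from the Compose example above, a healthcheck that lets you upgrade cache from service_started to service_healthy looks like this:

  cache:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5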

The Bottom Line

Master the container lifecycle, write efficient multi-stage Dockerfiles, use Compose for local development, and lock down production containers with non-root users, resource limits, and image scanning. When debugging, docker logs -f, docker exec -it <container> sh, and docker stats will solve 90% of issues without needing anything else.

Use our free tool here → Docker Run to Compose Converter

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.