
Dockerfile Best Practices 2026: Smaller, Faster, More Secure Images

A poorly written Dockerfile produces 2GB images that take 10 minutes to build, run as root, and contain dozens of known CVEs. A well-written one produces a 50MB image that builds in 30 seconds, runs as a non-root user, and passes security scans. Here is how to get there.

The Problem with Naive Dockerfiles

Most developers learn Docker by writing something like this:

# What NOT to do
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]

This works, but it has serious problems:

  • The latest tag means builds are non-reproducible.
  • The image includes development dependencies and build tools the runtime does not need.
  • The entire source directory is copied, including node_modules and .git.
  • The container runs as root.
  • There is no health check.
  • Build times are slow because layer caching is not optimized.

The result: a 900MB+ image running as root with thousands of unnecessary packages - a security and performance disaster. Every practice in this guide addresses one or more of these problems.

1. Pin Your Base Image Version

Never use :latest in production Dockerfiles. Pin to a specific version so builds are reproducible and security audits are meaningful:

# Bad - non-deterministic, breaks silently
FROM node:latest

# Good - pinned minor version
FROM node:20.11-alpine3.19

# Best - pinned to exact digest (fully reproducible)
FROM node:20.11-alpine3.19@sha256:a1b2c3d4...

Use docker pull node:20.11-alpine3.19 and check the digest with docker inspect node:20.11-alpine3.19 --format='{{index .RepoDigests 0}}' to get the exact hash for fully reproducible builds.
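As a sketch of automating the pinning step, here is a small shell helper that rewrites an unpinned FROM line into the digest-pinned form. The digest value below is a placeholder standing in for whatever docker inspect actually returns:

```shell
# Rewrite an unpinned FROM line to pin the image digest.
# DIGEST is a placeholder; in practice capture the real value with:
#   docker inspect node:20.11-alpine3.19 --format='{{index .RepoDigests 0}}'
IMAGE="node:20.11-alpine3.19"
DIGEST="sha256:a1b2c3d4"

# Pipe a Dockerfile through sed, anchoring on the exact FROM line.
printf 'FROM %s\nWORKDIR /app\n' "$IMAGE" \
  | sed "s|^FROM ${IMAGE}\$|FROM ${IMAGE}@${DIGEST}|"
```

In CI you would run this (or a Renovate/Dependabot rule) against the real Dockerfile so every build uses the digest you audited.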

2. Choose a Minimal Base Image

The base image determines your attack surface. Start with the smallest image that meets your needs:

  • Alpine Linux (~5MB): node:20-alpine, python:3.12-alpine. Minimal packages, musl libc. Good for most apps.
  • Debian Slim (~30MB): node:20-slim, python:3.12-slim. Debian compatibility without full Debian bloat. Better if you need glibc.
  • Distroless (~2MB): Google's distroless images contain only the runtime (no shell, no package manager). Maximum security, harder to debug.
  • Scratch (0 bytes): Empty base. Used for statically compiled Go/Rust binaries.
# A statically compiled Go binary on scratch
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server .

FROM scratch
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
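One caveat with scratch: there is no CA certificate bundle, so outbound HTTPS calls from the binary fail with certificate errors. A common fix is to copy the bundle from the builder stage - a sketch, with paths assuming Alpine's ca-certificates package:

```dockerfile
FROM golang:1.22-alpine AS builder
RUN apk add --no-cache ca-certificates
# ... build steps as above ...

FROM scratch
# Copy the CA bundle so TLS verification works in the final image
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
```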

3. Optimize Layer Caching with Proper COPY Order

Docker caches each layer. A cache miss on any layer invalidates all subsequent layers. Copy dependency manifests first, install dependencies, then copy source code. This way, dependency installation is only re-run when package.json changes - not every time you modify a source file:

# Bad - any source change re-runs npm install
COPY . .
RUN npm ci

# Good - source changes don't invalidate the dependency layer
COPY package.json package-lock.json ./
RUN npm ci
COPY . .

This optimization alone can cut build times from 3 minutes to 10 seconds for typical Node.js applications during iterative development.

4. Use Multi-Stage Builds

Multi-stage builds are the single most impactful practice for reducing production image size. Build your application in a full-featured stage, then copy only the runtime artifacts to a minimal final stage:

# Stage 1: Install dependencies and build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci                          # Installs ALL deps including devDependencies
COPY . .
RUN npm run build                   # TypeScript compile, webpack, etc.

# Stage 2: Production image - only runtime artifacts
FROM node:20-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev               # Production deps only
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]

Result: a production image that contains only what is needed to run the application. Build tools, test frameworks, TypeScript compiler, and source maps stay in the builder stage and never reach production.

5. Create and Use a .dockerignore File

Without a .dockerignore, COPY . . sends your entire build context to Docker - including node_modules, .git, test fixtures, and local .env files containing secrets. This slows builds and risks leaking credentials into the image.

# .dockerignore
node_modules
.git
.gitignore
*.md
.env
.env.*
coverage/
.nyc_output/
# dist/ will be generated in the build stage
dist/
*.log
.DS_Store
Dockerfile*
docker-compose*
.dockerignore

After adding a good .dockerignore, build context transfers often drop from hundreds of MB to a few KB.

6. Never Run as Root

By default, processes inside Docker containers run as root (UID 0). This is a significant security risk: combined with a container escape or an RCE vulnerability, it can hand an attacker root on the host unless user namespace remapping is in place. Always switch to a non-root user before the final CMD or ENTRYPOINT:

FROM node:20-alpine

# node:alpine images already have a 'node' user (UID 1000)
WORKDIR /app
COPY --chown=node:node package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --chown=node:node . .

# Switch to the non-root user before CMD (USER does not support inline comments)
USER node
CMD ["node", "server.js"]

For images that don't have a built-in non-root user, create one:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
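Note that addgroup and adduser with -S are BusyBox (Alpine) commands. On Debian-based images the equivalent is:

```dockerfile
# Debian/Ubuntu: create a system group and user with no home directory
RUN groupadd --system appgroup \
    && useradd --system --gid appgroup --no-create-home appuser
USER appuser
```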

7. Use COPY, Not ADD

ADD has two special behaviors: it auto-extracts tar archives and it can fetch URLs. Both are footguns that lead to unexpected behavior. Use COPY for copying local files - it does exactly what it says:

# Bad - implicit tar extraction is surprising
ADD app.tar.gz /app/

# Good - explicit extraction
COPY app.tar.gz /tmp/
RUN tar -xzf /tmp/app.tar.gz -C /app/ && rm /tmp/app.tar.gz

The only valid use of ADD is the remote URL fetch, and even that is better done with RUN curl so you can verify checksums in the same layer.
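A sketch of that pattern - the URL and checksum are placeholders - downloading, verifying, and unpacking in a single layer:

```dockerfile
# Fail the build if the archive does not match the expected checksum.
# sha256sum -c requires two spaces between the hash and the filename.
RUN curl -fsSL -o /tmp/app.tar.gz https://example.com/releases/app.tar.gz \
    && echo "<expected-sha256>  /tmp/app.tar.gz" | sha256sum -c - \
    && tar -xzf /tmp/app.tar.gz -C /app \
    && rm /tmp/app.tar.gz
```

Recent BuildKit Dockerfile syntax also offers an ADD --checksum flag, if you prefer to stay with ADD.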

8. Clean Up in the Same RUN Layer

Each RUN instruction creates a new layer. If you install packages in one RUN and delete the cache in the next, the cache is still present in the intermediate layer and contributes to image size. Combine installation and cleanup in a single RUN:

# Bad - cache is in layer 1, deletion in layer 2, total size unchanged
RUN apt-get update && apt-get install -y curl wget
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Good - single layer, no wasted space
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       curl \
       wget \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

The --no-install-recommends flag prevents apt from pulling in dozens of suggested packages you did not ask for.

9. Add HEALTHCHECK Instructions

A HEALTHCHECK lets Docker (and Docker Compose and Swarm) know whether your container is actually healthy, not just running. Note that Kubernetes ignores the Dockerfile HEALTHCHECK and uses its own liveness and readiness probes, so define those separately in your manifests:

FROM node:20-alpine
WORKDIR /app
# ... (build steps)

HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

USER node
CMD ["node", "server.js"]

Without a health check, Docker considers a container healthy the moment the process starts, even if the application inside is deadlocked or stuck restarting. Health checks also enable proper depends_on: condition: service_healthy in Docker Compose.
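On the Compose side, the health check gates startup ordering. A sketch with hypothetical api and db services:

```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  api:
    build: .
    depends_on:
      db:
        # api only starts once db's health check passes
        condition: service_healthy
```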

10. Use Build Arguments and ARG for Dynamic Values

ARG lets you inject build-time values - versions, labels, environment defaults - without hardcoding them into the Dockerfile:

# Accept build-time arguments
ARG NODE_ENV=production
ARG APP_VERSION=unknown

# Convert ARG to ENV if needed at runtime
ENV NODE_ENV=${NODE_ENV}

# Label images with metadata
LABEL org.opencontainers.image.version="${APP_VERSION}" \
      org.opencontainers.image.source="https://github.com/myorg/myapp" \
      org.opencontainers.image.created="2026-03-26"

Important: ARG values passed at build time are visible in docker history. Never pass secrets (API keys, passwords) as build arguments. Use Docker secrets or environment variables at runtime instead.
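If a build step genuinely needs a secret (say, a private npm registry token), BuildKit secret mounts expose it to a single RUN without writing it into any layer or into docker history. A sketch, assuming a secret id of npm_token and a .npmrc that references ${NPM_TOKEN}:

```dockerfile
# syntax=docker/dockerfile:1
# The secret is mounted at /run/secrets/<id> only for this RUN step
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Built with: docker build --secret id=npm_token,src=./npm_token.txt .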

A Complete Production-Ready Node.js Dockerfile

Here is everything above combined into a single production-quality Dockerfile:

# ---- Base ----
FROM node:20.11-alpine3.19 AS base
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app

# ---- Dependencies ----
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci

# ---- Build ----
FROM deps AS builder
COPY . .
RUN npm run build

# ---- Production ----
FROM base AS production
ENV NODE_ENV=production

# Copy only production deps and compiled output
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder --chown=app:app /app/dist ./dist

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

USER app
CMD ["node", "dist/server.js"]

Scanning for Vulnerabilities

Even with a minimal base image, known CVEs can sneak in through dependencies. Scan images as part of your CI pipeline:

# Docker Scout (built into Docker Desktop and CLI)
docker scout cves myapp:latest

# Trivy (open source, widely used in CI)
trivy image myapp:latest

# In GitHub Actions
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    severity: 'CRITICAL,HIGH'
    exit-code: '1'

FAQ

How much smaller is Alpine compared to a full Debian image?

A typical node:20 (Debian-based) image is around 1.1GB. node:20-slim is about 240MB. node:20-alpine is about 130MB. A multi-stage build with an Alpine production image can get a Node.js app down to 50–80MB depending on dependencies. For Go apps compiled to a static binary, scratch images under 20MB are common.

When should I NOT use Alpine?

Alpine uses musl libc instead of glibc. Some native Node.js modules, Python C extensions, and pre-compiled binaries that link against glibc will fail on Alpine. Common problem packages include sharp, grpc, and anything using node-gyp with glibc symbols. In those cases, use a Debian slim image instead. If you encounter Error loading shared library libstdc++.so.6 or similar, switch to node:20-slim.

Can I cache npm install across builds in CI?

Yes. When using BuildKit (enabled by default in Docker 23+), you can use --mount=type=cache to cache the npm cache directory across builds without creating a layer for it:

RUN --mount=type=cache,target=/root/.npm \
    npm ci

In CI systems like GitHub Actions, use the built-in cache action to cache the Docker layer or use registry-based caching with --cache-from and --cache-to.
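Registry-based caching looks like this - a sketch, where the ghcr.io ref is a placeholder for your own registry:

```shell
# Push a dedicated cache ref alongside the image; mode=max caches all stages
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/myorg/myapp:buildcache \
  --cache-to type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max \
  -t ghcr.io/myorg/myapp:latest \
  --push .
```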

Is it safe to store secrets in a Dockerfile ENV instruction?

No. Environment variables set with ENV are visible in docker inspect, docker history, and the image manifest. Anyone with pull access to the image can read them. Pass secrets at runtime via environment variables, Docker secrets (Swarm), Kubernetes Secrets, or a secrets manager. If a secret was accidentally baked into an image, rotate it immediately and rebuild without it - you cannot remove it from an existing image layer.

What is the difference between CMD and ENTRYPOINT?

ENTRYPOINT defines the executable that always runs. CMD provides default arguments to the entrypoint that can be overridden. The common pattern is: ENTRYPOINT ["/app/server"] with CMD ["--port", "3000"]. Using exec form (["node", "server.js"]) instead of shell form (node server.js) means your process is PID 1 directly, so it receives signals like SIGTERM correctly for graceful shutdown.
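A minimal sketch of that pattern, assuming a hypothetical server binary that takes a --port flag:

```dockerfile
ENTRYPOINT ["/app/server"]
CMD ["--port", "3000"]

# docker run myimage              runs: /app/server --port 3000
# docker run myimage --port 8080  runs: /app/server --port 8080
```

Arguments after the image name on docker run replace CMD but leave ENTRYPOINT in place, which is what makes this pairing useful for configurable defaults.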

The Bottom Line

A production Dockerfile is not just instructions for building an image - it is a security boundary, a performance contract, and documentation of your runtime environment. Start with a pinned minimal base, order layers for cache efficiency, use multi-stage builds to strip build artifacts, run as a non-root user, and scan for CVEs in CI. These practices together can reduce image size by 10x and eliminate entire classes of security vulnerabilities.

Use our free Docker to Compose Converter to wire your optimized image into a production-ready multi-container stack.

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.