Docker Volumes Explained: Persist Data the Right Way (2026)
Containers are ephemeral by design - every file written inside a container disappears when it is removed. But databases, uploads, and logs cannot be ephemeral. Docker volumes solve this, and using them correctly is the difference between a reliable production service and one that silently loses data on every restart.
The Core Problem: Containers Are Stateless
When you run docker run postgres, PostgreSQL initializes its data directory inside the container's writable layer. The moment you run docker rm, every row in every table is gone. This is not a bug - it is a feature. Immutable containers make deployments predictable and reproducible. But you need a mechanism to separate data from the container lifecycle, and that mechanism is volumes.
Docker provides three ways to mount external storage into a container:
- Named volumes - managed by Docker, stored in Docker's storage area
- Bind mounts - map a host directory directly into the container
- tmpfs mounts - in-memory, never written to disk, ephemeral by design
Named Volumes: The Production Default
Named volumes are Docker-managed. Docker stores them under /var/lib/docker/volumes/ on Linux. You refer to them by name, and Docker handles the rest. They survive docker rm, survive container upgrades, and can be shared between multiple containers.
# Create a named volume explicitly
docker volume create pgdata
# Use a named volume with -v
docker run -d \
--name postgres \
-v pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=secret \
postgres:16
# Or let Docker create it implicitly (same result)
docker run -d \
--name postgres \
-v pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=secret \
postgres:16
# Remove container -- pgdata volume survives
docker rm -f postgres
# Restart with the same volume -- data is still there
docker run -d \
--name postgres \
-v pgdata:/var/lib/postgresql/data \
postgres:16
Named volumes have better performance than bind mounts on macOS and Windows because Docker does not have to cross the VM boundary for every file operation. On Linux, performance is equivalent since there is no VM.
Bind Mounts: Local Development Workhorse
Bind mounts map a specific path on the host filesystem into the container. The container sees the host directory's contents directly. Changes inside the container are immediately visible on the host, and vice versa. This makes bind mounts ideal for local development where you want live code reloading.
# Bind mount current directory into container
docker run -d \
--name dev-api \
-v $(pwd):/app \
-w /app \
-p 3000:3000 \
node:20 npm run dev
# The app reloads when you edit files on your host
# Mount a single config file (not a directory)
docker run -d \
--name nginx \
-v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
-p 80:80 \
nginx:alpine
The :ro suffix makes the mount read-only inside the container. Use this for configuration files that the container should read but never modify. It is also a good security practice to mark any bind mount read-only unless the container genuinely needs to write to it.
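The same read-only intent can be expressed in Compose. The long mount syntax makes it explicit (a sketch; the nginx.conf path mirrors the docker run example above):

```yaml
services:
  nginx:
    image: nginx:alpine
    volumes:
      - type: bind
        source: ./nginx.conf          # config file on the host
        target: /etc/nginx/nginx.conf
        read_only: true               # equivalent of the :ro suffix
    ports:
      - "80:80"
```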
Never use bind mounts in production for application data. Bind mounts create a tight coupling between your container and a specific host path. Named volumes are portable, can be backed up consistently, and can be managed with docker volume commands.
tmpfs Mounts: Sensitive Data That Must Not Touch Disk
A tmpfs mount is stored entirely in the host's memory. It is never written to disk, and it disappears when the container stops. Use it for secrets, session data, or any sensitive temporary state that must not be persisted.
# Mount /run/secrets as an in-memory tmpfs
docker run -d \
--name api \
--tmpfs /run/secrets:rw,size=10m,mode=0700 \
myapi:latest
# In Docker Compose
services:
  api:
    image: myapi:latest
    tmpfs:
      - /run/secrets
      - /tmp
Tmpfs is also useful for speeding up applications that create a lot of temporary files. Writing to memory is orders of magnitude faster than writing to disk, and for files that do not need to survive a restart, this is a worthwhile optimization.
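The size option from the docker run example also has a Compose equivalent via the long mount syntax (a sketch, reusing the hypothetical myapi image from above):

```yaml
services:
  api:
    image: myapi:latest
    volumes:
      - type: tmpfs
        target: /run/secrets
        tmpfs:
          size: 10485760   # size limit in bytes (10 MB)
```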
Step-by-Step: Volumes in Docker Compose
In Compose, volumes are declared at two levels: in the service definition (where to mount) and at the top level (what to create). Here is a production-realistic pattern for a web application with a database:
services:
  api:
    image: myapi:latest
    volumes:
      - uploads:/app/uploads      # Named volume for user uploads
      - ./config:/app/config:ro   # Bind mount for config (read-only)
      - /tmp                      # Anonymous volume for temp files
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data  # Named volume for DB data

volumes:
  pgdata:   # Docker manages this
  uploads:  # Docker manages this
The top-level volumes: section declares the named volumes. If you run docker compose down, these volumes survive. Only docker compose down -v removes them. This is an important distinction - many developers are surprised to find their database data gone after docker compose down -v.
Sharing Volumes Between Containers
Multiple containers can mount the same named volume. This is useful for sidecar patterns: a log shipper reading from the same log volume as your application, or an Nginx container serving static files that a build container has populated.
services:
  app:
    image: myapp:latest
    volumes:
      - static:/app/public/static
  nginx:
    image: nginx:alpine
    volumes:
      - static:/usr/share/nginx/html:ro  # Read-only for nginx
    ports:
      - "80:80"

volumes:
  static:
When two containers write to the same volume simultaneously, there is no locking. You are responsible for ensuring that concurrent writes do not corrupt data. Databases handle this internally, but arbitrary file writes need coordination at the application level.
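One way to coordinate plain file writes on a shared volume is an advisory lock file on the mount itself. A minimal sketch in Python, assuming Linux flock semantics (this works across containers on the same host because they share one kernel; function and path names here are illustrative, not from any particular application):

```python
import fcntl
import os

def append_safely(path: str, line: str) -> None:
    """Append a line to a file on a shared volume while holding an
    exclusive advisory lock, so concurrent writers do not interleave."""
    lock_path = path + ".lock"
    with open(lock_path, "w") as lock:
        # Blocks until no other process (in any container on this host)
        # holds the lock on the same file.
        fcntl.flock(lock, fcntl.LOCK_EX)
        try:
            with open(path, "a") as f:
                f.write(line + "\n")
                f.flush()
                os.fsync(f.fileno())  # ensure the data reaches the volume
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

Note that flock does not coordinate reliably across hosts or over NFS-backed volumes; for those cases, move coordination into the application or a database.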
Backup and Restore Named Volumes
Named volumes live inside Docker's storage area, not in a path you can easily cp. The standard approach is to run a temporary container that mounts the volume and tars the contents:
# Backup: tar volume contents to a file on the host
docker run --rm \
-v pgdata:/data \
-v $(pwd):/backup \
alpine \
tar czf /backup/pgdata-backup-$(date +%Y%m%d).tar.gz -C /data .
# Restore: extract backup into a volume
docker run --rm \
-v pgdata:/data \
-v $(pwd):/backup \
alpine \
tar xzf /backup/pgdata-backup-20260326.tar.gz -C /data
# For PostgreSQL specifically, use pg_dump instead of raw file backup
docker exec postgres pg_dump -U postgres mydb > backup.sql
# Restore PostgreSQL
docker exec -i postgres psql -U postgres mydb < backup.sql
Volume Drivers: Extending Beyond Local Storage
Docker's plugin system allows volume drivers that store data on remote systems. The most common production use cases:
- local - the default, stores on the Docker host
- nfs - mount an NFS share as a Docker volume (built into the local driver with options)
- rexray/ebs - AWS EBS volumes, survives instance replacement
- vieux/sshfs - mount a remote directory over SSH
# Create an NFS-backed volume using the local driver
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=192.168.1.10,rw \
--opt device=:/exports/data \
nfs-volume
# EBS volume (requires rexray plugin)
docker plugin install rexray/ebs EBS_REGION=us-east-1
docker volume create --driver rexray/ebs --opt size=20 ebs-vol
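The NFS-backed volume from the docker volume create example can also be declared in Compose under the top-level volumes: key (a sketch; the server address and export path are the same placeholders as above):

```yaml
volumes:
  nfs-volume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,rw
      device: ":/exports/data"
```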
Convert Docker Run to Compose - Including Volumes
Paste any docker run command with -v flags and get a complete docker-compose.yml with volume declarations. Free, runs in your browser.
Inspecting and Managing Volumes
# List all volumes
docker volume ls
# Inspect a volume (shows mount point on host)
docker volume inspect pgdata
# Find which containers are using a volume
docker ps -a --filter volume=pgdata
# Remove a specific volume
docker volume rm pgdata
# Remove all unused volumes (CAREFUL in production)
docker volume prune
# See disk usage by volumes
docker system df -v
Common Mistakes to Avoid
- Anonymous volumes: Using -v /app/data without a name creates an anonymous volume with a random UUID name. These are nearly impossible to track and get left behind when containers are removed. Always give volumes explicit names.
- docker compose down -v in production: This flag deletes all named volumes. Never run it in production unless you intend to wipe the data.
- Mounting host root paths: -v /:/host gives the container full read/write access to the host filesystem. This is a critical security vulnerability. Always scope bind mounts to the specific directory the container needs.
- Relying on the container filesystem for uploads: Files written to the container's writable layer (not a mounted volume) disappear when the container is replaced. Every user upload and every generated file must go to a volume or object storage.
Frequently Asked Questions
What happens to a named volume when I run docker compose down?
Named volumes survive docker compose down. They are only removed when you run docker compose down -v or explicitly delete them with docker volume rm. This is why your database data persists across Compose restarts.
Should I use named volumes or bind mounts for a database in production?
Always use named volumes for databases in production. Bind mounts create host-path coupling and have worse performance on non-Linux hosts. Named volumes are managed by Docker and are easier to back up, inspect, and migrate.
Can I mount the same named volume into two containers at the same time?
Yes. Multiple containers can mount the same volume simultaneously. Both get read/write access by default. For concurrent writes, your application must handle locking. Read-only access (:ro) from a second container is always safe.
How do I move a named volume between Docker hosts?
Use the tar backup method: run a temporary Alpine container to compress the volume contents to a tar file, copy the tar to the new host, then extract it into a new volume on the destination host. There is no native Docker command for volume migration between hosts.
What is the difference between VOLUME in a Dockerfile and -v at runtime?
The VOLUME instruction in a Dockerfile marks a path as needing persistent storage. If you start a container without mounting anything at that path, Docker creates an anonymous volume for it. At runtime, the -v flag overrides it with your named volume or bind mount. Always use explicit -v at runtime - relying on Dockerfile VOLUME creates those hard-to-track anonymous volumes.
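As an illustration, a Dockerfile declaring a volume mount point (the image and path names here are made up):

```dockerfile
FROM alpine:3.19
RUN mkdir -p /app/data
# Declares /app/data as a mount point. Without a -v flag at runtime,
# Docker creates an anonymous volume here when the container starts.
VOLUME /app/data
CMD ["sh", "-c", "date >> /app/data/started.log && sleep infinity"]
```

Running this image with docker run -v mydata:/app/data replaces the would-be anonymous volume with the named volume mydata.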
The Right Approach
Use named volumes for all stateful data in both development and production. Use bind mounts only for local source code during development. Use tmpfs for sensitive temporary state. Document every volume in your Compose file and include backup procedures in your runbook.
Use our free Docker to Compose converter to generate correct volume declarations from your existing docker run commands. Explore our 70+ free developer tools for more DevOps and developer utilities.
Use our free tool here → Docker to Compose Converter
Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.