Docker Container Not Starting: Every Error and How to Fix It
You run docker run or docker-compose up and nothing happens. The container exits immediately, throws a cryptic error, or just sits there doing nothing. This guide walks through every common reason a Docker container refuses to start, with the exact commands to diagnose and fix each one.
TL;DR: Run docker logs <container> to see the error. Run docker inspect <container> to check the exit code. The most common causes are a bad entrypoint, missing environment variables, and port conflicts. If you just need a shell inside the image to poke around, use docker run -it --entrypoint /bin/sh <image>.
Step 1: Read the Logs First
Before you start guessing, let Docker tell you what went wrong. The very first thing you should do when a container will not start is check its logs. This sounds obvious, but you would be surprised how many people skip this step and go straight to Stack Overflow.
# Show all logs from a container (even if it already exited)
docker logs my-container
# Show only the last 50 lines
docker logs --tail 50 my-container
# Follow logs in real time (like tail -f)
docker logs -f my-container
# Show logs with timestamps
docker logs -t my-container
If the container exited so fast that there are no logs, that itself is a clue. It usually means the main process crashed immediately or the entrypoint command was not found. We will get to that in a minute.
If you do not know the container name, run docker ps -a to see all containers, including stopped ones. The -a flag is important because docker ps by default only shows running containers, and yours is not running.
# List all containers including stopped ones
docker ps -a
# Filter by status
docker ps -a --filter "status=exited"
Step 2: Check the Exit Code
Every container that stops produces an exit code. This number tells you a lot about what happened. You can see it in docker ps -a output or get it directly with docker inspect.
# Get the exit code
docker inspect --format='{{.State.ExitCode}}' my-container
# Get the full state including OOMKilled flag
docker inspect --format='{{json .State}}' my-container | python3 -m json.tool
Here is what each exit code means:
- Exit code 0 means the process finished successfully. If your container exits with 0 but you expected it to keep running, the problem is that your main process is not a long-running daemon. More on that in Step 3.
- Exit code 1 is a generic application error. Your app crashed, threw an unhandled exception, or returned a failure status. Check the logs for the actual error message.
- Exit code 126 means the command was found but is not executable. This is usually a permission issue on the entrypoint script: you probably forgot to run chmod +x on your entrypoint file.
- Exit code 127 means the command was not found at all. The binary you specified in CMD or ENTRYPOINT does not exist inside the container. Double-check the path and make sure the binary is installed in your image.
- Exit code 137 means the process was killed externally, almost always by the OOM killer because the container ran out of memory. This is one of the most common issues and gets its own section below.
- Exit code 139 means the process hit a segmentation fault. This is a bug in the application itself, not a Docker configuration problem. Check that you are running the right architecture (amd64 vs arm64) and that your base image matches your host.
- Exit code 143 means the process received SIGTERM and shut down gracefully. This happens during docker stop and is usually normal behavior, not an error.
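A useful rule of thumb: any exit code above 128 means the process died from a signal, and subtracting 128 gives the signal number. Here is a small helper (hypothetical, not part of Docker) that decodes a code into a human-readable cause:

```shell
# decode_exit: hypothetical helper that maps a container exit code to its cause
decode_exit() {
  code=$1
  if [ "$code" -gt 128 ]; then
    # exit codes above 128 encode the fatal signal as (code - 128)
    echo "killed by SIG$(kill -l $((code - 128)))"
  else
    echo "exited with status $code"
  fi
}

decode_exit 137   # killed by SIGKILL
decode_exit 143   # killed by SIGTERM
```

Feed it the number from docker inspect and you get the signal name instead of a bare code.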
Step 3: Container Exits Immediately
This is probably the number one issue people hit with Docker. You run the container and it instantly exits with code 0. No errors. No logs. Just... gone.
The reason is simple: Docker containers are not virtual machines. A container runs exactly one process, and when that process exits, the container stops. If your CMD or ENTRYPOINT runs something that finishes immediately, the container has nothing left to do.
Here are the most common versions of this problem:
Your process is running in the background
If you are starting a service with something like service nginx start in your entrypoint, the service forks into the background and the shell script exits. Docker sees PID 1 exit and shuts down the container.
# Wrong - nginx daemonizes and the container exits
CMD ["service", "nginx", "start"]
# Right - run nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]
Your shell script exits after setup
If your entrypoint is a shell script that does some setup and then finishes, you need to exec into the actual application at the end.
#!/bin/sh
# entrypoint.sh
# Do setup work
echo "Initializing..."
envsubst < /etc/nginx/conf.d/default.template > /etc/nginx/conf.d/default.conf
# WRONG: script ends here and container exits
# RIGHT: exec replaces the shell with nginx as PID 1
exec nginx -g "daemon off;"
The exec keyword is critical. Without it, nginx runs as a child process of the shell, and signals like SIGTERM go to the shell instead of your app. With exec, nginx becomes PID 1 and receives signals directly.
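You can see the PID difference without Docker at all. In this sketch, the shell prints its own PID and then starts a second shell; without exec the child gets a new PID, while with exec it inherits the parent's PID:

```shell
# Without exec: the inner command runs as a child with its own PID.
# The trailing "true" stops the shell from exec-ing the last command as an optimization.
sh -c 'echo "shell pid: $$"; sh -c "echo child pid: \$\$"; true'

# With exec: the inner command replaces the shell and keeps its PID
sh -c 'echo "shell pid: $$"; exec sh -c "echo exec pid: \$\$"'
```

In the first case the two PIDs differ; in the second they are identical, which is exactly what you want for PID 1 in a container.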
You used the wrong form of CMD
There are two forms of CMD in a Dockerfile: shell form and exec form. They behave differently.
# Shell form - runs inside /bin/sh -c "..."
CMD node server.js
# Exec form - runs the binary directly (preferred)
CMD ["node", "server.js"]
Shell form can cause unexpected behavior because your process runs as a child of /bin/sh. If you are having signal handling issues or the container does not stop gracefully, switch to exec form.
Step 4: OOMKilled (Exit Code 137)
If your container exits with code 137, it was killed because it used more memory than it was allowed. Docker calls this "OOMKilled" and you can confirm it like this:
# Check if OOMKilled is true
docker inspect --format='{{.State.OOMKilled}}' my-container
If that returns true, your container ran out of its memory budget. Here is how to fix it:
Check current memory usage
# See real-time memory usage of running containers
docker stats
# See memory limit for a specific container
docker inspect --format='{{.HostConfig.Memory}}' my-container
If the memory limit shows 0, that means there is no limit set, and the container can use all available host memory. If it still got OOMKilled, your host itself ran out of memory.
Increase the memory limit
# Set a 2GB memory limit
docker run --memory=2g my-image
# In docker-compose.yml
services:
  myapp:
    image: my-image
    deploy:
      resources:
        limits:
          memory: 2g
For Java applications, remember that the heap is only part of the JVM's footprint: metaspace, thread stacks, and direct buffers all consume memory on top of it. A 512MB container with a 512MB JVM heap will absolutely get OOMKilled. Set the container memory to at least 1.5x your JVM heap size, or better yet, use -XX:MaxRAMPercentage=75.0 so the JVM automatically sizes its heap to 75% of the container's memory limit.
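Putting those two pieces together, a sketch might look like this (the image name is a placeholder; MaxRAMPercentage needs a JDK 10+ base image, and JAVA_TOOL_OPTIONS is picked up automatically by HotSpot JVMs):

```shell
# Cap the container at 1 GB and let the JVM size its heap to 75% of that
docker run --memory=1g \
  -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=75.0" \
  my-java-image
```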
Step 5: Port Already in Use
You will see an error like this when you try to bind a port that something else is already using:
Error response from daemon: driver failed programming external connectivity:
Bind for 0.0.0.0:8080 failed: port is already allocated
The fix is straightforward. Find what is using the port and either stop it or use a different port.
# Find what is using port 8080
lsof -i :8080
# On Linux you can also use
ss -tlnp | grep 8080
# Check if another Docker container has the port
docker ps --format "table {{.Names}}\t{{.Ports}}" | grep 8080
If another Docker container has the port, you can stop it with docker stop <container>. If a host process is using it, either kill that process or change your Docker port mapping.
# Map to a different host port instead
docker run -p 9090:8080 my-image
# Or let Docker pick a random available port
docker run -P my-image
In Docker Compose, the same applies. Change the host side of the port mapping (the number before the colon).
Step 6: Volume Mount Permission Denied
This one is sneaky. Your container starts but then immediately crashes because it cannot read or write to a mounted volume. The logs will usually show something like Permission denied or EACCES.
The root cause is that the user inside your container does not have permission to access the files on the mounted volume. Docker mounts volumes with the same ownership and permissions as they have on the host. If the host directory is owned by your user (UID 1000) but the container runs as a different user (like nobody or node with UID 1001), it cannot access the files.
Quick fix: match the user IDs
# Run the container as your host user
docker run -u $(id -u):$(id -g) -v $(pwd)/data:/app/data my-image
# In docker-compose.yml
services:
  myapp:
    image: my-image
    user: "1000:1000"
    volumes:
      - ./data:/app/data
Better fix: set permissions in the Dockerfile
FROM node:20-alpine
# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Create the data directory and set ownership
RUN mkdir -p /app/data && chown -R appuser:appgroup /app/data
WORKDIR /app
COPY --chown=appuser:appgroup . .
USER appuser
CMD ["node", "server.js"]
If you are on Linux and using SELinux, you might also need to add the :z or :Z suffix to your volume mount to apply the correct security labels.
# :z for shared volumes, :Z for private
docker run -v $(pwd)/data:/app/data:z my-image
Step 7: Missing Environment Variables
Your application expects certain environment variables to be set and crashes when they are missing. This is extremely common when moving from a local development setup to Docker. Locally you have everything in your shell environment or a .env file, but the container starts with a clean slate.
Check what environment variables your container actually has:
# See all env vars inside a running container
docker exec my-container env
# Or for a stopped container, check the config
docker inspect --format='{{range .Config.Env}}{{println .}}{{end}}' my-container
Pass environment variables correctly
# Pass individual variables
docker run -e DATABASE_URL=postgres://localhost/mydb my-image
# Pass from a .env file
docker run --env-file .env my-image
# In docker-compose.yml
services:
  myapp:
    image: my-image
    env_file:
      - .env
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://db:5432/mydb
A common gotcha: if your .env file has quotes around the values, Docker will include the quotes as part of the value. So DATABASE_URL="postgres://..." in a .env file will set the variable to "postgres://..." (with the literal quote characters). Remove the quotes in your .env file when using it with Docker.
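If you have many quoted values, a one-line sed can strip them in bulk. This is a sketch that assumes simple KEY="value" lines with no embedded quotes (note: sed -i works like this on GNU sed; on macOS use sed -i ''):

```shell
# Create a sample .env with one quoted and one unquoted value
printf 'DATABASE_URL="postgres://db:5432/mydb"\nNODE_ENV=production\n' > .env

# Strip surrounding double quotes from values; unquoted lines are left alone
sed -i 's/^\([A-Za-z_][A-Za-z0-9_]*\)="\(.*\)"$/\1=\2/' .env

cat .env
# DATABASE_URL=postgres://db:5432/mydb
# NODE_ENV=production
```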
Step 8: Image Not Found or Pull Failed
Sometimes the container will not start because Docker cannot find or pull the image in the first place.
# Common errors
Error response from daemon: manifest for myapp:latest not found
Error response from daemon: pull access denied for myapp, repository does not exist
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized
Check the image name and tag
# List local images
docker images | grep myapp
# Pull explicitly to see the error
docker pull myapp:latest
# Check if the tag exists (Docker Hub)
docker manifest inspect myapp:v1.2.3
Authenticate to private registries
# Docker Hub
docker login
# AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
# GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
If you are using Docker Compose and the image is built locally, make sure you have a build section in your service definition or that you have built the image with docker-compose build before running docker-compose up.
Pro Tip: If nothing else works and you just need to get inside the container to look around, override the entrypoint. This skips your app entirely and gives you a shell.
# Get an interactive shell without running the app
docker run -it --entrypoint /bin/sh my-image
# If the image ships bash instead of (or alongside) sh, use that
docker run -it --entrypoint /bin/bash my-image
# Alpine-based images always include /bin/sh (via BusyBox)
docker run -it --entrypoint /bin/sh my-alpine-image
Once inside, you can check if files exist, test commands, verify permissions, and inspect environment variables. This is the single most useful debugging technique for Docker containers that refuse to start. The one place it does not work is distroless and scratch images, which ship no shell at all; some distroless images (for example Google's) publish :debug variants that include a BusyBox shell for exactly this purpose.
Exit Code Reference Table
Here is a quick reference for every common Docker exit code, what it means, and how to fix it.
| Exit Code | Signal | Meaning | Fix |
|---|---|---|---|
| 0 | None | Process exited normally | Make sure CMD runs a long-lived foreground process |
| 1 | None | Application error | Check docker logs for the actual error message |
| 2 | None | Misuse of shell command / invalid argument | Verify CMD/ENTRYPOINT syntax and arguments |
| 126 | None | Command not executable | Run chmod +x on your entrypoint script |
| 127 | None | Command not found | Verify the binary path exists inside the image |
| 137 | SIGKILL (9) | Container killed (usually OOMKilled) | Increase --memory limit or optimize memory usage |
| 139 | SIGSEGV (11) | Segmentation fault | Check architecture mismatch (amd64 vs arm64) or app bug |
| 143 | SIGTERM (15) | Graceful shutdown | Normal. Container received docker stop |
Docker Compose Specific Issues
If you are using Docker Compose, there are a few extra problems you can run into that do not apply to standalone docker run.
depends_on does not wait for "ready"
This is a classic trap. You have a web app that depends on a database, and you add depends_on expecting Docker Compose to wait until the database is ready to accept connections. It does not work that way.
depends_on only waits for the container to start. It does not wait for the application inside the container to be ready. Your database container might be "started" but MySQL could still be initializing its data directory for another 30 seconds.
# This does NOT wait for the database to be ready
services:
  web:
    image: my-web-app
    depends_on:
      - db
  db:
    image: mysql:8

# This DOES wait for the database to be healthy (Compose v2)
services:
  web:
    image: my-web-app
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 30s
If you are stuck on an older Compose version that does not support health conditions in depends_on, use a wait script like wait-for-it or wait-for in your entrypoint.
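A minimal version of that wait loop can be sketched with bash's built-in /dev/tcp, so it works even in images without nc (this assumes bash is installed in the image; the host and port in the usage comment are placeholders):

```shell
# wait_for: poll host:port until it accepts a TCP connection, up to 30 seconds
wait_for() {
  host="$1"; port="$2"
  for _ in $(seq 1 30); do
    # /dev/tcp is a bash feature, not a real file; the probe runs in bash
    # so the surrounding script can stay POSIX sh
    bash -c "exec 3<>'/dev/tcp/$host/$port'" 2>/dev/null && return 0
    sleep 1
  done
  echo "timed out waiting for $host:$port" >&2
  return 1
}

# In an entrypoint you would then hand off to the real app:
# wait_for db 5432 && exec node server.js
```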
Network issues between services
In Docker Compose, services can reach each other by their service name. If your app is trying to connect to localhost:5432 for PostgreSQL, it will not work. You need to use the service name instead.
# Wrong - localhost means the container itself
DATABASE_URL=postgres://user:pass@localhost:5432/mydb
# Right - use the service name
DATABASE_URL=postgres://user:pass@db:5432/mydb
Stale containers from previous runs
Sometimes old containers from a previous docker-compose up linger around and cause conflicts. If you are seeing weird behavior, do a clean teardown.
# Remove everything including volumes
docker-compose down -v
# Remove everything including orphan containers
docker-compose down --remove-orphans
# Rebuild images from scratch
docker-compose build --no-cache
docker-compose up
Six Common Mistakes That Prevent Containers From Starting
- Using the latest tag in production. The latest tag is mutable; the image it points to can change at any time. Pin your images to specific versions or SHA digests so you get reproducible builds.
- Forgetting to expose ports in the Dockerfile. EXPOSE in a Dockerfile is documentation, not configuration. You still need -p or port mappings in Compose to actually publish the port. But if you skip EXPOSE and also forget -p, nothing will connect.
- Copying the wrong files into the image. If your .dockerignore is too aggressive, it might exclude config files your app needs. If it is too loose, it might include node_modules or .env files that cause conflicts. Review your .dockerignore when builds fail. Read our guide on Docker build cache invalidation for more on optimizing builds.
- Running as root when the app expects a non-root user. Some applications (like Elasticsearch) refuse to start as root. Others create files as root that non-root processes later cannot read. Always define a USER in your Dockerfile.
- Hardcoding file paths that differ between host and container. Your app might work locally at /Users/you/project/config.json, but that path does not exist inside the container. Use environment variables or relative paths.
- Using a distroless or scratch base image without understanding the tradeoffs. These minimal images do not have a shell, package manager, or common utilities. If your entrypoint script expects /bin/sh and the image does not have it, you get exit code 127 with no useful error message.
Frequently Asked Questions
Why does my Docker container exit immediately after starting?
Your container's CMD or ENTRYPOINT is running a process that finishes instantly, like a shell script without a long-running command. Docker containers only stay alive while their main process (PID 1) is running. Fix it by running your application in the foreground instead of as a background daemon, or use a command like tail -f /dev/null for debugging.
What does Docker exit code 137 mean and how do I fix it?
Exit code 137 means your container was killed by the system, almost always because it ran out of memory (OOMKilled). Check with docker inspect to confirm OOMKilled is true. Fix it by increasing the memory limit with the --memory flag, optimizing your application's memory usage, or setting appropriate JVM heap sizes for Java apps.
How do I debug a Docker container that will not start at all?
Override the entrypoint to get a shell inside the container: docker run -it --entrypoint /bin/sh your-image. This skips the normal startup command and drops you into an interactive shell where you can inspect files, check permissions, test commands, and figure out what is going wrong. You can also use docker logs and docker inspect on failed containers to see error output and exit codes.
Wrapping Up
Docker container startup failures boil down to a handful of root causes: bad entrypoints, missing environment variables, port conflicts, memory limits, permission issues, and image problems. The debugging workflow is always the same: check the logs, check the exit code, and if all else fails, get a shell inside the container with --entrypoint /bin/sh.
Once you get comfortable with this workflow, you will spend minutes instead of hours fixing startup failures. The exit code table above covers 95% of the cases you will ever see. Bookmark this page and come back to it next time a container refuses to cooperate.
Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.