
Docker Networking Explained: Bridge, Host and Overlay (2026)

Docker networking is one of those topics that works fine until it doesn't - and when it breaks, it breaks in confusing ways. This guide covers the main network drivers, how DNS resolution works between containers, port mapping pitfalls, and how to structure networks in Docker Compose.

Why Docker Networking Confuses Developers

You spin up two containers and wonder why they can't talk to each other. You publish a port and it still isn't reachable. Or you deploy to a Linux server and behavior differs from your Mac. These are all networking problems, and they stem from a single source: Docker containers each get their own isolated network stack by default, and the rules for how they communicate depend entirely on which network driver you choose.

Docker ships several network drivers out of the box; the five you will encounter most are bridge, host, overlay, macvlan, and none (a sixth, ipvlan, is more niche and not covered here). Each solves a different problem. Understanding when to use each one is the foundation of building reliable, secure containerized applications.

The Five Network Drivers

1. Bridge (Default)

The bridge driver is what Docker uses when you run a container without specifying a network. Docker creates a virtual Ethernet bridge called docker0 on the host. Each container gets a virtual network interface connected to that bridge, and they communicate through it.

# Inspect the default bridge
docker network inspect bridge

# Run a container on the default bridge
docker run -d --name nginx nginx

# Containers on the default bridge can only reach each other by IP, not name
# This is the key limitation of the DEFAULT bridge vs. user-defined bridges

The important distinction: the default bridge network (docker0) does not support DNS-based container discovery. Containers must communicate by IP address. User-defined bridge networks do support DNS - containers find each other by name. This is why you should always create a custom network rather than using the default.

# Create a user-defined bridge network
docker network create myapp-net

# Now containers on myapp-net can reach each other by container name
docker run -d --name api --network myapp-net myapi:latest
docker run -d --name db --network myapp-net postgres:16

# Inside the api container, "db" resolves via Docker's embedded DNS
# nc -z db 5432 -- succeeds (Postgres speaks its own wire protocol,
# so test connectivity with nc or psql rather than curl)

2. Host

The host driver removes the network isolation between the container and the Docker host. The container shares the host's network stack directly - no NAT, no port mapping, no virtual interface. Whatever the host's IP is, the container uses it.

# Container binds directly to host ports -- no -p flag needed
docker run --network host nginx

# Nginx now listens on :80 of the host directly

This gives the best possible network performance because there is no NAT overhead. It is commonly used for high-throughput applications, network monitoring tools, and services that need to listen on multiple ports dynamically. The trade-off is that you lose isolation: a process inside the container can reach anything on the host network, and port conflicts become your problem to manage.

Important: The host network driver only works on Linux. On macOS and Windows, Docker runs inside a VM, so --network host attaches to the VM's network, not your Mac's network. This is a common source of confusion in local development.

3. Overlay

The overlay driver enables containers running on different Docker hosts (different machines) to communicate as if they were on the same network. It is the networking layer for Docker Swarm and can also be used with standalone containers when you need multi-host communication.

# Initialize Swarm mode first
docker swarm init

# Create an overlay network
docker network create --driver overlay myswarm-net

# Services attached to this network can communicate across nodes
docker service create --name api --network myswarm-net myapi:latest
docker service create --name cache --network myswarm-net redis:7

Overlay networks use VXLAN (Virtual Extensible LAN) encapsulation to tunnel traffic between hosts. Each overlay network gets its own subnet, and Docker handles the routing automatically. In a Kubernetes context, the overlay concept is handled by CNI plugins like Flannel or Calico.
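If overlay traffic crosses a network you do not fully trust, Docker can encrypt the VXLAN data plane with IPsec via the `--opt encrypted` flag. A sketch (the network name `secure-net` and the reuse of `myapi:latest` are illustrative; encryption adds CPU overhead):

```shell
# Create an overlay network with IPsec-encrypted data-plane traffic
docker network create \
  --driver overlay \
  --opt encrypted \
  --attachable \
  secure-net

# --attachable lets standalone containers (not just Swarm services) join
docker run -d --name worker --network secure-net myapi:latest
```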

4. Macvlan

The macvlan driver assigns a real MAC address to each container, making it appear as a physical device on your network. The container gets an IP on your LAN's subnet - assigned by Docker's IPAM from the range you configure, or set statically with --ip - bypassing NAT entirely. Note that Docker does not query your LAN's DHCP server for these addresses, so reserve the range to avoid conflicts with other devices.

# Create a macvlan network mapped to your physical interface
docker network create \
  --driver macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan-net

# Container gets a real LAN IP
docker run --network macvlan-net --ip 192.168.1.100 nginx

Macvlan is useful for legacy applications that need to be on the same broadcast domain as physical servers, or for network appliances that need direct Layer 2 access. Most containerized web applications have no reason to use it.

5. None

The none driver gives the container a loopback interface only. No external connectivity at all. Use it for batch processing jobs that read from mounted volumes and write output back - anything that has no reason to touch a network.

docker run --network none \
  -v $(pwd)/data:/data \
  python:3.12 python /data/process.py

Port Mapping: How -p Actually Works

Port mapping (-p host_port:container_port) tells Docker to set up a NAT rule using iptables on Linux. Traffic arriving at host_port gets forwarded to the container's container_port. This is how bridge-networked containers become reachable from outside Docker.

# Map host port 8080 to container port 80
docker run -p 8080:80 nginx

# Bind to a specific host interface only (more secure)
docker run -p 127.0.0.1:8080:80 nginx

# Bind to all interfaces on a random host port
docker run -p 80 nginx
# Then: docker port <container> 80 -- to see what port was assigned

# UDP port mapping
docker run -p 5353:5353/udp dns-server

A common security mistake is binding to 0.0.0.0 (all interfaces) when you only need local access. If your database container is bound to 0.0.0.0:5432, it is reachable from any network interface - including public ones if your firewall is misconfigured. Always use 127.0.0.1 for services that only need to be accessed locally, and rely on Docker networks for container-to-container communication instead of port mapping.
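The same principle applies in Compose: the host side of a ports: entry accepts an interface address. A minimal sketch (service layout is illustrative):

```yaml
services:
  db:
    image: postgres:16
    ports:
      # Reachable only via the host's loopback interface, not the LAN
      - "127.0.0.1:5432:5432"
  web:
    image: nginx:alpine
    ports:
      # Public-facing service: bound to all interfaces deliberately
      - "0.0.0.0:80:80"
```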

DNS Resolution Between Containers

On user-defined bridge networks, Docker runs an embedded DNS server at 127.0.0.11 inside each container. When container A does a DNS lookup for db, Docker resolves it to the IP address of the container named db on the same network. This works for container names and network aliases.

# Add a network alias -- useful for service discovery
docker run -d \
  --network myapp-net \
  --network-alias postgres \
  --name primary-db \
  postgres:16

# Another container can now reach it as either "primary-db" or "postgres"

This embedded DNS is also what makes Docker Compose work so smoothly: every service name in your Compose file becomes a DNS hostname automatically.

Step-by-Step: Networking in Docker Compose

Docker Compose creates a default network for each project automatically. Every service in the Compose file joins that network and can reach other services by service name. Here is how to structure networks for a multi-tier application:

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - frontend

  api:
    image: myapi:latest
    networks:
      - frontend
      - backend

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    networks:
      - backend
    # No ports: -- db is NOT reachable from outside Docker

networks:
  frontend:
  backend:

In this setup, nginx can reach api (they share frontend). api can reach db (they share backend). But nginx cannot reach db directly - it is on a separate network. The database is not exposed to the host at all. This is the correct layered security model for containerized applications.
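You can check the isolation from the running stack. Assuming the Compose file above is up and the images include ping (nginx:alpine ships BusyBox; the api image may need a debug shell), something like:

```shell
# nginx and db share no network -- "db" should not even resolve
docker compose exec nginx ping -c 1 db   # fails: name does not resolve

# api sits on both networks, so both neighbours are reachable by name
docker compose exec api ping -c 1 db
docker compose exec api ping -c 1 nginx
```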


Inspecting and Debugging Networks

When containers cannot communicate, these commands diagnose the problem:

# List all networks
docker network ls

# Inspect a network (shows connected containers and their IPs)
docker network inspect myapp-net

# See which networks a container is connected to
docker inspect my-container | jq '.[0].NetworkSettings.Networks'

# Test DNS resolution from inside a container
# (nslookup and ping must exist in the image; use a debug image if not)
docker exec -it api-container nslookup db
docker exec -it api-container ping -c 3 db

# Check iptables rules Docker has created (Linux hosts only)
sudo iptables -t nat -L -n -v | grep DOCKER

Connect and Disconnect Containers at Runtime

You do not have to stop a container to change its network membership. This is useful for debugging or temporarily granting access:

# Connect a running container to an additional network
docker network connect myapp-net existing-container

# Disconnect a container from a network (without stopping it)
docker network disconnect myapp-net existing-container

# Remove a network (all containers must be disconnected first)
docker network rm myapp-net

# Remove all unused networks
docker network prune

Frequently Asked Questions

Why can two containers not communicate even though they are on the same host?

They are almost certainly on different networks. If you ran both with docker run and neither specified a --network flag, they are both on the default bridge - but the default bridge does not support DNS, and containers cannot reach each other by name. Create a user-defined bridge network with docker network create and attach both containers to it.

What is the difference between the default bridge and a user-defined bridge?

Two main differences: DNS resolution and isolation. User-defined bridges support automatic DNS so containers find each other by name. They also provide better isolation - containers on different user-defined bridges cannot communicate at all, whereas all containers on the default bridge can reach each other by IP.

Why does --network host not work on my Mac?

On macOS and Windows, Docker Desktop runs the engine inside a lightweight Linux VM (Apple's Virtualization framework on macOS, WSL 2 on Windows). The host network attaches to that VM's network, not your Mac's network. So the container cannot see your Mac's localhost, and services bound to the container's ports are not reachable at localhost on your Mac. Use port mapping (-p) instead for local development on non-Linux hosts.

How do I expose a container port only to other containers, not to the host?

Do not use the -p (or ports: in Compose) directive at all. When you omit port mapping, the port is still accessible to other containers on the same Docker network - it is just not published to the host. Use expose: in Compose to document which ports a service listens on without actually publishing them.
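A minimal Compose sketch of that pattern (the images and port number are illustrative):

```yaml
services:
  api:
    image: myapi:latest
    # Documentation only: expose publishes nothing to the host
    expose:
      - "3000"
    networks:
      - backend

  worker:
    image: myworker:latest   # hypothetical consumer image
    networks:
      - backend
    # worker can reach api:3000 over the backend network; the host cannot

networks:
  backend:
```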

Can containers on different Compose projects talk to each other?

Not by default. Each Compose project creates its own isolated network. To connect them, create an external network manually with docker network create shared-net and declare it as external: true in both Compose files. Then attach the relevant services to that shared network.
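A sketch of the shared-network pattern, assuming the network was created first with `docker network create shared-net` (service names are illustrative):

```yaml
# docker-compose.yml in EACH project that should join the shared network
services:
  api:
    image: myapi:latest
    networks:
      - shared-net

networks:
  shared-net:
    external: true   # Compose attaches to the existing network, never creates it
```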

What happens to network traffic when I use an overlay network?

Overlay networks encapsulate container-to-container traffic in VXLAN UDP packets (port 4789) before sending them between Docker hosts. The sending host wraps the original packet in a VXLAN frame, routes it to the destination host's IP, and the receiving host unwraps it before delivering to the target container. This means your firewall must allow UDP/4789 between Swarm nodes.
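In practice Swarm needs a few more ports open between nodes besides UDP/4789. A ufw sketch (run on each node; the 10.0.0.0/24 node subnet is an assumption, substitute your own):

```shell
# Cluster management traffic (to manager nodes)
sudo ufw allow from 10.0.0.0/24 to any port 2377 proto tcp
# Node-to-node gossip and discovery (TCP and UDP)
sudo ufw allow from 10.0.0.0/24 to any port 7946
# VXLAN overlay data plane
sudo ufw allow from 10.0.0.0/24 to any port 4789 proto udp
```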

The Right Network Architecture

For most applications, the correct pattern is: one user-defined bridge network per application in development, and a segmented multi-network topology in production (frontend network for ingress, backend network for services, data network for databases). Never put database containers on a network that is shared with publicly accessible services unless absolutely necessary.

Use our free Docker to Compose converter to translate your docker run commands - including --network flags - into clean Compose YAML. It handles network declarations automatically.

For more DevOps tools, explore our complete tools collection - all free, all running in your browser.


Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.