
Nginx Reverse Proxy Setup: Complete Guide with SSL & Load Balancing

A reverse proxy sits between your clients and your backend services, handling SSL termination, load balancing, caching, and security headers. Nginx is one of the most widely deployed reverse proxies on the internet. This guide covers the most common configuration patterns with copy-paste configs.

Why Use Nginx as a Reverse Proxy?

Running your application directly on port 80 or 443 creates several problems. Your app server has to run as root (or be granted extra capabilities) to bind to privileged ports. TLS certificate management becomes the application's responsibility. Scaling to multiple backend instances requires another layer anyway. And every application has to set its own security headers.

A reverse proxy solves all of these. Nginx listens on ports 80 and 443, handles TLS, forwards requests to your app server on a non-privileged port (such as 3000, 8080, or 5000), and adds security headers consistently. Your application stays simple; the proxy handles the infrastructure concerns.

Nginx's event-driven architecture handles tens of thousands of concurrent connections with minimal memory, making it far more efficient at the proxy layer than application servers like Node.js or Gunicorn.

Step 1: Install Nginx

# Ubuntu / Debian
sudo apt update && sudo apt install -y nginx

# CentOS / RHEL / Amazon Linux
sudo yum install -y nginx

# macOS (for local development)
brew install nginx

# Verify installation and check version
nginx -v

# Start and enable on boot
sudo systemctl enable --now nginx

# Test configuration syntax before reloading
sudo nginx -t
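
Once the service is running, a quick sanity check is to request the default welcome page (this assumes nothing else is already bound to port 80):

# Expect an HTTP 200 and the stock Nginx welcome page
curl -I http://localhost/

# Confirm the service is active and enabled
systemctl status nginx --no-pager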

Step 2: Basic Reverse Proxy with proxy_pass

The simplest reverse proxy configuration forwards all requests to a backend server. Create a new site configuration in /etc/nginx/sites-available/:

server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;

        # Pass the original Host header to the backend
        proxy_set_header Host $host;

        # Pass the real client IP to the backend
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable the site and reload Nginx:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

The four proxy_set_header directives are critical. Without them, your backend sees Nginx's localhost address instead of the real client IP, receives the proxy_pass hostname as the Host header instead of the original domain, and cannot determine whether the original request was HTTP or HTTPS.
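
To confirm forwarding works end to end, compare a request made through the proxy with one made directly against the backend (assuming the backend from this example listens on port 3000):

# Through Nginx (the Host header selects the right server block)
curl -i -H "Host: example.com" http://127.0.0.1/

# Directly against the backend
curl -i http://127.0.0.1:3000/

Both should return the same response, and if your backend logs the X-Forwarded-For header it should now show the real client address rather than 127.0.0.1.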

Step 3: SSL Termination with Let's Encrypt

In production, all traffic should be HTTPS. SSL termination at the proxy means your backend can remain plain HTTP on localhost, keeping it simple. Use Certbot to obtain a free Let's Encrypt certificate:

# Install Certbot
sudo apt install -y certbot python3-certbot-nginx

# Obtain certificate and auto-configure Nginx
sudo certbot --nginx -d example.com -d www.example.com

# Test automatic renewal
sudo certbot renew --dry-run

Certbot modifies your Nginx config automatically. The resulting config looks like this (manually written version for reference):

server {
    listen 80;
    server_name example.com www.example.com;
    # Redirect all HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Increase timeouts for slow backends
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
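
After reloading, verify the redirect and the TLS handshake from the command line (using the example.com domain assumed throughout this guide):

# Expect a 301 redirect to HTTPS, then your app's normal response over HTTP/2
curl -I http://example.com/
curl -I https://example.com/

# Inspect the negotiated protocol and certificate chain
openssl s_client -connect example.com:443 -servername example.com </dev/null | head -n 30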

Step 4: Load Balancing Across Multiple Backends

When you have multiple backend instances (for high availability or scale), use an upstream block to define the pool. Nginx supports round-robin, least-connections, and IP hash balancing strategies.

upstream app_servers {
    # Round-robin by default
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;

    # Least connections (send to server with fewest active connections)
    # least_conn;

    # IP hash (sticky sessions - same client always hits same server)
    # ip_hash;

    # Mark a server as backup (only used when all primaries are down)
    server 127.0.0.1:3004 backup;

    # Reuse idle connections to the backends (see the keepalive sketch below)
    # keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    # ... ssl config ...

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
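
Open-source Nginx does not do active health checks (that is an Nginx Plus feature), but it removes failing servers from rotation passively via max_fails and fail_timeout. A minimal sketch combining that with upstream keepalive, using the same hypothetical backend ports as above:

upstream app_servers {
    least_conn;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3003 max_fails=3 fail_timeout=30s;

    # Reuse idle connections to the backends
    keepalive 32;
}

For keepalive to take effect, the location that proxies to this upstream also needs proxy_http_version 1.1; and proxy_set_header Connection "";.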

Scan Your Site for Free

Our Exposure Checker runs 19 parallel security checks - SSL, headers, exposed paths, DNS, open ports, and more.

Run Free Security Scan

Step 5: WebSocket Proxying

WebSocket connections require the Upgrade and Connection headers (and HTTP/1.1 to the backend) to complete the protocol upgrade. Without them, WebSocket connections will fail silently or fall back to long-polling.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        # WebSocket upgrade headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket connections can be long-lived
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}
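
If only part of your application speaks WebSocket, you can keep the upgrade headers and long timeouts scoped to that path instead of applying them site-wide. A sketch assuming a hypothetical /ws/ endpoint on the same backend:

    # Only the WebSocket endpoint gets the upgrade headers and long timeouts
    location /ws/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;
    }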

Step 6: Proxy Different Paths to Different Services

A common microservices pattern is to route requests to different backend services based on the URL path. For example, /api/ goes to an API server and everything else goes to a frontend.

server {
    listen 443 ssl http2;
    server_name example.com;

    # Route API requests to the API server
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Route everything else to the frontend
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Note the trailing slash on proxy_pass http://127.0.0.1:8080/; when the location also has a trailing slash. This strips the /api/ prefix before forwarding, so your API server receives /users instead of /api/users. Omit the trailing slash to preserve the prefix.
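
A concrete example of the difference, for a request to /api/users:

# With a trailing slash: the /api/ prefix is replaced, backend receives /users
location /api/ {
    proxy_pass http://127.0.0.1:8080/;
}

# Without a trailing slash: the URI is passed unchanged, backend receives /api/users
location /api/ {
    proxy_pass http://127.0.0.1:8080;
}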

Step 7: Security Headers

Adding security headers at the proxy level means they apply to all responses regardless of what the backend sends. Place these in a separate file and include it in each server block.

# /etc/nginx/snippets/security-headers.conf
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;  # legacy header; modern browsers ignore it
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Include in server block:
# include snippets/security-headers.conf;
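
Note that add_header directives defined inside a location block replace any inherited from the server level, so re-include the snippet in locations that set their own headers. To confirm the headers are actually being sent:

curl -sI https://example.com/ | grep -iE 'strict-transport-security|x-frame-options|x-content-type-options'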

Step 8: Rate Limiting

Rate limiting protects your backend from abuse and brute force attacks. Define zones in the http block (in nginx.conf) and apply them in location blocks.

# In http block (nginx.conf)
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

# In server block
location /api/ {
    limit_req zone=api burst=20 nodelay;
    limit_req_status 429;
    proxy_pass http://127.0.0.1:8080/;
}

location /api/auth/login {
    limit_req zone=login burst=5 nodelay;
    limit_req_status 429;
    proxy_pass http://127.0.0.1:8080/auth/login;
}
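
A quick way to see the limiter working is to fire a burst of requests and count the status codes. A sketch against the login zone above (the probe payload is just a placeholder):

for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{http_code}\n' https://example.com/api/auth/login -d 'probe=1'
done | sort | uniq -c
# Requests beyond the allowed rate and burst should come back as 429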

Troubleshooting Common Issues

502 Bad Gateway

This means Nginx cannot connect to the backend. Check that your backend is running (ss -tlnp | grep 3000), that the port matches your proxy_pass, and that there are no firewall rules blocking the connection. Check /var/log/nginx/error.log for the specific error.
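
A typical diagnosis sequence, assuming the backend port from the earlier examples:

# Is the backend actually listening on the expected port?
ss -tlnp | grep ':3000'

# Watch the proxy's error log while reproducing the failure
sudo tail -f /var/log/nginx/error.log
# Look for "connect() failed (111: Connection refused)" or "upstream timed out"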

Real client IP shows as 127.0.0.1 in logs

Your backend is reading the raw socket address instead of the forwarded header. It needs to read X-Forwarded-For or X-Real-IP (most frameworks have a "trust proxy" or equivalent setting for this). If the backend is itself Nginx, the ngx_http_realip_module can rewrite $remote_addr from the forwarded header: add set_real_ip_from 127.0.0.1; and real_ip_header X-Real-IP; to its server block.

WebSocket connections keep dropping

This is usually a timeout issue. The default proxy_read_timeout is 60 seconds. If the WebSocket connection is idle for longer than that, Nginx closes it. Increase proxy_read_timeout to a value longer than your application's heartbeat interval, or set it to a large value like 3600s for WebSocket locations.


Frequently Asked Questions

What is the difference between a forward proxy and a reverse proxy?

A forward proxy sits in front of clients and forwards their requests to the internet on their behalf (common in corporate networks for content filtering). A reverse proxy sits in front of servers and forwards incoming requests to the appropriate backend. From the client's perspective, they are talking directly to the server. Nginx is almost always used as a reverse proxy.

Should I use proxy_pass with a trailing slash or without?

This is one of the most confusing Nginx behaviours. When you use location /app/ { proxy_pass http://backend/; } (both have trailing slashes), Nginx strips the /app/ prefix and forwards only the remainder. When you use location /app/ { proxy_pass http://backend; } (no trailing slash on proxy_pass), the full path including /app/ is forwarded. Choose based on whether your backend expects the prefix or not.

How do I handle large file uploads through Nginx reverse proxy?

By default, Nginx limits the request body to 1MB. Increase it with client_max_body_size 100M; in your location or server block. For very large uploads, also consider proxy_request_buffering off;, which streams the request body directly to the backend rather than buffering it to disk first.
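
A sketch for an upload endpoint, assuming a hypothetical /upload/ path proxied to the same API backend:

location /upload/ {
    client_max_body_size 100M;
    # Stream the request body to the backend instead of buffering it to disk first
    proxy_request_buffering off;
    proxy_pass http://127.0.0.1:8080/upload/;
}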

How do I reload Nginx without dropping connections?

Use sudo nginx -s reload or sudo systemctl reload nginx. This sends a SIGHUP to the master process, which loads the new configuration, spawns fresh worker processes, and signals the old workers to shut down gracefully once they finish handling their current requests. Existing connections are not dropped.

How do I proxy to a backend running in Docker?

If Nginx runs on the host and your backend runs in Docker with a mapped port, use proxy_pass http://127.0.0.1:PORT; where PORT is the host port in your docker run -p PORT:CONTAINER_PORT command. If both Nginx and the backend run in the same Docker network, use the container name as the hostname: proxy_pass http://my-app:3000;. For Docker Compose, use the service name.
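
For the Docker Compose case, a minimal sketch assuming a service named app listening on port 3000 inside the Compose network:

# Inside the nginx container's config
location / {
    proxy_pass http://app:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}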

Summary

A well-configured Nginx reverse proxy gives you SSL termination, load balancing, WebSocket support, rate limiting, and security headers in a single place. The key directives to remember are proxy_pass, the four proxy_set_header lines for forwarding client information, and upstream blocks for multiple backends.

Always test your configuration with sudo nginx -t before reloading, and use tail -f /var/log/nginx/error.log to monitor for issues after changes.

Generate your Nginx config instantly → Use our free Nginx Config Generator here

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.