
Nginx Server Blocks: Complete Configuration Guide (2026)

Nginx server blocks are the equivalent of Apache virtual hosts — they let a single Nginx instance serve multiple domains, configure SSL, proxy requests to backend apps, balance load, and apply security rules. This guide covers everything with real, copy-paste configs.

Server Blocks vs Virtual Hosts

If you are coming from Apache, you are familiar with VirtualHost directives. Nginx calls the same concept a server block — a server { } stanza inside an http { } context. Each server block defines how Nginx handles requests for a specific combination of IP address, port, and hostname.

The key difference is processing order. Nginx uses a two-phase selection process:

  1. Port + IP match: Nginx first collects every server block whose listen directive matches the IP address and port the request arrived on.
  2. Server name match: among those blocks, Nginx picks the one whose server_name matches the Host header — exact names first, then leading-wildcard names (*.example.com), then trailing-wildcard names (mail.*), then regex names in config-file order.

If no server_name matches, Nginx falls back to the default server for that address:port pair — the first matching block in the config, or whichever block carries default_server on its listen line.
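Rather than relying on file order, many setups add an explicit catch-all block. A minimal sketch (444 is an Nginx-specific status that closes the connection without sending a response):

```nginx
# Catch-all: reject requests whose Host header matches no configured site
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;    # "_" is just a conventional placeholder, never matched literally
    return 444;       # non-standard Nginx code: drop the connection silently
}
```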

File Layout: sites-available and sites-enabled

On Debian/Ubuntu-based systems, the conventional layout mirrors Apache's pattern:

/etc/nginx/
  nginx.conf              # main config (includes sites-enabled/*)
  sites-available/        # all defined server blocks (enabled or not)
    example.com.conf
    api.example.com.conf
  sites-enabled/          # symlinks to active configs
    example.com.conf -> ../sites-available/example.com.conf

Activate a site by creating a symlink and reloading:

sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/
sudo nginx -t                  # always test before reload
sudo systemctl reload nginx

On RHEL/CentOS, the convention is /etc/nginx/conf.d/*.conf — all .conf files in that directory are included automatically by the default nginx.conf.
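Either layout works because nginx.conf pulls the directory in with an include directive inside the http { } context — something along these lines (exact paths vary by distribution):

```nginx
http {
    # Debian/Ubuntu convention
    include /etc/nginx/sites-enabled/*;

    # RHEL/CentOS convention
    include /etc/nginx/conf.d/*.conf;
}
```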

Basic Static Site Server Block

The simplest possible server block serves static files from a directory on disk:

server {
    listen 80;
    listen [::]:80;              # IPv6

    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    # Serve a custom 404 page
    error_page 404 /404.html;
    location = /404.html {
        internal;
    }

    # Access and error logs per virtual host
    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log warn;
}

The try_files directive tells Nginx to look for the request URI first as a file, then as a directory (note the trailing slash), and to return a 404 if neither exists. The explicit =404 fallback keeps requests for missing files from falling through to a catch-all handler — such as an application route — in more complex configurations.
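The same directive adapts to single-page apps: instead of returning 404, fall back to index.html so client-side routing can take over. A sketch:

```nginx
# SPA variant: unknown paths serve index.html instead of a 404
location / {
    try_files $uri $uri/ /index.html;
}
```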

SSL/TLS Configuration

In 2026, every public-facing site must use HTTPS. The standard pattern for a Let's Encrypt-issued certificate redirects HTTP to HTTPS and serves TLS on port 443:

# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    return 301 https://$host$request_uri;
}

# HTTPS server block
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;                    # HTTP/2 (Nginx 1.25.1+ syntax)

    server_name example.com www.example.com;

    # Certificate paths (Let's Encrypt / Certbot)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern TLS settings (disable SSLv3, TLS 1.0, TLS 1.1)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # OCSP Stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    # Session resumption (reduces TLS handshake overhead)
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # HSTS (tell browsers to always use HTTPS for this domain)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    root /var/www/example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Always run sudo nginx -t after editing SSL configs. A typo in a certificate path makes the test fail and the reload is rejected, leaving the old configuration running — but if Nginx is ever restarted (or the server reboots) with a broken config, it will not start at all, taking down every site on that box.

Reverse Proxy Configuration

The most common use case in modern infrastructure: Nginx sits in front of a Node.js, Python, Ruby, or PHP application running on a local port, and proxies HTTP requests to it.

server {
    listen 443 ssl;
    http2 on;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass         http://127.0.0.1:3000;  # backend app port
        proxy_http_version 1.1;

        # Required for WebSocket support
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Pass real client IP and host to the backend
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;

        # Buffer settings
        proxy_buffering    on;
        proxy_buffer_size  4k;
        proxy_buffers      8 4k;
    }
}

The proxy_set_header X-Forwarded-For line is critical for rate limiting and logging real client IPs. Without it, your application sees every request as coming from 127.0.0.1.
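The flip side: when Nginx itself sits behind a load balancer or CDN, $remote_addr is the proxy's IP. The stock realip module (compiled into most distro packages) can restore the real client address — a sketch assuming the trusted proxy lives in 10.0.0.0/8:

```nginx
# Trust X-Forwarded-For only when the request comes from a known proxy range
set_real_ip_from  10.0.0.0/8;        # example range — use your proxy's actual CIDR
real_ip_header    X-Forwarded-For;
real_ip_recursive on;                # walk past multiple trusted hops
```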

Load Balancing

Nginx's built-in upstream module handles load balancing across multiple backend instances with several strategies:

# Round-robin (default) — requests distributed equally
upstream app_servers {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

# Weighted round-robin — server2 gets 3x the traffic
upstream app_weighted {
    server 10.0.0.1:3000 weight=1;
    server 10.0.0.2:3000 weight=3;
    server 10.0.0.3:3000 weight=1;
}

# Least connections — route to server with fewest active connections
upstream app_least_conn {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

# IP hash — same client always hits the same backend (sticky sessions)
upstream app_sticky {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

# Passive health check: mark a server unavailable after 3 failures within 30s
upstream app_with_health {
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;  # only used if all others are down
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
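One optimization worth knowing: by default Nginx opens a fresh upstream connection for every request. The keepalive directive maintains a pool of idle connections to the backends, cutting connection-setup overhead — a sketch:

```nginx
upstream app_servers {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    keepalive 32;                       # idle connections kept open per worker
}

server {
    location / {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;         # keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # clear "close" so connections persist
    }
}
```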


Gzip Compression

Enabling gzip compression can reduce text-based response sizes by 60-80%, directly improving Time to First Byte and reducing bandwidth costs. Configure it in the http { } block in nginx.conf, or within a specific server block:

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;          # 1 (fastest) to 9 (best compression). 6 is the sweet spot.
gzip_min_length 1024;       # don't compress responses smaller than 1KB
gzip_types
    text/plain
    text/css
    text/javascript
    application/javascript
    application/json
    application/xml
    application/rss+xml
    image/svg+xml;

Do not compress already-compressed formats — JPEG, PNG, GIF, ZIP, PDF — you will waste CPU and can even increase their size. WOFF and WOFF2 fonts are internally compressed too, so they gain nothing from gzip.
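If your build pipeline can precompress assets, the gzip_static module (included in most distro packages) serves a ready-made .gz file instead of compressing the same response on every request:

```nginx
# Serve style.css.gz directly when the client accepts gzip,
# falling back to on-the-fly compression otherwise
gzip_static on;
```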

Security Headers

Security headers are a low-effort, high-impact layer of defense. Add them inside your server block or in a shared include file:

# Prevent browsers from MIME-sniffing content types
add_header X-Content-Type-Options "nosniff" always;

# Clickjacking protection
add_header X-Frame-Options "SAMEORIGIN" always;

# XSS filter — deprecated; current guidance is to disable it explicitly,
# since the legacy browser auditor itself enabled side-channel attacks
add_header X-XSS-Protection "0" always;

# HSTS — browsers always use HTTPS for this domain
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

# Referrer policy — don't leak full URL in referrer header
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Permissions policy — disable features you don't use
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

# Content Security Policy — restrict where resources can load from
# Start with report-only mode until you are sure it won't break things
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com;" always;

# Hide Nginx version from error pages and headers
server_tokens off;
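One caveat: Nginx inherits add_header directives from the enclosing level only if a block defines none of its own — a single add_header in a location discards the whole inherited set. A shared include file keeps the headers consistent everywhere; a sketch using a hypothetical snippets path:

```nginx
# /etc/nginx/snippets/security-headers.conf (the path is a convention, not required)
# ...the add_header lines above go here...

server {
    include /etc/nginx/snippets/security-headers.conf;

    location /assets/ {
        add_header Cache-Control "public, immutable";
        # re-include, since the add_header above wiped the inherited set
        include /etc/nginx/snippets/security-headers.conf;
    }
}
```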

Rate Limiting

Nginx's limit_req module provides token-bucket rate limiting that can protect login endpoints and APIs from brute-force attacks and abuse. Define zones in the http context, apply them in location blocks:

# In http { } block (nginx.conf or a shared include):
# Allow 10 requests/second per IP, using 10MB of shared memory for tracking
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

# Stricter zone for login endpoints — 1 request/second per IP
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

# In your server block:
server {
    listen 443 ssl;
    server_name example.com;

    # Apply general rate limit to all traffic
    limit_req zone=general burst=20 nodelay;

    location /auth/login {
        # Strict limit on login — allows burst of 5, then 1 req/sec
        limit_req zone=login burst=5 nodelay;
        limit_req_status 429;   # return 429 Too Many Requests (not 503)

        proxy_pass http://127.0.0.1:3000;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

The burst parameter allows short traffic spikes above the rate without rejecting requests. nodelay processes burst requests immediately instead of queuing them, which prevents legitimate users from experiencing artificial delays during a spike.
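When tuning limits against real traffic, limit_req_dry_run (Nginx 1.17.1+) logs would-be rejections without actually enforcing them, so you can size rate and burst before turning the limit on:

```nginx
location /auth/login {
    limit_req zone=login burst=5 nodelay;
    limit_req_dry_run on;   # log "limiting requests, dry run" instead of rejecting
    proxy_pass http://127.0.0.1:3000;
}
```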

Serving PHP with PHP-FPM

If your application is PHP-based (WordPress, Laravel, Magento), Nginx communicates with PHP-FPM via a Unix socket or TCP port:

server {
    listen 443 ssl;
    server_name example.com;

    root /var/www/example.com/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;  # or 127.0.0.1:9000
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT   $realpath_root;

        # Let Nginx's error_page directives handle errors returned by PHP-FPM
        fastcgi_intercept_errors on;
    }

    # Deny direct access to hidden files like .env, .git, .htaccess
    # (but allow .well-known, which Let's Encrypt and other tooling rely on)
    location ~ /\.(?!well-known) {
        deny all;
    }

    # Deny direct PHP execution in upload folders
    location ~* /uploads/.*\.php$ {
        deny all;
    }
}
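One extra guard worth adding inside the PHP location: verify the script actually exists before handing the request to PHP-FPM. This closes the classic path-info trick where a request like /uploads/avatar.jpg/evil.php gets routed to the interpreter:

```nginx
location ~ \.php$ {
    try_files $uri =404;     # refuse requests for non-existent .php scripts
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
}
```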

Location Block Matching Order

Understanding location matching priority prevents hard-to-debug routing bugs. Nginx selects a location like this:

  1. Exact match (=): location = /favicon.ico { } — if it matches, the search stops immediately.
  2. Longest prefix: among all prefix locations, Nginx remembers the longest match. If that match uses the ^~ modifier (location ^~ /static/ { }), the search stops and no regex is tried.
  3. Regexes in file order: regex locations — both case-sensitive (~ \.php$) and case-insensitive (~* \.(jpg|jpeg|png)$) — share the same priority and are checked in the order they appear in the config. The first match wins.
  4. Fallback: if no regex matches, the longest prefix match remembered in step 2 is used.

# Practical example showing priority
server {
    # Step 1 — exact: matches only /
    location = / {
        return 200 "exact root";
    }

    # Step 2 — preferential prefix: no regex will be checked for /static/
    location ^~ /static/ {
        root /var/www;
    }

    # Step 3 — regex: matches any .php file
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    }

    # Step 4 — prefix fallback: catch-all
    location / {
        try_files $uri $uri/ =404;
    }
}

Caching Static Assets

Long cache lifetimes on static assets (CSS, JS, images) dramatically reduce server load and improve page speed scores. Use cache-busting query strings or content hashes in filenames so users always get fresh assets on deploy. Note that add_header in these location blocks suppresses any headers set at the server level (Nginx inherits add_header only when a block defines none of its own), so repeat security headers here if you rely on them:

location ~* \.(css|js|woff|woff2|ttf|ico)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
    access_log off;           # no need to log static asset hits
}

location ~* \.(jpg|jpeg|png|gif|webp|svg)$ {
    expires 30d;
    add_header Cache-Control "public";
    access_log off;
}

Common Mistakes and How to Fix Them

Mistake 1: Using if for URL rewrites

Nginx's if is "evil" (as the official docs say). It does not behave like a programming language's if statement — it creates a new nested location context with unpredictable inheritance. Use try_files, return, or rewrite instead:

# BAD — proxy_pass is only valid in "if in location",
# and even there the behavior is unpredictable
if ($request_uri ~* "^/old-path") {
    proxy_pass http://127.0.0.1:3000;   # this will NOT work as expected
}

# GOOD — use rewrite or return
location /old-path {
    return 301 /new-path;
}

# GOOD — try_files for internal rewrites
location / {
    try_files $uri $uri/ /index.php?$query_string;
}

Mistake 2: Forgetting proxy_http_version 1.1 for WebSockets

Nginx defaults to HTTP/1.0 for upstream connections, which does not support persistent connections or the WebSocket upgrade handshake. Always set proxy_http_version 1.1 and the Connection header when proxying to any modern application.
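A common refinement is a map in the http { } context that sets the Connection header to "upgrade" only when the client actually requests a WebSocket, and "close" otherwise:

```nginx
# In the http { } context
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In the proxy location
location /ws/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade    $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```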

Mistake 3: Not testing before reload

A config error in any included file will prevent Nginx from reloading, leaving the old config running. Always run nginx -t before systemctl reload nginx. Consider adding it as a pre-reload alias:

alias nginx-reload='sudo nginx -t && sudo systemctl reload nginx'

Mistake 4: Exposing server version and OS info

By default, Nginx includes its version number in error pages and the Server response header. Disable this with server_tokens off; in the http block.

Mistake 5: Not handling the www vs non-www redirect

Serving the same content on both example.com and www.example.com creates duplicate content issues for SEO. Pick one canonical form and redirect the other:

# Redirect www to non-www
server {
    listen 443 ssl;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}

Full Production Server Block Template

Here is a complete, production-ready HTTPS server block combining SSL, security headers, gzip, rate limiting, and PHP-FPM support:

# Rate limiting zones (in http { } context)
limit_req_zone $binary_remote_addr zone=general:10m rate=30r/s;

# HTTP redirect
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# www redirect
server {
    listen 443 ssl;
    http2 on;
    server_name www.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://example.com$request_uri;
}

# Main HTTPS server
server {
    listen 443 ssl;
    http2 on;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_stapling        on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 1.1.1.1 8.8.8.8 valid=300s;

    server_tokens off;
    root /var/www/example.com/public;
    index index.php index.html;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options    "nosniff" always;
    add_header X-Frame-Options           "SAMEORIGIN" always;
    add_header Referrer-Policy           "strict-origin-when-cross-origin" always;

    # Gzip
    gzip on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/javascript application/json image/svg+xml;

    # Rate limiting
    limit_req zone=general burst=50 nodelay;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }

    # Static asset caching
    # (add_header here suppresses the server-level headers above — re-add what you need)
    location ~* \.(css|js|woff2|ico)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        access_log off;
    }

    # Block hidden files (but allow .well-known for ACME challenges)
    location ~ /\.(?!well-known) {
        deny all;
    }

    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log warn;
}

The Bottom Line

Nginx server blocks are powerful but unforgiving — a single syntax error stops a reload. Build the habit of always testing with nginx -t, use sites-available/sites-enabled to manage multiple vhosts cleanly, and layer SSL, security headers, gzip, and rate limiting from day one. The configurations above are battle-tested patterns used in production across thousands of servers.

Use our Nginx Config Generator to generate server block configs instantly, SSL Checker to verify your certificate chain, and DNS Lookup to confirm your domain resolves correctly before going live. Browse all 70+ free DevOps tools.