Kubernetes Ingress Explained: Route Traffic Like a Pro
Running ten microservices and exposing each with its own LoadBalancer would cost a fortune in cloud load balancer fees and produce a mess of IP addresses. Kubernetes Ingress solves this: one entry point, intelligent routing, TLS termination, and full control via YAML. Here is everything you need to know.
The Problem Ingress Solves
In Kubernetes, a Service exposes a set of pods. But how does external traffic actually reach a Service? There are three options:
- NodePort: Opens a port on every cluster node. Works but requires you to manage ports (30000–32767) and deal with node IPs directly. Not suitable for production.
- LoadBalancer: Provisions a cloud load balancer (AWS ALB, GCP Load Balancer, Azure LB) per Service. Expensive and unmanageable at scale - ten services means ten load balancers, ten IPs, and ten separate TLS certificates.
- Ingress: A single load balancer entry point that routes traffic to multiple Services based on hostname and URL path. One LB, one IP, one TLS certificate, unlimited services.
Ingress is the right choice for the overwhelming majority of production workloads. It is how virtually every multi-service Kubernetes application is exposed to the internet.
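For comparison, the per-service LoadBalancer approach looks like the sketch below, multiplied by every service you expose. Names, labels, and ports here are illustrative, not from a specific application:

```yaml
# One of these per exposed service: each provisions its own cloud LB
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: LoadBalancer   # triggers creation of a dedicated cloud load balancer
  selector:
    app: frontend      # must match your pod labels
  ports:
  - port: 80           # LB-facing port
    targetPort: 8080   # container port
```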
Ingress vs Ingress Controller: The Distinction
This is a common source of confusion. There are two separate things:
- Ingress resource: A Kubernetes API object (kind: Ingress) that declares your routing rules. It is declarative - you describe what you want, not how to implement it.
- Ingress controller: A pod (or set of pods) running inside your cluster that reads Ingress resources and implements the routing rules. Without an Ingress controller, Ingress resources do nothing.
Kubernetes does not come with an Ingress controller by default. You must install one. The most popular choices are:
- ingress-nginx: Based on Nginx, maintained by the Kubernetes community. The most widely deployed controller. Not to be confused with the separate nginx-ingress controller maintained by F5/NGINX Inc.
- Traefik: Cloud-native, supports automatic Let's Encrypt certificates, ships with a built-in dashboard.
- AWS Load Balancer Controller: Provisions AWS ALBs natively, deep AWS integration (WAF, Cognito, target group binding).
- GKE Ingress / Azure Application Gateway: Cloud-managed controllers for GKE and AKS respectively.
- Kong, HAProxy, Istio Gateway: For advanced use cases (API gateway features, service mesh).
Installing ingress-nginx
# Install ingress-nginx via Helm (recommended)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.replicaCount=2

# Verify the controller pod is running
kubectl get pods -n ingress-nginx

# Get the external IP (takes a minute to provision on cloud)
kubectl get svc -n ingress-nginx ingress-nginx-controller
Once the controller is running, it watches for Ingress resources across all namespaces and configures its Nginx backend accordingly. Every time you create or update an Ingress resource, the controller reconfigures Nginx within seconds, with no downtime.
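The Helm chart also registers an IngressClass named nginx. If you want Ingress resources that omit ingressClassName to be picked up anyway, you can mark that class as the cluster default. A sketch of the resource is below; with the Helm chart itself, the equivalent is typically setting controller.ingressClassResource.default=true, and a cluster should have at most one default class:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Ingresses with no ingressClassName fall through to this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```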
Your First Ingress Resource
Let's say you have two services running: a frontend on port 80 and an API on port 8080. You want to route traffic based on the URL path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
Key fields explained:
- ingressClassName: Tells Kubernetes which Ingress controller should handle this resource. Stable since Kubernetes 1.19, it replaces the old kubernetes.io/ingress.class annotation.
- host: The incoming hostname to match. Requests to any other hostname will not match this rule.
- pathType: Prefix matches when the URL path starts with the specified value. Use Exact for exact-match routing.
- Paths are forwarded to the backend unchanged: a request for /api/users reaches api-service as /api/users. To strip the prefix instead, ingress-nginx needs the rewrite-target annotation with a regex capture group; beware that a bare rewrite-target: / rewrites every matched request to literally /, discarding the rest of the path.
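If you do need prefix stripping with ingress-nginx, the reliable pattern is a regex path with a capture group, following the upstream rewrite documentation. A sketch reusing the api-service backend from above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-rewrite-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 is the second capture group from the path below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific  # regex paths require this
        backend:
          service:
            name: api-service
            port:
              number: 8080
```

With this, /api/users is forwarded to api-service as /users, and /api alone becomes /.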
Host-Based Routing (Virtual Hosts)
You can route to entirely different services based on the hostname. This lets you host multiple applications on the same cluster and IP:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
  - host: admin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 80
  - host: docs.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docs-service
            port:
              number: 80
Point all three DNS records (api.example.com, admin.example.com, docs.example.com) to the same load balancer IP, and the Ingress controller routes to the right service based on the Host header.
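Since Kubernetes 1.19 a rule's host may also contain a single wildcard, which matches exactly one DNS label: *.example.com matches docs.example.com but not example.com or a.b.example.com. A hedged sketch of a catch-all rule (default-service is a hypothetical fallback, not from the examples above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: "*.example.com"   # matches one subdomain label
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: default-service  # hypothetical catch-all backend
            port:
              number: 80
```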
TLS Termination
Ingress handles TLS termination, decrypting HTTPS traffic at the controller and forwarding plaintext HTTP to your backend Services. Your application pods do not need to manage certificates.
# Step 1: Create a TLS secret with your certificate and key
kubectl create secret tls app-tls-secret \
  --cert=fullchain.pem \
  --key=privkey.pem \
  --namespace production

# Step 2: Reference the secret in your Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
The ssl-redirect: "true" annotation redirects plain HTTP requests to HTTPS. ingress-nginx already defaults to this behavior once a TLS section is present, so the annotation mainly makes the intent explicit (set it to "false" to opt out).
Automated TLS with cert-manager
Manually creating TLS secrets does not scale. cert-manager is the standard solution for automatic certificate provisioning and renewal in Kubernetes, using Let's Encrypt as the CA:
# Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

# Create a ClusterIssuer for Let's Encrypt
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
# Reference the issuer in your Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auto-tls-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-cert  # cert-manager creates this automatically
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
With this setup, cert-manager automatically provisions a Let's Encrypt certificate when the Ingress is created and renews it before it expires.
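cert-manager can also be driven explicitly with a Certificate resource instead of the Ingress annotation, which is useful when the certificate's lifecycle should not be tied to a single Ingress. A sketch mirroring the names used above:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-tls-cert
  namespace: production
spec:
  secretName: app-tls-cert     # the TLS secret cert-manager will create
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - app.example.com
```

Either way, `kubectl describe certificate app-tls-cert -n production` shows issuance progress and renewal status.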
Useful Nginx Ingress Annotations
The ingress-nginx controller is highly configurable via annotations on the Ingress resource:
metadata:
  annotations:
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-connections: "5"
    # Request/response size
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    # Timeouts
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    # CORS
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.example.com"
    # Basic auth
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth-secret
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    # Whitelist by IP
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.1.0/24"
    # Custom Nginx config snippet
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
      more_set_headers "X-Content-Type-Options: nosniff";
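The basic-auth annotations above expect a secret containing an htpasswd-format file stored under the key "auth". One way to produce it, using openssl where the htpasswd tool is not installed (username and password here are placeholders):

```shell
# Write an htpasswd-format entry (apr1 is the Apache MD5 scheme) to a
# file named "auth" - the key name ingress-nginx looks for in the secret
printf 'admin:%s\n' "$(openssl passwd -apr1 'S3cretPass')" > auth
cat auth

# Then load it into the cluster (run against your cluster):
#   kubectl create secret generic basic-auth-secret --from-file=auth -n production
```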
Debugging Ingress Issues
When traffic is not routing as expected, work through this checklist:
# 1. Verify the Ingress resource was created and has an address
kubectl get ingress -n production
# Should show an ADDRESS - if empty, the controller hasn't picked it up

# 2. Describe the Ingress for events and warnings
kubectl describe ingress app-ingress -n production

# 3. Check controller logs
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx \
  --tail=50

# 4. Verify the backend Services exist and have endpoints
kubectl get svc -n production
kubectl get endpoints frontend-service -n production
# If ENDPOINTS shows <none>, your pods are not running or the selector doesn't match

# 5. Test routing from inside the cluster
kubectl run debug --image=curlimages/curl -it --rm -- \
  curl -H "Host: app.example.com" http://ingress-nginx-controller.ingress-nginx
The most common Ingress debugging scenario: the Ingress has an address, but traffic returns 404. Check that your Service name and port exactly match what is in the Ingress spec, and that the Service's selector matches your pod labels.
FAQ
What is the difference between Ingress and a LoadBalancer Service?
A type: LoadBalancer Service provisions one cloud load balancer per Service. Each gets its own IP and handles traffic for a single Service. An Ingress provisions a single load balancer (through the Ingress controller's Service) and routes traffic to multiple backend Services based on hostname and path. For applications with more than one exposed service, Ingress is significantly cheaper and easier to manage.
Do I need an Ingress controller for every cluster?
Yes. Every cluster that uses Ingress resources must have at least one Ingress controller deployed. The controller is what actually processes Ingress resources and configures routing. Some managed Kubernetes services (GKE, AKS) can optionally provision a managed controller for you, but you still need to enable it explicitly.
Can I run multiple Ingress controllers in one cluster?
Yes. You can run multiple controllers (e.g. ingress-nginx for internal services and AWS Load Balancer Controller for external services) in the same cluster. Use the ingressClassName field on each Ingress resource to specify which controller should handle it. Each controller only processes Ingress resources that reference its class.
What is the difference between pathType: Prefix and pathType: Exact?
Prefix matches any URL path that starts with the specified path: /api with Prefix matches /api, /api/users, /api/v2/orders, and so on. Exact matches only the specified path character-for-character. A third option, ImplementationSpecific, delegates interpretation to the Ingress controller (ingress-nginx treats such paths as regular expressions when the use-regex or rewrite-target annotation is set). Use Prefix for service routing and Exact for specific endpoints like health checks or webhooks.
How does Ingress handle WebSockets?
WebSocket connections need long-lived proxying because they begin as an HTTP Upgrade request. With ingress-nginx, add the annotations nginx.ingress.kubernetes.io/proxy-read-timeout: "3600" and nginx.ingress.kubernetes.io/proxy-send-timeout: "3600" to keep idle connections open for up to an hour. The WebSocket upgrade handshake itself is handled automatically by Nginx. Ensure your backend Service targets the port your WebSocket server actually listens on.
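In manifest form, the WebSocket-friendly timeouts described above look like this (3600 seconds, i.e. one hour, is a common but arbitrary choice):

```yaml
metadata:
  annotations:
    # Keep idle WebSocket connections open for up to an hour
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```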
What happens to traffic during an Ingress controller update?
The ingress-nginx Helm chart deploys the controller as a Deployment with rolling update strategy. During a rolling update, new controller pods come up before old ones are terminated. The Nginx hot-reload mechanism (using SIGHUP) updates the Nginx configuration without dropping existing connections. For zero-downtime updates, run at least two controller replicas with a PodDisruptionBudget ensuring at least one is always available.
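A PodDisruptionBudget for the controller, as suggested above, might look like the sketch below. The selector labels follow the upstream Helm chart's conventions, but verify them against your release with kubectl get pods -n ingress-nginx --show-labels:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minAvailable: 1            # at least one controller pod stays up
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/component: controller
```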
Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.