Darian Vance

Posted on • Originally published at wp.me

Solved: RESULTS of What Ingress Controller are you using TODAY?

🚀 Executive Summary

TL;DR: Choosing the optimal Kubernetes Ingress Controller is crucial for efficient application routing, SSL/TLS management, and performance. This analysis compares NGINX, Traefik, and AWS Load Balancer Controller, detailing their specific use cases, deployment methods, and configuration examples to help users select the best solution for their environment and technical requirements.

🎯 Key Takeaways

  • The Kubernetes NGINX Ingress Controller is the most widely adopted, offering a stable, high-performance, and feature-rich solution with extensive control via annotations, suitable for general-purpose web proxying.
  • Traefik Proxy excels in cloud-native and microservices environments due to its dynamic configuration, auto-discovery capabilities, CRD-native approach, and built-in dashboard for ease of use.
  • The AWS Load Balancer Controller provides deep, native integration with AWS Application Load Balancers (ALBs) and Network Load Balancers (NLBs), leveraging AWS-specific features like WAF and ACM for organizations exclusively on AWS.

Choosing the right Kubernetes Ingress Controller is crucial for application routing and performance. This post explores the leading options, including NGINX, Traefik, and cloud-native solutions, providing practical configurations and a detailed comparison to guide your decision.

The Ingress Controller Conundrum: Symptoms and Challenges

In the dynamic world of Kubernetes, managing external access to your services can quickly become a bottleneck or a source of frustration. While Kubernetes offers a powerful declarative API, the Ingress resource itself is a high-level abstraction. It requires an Ingress Controller to fulfill its promise, acting as the bridge between the outside world and your cluster’s services.

Many IT professionals encounter these common symptoms and challenges when dealing with Ingress in Kubernetes:

  • Complexity in Routing: Difficulty in setting up precise routing rules, path-based routing, host-based routing, or custom HTTP headers for different services.
  • SSL/TLS Management Overhead: Struggling with certificate provisioning, renewal, and termination at the edge of the cluster, often desiring seamless integration with tools like Cert-Manager.
  • Load Balancing and Traffic Management Limitations: Needing advanced load balancing strategies (e.g., sticky sessions, least connections), canary deployments, A/B testing, or rate limiting that basic Ingress setups don’t easily provide.
  • Performance and Scalability Issues: Experiencing performance bottlenecks or challenges scaling the ingress layer under high traffic loads.
  • Cloud Integration Gaps: In cloud environments, a desire to leverage native cloud load balancer features (e.g., WAF integration, native authentication, advanced metrics) without complex manual configuration.
  • Vendor Lock-in Concerns vs. Feature Richness: Balancing the desire for a generic, portable solution against the need for highly integrated, cloud-specific features.
  • Operational Burden: Managing the configuration, monitoring, and troubleshooting of multiple Ingress Controllers or dealing with a single, overly complex setup.

The Reddit thread “RESULTS of What Ingress Controller are you using TODAY?” highlights a shared community interest in effective solutions. Let’s dive into some of the most prominent and powerful choices available today.

Solution 1: The Ubiquitous Kubernetes NGINX Ingress Controller

The NGINX Ingress Controller, often referred to as the “Community NGINX Ingress Controller” or simply “NGINX IC,” is by far the most widely adopted and battle-tested Ingress Controller. It leverages the robust and high-performance NGINX web server to handle incoming traffic.

When to Use NGINX Ingress Controller

  • You need a stable, high-performance, and feature-rich Ingress solution.
  • Your team is familiar with NGINX configuration paradigms or wants extensive control via annotations.
  • You require advanced features like URL rewriting, custom NGINX configurations, or sophisticated routing rules.
  • You value a large community, extensive documentation, and a mature ecosystem.

Deployment Example

The easiest way to deploy the NGINX Ingress Controller is via Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=LoadBalancer

This command deploys the controller into its own namespace, exposing it via a cloud provider’s LoadBalancer (e.g., AWS ELB/NLB, GCP Load Balancer). For bare-metal or on-premises, you might use NodePort or MetalLB.
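
For example, on a bare-metal cluster without a cloud load balancer, a minimal Helm values sketch for NodePort-based exposure might look like the following (the port numbers are illustrative assumptions and must fall within the cluster's NodePort range; with MetalLB you can usually keep the default LoadBalancer type instead). Pass it to the install command with -f values.yaml:

# values.yaml -- hedged sketch for clusters without a cloud LoadBalancer
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080   # assumed port within the default 30000-32767 NodePort range
      https: 30443  # assumed port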

Configuration Example: Basic Ingress with SSL

Here’s an example of an Ingress resource using the NGINX Ingress Controller to route traffic for myapp.example.com to a backend service, including SSL termination with a TLS secret:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    # Use cert-manager to automatically issue a certificate for this host
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # Force HTTP traffic to redirect to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # Rewrite path for the backend service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx # Specify the ingress class for the NGINX controller
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls-secret # Cert-manager will store the certificate here
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /my-app(/|$)(.*) # Match /my-app or /my-app/something
            pathType: ImplementationSpecific # Regex paths require ImplementationSpecific rather than Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80

In this example:

  • ingressClassName: nginx ensures this Ingress is handled by the NGINX controller.
  • The cert-manager.io/cluster-issuer and kubernetes.io/tls-acme annotations tell Cert-Manager to provision an SSL certificate (a minimal ClusterIssuer sketch follows this list).
  • nginx.ingress.kubernetes.io/ssl-redirect: "true" automatically redirects HTTP to HTTPS.
  • nginx.ingress.kubernetes.io/rewrite-target: /$2 rewrites the URL path before sending it to the backend service, removing the /my-app prefix.
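
The cert-manager annotations above assume a ClusterIssuer named letsencrypt-prod already exists in the cluster. If it does not, a minimal sketch looks roughly like this (the email address is a placeholder, and the HTTP-01 solver is just one of several options):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: admin@example.com  # placeholder contact address for Let's Encrypt notifications
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # Secret where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx  # solve HTTP-01 challenges through the NGINX ingress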

Solution 2: Traefik Proxy – The Cloud-Native Edge Router

Traefik (pronounced “traffic”) is an open-source Edge Router that makes publishing your services a fun and easy experience. It integrates with your existing infrastructure components (Kubernetes, Docker, Swarm, AWS, etc.) and configures itself automatically and dynamically.

When to Use Traefik

  • You are building cloud-native applications and microservices, where dynamic configuration and auto-discovery are key.
  • You prefer a CRD-native approach to configuration over extensive annotations.
  • You want a lightweight Ingress Controller with a built-in dashboard for monitoring.
  • You value simplicity and ease of use for typical routing patterns.

Deployment Example

Deploying Traefik with Helm is straightforward:

helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik \
  --namespace traefik --create-namespace \
  --set service.type=LoadBalancer \
  --set providers.kubernetesIngress.enabled=true \
  --set providers.kubernetesCRD.enabled=true \
  --set ingressRoute.dashboard.enabled=true

This deploys Traefik, enables Kubernetes Ingress resource processing, and importantly, enables its Kubernetes CRD provider for more advanced configurations. The dashboard is also enabled for quick monitoring.

Configuration Example: Traefik IngressRoute (CRD) with Middleware

Traefik encourages the use of its custom resources (CRDs), such as IngressRoute, Middleware, and TraefikService, for a more declarative and powerful configuration than standard Ingress resources allow.

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-to-https
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: myapp-ingressroute
  namespace: default
spec:
  entryPoints:
    - web # HTTP entry point
  routes:
    - match: Host(`myapp.example.com`) && PathPrefix(`/my-app`)
      kind: Rule
      services:
        - name: myapp-service
          port: 80
      middlewares:
        - name: redirect-to-https # Apply the HTTP to HTTPS redirect
  tls:
    certResolver: myresolver # Assumes an ACME certificate resolver (e.g. Let's Encrypt) is configured in Traefik's static configuration
    domains:
      - main: myapp.example.com
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: myapp-ingressroute-https
  namespace: default
spec:
  entryPoints:
    - websecure # HTTPS entry point
  routes:
    - match: Host(`myapp.example.com`) && PathPrefix(`/my-app`)
      kind: Rule
      services:
        - name: myapp-service
          port: 80
  tls:
    secretName: myapp-tls-secret # Or let certResolver handle it

This example demonstrates:

  • A Middleware resource to enforce HTTP to HTTPS redirection.
  • An IngressRoute defining rules for myapp.example.com, routing traffic to myapp-service.
  • The middlewares section attaches the defined middleware to the route.
  • The tls section handles SSL/TLS termination, either via a certResolver (such as Traefik's built-in Let's Encrypt integration; see the sketch below) or an existing secretName.
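
The certResolver named myresolver referenced above is not created by the IngressRoute itself; it must be declared in Traefik's static configuration (for example in traefik.yml, or passed to the Helm chart as CLI arguments). A minimal sketch using Traefik's built-in ACME/Let's Encrypt support, with an assumed email address and storage path:

# Traefik static configuration (traefik.yml) -- hedged sketch
certificatesResolvers:
  myresolver:
    acme:
      email: admin@example.com   # placeholder contact address
      storage: /data/acme.json   # must live on persistent storage
      httpChallenge:
        entryPoint: web          # solve HTTP-01 challenges on the HTTP entry point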

Solution 3: Cloud-Native Integration with AWS Load Balancer Controller

For organizations operating exclusively on AWS, the AWS Load Balancer Controller (formerly AWS ALB Ingress Controller) offers deep integration with native AWS Application Load Balancers (ALBs) and Network Load Balancers (NLBs). It provisions and manages these AWS resources directly from Kubernetes Ingress and Service resources.

When to Use AWS Load Balancer Controller

  • Your Kubernetes clusters are hosted on AWS (EKS, self-managed EC2).
  • You want to leverage AWS-native features like AWS WAF, Cognito authentication, Global Accelerator integration, or shared ALBs across multiple Ingresses.
  • You prefer a fully managed and scalable load balancing solution provided by AWS.
  • You need high performance and reliability that AWS ALBs/NLBs inherently offer.

Deployment Example

Deploying the AWS Load Balancer Controller typically involves setting up IAM roles and then deploying via Helm:

1. Create the IAM policy and role (simplified; production setups typically require more detail):

# Example IAM policy (full policy is extensive, consult AWS docs)
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

# If using EKS with OIDC, create a Service Account and annotate it
# This binds the K8s service account to an AWS IAM role
eksctl create iamserviceaccount \
    --cluster=your-cluster-name \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --attach-policy-arn=arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
    --approve

2. Deploy with Helm:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=your-cluster-name \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

Configuration Example: Ingress for AWS ALB

The AWS Load Balancer Controller uses standard Kubernetes Ingress resources, but with extensive annotations to configure the underlying ALB.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-alb-ingress
  annotations:
    # Use the ALB Ingress class
    kubernetes.io/ingress.class: alb
    # Specify the scheme for the ALB (internet-facing or internal)
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Specify the target type for the ALB (ip or instance)
    alb.ingress.kubernetes.io/target-type: ip
    # Specify subnets for the ALB
    alb.ingress.kubernetes.io/subnets: subnet-xxxxxxxx, subnet-yyyyyyyy
    # Use a pre-existing ACM certificate for SSL/TLS
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERT_ID
    # Redirect HTTP to HTTPS
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect # This refers to the action defined above
                port:
                  name: use-annotation
          - path: /my-app
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80

Key annotations here include:

  • kubernetes.io/ingress.class: alb: Directs the controller to manage this Ingress (newer controller versions also support spec.ingressClassName: alb backed by an IngressClass resource, but the annotation is still honored).
  • alb.ingress.kubernetes.io/scheme and alb.ingress.kubernetes.io/subnets: Configure the ALB itself.
  • alb.ingress.kubernetes.io/certificate-arn: Integrates with AWS Certificate Manager (ACM) for TLS.
  • alb.ingress.kubernetes.io/listen-ports and alb.ingress.kubernetes.io/actions.ssl-redirect: Define listeners and a redirect action, routing HTTP to HTTPS.
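
The example above provisions a dedicated ALB for a single Ingress. The controller also supports IngressGroups, which cover the "shared ALBs across multiple Ingresses" use case mentioned earlier: Ingresses carrying the same alb.ingress.kubernetes.io/group.name annotation are merged onto one ALB. Here is a hedged sketch for a second application joining a shared group (the group name, host, and service names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otherapp-alb-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Ingresses sharing this group.name are merged onto a single ALB
    alb.ingress.kubernetes.io/group.name: shared-public
    # Lower order values are evaluated first when grouped rules overlap
    alb.ingress.kubernetes.io/group.order: "20"
spec:
  rules:
    - host: otherapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: otherapp-service
                port:
                  number: 80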

Comparison: NGINX, Traefik, and AWS Load Balancer Controller

To help you decide, here’s a comparative overview of these three powerful Ingress Controllers:

| Feature/Aspect | Kubernetes NGINX Ingress Controller | Traefik Proxy | AWS Load Balancer Controller |
| --- | --- | --- | --- |
| Primary Use Case | General-purpose, high-performance web proxying; rich feature set; on-premises/cloud agnostic. | Cloud-native, dynamic service discovery, microservices, developer-friendly, API Gateway features. | AWS-specific environments; deep integration with AWS ALBs/NLBs, leveraging AWS features. |
| Configuration Model | Standard Kubernetes Ingress resource with extensive NGINX-specific annotations. | Kubernetes Ingress resource, but strongly favors its own CRDs (IngressRoute, Middleware). | Standard Kubernetes Ingress resource with extensive AWS-specific annotations. |
| Dynamic Configuration | Ingress changes trigger automatic NGINX reloads; endpoint changes are applied dynamically. | Highly dynamic; auto-discovers services and updates configuration without restarts. | Relatively dynamic; updates the underlying AWS load balancers in response to Ingress changes. |
| Cloud Integration | Cloud-agnostic; uses a cloud LoadBalancer Service for exposure. | Cloud-agnostic; can integrate with various providers via specific configurations. | AWS-native and exclusive; provisions and manages AWS ALBs/NLBs directly. |
| Key Features | Rewrite rules, custom NGINX snippets, sticky sessions, rate limiting, WAF integration (commercial NGINX Plus). | Built-in dashboard, metrics, tracing, middleware chaining, built-in ACME/Let's Encrypt support, API Gateway capabilities. | AWS WAF integration, Cognito authentication, Global Accelerator support, Fargate compatibility, shared ALB management. |
| SSL/TLS Management | Via Kubernetes Secrets, often with Cert-Manager. | Via Kubernetes Secrets or built-in ACME certificate resolvers. | Via AWS Certificate Manager (ACM) ARNs. |
| Performance | Excellent; highly optimized NGINX core. | Very good; lightweight and efficient. | Excellent; leverages AWS's highly optimized and scalable ALB/NLB infrastructure. |
| Community/Support | Largest, most mature community; extensive documentation. | Strong and growing community; good documentation. | Strong support from AWS and Kubernetes SIGs. |
| Learning Curve | Moderate; familiarity with NGINX and Kubernetes annotations helps. | Relatively low for basic use; moderate for advanced CRD usage. | Moderate; requires understanding of both Kubernetes and AWS ALB/NLB concepts. |

Conclusion

The “best” Ingress Controller truly depends on your specific use case, environment, and team’s expertise. The Kubernetes NGINX Ingress Controller remains a robust, performant, and reliable choice for most general purposes, offering unparalleled flexibility through annotations.

Traefik shines in cloud-native environments, emphasizing dynamic configuration and a developer-friendly approach with its CRDs. It’s an excellent choice for microservices architectures where agility is paramount.

Finally, for those fully committed to the AWS ecosystem, the AWS Load Balancer Controller offers a seamless and powerful integration, leveraging the full potential of AWS’s managed load balancing services, often leading to simpler operations and cost efficiencies.

Evaluate your requirements for features, performance, operational overhead, and cloud integration, then choose the controller that aligns best with your strategic goals. Many organizations even use a combination, perhaps NGINX for a public-facing critical application and Traefik for internal APIs, or the AWS controller for public services and NGINX for internal ones, demonstrating the flexibility of the Kubernetes ecosystem.


Darian Vance

👉 Read the original article on TechResolve.blog
