Kubernetes Ingress Deep Dive

Lu Andy
3 min read · Sep 10, 2021


TL;DR

In this article, we’ll dive into Kubernetes Ingress. We’ll walk through how the K8S Ingress controller operates in detail, as well as how Ingress routes an external request to the application (Pod) it is destined for.

Kubernetes Ingress

Traditionally, Kubernetes relies on Ingress to handle L7 traffic, such as HTTP(S) and gRPC, that enters your cluster from outside.

An Ingress is made up of an Ingress controller and an Ingress object (aka Ingress resource):

  • Ingress object: a Kubernetes resource that contains a collection of routing rules. It is just a static definition.
  • Ingress controller: usually an application or program that reads the Ingress objects and actually processes the routing rules.
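
For concreteness, here is a minimal sketch of an Ingress object (the host, service name, and port are illustrative placeholders):

```yaml
# A minimal Ingress object: just a static set of routing rules.
# A controller, not this object, does the actual routing.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com        # route requests for this host...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service  # ...to this backend service
                port:
                  number: 80
```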

Ingress controllers

There are a variety of Ingress controllers you can choose from for your application.

You choose your Ingress controller when you create an Ingress object: by including an annotation (or, in newer API versions, the ingressClassName field), you select a controller.

If you don’t choose an Ingress controller, your cloud provider may use its default one, e.g. Google uses the GKE Ingress controller by default.
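
As a sketch, controller selection can look like this (names are illustrative; the annotation is the older mechanism, ingressClassName the newer one, and you would normally use only one of the two):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Older mechanism: an annotation naming the controller
    kubernetes.io/ingress.class: nginx
spec:
  # Newer mechanism: reference an IngressClass object
  ingressClassName: nginx
  defaultBackend:
    service:
      name: app-service
      port:
        number: 80
```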

In reality, the various Ingress controllers operate differently. Typically, there are two modes:

  • Proxy Mode

The controller runs as a pod/Deployment on nodes and exposes itself externally as a normal service. The controller behaves like a proxy and routes outside requests based on Ingress rules.

  • Load Balancer Mode

The controller works together with an external HTTP LB. The controller creates a separate LB for each Ingress object, and the LB is responsible for routing requests.

Nginx Ingress controller

The Nginx Ingress controller runs in proxy mode. It runs as a pod in your cluster and is exposed as a standard K8S service of type LoadBalancer.
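
As a rough sketch, that exposure looks like this (the names and labels follow a typical ingress-nginx installation and may differ in yours):

```yaml
# A standard Service of type LoadBalancer in front of the controller pods.
# The cloud provider provisions an external L4 load balancer for it.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http    # named container port on the controller pods
    - name: https
      port: 443
      targetPort: https
```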

So how does an outside request reach the application pod it wants? It is actually more complicated than you might think. Here is a quick overview:

  1. An external user makes a request to app.example.com
  2. DNS translates the domain name to the corresponding IP address
  3. The IP address points to a cloud-provided L4 Load Balancer, such as GCP NLB or AWS NLB
  4. The LB forwards the request to the Nginx Ingress controller service in the backend via NodeIP:NodePort
  5. The Ingress controller service forwards the request to one of the backend pods running the Ingress controller, using the standard K8S networking stack (kube-proxy and endpoints)
  6. The Ingress controller pod looks up the routing rules defined in all static Ingress objects
  7. The Ingress controller pod routes the request to the application service
  8. Finally, the application service routes the request to one of the app pods, again using the standard K8S networking stack

As you can see, the request gets load balanced at three layers (steps 4, 5, and 8), so there can be a network performance cost along the path.
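
Putting steps 6–8 together, here is a sketch of the objects involved (all names, labels, and ports are illustrative):

```yaml
# The application Service the controller routes to (step 7);
# kube-proxy and endpoints then pick one of the app pods (step 8).
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: example-app     # matches the application pods
  ports:
    - port: 80
      targetPort: 8080   # container port of the app pods
---
# The routing rule the controller pod consults in step 6.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```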

GKE Ingress controller

On GKE, Ingress is implemented using Cloud HTTP(S) Load Balancing.

When you create an Ingress object, the GKE Ingress controller creates an L7 HTTP(S) load balancer and configures it to route external traffic to your application based on the routing rules that you define in your Ingress objects.

Using GKE Ingress, you get improved network performance in terms of both latency and throughput, as requests take fewer network hops to reach your application.

To further improve performance, you can leverage GKE cloud-native load balancing using network endpoint groups (NEGs), as sketched below.
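
A minimal sketch of enabling NEGs, assuming a Service like the one above (the annotation is GKE’s documented one; the rest is illustrative):

```yaml
# With this annotation, GKE creates network endpoint groups (NEGs)
# so the Cloud HTTP(S) load balancer sends traffic to pod IPs directly,
# skipping the NodePort/kube-proxy hop.
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
```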

Next

In the next post, we’ll take a close look at Istio Gateway, and how it differs from (and is similar to) K8S Ingress.
