Ingress, Service Mesh & the in-between

In the past few months I've been getting this question a lot, and I thought it might be a good idea to discuss "Ingress"/"Ingress Controllers", "Service Mesh" and the relations between them …

These are two big subjects which would probably become a long read, so I'll do my best to separate this into parts.

TL;DR

If you have one exposed service you "don't really need an ingress"* (or a service mesh, for that matter); you could do with a "classic" standard resource called a Service. I starred* that because you might be running on-prem (god forbid ;)) and you don't have a "fancy ELB" …

Ingress comes to solve the router part of our service; in the past we called it a reverse proxy, used whenever we wanted to expose a service on privileged ports and re-use the underlying OS to route traffic to our applications (for security reasons or efficiency ones).

In OnPrem/OnCloud*/Hybrid infrastructures, where you usually have both legacy systems and the new shiny micro-services-architected ones, you hit, without realizing it, a ton of tooling (e.g. Ansible dealing with legacy and Terraform for the new generation, Swarm in staging and Kubernetes in production, all at the same time). Your services end up distributed on hybrid infrastructure (non-standard / multi-vendored / ad-hoc / best effort), which could be partially in the cloud and partially on-prem, or a monolith being broken down into micro-services - which takes time, budget and resources. Until you get there, the biggest questions are:

  • "what failed" - and when do you get to know that
  • "why did it fail" - and
  • how do we "make sure it doesn't happen again"

You're really dealing with chaos, considering the loosely coupled nature of distributed systems and the asynchronous nature of messaging and queuing. The Twitters and Netflixes, which built their networks on micro-services, invented the service mesh, designed primarily to provide insight into what's going on in their by-design distributed system on one hand; on the other hand, since it is already controlling / sniffing the line, it could, and needs to, provide techniques to manage / block / allow / re-route traffic, etc.

In this post I will try to "tell the story" in retrospect, relative to my experience of course. Many practices we used to call X are now being "re-branded" and in some cases "re-invented", but they all adhere to principles we've been practicing all along.

Considering Kubernetes is the enabler of all these "goodies", we should familiarize ourselves with services and how they are applied, and understand the ingress and ingress-controller roles, which are the new implementations I mentioned above. Once we've established the ingress "part of the story", I will try to understand where one might need a service mesh.

Please note -> there is no need to be religious in terms of choices/libraries/utilities; there are many, and again we are in the era of:

"Do the best you can until you know better. Then when you know better, do better." (Maya Angelou)

And with my addition: don't just sit there and wait for it to do itself; try; fail; try again; and eventually succeed.

Background

As an ops guy in my past, I had a hard time looking top->down rather than (or in addition to) bottom->up. Kubernetes is a complex system, and most of the tutorials you come across are very high-level: they start from the "go to" objects such as Deployments, StatefulSets etc. and explain how replication, Pods and Services work. It was also very confusing at first to understand the difference between the service types, the ingress, and later on Custom Resource Definitions, which wrap all that in yet another wrapper. When you take the bottom->up approach - and I've seen this approach mainly in blog posts such as this one - things make (at least to me) a hell of a lot more sense.

Services & Ingress from Definition to Reality

Before we get into how an ingress works we need to establish 2 things:

  1. Overlay Networking
  2. Services

Overlay Networking - brief

Overlay networks & bridging are not new concepts …

When you take this paradigm and apply it at the host level, you've created a small router - which, in a nutshell, is what Docker (or any container runtime) does.

Docker host-level networking concept

And considering this doesn't scale to multiple hosts … we must implement some kind of routing mechanism which is aware of the network layer. This "thing" is nowadays implemented (and extended) via CNI, the Container Network Interface.
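To make that a bit more concrete, here is a minimal sketch of a bridge-plugin configuration of the kind CNI-compatible runtimes consume (the field names follow the CNI spec; the network name and subnet are made-up values for illustration):

{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}

Plugins like Flannel, Calico or Weave implement this same interface, each solving the cross-host routing part in its own way.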

Services

So once we've established how multi-host routing is handled, we can understand services: they are like (blazing fast, no-TTL) DNS records which "just know" how to resolve a given DNS name, e.g. api.example.com (or, inside the cluster, api.default.svc.cluster.local), to multiple IP addresses, e.g. 172.17.0.2, 172.17.0.3 etc.

Kubernetes comes pre-packed with 4 service types, and each type has a purpose:

  • NodePort - exposes a certain port, e.g. 30001, on every node, which means you can configure an external network load balancer / router to route traffic to all the IP addresses of your cluster on that port for your application to work.
  • LoadBalancer - depending on where you're running; on cloud providers this service type will provision an NLB (Layer-4 Network Load Balancer) and register the cluster's nodes as targets to route traffic to your application (see the sketch after this list).
  • ClusterIP - exposes services internally, so it is basically used for services to communicate with each other inside the cluster.
  • ExternalName - exposes / maps an external DNS name to an internal DNS name, so internal services can discover external ones.
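Just to make the two "externally facing" types concrete, here is a minimal sketch (names, labels and ports are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: LoadBalancer    # on a cloud provider this provisions an L4 load balancer
  selector:
    app: api            # route traffic to pods labeled app=api
  ports:
  - port: 80            # the port the load balancer listens on
    targetPort: 8080    # the port the container listens on
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-billing
spec:
  type: ExternalName    # legacy-billing.default.svc.cluster.local resolves...
  externalName: billing.example.com    # ...as a CNAME to this external name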

In this part I will try to visualize, from an operational perspective, the evolution of the "ingress", and whilst doing that answer some obvious questions like:

  • What does an Ingress look like?
  • How does an Ingress differ from a service?
  • How does an Ingress complement a service?

If we look at the documentation's definition:

Ingress - An API object that manages external access to the services in a cluster, typically HTTP. Ingress can provide load balancing, SSL termination and name-based virtual hosting.

The best way to look at this: an Ingress is "linked" to a service in the same way a service is "linked" to a deployment / pod - which is literally how we look at our application when running in Kubernetes. In real manifests this link is by name and label: the Ingress backend names the Service, and the Service's selector matches the Deployment's pod labels.

The "Pseudo View" of the "Ingress -> Service -> Deployment" relationship would look something like the following:

{
  "Ingress": {
    "Service": {
      "Deployment": {
        "ReplicaSet": {
          "Pod": {}
        }
      }
    }
  }
}

From the definition:

“An API object that manages external access to the services in a cluster”

What I am usually asked at this point is: aren't services used to do just that?

Well, it depends …

If I refine the question above it would be somethin' like:

How does an ingress differ from a service of type NodePort, LoadBalancer or ClusterIP?

Services

Well, each type has a purpose, as listed above: NodePort exposes a fixed port on every node for an external load balancer / router to target, and LoadBalancer provisions a cloud NLB per service. ClusterIP is irrelevant for this scope, considering it just exposes services internally - it is basically used for services to communicate with each other inside the cluster.

media/ingress-1-nlb-2-svc.png

The result is pretty simple, as described above. Considering we already have our "shiny new Micro-Service Architecture", services suffice for inner-application communications (like on the right-hand side of the diagram). Furthermore, they would also suffice if you have 1-2 exposed services like api.example.com and admin.example.com; it would look like this:

media/ingress-2-nlb-2-svc.png

But in the long term this practice is a bit "stupid" (or costly - an LB per service) if you have more than 2-3 services, considering each service of type LoadBalancer provisions a load balancer from its cloud provider (or MetalLB for on-prem solutions).

Up to this point we've been using Network Load Balancing to route traffic from the world to our application (Layer 4, which is the transport layer). When it comes to the Application Layer (Layer 7), you could - and in some cases have to - use either your existing ESB or an Application Load Balancer to perform more sophisticated routing, such as pointing example.com/api to the api service and example.com/admin to the admin service, and so on, as illustrated below:

media/ingress-2-nlb-esb-svc.png

Or, alternatively (like we used to do before "ingress controllers"), a web server - nginx or Apache - which did this Layer 7 routing for us and communicated with the NLB:

media/ingress-2-nlb-ws-svc.png

Recap

-> So if you have 1-2 exposed services you could get away with one of the simple solutions specified above …

  • Reuse your ESB
  • Use a LoadBalancer per service (IMHO works up to 3 services)

If you're thinking:

"I can just use an nginx in a container to do this "layer-7" routing stuff, so what do I need a fancy "Ingress controller" for?" - you can skip to section 3, where the number of integrations built into controllers might be the reason to change your mind …


Part 1.2 - Ingress & Reality

But what about the mapping of, let's say, appa.example.com to deployment a? Or, to be more complex, say we want www.example.com/admin to point to deployment admin and www.example.com/api to point to deployment api.

In this case all you need is a service + an ingress … and someone needs to pick up that piece of configuration and do something with it.
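Here is a minimal sketch of such an ingress (assuming an nginx-class controller; the host and path are carried over from the example above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx    # which ingress controller should honor this object
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /api           # www.example.com/api -> the api service
        pathType: Prefix
        backend:
          service:
            name: api        # must match the Service defined below
            port:
              number: 8080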

It corresponds to the service defined below:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
  name: api
  selfLink: /api/v1/namespaces/default/services/api
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31623   # the port exposed on every node in the cluster
    port: 8080        # the service's own (cluster-internal) port
    protocol: TCP
    targetPort: 8080  # the container port traffic is forwarded to
  selector:
    app: api          # route to pods carrying the label app=api
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Install an ingress controller {like NGINX, Contour, Kong and many more}, and in a perfect world you could call it a day (HINT: if you're Instagram / Netflix ;)).

Reality

Our reality is usually something like the following:

On the left-hand side we have this monolith no one wants to touch, and on the right-hand side the "shiny new" micro-service infra with all the bells and whistles …

1.2 From simple web proxies to Ingress Controllers

Until now, in or out of our monolith, we already had an "ingress controller". It may have come in the shape or form of hardware from Palo Alto / F5 / Citrix and other appliances which did exactly what we are now asked to implement, and in many cases these kinds of devices "survive" the battle and continue to do this kind of Layer 7 routing (where you are interfering in the application protocol, which in this case is HTTP itself). Some call them ESBs, which stands for Enterprise Service Bus, and they are very similar in approach to what Ingress Controllers provide.

We can also see how a service is tightly coupled to the deployment: without it, the deployment isn't accessible by anyone, declaratively - which, as we will see in a bit, is the key point.

In reality we usually have multiple services and perhaps multiple domains. Ingress provides a way of abstracting that "service to deployment" coupling, and in the context of reality (see below) it provides you a way to "bisect" your application piece by piece, hopefully without influencing the overall customer experience (in parallel to introducing messaging & other micro-service-related methods, of course).

A good example may be that apiv1 is served by our monolith application and apiv2 is already in the shiny new architecture, but they are still the same application … In the past we had an "F5" / "Netscaler" / other appliance which did this type of Layer 4 load balancing at first, and quite quickly, with the growing demand of micro-services, Layer 7 load balancing. The big difference is not just being in the control path - in Layer 4 you tell x where y is located without inspecting the payload, whereas in Layer 7 load balancing you can inspect the content of the data and be in the data path. A sketch of such a split follows.
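As a hedged sketch of that bisection (host and service names are made up; note that routing an Ingress to an ExternalName service is controller-dependent - ingress-nginx, for one, supports it):

apiVersion: v1
kind: Service
metadata:
  name: apiv1-monolith
spec:
  type: ExternalName            # the monolith still lives outside the cluster
  externalName: monolith.example.com
  ports:
  - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-split
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /api/v1           # legacy traffic -> the monolith
        pathType: Prefix
        backend:
          service:
            name: apiv1-monolith
            port:
              number: 80
      - path: /api/v2           # new traffic -> the shiny micro-service
        pathType: Prefix
        backend:
          service:
            name: apiv2
            port:
              number: 8080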

At this point it makes sense to introduce an ingress controller, which extends the ingress capabilities and acts as a middleware for internal->external communications - which really makes sense in the real world, as illustrated below:
