I'm Lee Calcote. I'm a founder of Layer5 and a cloud native ambassador. We are here today to talk about tales of Kubernetes ingress networking: deployment patterns for external load balancers. Today I'll be moderating our webinar. I am joined by our presenter, and I'd like to welcome Manuel Zapf, a solutions architect at Containous. Hi, Lee. Thanks for having me. Very good. So let's go ahead and get started.

Just as we always do, a couple of housekeeping items. All of your questions are encouraged; Manuel is here to take them on. Among these housekeeping items, I do want to let everyone know that during the webinar you're not able to talk as an attendee. That said, there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to them as we go. Most likely we'll hit those at the end, but if there's an urgent one, Manuel, fair warning, we may interrupt and ask it. Please submit your questions in there; they will be answered. This is an official webinar of the CNCF, and as such it is subject to the CNCF Code of Conduct. So please don't add anything to the chat or questions that would be in violation of that Code of Conduct. Essentially, please be respectful of your fellow participants and the presenter. And with the housekeeping out of the way, let me welcome Manuel and hand it off to you.

All right. Thanks for handing it over to me. Hey, all. Welcome to this webinar today. As already introduced, we're going to speak about tales of Kubernetes ingress networking: deployment patterns for external load balancers. One organizational thing first: I will keep this slide open for 10 to 15 seconds in case you want to follow the slides online while I'm talking. There we go.

As already introduced, my name is Manuel Zapf. I'm a 30-year-old living in Germany, so all good. I'm currently working as Head of Product Open Source at Containous, as was already mentioned. You can find me on Twitter or on GitHub; I'm happy to answer questions there later, or to receive your feedback on the talk or whatever else you want to talk with me about. As mentioned, I work for Containous. We believe in open source. We are the company that delivers Traefik, Traefik Enterprise Edition, and Maesh, and we also offer commercial support. We are currently around 30 people distributed around the globe, and around 90% of our people have some sort of tech background. So we try to take all the knowledge we have and make our products fit what our user base, and especially the cloud-native landscape, needs.

With that being said, and without further ado, I would like to get to the actual topic, which is deployment patterns for external load balancers. However, we need to start at the ground first, and that is, of course, a Kubernetes cluster. The way a Kubernetes cluster is built, and you probably all know this, is that the cluster is made out of nodes, and these nodes run pods. That's essentially it. That's where the scheduler places workloads, and we go from there. The first thing that comes up is that pods have private IPs. So if pods want to talk to each other, or traffic should be routed from the outside to these pods, things get a bit tricky, because they just have private IPs; by default they can only be addressed from inside the cluster. So there's a gap, let's say. Luckily, Kubernetes has a resource to overcome this challenge, which is the Service.
And the Service comes right to your rescue in that case. The goal of the Service is to expose pods and allow incoming traffic, no matter where it comes from. So as you can see, pods can be grouped together into a Service, and then those pods can be reached, because traffic will just flow through it. As the diagram might suggest, there can be multiple pods grouped in a Service, so in a way a Service also acts as a load balancer. Services can have multiple endpoints, or just one, or even none. These endpoints are determined by the Kubernetes API: the Kubernetes API decides, given certain factors, what the possible endpoints are for a Service to route its traffic to.

How does it determine that? Well, one part of this is that Services can have different types. We can group these types into two major directions, let's say, because the kinds differ and enable you to use one or the other given your use case. If you think about access from inside the cluster, you would take the inside type, and the default type here is ClusterIP. If you think about exposing your services, your applications, to the outside, you would go for one of the outside types, which could be either NodePort or LoadBalancer, for example.

So first, we'll have a quick glimpse at these service types, and of course we start with the service type ClusterIP. If you deploy a Service with the type ClusterIP, this Service receives a virtual IP which is private to the cluster. What that actually means is that this IP is only reachable inside the scope of the cluster, and it only works within that cluster. So if, for example, a pod on this node wants to reach a pod on that node, it would go through the ClusterIP of the Service, which is a virtual IP. The Service will then connect it to this pod or that pod, making the connection work no matter which node of the cluster the pod is scheduled on. So it can mean any node to any node.

While this works fine for communication within your cluster's borders, it is certainly not enough if you want to expose an application to the outside. The first thing we want to look at in this scenario is having a Service of type NodePort. If you go for a Service of type NodePort, what it basically does is use two different things: first, the public IPs your nodes might be addressed with and responding on, and second, ports on those nodes. Based on that combination, it forms some sort of routing grid, let's say. What a Service of this type does is open a port on every node inside the cluster. Let's say, for example, we deploy a Service with the type NodePort and it acts on port 30500. It would open this port on every node. If a client now wants to connect to a certain application, it would go to a node's public IP on port 30500, which is handled by the Service, and the connection would then go straight to the pod. So this is one way to expose your application easily, let's say.
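To make the NodePort type concrete, here is a minimal sketch of such a Service manifest; the name, labels, and ports are made-up placeholders rather than anything from the demo environment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app              # hypothetical application name
spec:
  type: NodePort              # opens the same port on every node in the cluster
  selector:
    app: demo-app             # pods carrying this label become the endpoints
  ports:
    - port: 80                # cluster-internal Service port
      targetPort: 8080        # port the pods actually listen on
      nodePort: 30500         # port opened on each node, as in the example above
```

A client outside the cluster then reaches the application through any node's public IP on port 30500, exactly as described above.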
Taking it a step further, the second type we have is the type LoadBalancer. It's much the same as NodePort, except that it requires and uses an external load balancer in front. So you have the external load balancer running in front, and from there it would go, as we've seen before, to a NodePort and from there to the actual pod. But for simplicity, in this picture we just go straight from the external load balancer to the pod. So instead of the client targeting or trying to connect to a node directly, it connects to the load balancer, and the load balancer passes the traffic straight through.

As good as this seems, Services alone are still not enough. Let's take a scenario where we want to expose a bunch of applications externally. Typically, or at least in cloud environments, when you deploy a Service with type LoadBalancer, a dedicated load balancer resource is created in the cloud for you, which will be either a machine or a dedicated appliance or whatever. This of course also requires a public IP, because you need to point to something. And if you think one step further, Amazon load balancers, for example, are typically exposed via CNAMEs. So you might have to create a bunch of CNAMEs on your own to link to the load balancer exposed for each application. Just managing those DNS records will be a nightmare. Additionally, as you have multiple instances or multiple resources, you have no real centralization of anything. Whether it's TLS certificates or logs or whatever, everything is basically managed on its own, and you don't really have a good way to consolidate that. This is quite a problem.

Kubernetes allows for a different concept, which goes just a step further: the Ingress. In this example here, we're using Traefik as an ingress controller. It is exposed over a Service of the type LoadBalancer, which we just saw. But then the ingress controller reads the Ingress objects inside your Kubernetes cluster and makes sure that traffic is routed to the pods of the Service it needs to reach. So you have some sort of intermediate there, because the request hits the controller and then goes to the pods.

Some notes about the ingress controller, though. Ingress controllers are standard Kubernetes applications. They're typically deployed as pods, which these days means either a Deployment or a DaemonSet, and from there they are exposed through the Service object we just saw. You still need to access them somehow from the outside, because they front your applications, but ideally you only have to deal with one Service now, because your ingress controller takes care of the rest. From a connection standpoint, the ingress controller involves Services as well: coming from the outside, you have your public domain or IP, which points to your Service of type LoadBalancer, which in reality is an external load balancer. From there, the request gets forwarded to the ingress controller, which is typically a pod running inside your Kubernetes cluster. The controller then looks up the Service of the application you want to expose, which is just a private Service of type ClusterIP, and from there it goes to the pod that backs your service.
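As a rough sketch of what such an Ingress object can look like (hostname, service name, and ports are placeholders, and the exact apiVersion depends on the cluster version):

```yaml
apiVersion: networking.k8s.io/v1     # older clusters use extensions/v1beta1
kind: Ingress
metadata:
  name: demo-app                     # hypothetical name
spec:
  rules:
    - host: demo.example.com         # the virtual host this rule reacts on
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app       # a ClusterIP Service in the same namespace
                port:
                  number: 80
```

The ingress controller watches objects like this and routes matching requests on to the pods behind the referenced ClusterIP Service.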
Why is this actually cool? Or, to ask it the other way around, why should you actually care about this? It's a simpler setup, because you have a single entry point which takes care of everything, and that results in less configuration: you only have to configure one instance instead of several. That also leads to fewer resources being used, because you might have just one single load balancer targeting the entry point instead of having multiple. And what makes this really nice is that with an ingress controller you can achieve a separation of concerns quite easily, because the ingress controller takes care of things like different load balancing algorithms, circuit breakers, retries, and so on, instead of you needing multiple layers for that, let's say.

But what does it bring with it? What could be hard about it? Well, by design, Ingresses are for simple HTTP or HTTPS cases. They are virtual-host centric, let's say. So for all of you who already know Ingresses, and I guess that's most of you: you can set a host name to react on, you can set some paths and so on, but it's always centered around that host name. So it's really designed to help in the HTTP/HTTPS case. It can be used with TCP or UDP, but that's definitely not a first-class citizen, because per the specification it is designed for HTTP or HTTPS.

That could lead to the feeling that you really have to select your one ingress controller very carefully, because that's the component consuming your Ingress objects and then somehow, magically, making sure that your pods actually receive traffic. But it's a little different in Kubernetes, because Kubernetes gives you freedom here: you can use multiple ingress controllers if you want. The way this works is through the concept of annotations, some sort of meta information you can attach to an object, and the de facto standard annotation here is the ingress class. So let's say you have multiple ingress controllers deployed in your environment: by setting this annotation, basically giving each Ingress a class, you can select which ingress controller will take care of which Ingress you deployed. So if you need multiple ingress controllers, maybe because no single one has all the features you need, you're welcome to deploy several and just control via the annotations which one is responsible for working on which particular Ingress, as the sketch just below illustrates. Kubernetes gives you the freedom for all the choices you need here. There are so many deployment patterns that you can do almost anything; it really always depends on your case and your targets which pattern is the most desirable one for you.
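Coming back to the ingress class annotation, here is a hedged sketch of how two controllers can coexist; the names are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api                          # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik      # only the Traefik controller handles this Ingress
spec:
  rules:
    - host: api.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api
                port:
                  number: 80
```

A second Ingress annotated with, say, kubernetes.io/ingress.class: nginx would be ignored by Traefik and picked up by an NGINX controller instead; newer Kubernetes versions express the same idea with the spec.ingressClassName field.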
All right. I already touched on this briefly with the type LoadBalancer: I said there might be an external load balancer in front, but what does that actually mean? External in this case means outside the borders of Kubernetes. If you're running within cloud environments, for example on Amazon or Google Kubernetes Engine, this runs outside of your actual platform, in the surrounding infrastructure, whether on-prem or cloud; it just depends on where it actually runs. However, that doesn't mean this piece of technology isn't managed through Kubernetes anymore, because what typically happens is that some automation, the load balancer provider, is listening on the Kubernetes API, and there is code written for that: operators, modules, plugins, call it whatever you want. As the load balancer provider is listening on the Kubernetes API, it sees that there is, for example, a new Service deployed with the type LoadBalancer, takes the management away from you, and does everything that needs to be done. In case you are on bare metal, so there might be no API for your load balancer provider, or your load balancer provider simply has no Kubernetes support, you will be required to switch back to a Service of type NodePort, because, as we just learned, the two are essentially the same except that one has an external load balancer in front; if there is none, switch back to NodePort and all is good.

To sum that up, and I touched on this briefly already, what you will do depends heavily on which Kubernetes distribution you are currently using, because that decides which load balancer solution you should use. If we take a quick look at the actual Kubernetes distributions, we have three different flavors, let's say.

The first is definitely cloud-managed Kubernetes. You all know GKE, EKS, AKS, DigitalOcean, whatever: cloud providers that manage Kubernetes for you, and usually they also provide their own external load balancer solution. How that works I already explained quickly: there is fully automated management through the Kubernetes API. You just do what you need, you expose your Service with type LoadBalancer, and the API machinery kicks in and makes sure that the load balancer resource is created as you need it or as you want it. This of course results in a really great user experience, because thanks to the integration, for you as an end user it just works, plain out of the box. You don't have to do anything more. Also, you get all the benefits the cloud provider offers: high availability, performance, all of that. You just have it, because the cloud provider kicks in and spawns the resources for you. You use a product of your cloud provider, and of course you immediately get all the benefits it offers.

There are a couple of downsides to this as well, of course. You have to pay for it; the cloud provider isn't doing it just to be nice to you, they need to earn money, and that's fine. As already mentioned, configuration is done using annotations, and as these are cloud-specific annotations, the configuration can change. Let's say you have your cluster on Amazon and you switch it over to Google Kubernetes Engine: it can happen that you have to reconfigure things by finding the other provider's annotations and so on. There is no standardization in it; you just put in your meta information, which is basically just key-value pairs, and from one provider to another it can change. Also, what you can actually configure relies heavily on the actual load balancer implementation. Whatever limits the load balancer in front of you has, you are working within those limits.

The second flavor is bare-metal Kubernetes, also known as running on your own boxes. Let's say you're just taking some droplets or some EC2 instances or whatever and managing Kubernetes on your own.
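Tying the two flavors together: the same Service of type LoadBalancer works in both worlds, and only who fulfils it differs. A minimal sketch, with hypothetical names and an AWS annotation shown purely as one example of provider-specific configuration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app                  # hypothetical name
  # provider-specific tuning lives in annotations, e.g. on AWS:
  # annotations:
  #   service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer              # fulfilled by the cloud controller, or by MetalLB on bare metal
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
```

On GKE, EKS, or AKS the cloud controller provisions the external load balancer for this Service; on bare metal, MetalLB, discussed next, assigns it an address from a pool you configure.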
From our experience during the last months, the best approach there is to use something like MetalLB, which is a load balancer implementation for Kubernetes that actually runs inside your Kubernetes cluster. What that means is that MetalLB uses all the Kubernetes primitives you have: it's deployed like a typical Kubernetes application, but has all the benefits like high availability and so on. It can operate in layer 2 mode on the network, as well as in BGP mode. The only downside is that they still consider themselves not production ready; they describe themselves as being more or less in a beta state, so take care with what you do. And if, for that reason for example, you don't want to use something like MetalLB, the option you have is an external static load balancer, something that has been done for quite some time now. But again, this requires you to switch back to the Service type NodePort, because you need a target for your external load balancer, and that is typically a combination of nodes and ports; that's exactly where the NodePort Service has its strength and comes back in.

Last but not least, we also have what we call cloud semi-managed Kubernetes. This again depends heavily on the actual compute provider, whether you're running in a cloud or completely bare metal inside your data center. You then need a tool for managing those clusters, and you typically know them: kubeadm or kops, and there are some more. If you use these tools to manage your Kubernetes clusters, they sometimes also already manage the load balancer, and there the compute provider comes back in: if the compute provider offers something like this, then the tooling can manage it for you.

Okay, now that we've explored some of the ways this actually operates and what the ups and downs are, let's go back to why this is all important. What is really important in terms of running the network is the source IP. We want to know who is currently sending requests, because sometimes business managers have this requirement for various reasons. It can be that they need to know the IP of the sender of a request to track usage, to bill the right person, to write access logs, for legal reasons, whatever. There can be a really wide variety of cases for that.

To make that happen, things can get a little tricky, let's say, because what happens then is NAT. NAT stands for network address translation. In an IPv4 world, the router masquerades IPs, really the network part, to allow routing from one network to another, different network. This process has two sub-processes, let's say. The first is DNAT, which stands for destination NAT: in that case, the router masquerades the destination IP with an internal IP, for example a pod IP. We will see a chart for that on the next slide. The other option is source NAT, SNAT, which does it the other way around: it masquerades the source IP with the router's IP.

If we take a quick look at it, just as an example, we have a setup with a client, a router, and a server. The client is addressed with one IP, the router has an external IP on the outside, and on the inside the router has an internal IP and so does the actual server. In the case of destination NAT, the client tries to connect to the router's external IP address. The packet reaches the router, and it still has the source IP correctly set to the client. The router then does the destination NAT: it masquerades the destination IP address, the outer destination address, with the internal IP address of the server, but keeps the source IP.
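A made-up illustration of that DNAT step, with all addresses being hypothetical placeholders:

```text
client 203.0.113.7  -->  router, public side 198.51.100.10  -->  server, internal 10.0.0.5

packet before the router:  src 203.0.113.7   dst 198.51.100.10
packet after the router:   src 203.0.113.7   dst 10.0.0.5
```

Only the destination address is rewritten, so the server still sees the real client IP.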
In the case of source NAT, the router does it the other way around. The request goes to the router, and the router masquerades the source IP with its own, as if it were emitting the request itself, keeps the destination address, and forwards it. That's a bad idea for us, because, as we just said, our business manager wants us to preserve the source IP. Our golden rule is therefore that we do not want SNAT to happen, because that loses us the source IP and we cannot fulfill our business manager's needs. The actual challenge is that intermediate components, such as an external load balancer, can interfere in this process and potentially cause SNAT to happen in one of the steps, which can make us lose the source IP.

If we now look inside Kubernetes itself, we have kube-proxy. What is kube-proxy? Kube-proxy is a Kubernetes component running on each worker node. The reason we have kube-proxy is that we need something to actually manage the virtual IPs used by the Services, and we need something to route them. The challenge with this is that, depending on the case, kube-proxy might actually SNAT requests, and that depends heavily on the service type. As we just learned, there are three different service types, and what kube-proxy will actually do depends on them. Let's do a quick tour.

The first one we look at is keeping the source IP with a Service of type ClusterIP. By default, kube-proxy is deployed in iptables mode, which is good for what we need, because in that case no SNAT happens: there is no intermediate component. The client requests the ClusterIP of the Service, which kube-proxy has configured, and we just go straight to the pod. There is nothing in between that tampers with the request or might rewrite packets, so we cleanly have the source IP of the request emitter, and we are good.

Things get a little more complicated when we go to Services of type NodePort, so let's have a look at what actually happens there. The client requests one of the nodes, let's take node 1 in this example, on the port we exposed, and kube-proxy sees: okay, there is no pod here to go to, I need to go to a second node where the pod is running. So it SNATs the original request in order to forward it to another node inside the Kubernetes cluster. At that point we have already lost the source IP; kube-proxy then forwards it to the pod running on that node, but the source IP is lost. We can't fulfill our business manager's needs.

There is another variant, let's say, for Services of type NodePort: Kubernetes offers something called the external traffic policy, with which you can configure the Service a bit. If we set this policy to Local, we get the good thing that no SNAT will happen, but we also get a downside. Let's have a look at it. The client goes, again, for node 1, goes through the port to kube-proxy, and kube-proxy sees: all right, I don't have the pod running on my node, and as the external traffic policy is set to Local, the request is dropped right here, because there is no pod to forward the traffic to locally; it just cuts it off. The other way around, the client goes to the other node, the one that has the pod, kube-proxy knows it has the pod, and the request just goes through. In that case, because there is again no intermediate component, we keep the source IP.
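As a quick sketch of where that setting lives (a hypothetical NodePort Service; the same field applies to type LoadBalancer):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app                   # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local     # only nodes with a local endpoint answer; the source IP is preserved
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30500
```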
Now, the service type building on this is LoadBalancer. There, by default, SNAT is done, same as with NodePort and for the same reason, so it just doesn't work for us. The external load balancer can route to any node inside your Kubernetes cluster, and if the node receiving the request has no local endpoint, Kubernetes will forward it to another node, SNAT happens, and the source IP is lost.

However, there is also another way: we can set the external traffic policy to Local here as well, and a lot of load balancers actually implement this, for example the Google Cloud load balancer on Google Kubernetes Engine, or the Amazon NLB, and others. What happens then is that nodes without a local endpoint have failing health checks, because nothing is responding on the port, and therefore these nodes rotate out of, let's say, the target group the load balancer forwards requests to. This is nice, of course, because no requests are dropped from a client perspective. Everything works, and the health checks make sure that all the targeted nodes are healthy, because that's the job of the checks; they are always ready. That's perfect. The only downside here is that it relies heavily on health check timeouts. If the health checks are a bit too slow, you might still forward requests to unhealthy nodes for a while, and then those requests might be dropped.

What we also have to accept is that sometimes SNAT is mandatory; sometimes you just can't avoid it. Usually that happens with external load balancers, as we just heard. There can also be network constraints that force SNAT to happen, or, for example, an ingress controller in the middle, because there is another component doing the job of routing and it might also need to forward requests to other nodes, so we have the same problem again. But as we already figured out, networking is based on layers; we have seven layers in the classic model. So why don't we try to tackle this issue by moving it to another layer? In the case of HTTP, which is also the first-class citizen of Ingress, we can retrieve the source IP from the headers. If we are going for TCP or UDP, we could use the proxy protocol, which might help us here. Another option would be to use distributed logging and tracing, because our manager just wants to have the information, and if we capture it early in the flow, in other systems, that could work as well.

So let's have a look at the HTTP part first. We have headers in HTTP because they are part of the protocol, and one of them is X-Forwarded-From. This header typically holds a comma-separated list of all the source IPs that have been placed in the header due to network hops where SNAT took place. The good thing is that lots of external load balancers and also ingress controllers support this header. So in case they receive a request and need to forward it somewhere else where SNAT happens, they just put the current client IP into the header. Your actual application running in the pod can then still figure out who the original caller was, because it basically just parses the header and sees: all right, the first IP was this one, okay, that's my source. As you see, the header starts with an X, which means it's not a standard header, which in turn means that not all HTTP appliances might actually support it.
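As a concrete illustration of such a chain, with made-up addresses (the header is commonly written X-Forwarded-For):

```text
X-Forwarded-For: 203.0.113.7, 10.0.0.2, 10.0.0.3
```

The left-most entry is the original client; each proxy that had to SNAT the request appended the client address it saw before forwarding.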
This has been recognized, and therefore there is an official HTTP header called Forwarded which does the same job. As it's a standard, official header, all appliances should support it, and at least some part of this issue gets solved.

The second option we want to have a quick look at is the proxy protocol. The proxy protocol was introduced by HAProxy, which is another reverse proxy solution. It operates at layer 4, the transport layer, so TCP and UDP. The goal of the proxy protocol is to chain proxies without losing the actual client information. So even if the request passes through a proxy, the goal is to not lose the source IP. Luckily, the proxy protocol is supported by a lot of appliances: the Amazon Elastic Load Balancer, Traefik, Apache, NGINX, really lots of them. So this is a good, let's say, standard that can be used in many, many cases. You typically use it when SNAT happens but you're operating, for example, at the TCP level, where there is no way to use HTTP headers, so you need an alternative, and this can be a good one.

Last but not least of the three options is distributed logging and tracing. That's more of an idea for now, let's say. The idea is to collect the source IP as soon as possible and put it into my distributed logging system, because once I have it there, I cannot lose it anymore, and then use the tracing system to actually track the requests and do billing or whatever I need to do. The good thing about this is that it doesn't require a more complex network setup, because in most cases the distributed logging and tracing stacks are already available inside your cluster, so you can just reuse the existing stack. However, it relies heavily on the actual stack deployed on your cluster, so it can be a bit cumbersome, let's say, to get it running and working.

Last but not least, I want to give you two use cases and how this could actually work in reality. The first, and probably the easiest use case you can think of, is having the external load balancer with a traffic policy. How this works is that you just deploy your Service of type LoadBalancer and set the external traffic policy to Local. The Kubernetes API has this information available, so the cloud provider can automatically configure a load balancer instance based on it. A potential client will just point to the public domain or IP, which ends up being the load balancer that was automatically configured. From there, the request goes to a node which has the actual target pod running; kube-proxy forwards it to the pod. All good, we're safe. The good thing here is that we have full automation, because the cloud provider takes care of it and makes sure that it works. But as you can imagine, it really depends on the actual load balancer implementation. If your load balancer doesn't support the external traffic policy, it won't help you, because it could then also target a node without the pod, and then you're still in the wild and might have dropped requests.

The second use case is capturing the source IP from the HTTP header, because that's also an option. The good thing here is that it's a really simple setup, because it just relies on the HTTP headers, which are part of the protocol. So the client talks to an HTTP reverse proxy.
The HTTP reverse proxy appends the original client IP to the X-Forwarded-From header, as we just heard a couple of minutes ago, and forwards the request to the actual ingress controller. The ingress controller then forwards the request to your backend, or better said, to your pod, and from there the pod can just read out the value in the header and you're good to go. However, the big downside is that it only works with HTTP, because it's an HTTP header field. But in most cases, or in some cases, let's say, that can already be sufficient.

Here are some sources in case you want to read up again on what I just said and potentially learn more about this. We also published a white paper called "Routing in the Cloud", which picks up this topic and evaluates it a bit more, because there you have more time to read it and I don't need to present it. So, last but not least for the actual presentation: thank you for being with me, thank you for listening to me, and now I'm looking forward to all your questions.

Well, I have to say, I'm not looking forward to all the questions. Why is that? Well, my goodness, what a popular topic you have. So this is great. We've got lots and lots of attendees on the call, and enough questions to fill the rest of our time. So let's see if we can choose a few that might benefit the most folks. One was just maybe a brief point of clarification on the last couple of slides. The question is, I think you were speaking about an X-Forwarded-From, and just whether or not that might have been a typo with respect to X-Forwarded-For. Yeah, that was a typo, sorry about that. Very good. You pulled a Lee on that one. So we've got that.

A few other questions here. One from Karthik is: how do you expose the Traefik ingress controller pod? As I said, pods are exposed through Services, and that's actually the same for everyone, let's say. Let me bring up the slides. So let's say one of these pods is a Traefik pod. You expose it by a Service, or actually we can take this, this is better: this is your Traefik pod, and it is exposed over a Service of type LoadBalancer, typically. And then, as I said, if you're running on Google Kubernetes Engine or Amazon or whatever, the provider will see this and spin up the cloud load balancer instance for you. And as this is of type LoadBalancer, which is an enhancement of type NodePort, it will route to the ports exposed by the Traefik pod. From there you have the routing, and you have it accessible from the outside.

Very good. Some other questions here. The question is: can you please state some pros and cons of using Traefik as an ingress controller versus using NGINX as an ingress controller? Yeah, sure. The main difference, let's say, and I don't mean this in a bad way, it's just the state of what it is: NGINX was made a really, really long time ago, because it was solving a need back then. And therefore, for a long time, they have had an architectural limitation, let's say, which is that they need to restart the process when they reload configuration. And as we know, in the container world, containers can move from one node to another in milliseconds, so the endpoints might change.
And that requires the actual NGINX process to boot up again, which could potentially lose some requests, let's say. So that's one of the main differences between NGINX and Traefik, because Traefik is built with that in mind: it auto-configures itself in some sort of hot-reload fashion, and therefore you won't lose requests. That's actually the biggest difference. Traefik has been built, as I said, with the cloud world in mind. So everything Traefik does, fetching Let's Encrypt certificates and all of that, is meant to make containerized ingress work better. NGINX works, but it was built for a different use case and is just being ported over.

All right, very good. The next question is: is it advisable to have more than one replica of the ingress controller, for both performance and redundancy? The answer, sadly, is: it depends. Of course, having more than one replica is always something you should keep in mind, at least for the failover, redundancy, and high-availability part. However, especially for an ingress controller, it can be a bit tricky; it depends on what the ingress controller actually does. If the ingress controller is also, for example, managing Let's Encrypt certificates for you, you bring in some challenges, because then all your replicas need to have access to the same certificates, to have all of them available for TLS termination and so on. But in general, it is something I would go for if your case allows it, because it makes your life easier. Yeah, that's my answer, otherwise I'll talk too long.

Oh, very good. There's a related question to that: how do you scale the ingress controller so that it's not a bottleneck? What we typically do in these sorts of scenarios, also with our customers, is assess the situation they are facing together with them, because if someone tells you, okay, I'm having resource issues on my ingress controller, it can have two different causes, let's say. One, of course, is that you might have found a bug in the software, which might be causing, I don't know, memory leaks or something, and then you of course have to fix it. The other is that they simply have too many requests per second, for example. That's sometimes a bad metric, but I'll still go with it for now: the ingress controller just can't keep up. So what we typically advise is to start lower than you think, wait until you potentially see that sort of behavior where requests are just getting slower, for example, and then spin up another replica if possible. If your response time then gets back to a normal state again, you know that you're good. And if you can handle the load without changes, you know, okay, I can stay at my previous size, because I'm still good; I don't need to scale out.

Very good. We'll keep on this theme for at least another question here. Yes. The related question is: can you have multiple ingress controllers that are load balanced externally for HA? Oh, that's actually a tricky question. If I got the question correctly, you would have multiple ingress controllers that are both exposed to the outside. Okay, that's tricky, because if you want to have two load balancers for HA reasons, they will have to work on the same Ingress objects, because they need to have the same configuration and serve the same applications.
And depending on what actual ingress controller implementation you use, that can be tricky, because it can happen that the ingress controllers fight over the actual Ingress, let's say. But what I think will be more complex is that, these days, Ingress objects are usually configured by putting annotations on the objects, and as annotations are basically just key-value pairs, the keys will probably differ between the various ingress controllers. So you might have to have the configuration twice, let's say, once per ingress controller, so that both controllers can work with the actual objects just fine. And then, of course, if you have two external load balancers, all your DNS records will need to point to both load balancers as well; otherwise, the requests will not flow.

If we could hand out kudos, I would give some to Jason for the trickiness of that question. That's great. Yeah, that was actually a good one. All right. Well, we do have a few more minutes and a number more questions to go through, so let's take this one next. The question is: do you have any take on the ALB ingress controllers made available by cloud providers, compared to bringing your own ingress controller? It's probably meant as: rather than bringing your own load balancer, I guess. But yes, I have a take on this; it's a personal take. As I also said in the slides, if you are in a cloud environment where your cloud provider can assist you by placing a load balancer there, I personally think it's always a good idea to do so, because usually they're not super expensive to have, and you just get away from the complexity of managing that infrastructure on your own, because the cloud provider does it for you. However, of course, if you need a specific feature the cloud load balancer plainly doesn't offer, you don't have a choice: you then have to manage your own infrastructure, bring it up, and make sure it works correctly with your cluster. But typically, and this is something we see at our customers as well, if there is a cloud load balancer available within the cloud they are running in, they go for it, just for the ease of it, because usually it's fully automated and it just works. And I would say in most cases it's worth the money they charge you for it.

Understood. Very good. Here's hopefully an easy one: does Traefik support the proxy protocol to preserve the source IP address? Yes, it does. That was really an easy one.

Here's a little more involved one. The question is: if I expose the service as type LoadBalancer, the ingress traffic first hits the load balancer and then it will be forwarded to the backend Service, right? And not to the pod directly, as in your diagram. Actually, for Traefik, we go to the pod directly. I can't speak for all other ingress controllers, because I don't know them that deeply, but for Traefik, we just use the Service object to see the pods and forward directly to the pods. When we developed this, we figured that to be faster and a bit more stable for us. So we just take the Service to see where we actually need to go, and go there directly. Nice. All right. Very good. Here's another one.
What is a recommended way, and what considerations might you make, for preventing DDoS, enforcing authentication at the ingress service, and routing down to the pods only if the client is authenticated? Well, what is my take, how would I do this? Yeah, recommended considerations you might mull over, or what you have. Yeah. That's a pretty tough one, because it very much relies on what your exact case is, or better, what your exact setup is. Typically, that sort of attack happens once you have a certain size as a service or as an application or whatever. And then there are multiple ways to handle it. Some of the customers we have solve it, let's say, with money: they buy appliances like an F5 or whatever, place it in front of the actual ingress controller, and let this appliance handle it. You could also get that by, for example, placing Cloudflare in front of your Kubernetes cluster, which would also take care of the DDoS part. For authentication, it also depends a bit. With some ingress controllers, it would be possible for the ingress controller itself to talk to a certain service, forward just the authentication request, basically, and then let the request pass through once it is authenticated. That would also be a possibility. So yeah, it really depends on what your actual case is.

You know, it's a good question when there's an "it depends" in there somewhere. So that's great. Yeah, it's always hard to answer in a correct way because you don't know the exact situation, so I can just give hints, but it is what it is.

I'll also note there are a couple of folks on the call who have highlighted, I think, NGINX's ability, and it sounds like F5's ability, to also directly forward traffic to pods. So wrapping those up: we've got about four minutes left. I'm not sure, Manuel, if you could withstand being hammered with this many more questions. Let me see if there are a couple of quick ones. There are a couple that I think are asking for comparisons to other products or projects, and maybe we'll forego those just to not put you on the spot or overstress you. Yeah.

But here's one. I think it's just a point of discussion: one of the attendees notes that many ingress controllers that are deployed, that come from cloud providers, can sync with external DNS. Yeah, that is true. For example, Traefik can do this as well, and not just Traefik, there are others that can do it too. And that's pretty nice, because the way this works is that Ingress objects have a status field where the load balancer can put its IP. Because the service is exposed over type LoadBalancer, the cloud provider typically adds the external IP of the load balancer attached to the Service to that Service object, and then the ingress controller can basically copy this external IP into the Ingress status, let's say. Other tools, such as external-dns for example, can then take these Ingress records and automatically create DNS records for you, so you don't even have to manage DNS records anymore.

Very good. Okay. Well, that's great. Thank you, Manuel. This has been a fantastic presentation. Thanks for having me.
We've got some questions we haven't gotten to, but you've been bludgeoned with them already. So I do want to say thanks to everyone for joining today. It's been a fantastic webinar. As a reminder, both the recording and the slides will be online later today. The slides are available here. The links have been posted. A link has also been posted in the chat as to where you can find the webinar recording. So we look forward to seeing you all at a future CNCF webinar and have a great day. Thank you. Take care. Bye-bye.