Good morning, good evening, good afternoon, wherever you might be. Again, my name is Jen Gile from NGINX, where I focus on our Kubernetes and microservices solutions, specifically Ingress and service mesh, as well as Kubernetes platforms in general. And Mihal, will you give us a little bit more of an intro? Of course, thanks, Jen. Hey, everybody. My name is Mihal Kingston. I'm a Solutions Engineer in the NGINX product group of F5. I spend a lot of time helping companies adopt NGINX technology, from modern applications to traditional applications. I see many environments, architectures, and topologies — many patterns of modernization, microservices, and service mesh, especially in Kubernetes. Thank you. So we can certainly answer questions about our tools, but we're really here to talk more agnostically about what Kubernetes Ingress is, why it's important, and why it's a key component of a Kubernetes strategy. And so to kick things off, what I'd like to do is go ahead and launch a poll. You should see it pop up on your screen. There are two questions. First, we'd love to know from the audience where you are with Kubernetes: you're not using it at all, or perhaps you're just getting started with it; you're working in an organization that's a hybrid of both traditional and Kubernetes apps; or perhaps you're at a microservices-first organization. And then the second question: we'd like to know what your biggest concerns are with Kubernetes. We've selected the ones we hear most frequently, around training and knowledge, security, complexity, and visibility. But if there's something else that's not on that list, drop it in the chat. So, Mihal, what are you seeing when you talk to customers with regard to Kubernetes adoption? Where are people? Yeah, most of the time organizations are using a traditional environment combined with a modern environment using Kubernetes or OpenShift or any other Kubernetes-native platform. This is usually the most popular.
We often speak to customers who want to modernize — to move away from traditional three-tier or four-tier applications to more microservices-based applications, where all components are deployed in containers and Kubernetes, using more lightweight protocols like APIs, for example. Yeah, and you know, it's not necessarily about going from traditional to modern and completely eliminating traditional. There's still some justification for maintaining monolithic apps alongside a microservices or Kubernetes environment. Maybe you could talk a bit about where that fits into the strategy as we wait for people to respond. We're at about three quarters, but the numbers are still moving up, so I'll give people another few seconds. Yeah, of course. Oftentimes a monolith might be perfectly fine, right? Let's say you have an application that's relatively static. It doesn't need to scale that much. Maybe it's an application where you're simply posting some photos, or it's a small business. Oftentimes a database, a front-end web server, and a front-end proxy server are perfectly fine to handle all of your traffic. We see more and more companies adopting containers or Kubernetes or OpenShift when we start looking at applications that tend to scale very quickly and very dramatically. This is when you might have two replicas of your application today and 10 replicas of your application tomorrow. So it does vary quite a bit depending on the nature of your organization as well as the nature of the application itself. All right, I'm going to go ahead and close the poll and we'll see where people are on this first question that's up on the slide. You should see it on your own screen. So we have, let's see here, about 80% of you — 103 people — responding to our poll, and only 7% are not using Kubernetes, while 34% are just getting started.
Unsurprising to us, at 43% the largest group is in that traditional-to-Kubernetes hybrid already, and then we do have 16% already at a microservices-first organization. And then the second question, around the biggest concern — no surprises here. The leading answer is training and knowledge, which in our experience tracks pretty well with where we're seeing people in adoption: if you're earlier in Kubernetes, if you're just getting started, of course knowledge is usually where you want to be focusing. Complexity comes in second at 36%, followed by security at 15% and visibility at 5%, and we do have a couple of people who chose other, so we'll just take a look at the chat. And a quick reminder: we won't be answering questions that come in through the chat, so if you have a question, please pop it over into the Q&A. I see Kate's comment — on-prem clusters and complexity with exposing the services externally — being one of the other things people are struggling with or encountering. So we will take a look at the Q&A throughout the session today. We'll answer questions as we go if they're relevant to what we're talking about; if they're not, we'll capture them at the end and do our best to get to all of them. And so let's start high level, Mihal. You know, as we go out and talk about Ingress, we find a lot of people aren't using an Ingress controller yet. Perhaps they're not sure how an Ingress controller is different from ClusterIP, NodePort, or LoadBalancer. So start us off here. Yeah, and one of the comments put in the chat actually reflects this: people don't understand how to expose applications from Kubernetes to the outside world. That's exactly what we're talking about here. There is often some confusion around Kubernetes networking. What I'm seeing more and more is the complexity around networking for the application teams.
When you start to plug applications into Kubernetes, you hit interfaces, iptables, different terminologies that are very Kubernetes-specific. So let's simplify this a little bit. Let's separate this into the different components that are associated with exposing the application. So if you wanted to, you could deploy your application and all of its dependencies inside Kubernetes. You could expose the application via NodePort, LoadBalancer, or ClusterIP. Let's take Ingress out of it for just a moment. ClusterIP is essentially the default Kubernetes service. It gives you a service inside Kubernetes that other apps inside the cluster can access. It's an internal service type, so there is no external access, and the only way to expose a ClusterIP service externally is to use something like kube-proxy, which is essentially a bunch of iptables rules. But there are very, very few scenarios where you would do that: it could be to access your service from your laptop, debug a service, or even just look at some monitoring and metrics. A NodePort service, on the other hand, is probably one of the most primitive ways to get traffic into your applications. NodePort, as the name implies, opens a specific port on all of the Kubernetes nodes, and any traffic sent to a node on that port is forwarded to the application. You can only have one service per node port, and you can only use ports in the 30000–32767 range. So if your node IP address changes, or if your virtual machine changes, for example, you need to deal with that. A LoadBalancer service is probably one of the most popular ways to expose a service, usually on a cloud platform. For example, in Google Cloud, you can spin up a network load balancer using the LoadBalancer service type. This will give you a public IP address that then forwards all traffic to the service. In AWS, this is a Network Load Balancer or classic ELB, but there is no filtering, no routing.
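To make the service types just described concrete, here's a minimal sketch of the same app exposed both ways — the app name `my-app` and the ports are hypothetical:

```yaml
# ClusterIP (the default type): an internal-only virtual IP,
# reachable by other workloads inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
spec:
  type: ClusterIP          # optional; ClusterIP is the default
  selector:
    app: my-app
  ports:
    - port: 80             # port other pods connect to
      targetPort: 8080     # port the app container listens on
---
# NodePort: opens the same high port on every node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080      # must fall in the 30000-32767 range
```

On a cloud platform, changing `type` to `LoadBalancer` is what provisions the external load balancer discussed next.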
It essentially gives you a public IP address, and that's it. And this means you can send almost any type of traffic to it — it could be HTTP, TCP, UDP, WebSockets, whatever you're using. Each service you expose with LoadBalancer will get its own IP address, and oftentimes you need to pay for each load balancer. This is where Ingress comes in, right? Because we often see teams exposing their applications using one of these three methods, but as your application grows and scales, or your application stack overall grows, you may have a new service for every single component. You might purchase a new cloud load balancer for each of these components, you might set up new NodePort services for each of these components, and each of these has a different IP address. And all you really want to do is bring traffic into your application using a DNS name or a URL. You don't want multiple services exposed for a single application. And that's where an Ingress can help proxy requests to your services. On top of that, you might have TLS termination, URI routing, and much, much more. So the Ingress is a single component that runs inside the cluster. It takes full responsibility for all traffic and routing from outside into the cluster using a single URI or FQDN or DNS name; otherwise you might have load balancers and NodePort services everywhere. Now, the Kubernetes API, just to mention it, is the core of the Kubernetes control plane. It's what exposes everything inside Kubernetes and allows every component to communicate with the others. And one of those API objects is called Ingress — hence you create an Ingress resource. So, Mihal, we get asked often, you know, can I use NodePort in production, or can I use LoadBalancer in production? I think you summed up some scaling issues around that. Is there anything else you would say about what you should be using in production? Absolutely.
You can of course use NodePort or LoadBalancer in production. It does depend on what you're trying to achieve, of course. LoadBalancer is probably the most popular way, on cloud platforms, of assigning a public IP address to an application. NodePort is perfectly fine also, but the port on the node needs to be in that 30000 range, so you can't use 80 or 443 directly unless you have some form of load balancer in front. And if you have a load balancer in front of a NodePort, you have additional hops: it'll be client, then load balancer, then node port, then cluster IP. And then you have additional latency. Correct — and potential points of failure. All right, should we move on to some more depth about Ingress? Sure thing. Okay, so the Ingress controller is a component in a Kubernetes cluster that configures HTTP load balancing — and some layer 4 also — according to an Ingress resource created by the cluster user. But let's not complicate this: the Ingress controller is essentially a load balancer or proxy with additional capabilities. It brings traffic in and deals with layer 7 and potentially layer 4. When you deploy an Ingress controller, you create an Ingress object, or Ingress resource. The Ingress controller then uses the Kubernetes API to pull information from this Ingress resource and configures a load balancer according to those resources. You might have multiple Ingress resources per application — application A might have Ingress A, application B might have Ingress B, and so on and so forth. In this diagram, we have a load balancer in front of the Kubernetes cluster. This is just a public endpoint. It could be NGINX, for example, or a cloud load balancer, a software load balancer, or even a hardware load balancer. So this is what brings external traffic in using a public IP, and that then points to an Ingress, which deals with all of the application traffic inside Kubernetes.
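As a concrete illustration of the kind of Ingress resource the controller consumes, here's a minimal sketch — the hostnames, service names, secret, and the `nginx` class are all hypothetical, and older clusters use the `networking.k8s.io/v1beta1` API instead:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
spec:
  ingressClassName: nginx            # which controller should act on this resource
  tls:
    - hosts:
        - application-a.example.com
      secretName: app-a-tls          # Secret holding the TLS certificate and key
  rules:
    - host: application-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a          # ClusterIP service in front of the app's pods
                port:
                  number: 80
```

Application B would get its own Ingress resource with its own host, exactly as described above.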
Not only does the Ingress controller deal with traffic management; it has many other capabilities depending on which Ingress controller you're using. It could be monitoring, it could be telemetry, it could be security, for that matter. We're seeing many more Ingress controllers acting as web application firewalls, and we're seeing advanced analytics and metrics get exported to additional tools like Prometheus and Grafana and Splunk. So the Ingress can be a very valuable tool in your stack to gather information, to secure your applications, and to manage all of the traffic. So we have a very timely question in the Q&A from James. He asks, how is an Ingress controller different from a reverse proxy? Great question. There isn't a huge difference, because an Ingress controller, if you look inside the container, is a reverse proxy. It's proxying requests from the client to the application. The only real difference is that the Ingress controller is managed and configured by the Kubernetes API. So when you put together an Ingress resource in Kubernetes, the Kubernetes API accepts that Ingress resource, and the controller creates a configuration for the proxy — NGINX, for example. If you look at an NGINX Ingress controller pod, and if you look at the configuration file, it's a reverse proxy configuration inside the container. A follow-up to that: Johanna is asking, so basically an Ingress resource is just a recipe for the Ingress controller? Correct. Exactly. The recipe is a good way of putting it. All right, let's go into more of the use cases. I think one of these will answer another question we're seeing in here from Karen about how to get the features of API management in an Ingress controller. So let's walk through some of the many things you can use an Ingress controller for. Right, good stuff. So we often use the term Swiss Army knife for NGINX itself, but the same thing could be said for the Ingress controller, because it's very flexible.
There's a long list of use cases that we often come across. It could be web serving, caching, load balancing, and traffic management in general — the Ingress controller can be your Swiss Army knife if you think about it. So as well as traffic management — which means proxying, load balancing, URI steering, TLS termination, session persistence — everything you would use a load balancer for can oftentimes be done with an Ingress controller. If you look at visibility, telemetry, and monitoring: I'm sure many of you are using tools like ELK, Grafana, Prometheus, and Splunk for monitoring and alerting. It could mean you're sending logs to external tools for log analysis like Splunk, or tools like Logstash and Fluentd — all of these tools we see very, very often. If the Ingress controller is your entry point to your application running in Kubernetes, then it adds a lot of value for proper analysis and analytics for your applications in general. Security and identity can mean a lot of things. It doesn't simply mean firewall. It's a huge topic — we could easily spend a full hour talking about this — but we're seeing a lot more companies looking to add security at the Ingress layer in Kubernetes. This could be a web application firewall. It could be simply TLS, or mutual TLS. It could be limiting the number of requests a client can make, and much, much more. It could mean authentication also, like validating JSON Web Tokens or offloading authentication to an identity provider using a standard like OpenID Connect, for example. We're hearing "zero trust" a lot, which is essentially built around the idea that you should trust no one: all traffic should be encrypted, no IP should be exposed directly, and all services require authorization. We're steering into service mesh territory here, but we won't go there.
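A sketch of what security at the Ingress layer can look like in practice, using annotations in the style of the community ingress-nginx controller — the hostnames, services, and auth endpoint here are illustrative, and every controller has its own annotation set:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  annotations:
    # Rate limiting: cap each client at roughly 10 requests per second.
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # Offloaded authentication: check each request against an external
    # auth endpoint before proxying it to the backend.
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 80
```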
But one of the main use cases of a service mesh is securing traffic using encryption, like mutual TLS, for example. API gateway is another very broad topic. There's ongoing debate regarding using an Ingress controller as an API gateway. If you think about it, an API gateway is essentially a proxy that accepts client requests and sends those requests to an API endpoint. NGINX, for example, can be used as an API gateway. Many proxies out there can be used as an API gateway. So there is no reason why you cannot use an Ingress controller as an API gateway. Yeah. In fact, it's maybe a better choice because it's a Kubernetes-native tool, right? You know, configuring it with YAML instead of whatever the config is for an external tool. Exactly. The Kubernetes platform is your management plane — your control plane — to configure your API gateway. Most of the features we see from an API gateway perspective are things like TLS termination, gRPC support, authentication using tokens or API keys, and rate limiting. All of these things can be done with an Ingress controller, or many Ingress controllers out there. Actually, I should mention that this use case is so popular that I believe a new Gateway API is going to be added to the Kubernetes platform, so that you can create gateway objects in Kubernetes and, on top of that, create different routes for different API endpoints. Yeah, it's going to be really interesting to see how people decide they want to use that new API. Let's see — I'm just going to take a quick gander at the Q&A to see if there's anything we should hit now before we move on. We've got quite a lot of questions in here. Thank you, everyone. So we have a question here: as per the diagram, which was on the previous slide, it looks like we can only have one Ingress resource per node in the cluster — is that the case? You can have as many Ingress resources as you wish. When an Ingress resource is created, it's cluster-wide.
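The Gateway API mentioned above splits the single Ingress object into a shared Gateway (the listener) and per-team routes. A rough sketch, assuming an early `v1beta1` version of the API — group/version names and the `gatewayClassName` have varied across releases and controllers:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: nginx        # hypothetical class; depends on your controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# An application team owns its own route, attached to the shared Gateway.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: orders-route
spec:
  parentRefs:
    - name: shared-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - name: orders-svc
          port: 80
```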
You can have as many as you wish, but usually an Ingress resource has its own hostname, so you might have multiple hosts for each application: application1.example.com, application2.example.com, with a different Ingress resource for each. Which one is chosen will depend on the Host header or the FQDN that you're using for access. But to answer the question: no, you can have as many Ingress resources as you wish. And a follow-up to that: can you please share in a little more depth the difference between the Ingress resource and the Ingress controller? Of course. The Ingress resource is a definition file. It's an object in Kubernetes that you create. This is usually a YAML file, and it has your routing rules, your host headers, and all of the proxy configuration in this resource. When you deploy this Ingress resource using kubectl create, for example, the Kubernetes API will accept this Ingress resource, and the Ingress controller will configure the load balancer based on it. So the Ingress resource, as we said before, is like a recipe. The Ingress controller will take that recipe and convert it into load balancer configuration. And one final question on this topic: an attendee runs queries like kubectl get ingress and wants to understand whether what comes back is the Ingress controller or the Ingress resource. Yeah, when you run kubectl get ingress, this is pulling an Ingress resource from the Kubernetes API. And is the Ingress controller at the pod level? Yeah, the Ingress controller acts like any other container in Kubernetes. When you deploy it, it runs as a pod, and then you expose the Ingress controller the same way you would expose anything else in Kubernetes: using a service. You can actually expose the Ingress controller using NodePort or LoadBalancer or ClusterIP. But yeah, it's a pod.
And on a slightly different topic: is there any benefit to using more than one Ingress controller, and is this common? Great question. Yeah, of course, you can use as many Ingress controllers as you wish in your Kubernetes environments. It depends on the use case, but we have Ingress classes, where you can specify an Ingress class per Ingress controller. You might have Ingress controller one for this team and Ingress controller two for another team. There are many reasons why you would have multiple Ingress controllers, but there are also reasons why you wouldn't — it depends on the use case. If you need certain features of one Ingress controller that the others don't have, that might be a reason. Usually, though, you can do everything you need to do with one. You normally have multiple replicas of the Ingress controller running — it depends on the environment, of course — but the whole purpose of having replicas of the Ingress controller is that you can scale, and you can deal with failover if one of your containers fails as well. Okay, last question before we go talk about the landscape and the different types of Ingress controller options: how do you deal with multiple clusters? Another great question. Yeah, so when it comes to an NGINX Ingress controller, for example — or any other Ingress controller, for that matter — this runs inside the cluster. When you have Ingress running inside the cluster, it's standalone. It normally doesn't communicate with other clusters directly. What we are seeing a lot more is global server load balancing in front of Kubernetes clusters. So you might have multiple Kubernetes clusters, each of those clusters has an Ingress controller, and you might have a global server load balancer in front that sends traffic to different data centers. Now, when you need an Ingress controller to send a request to an Ingress controller in another cluster, we call it egress — an egress route.
And this is when we're looking at service mesh. When you have service-to-service communication, or if you have an Ingress communicating with services outside Kubernetes, it can be done, and it's something we are seeing a lot more these days, with global server load balancing combined with service mesh solutions. Thank you. So we do have a couple of questions in the Q&A that I think we'll address in the next section, about the options for an Ingress controller — for example, versus an AWS ALB or a load balancer provided by a cloud provider. And so we're going to take a few minutes here to talk about the different categories of Ingress controllers that we see, and some pros and cons of each. And I guess what I would like to emphasize is that there's no one best Ingress controller. It's all about, you know, what your needs are, what your use cases are, and really understanding what the drawbacks and advantages of each are. And so we break the Ingress controller landscape down into open source, default, and commercial. Open source is pretty straightforward: those are projects that are maintained by the community. It's possible that some may have dedicated engineering teams — for example, at NGINX we produce an open source Ingress controller that's maintained by our engineering team as well. When we have the default category, we consider those to be the ones that are developed and maintained by a company that's also providing a Kubernetes platform. So that's your AWS, Azure, GCP, or Red Hat OpenShift's Router, perhaps. And the third category, commercial — these are licensed products for large deployments. For example, NGINX also makes a commercial version of our Ingress controller that's more enterprise-grade. So let's look at the pros of each first. Open source: a lot of people choose open source because it's free, and that just makes it really accessible and easy to start with.
They're community-driven, and the feature velocity tends to be a little faster. So, Mihal, let's talk a little bit about why people value a community-driven Ingress controller project, because we do see quite a lot of companies having that as a priority, right? Of course, yeah. Community, or open source in general, is ideal if you're just getting started, if you're testing, or if you have very low volume. And this applies to all software, not just Ingress. With Ingress, open source is perfect for those new to Kubernetes. There's a lot of feedback from the community when you have issues or questions. The features are growing very, very quickly. And some organizations actually prefer community-developed tech — open source is part of the culture, part of the nature, of the organization. And then with the default options, these also tend to be either free or fairly low cost, and perhaps a bit more reliable than an open source option, because you know they have a company behind them with a development team, and often there are some support options available. Mihal, let's talk a little bit about the reliability of a default option versus an open source one. What's the trade-off there — moving away a little bit from open source, with the gain being reliability? So usually, when you're using a cloud platform or a managed Kubernetes environment, you often get a default Ingress controller, and this could be based on any technology — it could be a very popular open source technology, for example. But one of the benefits would be that there might be full support for this ingress controller, as it's part of the platform, and usually it's available to you at no extra cost. This is usually a popular choice for teams newer to Kubernetes also. If you're using tools like Amazon's Elastic Kubernetes Service or Google Kubernetes Engine, they all have default ingress controllers that are supported.
And then the commercial category. These tend to have the fuller range of features that might not be available in open source or default options — features that are going to enable those use cases we were just talking about, particularly security and identity, but also API gateway features. They tend to be much easier to scale, and better supported as well. So let's focus more narrowly on community support versus, we'll say, corporate support or support licenses. Where's the value in having a support contract? Of course it does depend on the environment, but there are many production-grade advantages to a commercial product beyond technical support — there might be certain features of a commercial product that are not available in the community or open source version. This could be a web application firewall, for example, or the ability to integrate with a service mesh. But if you look at the support side of things: things like CVEs, for example. If an open source project has a known CVE, commercial customers are usually the first to get a fix for these security vulnerabilities. And on top of that, there's confidentiality. If you want to troubleshoot an ingress controller, for example, and you have to go to a public forum or Stack Overflow, do you want to put your private log files up there, or ask questions with sensitive data? So there are a lot of different factors to this, but confidentiality and full commercial support are obviously a huge feature of any commercial product, on top of the features themselves, of course. Okay, let's look at the flip side — the cons. With open source ingress controllers, we tend to hear from customers that there are three reasons that, you know, maybe they started with one but decided to move away. What you identified there with support is a big reason. There's a saying — I'm not sure exactly who originally said it — "free like a puppy."
It may not have cost you any money, but it's going to cost you a lot of time. So what does it look like to be investing time because of an open source ingress controller? Yeah, so oftentimes you end up spending more time on customizations and workarounds for specific needs when using community software. There's always a cost to free software, of course, from training to the support itself. We see many community versions of ingress controllers — like the NGINX community version, for example — that are customized, that have additional code within the container, and this can be a learning curve, or it could mean performance issues on top of that. There's a cost, especially when you have a team of people managing an ingress controller and all of a sudden that team leaves, and new people have to manage the ingress with no documentation available. Yeah, and you know, cost again falls under that default column as well, but this time it's more about unpredictable costs — you get in for a low price. So what are the costs that can accrue over time with these default options? So when it comes to support options, it really means: how long does it take you to get a response? I'm sure a vast majority of questions asked in community forums go unanswered, for example. On top of that, most issues are self-solved — it's just you and the docs. So if you run into problems that you can't solve yourself, it can be difficult or impossible to get help. Often your only choice is to post your problem on the public forums and hope that somebody responds. And then infrastructure lock-in is another major drawback for the default options. I think that's one that probably doesn't get talked about a lot. We're moving to the cloud, but we're seeing a fairly high number of our customers using multiple clouds. So let's say you're using both AWS and Azure — you know, EKS and AKS.
What are going to be the ramifications if you're using the default options in each of those, instead of an option that's the same across both? Yeah, so you might have multiple clouds, of course, or multiple Kubernetes platforms that have different ingress controllers, and oftentimes different ingress controllers mean different configuration languages. Maybe they're using custom resource definitions, for example — some ingress controllers actually have their own custom resources to configure them. And usually when you're dealing with some platforms, the first option is to use the default ingress controller that comes packaged with the platform. If you have another cloud or another platform with a different ingress controller, it can be difficult to move from one to the other. You might need different ingress controllers for each deployment. It can cause tool sprawl, it increases the learning curve for the teams, and it just makes the ingress layer a lot more difficult to maintain and manage. Yeah, so it's not just that you're going to have, let's say, three ingress controllers for three different clouds — you're probably going to have to have three different WAFs as well, so you'll have problems porting policies across. Okay, so the commercial cons. One con is, again, kind of the flip side of what's great about open source projects: commercial products develop a little bit slower, and the reality is they're licensed, so they cost money. So what's the issue that people are weighing when they're looking at slower development with a commercial product? Yes, one of the most important features of a supported product is that it needs to be stable, right? So stability is very important for commercial ingress controllers, or commercial software in general.
Their feature velocity might lag a little bit behind their open source counterparts, because with the open source counterparts people are forking and adding features continuously, daily, using DevOps methodologies, whereas with commercial or enterprise software, new features are tested rigorously. And sometimes community versions actually have certain features that the enterprise version doesn't have yet, built using Lua code or JavaScript. So yeah, there are a lot of differences, pros and cons, of course, but because stability is so important, it can take longer to get features into commercial software. For sure. So, of the categories, where we're going to focus right now is an area where we find there's a lot of confusion. NGINX is the most popular, most widely used technology under the hood for ingress controllers, but that doesn't mean every NGINX ingress controller is the same. So we're going to pull out two that commonly get confused — or, we'll say, three, falling under two categories. First: if you Google "Kubernetes ingress controller," one of the top results is going to be the one under the Kubernetes project. You can identify it quickly and easily by looking at the GitHub repo — that'll be kubernetes/ingress-nginx. This is one that's based on NGINX open source; it's completely free and open source, developed and maintained by the community. They're using forked NGINX with Lua and OpenResty to achieve some of those more popular capabilities that people are looking for. And then the other item that's going to come up if you're Googling or searching for an NGINX ingress controller will be the projects that are maintained by NGINX ourselves. Again, looking at GitHub, that's nginxinc/kubernetes-ingress. We're the source code owners for both of these, based on NGINX open source or based on NGINX Plus.
And it's a combination, depending on which one: it's developed and maintained both by NGINX and the community, or just by NGINX for the Plus-based version. Mihal, when people are trying to understand these ingress controllers, you have quite a bit of experience with this, having worked directly with users. Where do you see confusion or difficulties? Yeah, it's a great question. So the fact that there are multiple NGINX ingress controllers is confusing enough as it is, right? Because the most popular ingress controller out there is actually the NGINX community ingress controller that was shipped with Kubernetes by default originally. The main difference between the two, our NGINX Open Source based version versus the community version, is that the community version does have some additional logic, such as Lua and OpenResty, as you mentioned, Jen. But on top of that, there are some performance perks, and that's things like service discovery in Kubernetes, which is probably one of the biggest features that we have in the Plus version. So for example, if you were to scale your applications within Kubernetes, one of the features of NGINX Plus is to actually re-resolve the service's DNS as it scales within Kubernetes and update the NGINX configuration immediately. I believe the other version uses some Lua code to continuously check the number of replicas of a pod running, so that can lead to some performance issues. Then there are things like health checks; for example, active health checks, which I believe are implemented using Lua in the community version and which are available natively in NGINX Plus. But it depends on the features, right? What do you need? Do you need active health checks? Do you need session persistence? Do you need authentication using JSON Web Tokens? All of these things will determine what ingress controller is best for you.
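The Plus features mentioned above, DNS-based service discovery and active health checks, can be sketched as raw NGINX Plus configuration (the controller generates something along these lines from annotations or custom resources; the upstream name, DNS names, and timings here are hypothetical):

```nginx
# Hedged sketch, assuming NGINX Plus. The 'resolve' parameter and the
# 'health_check' directive are Plus-only features.
resolver kube-dns.kube-system.svc.cluster.local valid=5s;  # cluster DNS (name varies)

upstream tea-svc {
    zone tea-svc 64k;          # shared memory zone, required for re-resolution
    # 're-resolve the service's DNS as it scales': with 'resolve', NGINX Plus
    # re-queries DNS on the TTL and updates the upstream without a reload.
    server tea-svc.default.svc.cluster.local resolve;
}

server {
    listen 80;
    location / {
        proxy_pass http://tea-svc;
        # Active health checks: out-of-band probes, marking failed pods down.
        health_check interval=5s fails=3 passes=2;
    }
}
```

The point of the sketch is the mechanism, not the exact syntax the controller emits: Plus updates upstreams dynamically, where the community controller reaches for Lua to get similar behavior.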
The open source community version is a brilliant solution for testing purposes and for practicing how to configure ingress, and application traffic management in general. All right, we're going to go ahead and launch another poll as we start to close in on the last 10 minutes of our session. And so in this final poll — I get to click all the buttons — we'd like to know what ingress controller category you're using. You can select more than one. Are you using an open source option, such as that community one? A default, meaning something that comes from a cloud provider or Kubernetes platform provider? A commercial one, such as the NGINX Plus-based option? You don't know, or you're not using one yet? Or if it's something not covered in those categories, let us know in the chat. Mihal, when you talk with people in the field, what do you commonly see? I commonly see the community version. So ingress-nginx is probably the most popular ingress controller out there, I would say. Even if you're learning Kubernetes, most of the training courses I've completed usually use this version also. It's like the default. It's the first one that you'll probably find when you Google an ingress controller. And so a little over half of people have responded. We'll give just a couple more seconds to let us know what type of ingress controller you're using. And we should mention that a couple of months ago NGINX made a commitment to actively work on the community version, the Kubernetes one. And so we're looking to actually contribute engineering to that, to really help make it stable, because it is, like you said, very popular, and the entry point for many people to Kubernetes. It'll also be interesting to see where things go with the API. So we'll go ahead and close this poll and show the results. So again, no surprise here. The majority of people are using an open source option, followed by default.
We've got a few people using commercial, and a good chunk either don't know what they're using or aren't using one yet, which is also very common. And then some people said that they're using something else, such as Contour. And I see a question there — if you have a question, please make sure you put it in the Q&A. And that brings us to the end of our prepared content. So there are a couple of scenarios we're going to discuss, and we'll try to get to as many of these questions as we can; I see 17 in there, so we definitely won't get to them all. Mihal, when you're working with people who are testing ingress controllers, setting up a proof of concept, what does that look like? How do you choose one and make sure that you choose the right one? Great question. Yeah, so I would highly recommend putting together success criteria if you are testing ingress controllers. Oftentimes, if the use case is simple — you need simple load balancing, you need some health checking, for example — then any standard ingress controller will do the job. It's other things, such as: do you need load balancing for HTTP/2 services? Do you need gRPC? Do you need layer 4, like TCP and UDP? Do you need rate limiting, canary testing, traffic splitting? All of these advanced features are the kind of thing that would lead you to test other ingress controllers that are maybe more feature-rich, I would say. But I would recommend you do a POC if necessary. There's no reason to settle for a one-trick pony. Most ingress controllers can do everything you need to do. But if you have additional problems to solve — let's say you need a web application firewall, you need authentication using OpenID Connect, or you need JSON Web Token authentication — those are the types of things that we see customers ask for, and that's when the POC comes in and where we start to configure a more feature-rich solution. So you mentioned another tool earlier in the session.
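As one concrete instance of the "advanced features" worth putting in a POC success criteria list, here is how canary traffic splitting looks with the community ingress-nginx controller's canary annotations. The annotation names are the ones that controller documents; the hostnames and service names are hypothetical, and other controllers express the same idea differently (e.g. through custom resources).

```yaml
# Hedged example: send roughly 10% of traffic for app.example.com to a
# second "canary" Ingress pointing at the new version of the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # percentage of traffic
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com          # must match the primary (non-canary) Ingress
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-v2           # the new version under test
            port:
              number: 80
```

A POC criterion here might be: can the candidate controller shift this weight without dropping in-flight connections, and can it also split by header or cookie rather than by percentage?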
In short, how is a service mesh different from an ingress controller? Great question. So a service mesh goes a little bit deeper than an ingress controller. The ingress controller is responsible for bringing traffic into the cluster. It supervises and controls traffic coming in, also known as north-south traffic. But when you go deeper into the cluster, or deeper into the application, the ingress controller has no visibility or control over the traffic flowing within the application. This is known as east-west traffic. Let's say you have one application with multiple services in Kubernetes, and they all communicate with each other. Service A communicates with service B. The ingress controller cannot see that, so it cannot do open tracing, it cannot do monitoring for those services, and it cannot encrypt traffic between service A and service B. This is where the service mesh comes in. A service mesh comes in when you need more granular control of east-west, or service-to-service, communication within Kubernetes. One of the most popular use cases or features of a service mesh would be encryption: encrypting all traffic between your services within Kubernetes. Another would be better visibility — having logs, having metrics, having open tracing between your services. Others are more advanced traffic management: A/B testing, circuit breaking, canary testing, all of those things, where you get deeper into traffic management. So, to answer the question, there's a big difference between a service mesh and an ingress, but at the end of the day it's proxying. You are proxying, but a service mesh gives you more control through sidecar proxies for your applications. Yeah, and I'll go ahead and drop another link.
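To illustrate the encryption use case described above with one concrete mesh (Istio here, purely as an example; the webinar is mesh-agnostic and other meshes have equivalent mechanisms), enforcing mutual TLS between all services in a namespace is a single small resource, with no change to the ingress controller at all, which underlines that this is east-west, not north-south, policy:

```yaml
# Hedged example, assuming Istio is installed and the namespace "my-app"
# (a hypothetical name) has sidecar injection enabled.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app
spec:
  mtls:
    mode: STRICT   # sidecars only accept mutually-authenticated TLS traffic
```

After this is applied, service A to service B traffic inside `my-app` is encrypted and authenticated by the sidecar proxies, which is exactly the capability an ingress controller alone cannot provide.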
We do find a lot of people actually start exploring a service mesh before they've even implemented an ingress controller, which you can do, but you want to make sure that you're actually checking some boxes to make sure you get value from that service mesh. And so I'll share that in the chat in just a moment. We have about five minutes left and a whole host of questions in here. We have one on what is the link between NGINX and Ingress — is Ingress just NGINX in a container? Good question. Technically, yes. With an NGINX Ingress controller, if you look at the container — if you exec into the container and run nginx -v — you will see NGINX running in the container. The Ingress controller has an additional daemon to configure NGINX. So when you create an Ingress resource in Kubernetes, the controller takes that resource from the Kubernetes API and configures NGINX with it. So yeah, it's an NGINX container, but the Ingress controller converts the Kubernetes Ingress resource into an NGINX configuration file. Okay. Does the NGINX Ingress controller modify response headers, mainly for error responses? Not by default, but it can be configured to do that. You can manipulate request headers and response headers using NGINX. And then we have another question: does the NGINX Plus-based Ingress controller support multi-cluster Kubernetes? Great question. That is actually more of an egress question. So yes, the Ingress controller can send egress traffic, which is traffic outside of a cluster. And that's a very broad topic as well, because you might have multiple clusters. You may need a global server load balancer to bring traffic from ingress 1 in cluster A to ingress 2 in cluster B. But yeah, egress is supported. Okay. Does NGINX handle CORS? Yeah, NGINX by default can handle CORS. Traditionally, this would be done using an NGINX server on a virtual machine or a physical machine.
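The "converts the Ingress resource into an NGINX configuration file" answer above can be made concrete with a sketch of roughly what the controller renders for a rule like "host: cafe.example.com, path: /tea, backend: tea-svc:80". This is illustrative only; the real generated config is considerably more elaborate, and the endpoint IPs below are hypothetical.

```nginx
# Rough shape of what an NGINX-based Ingress controller generates
# for a single Ingress rule (illustrative sketch, not literal output).
upstream default-tea-svc-80 {
    # Pod endpoint IPs, kept in sync by the controller as pods come and go.
    server 10.244.1.7:80;
    server 10.244.2.3:80;
}

server {
    listen 80;
    server_name cafe.example.com;

    location /tea {
        proxy_pass http://default-tea-svc-80;
        proxy_set_header Host $host;
    }
}
```

That rendering step is the controller's "additional daemon": it watches the Kubernetes API and rewrites and reloads (or dynamically updates) this configuration whenever the Ingress resources or service endpoints change.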
But from an Ingress perspective, you can do it that way as well. Okay, I'm looking at about two minutes left. Let's see here. I don't know, Mihal, if you saw any that you really wanted to answer. Let's start with Johan's question about which is the most used or popular orchestration platform. I would say that depends on where you are and what your organization is. You know, we see high-compliance industries gravitating towards the likes of Red Hat OpenShift. I know you've done a lot of OpenShift work in EMEA. The more cloud-native organizations are tending to use the cloud providers, and companies or organizations with open source mandates will often go with Rancher. What are you seeing as the leader, or is there a leader at this point? Yeah, it's a good question. You're right, though. It does depend on the region as well. I've noticed that cloud platforms are very popular in certain areas. Other regions may run on physical infrastructure because of security reasons. So yeah, I would say I don't know who the leader is, to be honest. But what I do see very often are the likes of Red Hat OpenShift, of course. I'm seeing VMware Tanzu, Rancher, managed Kubernetes clusters, as well as the cloud platforms — AWS, Google Cloud, and of course Microsoft Azure has managed Kubernetes also. I think we've hit the end of the questions we can answer in the time we have. There were a lot of technical ones in there, and I see a request in the chat to have a follow-up session covering a more hands-on approach. We'd love to do that. We also have a YouTube channel — look for NGINX Inc on YouTube. There are a lot of demos and other types of videos on there. We have done quite a few Ingress livestreams. So with that, I think we're about done. Thank you so much to Jen and Mihal for their time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you're able to join us for future webinars.
Have a wonderful day. Thank you. Thank you.