All right, we're going to go ahead and get started now. Thank you, everybody who is joining us today. Welcome to today's CNCF webinar, "Kubernetes Ingress: Simplified Cluster Management Across Any Platform or Environment." I'm Caitlin Barnard, Marketing Manager at Kong, and I'll be helping to moderate today's webinar. I'd also like to welcome our presenter, Harry Bagdi, a Senior Cloud Engineer at Kong. A few housekeeping items before we get started: during the webinar you're not able to speak as an attendee, but there is a Q&A box at the bottom of your screen. As we go through the webinar, please feel free to drop any questions you have in there, and we'll get to as many as we can at the end. The webinar is also being recorded and will be available online later along with the slides. With that, I'll hand it over to Harry to kick off today's presentation.

Thank you, Caitlin. Hello, everyone. Welcome to this webinar on simplifying Ingress using Kong. We're going to focus on Kubernetes Ingress and how Kong can help you accomplish ingress across a Kubernetes cluster, but also across other clusters if you wish. A little about today's agenda: we'll go through a brief overview of what Ingress is, for those who are not familiar with the terminology, how it came into existence, and what it allows you to do. We'll go a little bit into Kong: how Kong came into existence, what Kong can do for you, how Kong can do Ingress, and actually a little more than Ingress. Then we'll dive into a demo session — a fairly comprehensive demo that covers not only Ingress but also authentication and other things you can do at the ingress layer in Kubernetes. All right, a little bit about myself. As Caitlin said, I'm an engineer at Kong. I'm an open source enthusiast, and I maintain a couple of open source projects along with Kong.
I come from a QEMU/KVM virtualization background. For those who are not familiar with that term, that's what powers cloud computing environments: underneath all the VMs you see running at any of your cloud providers is the Linux KVM layer. I have since moved up the stack, and now I'm championing Kubernetes at Kong and outside it as well.

So with that, Kubernetes Ingress. You're probably familiar with installing services into Kubernetes. You can get them running, be it stateful pods or stateless services; you can get your pods running; you have your persistent volume claims. But once you have everything in there, how do you actually give people on the outside access to your services? And how do you give access to services which are not running in Kubernetes but outside it? That's where Kubernetes Ingress comes into play. Ingress is not something new. Even 20 years ago we had the concept of DMZ (demilitarized zone) subnets, where you would run the first firewall or proxy. All the traffic entering your network comes in there and gets forwarded to whatever services you are running. So why should you do that? I don't think it needs a lot of convincing, but let's still go over it. One port of entry is something all your ops folks will love: if you want to firewall some traffic, if you're being DDoSed, or for any kind of security measure you'd like to take, you want to take it at that first layer, and any audits you'd like to do, you'd like to do at that single point. Well, you could ask: Harry, I can just go and create a Service of type LoadBalancer, and that's it.
If you're running in a cloud-provider environment, I'll get a load balancer, I can send my traffic through it to the API or web service or whatever I'm running, and that solves my problem. So why not do that, and skip this entire ingress management? The reasons are, as I said, one point of entry, but also traffic management and connection management. You want to terminate TCP as early as possible to reduce the overhead of the three-way handshake, and the same goes for TLS termination. You might want to encrypt traffic differently inside your cluster so that you have more visibility into traffic when you're debugging or solving a production issue; that's where things like service mesh also come in. You also want to do load balancing at that layer — for example, load balancing across zones to avoid failures when one of your AZs goes down. You'd also want to do canary deployments; you can do that at the ingress/DMZ layer, or move it further into the stack depending on what your operations look like. But usually people can do a lot more at the ingress layer — that's what I'm here to champion.

The Ingress spec has been around since about 2015, so it's been around for four years, but it is still at v1beta1. What the spec allows you to do is define routing policies for your cluster. Any traffic that comes to your Kubernetes cluster goes through that ingress layer; a request is matched against a specific Ingress rule and gets forwarded to the service running inside your cluster. It's pretty vendor-neutral: whichever proxy you're using, it does not really matter, and you can swap out proxies easily. That's the main reason Ingress is getting so popular.
It was stagnant for a long time, and now the community has picked it up again. It's going through a transition and will reach the v1 API very soon, I think in Kubernetes 1.17. So let's look at the spec here. This is a pretty standard Kubernetes manifest, and on the left you can see that until Kubernetes 1.15 this lived under the extensions/v1beta1 API group. The extensions API group has been deprecated and we're trying to remove it from Kubernetes, so Ingress has been moved into the networking API group, and it's going GA soon with some fixes. We also want to expand it a lot more, since it's a fairly narrow spec and API. So let's look at the Ingress spec: a pretty standard Kubernetes spec. We have the name, say finance-apis, and then we have a rule with a couple of paths. What the rule says is: any request that arrives with example.com as the host header of the HTTP request will be forwarded to one of two services based on the path that matches. If the request path starts with /builds, it is forwarded to the builds service we are running, and that builds service must be located in the same namespace as the Ingress rule. Here we are not defining a namespace, so this goes into the default namespace. You can create Ingress rules in any namespace you wish, but they should be in the same namespace as your service; we want to keep that namespace soft isolation inside Kubernetes. Then we have the /orders path: /orders gets forwarded to the orders service running inside Kubernetes, on HTTPS port 443. This is not the complete spec, but it's a small example that shows the power of it: you can decouple the service you are running from how the traffic actually gets there. Folks who are familiar with HAProxy or NGINX or Apache could probably start thinking about it.
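The finance-apis Ingress being described might look roughly like this (the exact manifest isn't shown on the slide, so the service ports are assumptions):

```yaml
# Ingress routing example.com traffic to two services by path.
apiVersion: extensions/v1beta1   # networking.k8s.io/v1beta1 in 1.14+ clusters
kind: Ingress
metadata:
  name: finance-apis             # lands in the "default" namespace if none is set
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /builds
        backend:
          serviceName: builds    # must live in the same namespace as this Ingress
          servicePort: 80
      - path: /orders
        backend:
          serviceName: orders
          servicePort: 443       # HTTPS port, as described
```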
How would I write my server blocks or location blocks, and how would I go about handling these? So that's the main focus here: being vendor-neutral. Now, an important point to note is that this Ingress resource ships with Kubernetes, but if you create this Ingress resource — say you're running Minikube — nothing will actually happen. Ingress resources let you define routing policies, but you then need to install a controller which actually reads these policies and implements them. There are popular ingress controllers out there, Kong being one of them, which read these policies and configure themselves to route traffic accordingly. So that's the situation: you have this Ingress API inside Kubernetes, but it doesn't ship with an ingress controller. If you're running GKE, EKS, or any of the managed offerings, they do come with some basic ingress controllers, so that's another option you have. So this is where the question comes in: we have this Ingress now — how do we actually start using it? Because if we just create this Ingress resource, nothing is going to happen. That's where Kong comes in; Kong is one of the popular ingress controllers out there. I'm going to go a little bit into what Kong is and why you should be using it for ingress. Kong is currently a very popular cloud-native API gateway, although it's not just an API gateway anymore; it's moving more towards how you can orchestrate microservices and even do service mesh with Kong. It was initially open sourced in 2015 by a company called Mashape, and it's Apache 2.0 licensed, so fairly liberal licensing. Kong 1.0 was announced last year; it's been running in production for about four years now, and last year the 1.0 launch announced feature-completeness of the product.
The most important point I would like to emphasize here is platform agnosticism. This presentation focuses on the Kubernetes side of Kong, but anything we develop in our core product is never Kubernetes-specific. Most of our customers run in hybrid environments. Among the enterprises we work with, we are seeing a lot of Kubernetes adoption, but bare-metal data centers are still around, and OpenStack is still around. It's not that everybody has just moved to public cloud or is implementing Kubernetes in their shops; people still use other orchestration solutions, and Kong works across them. Kong can be deployed in all of these environments and can work with the service discovery mechanisms of all of them. We want to be very Kubernetes-friendly — Kubernetes integrations are extremely important to us — but that's not the only platform we focus our efforts on.

So, going into a little detail of what Kong can actually do for you: this is a rough overview of what Kong can accomplish. On the left side, I have the client. This client can be anybody, any user of any API. You can have a public API, like Twitter or Google, with clients accessing your APIs over the internet, or your clients could be your internal services: if you're running a microservices architecture inside your organization, you can have one service talk to another, with that traffic flowing via Kong instead. On the right-hand side, we have a few services, and these are services of all kinds. We have a gRPC service; we have REST — simple JSON over HTTP — and other kinds of RESTful services you could be running, JSON over HTTP being the most popular one today, with gRPC slowly eating into that.
Then we have database traffic: you could have the Postgres protocol flowing through, you could have MySQL, or you could have a different L4 protocol, something like Kafka, flowing through as well. So you can proxy any traffic through Kong, and Kong can do a certain magic on top of those requests. As a request flows through Kong, Kong can change the request or implement certain policies that you want. Each of the boxes I've drawn here is a plugin in Kong; Kong has an extensible, plugin-based architecture. So you can specify: for any request that goes to my builds service, which is JSON over HTTP, load balance it in this particular way — maybe do sticky sessions with it — and also give me Prometheus metrics out of it. For my gRPC service, I want Datadog metrics out of it. And maybe for the database traffic, if I have high latencies, I want a certain kind of logging for those slow queries. All of that you can customize in Kong. A very popular feature is rate limiting: you don't want people to overwhelm your services, and you can do all of this across these services as well. The reason Kong exists is that we try to keep things DRY. Rather than asking each and every application team to build these features, you can just run Kong in front of them, and each service contains only its core business logic. You're not implementing rate limiting in 20 different ways, and that also gives your APIs a homogeneous nature: if an external developer is reading your docs, they immediately know, okay, this is how your API works.
Kong is built on top of a very solid stack, is what I would like to say. Underneath is NGINX — NGINX needs no introduction to this audience; it's been around for more than a decade and powers a huge part of the internet. On top of NGINX sits something less well known, called OpenResty, which extends NGINX with Lua and LuaJIT. Lua is a very popular scripting language, historically used mostly in the gaming community and in embedded systems, and with LuaJIT you can get near-C performance, because it compiles down to very fast bytecode. OpenResty is another open source project, popularized by its heavy use at Cloudflare — Cloudflare's stack is largely C and Lua, meaning NGINX and Lua — and it is actively maintained and has a thriving community around it. By the way, OpenResty is what actually powers ingress-nginx, the first ingress controller that the Kubernetes community developed. OpenResty is a superset of NGINX: if you have any NGINX configuration, you can also provide it to OpenResty and it will work out of the box, and it has more features embedded inside, which make it pretty dynamic. Kong is built on top of OpenResty, where we make it really API-driven and cloud-native. By cloud-native we mean you don't have to restart any processes; your configuration changes very often, and as your Kubernetes services scale, we want Kong to automatically handle traffic across them and load balance across them, completely API-driven.
So you can script it to do a lot of things specific to your infrastructure, but we also ship standardized things like rate limiting: anybody who wants it gets a standard plugin, but you can change it and extend it in the ways you'd like. So how do we do Ingress with Kong? We have the Kong Ingress Controller, which makes Kong super nice and friendly with Kubernetes. This is a fairly simple architecture, as you can see. You have the Kubernetes API server on the left, and in the middle you have Kong pods, which run two containers. One container is Kong itself — the runtime, the proxy that we use — and co-located with it is a controller which talks to the Kubernetes API server (handling RBAC and everything), reads the Ingress resources you created, and configures Kong accordingly. Traffic then gets proxied to your services, and Kong can execute authentication and so on along the way. As I was saying, the Ingress spec is fairly narrow: it's currently HTTP-only, you can route based on the host header and on paths, and you can put in some TLS settings, but Kong can do a lot more than that, and we want you to use Kong for all those things. So we have the concept of plugins: we have a few different CRDs, and we'll get into these more in the demo. What they allow you to do is extend the Ingress API, using annotations and CRDs, to do more than regular Ingress. These are Kong-specific things; some of them could be incorporated into the Ingress API, and that's something we are trying to work on with the community, but some of them will always live outside the Ingress API. So let's get into the demo — I should stop talking, because that's what people are here for. Let's see.
Can you see my screen? Caitlin, can you see my screen? (Yeah, looks good.) Awesome. Okay. So I'm running a GKE cluster right now, just for ease of demoing, and whatever I do is actually going to be accessible on the internet — so I hope nobody on this webinar actually DDoSes me. All right, I'm going to follow a script. We have the ingress controller, a single all-in-one YAML manifest, which we're going to install right now. We're installing everything in a specific namespace called kong. It installs the custom resource definitions I just showed you on the slide, some RBAC-related resources and a service account which gives Kong permission to talk to the API server, and then a deployment and a config map to configure Kong. All right, so we have the ingress controller up and running: the pod is there and both containers are ready. We should also have a service — the kong-proxy service, which is of type LoadBalancer. Ideally, you would have only one LoadBalancer service, which is Kong, and everything flows through it — so you don't have to pay for 15 load balancers if you're running 15 services inside Kubernetes. Let's see everything we've installed: a pod, a service, a deployment, and a replica set. The replica set belongs to the deployment, and the replica set is what powers the pod. We're running only one pod right now — obviously, don't do this in production; run multiple instances for redundancy, to protect against the failure of a specific node. Let's see if Google got us a load balancer for Kong. All right, we got an external IP here; this should be routable on the internet.
What I'm going to do is set this IP in an environment variable, since I'll use it throughout the demo. Now I'm going to send a simple request to this endpoint. You'll see that Kong responded with a message: "no Route matched with those values." This is because we have not created any Ingress rules yet, so Kong does not know how to proxy the request or forward it inside the Kubernetes cluster. And you can see the response is indeed coming from Kong — that gives you the hint. What we're going to do next is install two services. These are fairly simple demo services. We have httpbin — the same service as httpbin.org — which echoes request content, and you can configure it to send back different responses based on the request. And we have an echo service, which responds with some pod details plus request and response details. All right, kubectl get pods shows I'm running four instances of the echo service, and I've got httpbin. ExternalDNS is something I'm using to manage DNS for my Kubernetes cluster; you can ignore it for this demo. So let's go ahead and create an Ingress rule. What we just did here: the kind is Ingress; the API version is extensions/v1beta1 because this is an older Kubernetes cluster. And we have a couple of rules. Actually, let's do kubectl get ingress demo — okay, let's describe it. That gives you a nicely pretty-printed form. In the first rule we haven't set any host, which means any request whose path starts with /foo gets forwarded to the echo service we created. And anything that comes to httpbin.yolo42.com gets forwarded to httpbin.
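A minimal manifest matching the two rules just described might look like this (the demo's actual file isn't shown, so names and ports are assumptions):

```yaml
# Demo Ingress: path-based rule with no host, plus a host-based rule.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - http:                        # no "host" field: matches any hostname
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
  - host: httpbin.yolo42.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin
          servicePort: 80
```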
So let's test this out. We set the proxy IP, and I'm going to send a request to a path that doesn't match any rule — nothing happens, nothing gets forwarded. Then, if you request /foo, you can see we get some information back. Let's look at the headers first. You can see that the actual server which responded to the request was the echo server, and the request was proxied via Kong. Kong matched the request against the Ingress rule we created and forwarded it upstream. We can also see which echo pod actually responded: since we have four pods, Kong will load balance across them. By default it does round-robin load balancing, but you could configure it to do sticky sessions, hashing based on source IP, or cookie-based balancing if you want. Then we have some request information. If we send a request with -X POST, we see the method is now POST, and if we request /foo/demo-cncf, we see the real path is /demo-cncf. Okay, so that's basic ingress working right now. Next, I'm running some automation to populate Cloudflare DNS with the cluster IP we just got — and we can see the DNS propagation has happened successfully. We had this rule that httpbin.yolo42.com goes to the httpbin service. If you're familiar with httpbin, you can request /status/200 and it will return a 200 — and you can see the request is being forwarded by Kong. Let's ask for the headers, and it sends back the headers we are sending: foo: bar, and we see that the request actually goes through Kong. These are all the request headers coming through — you don't see Kong here because these are the request headers; we'll see Kong in the response headers. So this is all pretty basic, right?
You've got ingress, you created an Ingress rule, you can proxy to services. Now we're going to get into Kong-specific parts. I'm going to create a KongPlugin resource. Let's go through it. We give it a name, and then we have the config of the plugin. We're saying that whenever this plugin gets executed, add a header: "demo: injected-by-kong". The plugin we're using is response-transformer, because we want to transform responses, not requests. We could also use request-transformer; in that case, before Kong sends the request to your service, Kong will inject the header. Here, when the response comes back from your service to Kong, Kong will inject this header before sending the response back to the client. Okay, so we created the resource. Now let's execute a request and look at the headers — I'm only outputting headers because we don't care about the body for this part. We see that the response does not have that header. The reason is that we created the plugin, but we didn't tell Kong when to actually run it: do we want to execute this plugin on every request, or only for some requests? For that, we'll go ahead and annotate our Ingress rule. We add an annotation saying that whenever a request matches one of the Ingress rules we created for httpbin and our echo service, execute this plugin as well. You can specify other plugins here too — you can run as many plugins as you want. We apply that, execute the request again, and now we see that the demo header is indeed injected by Kong.
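The response-transformer KongPlugin just described might look like this (the resource name is illustrative):

```yaml
# KongPlugin adding a response header before Kong returns to the client.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-response-header      # illustrative name
config:
  add:
    headers:
    - "demo: injected-by-kong"
plugin: response-transformer     # use request-transformer to modify requests instead
```

It is then attached to the routing rule with an annotation, roughly: `kubectl annotate ingress demo plugins.konghq.com=add-response-header`.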
And if we call httpbin — because it's covered by the same Ingress resource — you can see the header is still injected by Kong. So that's the basic version: you can inject a header, and you can also remove a header, replace a header with content from the request, or append to a header — you can do a lot of these things. I hope this gets the message across that you can do transformations on requests and responses as well. Next, I'll go through an authentication plugin. We've got httpbin, and if you open it in your browser right now, you'll see that anybody can access it. I've only got one pod of it running, so if enough of you actually hit it, it will probably fall over, and I don't want that to happen. So what we'll do is enable a plugin. You can use an external auth plugin, or you can use plugins where Kong does the authentication for you. Here we use the key-auth plugin: every time you make a request, Kong will ask for an API key, and if the API key is not present, Kong will reject the request. We created this plugin, and now we will annotate the httpbin service with it. Notice that last time I annotated an Ingress resource, while this time I'm annotating a Service. The reason: no matter which Ingress rule the request comes through, I want the service to always be authenticated. So if we have defined httpbin.yolo42.com, but we also had, say, another hostname for it in a separate Ingress rule, both would have authentication enabled. Now I enable that and send a request to httpbin, and we see "No API key found in request." The response comes from Kong, and it's a 401 Unauthorized. So now nobody can access the service.
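The key-auth setup described above needs very little configuration; a sketch (plugin resource name is illustrative):

```yaml
# KongPlugin requiring an API key on every request it applies to.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-auth             # illustrative name
plugin: key-auth
```

Attaching it to the Service (rather than an Ingress) enforces authentication regardless of which rule routed the request, roughly: `kubectl annotate service httpbin plugins.konghq.com=httpbin-auth`.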
If we want to get access, here's how we do it — and you could also use OIDC or LDAP if you want to authenticate against an identity provider you're running, or Okta; all of that is possible too. I'm going to use two other custom resource definitions that we have. I'll create a consumer named harry and associate a credential with that consumer. So I just created a KongConsumer harry — this is a Kong-specific concept — and then I associate that consumer with an API key: the type of the credential is key-auth, and the key here is "super-secret". This is an API key; if you use any public APIs, they usually have OAuth- or API-key-based authentication enabled. Now I make the request using this API key, and you can see we got the response back, with the headers. Let's tamper with the API key, and we get "invalid authentication credentials." Kong is doing the authentication for us; we have not touched httpbin at all. Now let's do another popular plugin, called rate-limiting. Rate limiting is an extremely popular use case for Kong, and here we're doing something a little different: we're adding a label called global and setting it to "true". What this means is that this plugin will always execute, no matter where the request comes from — whichever Ingress rule it matches — and we'll allow five requests per minute. We use a policy of local; that's something you can dig deeper into — you can use Redis for Redis-based rate limiting and so on, but those are more like implementation details. So we apply this and try to send a request. "No API key found," obviously — so we add the header with the super-secret key. There we go.
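The consumer and credential pair just created might be written like this (resource names and the key value are illustrative, mirroring the demo's "super secret" key):

```yaml
# A Kong consumer plus an associated key-auth credential.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: harry
username: harry
---
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: harry-apikey             # illustrative name
consumerRef: harry               # links the credential to the consumer above
type: key-auth
config:
  key: super-secret
```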
Now we send a request to httpbin, and we can see Kong injected rate-limit headers. The other plugin we enabled before is still executing — authentication is still going on — and Kong is now also doing rate limiting. We're allowing five requests per minute, so we can make four more requests. And if we send a request to the other service, we see that we still get rate limited: no matter which service you are accessing, you are being rate limited. Here the remaining count was four again because the minute rolled over — it's a per-minute rate limit. Now, if you keep executing this request a few times, you can make one more request, then you cannot make any more, and you get a 429 Too Many Requests. You're being denied access to the service if you make more than five requests. You can configure this to be hourly, or per second, or whatever rate you want. And maybe you want rate limits tied to authenticated consumers, maybe not — all of that is configurable as well. So that gives you rate limiting, and we have authentication enabled as well; we have already secured our service. We are only writing our API, and Kong is doing the rest for us. And this can obviously be done across multiple services, multiple namespaces, and so on. Next, I'll show off another plugin which is also pretty popular; we recently open sourced it: proxy-cache. It's basically response caching. We will also enable this at the global level. At the global level, you don't need to associate it with a specific Ingress or a specific Service; Kong will execute it for every request that flows through it. If you want to enable it for a specific Ingress rule instead, you can do that as well — it's pretty flexible based on each team's requirements.
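The global rate-limiting plugin described a moment ago might be defined like this (resource name is illustrative; the label and config follow what was described):

```yaml
# Globally-applied rate-limiting plugin: five requests per minute.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: global-rate-limit        # illustrative name
  labels:
    global: "true"               # runs for every request through Kong, no annotation needed
plugin: rate-limiting
config:
  minute: 5
  policy: local                  # per-node in-memory counters; "redis" shares state across nodes
```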
You can have your teams configure them, or maybe your ops people can do it. So let's go ahead and access our endpoint. Now we have a few things going on here: we have the rate limiter, obviously, and we also have X-Cache-Key and X-Cache-Status headers. Here we see the cache status is Bypass. The reason is that by default Kong will not cache HTML coming from the backend; by default, I think, only application/json is cached. That's configurable too, obviously — what content types you want to cache. So let's request /headers. We get the headers back, but we see the cache status is Miss, because it was the first request we executed. We're caching things in memory here; you can also cache in Redis, and if you have multiple services' traffic going through, you can cache selectively as well. On the next request we see the cache status is Hit. Now, watch what happens if I add another header: in the last request we got a cache hit, and in the next request I actually changed a header — I added a foo: bar header — but it doesn't appear in this response. So if you enable caching blindly, this is a wrong response: a response is being served from the cache that should not have been cached. But if there are endpoints you would like to cache — say, endpoints sending back metadata — so that Kong can serve the cached info when your service is not available, you can do those tricks easily. So that's what we're doing: we're caching, we're rate limiting, and we have authentication enabled. That's all I have for the demo right now. There are a lot of other features, including Kubernetes-specific ones: we integrate heavily with Prometheus, and we integrate with OpenTracing as well, so you can use those to get metrics.
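A sketch of the global proxy-cache plugin used in this part of the demo (resource name and exact config fields are assumptions based on what was described):

```yaml
# Globally-applied response caching in Kong's memory.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: global-proxy-cache       # illustrative name
  labels:
    global: "true"
plugin: proxy-cache
config:
  strategy: memory               # in-memory cache; selective/Redis-backed setups also possible
  content_type:
  - application/json             # matches the demo: JSON is cached, HTML is bypassed
```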
For every request that flows into your cluster, you get a dashboard of metrics without having to instrument your services — you don't have to implement Prometheus metrics in each and every service. You can just get that out of Kong, right? These are black-box metrics, meaning request latency, bandwidth consumed, error rates based on HTTP status codes, or maybe TCP timeouts. And if you have gRPC, then you have gRPC error rates. Kong can do all that. We also integrate heavily with cert-manager and ExternalDNS. I'm using ExternalDNS right now, so if I create any Ingress resource, it automatically populates my DNS server. As you could see, I created that Ingress rule and it automatically created an A record for it. We also do routing by header and by HTTP method, if you want more fine-grained routing. We introduced something called KongIngress, which allows you to tweak load-balancing configurations; health checking and circuit breakers are also possible. So all of that can be done in Kong itself. We have a very strong roadmap. Kong can currently proxy gRPC, and it can proxy TCP and TLS as well. So if you have a custom protocol and you just want to do TLS termination at Kong, Kong can do that for you; if you need custom protocol parsing at Kong, it can do that too. gRPC routing just recently landed with 1.3, so you can expose your gRPC service via Kong and Kong will do the routing for you. You can do transformation and logging — the whole set of plugins is available for gRPC as well. We're coming up with a new release next month, and that's where we're getting compatible with Istio as well, so you can do mutual authentication with Istio internally in your cluster and Kong can be the ingress point.
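The KongIngress tuning mentioned above can be sketched like this — an illustrative resource (name and values made up) that switches the load-balancing behavior and adds active health checks for the upstream pods:

```yaml
# Sketch: a KongIngress resource tuning the upstream (values illustrative).
# It is then associated with a Kubernetes Service via an annotation; the
# exact annotation key depends on the controller version.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: tuned-upstream
upstream:
  hash_on: ip               # consistent-hash on client IP instead of round robin
  healthchecks:
    active:
      http_path: /health    # probe each pod on this path
      healthy:
        interval: 5
        successes: 2        # two passing probes mark a pod healthy
      unhealthy:
        interval: 5
        http_failures: 2    # two failing probes eject a pod (circuit breaking)
```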
So Kong will be responsible for north-south traffic — any traffic incoming into your cluster — while any traffic flowing inside your cluster can be handled by a service mesh, which can be Istio or Linkerd, and Kong can work as a service mesh as well. Upstream TLS is coming up as well, which will allow you to encrypt traffic and have mutual authentication between Kong and the service you're running inside Kubernetes — a requirement for some of our users. And we have a few other admission webhooks coming, making it all the more easy to use. So that's the roadmap we have for this year. And that's it, that's all I have for this presentation. Installation is pretty simple, it's open source, so feel free to try it out. We've been getting a lot of feedback and adoption from the community, and we're very happy with that. If you'd like, just go to the GitHub repositories — there's a whole slew of docs. This week we published a lot of guides on how to do the things I showed you in the demo, so certainly do check those out. And that's it, that's all I have. Thank you. With that, maybe we can open the floor for questions — back to you, Caitlin. Yeah, we have a lot of questions coming in. So thank you, Harry, for the great presentation. If you have questions, please drop them in the Q&A tab at the bottom of your screen. I will say that we do have a lot coming in, so if we don't get to them in the next 15 minutes, we do have a Kong channel on the Kubernetes Slack — please join us there if you want to dive deeper into this. I'll also just say that we have a few events coming up later this year. We have Kong Summit coming up October 2nd and 3rd in San Francisco, with both some great Kong-specific workshop sessions and more sessions around cloud-native technologies in the community. There's a code here that we're offering if you're interested in joining us. If you go to the next slide, Harry — yep, there we go.
We will also be at KubeCon and CloudNativeCon in San Diego — this will be the largest KubeCon to date. We'll have a day-zero event there, where we'll cover this plus more service mesh content. The link to our co-located event isn't up yet, but I will add it to the slides once it is. And then we'll also have a booth there, so do stop by and say hi. So let's get into these questions. Going back to the demo — I'm going to start here just to make sure we clarify a few points — was there a Kong controller running on the master node, or how does it accept the demo requests? Yeah, that's a good question. We do not have a controller actually running on the master node. If we run kubectl get pods, we see we have two pods running here: one pod is running with two containers, and one of those containers is the controller. So we do not need anything on the master at all. The controller talks to the API server on the master node and fetches those Ingress YAMLs — those objects are stored inside etcd, Kubernetes' object store. So you don't need to configure anything special inside your Kubernetes control plane at all; everything just works by installing this. You don't need any configuration in the kube-apiserver, kubelet, kubeadm, or anything like that. Awesome. And then: in the Ingress demo, you didn't annotate the Ingress with the ingress class kong — how is that possible? You mean annotate with kubernetes.io/ingress.class, yeah. So by default, if you don't annotate an Ingress with a class, then any ingress controller will accept that Ingress and configure it. If you have, let's say, the GCE ingress controller running, and you have Kong running, and you have another controller running, then only if you annotate the Ingress with kong would Kong alone accept it. So by default, if there is no annotation present, we will accept that rule.
But if you want to segregate those Ingress rules across different controllers, you can specify those classes. Or you can run multiple Kong ingress controllers as well — one for internal traffic, one for external traffic, or one per business unit if that's your use case — and then annotate with those classes to segregate them. So that's a good question. And then: if there is only one load balancer, how do you avoid a single point of failure? Aha, okay. So the load balancer we are using here — we are on GKE. If you see, we have only one IP address, right? If we go back to the demo and run kubectl get service, we see there is only one IP address. What's happening under the hood is that Google is provisioning a global load balancer, which uses anycast. So even though there is only one external IP address, it is actually being handled by an array of machines — this is a managed load balancer from Google, and it comes with the promise that if a single machine goes down, it won't fail. That's why it's not a single point of failure. And the same goes for Kong: you should probably run multiple instances of the Kong pods. These pods are stateless, so even if one of them dies, you don't have a failure. If you're running inside Amazon's EKS — I think they call it EKS, yeah — Amazon's load balancers are usually provisioned with two IP addresses, for a bit of failover. Those IP addresses can change, so you get a CNAME instead. So it really depends on the specific implementation of the load balancer Service: every cloud provider implements it differently, and on bare metal you might not have one at all. So yeah, I hope that answers the question. Awesome.
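The class-based segregation described above can be sketched with two Ingress resources — hostnames and service names here are illustrative, and the manifests use the extensions/v1beta1 Ingress API that was current at the time of this talk:

```yaml
# Sketch: two Ingresses split between controllers by class annotation.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-api
  annotations:
    kubernetes.io/ingress.class: kong     # satisfied only by the Kong controller
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx    # left to the Nginx controller
spec:
  rules:
  - host: dash.internal.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: dashboard
          servicePort: 80
```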
And then: is there a soft limit on how many Ingress resources a Kong controller can manage? That's a very good question. I've tested it with about 500 resources, and that was really a test of one of the libraries we're using, to make sure we're doing pagination and those things correctly. Using a lot of Ingress resources is not a problem — I think the Kong ingress controller will easily be able to handle those. It's not recommended that you use a very large number of consumers and credentials, but creating a lot of Ingress resources and a lot of plugins should not be a problem. If you go beyond, let's say, 1,000 Ingress resources, you'll have to tweak certain Kong settings — give it a bit more memory — and this really depends on your traffic patterns, so you tweak them as you go. But by default there's no hard limit that we have imposed or that we have run into yet. If you find one, please do report back. Thank you. Awesome. And then: how does Kong work if there is more than one ingress controller installed in Kubernetes — for example, Nginx and Kong in the same cluster? Yeah, so that ties back into the same ingress.class annotation discussion, right? What you can do is annotate certain Ingress rules with the ingress class nginx and certain others with the ingress class kong — and the class value itself is configurable as well. Once you annotate them, only the Ingress rules carrying Kong's class will be satisfied by Kong. You definitely do not want two ingress controllers fighting over the same Ingress, right? If we look here — kubectl get ingress — we see that the address of this Ingress is updated with the address of the load balancer, right?
So if you have two ingress controllers running, they'll constantly keep overwriting that address and keep fighting to satisfy those resources, which can result in conflicts. So the ingress.class annotation, which is very widely adopted, is how you'd handle multiple ingress controllers of the same kind or of different kinds. And then we have a request: could you maybe switch back to the slide with the links on it? Sure. There you go. We have a lot of questions here about comparison to the Nginx ingress controller — why would you recommend Kong over Nginx? Yeah. So obviously I have my bias here, but I would ask you to go look at the Nginx ingress controller documentation and then at Kong's website, right? Both are powered by the same underlying stack: OpenResty on top of Nginx. A lot of the libraries used in the Nginx ingress are shared, and both have tried to make Nginx as dynamic as possible. With Kong, what you get is a lot of enhancements that don't come out of the box with the Nginx ingress. You have this plugin architecture, so you can do authentication, caching, and those kinds of things, and they are all customizable, right? If a certain plugin doesn't fit your need, you can always tweak it, and if you have some specific implementation you want, you can write your own custom plugin. We also have serverless plugins, so you can execute a blob of Lua code inside Kong. For every request that goes through your billing API, if you want to do some special magic, you can write a small Lua script, put it inside Kong, and Kong will execute it. That plugin-based architecture gives you a lot, and then there's load balancing and TLS management on top of it, right?
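The "blob of Lua" idea can be sketched with the serverless pre-function plugin. This is illustrative (the resource name and header are made up), showing a small inline snippet run in Kong's access phase for matching requests:

```yaml
# Sketch: the serverless pre-function plugin running inline Lua
# (name and logic illustrative).
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: billing-audit
plugin: pre-function
config:
  functions:
  - |
    -- executed before the request is proxied upstream
    local debug_hdr = kong.request.get_header("x-debug")
    if debug_hdr then
      kong.log.notice("debug request for ", kong.request.get_path())
    end
```

Attached to an Ingress or Service the same way as any other KongPlugin, this runs without writing and packaging a full custom plugin.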
So you can do a lot of things — upstream TLS, downstream TLS, mutual TLS if that's needed — and those are much, much easier to run. Also, the Nginx ingress is something very specific to Kubernetes. If you adopt the Kong ingress controller and, let's say, a certain part of your organization doesn't run in Kubernetes, they can still run Kong, and you can have a lot of knowledge sharing and a lot of transparency. Another advantage: if you were to phase out Kubernetes in the future, there is a small command for dumping the ingress controller's configuration — you can export that, import it in a different environment, and things would work. So you wouldn't have to start rebuilding that environment from scratch. These are the things that make it very extensible but also very much compatible with whatever other orchestration platforms you have. And the community around it — the plugin community — is huge, so you can take advantage of that as well. Is it possible to do rate limiting based on headers, like an authorization header? The idea would be to apply rate limiting on user tokens that share the same Kong consumer. Yeah — I don't know if you saw it in the demo, but we were actually limiting by consumer, so you can definitely do that. Based on authentication — if you have OIDC authentication, you can have that consumer created automatically in Kong; if you're doing API-key authentication, you can have a rate limit for every consumer. You can also have a different rate limit per consumer: maybe certain consumers are paying you more, so you give them a higher rate limit, and those paying less get a lower limit. Or you can have a uniform limit across all consumers, or exceptions, you know.
So for the httpbin service I have rate limit X, and for my billing API I have rate limit Y — all of that is pretty configurable out of the box. Awesome. And then: can rate limiting be applied on an origin-policy basis, for example by origin IP or origin geolocation? I don't think that's possible out of the box. It might be possible via header-based routing — based on the origin header, you change your request or response path or patterns. But it should be very easily possible with a small patch, right? I'm not too sure it's possible out of the box, but you might have to write a small Lua snippet to do it. And then: is Kong available as an operator? Good question. We are trying to make Kong easier and easier to run. There is a Kong operator at more of an alpha stage, and we're putting some resources into it, so definitely check that out. The link would be Kong slash Kong operator on GitHub — you can go to github.com slash Kong slash kong-operator. We have a Helm chart available as well, which helps you install Kong. And we are also planning to roll out Kustomize support; Kustomize is something that is becoming pretty popular as well. And then we had a couple of questions about DB-less: is there a plan to go that way in the future? So the demo that we just ran is actually DB-less — we are running Kong in DB-less mode, meaning Kong is not backed by any database. You can also install Kong with a database, which is the more traditional and more popular setup — DB-less only came out earlier this year, I think. With a database, you can put some custom configuration manually inside the database and have the rest of the configuration managed by the ingress controller.
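The per-consumer rate limiting discussed above can be sketched roughly like this — all names are illustrative, and the exact annotation key and credential wiring vary across Kong ingress controller versions:

```yaml
# Sketch: a consumer given its own, higher rate limit
# (names, key, and annotation illustrative; details vary by version).
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: premium-user
  annotations:
    plugins.konghq.com: premium-rate-limit  # attach the plugin to this consumer only
username: premium-user
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: premium-rate-limit
plugin: rate-limiting
config:
  minute: 100   # higher allowance than the global limit
  policy: local
```

A consumer-scoped plugin overrides a global one of the same type for requests authenticated as that consumer, which is how paying customers can get larger quotas.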
I would recommend you put everything inside Kubernetes manifests and let those configure the database. Using a database has some pros and cons, and those are discussed in the documentation, so I'll point you there. And is there a recommended approach to using authentication plugins and selectively applying them to Ingress routes — for example, one service that has some unsecured endpoints and some secured endpoints? Yes, you can do that. You create one Ingress resource which secures all the endpoints, annotated with the authentication plugin. Now let's say — a very trivial example — you don't want /health to be authenticated. Then you create another, more specific Ingress resource: /service-a requires authentication, but /service-a/health does not. And you can make it more granular or more general as well — maybe /service-a/public doesn't require authentication at all. That's certainly possible, pretty easily; you just have to create another Ingress resource. And can you talk a little bit about the difference between Kong and Kong Enterprise Edition? Sure. Kong is the open-source product and Enterprise Edition is what we sell to our enterprise customers; both run on the same runtime — the runtime is not different at all. Kong Enterprise ships with more enterprise features: a nicer UI to manage Kong, developer portal functionality, RBAC inside the Kong admin API itself. That's where Enterprise comes in. And then we are doing a lot of things around machine learning and how we can help you manage your APIs better.
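The selective-authentication pattern just described can be sketched as two Ingress resources — paths, names, and the annotation key are illustrative (the key differs across controller versions), using the extensions/v1beta1 API current at the time of this talk:

```yaml
# Sketch: auth on the broad path, none on the more specific health path.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-a-secured
  annotations:
    plugins.konghq.com: key-auth-plugin  # a KongPlugin enabling key-auth
spec:
  rules:
  - http:
      paths:
      - path: /service-a
        backend:
          serviceName: service-a
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-a-health
spec:
  rules:
  - http:
      paths:
      - path: /service-a/health   # more specific route wins; no auth plugin attached
        backend:
          serviceName: service-a
          servicePort: 80
```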
Since Kong has access to the traffic, it can figure out what's going wrong or what has recently changed — all of those are Enterprise features. The core routing always remains inside the open-source version, and Kong Enterprise inherits that and builds those Enterprise features on top of it. That's the basic gist — I work a lot on the open-source side, so I'm doing a really terrible job of explaining what Enterprise is. Yeah. And just to reinforce: everything we've covered here is an open-source feature. Yes, yes. Awesome. Okay, unfortunately we are at time. I know we have more questions that we didn't get to, so we will take those offline and get you all answers. If you want to chat in the meantime, again, please join us on Slack — the Kubernetes Slack, in the Kong channel. Thank you all for the great questions and for joining us today. Again, the recording and slides will be online later, and we look forward to seeing you at a future CNCF webinar. Thanks, everyone. Thank you, everyone. Have a good day.