who is joining us today. Welcome to the CNCF webinar, API Gateway and Ingress Management with Kong for Kubernetes. I'm Chris Short, Principal Technical Marketing Manager at Red Hat and a Cloud Native Computing Foundation Ambassador. I'll be moderating today's webinar, which will be presented by Harry Bagdi, Senior Cloud Engineer at Kong. Before we get started, though, I have a few housekeeping items to go over. During the webinar, you're not able to talk as an attendee, sorry, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, be respectful of all your fellow participants, the presenters, and their time. With that, I will hand it over to Harry, Senior Cloud Engineer at Kong. Take it away.

Thank you, Chris. Welcome to the webinar. I'm Harry. I lead all the software development on the Kubernetes side at Kong. Kong is an enterprise product company, but everything I'll be talking about today is open source, so all of it is available to you right now. So let's get started with the agenda for today. Today's agenda is pretty simple. We have a new open source gateway, Kong 2.0, which we released earlier this month, and then we'll get into how Kong works with Kubernetes and helps you do ingress management. We also have a new release of our ingress controller, which we call Kong for Kubernetes 0.7, so we'll take a deep dive into some features and go through a demo. It's a pretty short and simple talk, so let's just dive in.

Kong was initially released in 2015, and we released the 1.0 version in 2018. Over the last year we have been working towards Kong 2.0. Kong 2.0 brings in a major feature asked for by the community: Go plugins. Kong is built on top of nginx and OpenResty, and it's written completely in Lua. We use LuaJIT for performance reasons, and Kong is blazingly fast, but the Lua community is pretty small. So we've opened that up to enable you to write plugins in Go. Go is, as we all know, the cloud native language: Kubernetes, Docker, and most software written for the cloud nowadays is written in Go. So we have extended plugin support to Go. You can do request transformations, authorization, authentication, and so on; everything that is possible in our Lua plugins is possible in Go.

The second most important feature is data plane and control plane separation, which we call hybrid mode. Kong traditionally required a database to run. With Kong 1.1, we removed that requirement altogether, and Kong can function without a database. In hybrid mode, the control plane nodes are associated with the database, and they configure all the data plane nodes. The data plane nodes do not need any connection other than to the control plane, which could be running in one place and configuring all your ingress clusters across the world, across multiple Kubernetes clusters as well if you want that to be the case.

We also released a new plugin called the ACME plugin, which is named after the ACME protocol; the most popular certificate authority using it is Let's Encrypt. This plugin essentially allows you to automatically encrypt your API traffic using TLS certificates.
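For reference, enabling the ACME plugin in a DB-less Kong 2.0 setup might look roughly like the following declarative configuration snippet. This is a minimal sketch, not something shown in the webinar; the config field names (account_email, domains) and the email/domain values are assumptions for illustration, so check the plugin documentation for your Kong version.

```yaml
# Sketch: enabling the ACME plugin via Kong's declarative (DB-less) config.
# Field names under config are assumptions for illustration.
_format_version: "1.1"
plugins:
- name: acme
  config:
    account_email: ops@example.com   # account to register with the ACME CA
    domains:                         # domains Kong may request certificates for
    - api.example.com
```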
So you get HTTPS by default. That's another feature. And then we have a lot of features under the hood, such as buffered proxying, which allows for more advanced request and response transformations. It's all in the release details, so do check it out. If you're using Kong 1.x already, Kong 2.0 does not have a lot of breaking changes. There are only two that I'm aware of, and they're pretty easy to work around, so the upgrade path is also very simple and easy.

With that, let's talk about Kubernetes, since this is the Kubernetes community we are addressing. Kong is essentially agnostic to the platform it runs on, but we cater very heavily to the Kubernetes ecosystem because it allows you to do so much automation. Kong integrates tightly with Kubernetes, but it also works across hybrid infrastructures where you have multiple kinds of orchestration platforms deployed.

So let's get into what the Ingress spec is, what an Ingress controller is, and how Kong fits into it. Ingress is a specification that was initially launched around 2015 or 2016, and it has been stuck in the v1beta1 phase for three or four years now. It is moving to v1 with Kubernetes 1.18, which is scheduled to be released later this quarter. Graduating to v1 means the spec will always be supported by Kubernetes. There is also a v2 spec being developed by the community, called "Service APIs" for lack of a better name.

So what is the Ingress spec? Ingress is a vendor-neutral way of defining access to the services that run inside Kubernetes. You might have a few hundred services running, but you want a single point of entry through which you can control how traffic is routed, how it is authenticated, and how you log and monitor this traffic and collect metrics on it. Ingress right now is HTTP-based only, so you can route traffic based on HTTP host headers (virtual hosts) and paths. Kong, and some other vendors, extend that quite a bit, and that's what we'll be looking into. Ingress has wide adoption: a majority of cloud providers ship their own controllers, and the community has produced a huge number of controllers that conform to the spec, which makes it easy to switch between vendors. So if you're running some other controller, you can swap it out and put a different controller in your Kubernetes cluster with a relatively easy migration.

So let's look at the spec itself. This is a sample Ingress resource based on the spec. As you can see, the API group is networking.k8s.io/v1beta1 and the kind is Ingress. We have the usual metadata here; the name is finance-apis. And then we have the spec section. What you see in green is referred to as the routing policy. What we're saying here is: whenever any HTTP request comes into our ingress point with the host header, or virtual host, example.com, follow these two rules. The rules are: if the request path starts with /bills, send it to the bills service on port 80, which is running inside our Kubernetes cluster; and requests to the /orders endpoint are sent to the orders service. So using this same spec, you can essentially tie together the microservices running inside Kubernetes and present them as one single API.
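For reference, here is a reconstruction of roughly what that slide's resource looks like. The host, paths, and service names come from the talk; the rest follows the standard v1beta1 Ingress schema.

```yaml
# Sketch of the sample Ingress resource described on the slide.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: finance-apis
spec:
  rules:
  - host: example.com          # virtual host to match
    http:
      paths:
      - path: /bills           # requests starting with /bills
        backend:
          serviceName: bills   # go to the bills service
          servicePort: 80
      - path: /orders          # requests starting with /orders
        backend:
          serviceName: orders  # go to the orders service
          servicePort: 80
```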
Or maybe you have different groups of microservices that you want to present differently; this spec allows you to define those policies. As you can see, there is nothing vendor-specific here. We are not saying how to do this; we are just declaring what we want, the desired state, which is what we refer to as declarative configuration in Kubernetes.

So Kong for Kubernetes is an ingress controller. Right now we are talking about Kong, but you could imagine any other reverse proxy or cloud load balancer in Kong's place. And then you have a controller component. Controllers are the way in Kubernetes to manage configuration and reconcile state: you specify a desired state, and the controller drives the current state of the Kubernetes cluster to match that desired state. As we can see here, we have the Kubernetes API server on the left, a proxy running (in this case Kong), and the controller sitting between those two components. Now, Kong, or for that matter any other vendor's proxy, does not understand what the Kubernetes API server is saying out of the box, but you can put a controller in place, and the controller essentially translates the API server's configuration into Kong's configuration. So the controller is what configures the proxy. The controller does not itself handle the traffic; it just configures the proxy, which then sends the traffic on to the different services. In our case, those are bills, orders, and inventory.

So let's focus on the controller piece, because this is the part that is Kubernetes-specific. The proxy can be agnostic to Kubernetes; load balancers and proxies run in non-Kubernetes environments anywhere in the cloud. The controller is what configures the proxy and makes it specific to Kubernetes. This also opens up possibilities where the controller interacts with different CRDs: imagine using cert-manager to do certificate management, or configuring Prometheus metrics, and things like that. All the intelligence is essentially baked into the controller, and the proxy software has the proxying capabilities, so each of those two components does its own thing.

So let's focus on Kong for Kubernetes and what Kong's controller can do. We released a new version, Kubernetes Ingress Controller 0.7, which is compatible with proxy versions Kong 1.x and Kong 2.x. With 0.7, we have released encrypted client credentials. Credentials are now stored in the Kubernetes datastore, which is etcd, and we use Kubernetes secrets to store them. So you get credentials encrypted at rest that are loaded into Kong dynamically by the controller. Kong does not need a database or anything; it's simply a deployment with a pod having two containers. The controller picks up all this configuration and loads it into Kong, and Kong can then verify your client credentials. These can include anything from key authentication to basic authentication to some form of OAuth or OIDC, any kind of authentication. You can store the credentials in etcd and Kong loads them up, or you can use your own identity provider as well.
Another feature is gRPC routing: Kong now has native gRPC routing with plugin support. So essentially, if you are using gRPC or gRPC-Web to expose your services, instead of JSON over HTTP or other protocols, you can again use Kong to expose this traffic. Kong is aware of each and every gRPC request, so you can route gRPC requests to different services based on gRPC methods. If one service is handling your ingestion of events and another service is serving read requests, you can split those up at Kong. And Kong can also run plugins on this traffic, so you get Prometheus metrics and all the authentication schemes with Kong right out of the box. You can keep your gRPC service fairly small, with just the business logic in it, and Kong talks gRPC with the client and with your service, so upstream and downstream both ways.

Another highly requested feature was mutual TLS. A lot of customers run in very sensitive environments where compliance is a very important part of their infrastructure. In such cases, people want to encrypt even internal cluster traffic, and Kong allows you to do that using mutual TLS. You can bring in your own certificate authority, use a default one, or use SPIFFE for that, and Kong loads that certificate up and can authenticate itself to your services. Then on the service side, you could have some authorization taking place so that only Kong can talk to it, and you can prevent direct service-to-service communication if you need to. Or you can use something like Istio or Kuma or any other service mesh to manage that for you. Kong fits pretty nicely into the service mesh ecosystem: Kong focuses on the north-south ingress traffic, and you use the best service mesh solution for you.

So that's the general overview of 0.7. With that, let's get started with a demo. We're going to look at a few things: gRPC, the rate-limiting plugin, and a little bit of the admission controller as well.

As you can see on my screen, I have a Kubernetes cluster. I'm using GKE for this demo just because I've set that up in my dev environment, but you're free to use any Kubernetes cluster. All you need is support for a Service of type LoadBalancer, and you can get away without even that if need be; you can use a NodePort or whatever else suits your environment. So with that, I'll go ahead and deploy Kong for Kubernetes first, using this handy bitly link here. I'm just going to apply that; it's a single-manifest installation.

With this, we have created a few resources, so let's take a look. First we have created a namespace for Kong, in which we run all the Kong-specific services. Then we have four custom resources. Custom resource definitions allow you to define your own APIs on top of the Kubernetes APIs, and we use these custom resources to extend the Ingress specification. These are things that are specific to Kong but are not present in the specification; the Ingress resource itself is fairly narrow, since it reflects the general consensus. As I said earlier, we're working on a v2 spec as a Kubernetes community in SIG Network, so if you're interested, please come on board in the SIG Network channels, where you'll find a whole new set of APIs being designed. Next, we create some RBAC resources.
These essentially allow Kong's controller to talk to the Kubernetes API server. As we saw, there is Kong, there is the controller, and there is the API server; the controller gets permissions like listing all the Ingress resources, knowing where the pods are, and things like that. Then we have a config map for some default server blocks, which is not strictly required. And then we have the services. We have two: the kong-proxy service, which is of type LoadBalancer, and because we are running on GKE, GKE automatically provisions a load balancer and assigns an external IP address. We also have a kong-validation-webhook service, but we are going to set up the webhook next.

If you take note of this IP address, you can actually hit it from your own machine as well; this is a public IP address. I'm just going to set an environment variable so that we can use it later on. If I send a request to the proxy, we can see that Kong responds with a 404, nothing found. This is because we do not have anything configured in our cluster: as we can see, there are no Ingress resources, so Kong does not know where to send this request.

Next, I'm going to set up the admission controller. Let me show you the script that I'm running. I'm just using the OpenSSL client to generate a self-signed certificate for the kong-validation-webhook service. Then I'm creating a Kubernetes TLS secret. Once I have that, I enable the admission webhook: we have a ValidatingWebhookConfiguration, and we are going to validate each KongConsumer and each KongPlugin that is created or updated. So let's go ahead and set that up. We generated the self-signed certificate, created a secret, updated the deployment to use the self-signed certificate and private key, and finally created the validating webhook configuration. This makes it much harder for users to shoot themselves in the foot: while configuring, it's super easy to not indent things correctly or to mistype something, and this will catch most of those mistakes.

All right. So we have our Kubernetes ingress set up using Kong. Next, I'm going to install a gRPC service called grpcbin. This is a pretty straightforward deployment where we are deploying a Service of type ClusterIP, so it's an internal service, plus a single pod running the grpcbin service. This service understands the gRPC protocol on port 9001. So let's see how we can expose this gRPC service to the outside world. Here I'm creating an Ingress resource named demo, with the / path; we are not specifying a host header, so we want every request that comes in to go to the grpcbin service. Now let's go ahead and create this Ingress resource. One thing to note is that Ingress is HTTP by default; the spec has no way to say this is a gRPC service. So we make use of annotations here. We specify a set of protocols, essentially telling Kong to treat the traffic as gRPC: any traffic that comes from the client should be treated as gRPC. So we update the Ingress, and we also put an annotation on the Service resource to say: whatever talks to this service should use the gRPC protocol.
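Pieced together, the pair of resources just described might look roughly like this. It's a sketch: the annotation keys shown are an assumption based on the controller's annotation conventions of that era, so consult the ingress controller documentation for the exact names in your version.

```yaml
# Sketch of the demo Ingress and grpcbin Service with the gRPC annotations.
# The annotation keys are assumptions for illustration.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    configuration.konghq.com/protocols: grpc,grpcs  # treat client traffic as gRPC
spec:
  rules:
  - http:
      paths:
      - path: /                 # no host header: match every request
        backend:
          serviceName: grpcbin
          servicePort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: grpcbin
  annotations:
    configuration.konghq.com/protocol: grpc         # talk gRPC to the upstream
spec:
  type: ClusterIP
  selector:
    app: grpcbin
  ports:
  - port: 9001
    targetPort: 9001
```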
And yes, in this case that is how the service is configured. So we have configured that; now let's go ahead and hit the service. Here I'm using a tool called grpcurl, which is very similar to curl but for gRPC; it makes it super easy to talk to gRPC services. I'm calling the hello service and the SayHello method. As you can see, I'm using the insecure flag because we have not set up any TLS certificate, so Kong will use a self-signed certificate, and I'm sending the request to port 443. The method resolved correctly to the SayHello RPC, and we have the response headers received; we can see the content type is gRPC, and the response is basically echoing the same content back. So instead, let's say "hello CNCF community", and we can see "hello CNCF community" echoed here. We have gRPC requests going back and forth, which is nice: you can just expose gRPC traffic.

But what does this buy us? We could do this just by using a Service of type LoadBalancer. This is where extending your ingress comes in: how we can do more things once we have exposed traffic via Kong. Here we are going to create a custom resource called KongPlugin. Let's go ahead and create that. And that returns an error. As we can see, the admission webhook failed the request, and it says that "foo" is an unknown field. What happened is (and this was intentional) I put in a "foo" config field, which is not a valid configuration for this plugin. Plugins in Kong are essentially a way to extend Kong. You can create any number of custom plugins, and there are a lot of plugins already bundled with Kong, Loggly being one of them. You can log to Elasticsearch, or use Fluentd, or whatever your logging infrastructure is; but let's say you are using Loggly. So we delete the erroneous field and create the plugin.

We have created this plugin inside Kubernetes, but we have not configured Kong to tell it when to run this plugin. So now we'll kubectl edit the demo Ingress and add yet another annotation, instructing Kong to execute the Loggly plugin whenever any request matches any of the rules in this Ingress specification; here we have just a single rule. All right, that's configured. Now let me see if I can open the Loggly window. I have the Loggly dashboard up here and I'm going to search the last 10 minutes. As you can see, there are no events at all; this is just a simple trial account on Loggly. I'm going to go ahead and send a request now, the same grpcurl "hello CNCF community" call. As you can see, there was no increased latency; Kong's proxy did not inject any noticeable latency, and the upstream took 12 milliseconds to execute the request. We got the response back. Now, if everything is good and the demo gods are kind to me, I should see a request here. Kong batches these, so sometimes it can take a while. But as we can see, we have got a request here, and we can see all the details (and this is configurable): what headers were sent, what the request and response latencies were, and so on. And in the raw message, we can see that the upstream URL was /hello.HelloService/SayHello.
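Reconstructed from the steps above, the plugin resource and the annotation that attaches it look roughly like this. The resource name add-loggly, the Loggly token placeholder, and the annotation key are assumptions for illustration; the loggly plugin's actual config fields are in the Kong plugin docs.

```yaml
# Sketch of the Loggly KongPlugin and how it is attached to the demo Ingress.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-loggly                      # hypothetical resource name
plugin: loggly
config:
  key: <your-loggly-customer-token>     # hypothetical placeholder token
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    plugins.konghq.com: add-loggly      # run this plugin on matching requests
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: grpcbin
          servicePort: 9001
```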
So you can get all kinds of logging here; you do not even have to implement logging in your microservices to get this out of the box with Kong. All right. So that's proxying gRPC traffic and how to use plugins on gRPC traffic.

Now let's take a look at what else we can do. Let's look at how we can take an API that we have developed, expose it, and have different tiering capabilities. We're deploying httpbin, which is a pretty popular echo service for HTTP, and I'm going to create two Ingress resources. Here I'm creating an httpbin free tier: paths starting with /free send all traffic to the httpbin service. And then I have a paid tier, where all requests starting with /paid also get sent to the same service. So we have one service but two endpoints. If I request /free/status/200, I get back a response; the request is proxied via Kong, and the server is httpbin. And if I send the same request to the paid endpoint, I get the exact same response. Two endpoints, processed by the exact same service.

Now let's start introducing differences between the free and the paid tier. The first thing I'm going to do is key authentication on the paid tier. The free tier will be open to the world; we don't want any kind of authentication on it for now. So I created a plugin: the plugin type is key-auth, so we are doing key authentication, and the name references httpbin. Next, I'll edit our paid tier Ingress resource and ask Kong to execute the plugin. Just as we did before with the Loggly plugin, this time we are enabling an authentication plugin. And now if I send a request to the paid endpoint, Kong returns a 401 Unauthorized, because we did not send an API key.

So how do we get an API key? For that, we create a secret in Kubernetes. We specify that the credential type is key-auth, and the key here is "my-super-secret-key". Of course, this is not the most secure way of doing it, because it will end up in my bash history, but let's go ahead and create the secret. Kubernetes will encrypt and store this in its database. And then we have a Kong consumer: we're creating a consumer named harry, and the credential it has is harry-apikey. This harry-apikey is essentially a reference to the Kubernetes secret. All right, let's go ahead and create that. And now let's use the API key to authenticate against the API. As you can see, now we are getting a 200 response, and if I ask for, say, a 202 Accepted, we get that back. If I use a wrong API key, Kong returns a 401 Unauthorized. This is a key authentication example, but you could use an IdP as well.

All right. So we have differentiated a free endpoint and a paid endpoint. What more could we do? Let's do rate limiting. Rate limiting is something that almost everybody uses; it's the basic defense mechanism so that somebody cannot simply DDoS you. It's not foolproof, but it's a basic one. So we have an httpbin-free-tier KongPlugin resource.
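As a sketch, that free-tier resource looks roughly like this; the resource name comes from the talk, and the config fields follow the rate-limiting plugin's documented options.

```yaml
# Sketch of the free-tier rate limit: five requests per minute per client IP.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-free-tier
plugin: rate-limiting
config:
  minute: 5        # allow five requests per minute
  limit_by: ip     # count requests per client IP address
```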
And here we are saying that anybody who accesses the service from the same IP address can access it only five times a minute. I'll go ahead and add this plugin, and as I did before, I'm going to annotate our Ingress resource: any request that matches /free is sent to httpbin, but before sending that request to the service, Kong will execute the plugin. All right, so we have the plugin configured now, and we can see that Kong has started to inject rate limiting headers. I'm going to echo the proxy IP again; if anybody in the audience wants to test this on your own, you can send requests to this public IP address, which will be valid for the duration of this webinar, and see for yourself how we are imposing a rate limit. If I send enough requests, as you can see, I get a 429: basically Kong is saying you have hit your API limit and need to try again in a minute. So we gave five requests per minute to the free tier.

Now let's give ten requests per minute to our paid customers. This time we limit based on consumer. A consumer is essentially somebody who has valid authentication credentials; in this case, the API key. And we're giving them ten requests per minute. So we have created the resource; now let's ask Kong to execute this plugin when the request matches /paid. kubectl edit again. As you can see, we already have one plugin defined here, since we're already authenticating this endpoint: any request that comes in first needs to be authenticated. We will also ask Kong to execute the rate-limiting plugin; Kong can execute as many plugins as you want. All right, so we have authentication enabled and we have rate limiting enabled. Now when I make a request, as you can see, I get nine requests remaining. So we have a different rate limit for the free endpoint and a different rate limit for our paid endpoint.

This is awesome. We have a service, we deployed it into Kubernetes, and without writing any code, just configuration, we're exploiting the power of Kong for authentication and rate limiting. We're also logging the gRPC traffic, which is still going through. So we're proxying gRPC and HTTP, all the traffic, through a single Kong deployment.

Now as a bonus, let's add a gold tier. We have some special customers who are paying more money, and we want them to have a higher rate limit. So here I'm creating a gold tier plugin giving them 100 requests per minute: most of our customers get ten, but our special users will get 100 requests. I'm going to create another authentication credential; here we have an API key called user1-key. Let's go ahead and create that Kubernetes secret, and correspondingly we will create a consumer as well. So we're creating a KongConsumer, and on the consumer resource, this time, we are adding the plugin httpbin-gold-tier. Then we edit the Ingress specification of the paid tier and ask Kong to run yet another plugin, the gold tier one. So Kong is now going to authenticate the request and then impose a rate limit based on which type of consumer you are: a gold tier consumer or a regular consumer.
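The gold-tier pieces, sketched together: a higher per-consumer limit, a new credential secret, and a consumer the plugin is attached to. The key value is a hypothetical placeholder, and the annotation used to attach a plugin to a consumer is an assumption about the controller's conventions; check the docs for your controller version.

```yaml
# Sketch of the gold tier: 100 requests per minute for one special consumer.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-gold-tier
plugin: rate-limiting
config:
  minute: 100                    # gold customers get 100 requests per minute
  limit_by: consumer             # count requests per authenticated consumer
---
apiVersion: v1
kind: Secret
metadata:
  name: user1-key
stringData:
  kongCredType: key-auth
  key: another-secret-key        # hypothetical API key value
---
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: user1
  annotations:
    plugins.konghq.com: httpbin-gold-tier  # assumed annotation for consumer plugins
username: user1
credentials:
- user1-key                      # references the secret above
```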
So with all of that in place, if I execute the request with the new credential, I see that the rate limit is 100. But if I use my old credential, I get ten requests per minute, and if I use the free endpoint, I get five requests per minute. So you can have different kinds of rate limiting, and you can also impose other policies, for example requiring authorization, where somebody can access a service only with a given authorization or an RBAC-style policy. All of that is possible as well. So we've got gRPC traffic, we've got logging and monitoring in place, and we've got authentication and rate limiting in place, all just using the plugin ecosystem in Kong. And all of this is open source, so you can try it out yourself.

That's all I have for the demo, and these are some of the features we released in 0.7. There are a whole lot of other features out there. We do a lot of load balancing schemes, so Kong can act as a load balancer: essentially you need a single load balancer in front of Kong, and you do not need any other load balancer for your services. You expose a single load balancer, all traffic goes via it, and you can impose all kinds of policies at one single point of ingress. Then you can do health checking, and you can route based on different protocols as well: gRPC methods, gRPC or HTTP headers and methods. That's routing and load balancing, the basic essence of ingress. And we extend ingress using plugins. We saw authentication and rate limiting; you can do caching; you can do request transformation, so if you're migrating from a v1 to a v2 API, you can do that. You can also use Prometheus, or Datadog, or anything else you want for metrics and analysis of your APIs. You can also do cert management and external DNS. In this demo we did not have any domain defined because we did not set up DNS, but if you define one, we can use that as well. So you're pretty much covered, and it's super extensible: you can even write your own plugin in Go. Maybe you want some unusual authentication scheme that your company manages, or a different type of transformation of your API requests before they are sent upstream, or some additional validation of the request before it's sent on. All of that is pretty much easy and possible.

All right, that's all I have for you today. A few important resources; if you want to take a picture, please take a note of this slide. The first resource is how you can install Kong with one click on your Kubernetes cluster; it could be minikube, kind, any cloud provider, or a bare metal cluster. The second is a very important link: konglabs.io/kubernetes. This is essentially a Katacoda-style environment where you can practice using Kong on Kubernetes in your browser. It's a custom-built environment where we have step-by-step exercises, and we also have a Kubernetes cluster running in the cloud for you, so you can just use it to get a feel for how to use Kong. You can even recreate this demo there; I pieced the demo together from the exercises available at konglabs.io/kubernetes. And if you have any questions, drop into the #kong channel on the Kubernetes Slack server. All of our maintainers are pretty active there, so if you run into a hiccup or have any questions, you can ask us right there. That's all; thank you so much. Awesome.
Thank you very much, Harry. So I've got some questions here. "Does the host example.com in your example refer to the API server or the ingress controller?" Can you repeat that question once again? "Does the host example.com in your..." — this came in very early in the presentation, so probably one of your first slides. "Does the host example.com in your example refer to the API server or the ingress controller?" It refers to the ingress controller. So imagine you have api.example.com and foo.example.com, and you're hosting all these services inside your Kubernetes cluster and you want to route to different services based on the host, the DNS name, of the request. That's what this host refers to. It has nothing to do with the API server; it's just the request host, used for routing your request. Yeah, it's basically like a label that says "send traffic here", and then the paths specify where else to send it. Cool.

All right, so let's see, one more question. "Can Kong be configured with OpenShift 4? Is it officially supported?" OpenShift 4 support: yes, it is supported. Kong supports the Ingress specification in OpenShift. OpenShift also has something called an OpenShift Route, and that's not something we support yet. So if you are using Ingress, it will work in OpenShift; we already have community users doing that. Yeah, so it's important to distinguish Ingress from Route: OpenShift Routes came along before Ingresses existed in Kubernetes, so there is a distinction there that needs to be made very clearly. You must use Ingresses with Kong. Correct. And OpenShift Routes are making their way into the Kubernetes ecosystem in something called Ingress v2; it's very similar to how OpenShift Routes are designed, but that's still in the works right now. In the works, yes, under development.

All right. Unless there are any other questions... that's the last of them. Is there anything else you'd like to add? We have some questions in the chat. Oh, let's see, sorry. "How does a typical architecture look with Kong if we have to deploy API gateways?" That's a broad question, but Kong itself is an API gateway, and it also does ingress, so you're essentially hitting two birds with one stone here: you deploy the ingress and you also get API gateway features right out of the box. So you don't need two separate installations; you just need one. Makes sense.

Another question: "Will the documentation of plugins include examples? Currently I'm still going through the docs and GitHub. And is there a blog on converting from KongCredential to secrets?" Good questions. Yeah, so KongCredential resources are something that existed before the encrypted secrets. We do not have a ready-made script for you to do that conversion, but if you open a GitHub issue or drop into the channel, we can help with that. Regarding how to write Kong plugins and their configuration, the docs are on docs.konghq.com, so you can figure out which properties of a plugin are supported and then just embed that into YAML. There is no difference between how you configure plugins using JSON or YAML; they're exactly the same. So that should be pretty straightforward as well. Cool.

I'm hoping there's a typo in this; if there isn't, I apologize. "How would Kong work with the controller and API server with regards to high availability of pods?" Oh, okay.
I think I understand. I guess the question is: "Hey, all of a sudden I need more capacity, I'm saturated. Is there any kind of HA, horizontal scaling, in Kong?" So it's pretty simple. On the API server side, you can deploy more API server pods in Kubernetes for that high availability, which is recommended. And on the Kong side, as you can see in this diagram, which I didn't highlight before, you have multiple pods of Kong running, each with its own controller, and you can have a different deployment strategy as well, but this is the super simple way of doing it: you deploy a number of Kong pods to scale out horizontally, and each pod is configured by the controller that is running as a sidecar. So even if a machine dies or a pod gets stuck, that's fine; the other pods will continue to process your requests. Cool. Yeah, so for HA you usually put a load balancer in front of all these Kong pods to balance across the different pods. Cool.

All right, I'm looking through the chat, and I am not seeing any additional questions that haven't already been answered. Nothing left in Q&A, so I think that's a wrap. Sure. So thank you very much for a great presentation. If you're looking for the webinar in the future, go to cncf.io/webinars; the slides and the video will be up there once available. Other than that, have a great day, and enjoy the rest of your week. Thank you. Bye. Bye, everyone.