Hello, everybody. Today we're going to be talking about API gateways and ingress management. My name is Marco Palladino; I'm the CTO and co-founder of Kong. This presentation is divided into two sections: first we're going to talk about ingress, understand what Kong is, and see how we can use Kong on Kubernetes; then I'm going to share my screen, pull up the terminal, and run a live demo so we can see it all in action.

If you take a step back, we're entering a new world for software, and this new world has been with us for about half a decade now. With Docker in 2013 and Kubernetes in 2014, we really started building applications in a new way. As we transition to this new world, we're distributing our software and adding more and more services. We are decoupling our monoliths into more and more services that are connected to each other and deployed across the world, across different regions, in order to run modern microservice applications. As we do that, we gain something: we're able to decouple our software, and we're able to decentralize not just how we run our applications but also how our teams build them. But as we increase the number of services, we also reduce the simplicity of running everything in production. The monolith had many problems, but one thing it had going for it was simplicity. It's very easy to reason about conceptually: it's one thing, and one thing only; we deploy it and it runs. It's hard to scale, hard to deploy, hard to contribute to, but it's simple. As we decouple our services, that simplicity goes away, but we gain something else: it's dynamic, it's much faster, and we can improve the business at a much more rapid pace.
Running all of this would be very problematic if we didn't have a platform like Kubernetes, and so Kubernetes comes to the rescue. We can use Kubernetes to deploy our new services and applications very easily by leveraging the abstraction layers it provides. Now, Kubernetes abstracts away our infrastructure, but the workloads it schedules onto the underlying virtual machines do not live under a rock: we still need to be able to access them from the outside. To enable this use case, Kubernetes provides us with a few different resources. There are three main ways (actually there is a fourth, kube-proxy, which I'm not going to cover today), but primarily there are three ways we can access our services from the outside: we can configure a NodePort, we can configure a LoadBalancer, or we can configure an Ingress. So let's go through all of them and dig into why Ingress seems to be the best way to allow external traffic to enter our Kubernetes cluster.

Let's start with NodePort. NodePort is perhaps the most primitive way to expose a Kubernetes service so that external clients can consume it. The concept is very simple: with NodePort, we open up a port bound to the node IP address for every service we want to expose. That means if we have 10 different services, we end up with 10 different ports, one for each of them. It's quite primitive because it's bound to the node IP address, it's one service per port, and even the port range we can use is quite limited (by default 30000 to 32767). So if, in production, we want to allow clients to consume the services inside our Kubernetes cluster, perhaps it's not the best way to do it.
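As a sketch, exposing a hypothetical echo service through a NodePort looks something like this (the service name, labels, and port numbers here are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  type: NodePort
  selector:
    app: echo            # matches the pods of our hypothetical echo deployment
  ports:
    - port: 80           # cluster-internal port
      targetPort: 8080   # port the container listens on
      nodePort: 30080    # must fall within the default 30000-32767 range
```

Every node in the cluster then answers on port 30080 for this one service, which is exactly the one-port-per-service limitation described above.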
There is another resource, called LoadBalancer, which allows us to expose a service through load-balancer functionality that the cloud vendors usually implement with their own load-balancer products. We have our services, we provision a load balancer, and this effectively creates one load balancer per service. The problem with this is that the cloud vendors charge us for every load balancer we start, and if we have many load balancers, this can get very expensive very quickly.

Therefore, we're going to look at the Ingress resource. This is an independent resource: whereas NodePort and LoadBalancer were attached to the Service object, this is not attached to our services, which makes it quite decoupled and isolated from them. It's an additional abstraction layer that we place on top of our services, and this layer, this Ingress, allows us to put the routing rules for all our services in one place, instead of having them scattered across each service. The advantage of Ingress is that it's a spec, and there are different implementations that run on top of it. Ingress abstracts away the basic routing functionality, but it still allows Ingress implementations to provide higher-value features on top of this layer, and we're going to see this in the case of Kong. Note that the Ingress doesn't change the IP address; it's the same IP address, and through the Ingress we can consume pretty much any service we have inside our cluster. Like I said, the Ingress comes in quite handy because it allows us to centralize all the routing rules in one place.
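For comparison, the LoadBalancer variant of the same hypothetical echo service is a one-line change to the Service type; the catch is that the cloud provider provisions (and bills for) one external load balancer per Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  type: LoadBalancer   # cloud provider provisions a dedicated external LB
  selector:
    app: echo          # hypothetical echo pods, as before
  ports:
    - port: 80
      targetPort: 8080
```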
Therefore, from that one place we can decide what services we want to expose and how we want to govern them, using the Ingress implementation we choose to deploy; and in the case of Kong, it also allows us to apply some of those higher-level features in a quite easy way. So, for example, by using the Kong Ingress: Kong is an open-source API gateway, as a matter of fact one of the most widely adopted open-source API gateways in the community, and you can find it on GitHub. With it we can apply authentication and authorization rules, rate-limiting rules, tracing, metrics, observability. And the interesting thing about Kong in particular (different ingress implementations have different feature sets) is that Kong is an L4 and L7 ingress. That means we can use Kong not just as a traditional ingress acting as an API gateway on top of our Kubernetes clusters, but also as a lightweight router to enable cross-cluster communication across multiple Kubernetes clusters. So if we have some services running in one cluster and some services running in another, and those services may not be HTTP or gRPC APIs but, for example, databases or Redis connections, really any TCP connection, Kong can route that traffic into the cluster. It also supports Knative, if that's something you're adopting. Kong makes no assumption, effectively, about what the underlying service is. It can be an API we want to expose to external developers or to internal teams in the organization; it can also be a service we want to consume from another service in another Kubernetes cluster. So it's quite flexible in the way it provides this ingress functionality. Kong is open source, and you can deploy it from GitHub.
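The L4 use case can be expressed with Kong's TCPIngress custom resource. This is a hedged sketch: the Redis service name and the listener port are hypothetical, and the exact API version and field names depend on your ingress-controller release:

```yaml
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: redis-passthrough
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
    - port: 9000             # port Kong listens on for this TCP stream
      backend:
        serviceName: redis   # hypothetical in-cluster Redis service
        servicePort: 6379
```

Any raw TCP client connecting to Kong on port 9000 would then be proxied to the Redis service, with no assumption that the traffic is HTTP.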
Kong was created in 2015, and since then it has built not only adoption but also quite an engaged community that helps us make the gateway better and better over time. Kong right now has 1.5 million instances running per month across the world. It was born in 2015 out of another company: before being CTO and co-founder of Kong, I was CTO and co-founder of Mashape, which back in 2015 was the largest API marketplace in the world. Mashape was a marketplace that allowed developers to either find APIs to consume or publish their own APIs for other people to consume. And so we needed a gateway ourselves, one that could run on Kubernetes and run in a distributed way, and back then, in 2015, there really wasn't anything like that. So we built it ourselves, and we built it to be very lightweight and fast. The performance, the ease of use, and the portability of Kong are really the three pillars that identify the project. We built it for ourselves and then open-sourced it, and the Kong gateway's adoption was so much faster than Mashape's that in 2017 we made the call to divest the Mashape marketplace and focus fully on Kong. Since then, Kong has kept growing: we have 180-plus core contributors and 50,000 community members, and it's being adopted not only by the community but also by organizations all over the world. So it's quite a stable and feature-rich gateway. We also provide more than 500 plugins that the community across the world has built to enhance what the gateway can do. And of course, all of this feature set is available out of the box as an ingress on Kubernetes. I see there are some questions about service mesh and the gateway; I'm going to address those later, at the end of this presentation.
Kong was born in 2015 from Mashape; it's platform-agnostic and Kubernetes-native. You can configure Kong by using Kubernetes CRDs. It has hundreds of millions of downloads and has been used in production, mission-critical use cases in pretty much every industry: everybody's moving to Kubernetes, and everybody needs an ingress. When it comes to networking, it's built on top of NGINX, with non-blocking network I/O, and we have extended NGINX with Lua and LuaJIT. If you're familiar with the Lua/LuaJIT stack, that's OpenResty. OpenResty is effectively a framework that allows us to hook into the request and response lifecycle being processed by NGINX, and to extend it on LuaJIT, an extremely fast virtual-machine implementation of Lua. On top of that we've built an ingress and a service mesh data plane (I'm going to talk about service mesh later on) that can be deployed pretty much anywhere. The Kubernetes ingress is only one of 20-plus different deployment options; you could download this on a Raspberry Pi and make it part of your cluster. So it's very simple and lightweight, but also quite extensible.

The concept of plugins is a very important concept in Kong. Kong without plugins is a pluggable framework, but it doesn't do much: plugins really are the features and functionality that we can adopt on top of our APIs. Some of these plugins are built by Kong, some are built by the community, and there is a plugin SDK that allows pretty much everybody to build their own plugins if they want to. You can build plugins in Lua, in C, in Golang, so it's quite extensible. Plugins can do all sorts of things: they can provide authentication and authorization features, security, traffic control, integrations with serverless, integrations with monitoring and analytics solutions, transformations, logging.
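To give a flavor of what a plugin looks like declaratively on Kubernetes, here is a hedged sketch of a KongPlugin resource enabling the bundled request-transformer plugin (the plugin name is real; the resource name and the header added are just examples):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-team-header
# name of the bundled plugin to run
plugin: request-transformer
# plugin-specific configuration: add a header to every proxied request
config:
  add:
    headers:
      - "x-team:platform"
```

The same KongPlugin shape is reused for every plugin; only the `plugin` name and the `config` section change, which is what we'll lean on during the demo.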
There are, I believe, more than 60 plugins bundled with Kong and available out of the box. And then, like I said, there is a community of 500-plus plugins on GitHub that we could also use on top of our Ingress. The Kubernetes Ingress Controller in particular used to live all together in one repository with Kong. We found out that doesn't really scale well when it comes to issue and pull-request management, so we decided to separate the ingress-controller aspects of Kong into a separate repository, Kong/kubernetes-ingress-controller. We also provide a set of guides and tutorials that will help you get up and running with a very flexible ingress, the Kong Ingress, on top of any Kubernetes cluster on any cloud, and then apply plugins on top of it. In the demo today, we're going to see security plugins, rate-limiting plugins, and observability plugins. So let's not waste any time; let's go straight into the demo.

But before I go, there is a question: does Kong support a service mesh, like Envoy or Istio? Do you need a service mesh with Kong? Kong doesn't make any assumptions about what the upstream service is. It can be a service mesh, and we have integrated natively with Istio and with Kuma. Kuma is an open-source service mesh that was donated two days ago to the CNCF as a sandbox project, and it integrates natively with Kong. So you can use Kong as an ingress to enter a mesh, you can use Kong as an ingress for a mesh, and we can apply our plugins not just in an ingress capacity but also in an egress capacity. So it can work with pretty much anything. We can also use Kong in front of, for example, AWS Lambdas, and we can use Kong in front of Kafka: we provide Kafka transformations to automatically turn a service-to-service request into an event-based request.
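As an illustration of the Lambda use case, a KongPlugin for the bundled aws-lambda plugin looks roughly like this. The region and function name are hypothetical, and in practice you would source the AWS credentials from a Kubernetes Secret rather than inlining them:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: invoke-my-function
plugin: aws-lambda
config:
  aws_region: us-east-1        # hypothetical region
  function_name: my-function   # hypothetical Lambda function
  aws_key: "..."               # placeholder; reference a secret instead
  aws_secret: "..."
```

Attached to an Ingress or Service, this makes Kong answer matching requests by invoking the Lambda function instead of proxying to an upstream.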
So the plugins that Kong provides are extremely powerful. Today, for the sake of simplicity, I'm going to show some very simple plugins in the demo, but you can take this as a starting point to go and experiment with more plugins, plugins that do request and response transformation, all sorts of things. So in today's demo I'm going to share my screen, run a minikube cluster on my computer, deploy a very simple service that I'll use for the demo, and deploy the Kong Ingress. Then, once the Kong Ingress is deployed, I'm going to configure a few plugins on top of it so we can all see how it works with plugins and what Kong can do. This is really meant to give you a look and feel of the project.

I'm going to share my screen now, and if everything went well, you should be seeing my terminal. Right now I'm running an empty Kubernetes cluster on top of minikube. There is really nothing going on here, so the first thing I'm going to do is deploy a simple service that we're going to use for our demo. First and foremost (I'm showing my browser now), all of these instructions can be found in the documentation: in the docs directory we provide a set of guides and tutorials that you can follow. They show just some of the things we can demonstrate on top of Kong, but of course all of the plugins can be used in an ingress capacity, so we're talking about many different plugins and many different integrations that you can use out of the box on top of Kong. So let's go ahead and install a very simple service, which I call the echo service, that's going to echo back every request I make. This service is quite simple: it's a Service and a Deployment, and it echoes back pretty much every request we send to it.
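A minimal version of such an echo service might look like the following sketch; the container image is an assumption here (any HTTP echo-server image works), and the names and ports are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: ealen/echo-server   # assumed image; echoes back request details
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 80
```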
So if I look at the namespace, sorry, at the pods, we see that there is a new echo pod, and if I look at the services deployed right now, we see there is an echo service. So if I port-forward the echo service, we can make a few requests to it, and as you would expect, this very simple service just echoes back the request I'm making: the request headers, the request body, and so on and so forth. So it's very simple, and it's great for debugging purposes. Now, I'm connecting to this service using port-forward, but that's not the point of this demo. The point of this demo is to use an ingress to consume this service. So I'm going to close this window, stop the port-forwarding, and install the Kong Ingress. If we go back to the homepage of our repository, there are different ways to install the ingress: we can just use a YAML manifest, or we can use Helm charts. For this presentation I'm going to use the YAML, and this will create a few resources in our system, including a kong namespace. So if we go and explore the kong namespace, we see that the ingress is being created right now. This is a brand-new minikube, so it's going to download the container and then run it. It shouldn't take that long... there we go, now it's running. I'm waiting for it to be fully ready, and there we go. Now that we have our ingress running, what I'm going to do is apply a very simple ingress configuration that exposes our echo service on the / path. What this does is quite simple.
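The manifest being applied looks roughly like this sketch. It uses the current networking.k8s.io/v1 Ingress API and the konghq.com/strip-path annotation; depending on your controller version the API version and annotation prefix may differ:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    konghq.com/strip-path: "true"   # strip the matched path before proxying
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo   # the echo Service deployed earlier
                port:
                  number: 80
```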
I'm echoing into kubectl a configuration that creates a new Ingress resource called demo. We can configure annotations on the Ingress; in this case I'm telling Kong to strip the path. I'm setting up an HTTP rule: every request to the root will go to a service named echo on the service port. Quite straightforward. If I apply this configuration, we create an Ingress, and if I now go to the IP address of minikube, which is here, I'll be able to access the ingress and access my service. So let's go there, and there we go: this is the service being consumed through the ingress. Like I said, this ingress doesn't do much; it's a simple router running on top of an HTTP service, with no extra functionality whatsoever running on top of it right now. But let's change that. Let's go ahead, for example, and look at some of the plugins we can apply on top of the ingress. Let's say I want to protect the ingress with key authentication, which effectively requires an API key to access our services. If I look at the key-auth plugin: like every plugin, it has a name that's unique, in this case key-auth, and it has a configuration. I can configure many things here, but I'm going to use the simplest configuration I can possibly have for this plugin, and that is just the name. So I'm going to create a new KongPlugin resource (this CRD ships with the ingress controller), assign a name to the resource, and specify the name of the plugin I want to use. I could use any plugin; in this case I'm using key-auth, and you can find the names of the plugins by clicking on the Plugin Hub. But anyway, let's go ahead and apply this plugin. I'm creating a plugin configuration that will be available to the ingress, but in order to make it work with the ingress, I also need to update my Ingress resource with a new annotation, which
determines which plugins are going to run on the execution path of the ingress; in this case, it's the plugin configuration I've just created. So I do this and update the Ingress object, and if I go back and consume my service, it's not going to work anymore. It's not going to work because the ingress receives the request, determines that there is a plugin enabled, in this case key authentication, and determines that I didn't set any key in my request; therefore I am not authorized to make the request. In order to provision a key, I need to create a KongConsumer. A consumer in Kong is like a client or a developer, and a consumer can have multiple keys. So with Kong (and this is just for key-auth) we can implement quite complex rules about which consumers, which users, are able to consume which services, and how they are going to manage their keys. In this case, to move forward, I need to create a KongConsumer object named marco, and then I need to create a key that the marco consumer will use to consume the service. This key, as you can imagine (one second), is going to be a Kubernetes Secret. So I'm creating a Kubernetes Secret, marco-apikey, of Kong credential type key-auth, with the following secret key. The credential type is key-auth because we want to create a simple API key, but different authentication plugins have different types. So if I do this, I create an API key, and then I associate the API key with the consumer by updating the KongConsumer object and adding a new credentials array that determines which keys belong to this user. In this case I want the marco consumer to be associated with the key we've just created, which is secret123. So if I do this, we have an ingress, we have a plugin, we have a consumer, and we have a key. So let's go ahead and now make
a request with the API key secret123, and this request will now work again. If I enter a wrong key, the request will be blocked. Quite simple. Now, of course, securing with an API key is probably one of the simplest things we can do. Another thing we want to be able to do on top of an ingress is visualize all the requests coming through, and for that Kong provides a native Prometheus and Grafana integration. If you go to analytics and monitoring, you can see that we integrate with a few different providers, and Prometheus is one of them. So what I'm going to do is create a new monitoring namespace, where I'm going to install Prometheus and Grafana, and then I'm going to collect metrics from the ingress out of the box. I create a new monitoring namespace, and within this monitoring namespace I am going to install Prometheus (the values and configurations for all of these examples are available on the GitHub repository for the ingress), and I am going to install Grafana as well. If I do this, obviously there is a new monitoring namespace, and if I look into it I can see a few different services running and initializing. This is going to take a while because it's downloading Prometheus and Grafana into my minikube; this is where the fan on my laptop starts turning on and increasing in speed, I can hear it. We're waiting for Grafana, and once we have all of these running, we should also see a few services here, including the Grafana service, which I am going to use to load the dashboard and see our monitoring information. Now Prometheus is running and Grafana is running. Again, this is not going to work until I install the Prometheus plugin: the Prometheus plugin is a KongPlugin with plugin name prometheus, which determines that I am going to be
able to make all the metrics available. So if I go back to my terminal, I am going to create this new resource, a KongPlugin for Prometheus, on my Kubernetes cluster, and if I do that, we have now enabled metrics collection from our ingress. So let's go and load the Grafana service. If I look at the services I have here in monitoring, there is Grafana. Before I do that, let me extract the Grafana password that we're going to use later to log into Grafana. Now I can port-forward the Grafana service in the monitoring namespace and expose it on port 3000. So I go to port 3000, and this loads Grafana: the username is admin, the password is the one we retrieved from the Kubernetes Secret, and I should be able to log in. And that's great, I am inside Grafana right now. As you can see, with the example I executed, Grafana ships with an official Kong dashboard, so I am going to load this dashboard, and we can see the charts that we provide out of the box on top of the ingress. Now, there are no requests being made, so what I am going to do is generate, simulate, some traffic so that this gets a little bit more interesting in Grafana. I am just going to simulate some traffic in a while true loop on my terminal, and going back to Grafana, over the last five minutes we should be seeing... I forget the exact interval, but Prometheus scrapes the metrics every X number of seconds, and this is configurable. At one point here we should start seeing the charts. Let's wait a few seconds... there we go, we are seeing the first point coming up, and as the next interval kicks in, these points should become a line. Let's wait a little bit longer; I am just refreshing and refreshing, so as soon as it pops up we are going to be able to see it.
There we go. These are the requests per second that I am generating from my while true loop with curl, and it shows the service, in this case the echo service; it shows the bandwidth; it shows all sorts of things. So far we have used the key authentication plugin and the Prometheus plugin, so we can secure and observe the traffic we are sending to the ingress. Another common use case for an ingress is to rate-limit the number of requests our consumers can make. In Kong, plugins can be applied globally to every service and every consumer we configure, but we can also apply plugins per specific service, per specific consumer, and per specific combination of service and consumer. So in your typical API management use case, we are going to have consumers that can make, let's say, 100 requests per second, but there will be some users or consumers that we want to whitelist, and with Kong we can apply plugins to those consumers to give them a higher limit if we want to. And not just the rate-limiting plugin: any plugin can be applied on each combination of consumer and service, so if you try the Kong Ingress and run it on your system, you can definitely experiment with all of them. So let's go ahead and install the rate-limiting plugin. I am going to stop my request generator here, stop the port-forwarding for Grafana, and install another plugin that will protect our service so that we cannot send more than 5 requests per minute. We are going to use the rate-limiting plugin. The rate-limiting plugin can be quite complex, in the sense that while it is quite simple to get up and running with, if we want to further customize the
behavior, we have lots of properties to choose from. In the case of rate limiting, we can choose how many requests per second, per minute, per hour, per day, per month, per year we want to allow. We can limit by consumer or by credential (let's say we want to limit just one API key and not another); we can limit by service; we can limit by all sorts of things. We can also store the counters in a third party if we want: we support Postgres and Cassandra as backend stores for Kong, for example in case we want to build a global rate-limiting policy that can be shared across different regions and different clusters. So I am going to install my rate-limiting plugin globally, limiting to 5 requests per minute. I apply this configuration, and then let's make more than 5 requests per minute: 1, 2, 3, 4, 5, and the system rate-limits the request. As you can see, it's quite straightforward. Simplicity is one of the things we have put lots of effort into over the past few years, to make sure Kong is simple to get up and running with, simple to deploy, simple to scale. Of course there is more advanced functionality, but you only get exposed to that complexity when the time is right and you really need it, not when getting up and running with very simple use cases. I have been running this demo for 20 minutes now, and we installed the service, we set up an ingress rule, we created a consumer and a key, we set up observability, and we are now rate-limiting all the requests. You can see that it's quite straightforward to configure these resources.

Now I want to focus a little bit on the service mesh questions. Kong can work on top of a very simple service, a very simple API; it can work on top of functions; in the case of Kubernetes it can work on top of Knative; but it can also work with a service mesh. Service mesh is a topic I have been very involved in over the past
year, in the context of Kuma. We support a sidecar Envoy proxy that allows the Kong Ingress to perform gateway functionality on the ingress side and then proxy the request to a sidecar Envoy proxy, in order to provision the right mutual TLS certificates that allow the request to enter the service mesh. If we think about the concept of service mesh, it has a very flat view of the world. In a service mesh world there are services, where a service can be anything that makes a request or receives a request over the network, and then there are the service requests that connect these different services together. In the context of a service mesh, anything that makes or receives a request, like an API, a database, something like Redis, is a service. So the Kong gateway is no different from any other service running in our system: the Kong gateway receives requests and makes requests, so the gateway is a service itself. We can easily deploy a sidecar container next to the Kong gateway, and by default we support Istio and Kuma, so that whole flow from ingress to mesh is completely taken care of. We don't have to worry about setting it up, because it comes out of the box with Istio and Kuma.

Also, I have been personally involved with many service mesh use cases at larger organizations that run multiple meshes for different lines of business or for different teams. It's very hard to run one large mesh for everybody, for a couple of reasons. We may want to start different meshes at different times, and we don't want to require too much team coordination if they're all working on one mesh, especially if they're coming from different lines of business. We also want to be able to enforce more security and more isolation across different meshes in different applications. So one of the use cases I have been seeing is
using Kong as a hop in the network that allows you to exit one mesh as an egress and then enter another mesh in an ingress capacity. So it is basically a gateway that not only accepts external traffic coming into the Kubernetes cluster, but can also be used within the same cluster as an internal gateway that lets us exit one mesh and enter another, and in between enforce user policies, API governance policies, or onboarding policies that determine how those APIs are consumed. If I'm working for a global bank and I'm creating a banking application, and there is another team building a trading application, for all intents and purposes that trading team needs to go through an onboarding process in order to access the banking API. So even if we work for the same organization, I still want to be able to enforce rules on how that team uses my system, and I want to expose just a subset of the APIs, not all of the APIs running in my banking app. The gateway use case is perfect for this, because I can put a gateway internally and expose just the APIs I want the other teams to consume, with no dependencies whatsoever. You know, when we deploy a service mesh, we have a decentralized sidecar deployment that needs to run alongside each one of our services, and that sidecar needs to be able to connect to the control plane. There are instances where we can't deploy a sidecar, or where we don't want the sidecar to connect to our control plane, because the control plane is quite sensitive and only our services should have access to it. Therefore, I can use a gateway to exit one mesh and enter another, or I can use a gateway to enter a mesh from the edge, from a mobile client, for example, or from external developers who couldn't deploy a sidecar anyway even if they wanted to, because we don't want to require them to do that and we don't want their
sidecar to talk to our control plane. So anyway, to answer the question: yes, Kong can be used in front of any service mesh, Istio and Kuma in particular. Kong natively supports Envoy as a sidecar proxy, because Kong, like any other service, is a service in the service mesh. Kong can also be used for more traditional APIs, and Kong can be used as a lightweight router to enable cross-cluster Kubernetes communication for pretty much any traffic, not just HTTP or gRPC traffic but any L4 traffic. It can be Kafka, it can be MongoDB, it can be MySQL, it can be literally anything, and we have users using Kong in all of these different capacities. So I hope that helps. I am going to stop the screen share and go to the next slide.

So today we have looked at the different ways that we can expose our services in Kubernetes: NodePort, load balancer, ingress. We focused on the Kong Ingress Controller, and we've seen a live demo that allows us to secure, protect, rate limit, and observe all the traffic that's going through the ingress, and we went on a dissertation about service mesh. And that's all, so thank you so much. You can download Kong at konghq.com/install. I'm going to be connected for the next five minutes just in case you have some additional questions; otherwise you can find me online at @subnetmarco on Twitter, or you can get involved with Kong and with the ingress on our community channels, which you can find on GitHub and on konghq.com as well.

And if you're looking for a service mesh that's quite simple to use, vendor neutral, and a CNCF Sandbox project, you can now look at kuma.io. As a matter of fact, it's the first Envoy-based service mesh that has been donated to the CNCF, which is a Linux Foundation entity. And of course the ingress controller natively communicates with Kuma, and Kuma, compared to Istio, is much simpler to use. It's a turnkey service mesh, and it scales quite well across the organization by supporting multi-tenancy and multi-mesh
support, and it can run on top of Kubernetes but also on virtual machine workloads. As a matter of fact, there was a new version of Kuma yesterday that introduces a quite scalable and flexible way to support multiple regions and multiple Kubernetes clusters within the same mesh. Of course, you can then put an ingress controller like Kong on top of all of this, so you have the full stack, end to end: ingress and service mesh, all connected with each other via the API gateway.

So there is one question: how different is it from Istio? This question was asked in the context of the Kong ingress, when I was introducing the Kong Ingress Controller. An ingress controller like Kong has been used in an API gateway capacity, so if we abstract this question away, the question really is: what is the difference between an API gateway and a service mesh? You see, the API gateway and the service mesh are different kinds of deployments. The service mesh is more low level; it's more of a service connectivity concern. We're making requests and connecting services together within the application, and we want to secure that traffic, we want to encrypt it, we want to be able to observe it. So within the application, we're going to be deploying a mesh that does all of that.

Now, we also want our application to be able to talk to other applications. We want to apply user governance on how the users are accessing our system, we want to be able to provide a developer portal and a catalog for our APIs, and we want to do that with a centralized deployment. So we do not want our clients consuming APIs to require a sidecar in order to be able to consume our services. As we know, in a service mesh a service can consume another service only if both of them have a sidecar, because the sidecar will assign the right mutual TLS identity to the request, which allows us to make that request in the first place. If we
have external developers outside of the organization, we don't want to require a sidecar, and most importantly we do not want to require them to connect their sidecar to our control plane, which is a very sensitive system in our infrastructure. So we're going to have an abstraction layer, an API gateway deployed in an ingress capacity, that allows anybody to consume our services. It can perform user onboarding and user governance in a centralized way, it exposes only the subset of the APIs that we want them to consume, and then the ingress will enter the mesh to process the request. And quite frankly, the tracing and observability aspects of the two can be federated together, so that we can trace and observe the entire flow from ingress to service mesh with a very tight integration. In the case of Kong, that integration is with Istio and with Kuma.

Can you talk more about how Kong connects to different meshes? You see, we can use Kong as an ingress to a mesh, but we can also use Kong as an egress from a mesh. First and foremost, exiting one mesh and entering another mesh means that we are going to be provisioning the right mutual TLS certificates that allow us to exit the mesh and then enter another mesh, which is most likely going to be secured with a different certificate authority. So what the ingress does, what Kong does in this case, is allow us to funnel the egress requests through the gateway, and then the gateway, with our Envoy integration, will re-provision the mutual TLS to enter the other mesh. So effectively the gateway really is this hop in the network that allows us to exit one mesh and enter another mesh.

There is a blog post that I have written a while ago about the difference between a gateway and a service mesh. Here I am talking about the difference between gateways and service meshes, but most importantly, at the
end I also show an example of how a gateway can be used to enter and exit different meshes. So this is a very nice explanation; the blog post is called "The Difference Between an API Gateway and a Service Mesh", and I strongly suggest reading it because it really clears all of this up. And Kuma is the only CNCF, Envoy-based service mesh donated to the foundation that we can use out of the box to create a mesh and then integrate it with a gateway like Kong. In Kong we have plugins, and in Kuma we have policies that do all sorts of things, and we designed the integration with Kuma; of course we welcome more integrations with other gateways if anybody wants to contribute them. I strongly suggest reading that blog post.

Then there is another question: would the Kong Ingress Controller be a complement or a replacement for the OpenShift Ingress Controller? It would be a replacement. With Kong we are getting a replacement of an ingress controller; it's a different implementation that supports L4 and L7, it supports Knative out of the box, it supports full-lifecycle API management, really, and it also allows us to exit and enter different service meshes built on top of either Istio or Kuma within our system. So it's quite feature rich, it's quite performant, and it can cover pretty much all the use cases that we would normally need when deploying an ingress controller. So I strongly suggest giving it a try, and there is a very vibrant open source community that can also help if you get stuck or if you have feedback or questions.

There is another question: can you integrate with a certificate manager, or do you have your own ACME certificate management controller?
It can provision its own certificates, and you can provision the Kong Ingress Controller with your own certificates. We also have integrations with HashiCorp Vault if you want to support a third-party PKI, so I strongly suggest looking at the plugins, because they provide different ways to configure all of these different aspects, and of course the official documentation, which by the way you can find at konghq.com/docs. That covers pretty much all the different security use cases. We can not only secure with mutual TLS and encrypt everything that goes through the ingress, we can also integrate the mutual TLS of the ingress with a third-party service mesh, and we can also install, on top of mutual TLS, ten-plus different authentication and authorization plugins like OpenID Connect, JWT tokens, HMAC, API keys, and basic authentication, all running on top of Kong. And we can have different plugins for different services, which is pretty cool. That means if we have many different services running in our Kubernetes clusters, we can secure some of them with OpenID Connect, we can secure some of them with mutual TLS only, and we can secure some of them with an API key only. So it is quite flexible in how we can manage this entire system.

And you know, something that makes me very proud about Kong: Kong has been used by hundreds of enterprise organizations in production for mission-critical workloads. So when we're looking at Kong, we're not just looking at an open source ingress or gateway that does all of these things, but we're looking at an enterprise-grade gateway that really can run in all of these capacities in pretty much any enterprise environment. A more complex use case, for example, would be being able to manage multiple Kongs across different Kubernetes clusters, across different clouds, and to integrate all of this with an on-premise data center.
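To make the per-service plugin idea concrete, here is a minimal sketch using the Kong Ingress Controller's CRDs. This assumes a recent Kong Ingress Controller on a standard Kubernetes Ingress API; the service names (`payments`), path, and rate-limit values are hypothetical placeholders, not from the demo:

```yaml
# A KongPlugin resource defines one plugin configuration.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: payments-key-auth
plugin: key-auth              # require an API key for whatever references this plugin
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: payments-rate-limit
plugin: rate-limiting
config:
  minute: 5                   # hypothetical limit: 5 requests per minute
  policy: local
---
# An Ingress opts into plugins via the konghq.com/plugins annotation,
# so each service can carry a different set of policies.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments
  annotations:
    konghq.com/plugins: payments-key-auth, payments-rate-limit
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /payments
        pathType: Prefix
        backend:
          service:
            name: payments
            port:
              number: 80
```

Because the plugins are attached per Ingress (or per Service) through the annotation, one service can require an API key and be rate limited while a neighboring service uses, say, mutual TLS only, without the configurations interfering with each other.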
Kong can do all of the above. This demo today was quite simple, but Kong supports all of these modes; as a matter of fact, it's quite a portable system, and Kuma, the service mesh that we are also contributing to, is also quite portable. It's a service mesh that can run on virtual machines and on Kubernetes, and we can integrate both together by abstracting away the service-to-service connectivity between a virtual machine workload and a Kubernetes workload. So if you're looking for a service mesh that can span across all of your environments, I also suggest looking at Kuma. It seems like we're coming up at the top of my time, so again, if you have any questions, I'll be happy to answer them online. You can find me on Twitter, you can find me on GitHub, so keep them coming. Thank you so much for coming to my talk, and I hope all of this is clear and exciting enough for you so that you can get up and running with it very easily. So thank you so much. You can find Kong, again, at konghq.com/install. I'll see you online. Thank you.