Hello everybody. It's good to see you here. I'm the only thing between you and lunch, so I hope this is going to be enjoyable. It's going to be very hands-on as well: we're going to open the terminal at one point and see everything in action. Observability in Kubernetes with Kong. As we build our systems, it is impossible to run a modern architecture with no observability in place; perhaps it's even irresponsible. As we decouple our systems into microservices, we end up with lots of moving parts that we want to observe in order to detect problems, measure latency, and understand what we can improve going forward. Observability is also one of those functions that we don't want to build over and over again for every new application or service that our teams create; we want to get it out of the box. That's what this talk is about: we're going to learn how to leverage Kong to get observability out of the box, in this case in Kubernetes. How many of you are familiar with Kong? My name is Marco Palladino. I'm the CTO and co-founder of Kong. Kong, of course, is an open-source project, with almost a million running nodes in the world. It's a very popular platform for APIs and microservices, and you can download it for free on GitHub, at kong/kong. It is both an API gateway and a service mesh. These are common use cases within pretty much every organization and every platform: we have north-south traffic coming into our Kubernetes cluster or data center that we want to protect and enable, and we have service-to-service communication in an east-west capacity, also known as a service mesh, which is very common in microservice-oriented architectures. Even in that use case, we want to be able to get some functions out of the box.
In a typical reference architecture, we have east-west traffic within our Kubernetes cluster or data center, so services talking to other services directly, and we have external clients, which could be, for example, external teams that want to access those services. Kong is a runtime that can be deployed in a north-south capacity, and it can also be deployed as a sidecar in an east-west capacity, which means we can auto-inject a Kong sidecar alongside each one of these services in order to implement features like observability, but also mutual TLS and service segmentation, out of the box, without having to change our applications. That's the whole point of Kong. As we build our software, we want to focus on the business value that our services and APIs should be delivering, and we want to take away from the application teams the extra work of implementing mutual TLS, encryption, that full lifecycle, if you wish, for APIs and services. The idea is to outsource those concerns to something like Kong so that teams can focus on the service, not on the complementary features that we do need before going to production. Different use cases: north-south, the traditional API gateway, is a consumer-centric use case. We're managing requests that come from outside the data center or the Kubernetes cluster; that could be a mobile application, an iPhone app, for example, that runs somewhere. It could also be other teams within your organization, or requests coming from other Kubernetes clusters in other clouds or data centers, as well as partners and so on.
When it comes to the API gateway, we want something that enables the lifecycle, the enablement of that consumption for those external teams and partners, by implementing onboarding procedures and by securing and protecting the traffic in very specific ways, for example with technologies like a web application firewall, which are not very viable for east-west traffic. They're not viable because in east-west traffic we focus more on the services running within our architecture and less on the concept of users onboarding, less on clients that might send pretty much anything at us, because it's a more controlled environment. That said, in east-west we share some of those concerns: we want to implement mutual TLS, we want observability, and we want to segment which services can consume other services. That's where the service mesh use case comes in handy, because we can implement all of that in the sidecar proxy that effectively becomes the contact point for each request from one service to another. The services never connect to each other directly; they talk to the sidecar proxy, which transparently, without changing a single line of code in the actual service, implements those features. And a service is not just an API or a gRPC server, for example; it can be really anything that runs on a network that we want to consume: a database, a Kafka queue, Redis. In a way, I would say that service mesh is oriented towards implementing advanced networking features, even more so than an API gateway or API management platform. So these are the two different use cases, and today I'm going to focus on the first one, the API gateway. We're going to have our services running in our Kubernetes cluster.
How do we get that observability out of the box? As you can imagine, getting it out of a service mesh is equally simple. Kong is a community-driven product: more than 150 core contributors work on it, and there are more than 40,000 community members. Adoption has been great since day one. Kong comes from another company. Before being CTO and co-founder of Kong, I started a company called Mashape, and Mashape was the largest API marketplace back in 2014-2015. We had over 300,000 developers consuming APIs through the marketplace. Think of it as an eBay for APIs: you could search for APIs, providers could offer their APIs, and we provided security, observability, and monetization on top of everything. By 2015 we had been running the marketplace for a while, and although user adoption was growing and was great, we were a VC-backed company and had to generate much more revenue than we were generating. So in 2015 we decided to take the most valuable thing we had built, the gateway that was powering all of these APIs, and open-source it. That's how Kong was born in 2015: Kong as in King Kong, because Mashape had an ape logo back then. We open-sourced it in 2015, and since then community adoption has been great. It's a community-driven project, it's on GitHub, and it runs anywhere. One of the most important things we took into consideration with Kong was that organizations, developers, and teams run on all sorts of platforms, and although transitioning to Kubernetes is a journey many teams are on today, the pragmatic reality is that lots of workloads run on virtual machines that are not going to move to Kubernetes anytime soon, and perhaps never.
So when we thought about Kong (it was born after Kubernetes came out in 2014 and after Docker came out in 2013), we built something platform-agnostic. You can run it natively on Kubernetes, configuring it with Kubernetes CRDs if you want, but at the same time you can run it on pretty much any other platform. By doing that, we want to ease the transition, the journey, into microservices and Kubernetes. The problem Kong has been solving really became evident after those two technologies, Docker and Kubernetes, came out around 2013 and 2014. Microservices happen to be the software answer to more complex demands on our applications. In order to scale the business, organizations are reconsidering which architecture is going to get them there, and microservices happen to be the answer to some of those business concerns; think of Netflix, think of Amazon. As organizations drive the business by adopting architectures that improve business scalability or team productivity, they adopt more decoupled and distributed architectures, which in a way compound the problems of security and observability over time, because we're moving away from having a handful of services and APIs to having hundreds or thousands of them. So the effort we put into observability, security, encryption, authentication, authorization, logging, all of the above, becomes exponentially higher the more decoupled these systems become. And in a way we require that: in a monolithic application we might find problems by debugging the Java virtual machine, but in a microservice-oriented architecture we need to capture traces, metrics, and logs in a way that allows us to understand where the problem is.
And so in this journey from monolith to services to microservices, and perhaps even to serverless and functions as a service, we've been trying to help those teams get those functions out of the box with Kong. We want to be an enabler and a partner in the technology transformation that's happening nowadays. Kong has been built with a few things in mind. One I've already mentioned: being platform-agnostic, which was very important for Kong and still is. Another is being an extensible system that provides these policies via what we call plugins. Plugins are basically our extensibility framework: you can build security plugins, monitoring and observability plugins, rate-limiting plugins. Some of them, of course, we already ship with Kong; in fact, the community has built more than 500 plugins that you can use today for the most common use cases. And you can build your own private plugins if you end up, and you will end up, in an edge use case very specific to your organization that requires building on top of Kong, for example to support a legacy authentication system or a legacy throttling system. Plugins are really the core of Kong. Kong without plugins is a reverse proxy with a plugin run loop; all the actual functionality is delivered by those plugins, whether security plugins, authentication plugins, and so on. There is a hub of plugins available for Kong that can be adopted and used in one click, really. Some are built by the Kong core team, some by the community, and some are private (you don't have to publish them if you don't want to), built by our users and very specific to their environments. Like I mentioned, Kong was born in 2015 from Mashape. That slide is a little bit outdated.
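To make the plugin idea concrete: a Kong plugin is a small Lua module that hooks into the phases of each proxied request. This is my own minimal sketch in the style of Kong 1.x handlers, not code from the talk; the plugin name, priority, and header are illustrative.

```lua
-- handler.lua: minimal sketch of a custom Kong plugin (illustrative names)
local MyPlugin = {
  PRIORITY = 1000,   -- determines ordering relative to other plugins
  VERSION  = "0.1.0",
}

-- The access phase runs for every proxied request before it reaches
-- the upstream service; here we simply tag the request with a header.
function MyPlugin:access(conf)
  kong.service.request.set_header("X-My-Plugin", "seen")
end

return MyPlugin
```

A real plugin would also ship a schema file describing its configuration fields; the Kong plugin development docs cover the full structure.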
We're now at over 100 million downloads, and Kong is used in production not only by the community but also by enterprise organizations we work with, where it helps with mission-critical workloads, and it's all over the place. Every company is a technology company, and if they are a technology company they have APIs and services running within their systems. If they have services and APIs running in their systems, they have to protect them, secure them, monitor them, enable them, and offer them to other teams, so they're using Kong for that full lifecycle. Kong is built on top of our own flavor of NGINX: the networking runtime that receives the incoming requests and proxies them out of Kong is effectively NGINX, but it's not vanilla NGINX. We are NGINX contributors, and we have extended NGINX to support a few things it didn't support. In particular, we run on top of a framework called OpenResty, which lets us hook into the lifecycle of every request and response within NGINX by writing Lua code (L-U-A), an extremely performant and fast language that runs on a virtual machine written in C called LuaJIT. LuaJIT is phenomenal; it's one of the fastest virtual machine implementations you can find in the world. In fact, Lua and LuaJIT were designed with embedded platforms in mind, environments with little memory and tight performance constraints, and if you've ever worked in the mobile industry, you know Lua and LuaJIT are very popular in that space, in games for example, to extend what those games can do. The reason they're popular is that they're lightweight, performant, and fast, very fast.
We took that technology and built Kong with it, which coincidentally happens to be the same technology stack that Cloudflare, the Akamai competitor, if you're familiar with them, also runs. This stack, Lua and LuaJIT on top of NGINX, is in fact processing something like 15% of global internet traffic, but few people know that. Extremely fast, built in C and Lua. With this stack and this runtime we've built a gateway that's easy to use, very simple to get started with, without sacrificing more advanced features if you want to go in there and tune the machine. LuaJIT in particular deserves a special mention: it was created by Mike Pall, and this guy did an amazing job creating something that is basically a piece of art. It's extremely performant, fast, and efficient. Kong was released in 2015; since then we've had more than 55 public releases, more than a million nodes running per month in the world, and a large community. We actually released the latest and greatest yesterday, 1.3.0, which implements, among other things, native L7 gRPC support for ingress and egress, enhanced upstream mutual TLS authentication, and much more.
The company itself is in San Francisco, but we're a global company, so we engage with contributors and developers all over the world, and we encourage them to contribute and help us make this a great open-source project. This was released yesterday, by the way. Kong can be deployed on many platforms: we can deploy Kong on bare metal if we want to, and we can deploy Kong on Kubernetes, for which we have released a new version of the Kubernetes ingress controller, which we're going to see today. We have people running this on Raspberry Pis, on the actual device, so you can really run Kong anywhere; it's very lightweight. Today we're going to focus on the Kubernetes ingress controller. I've got my Minikube running in the background, so we're going to fire up the terminal and see how to use Kong to protect a few microservices. The Kubernetes ingress controller is, of course, also open source, and it treats Kong as a first-class citizen within the Kubernetes cluster.
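For anyone replicating this setup, installing Kong and its ingress controller with the official Helm chart looks roughly like the commands below. Treat this as a sketch: the release name, namespace, and flags are my assumptions, and chart options vary between versions, so check the chart's README.

```shell
# Add Kong's official Helm chart repository and install the chart
# (release name "kong" and flags are illustrative; see the chart docs
# for the version you are installing).
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong kong/kong \
  --namespace kong --create-namespace \
  --set ingressController.enabled=true

# Verify that the Kong proxy and ingress controller pods came up
kubectl get pods -n kong
```

On Minikube you would then reach the proxy either through the service's NodePort or a `kubectl port-forward`, as shown later in the demo.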
Kong offers an Admin API, a RESTful API, think of Elasticsearch, very similar to that, which you can use to configure the system. We also support a declarative configuration written in YAML, our own declarative config. But when it comes to Kubernetes, we support Kubernetes CRDs: you don't want to use the Admin API, you don't want to use our declarative config, you want to change the state of the Kong cluster by effectively doing things the Kubernetes way. So the Kubernetes ingress controller supports those CRDs and automatically listens to the Kubernetes API server to detect the services and APIs you add over time, and, depending on how you configure the system, Kong can add those services and APIs automatically into its data model so that you can enable observability, security, or any of those plugins out of the box. There are more than 500 features you can apply on top of Kong, by the way, like OpenID Connect, rate limiting, throttling, logging, bot detection, transformations, and so on. The ingress controller itself has been around for a while; we recommend staying current, so you shouldn't be using Kong 0.13, you should be on 1.x, but the controller supports versions as far back as 0.13 if you need that.

Before I close my tabs and go to the terminal to see this running, I'd like to explain a few Kong concepts. Kong has a few core entities we need to understand: services, routes, upstreams, and plugins. A service is an abstraction of one of our upstream services; basically, it can be an API that we run within our systems, something that runs in Kubernetes. A route determines the ingress rule for routing requests to a service; the route is what the user, the consumer, has to use in order to enter the cluster. The upstream is the egress side, pointing to the actual API running within Kubernetes. And then, of course, there are plugins: once we have our services configured, and the services can be consumed through a route that goes to a specific upstream, we can apply plugins on top of that flow. So we can say, okay, great, we have this API available on /api, for example, /api being the route, and now on top of that flow I want to secure it; I can add as many plugins as I want to enhance what we do with those requests. This happens in the Kong runtime, most likely with sub-millisecond latency, though depending on the plugin there may be less or more computation, and it happens without having to change our APIs and applications. Teams build their systems the way they do today, push them into their Kubernetes cluster, and then these features can be configured on top of Kong without requiring the involvement of those teams at all. For example, a central platform team can enforce a consolidated way of handling security and observability across the board without involving the application teams. To recap the data model: the route is the ingress rule and points to a service; the service is an abstraction that contains a set of upstreams; the upstreams can be different versions of our API, so if this is a billing API, the service will be billing and the upstreams v1, v2, v3; and each upstream has targets, where a target is the actual pod, the actual instance, we hit when making the request. This data model is valid on Kubernetes but also anywhere else. Once we have that configured, and we're going to run it very soon, we can then, for example, add rate limiting on top of this Kong cluster: we create a KongPlugin CRD that adds rate limiting, such as limiting by IP address, allowing only a certain number of requests per second or per hour.
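As a sketch of what such a resource looks like: the `rate-limiting` plugin and the `configuration.konghq.com/v1` API group are real, but the metadata name and the limit values here are illustrative, and annotation names have changed across controller versions, so verify against the version you run.

```yaml
# KongPlugin resource configuring Kong's bundled rate-limiting plugin.
# Limits each client IP to 5 requests per second (values are examples).
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-by-ip
config:
  second: 5
  limit_by: ip
  policy: local
plugin: rate-limiting
```

You would then attach it to an Ingress or Service with a plugins annotation (for example `konghq.com/plugins: rate-limit-by-ip` in recent controller versions) rather than applying it globally.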
That limit isn't very realistic for production, where we'd probably want something else, but it shows how simple it is to configure a plugin on top of Kong running on Kubernetes via a CRD, as well as the rules: it's very simple to determine which rules we want to implement by using the Ingress that Kong provides. So this is very Kubernetes-friendly. I'm going to run a demo now, because I believe seeing it in action is worth a million words. Whatever I run today can also be replicated by you at home or in the office by following that link; it points to a GitHub gist that shows all the steps I'm going to run. You just copy and paste the commands, run them on your own, and you'll be able to replicate this demo. In fact, let me now switch my tabs. I have Minikube running on my system; we can list the namespaces, and there is a kong namespace, which I started and which includes the Kong ingress controller. All of this can be set up by yourself: this is the gist I was talking about, basically using our official Helm chart. You run the Helm chart and it installs Kong and the ingress controller on your Kubernetes cluster; one command and you install it. I ran this before coming here because I wasn't sure the connection would be good. Now that it's running, we can also, if you're familiar with Minikube, start the dashboard and see our workloads in the GUI. Okay, so now that we have Kong running, we want to do a couple of things. There is no other service or system running within this Kubernetes cluster but Kong, so what we want to do is add some services that will represent the APIs we want to observe. We're going to add the APIs and then use the Prometheus plugin that Kong offers out of the box to get that observability automatically into Grafana, basically in one click.

Kong listens on two different ports. One is the Admin API port, which we are not going to use in Kubernetes because we want to configure the system with CRDs. The other port is what we call the proxy port; that's the port consumers use to enter the Kubernetes cluster via the Ingress. So what I'm doing now is running a simple command that forwards requests to port 8000 of my Kong proxy. If I run this on my computer, we should see a response from Kong. We made a request, and this request goes nowhere, because we have no services configured in Kong, and Kong is complaining: hey, you're trying to consume something I can't find. That's fine, because the cluster is empty up to now. We can also use the browser for this. I have prepared a simple YAML file that starts a few services. Let me actually do this. It starts a few services that we're going to use to simulate APIs running in the Kubernetes cluster: a billing service, an invoice service, and a comments service. These are simple services that echo back the request we make, mock APIs if you wish; in production these would be your actual APIs and services. I'm going to create these services within my Minikube by applying the YAML file, simple as that, and Kubernetes is creating the services now. In fact, if we look again, we see they're being created; depending on how fast or slow the connection is, it's pulling the containers from the internet and then starting the services. While this is working, I also want to show you the hub that lists the features you can apply on top of Kong. If you go on the website and click Plugins, you can see the plugins, some of which Kong offers out of the box; you can fetch them from there, and each plugin has a unique name.
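Stepping back to the services just applied: a mock service plus the Ingress rule that exposes it through Kong could look like the manifest below. This is my sketch, not the gist from the talk; the `ealen/echo-server` image, the resource names, and the `konghq.com/strip-path` annotation (which varies by controller version) are assumptions.

```yaml
# A mock "billing" echo service and the Ingress rule that exposes it
# through Kong. Image, names, and annotations are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing
spec:
  replicas: 1
  selector:
    matchLabels: {app: billing}
  template:
    metadata:
      labels: {app: billing}
    spec:
      containers:
      - name: billing
        image: ealen/echo-server   # any echo container works here
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: billing
spec:
  selector: {app: billing}
  ports:
  - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: billing
  annotations:
    konghq.com/strip-path: "true"   # drop /billing before proxying upstream
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /billing
        pathType: Prefix
        backend:
          service:
            name: billing
            port: {number: 80}
```

The invoice and comments services in the demo would follow the same pattern with their own paths. Note the Ingress API group shown here is the modern `networking.k8s.io/v1`; the 2019-era demo would have used `extensions/v1beta1`.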
You can apply a plugin by using the KongPlugin CRD type to implement, for example, proxy caching, or response rate limiting, or Prometheus support, which is what we're going to use today. Outside Kubernetes, using the Admin API, you would create a new plugin that applies to a service, with the name prometheus, and then configure it. This is a very small container, so I'm less concerned about it than about the other ones I have to pull. Alright, so as we wait for this, once these services are running, I'm going to create the Ingress rules that allow us to consume them. We have three services, billing, comments, and invoice, which are being created with the YAML that's running, and on top of that we create the Ingress rules so that when Kong receives a request on its proxy port, it understands that we really want to proxy it to the billing service, and likewise for comments and invoices. When we apply this resource, Kong listens to the Kubernetes API server and automatically updates the Kong data model to support the services. Okay, it's running now. Alright, we got the services running, so let's go ahead. This rule also strips the path for routing: basically, we strip the matched prefix from the original request before proxying, which I need in this case; you might not need it in other use cases. But this is the fun part: this will create those mappings inside of Kong. Right now, if I try to consume Kong on port 8000 at /billing, it won't find anything, because the system hasn't been configured yet. But if I apply this configuration, it will create the mapping, so I'll make that request one more time, and this time it should work. This created the ingresses; we can also explore that from the actual GUI, by the way. And if I make that request again: okay, this is the billing service, the actual service responding with something.

I want to simulate a bunch of requests to these services, because I want to simulate some traffic. This is a very inefficient way of doing it: I'm creating a while-true loop making requests to the services. So I'm going to open up a new tab and just execute that. There you go, a bunch of requests right now. In this situation we're not seeing anything: we have APIs in our system, we configured them, people are making requests to our APIs, but we have no observability whatsoever. The whole point here is that now we want to get that observability out of the box. I'm going to use Prometheus and Grafana to do that: Prometheus is going to fetch this information from Kong, and then with Grafana, a very nice GUI, we can visualize those charts. Now, I'm a little bit concerned, because the connection didn't seem to be very fast, so let me see what happens when we install Prometheus and how long it takes. We're going to install it into a monitoring namespace, so that we know all our monitoring stuff is in there, and then we're also going to install Grafana so we can visualize our metrics in a nice dashboard. Cool, we have some stuff running, and these ones are still being pulled. How many of you are familiar with Prometheus and Grafana? That's very good. Depending on the internet connection, another part of the demo also demonstrates Zipkin, which I really like: Jaeger has a Zipkin-compatible server, so we can send Zipkin-compatible traces to Jaeger as well and visualize them in a nice GUI. So as I demonstrate Prometheus and Grafana, I might as well install Jaeger now, in the same monitoring namespace, so we don't have to wait later. Let's just do that.

Okay, so now there is much more stuff running, and we're just waiting for Grafana at this point. Once Grafana is running, we can install the plugin for Prometheus and visualize those charts in Grafana. I mentioned the Prometheus plugin; it's very simple. If we go back to the hub: to enable the plugin it has a unique name, which is of course prometheus, and every plugin can have its own configuration. This is a very simple plugin, which means that if we pass only the name, it will work with the defaults. To create the plugin I'm going to create a new CRD, a KongPlugin type, and we're going to make this plugin global. You can apply plugins to a specific service or a specific consumer, so there are different ways to select the path where the plugin should run. In this case we make it global, which means every service configured within Kong, in this case our three services, invoice, billing, and comments, will be targeted by this configuration. Error when retrieving... I think I'm overloading the machine a little bit. So we have just applied this configuration, after fixing some syntax errors I had in it: I created this KongPlugin CRD, global, and the plugin we're installing is prometheus. Let me see if Grafana started in the meanwhile. I can hear my computer getting a little bit crazy here; it's a little overloaded. We're running a few pods now, Prometheus, Grafana, Kong, three APIs, Jaeger, so there is a lot going on. You would think a $2,000 computer would be able to handle this. Okay, a few hiccups, but let me see if Grafana is now running. Okay, everything is running right now; it did it. Cool. So what we want to do is expose Prometheus and Grafana, so I'm going to do this, and then I'm going to expose the Grafana app as well. Cool, so now this is going to be exposed on port 3000; if I go to port 3000 I should have Grafana running. Basically, this is the Grafana that's running within the Kubernetes cluster; we did a port forwarding, and now we need the credentials. To fetch the credentials, we read a Kubernetes secret, and this is the password. Within Grafana there is going to be a Kong dashboard that we can access, and this is the official Kong dashboard for Grafana. It's very unfortunate that this is not running the way it should, but you have the steps: it provides lots of charts that show you exactly all the requests and the latency running through the Kubernetes cluster right now, out of the box. The hiccups we had today come down to the actual load and internet connection on my computer, so that's unfortunate. It seems like Grafana is not loading for whatever reason; it did, and then it stopped. But by enabling that plugin, we get all those charts out of the box. There we go, maybe now we can get it real quick before they kick me out. There we go: Kong. Okay, it's not working, but that's the dashboard. I'm so sorry about this; you can execute the demo by following those instructions. Of course, everything I showed today is available online on our official docs. Kong Nation, at discuss.konghq.com, is our forum for the community if you have questions, as well as GitHub and Twitter. Of course, download Kong; 1.3 was released yesterday. And thank you so much.
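For readers following along at home, the global Prometheus plugin described above can be sketched as a CRD like this. How "global" is expressed has differed across ingress-controller versions (a label in recent versions, a field in some older ones), so check the docs for your version.

```yaml
# Global KongPlugin enabling Kong's bundled Prometheus metrics plugin
# for every configured service.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: prometheus
  labels:
    global: "true"
plugin: prometheus
```

Once applied, Kong exposes request counts and latency metrics that Prometheus can scrape, and the official Kong dashboard for Grafana visualizes them. To reach Grafana locally you would port-forward it and read the admin password from its secret, along the lines of `kubectl port-forward -n monitoring svc/grafana 3000:80` and `kubectl get secret -n monitoring grafana -o jsonpath='{.data.admin-password}' | base64 --decode`; the exact service and secret names depend on how Grafana was installed.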