I'm going to count one, two, three. Okay, great. Thank you, Chris, for getting us rolling. I'm Michael Wheaton. This is another edition of the OpenShift Commons Briefings, our Operator Hours TV show, which is streaming live. I'm now going to do what I like to call share my whole screen. And I'm going to start from the first slide here. It's not going to be slide heavy, but I did want to put up the intro slide here today. We are lucky enough to have our good friends from Kong with us, the cloud connectivity company, and they're going to be talking about why API management is dead. Some people might think that's somewhat of a controversial title, which I think is a perfect segue for this conversation here today. And they're going to talk about why we need a new approach. So we have the Field CTO, Melissa van der Hecht, joining us from overseas in the UK. And we have Claudio Acquaviva, one of their lead solutions architects, joining us from down in Brazil, a much warmer place than it is up here in the northeast of the United States. Melissa and Claudio, welcome to the show today. Thank you, Mike. So excited to be here. Thank you. Yes. Thank you. Very, very happy to be here. Yeah. Yes. I'd much rather be down there. I think the next time we do something like this, you should invite us to film it down in your office, Claudio. Yeah, absolutely. I'm getting a little tired of shoveling snow. So Kong, King Kong, what's in a name, Melissa? You know, when I was looking up interesting facts about Kong the other day, I typed in Kong and, as you can imagine, there were all kinds of references to things that were, you know, something other than your company. Tell us a little bit about the company. What's in the name? Is this a 20-year-old company? Did you start in 2019? Why don't we learn a little bit about that? That's a great place to begin, Mike. Let me tell you a little bit about what we do, and then we'll talk about how we came to be the company.
Across Kong, we are on a mission to power connections to build a reliable digital world. What does that mean, I'm sure you're asking. Mike, everybody watching, I want you to close your eyes. Picture the night sky. If you've ever seen the line of the Milky Way, imagine that beautiful, beautiful image. What you're seeing in front of you is 1 billion trillion. That's a 1 with 21 zeros. You're seeing 1 billion trillion stars in the observable universe. That is an incomprehensible amount, I think. You can open your eyes again now. No, I actually cheated. I opened my eyes. But I'll start again. So we have 40 times more bytes of data on planet Earth than there are stars in the observable universe. It's quite a challenge that we have within IT at the moment. And it's not just that we have more data, but our data is more and more distributed. There are an estimated 50 billion connected smart devices today. The average enterprise uses over 900 different applications across on-premise, SaaS, and different cloud environments. And it's not just that our data is more distributed, but we are creating more and more services. We are in the midst of this relentless trend to move from monolithic applications and processes to increasingly smaller, more fine-grained capabilities. What we're seeing now is a shift. The way that a company differentiates is now about how they connect all of these distributed things together. It's how they combine and how they assemble and reassemble all of these fine-grained capabilities and microservices. And this is what Kong does. We power that connectivity layer, or connectivity fabric, if you will. So Kong is a service connectivity platform. That means we, and Claudio will talk in a second about how we came to be the world's most widely used open-source API gateway. But we also power microservices, service mesh, Kubernetes, everything coming next, trends that we might not even have dreamt of yet.
We are powering the service connectivity platform that allows you to have this reliable connectivity layer. And Melissa, if I can just... So when you say Kubernetes, I work for Red Hat. And as it turns out, we have a Kubernetes platform called OpenShift. I'm assuming that your solutions work on Red Hat OpenShift? Absolutely. In fact, a customer that I work very closely with, one of the largest retailers in the world, they are a fashion retailer and they sell to 202 markets through their online platforms. And they're running Kong on OpenShift to power all of this e-commerce and power all of that online shopping experience. Okay, great. And do you guys have an operator? I know Red Hat spent quite a bit of investment in working with our partners to build operators on OpenShift. Not being the most technically savvy person in the world, I'm pretty sure that operators are pretty important for improving day-two supportability of OpenShift and business apps. Do you guys have an operator in our registry and catalog? Yeah, yeah. I can take this one, Melissa. Yeah, absolutely. We provide an operator. As a matter of fact, it's part of how Kong supports Kubernetes, not just having Kong running in Kubernetes as a pod, but, as a matter of fact, supporting Kong in a much broader way. Supporting operators, supporting Helm charts, CRDs, Knative, cert-manager, and so on and so forth. That's how we say that we support Kubernetes: it's not just about having the API gateway or the service mesh running as a pod. And then again, absolutely, the operator is fully supported by us. Yeah. Okay. And so Claudio, you're a solutions architect down there. Can you tell us a little bit about your role at the company? Yeah, absolutely.
Yeah, I've been working at Kong for almost two and a half years now as a solutions architect, and I'm responsible for helping our customers and partners and users with designing reference architectures, taking care of these modernization processes like workload transfers from, let's say, on-premise to multi-cloud deployments, this kind of thing. A fun job, as a matter of fact. Yeah, yeah. Yeah. I was a solutions architect for the first two years when I started at Red Hat back in 2002. It was a little bit different back then. I mean, we had 260 people worldwide. Things have certainly changed. How about yourself, Melissa? You're the Field CTO. What is your role there at Kong? Oh, my gosh, Mike. I love my job. I have the privileged position of working with a lot of different companies worldwide. And what motivates me, and kind of my primary focus with them, is inspiring that business change through APIs, through microservices. I think everyone who's familiar with APIs, I'm sure we can all go back a few years and remember the time that we spent educating people on what APIs were, and then kind of finally reaching that milestone where business people would start understanding, maybe not understanding what APIs are, but understanding that joining the API economy was good for business. And there are very many reasons why joining the API economy is great for business for all types of enterprises. But what is equally impactful is that every single one of us, pretty much wherever we are on the planet, our experiences, whether or not we know it, are powered by connectivity. They are joined together, everything we do digitally or even manually. There's some process somewhere that is enabled through APIs and through microservices. One of my favorite examples, actually. I'm sure we will all remember, kind of at the beginning of the various lockdowns that we had globally last year, in the UK, I know various parts of the US and elsewhere.
I wonder which other countries this happened in. Do you remember going to the supermarket and seeing all these empty shelves everywhere, and then just people with their phones out taking pictures of essentially the apocalypse in the supermarket? There were the epic toilet roll shortages. In the UK, what we saw was that there was this increased demand because of Covid to deliver PPE and medical equipment to hospitals, to all the staff who needed it, and there was the need to continue to deliver all these supplies to the supermarkets. But there was a huge reduction in the number of trucks that were on the road because so many of the drivers were isolating or not able to work because of the virus. The UK's largest government department used APIs to work in an automated way with the UK authority that issues truck driver licenses so that we could very, very rapidly get more trucks back out on the road and redeliver this PPE and redeliver the toilet roll. This, for me, is what I do and what motivates me, and I think what is so amazing and impactful about the work that we all do is that this connectivity has a fundamental human impact as well as a business one in everything that we see. And just as a reminder for everyone, we're live right now on Facebook and Twitch and YouTube, and if anyone has questions while we're here with our friends from Kong, please feel free to put them in chat and our production team will make sure that they get in here and they'll be addressed. It did strike a chord when you were talking about what it was like at the beginning of the pandemic. I remember going to the grocery stores and not being able to find any of those types of essentials we were just talking about. As a matter of fact, I actually went into a gas station one day and there was someone stealing the rolls of toilet paper out of the bathroom, right off the wall. Absolute pandemonium over here for toilet paper. So, Kong, but you haven't always been Kong, right? Yeah.
You know, we were talking about your founders and how they started in the basement. It was Aghi and Marco, right? Didn't they start the company in their basement and were living off of canned tuna fish and spaghetti, right? Just trying to keep things going. Tell us about the early days. Was it Mashape? Correct? Mashape, exactly. So, yeah, as a matter of fact, it's a quite interesting story. It begins when these two Italian friends, Augusto Marietti, our CEO, and Marco Palladino came to the United States and decided to found this company called Mashape. That was around 10 years ago. Mashape was focused on building an API marketplace application at that time. But then they realized, you know, two things. Number one, they realized that, at that time, there wasn't an API management platform good enough to support the throughput they were already experiencing. Number two, they saw that the real value of the whole solution was in the API gateway, not in the API marketplace application. So, because of all this, they decided to sell the marketplace and then keep on and move on with the API gateway. And that's how Kong was born. Then the name Kong, as you can imagine, is related to Mashape. Mashape, there's a jungle theme related to it. And then they decided to move on with this theme. Like, you know, Kong, the gorilla, as a matter of fact, would be the strongest ape of the species. That's the story behind it. It's a great name. Stay with me here. This is going to be very cheesy. The moment that I knew that joining Kong was the right thing to do for me was when I discovered the puns channel in Slack. It is full of everything that is exactly how you imagine. But we do things like our internal intranet is not Confluence. It is Kongfluence. When somebody achieves some kind of victory or joins the company, we say Kongratulations.
I think the name itself conveys so much wonderful imagery, but there's so much potential for humour that those of us who like those kinds of jokes do put into a lot of the things that we do. It's a jungle out there, Melissa. That it is, Mike. It is. So we're going to have some video demos here that Claudio has put together. How's your traction on GitHub? Oh yeah, absolutely. So the Kong project, the open source project, has had kind of a massive adoption all over the world. Recently we reached the historic number of 200 million downloads. That's for the open source project. And then in terms of GitHub, for instance, we have reached the number of 40k stars. Again, a very successful open source project. And yeah, that's it. Okay. And Melissa, you were talking about the Slack channels and the puns. Why don't you tell us about the acronym for the company culture? What's that about? Yes. Yeah, great question, Mike. So our logo is obviously Kong the Gorilla. Kong the Gorilla has a name. He is Gruce. And these are our five company values. They stand for, first, global. We are global, much like the gateway itself, which is used in 138 different countries, which is mental. At Kong, we represent 24 different countries, which is an incredible feeling. Pretty much every single call I get on, internally or externally, there'll be people joining from different countries or even different continents. Like even, Mike, Claudio, between the three of us, we are representing three continents right now. I mean, you've got people who speak different languages, people who've got different experiences and backgrounds and different schools of thought. And this is such an enriching and creative environment to sit in. The next value is real. It's a bit of an overused word, but we are authentic. We are ourselves. We are not afraid to be humans. We're unstoppable. We are customer obsessed.
We want to achieve, by partnering with our customers, partnering with the open source community, all of the outcomes that they can achieve. The C, that is for champions. We strive to be the best. We set a very high bar, but an achievable bar, for ourselves. And the E at the end is for explorers. Everybody's very curious about the world, intellectually curious. We're innovating. We're not just doing API management. We're saying API management is dead. We're doing something different, because that is what connectivity needs. We are really exploring and peeling back the layers of the onion and questioning the assumptions to make sure that actually what we do, what we focus on, what we say are all the right things, not just the things that people are used to hearing. Okay. So how about, let's crank up the technical discussion here. How are you guys doing it? What is it that you folks are specifically doing to help with the business impact, with the human impact, around how APIs have changed? That's a great question. I think maybe I'll take off on that question and then hand over to Claudio to delve into the technology as well. From a business perspective, I think pretty much all business leaders, regardless of which domain they're in, are aware that joining the API economy, being API first, thinking about APIs is not just a nice to have, but a must have. This is because we're all familiar with the nice shiny term of digital transformation. We are trying to power digital experiences for our customers, for our employees. We are businesses that are composed of digital assets. All of this is underpinned by that connectivity, underpinned by APIs and microservices that bridge between those different applications. By powering this connectivity through APIs, you can accelerate your time to market. You can improve your operational efficiency. This is because you need to build for consumption.
You need to make sure that these services that internal developers or external developers build are well designed, that they are reliable, that they are secured, and also that they are then discoverable, that you can access them. Therefore, this contributes to a reuse rather than rebuild mindset. Obviously, there's a bit of culture change that goes along with that, but the end result, if you do it right, is that you end up already having a huge amount of the work on your projects done before you've even begun, because there are these APIs in place that you can easily find and access and then reuse. You don't spend time rebuilding stuff. Then on the other half of it, there's a very valid focus right now on data privacy and security. If we have no visibility, no control over how our data is moving between different systems, between different locations, then each one of these endpoints is a security risk. By powering APIs, by securing them with a platform like Kong, you can ensure that every single one of these connection points is managed, that it's well secured, that you actually are compliant with not just the regulations of the country that you yourself sit in, but the regulations of every single nation or locality where you've got data running as well. From a business perspective, APIs are the critical enabler for providing better experiences at a lower cost, at a lower risk. Should we turn over, get into a bit more of the use cases, I think? I think the segue into this is, obviously, there is still a valid need for the traditional API management that we became familiar with five years ago, maybe 10 years ago. Like you said at the beginning, it's a controversial statement. We are saying that API management is dead. In fact, our CEO, Aghi, stood up at our summit a couple of years ago and opened up the event with this statement.
The rationale here is that we used to think of APIs as RESTful services that sat in our DMZ or next to our ESB and exposed data in a nice pretty JSON format to the web. It's all about that edge gateway management. This is a subset of the use cases that we see today. Probably that's a good place to start off, Claudio. This is still a valid use case; the platforms on which this needs to run, though, that has changed. Maybe go into that a bit more? Yeah, absolutely. From the technical perspective, to put it simply, you cannot solve new problems with old technology. That's the truth. In the past, for instance, customers used to have a single platform, a single runtime, to run all the applications. All the applications were confined in a single runtime. But this time, because of several reasons, cloud computing, the microservice phenomenon, customers are going to other places. They want to run their applications as containers, for instance. In other words, to run Docker, and then, going a little bit further, to run Kubernetes. Cloud number one, cloud number two, in other words, to have these multi-cloud deployments. What I'm saying is that, differently than we had in the past, exactly the same application, logically speaking, is not confined in a single runtime, but this time is being spread out across multiple runtimes at the same time. And yes, you're going to have components of this application running on-premise, at the same time other components running on Docker, other components running on cloud, Kubernetes, and so on and so forth. And then, again, you still have exactly the same application. That's the way it is, that's the way it should be. And then another comment regarding it: considering what we had in the past, again, a single runtime was responsible for running and solving all these technology requirements.
Back in that time, it was very, very common: customers wanted to implement a new protocol, a new platform, a new architecture pattern, a new deployment, and so on. Usually, the answer for these was like, I am so sorry, the product that you have here doesn't support this protocol right now; maybe next year, because it's supposed to be available in the next version we are about to launch, this kind of thing. Not anymore. Today, what we've seen is that customers, first of all, are adopting these agile methodologies. At the same time, they are building very strong DevOps teams, very strong tech teams, and then these teams are looking for best of breed in terms of technology options from the market's perspective. And then again, we're not just talking about your logical application being deployed in a highly distributed environment, spread out across multiple runtimes at the same time, but then again, a multi-vendor deployment. Again, customers are looking for the best of breed options in the market. And that is exactly where Kong sits currently, in this new ecosystem. And then that's why we think that API management, as we know it, well, of course, the API gateway is still there, playing a very, very critical, important role at the edge computing layer to protect your data center assets, but then again, it's not enough. You should have other points in your architecture to take care of the microservice communication and so on and so forth. So in a sense that, regardless of where you are, you should have a service control point in your enterprise architecture to take care of the service consumption, to take care of the microservice-to-microservice, or service-to-service, communication if you will. That's why we think that API management, again, as we knew it, is dead. We have to provide the next generation for these new requirements and platforms and so forth.
As a matter of fact, we position ourselves not just as an API gateway vendor anymore, but instead as a service connectivity platform vendor. Containers have been around for a very, very long time. I mean, I think mainframes, Solaris, but people weren't really using them because the orchestration tooling around that just wasn't mature enough. I think Docker did a lot to bring that forward. CNCF ran a study, I think it was this year, that showed that 84% of the people who responded to their study were now running containers in production. So how does that affect or impact the need for Kong API gateways? Does that complicate things or does it make it easier? Yeah, you broke up a bit. I think you're back. So, yeah, container, yeah, you're back. So, yeah, as a matter of fact, a container-based platform, I would say, from an installation standpoint, is the simplest deployment for us. First of all, we provide official Docker images, and then we provide official Kubernetes support. As I said before, it's not just about having these images deployed as a Kubernetes pod, a container inside a pod, but, as a matter of fact, supporting Kubernetes in a much broader sense: Helm charts, Knative, cert-manager, and so on and so forth. I would say again that Kubernetes is kind of a straightforward deployment for us, and fully supported. Okay. Well, I know when we did our dry run, I think it was last week, you were working on putting together some demos. How did it go? And what do you have for us? I think you said you were going to work on a couple of things. Yeah, as a matter of fact, to be more precise, we brought four videos, not that long, around two minutes long each. And then, yeah, we wanted to show them just to illustrate what we've been talking about here. What is this new platform?
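For readers following along at home, the "official Kubernetes support" Claudio mentions can be sketched with Kong's public Helm chart. This is a minimal sketch, not the demo's exact setup: the release name, namespace, and chart defaults are assumptions that may vary by chart version.

```shell
# Add Kong's official Helm chart repository
helm repo add kong https://charts.konghq.com
helm repo update

# Install the gateway together with the Kubernetes Ingress Controller.
# Release name "kong" and namespace "kong" are arbitrary choices here.
helm install kong kong/kong \
  --namespace kong --create-namespace \
  --set ingressController.enabled=true
```

On OpenShift specifically, the certified Kong operator available through OperatorHub wraps an equivalent installation and lifecycle-management flow.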
What is this service connectivity platform we're talking about? So, yeah, let me go through them. So the first one, and then first of all, let me share my screen here. So here's the video. First of all, my beautiful OpenShift cluster running in GCP, as you can see over here. And then the... It really is beautiful. Can you see it? Yes. Yes, it is. You have to start every screen share session with the following words. Can you see my screen? Yeah, absolutely. Can you see my screen now? I actually wanted... And you guys can take this if you want. It was about eight months ago, and everyone is always like, can you see my screen? Yes, we... I wanted to make a t-shirt that had that on the front. Can you see my screen? A good idea, yeah. I wanted to send it out to everyone in the company. And I think at the time, human resources felt that it wasn't sensitive to the challenging times, but... One of these days, I'm going to make a t-shirt that says, can you see my screen? Maybe even put it on the front. Yeah, yeah. I like it. Yeah, kind of fun, yeah. So, yeah, here again, my beautiful OpenShift cluster deployed in GCP. And then I've got some Kong runtimes already deployed. As you can see, you were asking about our operator. The operator, as a matter of fact, is already deployed, as well as the Kong Kubernetes ingress controller. So, you can see there's an ingress-kong deployment in there. That's my ingress controller. And then, as you can see, there's only one pod running this time. Yeah, here's the pod. But then, again, I've got, in the background, a loop injecting requests into my cluster. And then, on your right, you can see Kong Vitals. Kong Vitals is one component we provide in our Kong Enterprise edition. Kong Vitals is responsible for keeping track of the main KPIs of the gateway.
Like, you know, the number of requests that have been processed, proxy latency, upstream latency, datastore caching, and so on. And then, as you can see over here, here's my pod, the ingress-kong pod, running. And then, as I said before, I'm already injecting some requests into my API. But, you know, what I'd like to show you is that one nice capability provided by Kubernetes is the scaling capability. Kind of, you know, imagine that you need to support a much higher throughput this time. So, you can go to OpenShift again. And then, here's my pod. Only one pod. And then, I can scale the deployment out, this time putting three pods to run. And then, this time, you can see on your right Kong Vitals, again, showing you three pods running and taking care of the throughput I'm injecting this time. So, kind of, and then again, similarly, we can reduce the number of pods along the way. So, a very, very, very nice scaling capability provided by Kubernetes. As you can see, I'm doing it manually. So, Kubernetes brings another capability, which is what we call the HPA, the horizontal pod autoscaler. This time, we're achieving elasticity. So, the number of pods this time goes up or down depending on the traffic, but this time automatically. You won't have to manually request more pods or reduce the number of pods. A very, very cool deployment. So, as I said before, totally supported. This container Kubernetes deployment, totally supported. So, Melissa, any comments? Want to add any comments on this? I think it's funny because a lot of the companies that we see and that we work with are still using and having to work out how to support APIs and services deployed in legacy environments, quote-unquote legacy, or on-premise environments, their own data centers. But so many of those companies are also running Kubernetes, running containers, and pushing, kind of, at that other end of adoption.
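The manual scale-out and the HPA elasticity Claudio describes map to two commands. A minimal sketch, assuming the demo's ingress-kong deployment lives in a namespace called kong (the namespace name and the CPU threshold are illustrative assumptions):

```shell
# Manual scaling, as shown in the video: go from one pod to three
oc scale deployment ingress-kong --replicas=3 -n kong

# Elasticity via the horizontal pod autoscaler instead:
# Kubernetes adds or removes pods between 1 and 5 replicas,
# targeting 70% average CPU utilization across the deployment
oc autoscale deployment ingress-kong --min=1 --max=5 --cpu-percent=70 -n kong
```

With the HPA in place, the replica count follows the traffic automatically, which is exactly the elasticity the demo achieves by hand.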
So, we see a lot of use cases where it's not just focusing on the kind of stuff that we saw a few years ago, or not just focusing on, you know, the really nice, shiny stuff, but actually use cases that have to span both ends of that spectrum. And I think this is where, going back to what you said a minute ago, Claudio, this is one of the reasons why I absolutely love our technology. It is the same. It is consistent. It is equally capable of performing in all of these different deployment environments. It's not Kubernetes-specific. It is not VM-specific. It is not, you know, cloud provider-specific. It is exactly the same thing that you can deploy in each one of these distributed environments. So, ultimately, you end up with these, kind of, localized or federated governance and security enforcement points. But then all of this visibility bubbles up to what is, to use the phrase, you know, that single pane of glass that gives you visibility across that whole distributed system. Yeah, absolutely. Yeah, please go ahead, man. No, I just, when we were talking, I think it was last week, you were saying, you know, how everything's constantly evolving, these increasingly distributed landscapes, and that you need to be able to apply security and governance consistently everywhere. How does Kong help with that? Yeah, absolutely. Yeah, please go ahead. We both love this topic. You can each take turns. We're all friends here. I mean, I know Claudio has prepared something to actually show. I think that will be a really great way of bringing that to life. But this is something I did. I presented this at an event we did last week. I'm speaking to customers almost daily. This is the biggest challenge that I see companies struggling with today.
It is how to ensure that there is consistent governance in terms of how the endpoints are secured and managed and the kinds of policies that are applied, but also that the API and microservice best practices are followed throughout the API and service life cycle, that governance. So many companies are struggling with how to achieve that governance consistently across all of these federated development teams in different locations, with different tools and processes, operating with all these different types of services, different protocols, that all then sit within the different distributed infrastructure. It's chaos, right? So we enable APIOps, which is, if you're familiar with the term, GitOps applied to the API life cycle. You can probably see how animated I am. I think it's my favourite topic right now. The magic here is that when you are shifting to an API-first or an API-driven business model and development approach, typically what companies do is create an API platform team that acts as a central governor, that makes sure that anything deployed to the platform has been checked for security, for best practice, and all of that. This is enabled typically by running evangelism sessions, brown bag lunches, lunch and learns, road shows, when we could travel. We used to have a little roadshow that we would take down to different offices. But the trouble with this approach is that it puts all of the onus right onto the developer to make sure that they manually change how they work, that they stay up to date with what the best practices are, and that typically means reading some out-of-date page on Confluence that says this is how you should follow API best practices within an organisation. So what we enable with this APIOps approach is going from that manual workflow to a self-serve, automated one.
This is using the fact that Kong supports declarative configuration, in Insomnia, from the very beginning of the design of your APIs through to the deployment, through to the configuration of the different runtimes. This is using that single source of truth, the declarative configuration that is stored within Git, to power automated checks, automated deployments, automated everything across that life cycle. Claudio, you want to take over? Yeah, sure. Yeah, that would be, I would say, the most interesting video we brought. I'd like to illustrate a little bit more about this APIOps process Melissa was talking about. Again, let me share my screen again. Can you see my screen? Claudio, I would be the first person to say, yes, I can see your screen. I'd like to just point at my t-shirt, but I can't do that. Sorry. Okay, good. So here's exactly the same OpenShift cluster in there. So then again, yeah, first of all, just a quick introduction. My APIOps process, I'm going to show you an end-to-end API provisioning process, kind of, starting with the API spec edition. And then I'm using Git. As a matter of fact, Mike, I got inspired by the latest blog posts I read on the OpenShift website, talking exactly about this, like, you know, the OpenShift and GitHub Actions integration. So again, I got inspired by that. And that's why I chose GitHub Actions in order to implement my CI/CD pipeline there. So again, by the way... Claudio, thank you for that. You know, my team is responsible for those blog posts. So if folks ever want to do a blog with us and post it on the OpenShift.com site, let me know. I think that would be really good. Yeah, absolutely. And then again, great blog posts, as a matter of fact, great ones. Oh, cut it out. You're already here on the show. You don't have to butter us up. Okay, good. So again, you know, beginning with the API spec editor, we're going to put the CI/CD pipeline in action.
And then the CI/CD, of course, will be responsible for publishing not just the API spec to our API gateway runtimes, but also for publishing the API documentation to the dev portal, our Kong Enterprise dev portal. And that portal will be responsible for exposing the documentation to the developers and so on. So this is my GitHub repo over here. And this is Insomnia Designer. As I said before, we begin with Insomnia Designer, which is the API spec editor, the tool the API designer is supposed to use to craft, to define, to enhance the API spec we've been working on. First, I'm going to send some requests to the API gateway to show there's nothing there; the gateway is totally empty, no API deployed at all. Now, in the Insomnia Designer API spec editor, there's a linting process running in the background, so every time I make a syntax error, I'm going to get a warning flagging it, and of course I'll have to correct it. At the same time, on my right, you can see the visual representation of my API spec. So on my left you can see the OpenAPI 3.0 spec, and on my right, the visual representation of it; and not just the visual representation, you can actually try the API out, send requests to it and so on. Another important capability provided by the editor is the ability to inject some Kong extensions in order to define policies. As a matter of fact, I'm going to deploy this API along with two policies at the same time. The first policy, as you can see up here, is the rate-limiting policy: a three-requests-per-minute rate-limiting policy, meaning that I'm supposed to send only three requests in a given minute to the API gateway.
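The Kong extensions Claudio injects are `x-kong-plugin-<name>` keys added directly to the OpenAPI document; when the spec is converted to declarative configuration, they become gateway plugins. A sketch, assuming a hypothetical API title and Keycloak realm URL:

```yaml
# Fragment of an OpenAPI 3.0 spec with Kong extensions attached.
# The API name and issuer URL below are illustrative assumptions.
openapi: "3.0.0"
info:
  title: Orders API
  version: "1.0.0"
x-kong-plugin-rate-limiting:
  config:
    minute: 3          # at most three requests per minute, as in the demo
    policy: local
x-kong-plugin-openid-connect:
  config:
    issuer: https://keycloak.example.com/auth/realms/demo  # assumed Keycloak realm
paths:
  /orders:
    get:
      responses:
        "200":
          description: OK
```

Because the policies live in the same file as the spec, committing the spec to Git versions the security and traffic rules along with the API contract.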
And then the second policy is another integration point with OpenShift, more precisely with Keycloak. In order to consume this API, I am supposed to send Keycloak credentials; otherwise it won't be possible to consume it. So here I'm injecting these policies. Another capability provided by the editor is the ability to integrate with a Git-based repository. As I said before, I'm using GitHub this time; these are my credentials, my GitHub credentials. And every time I commit a new version of my API spec and push it to the GitHub repository, the CI/CD pipeline goes into action. So here's my GitHub Action this time; it's going to get started in a moment. Just because I pushed a new version of my API, I trigger the GitHub Action. Here it is in action, and I can check out all the steps. The CI/CD implemented by the GitHub Action is responsible for publishing the API spec to my API gateway, and not just the API itself but the documentation as well. Right after the CI/CD pipeline is done, I can see the new service, the new route, the new plugins; consider a plugin as an extension for the API gateway. Each one of these plugins is responsible for a given policy. As I showed you before, the two policies, the rate-limiting policy and the OpenID Connect policy with the Keycloak integration, are already there, deployed and configured for me. Likewise, here's my developer portal, and here's exactly the same API spec's documentation, this time deployed and available for my developers. So I'm going back to the editor, and this time I'll finally be able to send requests to my API. But since I've got the Keycloak integration, I am supposed to send Keycloak credentials; otherwise, again, it won't be possible to consume the API.
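A pipeline like the one Claudio shows can be sketched as a GitHub Actions workflow that converts the spec to Kong declarative configuration and syncs it to the gateway. The file paths, secret name, and admin URL here are illustrative assumptions, not the exact workflow from the demo:

```yaml
# .github/workflows/apiops.yaml - illustrative APIOps pipeline:
# every push regenerates the gateway config from the spec and syncs it.
name: apiops
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Convert the OpenAPI spec to Kong declarative config
        run: inso generate config openapi.yaml --type declarative -o kong.yaml
      - name: Sync the generated config to the gateway
        run: deck sync -s kong.yaml --kong-addr ${{ secrets.KONG_ADMIN_URL }}
```

`inso` is the Insomnia CLI, so the same spec the designer edits in Insomnia Designer drives the automated deployment; publishing the documentation to the dev portal would be an additional step in the same job.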
So let's say I send wrong Keycloak credentials. I'm supposed to get a 401 error code, meaning that Keycloak didn't like them, and therefore the API gateway is not allowing me to consume the API. Once I provide the right credentials, I'll be able not just to get authenticated by Keycloak but, as a matter of fact, to consume the API. And then we're going to see the second policy in action, the rate-limiting policy. I'm supposed to send only three requests in a given minute, so if I try to send more than three in a given minute, I'm supposed to get a 429, meaning that I have hit the rate-limiting policy. So that's the video, a very quick demonstration of the APIOps provisioning process. Well, now that everyone knows exactly what your Gmail address is, I wrote it down when you were logging in, how can people get in touch with Kong? I'm guessing they don't want to send email to your Gmail. Do you have, you know, Claudio at...? I usually would do this at the closing, but I figured I'd sneak it in right here, seeing as your Gmail address was just all over the screen there. Yeah. So you can get in contact using my corporate email rather than my personal one; my Kong email is exactly my name, Claudio at KongHQ.com. And if you go to our official website, KongHQ.com, you have multiple resources. For instance, one nice resource available there is our calendar of virtual events. We just had our Destination event, totally focused on security; we call it the zero-trust networking Destination event. And not just our own events, but other well-known events like KubeCon and so on and so forth.
And for the technical audience, I really invite you guys to go to our GitHub repositories: a massive collection of information, technical guides, documentation, source code and so on. Okay. Or if anyone didn't get any of that, they can send me an email, Wheaton at redhat.com, and I can get you connected with them. So we're coming up on the top of the hour, but we still have some more time here. I think you had another demo; didn't you have a service mesh demo that you wanted to run? Oh, I'm so glad you brought that up, Mike. I was like, right, you've got one more thing that I really want to make sure we've got time to talk about. I keep speaking about the differences, right, between API management a few years ago and API management, or service management, as it is today. API management a few years ago was really only focused on one connectivity pattern, and that was API management at the edge. So typically at the edge of our network, we hosted it ourselves in our DMZ, or we used some kind of cloud solution, a centralised deployment in the cloud through which we then pushed all of our API requests. This is one of three key connectivity patterns. The second one is powering application-to-application connectivity; that is internal API and service management. For example, you have a product domain within your business that releases APIs as services that are consumable by other teams across the same business. So people working within the customer domain can reuse APIs that the product team have shared with them, but they are not externally consumable. The third connectivity pattern is the hot topic really across our industry: the service mesh, right?
This is where we're breaking down our monolith into microservices, and we need that overall large application to be comprised of all those tiny little chunks of code that talk to each other. The service mesh provides the security, provides the traffic management, provides the load balancing to ensure that all of these little nodes that need to talk to each other can do so reliably and effectively. Claudio, I really should stop talking and let you show it. Yeah, the service mesh connectivity pattern has become very, very important. Here we're not talking about the external traffic coming to your microservices architecture, but instead the internal traffic we've got inside the service mesh or microservices architecture. It's very important to keep in mind that when we talk about microservices, we're talking about multiple instances at the same time, multiple runtimes at the same time. So this microservice-to-microservice communication is not a straightforward topic to address. You have to deal with circuit breakers, load balancing, health checks, mutual TLS; you have to encrypt this microservice-to-microservice communication as well. So it's not an easy task to implement, and the service mesh is exactly what addresses these kinds of technical requirements. Now, I mentioned we're live on YouTube and Facebook and Twitch, and we've got some questions that just came in from YouTube. I want to bring this up here, if I may. So I don't know if this is for Melissa or Claudio, but: in context, are we talking about Kong on an OS deployment, or Kong integrated with the operating system? In the latter case, sorry, I don't have my readers on, does that follow the model of ingress, service, routes? Yeah, I can take this one. I think OS in this context is OpenShift. That's my understanding. I might be wrong.
But if it is, yes, that's totally right. What is Kong running on OpenShift? Kong is an OpenShift deployment implementing the ingress controller, the Kubernetes ingress controller spec. In this sense, we're totally following the ingress controller specification. So yes, we provide CRDs in order to define an ingress, to modify an ingress, to apply policies to the ingress, using exactly the same plugins I showed you before, like an OIDC authentication policy, a rate-limiting policy and so on and so forth. Yeah, absolutely, we follow the service/route/ingress model in this OpenShift deployment you're referring to. Thank you. We just got one more, just came in from Facebook; I want to get it out there. Is there a position on Kong service mesh versus, say, the OpenShift 4 Istio model? Yes. As a matter of fact, we provide a Kong-specific service mesh implementation. It's called Kong Mesh, and it's totally based on Kuma, a second open-source project that started as a Kong open-source project; we donated it to the CNCF a couple of months ago, and it is currently a CNCF project. So Kong Mesh is responsible for our service mesh implementation. There are some differences when comparing it to the likes of Istio; for instance, it's much easier to manage, deploy and configure, and so on and so forth. Another very, very important difference: we fully support implementing your service mesh in a hybrid environment, not just using Kubernetes, but defining your service mesh in a multi-platform deployment; services running in Kubernetes, services running in non-Kubernetes clusters, and all still part of exactly the same service mesh. So, yeah, that would be it. Okay. Well, I'm getting notified by our producer that, Mike, it's time to wrap it up.
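The mutual TLS requirement Claudio mentions is handled declaratively in Kuma (and therefore Kong Mesh): a minimal sketch of a Mesh resource on Kubernetes that turns on mTLS for every service in the mesh, using a builtin certificate authority:

```yaml
# Kuma / Kong Mesh "Mesh" resource: enabling this backend makes every
# sidecar encrypt and authenticate service-to-service traffic with mTLS.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin   # Kuma generates and rotates the CA itself
```

Because the same control plane can also enroll VM-based workloads, this one resource is how the hybrid, multi-platform mesh described above keeps a single security posture across Kubernetes and non-Kubernetes services.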
I was trying to see if we could go over a couple of minutes, but we can't right now. So hopefully we can have you folks back on again. I'm going to do what I like to say, I like to share my screen, and pop this up here. So thank you for coming today. If anyone wants to know more about Kong, you can see the link here on the screen. I was going to say click on it, but I don't think that'll work. And then of course, you've got bizdev at KongHQ.com. Love to have you back. Claudio, if you want to be a part of our blog postings and so forth, let me know; you've got my contact information. Melissa, thank you for joining us today. I hope you had a good time. Thank you for having us, Mike. This has been so much fun. And please promise me, as soon as you release your t-shirts, I would like one. Thank you. Yes, thank you. Thank you again. We're very conservative, but I've got my fingers crossed. So anyways, you folks can stay on the bridge if you want, but we're going to wrap it up for the day. Michael Wheaton for the OpenShift Commons Briefings, our operator hours; we do this every Wednesday, one o'clock Eastern. So please join us. Thank you.