Hello, and welcome to this special CUBE conversation. I'm John Furrier, host of theCUBE here in our Palo Alto studios. We have CUBE alumni and co-founder and CTO of Kong, Marco Palladino, back with us. We've got some big news: the API management platform supporting dedicated multi-cloud is the topic today. Great to see you. Thanks for coming back.

Well, great to see you again.

So you guys have some news. We were chatting about the AI gateway recently; you and your team did a great sprint on that and nailed the product. I want to get an update on that for sure. Obviously, control planes, data planes, the hot thing now is the AI gateway across clouds. Give us the update on the big news.

Well, we are announcing our dedicated cloud gateway solution on top of Konnect. So let me explain. Today, Kong already has a cloud control plane that allows every organization in the world to have a unified control plane when they run gateways, ingress controllers, service meshes and AI gateways. And today we are announcing a new capability of Konnect: the ability to press one button and deploy global, multi-cloud, multi-region gateway infrastructure across the world. So this is your typical SaaS gateway infrastructure, but with a twist. There are some technology innovations that we have built that make this quite unique in the landscape. And of course, it is powered by Kong Gateway, which is a very famous and popular API gateway technology out there.

Yeah, APIs are the connect points for everything these days. We know that; we've been covering it for over a decade. The global piece is super important. Why is that important? Because what's different about global? What are some of the things around the global aspect of it? The one click in the cloud, obviously AWS, the biggest cloud. What's the global impact here? Is it simplicity? Is it functionality? Both? What's the upside on the global aspect of this?
Well, the infrastructure that we've built to deploy a globally distributed API gateway on Konnect is actually quite unique, because we've built a few things that are not available in other products out there. First and foremost, this is multi-cloud. We're shipping at GA with AWS support across a variety of regions. In a couple of months, we're going to be announcing Azure, and then before the end of the year, we're also going to be supporting GCP. This is important because every organization in the world, especially the top Fortune 500s, is already powering applications across more than one cloud, and being able to deploy infrastructure that can run simultaneously across every cloud reduces complexity and simplifies the operational task that the platform team has to put in place in order to manage that infrastructure. So you manage it the same way, whether it's running on AWS or GCP or Azure. Then, we built it on dedicated infrastructure, which means that everything that's being provisioned is entirely compartmentalized from what other customers on Konnect may be running. Therefore, it is the most secure and most performant gateway offering you can find anywhere in the world today. And then we've built it in such a way that it can scale without intervention. We call it autopilot because you press a button, you provision it across all the regions and all the clouds, and then you forget about it. We will automatically scale it up and down based on the live traffic that we're seeing in the infrastructure. Or the customer can choose to decide how many nodes they want to start, and you would typically do that for cost predictability or infrastructure predictability.

So multi-cloud, multi-region with auto-provisioning, but also management around load.

Correct, auto-scaling.

Auto-scaling up and down, okay. And the provisioning piece, you said auto-provision?

It's auto-provision. So essentially, the way it works, and it's available today.
So anybody can go and check it out. We can go on Konnect, we can provision a control plane, and we can attach a dedicated cloud gateway to the control plane. We choose the version that we want to run. We choose if we want auto-scaling, yes or no. We choose what clouds and what regions; we are obviously announcing AWS today, but there are going to be more clouds, like I mentioned. And then we choose if we want our cluster to be public or private. We support private networking, so we could be using this entirely as a private use case for internal APIs, or we can use this publicly at the edge if we have APIs that we want to expose, let's say, to a mobile application or a developer ecosystem. And then that's it, we press a button and we provision.

Yeah, the one-click thing, that's a cool feature. I want to get into how it works. You jumped ahead on me. So private networking, securing connections across multiple clouds, across regions with private networking, is going to be a requirement, especially as the AI gateway comes in with LLMs. How does this all work? Because API management now is, I won't say complicated, but it's getting more complex as more connections happen. So naturally it evolves, it's bigger. What are some of the things of how and why this works? Private networking, what's under the hood?

Well, let's look at the journey of an API management solution or technology inside of an organization. We start with a team building APIs. They don't want to build cross-cutting requirements like authentication, observability, load balancing, all of those requirements, so they use a gateway technology that they provision by themselves. Then there is another team, maybe running in a different cloud, maybe running in a different environment. They are also building APIs and they provision their own infrastructure. Every team today is an API team.
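The wizard steps Marco walks through (attach to a control plane, pick a version, toggle auto-scaling, choose clouds and regions, choose public or private networking) can be pictured as a single provisioning request. The sketch below only mirrors those steps; the field names and schema are hypothetical, not the actual Konnect API.

```python
def build_gateway_request(control_plane_id: str,
                          version: str,
                          regions: list[str],
                          autoscale: bool = True,
                          private: bool = False) -> dict:
    """Assemble a hypothetical dedicated-cloud-gateway provisioning request."""
    if not regions:
        raise ValueError("at least one cloud region is required")
    return {
        "control_plane_id": control_plane_id,  # the control plane the gateway attaches to
        "gateway_version": version,            # which gateway version to run
        "regions": regions,                    # AWS at GA; Azure and GCP to follow
        "autoscaling": autoscale,              # autopilot, or a fixed node count instead
        "network": "private" if private else "public",  # internal APIs vs. edge exposure
    }

# One-click provisioning boils down to submitting one such request:
request = build_gateway_request("cp-123", "3.6",
                                ["us-east-1", "eu-west-1"], private=True)
```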
Every developer is an API developer, because every application that we build in the world is powered by APIs. It is the backbone of our organization, of our products. Everything is an API. And over time, having different gateway solutions running in different environments increases the complexity and the security risk of the organization, because now we have so much fragmentation across the board instead of having a unified solution to cater to every team. And so with this solution, the dedicated cloud gateways, we can solve that problem, because the same cluster can run simultaneously across multiple clouds and multiple regions. It is the easiest way to essentially remove all the complexity that the teams have put in place over the last few years and give them a solution that just works natively with their environments, with private networking. So it is as secure and as performant as if they were running it themselves, but we run it for them. Obviously, we are experts in running gateways. So we are delegating the complexity to the Kong team, which by the way also makes it more cost effective, because now the team members in the organization, the SREs that were managing that 24/7 infrastructure, can do better things. They can actually focus on their platform, their business, their products instead of managing infrastructure.

So what are some of the customer comments? What do they say to you? Give us some anecdotal sound bites. Like, we love you guys, you saved our butts. Or, we had a very kludgy, disparate, disjointed architecture; we couldn't auto-provision. What were some of the problems that went away? Or what enhancements did you guys make differently once Kong came in? Was it a better, faster, simpler environment? Was it better performance, better security? What was the key before and after when you guys come into an environment that's got disparate API systems?

Before: I have to deploy the software. I have to run it. I have to scale it.
I have to upgrade it. I have to onboard the team members. I have to then build expertise on running it in one cloud, another cloud, Kubernetes, non-Kubernetes, virtual machines. After: I press one button and I just deploy that infrastructure. Which makes our API management solution become like electricity. Why electricity? Let me explain that. Today, we plug something into an outlet and it just works. In our homes, in our offices, we just plug it in and it works. We're turning API management into electricity. It just works. It's always there, up and running, ready to cater to new traffic and new use cases. It auto-scales, it runs everywhere the same way. And it allows us to have that global visibility across the entire API portfolio that the organization is creating, which is essential to drive business outcomes on top of the APIs that we are building, like accelerating developer productivity and shipping new products faster by assembling existing APIs in a better way. It allows us to enter new markets, and it allows us to have visibility in a way that we can innovate and build faster and better. Now, we are removing that complexity from the teams. We are also removing that complexity from the platform team that's catering to the application teams, and making that available in one click. Obviously, Kong is now maintaining it for these organizations, and I'll tell you more. Even before announcing this capability, we actually closed our first customers, because once you see it, you can't unsee it. It's so easy to provision this infrastructure that you wonder, why am I even running it myself ever again? It runs everywhere. It's the fastest technology. It is actually twice as fast as the native cloud solutions that the organization may otherwise consider, which is quite a big deal: at least twice the throughput at half the latency. That means that when someone buys a ticket online through our products, that takes half the time.
It's twice as quick as it would be otherwise. It means that when we are sending money to somebody else through an API request, that money, that transfer, that API call is twice as fast as it otherwise would be. The product experience is going to be better.

Marco, this is one of the things I'm fascinated about with your company and where the trend is going, because you almost have to step back and say, how did we get here? And it sounds like, and what we've seen is, as DevOps grew, the API sprawl happened. And it kind of just grew, and it's like, why are we doing this? Is that really the key pain point? It's become so entrenched that it's critical infrastructure, but it became that by default through the growth of DevOps. Is that kind of the core problem?

APIs are the new internet. And we had this conversation last time, when we spoke about the AI gateway, the artificial intelligence gateway, which at the time was a new product we shipped. And we spoke at that time about APIs being the new internet. 85% of internet traffic is APIs. The internet is APIs, and the internet as we knew it, made of websites, made of emails, that internet disappeared and got replaced by API traffic. APIs are the backbone of our digital world, which is everything we do on a daily basis: how you get paid, how you watch something online, how you travel, how you book tickets. Everything is powered by an API. If we can make that underlying API infrastructure twice as fast, with lower latency, globally distributed and more secure, then it's a big win, not only for the organizations that we work with, but for the customers and the end users that are using their applications.

No debate on that. So check on that; you got me sold on that. I totally align and believe in that religion. The question I'm getting at is, on the customer problem set, they've been doing the APIs. It's almost as if this has been part of the culture.
Is there a benefit with Kong, an increase in the velocity of API deployments? I mean, what are customers saying to you after they move from their homegrown or their evolved API management system to Kong? What is some of the anecdotal feedback? Like, we're going faster and redeploying resources. What's the benefit to them?

Customers want to go faster. They want to reduce complexity, and they want to gather more visibility and, how can I say, better management capabilities on top of the API portfolio that the organization is creating. And with Kong, they can do all of that. Quite frankly, Kong is the fastest API technology in the world. Kong also comes from the open source community. We have an open source core that is very popular across the world. We process 20 trillion requests every month that we know of; it's probably much higher than that. Every month, across the world, and that's all API traffic that we process. It is extensible, because you can build plugins on top of it. It already ships with hundreds of integrations and hundreds of capabilities. So there was really no point in reinventing the wheel when there is a technology that is already doing it for them, and it's very easy to use, and it's very quick and very performant and very extensible too.

You get to value faster. If you're building something, you go with Kong and you say, okay, they've got it taken care of, check, like electricity. Now I work on other things instead of building that in-house. That's the metaphor. So you take care of that. Now I'm curious about Amazon Web Services as the first cloud. You're announcing this. What about Azure and Google Cloud Platform?

Yep, so today we are announcing AWS at GA. We are going to be announcing very soon, in 30 to 45 days, the tech preview for Azure, and then that's going to be the next GA cloud we announce, and then before the end of the year, GCP.
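The extensibility Marco mentions, plugins layered on top of the gateway, is typically driven through Kong's Admin API: you POST a plugin configuration onto a service or route. The sketch below builds the request for Kong's bundled rate-limiting plugin; the endpoint shape and `config.minute` field are real Kong Admin API concepts, but the base URL and the limit value are just example assumptions.

```python
def plugin_request(service: str, minute_limit: int,
                   admin_url: str = "http://localhost:8001") -> tuple[str, dict]:
    """Return the (url, json_body) pair for a POST that enables rate
    limiting on one service via Kong's Admin API."""
    url = f"{admin_url}/services/{service}/plugins"
    body = {
        "name": "rate-limiting",             # one of Kong's bundled plugins
        "config": {"minute": minute_limit},  # allow this many requests per minute
    }
    return url, body

# Sending this with any HTTP client (e.g. requests.post(url, json=body))
# would cap the "orders" service at 100 requests/minute:
url, body = plugin_request("orders", 100)
```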
So before the end of the year, we will be able to support 30 to 40 plus regions across all three major clouds, and the ability to deploy API gateway infrastructure seamlessly running in one or more clouds in one click. It is something that does not exist today.

Okay, so you know I'm a big fan of Kong; I love what you guys are doing. I do believe the internet is powered by APIs. I think it's only going to get better. That's the connective tissue for all applications. It's only going to get better with edge computing and on-premise, and as distributed computing goes to the next level with AI; you're seeing it now. And I want to get back to the AI gateway with a couple of questions today, but I want to ask you this question. I'll be the skeptic. I'm a skeptic, I'm a customer. Well, I've got other solution options out there. Why you? Why Kong over the other guys? What's your answer to that? How do you compete against the other solutions? What do you say to the skeptical customer who's doubting the conjecture and the claims, or might think there's a better solution? What do you say to that?

Everything starts from the technology. At the end of the day, we are infrastructure technology providers. And so despite everything I say, the real final test is the technology test. Is the technology going to cater to the customer's, the organization's use case? And whenever we put Kong in place in an organization and we benchmark it, we look at what it can do, we look at how it can be extended, there is no one that can say that the technology is not best in class. And so really, we're starting from technology first. And then based on that, we are looking at the use cases where we can deploy this technology to make the life of the developers easier when they're building new products or building new APIs. But the technology is something that, as the CTO of the organization, I am obviously quite proud of.
And our company, my company, Kong, has been incredible in its ability to hire the best talent that can then work on creating this best technology for APIs. APIs are so critical that today, Kong is being used in every sort of use case, every vertical that you can think of. It's powering the top 10 banks in the world; the majority of them are running on top of Kong. Across the world, we're powering airline ticketing, we're powering stadium ticketing, we're powering e-commerce, retail, we're powering healthcare. It is being used in mission-critical use cases where the stakes are very high. One customer of mine, I asked them, Kong is not going to go down, but what happens if Kong were to go down? And his answer was, well, the global economy would fail. That's how critical APIs are, and that's how critical our technology is. And we have the best technology to handle the most mission-critical workloads, and now we're making this technology available in one click.

I think having the technology as the base truth, grounded in technology, it sounds like your journey as a startup, and now growing like crazy, has been to look at customer use cases and deployments. So that's the multi-cloud, multi-region, the dedicated infrastructure and the seamless integration. That's part of this announcement. So this is like the next chapter, if you will, on top of the technology.

We are working with organizations that are quite strict in how they are protecting and securing and complying with their data, and APIs transfer data. So it's quite a critical piece of the infrastructure. And the reason why we built it in a dedicated, compartmentalized way is because we understand these requirements and we do not want to run everything on shared infrastructure, where there is a possibility of having more than one customer running on the same underlying servers.
We wanted to separate that, because we wanted to give the best technology, even in the cloud, even as a SaaS service, the best technology to our customers. Typically, when we are looking at SaaS and dedicated technologies, dedicated is typically harder to use and harder to deploy, but that's not the case with Konnect dedicated cloud gateways. We have the power of dedicated, the compartmentalization of dedicated, and yet we have the same ease of use as serverless. We press one button, we fire and forget, and that infrastructure will scale automatically for us. It will be the fastest infrastructure we will deploy for APIs, without having to worry about pretty much anything. Kong does it for the customer. So it's really the best of both worlds.

So before we get into the AI gateway next, I want to ask one final question on this piece in the news. What is an example, or what are some signs or symptoms or indicators, from customers out there maybe watching this video, that would kind of scream, I need to get to Kong? What are some of the things in their environment that would be hotspots, pain points, rooms on fire? If someone looks at their environment, are there certain symptoms? How would you talk about that piece of it? When do they know to call Kong?

Organizations are realizing that in order to move faster, they don't need just an API technology in place. They need an API vision and an API playbook that they can build upon in order to automate what the developers do when it comes to shipping and creating and deploying new APIs. APIs are products. We need to have a lifecycle for APIs, the same way we have a lifecycle for any other product in the organization. Now, building that lifecycle, building new APIs, onboarding those APIs, versioning them, decommissioning them, being able to maintain this API portfolio, can be quite challenging.
And so typically we work with customers that are seeing the limits of what they've built up until now, and they decide the time is right to standardize on an end-to-end platform that can give them that unified control plane to manage all of the APIs with automation, in such a way that the platform team doesn't become the bottleneck, but the developers can self-serve some of those API policies without having to ask all the time. Technology that's so fast that it can run in Kubernetes, in the cloud. So they want something that essentially allows their developers to focus on the business, which is the products, the customers, and not on building infrastructure.

So if APIs are unwieldy, they basically have so much lying around, so much work going on maintaining it.

It is a lot of work. When we look at APIs holistically, it is about providing the infrastructure, yes, but it is also about providing the onboarding, providing the documentation, providing the analytics, providing the governance, the compliance, the security, and we can keep going on and on. It's a lot of things that need to be done.

No one wants a power plant in their facility. They just go right to you guys. Pretty much, right?

And so we work with them to make sure that our solution works very well for their use case, but at the same time, it is customizable, because obviously everybody's going to have edge cases they need to cater to, and we want to be able to customize it.

So AWS is GA, Azure's coming, Google's after that this year, end of the year, all three clouds.

And before the end of the year, yeah.

All right, awesome. So last time you were on theCUBE, you told the story of the product that you announced. Let's switch over to the AI gateway, because this is like the hottest area right now. You had a small team, and in Kong fashion, you guys jammed it out the door, got it out, all done. Small teams move fast. You were really proud of that.
How's that going? Because that whole idea of building and maintaining LLM-based apps is hot. In fact, the CEO of NVIDIA, Jensen Huang, said at their developer conference, he validated that there'll be a generative culture where LLMs will need to talk to each other, which was a point that you made weeks and weeks before that event. So how's that going? Are customers adopting the AI gateway to build and manage these LLM apps? What's the update on the AI gateway?

Well, the more AI, the more APIs, right? That's where it all starts. We can use AI, train AI, or have AI interact with our systems and services, and it's always an API powering all three behaviors. And so if there is an API, well, that's the perfect fit for what Kong does in the API world. And so we did announce, when was that? Six, eight weeks ago, we announced our AI gateway, and believe it or not, we're actually going into production with two Fortune 500s that heard about that announcement. They were telling us, look, we were building the same thing ourselves. You guys have it. We're actually Kong customers already. Why don't we explore and evaluate your technology? And they're now going into production with our AI gateway to cater to all of that AI traffic. When it comes to AI, you know.

So you've got product-market fit going on big time right now.

Well, because everybody who's using AI today needs to have security and governance around that AI traffic. It's very critical. And either they build it or they use Kong, but however they do it, they need something in place. And so if Kong can be a good technology fit, why not use Kong? And so we are improving developer productivity, because we support both cloud and self-hosted LLMs, so we can orchestrate across multiple LLMs to improve the reliability of the results, to improve cost, to reduce latency, and to fine-tune self-hosted models that the organizations are building.
But then we can also enforce governance and compliance and security on top of all of that AI traffic. Now, the cool thing about AI is that with this new announcement, dedicated cloud gateways, because AI is a feature of the gateway, we could be provisioning an AI gateway in the cloud in one click as well. So any organization today who's using AI and wants infrastructure to process that traffic, make their developers more productive when using AI, and secure that AI traffic, they can do it with this new announcement in one click in the cloud. So AWS, and then Azure and GCP before the end of the year.

So Amazon is the first cloud, and the AI gateway you just announced supports one click in that cloud. What use cases specifically does that support? Someone doing modeling in the cloud, or managing data? What would be an example of why I would click on the AI gateway to work on Amazon? What use case?

Well, whether we're using AWS LLMs backed by, let's say, Bedrock, or whether we're using OpenAI, or whether we're using self-hosted technologies that we're running ourselves, we need a gateway that allows us to visualize, orchestrate and manage all of that AI traffic. At the end of the day, APIs are what enable AI consumption, and those APIs, like every other API, need to be secured. Now, with our gateway technology, we can deploy Kong's technology in AWS, or Azure or GCP down the road, to cater to that AI requirement in addition to the API requirement. The point being, in one click we can kill two birds with one stone. We can simplify and accelerate the adoption of AI while at the same time rolling out a modern technology for APIs beyond AI, the APIs that we're building ourselves.

So developers on-premise could be coding with local LLMs, go to the cloud with Bedrock, maybe Anthropic, maybe do a few things with AWS, a VPC setup, and you guys are managing all that seamlessly.
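The orchestration across multiple LLMs that Marco describes, improving reliability by falling back between cloud and self-hosted backends, can be sketched as a simple preference-ordered fallback loop. This is an illustrative model of the idea only, not Kong's AI gateway implementation; the provider names and stub callables are hypothetical.

```python
from typing import Callable

def complete_with_fallback(prompt: str,
                           providers: list[tuple[str, Callable[[str], str]]]
                           ) -> tuple[str, str]:
    """Try each (name, call) LLM backend in preference order; return the
    first successful (provider_name, answer) pair."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real gateway would match specific error types
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Usage with stubs standing in for a cloud LLM and a self-hosted model:
def cloud_llm(prompt: str) -> str:   # simulates a provider outage
    raise ConnectionError("timeout")

def local_llm(prompt: str) -> str:   # self-hosted model answers instead
    return f"echo: {prompt}"

provider, answer = complete_with_fallback(
    "hi", [("cloud", cloud_llm), ("self-hosted", local_llm)])
```

Centralizing this loop in a gateway, rather than in each application, is what lets one team apply the same reliability, cost, and governance policy to all AI traffic.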
Correct. And by the way, the cloud gateway offering that we're announcing is in addition to the hybrid offering. Hybrid means that we still support the control plane in the cloud, but then you can run the data plane, the gateways or AI gateways, still on-prem if the organization wants more control over that traffic. But now they have options. If they want to run it in one click, they can. If they want to run it on-prem, they still can; that's available today. So we are giving them the most options to simplify that infrastructure.

Okay, so you've got the public cloud gateway, the AI gateway, and the hybrid gateway. What about the edge?

We can deploy it at the edge as well. So today, when we provision a cloud gateway cluster, there is one question in the wizard asking us: is it a public cluster or a private cluster? If it's a private cluster, we can support private networking to power internal use cases that are not accessible from the internet. It is as secure and as performant as if they were running it themselves. But if they choose public, we provision a globally distributed DNS that targets all the regions, in such a way that we can provide them with multi-region and multi-cloud connectivity out of the box. If one region goes down, we can implement automatic failover to another region, and we do all of that in an automatic way. So there is nothing else that needs to be done other than provisioning the cluster in one wizard, and it's like four or five steps.

Huge progress: one click, API in the cloud, dedicated infrastructure, seamless integration into Kong, multi-cloud coming by the end of the year with Azure and Google. You've got all the bases covered.

Oh yeah, I mean, that's our vision.

Marco, thank you for coming on. Congratulations on the new update, and we're looking forward to seeing what comes next.

Thank you for the opportunity to talk.
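The automatic regional failover described above, where a globally distributed DNS stops answering with an unhealthy region, reduces at its core to picking the first healthy region from a preference list. This is a deliberately minimal sketch of that idea, not how Kong's DNS layer actually works; region names are examples.

```python
def pick_region(preferred: list[str], healthy: set[str]) -> str:
    """Return the first region in preference order that is still healthy,
    so traffic silently fails over when the top choice goes down."""
    for region in preferred:
        if region in healthy:
            return region
    raise RuntimeError("no healthy region available")

# Normal operation routes to the first choice; if us-east-1 drops out of
# the healthy set, traffic automatically lands on eu-west-1 instead.
primary = pick_region(["us-east-1", "eu-west-1"], {"us-east-1", "eu-west-1"})
failover = pick_region(["us-east-1", "eu-west-1"], {"eu-west-1"})
```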
Okay, that was Marco Palladino, CTO and co-founder of Kong, the API management platform now supporting dedicated multi-cloud environments, launching first with AWS, with Azure and Google next. Thanks for watching.