Welcome to this special CUBE Conversation. I'm John Furrier, host of theCUBE, here in our Palo Alto studio for a special presentation with Kong Inc. Kong is open sourcing its AI gateway to democratize multi-LLM usage. And I'm here with Marco Palladino, co-founder and CTO of Kong, to talk about the news. And more importantly, AI productization, as every company starts to go into AI and figures out, from a developer and infrastructure standpoint, multiple LLMs are out there, and how do you put them all together? Marco, great to see you. CUBE alumnus, thanks for coming into our studio.

Thanks for having me here.

So great news you guys have here. You're the brains behind it, obviously, as CTO and co-founder, and Augie, your co-founder, was on theCUBE earlier. You guys are doing great, and congratulations on all your success. Developers love you guys. You guys are known for tools. But now as AI comes, you've got more than tools, you've got a lot more things developing around AI. This is a big part of the news. What is the news? What's important?

Well, the news is that Kong is open sourcing a new AI gateway that will help developers be more productive when building AI applications, and will help organizations have visibility into the AI traffic that all the teams and all the applications are generating across the organization.

So the big challenge today we're hearing is: what do I do with AI? How do I set up my infrastructure? Which models? Which ones are for me? How do I do it? How do I productize AI? Every organization will have to do this, that's not even a question. So what are developers doing? What are some of the best practices that need to be put in place to get this going? We're early on, it's embryonic. It's just starting, it's accelerating every day. What do you guys see? And how does this news hit that mark?

Yes, so in the past few months we worked with developers and organizations that started building AI to ship in their products, to build new experiences.
And we noticed that developers keep doing the same things over and over again. And so we thought there could be an opportunity for us to provide modern AI infrastructure to accelerate their productivity as they're building these new AI applications. For example, when a developer wants to use or integrate AI in their applications, they're going to be trying one or more LLMs. They're then going to be building fine-tuned models, perhaps self-hosted, that they're going to be running themselves. And then they're going to be managing credentials, metrics, observability, and logging information. So every developer in the organization who is implementing AI has to go and build all these cross-cutting concerns that are required for the productization of AI. With the AI gateway, what we really wanted to provide is a set of capabilities out of the box for multi-LLM consumption, for security, for credentials, for prompt engineering, that at the same time also gives visibility to the architects and the platform teams into the AI traffic that's being generated, to incentivize responsible usage of AI across the organization.

So I'm going to read the subtitle to the news: the company, Kong, unveils a free, open-source, no-code plugin suite featuring support for multiple LLMs, advanced prompt engineering, and AI analytics for fast, secure, high-performance AI adoption. Obviously, we know you guys have security built in, although you probably didn't want it in the headline. That's essentially the news, okay? So how did you get here? And then let's get into the news and unpack specifically what that means, because I want people to know your background first. Kong, you guys have been doing this, it's not like you just woke up one day and said, we're going to do this. How did you get here? And then we'll get to the news.

Well, Kong is a provider of modern API infrastructure.
So we provide API infrastructure for the cloud, self-hosted, and we provide service mesh products. We do pretty much everything that needs to be done when it comes to the full API lifecycle: developer portals, service catalogs, API catalogs, and so on and so forth. Now, obviously APIs are being driven by the digital use cases that we are developing in the world. A big driver of APIs has been mobile in the past; another driver of APIs has been microservices, as we all know. And of course AI is yet another digital use case that's incentivizing more API adoption. The more AI, the more APIs we have in the world. Why is that? Well, with AI, we can either use AI, train AI, or have AI interact with the real world. And whether we use it, whether we train it, whether AI is interacting with the real world, there's always going to be an API as an entry point to be able to do these types of operations. And so it's a natural extension of the API use case.

And I noticed on your blog post you wrote last month, in January, that you had some stats on how much traffic is API traffic, it's like over 80%.

Over 85% of the world's internet traffic is APIs.

So obviously APIs are happening, they're going to be part of the connective tissue, but I think more importantly, the rise of open source AI, and LLMs and foundation models specifically, brings this democratization of AI technology concept to the table. We're seeing it play out in real time. So this product you guys have is open source plugins as part of the gateway. Explain how it all works, because we know the Gateway 3.6, that's Kong.

That's right.

Okay. And then you've got the open source plugins. What does this announcement mean for the tech community? Explain specifically what's happening with the product.

Yeah. So first and foremost, the AI gateway, to your point, is fully open source. So it's free to use.
Anybody can go and download it or run it in the cloud for free, right? It's not a commercial product offering, it's an open source offering. Kong itself has an ecosystem of plugins that allows you to expand what the product does. And so we have built six plugins that allow us to perform L7 AI operations on top of any type of AI traffic that developers are generating. The six plugins are: AI Proxy, for multi-LLM consumption with one endpoint, which also provides a way for us to store credentials, logging, metrics, and so on. We support six different LLM providers off the bat: OpenAI, Azure AI, Cohere, Anthropic, Mistral, and so on and so forth. And then we have shipped plugins for prompt engineering: plugins to firewall our prompts, plugins to create templates for our prompts, and plugins to set additional context on every request, so that we can centrally manage our prompts without having to update our applications every time a change is needed in our prompts. That's quite important for improving our productivity when it comes to AI. But then there are two plugins that I'm quite excited about, and these are the AI Request and Response Transformer plugins. These plugins allow us to integrate AI into our existing API traffic without having to write any code. Up until now, all the other plugins imply that as a developer I'm building an integration with AI, that I'm writing code to make that happen. But with these no-code AI plugins, we can get the benefit of AI on top of existing API traffic without having to write a single line of code. That is the easiest way to start enriching and augmenting our existing API traffic with AI.

So integration ease of use checks the box.

Correct. I mean, there is a whole set of orchestration that is possible on top of LLMs using these new AI plugins that organizations are looking forward to using. So we work with organizations that are leveraging LLM providers in the cloud.
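To make the multi-LLM consumption idea concrete, a minimal declarative configuration enabling the AI Proxy plugin on a route might look something like the sketch below. This is an illustrative assumption, not the authoritative schema: the field names, model, and placeholder upstream are hypothetical and may differ from the shipped plugin.

```yaml
# Hypothetical Kong declarative config (kong.yml) enabling the AI Proxy
# plugin on a route. Field names and values are illustrative, not authoritative.
_format_version: "3.0"

services:
  - name: ai-service
    url: http://localhost:8000   # placeholder upstream; the AI plugin handles routing
    routes:
      - name: chat-route
        paths:
          - /chat
        plugins:
          - name: ai-proxy
            config:
              route_type: llm/v1/chat        # expose an OpenAI-style chat endpoint
              auth:
                header_name: Authorization   # credential stored at the gateway,
                header_value: Bearer $OPENAI_API_KEY  # not in the application
              model:
                provider: openai             # swap to cohere, anthropic, mistral...
                name: gpt-4
```

The point being made in the conversation is that the application only ever calls `/chat` on the gateway; the provider, model, and credentials live in gateway configuration.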
They're good, but they're quite expensive. Most organizations are going to be building their own self-hosted models. And so they want to essentially orchestrate what goes to a more expensive cloud LLM and what goes to a self-hosted LLM, and be able to create fine-tuned models that they host themselves, because there is better value if they do it themselves.

So the naysayers will be out there: well, okay, you've got developers, but what about no code? Because there's a whole revolution coming for no code, low code. A lot of IT folks aren't slinging code, but they're slinging IT. What's the no-code angle here?

Well, the no-code plugins are quite exciting. I can take any existing API I'm running on Kong and I can augment, enrich, or transform that API traffic, either the request or the response, using any LLM provider that we support. For example, let's say that I'm an organization entering a new country, expanding into a new market, and I need to internationalize or localize my API into a different language, let's say French. I can use one of these no-code AI plugins to say: whenever there is a client making a request from France, automatically translate that response into French, in such a way that we don't have to do it ourselves and we don't have to write an integration or change our applications. Everything is done on the gateway, so by the time the application receives the response, it has already been translated. This is one of many use cases that can be implemented with these no-code plugins.

What's the impact on the tech industry? What's the benefit that comes from this? IT operations is automated, APIs become scalable. For the folks watching out there who are in the industry, what's the impact on the tech side?

Well, look, AI at the end of the day is a tool to do a job, right? It is a tool for a whole series of operations that we can perform to make our applications more intelligent, to make them smarter.
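A rough sketch of what the translation use case just described could look like as a response-transformer configuration follows. The field names, prompt wording, and model choice here are all assumptions for illustration, not the documented plugin schema:

```yaml
# Hypothetical sketch of the no-code translation use case: an AI Response
# Transformer plugin rewriting responses via an LLM at the gateway.
# Field names and the prompt are illustrative assumptions.
plugins:
  - name: ai-response-transformer
    config:
      prompt: >
        If the client's Accept-Language header indicates French,
        translate the JSON response body values into French.
        Otherwise return the body unchanged.
      llm:
        route_type: llm/v1/chat
        model:
          provider: mistral            # any supported provider could back this
          name: mistral-small
        auth:
          header_name: Authorization
          header_value: Bearer $MISTRAL_API_KEY
```

The application serving the API is untouched; the rewrite happens entirely in the gateway's response path, which is what makes this "no code" from the developer's perspective.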
What we are doing is providing the easiest way to bring this type of intelligence inside the applications themselves. So our goal as a technology vendor and technology provider is to make developers more productive, but at the same time also to allow the organization to productize their AI applications. There is a big problem right now where the organization doesn't really know what AI traffic is being generated by its applications. And it's quite problematic. What if we leak customer data? What if we leak sensitive information into an AI model? That could potentially be a catastrophe. And so being able to enforce a lifecycle for AI productization using Kong, using these new AI plugins, allows the organization to stop being blind to this AI traffic and finally get a hold of it. The goal is responsible usage of AI, to bring that intelligence, at the end of the day, to the end users that we're catering to.

Nobody wants to fly blind.

Nobody wants to fly blind. And when it comes to AI, it's even more important. We've already seen incidents where data was leaked into an AI model that shouldn't have been leaked. And so with these plugins, the organization can enforce processes and procedures to make sure that AI adoption is responsible and does not damage customer data.

Yeah, I like the AI Proxy concept you mentioned earlier. I want to get into that for a second, because you're seeing a lot of activity on the RAG side, the Retrieval-Augmented Generation wave. A lot of hot stuff going on there. But it's interesting, you don't get the same response every time. And so your idea of instrumenting observability I think is right on the money. You're going to see a lot more need for measuring things, whether it's results, having memory, what's going on. There's a lot of weird data coming out of these LLMs, because sometimes you're getting different responses and sometimes you don't know what's good. You have to keep track of that.
I mean, this is a whole new infrastructure going on. So, okay, entrepreneurs are going to work on this. Developers are going to work on this. So I see a lot more interaction with that kind of raw traffic, raw data, because people are trying to figure out what's going to be the set piece that they can install or run for their LLMs. And I believe LLMs will work together.

We are already in a multi-LLM world. When it comes to AI adoption in an organization, it's never going to be one LLM. It's going to be more LLMs, whether they're in the cloud, whether they're self-hosted, whether we're trying to orchestrate for cost savings or performance improvements. At the end of the day, it is going to be multi-LLM. Obviously, integrating with multiple LLM providers or technologies is quite a task, right? We don't want our developers to go ahead and reinvent the wheel every time. So with AI Proxy, we have created one interface that we can use that's OpenAI-compatible, and it allows us to use any AI provider that we support without having to change our code. So at the flip of a switch, we can go from one LLM provider to another, and the applications do not have to be updated. Everything can be managed by the control plane of Kong Gateway, which is quite awesome.

Maybe we're going to have to have you guys on our SuperCloud next edition next month. We're going to do a lot more AI. On our Wikibon, and now Cube Research team, we put out a power law last year, got a lot of criticism, but now people are like, okay, we were right on this one, as we normally are on these things: there's a power law of the big proprietary models, and it's ironic they call them proprietary, this will be open, but the proprietary models are at the head, then the long tail is getting fatter around the neck and torso, and then the tail is specialized. So, yeah, we kind of brought this notion of specialized models.
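To make the "one OpenAI-compatible interface" idea concrete, here is a minimal Python sketch of what the application side could look like. The gateway URL and route are hypothetical, and the key point, that the client code never changes when the backing provider is swapped, is an illustration of the claim in the conversation, not Kong's documented client API.

```python
import json
import urllib.request

# Hypothetical gateway endpoint exposed by an AI Proxy route; the application
# speaks the OpenAI chat format regardless of which provider backs it.
GATEWAY_URL = "http://localhost:8000/chat"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at the gateway.

    The provider, concrete model, and credentials are configured at the
    gateway, so this client code is identical whether OpenAI, Mistral,
    or a self-hosted model is behind it.
    """
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Swapping LLM providers is a gateway config change; this code never changes.
req = build_chat_request("Summarize today's orders.")
```

The "flip of a switch" the conversation describes is then a control-plane change at the gateway, invisible to every application built against this one endpoint.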
I know it's crude, I can't see inside their heads, but as it turns out, this is kind of what's happening. The data shows that open source is seeing a lot more adoption, the torso is gaining more relevancy, seeing more data coming in, which means you're going to have specialist models, specialty models, proprietary in the sense of a company having its own proprietary data, not proprietary in the OpenAI sense. But we're seeing this power law of LLMs and foundation models.

Yeah, and as a matter of fact, to add on top of that: one of the things that can be done with the AI gateway that Kong has just announced is to train a specialized model based on my API traffic. The API traffic that every organization generates, that is the interface of the business. To our earlier conversation, APIs are 85% of the internet; APIs are the internet. Everything that the business does today is delivered, at the end of the day, whether it's mobile or a website, through an API. We can train new models using Kong Gateway based on our live API traffic, in such a way that we can then provide better AI applications that can interact with that information. Let me give you an example. Let's say that we have an API for an online marketplace, so users can put things up for sale, they can buy things, they can search for things. All of that is API traffic. We can take that API traffic, feed it into an LLM to fine-tune our model, and then we can create an AI support bot, which in real time is going to be able to determine what the user has done on the website based on that API traffic. If they have a problem with an order, or if they want a refund, it knows in real time exactly what's been happening.

You know, there's so many innovation areas there. I mean, just the idea that API traffic can be harvested and automated for business usage becomes super valuable, and then that changes the business model opportunities.
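The marketplace example above can be sketched as a small data-preparation step: turning captured API events into chat-style fine-tuning records for a support bot. The log fields, the JSONL layout, and the prompts below are illustrative assumptions, not a Kong feature or a specific provider's required format.

```python
import json

# Hypothetical captured API traffic from the marketplace example.
api_traffic = [
    {"user": "u42", "method": "POST", "path": "/orders", "body": {"item": "lamp", "qty": 1}},
    {"user": "u42", "method": "POST", "path": "/refunds", "body": {"order": "o-981"}},
]

def traffic_to_examples(events):
    """Convert raw API events into chat-style training examples."""
    examples = []
    for e in events:
        summary = f"{e['method']} {e['path']} {json.dumps(e['body'])}"
        examples.append({
            "messages": [
                {"role": "system",
                 "content": "You are a support bot with access to the user's recent API activity."},
                {"role": "user",
                 "content": f"Activity for {e['user']}: {summary}. Describe what happened."},
            ]
        })
    return examples

# One JSON object per line, the common shape for fine-tuning datasets.
jsonl = "\n".join(json.dumps(x) for x in traffic_to_examples(api_traffic))
```

A fine-tuned model built on records like these is what would let the support bot answer "what happened to my order?" from live traffic rather than a static knowledge base.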
I think you can see companies have to look at that, which brings up a second question to that point. Okay, I'm a company. I see my API traffic, I'm going to build an LLM, but now I have a friend's company that I'm working with, he's a partner, but we're different companies. He's got data, I've got data, maybe different credentials accessing the system. How do I interact with his LLM? Because this multi-LLM thing, I think, will be the new building blocks of how companies integrate with each other.

I think you are touching on something that's revolutionary. When we think of platforms today, what is a platform? A platform is something that allows us to access data using an API or access a service using an API. Up until now, platforms were either data or services: either I can do something with it, or I can access data from the platform. And there is going to be a third dimension moving forward, and I think that this is going to be the decade of this third dimension, which is models, AI models and LLMs. That is yet another thing that is going to be part of the platform. So what is a platform? A platform can be data, can be services, and can be unique intelligence that only that platform has, that can be consumed through an API by any other provider. Like in your example, I could interface with someone else's LLM and start using it, and how do you do that? Through an API.

This is exactly why, when you have these market transitions, some companies don't make it and some do, because what we're talking about here is a complete transition to a new thing.

Absolutely.

So okay, that brings up history. I'm kind of a historian myself. I look at the history of my career and journey. Every transition builds on the last one. And so okay, we've got Linux, we've got an operating system, we've got distributed computing. AI is different: it's neural networks, it's foundation models. So is there an operating system emerging that's coming?

Yeah, there is.
And it's got to be open.

It's got to be open, yeah.

What's your vision on that? Give us your thoughts on what that's going to look like. Is there a Linux for AI in the future? Not Linux itself, but something needs to operate this.

I think that we will see the Kubernetes of AI emerging at some point. And that Kubernetes for AI is going to be open source. I think open source is, yet again, going to be the driver of massive innovation in the enterprise and in the developer ecosystem. Likewise for containers, open source; likewise for managing events and APIs. I mean, Kong itself comes from an open source background. The open source AI models are going to be the ones defining this decade. And so what we're waiting for is to determine what is going to be the new Kubernetes, the new platform for AI, what open source technology is going to be the one that everybody builds on top of. And I think that part of it is based on the intelligence and the capabilities of these open source models, which, by the way, are catching up, getting smarter and smarter. And the other part is the governance of these open source models. I think that we're waiting for the right intelligence, delivered in the right open source packaging, with the right governance model, to essentially find our Kubernetes for AI.

I think the AI gateway governance piece is huge, because if you have control and compliance over the traffic, AI traffic specifically, then they can use it. And that can change the privacy and ownership challenges, because who owns the traffic? It's kind of a new thing, right? If I own it, is there privacy? So this becomes a whole other level of: who owns the traffic?

I guess there is a whole level of compliance, security, and data protection that needs to be implemented when it comes to AI, because some of this information obviously is sensitive and private.
The point is, we can't even start looking into it if we don't know what AI traffic the applications are generating. So at the end of the day, we have to incentivize developer productivity and agility while at the same time responsibly knowing what prompts they're asking AI, what operations they're asking AI to perform, and what data is being fed into this AI. And this is not a concern for one team. It's across multiple teams, which is why we're working with organizations and enterprises that were telling us: hey, we were thinking of building an AI gateway ourselves, but now that we know that Kong has shipped one, we might as well use Kong, because it turns out they're already using Kong for their API processing. So they upgrade Kong and all of a sudden they have an AI gateway ready to go.

Great stuff. I love it. I think you guys are well positioned. When Augie was in here, I think we were agreeing that it's a whole other wave. It's just beginning. What do you say? We were on the right wave, but we had the wrong surfboard. And now we've got the right surfboard on the right wave.

Look, I think we're just at the onset of everything that AI can do. And part of what AI can do is going to be determined by the AI platforms and tooling that we adopt, which determine how to do things. And we're seeing the ecosystem, I mean, it's very vibrant. There are things coming up every day. And I am personally quite excited about everything that's been happening in the ecosystem. And of course, with the work that we're doing open source, we also want to add another capability into this open source ecosystem, which we think is going to be the defining characteristic of AI in this decade: the fact that it is going to be open source. And by the way, cloud and proprietary models have their place, and there is value in some of the data that they host. But as open source models catch up, we're seeing lots and lots of adoption of open source models versus cloud.
Cheaper to run, you own the data. I mean, it's a no-brainer.

It's a trade-off, right? It's a trade-off.

Okay, final couple of questions before we get into what's going on with Kong for the next steps. The future of AI integration is one; people are thinking a lot about that, safety, you think a lot about that. And then future headroom for developers. And then advice for organizations to effectively deploy it. It's on everyone's plate right now: I've got to get my developers lined up, I don't want them getting in the weeds on infrastructure, I want them to be developers and write code, or in this case, no code. What's the future of AI integration going to look like from your perspective, and what's your advice for organizations?

Well, I think that the most forward-thinking organizations are the ones that are thinking of a playbook for AI adoption in the organization. Many organizations are experimenting with AI, but in order to go to production with AI, there must be a playbook that the organization can follow to reduce the risk of deploying AI in production, while at the same time making sure that developers are productive and not reinventing the wheel. So we're seeing organizations that are starting to build these playbooks, so AI becomes productized like any other product that we build. And therefore there must be a lifecycle for the deployment of new prompts, for the updates of these prompts, for the decommissioning of older prompts. There must be a whole lifecycle that manages that AI adoption. And so organizations that are doing it right are thinking of this from a very pragmatic standpoint: how do we build an AI playbook? And typically, organizations that are building an AI playbook are the ones thinking of this from the platform team standpoint. They are collecting requirements from their application teams, which want to integrate AI into their applications, and so they're collecting those requirements to offer AI as an internal service on the platform.
So every organization in the world obviously has an internal platform, where the customers are the teams that are building things on top of the platform. And part of the platform is going to be AI. AI is going to be this box that is now available for everybody to use, in such a way that the developers don't have to think about how to use it responsibly. They can use it responsibly since day one, because the platform team is in charge of making sure that the service is always safe and reliable.

To close out, give a quick plug for Kong. You did this product with your team. You've got a vision that's right in line with where the market's going. You kind of know where the puck is going, and as they say, skate to where the puck is going and you're there. This API economy is transitioning to AI. Okay, great. You're an inventor, co-founder, developer. You're in the midst of it. What's next? What's next for Kong? What are you guys doing after this?

Well, there are a lot of things we're doing right now. I mean, we just announced that we crossed a very important milestone for the business: we crossed $100 million in ARR. As a founder, I'm in the business of building a long-lasting organization, one that can support the digital transformation and AI transformation that all of our customers are performing. And so for Kong, all I can say is that we are always going to be at the forefront of innovation. We want to be enablers of transformation, and we're always going to be providing innovation that our customers can leverage, in the case of AI with the AI gateway and other tooling, that can allow them to be successful. So it is important that we keep innovating, because our innovation can then be used out of the box by all the customers that need to create the digital world that we live and breathe in these days.
You and your co-founder are exceptional entrepreneurs. You pivoted successfully to Kong from an API product that was ahead of its time. Timing's everything, as they say, but you guys have done a great job. Congratulations. Your advice to other entrepreneurs? Because, you mentioned this earlier and I believe it to be true, we're living in a time that's special for entrepreneurship right now, where the ability to get a product to market is potentially easier with AI here. AI is a generational shift too. A lot of the young entrepreneurs are coming in, and some of the old systems folks are coming in, because it's a new system being developed. So you have a generational shift and a collision of cultures coming together in the entrepreneur and developer circle. There's a huge new game coming. What's your advice for the folks out there on how they should optimize their thinking and their time as they attack some of these problems that AI is going to surface, where there'll be a solution where they could potentially grab a big white space, create a category, disrupt an incumbent? What would be your advice on how to optimize their time and their energy?

Yeah, so speaking with other entrepreneurs, the mutual sentiment on AI is that AI can be as disruptive as the birth of the internet. We're looking at, fundamentally, to your point, a generational shift in technology that's going to enable a whole new world that didn't exist a few years ago. And so the right way to capture this opportunity is to focus on the end-user problems that we're trying to solve. You know, at the end of the day, technology, I'm a big believer, is a means to an end.
And so being able to focus on what that end is going to be, how we can build better experiences for our customers, how we can simplify usage of our products, and how AI can help achieve those goals, focusing on the end goal and then working backwards from there: how does AI fit into that? I think that's the right way to think of not just AI, but any problem that anybody's trying to solve. What is it that we're trying to solve for? I think that sometimes the mistake is made when we focus too much on the technology per se and do not connect the technology with the outcome; then it becomes a lab type of task, or it becomes an experiment, it's not real. So we have to make it real, and the only way to make it real is to connect with the end users and the outcomes that we create for them.

And it gets better every time, it evolves, it grows. Marco, great to have you on, thanks for coming on theCUBE. Appreciate it.

Yeah, fun conversation, thank you.

Marco Palladino is co-founder and CTO of Kong Inc. Kong is open sourcing their AI gateway, democratizing multi-LLM usage and more. This is theCUBE coverage. I'm John Furrier, your host. Thanks for watching.