Welcome back everyone, live here from the show floor, theCUBE's coverage of KubeCon + CloudNativeCon, the CNCF's North America show. We've been to every KubeCon, Europe and North America, since inception, watching the journey of this ecosystem grow: developers, startups, entrepreneurs, open source contributors, big companies, end users, hyperscalers, all adding to the innovation. This year, it's all about next-gen AI and standing up the infrastructure to enable platform engineering. I'm your host, John Furrier, with Rob Strechay. We love platform engineering, Rob. We've got two great guests to break it down for us: Haseeb Budhani, co-founder and CEO of Rafay Systems, CUBE alumni, and Vamshi Jiralla, principal architect of cloud and engineering platforms with American Airlines. Great to have you on again. Thanks for coming on theCUBE. What a show. So, Kubernetes is getting boring, as they've been saying, but it's actually now out there, people are running it. You've done a lot of Kubernetes over the years, seven years? Yeah, it's been seven years I've been working with Kubernetes. Initially I started with Docker, and later Kubernetes adoption took off. Yeah, it's been seven years. Yeah, it's been seven years. What does that translate into? Into cloud-native years, 70 years. Seven times seven, we're all in dog years. That's a huge, by the way, that's a big journey. Just curious, where's the progress now? Talk about where we are, because I think we've seen the first wave of how do I stand it up, how do I manage the clusters, get it operational, and then once that happens, the next wave of things happen that you're enabling, because you're standing up workloads. What's the progression, where are we? I would say that there's a small class, I don't know what percent, but five to 10% of people who've been doing this a long time, and Vamshi's one of them. He's been doing this for seven years, right?
So that class of people are now at a point where, well, they've seen it all. So they know exactly what works, what doesn't work, and they have a lot of opinions around, you know, best practices around platform engineering. There's a second class of people who maybe started two or three years ago, when Kubernetes became sort of interesting, and those are now getting to a point where they're going, oh my God, this is really, really hard, what do we have to do? And look, last year when we talked about this, we talked about this concept of platform engineering, which has now become a thing, right? It's become a thing, and what I'm really happy about is the word that you just used, that Kubernetes has become boring. I think that's the most important thing, because boring, I mean, we kind of joke about this, but boring really means it's becoming a standard. Standards are boring, right? But it has become a standard. Everybody understands that they have to build a platform engineering program to start with Kubernetes, and look, selfishly, that's very good for business. It's kind of like Linux, we don't really talk about Linux as the most innovative thing anymore, because it's already done its thing, and it's enabling a lot more value. I love that analogy, I think that's cool. I've got to ask you on the scar tissue side, because, you know, when you go through these early waves, now, we saw Kubernetes early, as you know, we were on the same page here, it kind of orchestrates, which is standard. The enablement is key, the disruptive enablement, but the scar tissue from the early days, Vamshi, take us through, what were the learnings early on? What did you get through? Because now you've got developers who just want to code. Yes, definitely. So when we started initially learning and adopting Kubernetes, right?
At that point in time, it was just bringing up a cluster, deploying your application, and testing out what Kubernetes is. Then developers started using it as a front-end platform, and we got a wide variety of new applications that teams wanted to bring in and test, right? When that increased a lot, now we're scaling the clusters along with the number of applications coming in. So when the clusters are scaling, the front end needs to be robust for the developers. They do not care what my platform is, what my server is, where we are running. So the developer platform has kicked off, where developers don't have to care about Kubernetes. That's where the platform engineer comes in, where they do the heavy lifting, so developers don't have to deal with the complexity of Kubernetes. So that's where the focus on developer experience has increased, and keeping up with the platform engineering, the challenges definitely are scalability and monitoring, especially the monitoring and observability piece, right? When we are scaling the platform, with hundreds of clusters that we are running, observability has been the biggest challenge, and the other big challenge was security. Right, a wide range of applications, so securing them is the biggest challenge. Yes, I mean, that makes sense, and we've been hearing that all day long. It was one of the main themes this morning in the keynote, security, and I think we've had numerous guests on throughout the day today talking about it, especially when you think about how security is not just about the technology, it's the social engineering aspects, and how you have zero trust built in and things of that nature. How are you taking steps to address these challenges, I guess? So again, security happens at different layers.
On the front end, when we come in with enterprise tools, things beyond the Kubernetes ecosystem have to be built around it. With respect to the front-end web apps, the web application firewalls, the WAFs, have to be deployed, and for core Kubernetes zero trust, when we talk about zero trust, we need to tap into image security, the build pipeline security, and also the runtime scans. So the security scanning has to go from where it starts to where it runs. It has to go through all the layers, and it has to be at all the layers that need to be secured. And that's a big challenge, going into different types of tools again, right? There's not even a single tool that does it all. You know, your customers are the developers. You've got to spin up an environment for them so they can do their job. Data scientists, and now you're going to have AI developers, which probably will be everybody. Yes, and today I was looking into broader use cases of edge AI and ML. Edge adoption, about three years ago, was maybe 10 or 20%. Now it's like 30 to 40%. Edge has increased its adoption too. So now when we're tapping into the edge devices, or IoT devices, security going into that layer is again the biggest thing that needs to be solved. So with your, I mean, you're an airline, right? You're very distributed, all over the place. Do you have Kubernetes at the edge in all of these different locations? Or how are you actually using Kubernetes? So we are actually using Kubernetes for different platforms. Our baggage platform is one of the platforms that runs on Kubernetes clusters. And we have our centralized developer platform on the front end, which is on Backstage. That is also on the Kubernetes platform. And there are a lot of internal microservices and internal applications that are hosted across multiple clusters.
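The scan-at-every-layer idea Vamshi describes (image, build pipeline, runtime) can be sketched as a simple gating pipeline. Everything here is a hedged illustration: the stage names, example checks, and severity threshold are invented for the sketch and don't correspond to any specific scanner's API.

```python
# Illustrative sketch of security scanning across layers, as described
# above: image scan -> build-pipeline scan -> runtime scan. The checks,
# finding names, and policy threshold are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Finding:
    layer: str       # "image", "pipeline", or "runtime"
    severity: str    # "low", "medium", "high", "critical"
    detail: str

@dataclass
class ScanReport:
    findings: list = field(default_factory=list)

    def add(self, layer, severity, detail):
        self.findings.append(Finding(layer, severity, detail))

    def passes(self, block_at=("high", "critical")):
        # Block the deploy if ANY layer reports a severe finding.
        return not any(f.severity in block_at for f in self.findings)

def scan_all_layers(image, pipeline, workload):
    report = ScanReport()
    # Image layer: e.g. flag an end-of-life base image (hypothetical check).
    if image.get("base") in {"ubuntu:14.04"}:
        report.add("image", "high", "end-of-life base image")
    # Build pipeline layer: e.g. require signed artifacts.
    if not pipeline.get("signed", False):
        report.add("pipeline", "medium", "artifact not signed")
    # Runtime layer: e.g. disallow privileged containers.
    if workload.get("privileged", False):
        report.add("runtime", "critical", "privileged container")
    return report

report = scan_all_layers(
    image={"base": "ubuntu:22.04"},
    pipeline={"signed": True},
    workload={"privileged": False},
)
print(report.passes())  # True: no high/critical findings at any layer
```

The point of the shape, not the checks: one report aggregates findings from every layer, so "not even a single tool" still rolls up into a single pass/fail decision.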
Again, our adoption of Kubernetes is both clusters as a service, where we give each team its own independent cluster, and also namespaces as a service, where a single cluster is spanned across multiple use cases too. Haseeb, you've got to love having American Airlines as a customer there, because they've got the scale, they've got the technical chops, and the environment is just going to grow. So it's like, this is kind of like the moment when we first met, we kind of saw the Kubernetes vision, and it's playing out, it's called platform engineering. And Kubernetes is invisible, it's under the covers. Yeah, it was always an enabler. And so the first time John and I spoke about this, I told him that very soon we won't have a show called KubeCon, because it doesn't make any sense. It should be hidden. And of course we still have it, but soon enough, right? We'll call it something else. Maybe we'll call it Developer Portal Con, I don't know. Well, I mean, CloudNativeCon to me makes total sense. Maybe that's where we're going, but look, this was always the goal, right? And you always talk about Linux as the analogy. These things, once they become well understood and standardized, and to use the word environments, right? Once we understand what an environment actually looks like, they become commoditized to the point that they're just no longer important to the developer. I'm looking forward to that, because again, good for business, but this is the right progression for any new technology. It's a 10-year cycle, right? And after 10 years, it doesn't matter anymore, because it's part of the fabric, and let's go work on the next set of problems. I think we are pretty close to that. Two years from now, we're there. But yeah, sorry. Go ahead. As you said, right? Previously, the technology lifecycle was maybe about 10 years, but now it's rapid.
It's like five years, every five years there's net new technology, and it's a very exponential, rapid pace that we are moving at. It's interesting, this month I shared on social: fifty years ago is when the first TCP/IP memo went out from Vint Cerf and Bob Kahn, pre-web, and that became the internet. And that moment opened up the interconnection standardization, which Kubernetes has done for orchestration, but it will go like Linux. It'll be invisible, to the point where it's just there and everyone loves it, it works, it does its job, and the things around it just work. So the next question is, okay, what's that next level of abstraction? We were just riffing about compute being widely available at the edge, semantic layers. If you take AI going out further, the infrastructure has to be large-scale, programmable, stand it up, tear it down, at will, machines doing it. It's very much an automated kind of playbook, best-practices kind of thing. This is the modern IT. Indeed, and funnily enough, because you said the word environments, we launched a product today called Environment Manager. Because we've seen this all along: when a developer says, I need compute to do my job, they don't actually mean Kubernetes. They might mean Kubernetes, but they mean Kubernetes plus maybe a namespace, I need a pipeline, I need that S3 bucket because I'm going to have my data there. Oh, please bring up a model for me so I can test Bedrock versus, I don't know, Llama 2. This is an environment. So what does it mean to templatize environments so that IT can build them on the fly and make them available, again, as environment templates? This is something, John, we've been working on for a long time. Right, like we started as a Kubernetes company knowing full well that we would not be a Kubernetes company eventually. It has to be a higher-level abstraction. We launched like a 1.0 of that today. We have customers who've been testing it for a while.
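The templatized-environment idea just described, a namespace, a pipeline, a bucket, a model endpoint bundled into one requestable unit, can be sketched roughly like this. To be clear, this is not Rafay's actual Environment Manager API; the template contents, resource names, and `provision` function are all invented for illustration.

```python
# Rough sketch of environment templates: IT defines a template once,
# a developer requests it by name, and the platform expands it into
# concrete per-user resources. All names here are hypothetical.
import copy

TEMPLATES = {
    "ai-workbench": {
        "namespace": "{user}-workbench",    # Kubernetes namespace
        "pipeline": "ci-{user}",            # CI pipeline
        "bucket": "s3://{user}-data",       # data bucket for the workload
        "model_endpoint": "llama2-test",    # e.g. to compare against Bedrock
    },
}

def provision(template_name, user):
    """Expand a named template into concrete resources for one developer."""
    template = copy.deepcopy(TEMPLATES[template_name])
    # Substitute the requesting user into every parameterized resource name.
    return {k: v.format(user=user) for k, v in template.items()}

env = provision("ai-workbench", user="alice")
print(env["namespace"])  # alice-workbench
```

The "one button" is then just `provision("ai-workbench", user=...)`: the developer names the outcome, and the expansion into namespaces, pipelines, and buckets stays the platform team's concern.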
It's phenomenal. Truly, the vision always was, what is that one button a developer presses that says, give me an AI workbench? By the way, today you can try this out with our product. Give me an AI workbench. It's going to build you all the 70 tools that you need to actually do your testing. It's going to build it for you. It's awesome. It's like a 3D printer. Just give me my IT environment. Yeah, give me a call. Yes, that abstraction is at the next level now, as we've discussed, right? So previously the abstraction was about clusters. Now the abstraction is at a layer where I want to test end to end, with everything, with a single click of a button. I need my whole platform. Whatever I need, all the bells and whistles, it needs to come up. I've got to bring up some of the problems that you saw. I saw some of the notes we were preparing for this about guardrails and challenges, but you bring up this instant provisioning concept of just standing things up. When you have platform engineering teams that don't have the best practices nailed down, a lot of the faults can come in. For example, the big Cloudflare snafu last week was because one of their mission-critical modules didn't go through the best practices. They were shipping so much code, they were too agile. So you start to get into the mindset of going a little slower, but making sure, because if you're going to be scaling with automation and reasoning and generative AI-like techniques in the future, which is what we're basically saying, you've got to get stuff into a best-practices playbook of some sort, otherwise it could break the whole thing. What's your opinion on that? Definitely. So I mean, my analogy for platform engineering is building a city, right? You're planning for a city.
So when you're planning a city, as a platform engineer or an architect, you need to have proper planning and guardrails around where to build and what to build, what's allowed and what's not allowed. At the same time, you need to have limits and limitations. You need to have best practices baked into your pipelines, your code, everything that you run. Otherwise, definitely, when you scale, we'll all see a lot of things that we don't want to. So definitely we need to tap into those. That highlights what we've been saying, Rob, on theCUBE, about the systems mindset for this role. This is not your average developer. This is a unique position, a persona; data engineering falls in the same category. If you don't know what a city looks like, you can't plan for one. Yes, you can envision it, but most people cannot envision what that city's going to look like. I'll tell you the most interesting problem we've seen in the industry. And I mean, as you know, our sale is, you know, considered top down. So we talk to CTOs, CIOs. And their biggest issue is, even in this economy, they have all these slots open for platform engineers, and they can't find them. Because this is a very, very tough skill to acquire. And by the way, that applies across multiple companies. And the skills gap is not the only issue. It's also the attitude of, I don't care what happened in the past. Absolutely. And AI takes advantage of legacy opportunities, right? Yes, you just took the words out of my mouth. The skills gap, the skills gap, definitely. Yeah, and I think also it's not just the skills gap. It's so many different skills, if you're not using automation, if you're not using something that aids in that plan. Because platform engineering is the new IT, and you have some aspects of DevOps in there, you have a lot of IT ops, which has kind of been subsumed in there. And it's kind of that mishmash of layers of different technologies you need to understand.
And then you bring in multiple hypervisors, multiple flavors of Kubernetes. It's not straightforward. Yes, definitely. And I'm sure this is how you're approaching it. Yeah, definitely. Because, again, platform engineering, when you take it on, it's not just one thing, as you said, right? It may be multicloud, it may be hybrid, or it may be spanning across your data centers, anywhere. The complexity is such that, unless and until you go through the journey, the path, it's not one kind of distribution, it's not one kind of place you are in. There are a lot of technologies, a lot of tools, a lot of places you are in. That much complexity needs to be understood. You can't buy general-purpose software anymore, or any kind of general-purpose answers or flavors. You've got to really engineer the system, architect it, like you said, like the city. And I think this is where, I guess, to kind of wrap up, the final question on everyone's mind is, okay, platform engineering: DevOps to platform engineering, and now generative AI. You've got the progression: DevOps introduced infrastructure as code, platform engineering kind of modernized it, kind of mainstreamed it a bit. That's like a departmental thing, the hope of simplifying it. And then with generative AI, what's next, guys? Where do the dots connect for generative AI? Because as you said, push a button, there's magic going on behind the curtain. So there's got to be some sort of generative AI and data involved. What do you guys see generative AI and platform engineering connecting on? What clicks out of the gate? Where does it scale?
Okay, so with generative AI, where it goes for us, my sense would be no-code or low-code automation platforms, where everything gets abstracted into automations, where anybody who is using the platform will not worry about code, will not worry about servers. They will be very close, at a very granular level, to the functions for the enterprise, where they don't worry about what server you're running on, what infrastructure you're running on. The AI turns that dotted line into a thick line, where whatever you need, at whatever footprint, you just run it and it spins it up. Everybody in this industry, across the world now, everybody's got a copilot, right? So I guess we've got that one too. So we built one, just to see what happens, right? The initial path we took was, we said, look, we have some pretty good documentation, let's just train this on our documentation. That's pretty cool, that's kind of nice. But in the process, as we showed it to people, what we figured out is, it's kind of nice, but the real problem is not that. The real problem is, like when you drive your car, drive a Tesla for example, right? How many API calls are being made in the back to make every decision, right, at every speed? Okay, I am a very distracted driver, so I love my Tesla, it always catches me. By the way, I turn on Autopilot instantly, right? The minute I can, I do, because I'm a distracted driver. But how many API calls are being made? If all these calls went to OpenAI, right? Or ChatGPT, man, that's a lot of money. So we had to think about that. So these are the problems we're thinking about right now in our company. How do we cache these calls? But I'll tell you the most important learning we've had right now. We've been able to build a model where, when we see somebody work on a platform, the copilot is able to guess what they're going to do. It says, I think you're going to do this. And if you're going to take this path, here's how.
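The caching problem raised here, every autopilot-style decision turning into a billed model call, is at its core memoization. Here's a minimal sketch, assuming repeated prompts can safely reuse an earlier answer; `fake_model_call` is a stand-in for whatever provider API you'd actually call, not a real client library.

```python
# Minimal sketch of caching model calls so that repeated prompts don't
# each hit a paid API. fake_model_call stands in for a real provider call.
import functools
import hashlib

CALLS = {"count": 0}  # how many times the "paid" API was actually hit

def fake_model_call(prompt):
    # Stand-in for an expensive, billed inference request.
    CALLS["count"] += 1
    return f"answer-{hashlib.sha256(prompt.encode()).hexdigest()[:8]}"

@functools.lru_cache(maxsize=1024)
def cached_call(prompt):
    # Identical prompts are served from the in-memory cache,
    # so only the first occurrence reaches the provider.
    return fake_model_call(prompt)

a = cached_call("is the lane clear?")
b = cached_call("is the lane clear?")
print(a == b, CALLS["count"])  # True 1 -- second call never hit the API
```

Real systems would go further (TTLs, semantic similarity rather than exact-match keys, shared caches across users), but the cost argument in the conversation reduces to exactly this: don't pay twice for the same question.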
So how do we literally build an auto-complete for infrastructure? That's not my phrase, by the way; it's our head of product's idea. But it's a pretty cool idea, right? How do we do an auto-complete for infrastructure? Right, and then what happens is exactly what Vamshi just said, right? If you have a skills gap, solve it once and for all. Anybody should be able to do it. You don't need specialized skills. I think that's the right way to think about it, right? Haseeb, Vamshi, thanks so much for coming on theCUBE and sharing. We've got to leave it there. Thanks for coming on. Congratulations on your seven-year run and more. And you've got a great partner in Rafay Systems. Great to see you, and you're right, Kubernetes is going to go into the background, and welcome to the distributed computing, instant infrastructure world of the future. Thank you very much, guys. All right, we'll be back with more coverage. We'll unpack this later, believe me, these guys aren't going anywhere. We're going to get them back in another day or so on theCUBE, we'll have them back soon. We'll be right back with our wrap-up of day one after this short break.