Welcome to Amsterdam and KubeCon + CloudNativeCon 2023. Join John Furrier, Savannah Peterson, and Rob Strechay as theCUBE covers the largest conference on Kubernetes, cloud native, and open source technologies, together with developers, engineers, and IT leaders from around the globe. Live coverage of KubeCon + CloudNativeCon 2023 is made possible by the support of Red Hat, the CNCF, and its ecosystem partners. Hello, and welcome back to theCUBE's live coverage here in Amsterdam for KubeCon + CloudNativeCon EU 2023. I'm John Furrier, host of theCUBE. Got a great AI panel here. We're going to try to break down what's going on in DevSecOps, DevOps, cloud native. AI is the hottest trend on the planet. Of course, machine learning has been around for a while; most companies have been implementing machine learning in data centers, the cloud, and pretty much everywhere else. So AI is the hottest trend. We've got a great lineup here. Louis Ryan, CTO of Solo.io, is here, along with Brian Gracely, head of marketing at Solo.io, and Saad Malik, co-founder and CTO of Spectro Cloud. Gentlemen, thanks for joining me today. Appreciate it. Thank you for having us here. So the number one hallway talk is kind of two things. VMs moving to cloud native, that's been going on for a while, old technology coming into cloud native, check, check, check. Of course, developer productivity. But AI, ChatGPT. I've heard a startup tell me their VCs just want to know what their ChatGPT strategy is, whatever that means. So AI is hot, and everyone has a hot take, but there are some real impacts, and we're going to try to explore that. So you guys have the service mesh side; there's a lot going on. Cost optimization, cloud. A lot of things happening in the plumbing, the middle layers, the services. So I think it's ripe for automation, and that smells like AI to me. So let's get into it. Before we start, give a little background on Solo.io and what you guys do. We'll start with you.
Sure, so Solo.io is the leading cloud native application networking platform. We've been on before, love talking to you guys, we're really excited. Louis Ryan, who's one of the creators of Istio, just joined us as our CTO. So a lot of exciting things going on. We just announced a multi-cloud solution this week, Monday, at our application networking day. So this show is huge, it's hot, and we've got a lot going on too. Give a quick plug on Istio. Where did it start? What does it mean? What's it turned into? So Istio is a service mesh technology that was really born out of trying to help enterprises improve their posture with security, observability, networking, and controls. It was founded on the basic principles of zero trust, baking that in from the very beginning, and then building on top of that to help enterprises achieve a better security posture than what they were used to with typical boundary controls, edge networks, and solutions like that. A lot of services. Spectro Cloud, what do you guys do? So we're a modern Kubernetes management platform. We help our customers deploy and manage Kubernetes at scale, whether it's in public clouds, private data centers, or edge environments. We take a declarative approach to managing everything from your operating system to your Kubernetes and all the integrations, which would include service mesh and the layers above that. And so obviously the big trend with cloud is basically distributed computing. Edge is hot, the role of data is huge, but we were just talking on another panel about the developer angle on that: developers don't decide where the data's stored. Someone else does, a database person, an infrastructure person. So if data becomes important, how does AI get implemented? Who implements AI? The developers, the infrastructure teams, or both? How do you guys see AI coming in? I mean, not that everyone has a clear answer right now, but you can almost connect some dots.
Where do you guys see AI fitting in? And when I say AI, I mean generative AI, the foundation models that are hot right now. We also see large language models, which is data as language, but there's also multimodal: computer vision, data from machines, a ton of IoT data. So data generally is the piece. What do you guys think? I mean, for us, and this is the interesting thing about this moment in time: up until now, data, like you said, was sort of the domain of the groups that manage data. It could have been monitoring tools and APM tools and storage companies and so forth, and, as you said, as a developer you were kind of hands-off from it; you just go and access it. Now AI has sort of put that power back in everybody's hands. So all of a sudden you have all these things that maybe you didn't think about before. Just simple things, right? I can now plug AI into my documentation. So if I'm the person administering a service mesh, for example, I've essentially got an expert sitting on my shoulder all the time that I didn't necessarily have before. Or if I want to look at logs and ask, what can I do with that? I used to have to go to an expensive tool like Datadog or something else. Now I can potentially leverage those sorts of things myself. So the data is still its own domain, but I think the ability to give people self-service, or more access to it themselves, becomes really interesting. And I think we're right at the tip of the iceberg on that. What do you think? I agree with them for sure. One aspect: before, data used to be a lot more structured. You'd have to define, hey, this is what I need to send up in terms of my logs or my metrics or my information.
Now with AI technologies, you're able to look at any unstructured data and actually draw meaningful patterns out of it, whether it's traffic patterns or usage, and make recommendations and correlations from that. It becomes much more useful, even going back in time and reading all that historical data back into your platforms. Louis, when Istio came on the scene, one thing I was intrigued by was all these services being stood up and torn down on the fly, which, okay, makes sense. A lot of things are happening. Logging was important, but now observability is a whole category. With AI, you see a lot of code being generated from the ChatGPT kind of trend, the foundation models. So you see coding, which we're calling code pollution potentially: if bad code comes in, who's going to watch the code? It's going to change observability. So I see that as a factor. It's also just automation and efficiency. Something's going to break before it gets fixed. Maybe that's my opinion, I don't know, but what do you think? What's your take on all this? So I think you're going to see a lot of focus on some really high-value areas within the enterprise. There are obviously things that break and have no real consequences, and then there are things that break and have vastly dangerous consequences for an enterprise and its business. So I think you'll see a large focus on pulling in all these diverse signals and producing assessments about security from them, because it's all this unstructured data. But there are aggregate signals in there about what's actually going on inside your infrastructure, your network traffic, the traffic you have with partners and all these other different agents in the system. And so anything that helps you synthesize all those signals quickly and respond to them quickly is going to be immensely valuable, right?
Because we live in an age of ongoing threats, and we've seen the damage that these threats can do. On the security side, it's massive. So where does AI land first? I mean, innovation can be messy; it sort of meanders around a little bit. I see that kind of happening, but it's got to land somewhere where it starts hitting practical use cases, the low-hanging fruit. Where's the low-hanging fruit for you guys? So we've already started to see basic operational tools: read all the Kubernetes docs, do all the tailoring, the training, the prompt engineering, so you have a Kubernetes SRE sitting on your shoulder, talking in your ear. Hopefully somebody will build one of those at some point. So I think you'll see organizing institutional knowledge and making it available, whether as an external tool or inside the enterprise. For a company like us, for a field engineering team, where we have our own internal best practices, our own internal engineering, enabling them to be more effective in customer engagements and time to solution, I think you're going to see that be quite valuable. Yeah, Brian, last time in Detroit you talked about workflows, how you guys engineer success for your customers. At some point you can step away and maybe make that automated, or cost recovery, cost maintenance, cost optimization. These are kind of institutional data points. They seem like the easy ones. Yeah, I mean, stuff like, we run a Slack channel for every single customer that comes on board. We'll have customers that are international, but maybe the account team is in San Francisco, right? They don't want to be up at two in the morning. We can drop a Slack bot in there; it can figure out sentiment: is this customer upset? Do I need to wake somebody up?
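The escalation flow Brian describes can be sketched in a few lines. This is a hypothetical illustration, not Solo.io's actual bot: a real deployment would call an LLM or a trained sentiment model on the channel messages, so the keyword scorer below is just a self-contained stand-in for that classifier, and the function names are made up for this example.

```python
# Hypothetical sketch of a Slack-channel escalation triage.
# A keyword score stands in for a real sentiment model / LLM call.

NEGATIVE_TERMS = {"outage", "broken", "frustrated", "escalate", "down", "urgent"}
POSITIVE_TERMS = {"thanks", "great", "resolved", "working", "awesome"}

def sentiment_score(message: str) -> int:
    """Crude stand-in for a model: negative terms subtract, positive terms add."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & POSITIVE_TERMS) - len(words & NEGATIVE_TERMS)

def should_page_oncall(messages: list[str], threshold: int = -2) -> bool:
    """Wake somebody up only if the channel's recent tone is clearly negative."""
    total = sum(sentiment_score(m) for m in messages)
    return total <= threshold

recent = [
    "Production is down again and we're frustrated.",
    "This is urgent, please escalate.",
]
print(should_page_oncall(recent))  # clearly negative channel -> True
```

The interesting design point is the threshold: you page a human only past a confidence bar, which is exactly the human-in-the-loop balance the panel keeps returning to.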
Yeah, I mean, there's a ton of low-hanging fruit. I guarantee somebody's going to walk around, look at all these booths, take the messaging from them, and go figure out how to differentiate their own marketing in this crazy crowded environment. So... Someone might even write a summary blog post of all the conversations we're having on theCUBE. What's your take on that? I think it's interesting. I think the cool thing about AI and large language models is being able to take all the public domain knowledge about the different parameters, what sentiment is, what the actual subject is about, and then apply maybe the last 20% with your own institutional knowledge: what's in your Slack channel, what's in your support system and other things, making it your own. The public model knows how to read sentiment in general, whether somebody's liking something, but specifically for Istio or Google or whatever, you teach it what that sentiment means for this customer. Yeah, and there's no doubt there are going to be more services, more activity. The question I want to ask you guys next is about the papers coming out, which by the way have been incredible. I've read more academic papers in the past three months than I have in the past 30 years, okay? And they're pretty compelling. One I was just reading last night, and it shows how nerdy I am, I was actually at the bar reading these, was on prompt tuning. Now remember, prompt engineering is the buzzword of the year. Prompt engineering is ChatGPT: you give it a prompt, it spits out an answer, like a query, feeding it data. But tuning was more about when you're not around. Tuning it, that sounds a lot like self-healing networks or concepts like that. Is that where it goes? I mean, where's the data for that?
So I guess it's a data challenge: the domain knowledge you have that you can lock in on, and then whatever data comes in. Machines throw off data. How do you guys see that tuning piece? I'm sure it's big for Istio and service mesh, and on your side too. What's your take? Right, so you're trying to establish a corpus of information and then direct a model to focus on solving the problems in that domain. Prompt engineering is really a constraint on how the interaction is going to occur between you and the model. Tuning is a bit more about how do I get it to focus more generally on a specific domain of expertise. And obviously ChatGPT and these models have been trained on corpora that are stale at some point, because our industry moves extremely quickly, and they're training on internet data from two years ago, or 12 months ago, or 18 months ago. And so to keep up with product velocity, you need to be able to do these types of things to keep up with your institutional knowledge and the iteration in the products, and be valuable in that domain. And I think that's where you're going to see a lot of that. Isn't that what we're already doing? Kinda, or no? I mean, it's not exactly training, but this data collection... Well, to your point, the prompt tuning is really just experimentation, rapid experimentation. We already do that with applications today, right? We do A/B testing and so forth. We may just start seeing people do hundreds of those deploys at once, and then have something else tell you which small fractions here and there performed better. And for big industries, whether it's ad serving or video streaming or whatever it is, those will be either very expensive or very cost-effective ways of doing stuff. So you are collecting the data already?
We see it. One of the most interesting things I saw this week was that the number one or number two workload that runs on Kubernetes is telemetry, right? As much as I want to deploy an application, I want to know what's going on with it as it runs. So yeah, we've had the data. It's just that we're still applying a person to it. How much can we start to apply smarter decision-making around it? What are you seeing out there? On the infrastructure side, it's the same aspect, right? Being able to make more intelligent decisions without having the operator in the picture. Before, it was about, hey, I want to place my workloads and my infrastructure across different clouds, different environments. You generally set policies, you look at them, okay, let me go tweak the policies, my auto-scaling groups, et cetera. I think now with AI technologies, it's more about, hey, how do we automate that? Maybe the first couple of times you still need to hint it a little bit, but over time, as it gets retrained, as it gets better at it, less and less human intervention is needed to make it possible. I mean, that's the theme of cloud that I like; you brought that up. It's the human-in-the-loop, the human aspect of it. All the managed Kubernetes services we're seeing people use offload what the company doesn't want to deal with. They want to write code or do platform stuff, and Kubernetes should just be running lights-out, like a utility. That's the undifferentiated heavy lifting. Where does AI go next for DevOps? What do you guys see happening? I guess the question I keep coming back to is that I can't see the landing spot for AI in DevOps. I mean, even to Saad's point, right? We have this thing with DevOps where the two groups kind of work together, but there's always this: I just want to write code. For example, I'm a developer. I just want to write code.
And it's like, well, we can help you deploy that, but you've got to know these three extra steps, so you're going to drift into my lane a little bit. If we can start reducing those two or three steps, right? The system gives me feedback on which cloud I should deploy this to, where this is going to be most cost-effective. Even with things like that, the system's still fast, the teams are still working together, but I don't have to get out of my lane. I can focus on what I want to focus on. That's the biggest feedback you're getting as these AIs become more personalized: my work doesn't go away, I just get rid of a lot of the stuff that I didn't want to think about before. I think that's where it's coming together here. It's like Bing: Bing's a search engine, and they have ChatGPT to help you search and find stuff. What you're getting at is that you don't want to have these hallucinations in the infrastructure. You want known augmentation of known practices. As you said. Yeah, so there are obviously AI models that are very targeted. You've seen AI used in anomaly detection, right? You'll see AI used in scheduling. And I think starting to use AI to set a higher-level objective for what you want to occur, for either a specific application or your platform as a whole, almost at the executive level, and watching the system work towards that goal, is actually going to be quite compelling for people. We might be a few years out from seeing that in action, and there's a bunch of forces that are going to work for and against that, I think, but over time you'll see those higher levels. It's like dynamic policy, almost. Well, the other thing you're going to see, and this is the same problem we always have in enterprise tech, is, well, where does the model live?
Am I reaching back to some OpenAI thing, or do I have to build a model and keep it localized? How do you tune that? So there's going to be a whole governance and locality question that we're going to have to work through. Yeah, democratization is going to be a big deal, right? I mean, obviously we're talking a lot about ChatGPT and, you know, single-vendor-ness. Obviously this industry as a whole has always fought against that; it's kind of in its DNA. And so I think you'll see a lot more democratization. Are you sure democratization is the right thing? Sometimes you don't want to have democracy. That tells us something. You still want it to work. Well, okay, final question for you guys. As you look at your jobs at your respective companies, when asked, what's your AI strategy, how would you respond? To be honest, it's going to be looking at all the different aspects, right? How do we help users and customers operate at scale? And it's, again, coming back to not having them perform manual operations, having them do less and less. For us, it's about providing a platform where decisions, not only about the workloads but the infrastructure requirements, everything, are automated for them as much as possible. So we're obviously going to go heavy into, it's not going to be ChatGPT, but some aspects of that, being able to help users with that. And, you know... AI strategy. I'll give you what we do. We are particularly security-focused, where we want to make sure that customers are able to achieve a strong security posture. So we will absolutely be looking at incorporating AI to help people observe and detect threats and improve their security posture in terms of how they operationalize all of their applications. Brian? I think it's going to be, as Louis said, security at the center of it.
I think the other two pillars are going to be automation efficiency, so cost efficiency, and then user experience. There's going to be a bunch of things we can do just to make the experience better. Awesome. I should note, Brian Gracely is also the podcaster from The Cloudcast. Final question for you. What's the hot podcast topic these days? What's going to be on the agenda post-KubeCon? What's going to be the headline? We'll be covering a ton of stuff about it; there are so many good topics here. We'll be talking about you and so on. Take your Solo.io hat off, put your podcaster hat on. What's going to be the headline from KubeCon this year? Well, I think the biggest thing is, we were a little worried that the KubeCon community might be going through a lull. It's exploding. So that's going to be the headline: if Europe is growing like this, the U.S. is going to be growing too. So this one's exciting, and Chicago should be exciting as well. Final question for Louis on service mesh. Where are we? What's the current state of the market for service mesh? So service mesh has become a mature technology. It's pretty widely accepted in the enterprise now as something people need to solve a lot of their real problems. We have a new initiative in Istio called Ambient Mesh. It really helps you scale out and get the value of the service mesh faster. And so we're excited to see that go to production readiness as part of Istio, and obviously what we do at Solo is helping customers get their hands on it. Congratulations. And on the Kubernetes side, what's the platform looking like for you guys? What's next? The most interesting initiative we're seeing is a switch back towards bare metal, being able to run workloads more efficiently, because now more and more applications are running containerized. Some of them have regulatory reasons, or security reasons.
They want to be able to run things on-prem, so we're seeing a big push with bare metal, Kubernetes, and edge. So, awesome. Guys, thanks so much for the conversation, trying to break down the AI equation in DevOps. Obviously data, a lot of machine learning, still low-hanging fruit. It's probably going to be a multi-year journey to figure out what's going to happen. Of course, we'll have it here, covered on theCUBE. I'm John Furrier. Thanks for watching. I'll be back with more after this short break.