Welcome back everyone to SuperCloud 6 Live in Palo Alto. I'm John Furrier with Dave Vellante and Rob Strechay here to close out our SuperCloud 6 AI Innovators Live program. What's going to follow is our Ecosystem Speaks portion, where we solicit input from all the stakeholders and leaders in our industry. They are AI innovators, as well as many more who will be coming on as an addendum on September 19th. There's so much demand for AI innovation that we're going to have another segment on the 19th, and we'll plug that into the site. So always check back to the site. We've got some really big names on that one. Dave, Rob, great to see you. Rob, welcome to California. Oh, thank you. And I know you did a lot of the interviews on the Ecosystem Speaks, and you've got KubeCon next week and CNCF, a lot of cloud native discussions around how new teams are forming around platform engineering, DevSecOps, MLOps, DevRel, throw in a little bit of data science and data engineering in there — all these new spot teams. Yeah, I think, again, you were talking about personas earlier in the day and how that's going to change. And we actually did a KubeCon preview — or "KubeCon," as I keep being told I pronounce it wrong. I always say it one way, but it's KubeCon. But what's funny about it was that a lot of the discussion is going to be about MLflow and things like that — Kubeflow, which is having its kind of coming-out party this year. Last year when we were over there, there really wasn't a lot of AI talk. But I think what's interesting, going into some of the discussions that I had that are about to play, is that there's a lot going on in this ecosystem — a lot around security, how you manage it, and what the stack is going to look like. And some observability stuff as well. You and Stu talked about this a little bit. 
It seems to be a need. We were talking about that extensively at dinner last night — all these piece parts, and it just gets more and more complicated into application performance, application availability. It all fits together. And some of the briefings I've had that weren't on camera — I'm actually finishing my write-up that goes along with the video — were really about platforms for observability. OpenSearch, for instance, which is the fork of Elasticsearch created when the licensing changed — you see that people are actually building real platforms on top of OpenSearch, and it's not just about the search parts of it. What's interesting — Matt Hull was on from NVIDIA, the VP of AI Solutions, with Baris, the head of AI product at Snowflake. Stu held that panel just earlier. One observation he made — and I want to get your reaction, because this comes back down to observability and these new end-to-end systems being developed for AI — Matt Hull from NVIDIA said, you go back a few years, there were only a few use cases for AI that NVIDIA was applying to; now, with generative AI, they're all over the place. So what's happened is the use cases for AI — obviously from NVIDIA's perspective — have gotten massive, the aperture's huge. With KubeCon and that community of Kubernetes and cloud native, AI has opened up a huge aperture for changing the definition of observability, increasing the role of cloud native DevSecOps operators who are now running with the data engineering platform folks. They seem to be driving the change, Rob. 
And I'm going to be curious, coming out of Paris and into the North America show for CNCF, how that accelerates, because if this continues — like NVIDIA — you'll see the CNCF community probably take charge of generative AI. Because, as we've been saying, the democratization, Dave, is not going to be driven by the data side, it's going to be driven by the AI itself. So for the role of the humans, there'll still be data engineering, but the bulk of the work is going to be scaling from the hyperscale side, which is cloud native. We heard that from our last guest, who was saying the science and tech is going to move. So I'd love to get your thoughts and reaction to that, because NVIDIA is like, hey, of course, the use cases are massive. So CNCF, the Kubernetes side, could explode. Yeah, I think it's the cloud native aspect — the CloudNativeCon part of KubeCon, because it's CloudNativeCon as well. And I think the name should almost flip-flop at this point, because Kubernetes — I know it's an overstatement to say it's been solved, but it is. It's the services that go around it. And a lot of it is there to support these modern data apps. Like you have DoK, Data on Kubernetes, which is a big push they've had over the last year, and I've been attending those. Well, and following up on the panel with NVIDIA and Snowflake — you know, we get so nuanced sometimes in this industry, we analyze and hyper-analyze, but it really comes down to, you know, Moore's Law is dead but it's alive and well — a million times performance improvement by the end of the decade — and it's all about the data, which was kind of Snowflake's point, bring the AI to the data. So that was a great panel. And I think those are two companies that — Well, it's clear that NVIDIA is the opposite example, but that point sticks, because what everyone's been scratching their head about is, is Kubernetes — and KubeCon — getting too boring? 
We don't really need a show for it, but as you pointed out, CloudNativeCon — cloud native growth, which is basically DevSecOps and DevRel — is rocking and rolling, it's booming. It's not like it's slowing down at all. So Kubernetes is getting boring and becoming standard. You don't need a show for that. It's already kind of done. And the platform engineering, to your point about how they all come together — that is the new IT. I think, and again, going back to that discussion, the one thing you brought up, you know, the power law distribution of gen AI — as you go out on that long tail, really, that's the edge. And you talk about things like MicroShift and other smaller Kubernetes deployments and containers out at the edge, where it really doesn't make sense to bring all the data back, so you do inference out there and then bring summarization back. I think that's going to be a big story. I mean, Tesla is a great example, right? And, you know, they're talking about how they're sort of rewriting their code, et cetera. But basically, you know, they're not doing all that stuff in the cloud. They're doing it in real time. They take maybe, I don't know, 1% of their data, maybe 5% or less, and send it back to the cloud. A deer runs across the road, you know, and that's where the inference is happening. It's interesting — a lot of people are saying, well, you don't need GPUs to do inferencing. Vikram was here today, who knows this stuff. He's deep, deep into it. He goes, oh, really? The math to do inferencing properly is so complicated, you're actually going to need GPUs there. So that's his premise. It's going to be interesting to have him back and, you know, dig into that. He knows what he's talking about. Well, let's get back to cloud native. I remember the slide we put up in our opening segment. I don't know if we have it, the first ETR slide. Slide one. 
Let's pull up slide one from that ETR data if you have it. Rob, if you look at this — okay, Dave pointed out that ML/AI, which is essentially the gen AI piece, the red line there — what's below the line is kind of, I won't say going out of style, but it's really more the on-deck circle. Well, let me say this. During the pandemic, ML/AI, containers, cloud, and RPA were all well above that 40% line. And now it's just AI. Well, it's hovering around it — but that's the constellation of cloud native we were just talking about. So container orchestration, container platforms, cloud computing — that is going to be massively pulled up with gen AI. And then everything under that line is going to pull up too, because the rising tide of gen AI is going to, again, open up the use cases — video conferencing, you look at all the labels in there. I know they're categoricals, but all industries will be disrupted. And I think that's why it's kind of the lull before the storm. Yeah. Well, the point too is, prior to ChatGPT it was like, okay, we have these side projects with ML and AI. To your point, John, it's going to be AI in infrastructure, AI in software, AI everywhere, AI in cloud. And by the way, this is survey data. This is directly from customers and their buyer behavior. It also matches with theCUBE Research data that we have as well. So the combination of that data really validates it. Yeah. And I think what's interesting is, as we see AI/ML going up, you see container orchestration, which is basically cloud orchestration, rising. And that makes sense, because there is no easy button for doing AI/ML. We've been hearing that all day today, and you'll hear it again with Craig Wiley from Databricks. We actually had Enrique Lizaso — I can never get his name right — from Multiverse. Multiverse Computing, where he's talking about quantum techniques being joined with AI. 
And it's not quantum computing, as you would say, the big machines and things like that. It's actually the techniques. What it does is summarize and make the model more efficient so it can run on a CPU or something of that nature. Super interesting to stay tuned to, because there's a lot of information that comes through. And then we talked to the CISO, Paul Hawkins from CipherStash, about how you actually protect the data. How do you make sure it's encrypted everywhere it needs to be as you're starting to pull things in, and make sure the things that shouldn't be pulled into AI aren't? And I think that starts to give you the tenor that this is really difficult stuff, and why it's complicated. And those videos are coming up right after this close in our Ecosystem Speaks. I talked to Andrew Joiner, the CEO of Hyperscience. They're doing stuff with computer vision that can scan all kinds of contracts as unstructured data and get that into vector embeddings, which is incredible. And then Arana Khan, the co-founder of Chara — they're offering cloud insurance, basically saying, I'll guarantee GPU, and if I don't deliver it, I'll pay for it. So think flood insurance — for GPU capacity. So no, this is a resource issue. This is not cost optimization. It's essentially like flood insurance: if you have critical infrastructure and you need to run GPUs and compute, these guys will insure it. So it's better than buying spot instances. This is a guarantee. So you're starting to see the fintech side of it go from cost optimization, Rob, to a real business model. And they're killing it. They're going to do another round of funding. They've got term sheets and everything. So you start to see that startup energy. Also earlier this morning — I want to get your reaction to this, Rob, too — we had the CEO of Rockset on, Venkat, and Kyle Weller, head of product at Onehouse. 
I don't know if you heard that, but it was right in your wheelhouse, along with our Databricks conversation. They were talking about the data lake and some of those dynamics. They're seeing the same end-to-end systems architecture. That's not a one-vendor thing. That's going on. So you start to see a potential sign that maybe the market and the products aren't fitting from the old vendors. This is going to be a very interesting thing, though. What's your reaction as you see these new startups with the new models — new business models and new systems models? I think there's going to be a plethora of ways to address it, especially on the data side. I think it's going to expand and then contract, like every other wave we've seen. To your point on observability, we've seen it expand massively in the observability space and actually start to contract into platforms. I think similarly that will happen here, with things that look like new startups becoming features of existing data platforms. But I don't think anybody wants to be wed to one data platform. I think that's — We had Raj Verma vehemently say on theCUBE here, vector-only databases are a fail. That will not be sustainable. Which we've been saying for a while. It's a feature, not a company — but there are companies out there. Weaviate, a few others. We use one. Milvus, Pinecone. We use a standalone vector database because that's what was available at the time. You know, Mongo's feature wasn't even available when we launched theCUBE AI. But I also want to point out, just on all the guests this morning, the two practitioners that we highlighted, Uber and Walmart. Uber — Uday is just amazing. Basically they built this application — they started building a platform in 2015, and they've always been using AI, but now they're injecting AI throughout the entire stack, throughout the entire life cycle. And the interesting thing there was they're able to add new businesses. 
Think about Uber Eats. They're also contemplating — actually making moves toward — taking their platform to logistics companies and saying, don't write your own app. Use our app. You know, we have it all together: people, places, and things. Well, it registered with me, having rented a car this week, going, okay, wouldn't it be easier if I could do that through the app in the instant when I go into the airport, versus having to figure it out a week in advance or something of that nature? But they have all of that logistics, like you said, the fleet management part, which was really interesting. Yeah, absolutely. And then the other was Walmart, who basically took their triplet model, which is their supercloud, and developed an AI/ML abstraction layer on top of that. And they're serving both the retail side of the organization — kind of like Amazon Rufus, which is a shopping assistant, that's the retail side — but also AIOps, network ops, squeezing value out of the network, better security, using machine intelligence to make better infrastructure. So both sides of that, from a platform engineering standpoint: helping the users shop better and helping the infrastructure run better. It actually reminded me of something like Bedrock at AWS, where it's a service out to the various different parts of Walmart. And they're really focusing in on everything from IT ops all the way to supply chain, which was really interesting. Yeah, and it's highly tuned and purpose-built for their environment, which — they've had such scale, they can afford to do that. You know, not every company can, but it's governed and it adheres to all the edicts of their organization. It's actually quite remarkable what they've done in less than a year — basically eight months they pulled this off. Okay, so let's get down to the closing, kind of a wrap-up here. 
Dave and Rob — I'll share my thoughts, but I'll go to you guys first. The AI innovators are out there. We're documenting them here. We're going to add them on the 19th. Howie and I are going to do some more interviews to add to the program. What does an AI innovator look like? What did you learn from today's show, all day live? What did we glean out of this? What were the learnings? Dave, we'll start with you. What was the takeaway today? I mean, we had great representation from startup founders — series A going for series B, series C going for D — pre-public companies, public companies, VCs, and again, a lot of these innovators. What did you learn? I think it's playing out the way, frankly, we thought it would. Last year, it was a lot of excitement, a lot of experimentation. And I think we've said the second half of this year is really when you're going to start to see return on some of these investments. And I think it's throughout the stack. We had a lot of discussion today, both online and offline, about the silicon level. Of course, we're setting up for GTC next week, but NVIDIA — we've got companies like Groq going after the very low latency piece of the market, but NVIDIA's got a really, I think, strong moat. Others are going to come in, but it's really going to be NVIDIA's world for a while. And then as you move up the stack, you know, the data platforms piece — the big takeaways for me are that there are still huge gaps in the data estates and the data strategies, and companies are filling those gaps, but there's a long way to go, which says to me there's a lot of room for innovation. And that's going to come from a couple of places. One, existing platforms like Snowflake and Databricks — I would include Oracle, you know, SingleStore, et cetera — all these existing platforms that are evolving to catch the AI wave. 
And then you have all these new startups coming in saying, hey, we're going to be laser-focused on solving these problems and filling these gaps, bringing together unified metadata and unified governance. And either they're going to hit escape velocity, or they're going to get acquired, or they're going to be a niche. So those are my big takeaways from today. Rob, what are your learnings? I think the evolving personas, and the evolving use cases for AI, really stood out this morning — how companies are looking at it in their products. So embedding the AI in the products, as well as the kind of using-the-models-to-fix-the-models discussion, and then looking beyond that: how do they make it easy for other organizations that are not at the top of the pyramid to actually adopt AI? That's great, and I totally agree with you. One of the big things that jumped out at me — an epiphany for me this time, just as clear as day now, the fog has lifted — is that I believe the market, the customers, are ahead of the vendors, in the sense that they're forced to start thinking about ways to recast their IT and/or their groups in ways to create AI systems. And we've been saying on theCUBE since Supercomputing, and recently at MWC, that the systems revolution is coming. And of course Broadcom's president loved it, because they're doing systems. And so does NVIDIA, because they're validating it — and Jensen Huang last week at Stanford essentially validated that concept, as well as the power law. So those things I kind of felt good about, but today the wake-up call to me was that the customers are moving faster with this experimentation phase to set up new teams, where it's not an easy sales motion to say, well, that's the persona, let's go to market and sell something to that person — because it's not a person, it's a team, and it's developer-led, and it's cloud native, maybe not CDO-specific. 
So it's interesting to see how maybe gen AI is pulling the power away from the CDO, the chief data officer, and the data players, potentially into DevSecOps, because the data estates, although they're established, become the crown jewels for gen AI. But that's not going to happen until the software actually runs on the infrastructure. And Solomon just said developers won't put stuff into production because they're afraid of the risks. So I think there's very much a reality that top people are being assembled in the top companies, and I just don't see how the vendors are organized to actually run a motion to that sale. I think that's going to cause a lot of sales disruption in the market for these big vendors, but it's also going to set the table for the next wave. I think you're bringing up a really important point. We always talk about how every company is a software company, and it's kind of become a bromide that we repeat a lot, but actually that's been true now for a decade, and what's happening is the expertise within companies, to your point, is now at a critical mass where they can lead. And the other key point is — you've said this many, many times — it's the data that's going to determine the differentiation and the IP value, and these organizations know their data within their industries better than any vendor, and so they're dragging the vendors along. They are ahead of the game because they understand the edicts, the sovereign requirements, the legal requirements, and they are dictating to the vendors. I mean, the customer's always right, the customer's always in charge, but from an innovation standpoint, many of these existing customers are actually leading — not from the standpoint of developing large language models, there are a few companies doing that, but in really setting the protocol for how AI is going to deploy. 
Well, there's a specific technical issue. So Raj Verma was talking about some of the things around SingleStore and what they're doing with how they've taken MemSQL. We had Chandra, the CMO of Neo4j, a former Google engineer. We had Uday from Uber — they built their own data store using Spanner and their own systems. So okay, all of these things — you've got Venkat from Rockset, Kyle from Onehouse, you've got Databricks and Snowflake. All these companies are doing things that are key to an integrated system. All that being said, the number one momentum on the data side that we're seeing is SQL Server from Microsoft — it's the hottest-selling product in the survey data. So you have a market where everyone's got Microsoft SQL Server, which I wouldn't even classify as an AI innovator at all. And it's not. Well, I mean, if you talk to the people, by every different name, they say that anything in Azure is Microsoft SQL under the hood to a certain extent. But there's so much of it out there, and this goes back to those data estates — all of this IP these companies have. They've had SQL for so many years, and there's so much rich data in there. The question is about the disruption enablement that comes from the gen AI movement: if this new systems revolution happens, it has to disrupt SQL, because as Raj pointed out, it's not distributed. So you have a big elephant in the room right there in this incumbent system, and you've got knowledge graphs coming, you've got gen AI. So the question is going to be, is it truly a disruptive enabler, and then how is that disrupted — or does SQL adapt? But the fact is that much of this stuff is not distributed; the blockchain is distributed, and that's slow. So there are still some real serious challenges around what you optimize for — you heard Uday say, we're basically optimizing for availability. 
We can't ever have the system go down, versus making sure that the estimate of when the car is going to arrive is perfectly accurate. Why would we optimize on that and sacrifice availability? But my point being, John, they've built a truly distributed app, and it was really, really hard, and it took thousands of engineers, and person-days and months and years, to do that. And so I think that's why I was saying earlier, there are still a lot of gaps to be solved, and I'm curious as to who's going to solve them. There's got to be — to your point, you always make this point about scale — a horizontal layer that people can absorb and then build platforms on top of. It's got to be available. But that distributed platform really isn't there today. Is it the cloud? You know, is it AI? Distributed computing is here. So the question is, does the database not really work that well? Or you look at what Walmart is doing, where they have a horizontal sidecar of AI that's plugging into all kinds of different applications. Instead of embedding AI or building multiple different AI apps that scale horizontally, they're looking at an AI layer that then plugs into all of their different apps and enables them, sort of like a co-pilot model. So it becomes a question of, do you build everything off of one platform, or do you build a whole lot of co-pilot types? But with open source now, right, these companies like Walmart, like Uber, are able to lead. And we all know CIOs used to be scared to death of open source, and now their first question is, what's the open source alternative we can use besides this proprietary solution? You can use open source — it might be less expensive, run on a different kind of cluster or system or device — but you still have to host it. 
Sure, no, I understand that, but I think the point is — that's where, you've always made this point, and you're going to agree, of course — that's where the innovation is. Charlie Kawwas says open always wins, and it does, eventually. So people are leading with open. Point being, they can now develop on top of that. So I guess what I'm saying is that distributed system, whatever it looks like — a lot of that innovation is going to come from customers. I think the developers will reign. We had Alessya Visnjic, the CEO and co-founder of WhyLabs, on earlier. I like her point, because she's like, they have to enable a bottoms-up developer go-to-market — very much like a Datadog or a Mongo, where you give developers the candy up front and sort of let them grow into it. But they can't just do that, because they also have to run the enterprise motion, because the data they want to work with is in these systems — the crown jewels they have to get at, and also have to protect. So you have this tension: I want to build the bottoms-up developer motion, but they still have to invest in the expensive top-down sales motion to go to the enterprise, because in order to show the ROI, they've got to get the generative value out of the data, and that's where the crown jewels are for the company. So you're starting to see a world — and you don't see this very often — where you've got to go bottoms up and go after the crown jewels. And anytime that happens, security's number one: who's got access? So Solomon's right — if it's in production, it's got to be bulletproof. You can't let these LLMs get the crown jewels. So clearly a lot of nuanced points. Rob, I know KubeCon's coming up, you'll see a lot of that, but clearly the infrastructure action is probably at the all-time high I've seen. It's moving very fast, the developer appetite is super high, and we'll see how it plays out. 
I mean, I think the pressure point right now is the infrastructure, not so much the software or the apps. Well, you know, it's back to developers. You guys next week are going to be at KubeCon. That's going to be exciting, in Paris of all places — you and Savannah. And Dustin Kirkland. Dustin Kirkland is going to be there. John, I've got to be back here. We're going to go to GTC. We've got a Broadcom meeting as well that we're going to go to, the financial analyst meeting. So it's going to be a big week. Yeah. And then we've got RSA coming up. We've got a bunch of big shows. TheCUBE is kicking it up. Looks like the events are back. Looks like we had a little bit of a break in January. We've got SuperCloud 7 coming in July. That's going to be about the sixth data platform, or this new modern platform we kind of teased out here with Uber and others. And on the 19th, we're going to have another special addendum to this event, SuperCloud 6 — an addendum for AI innovators. We're going to have LangChain in, LlamaIndex, a bunch of panels. Howie Xu and myself will be here bringing you more startups, mostly founders. We'll try to bring in some big companies, but really the AI industry is making it happen, enabling this next-generation AI system, and we'll be covering it for you. And so now stay tuned for the Ecosystem Speaks, a series of interviews with experts and leaders from theCUBE community. Watch this now. Thanks for watching. And that's it for SuperCloud 6.