Welcome back everyone, live coverage at VMware Explore. 13 years theCUBE's been here. I'm John Furrier with Dave Vellante. Formerly VMworld; I almost said it twice today. It's only the second year, so I'm still getting trained. Laura, the CMO, is coming, and she's going to be mad if I say VMworld. We had Raghu on, the CEO. Now we've got the CTOs on: Kit Colbert, CTO of VMware, and Amanda Blevins, VP and CTO for the Americas. It's great to have you guys on, both CUBE alumni. A lot of action, your jobs are action packed. You've got 20 dinners to go to, and I'm sure the same for you. Your time is in great demand. Lots to explain, lots to build on. You just came back from the hands-on labs, you're talking to customers. This multi-cloud thing has momentum. The original vision is an operating model, but then AI comes as a tailwind; it's a gift that kind of fell into the market. And it's kind of a gift for you guys because it propels the VMware story. Customers are now talking about it, so it crystallizes the runtime of what super-cloud or multi-cloud is doing.

Yep. Well, in some ways you can look at AI as another type of workload for multi-cloud. And what's interesting is how we as an industry navigated cloud initially. We went through a phase we called cloud chaos, and now we at VMware are trying to help folks get to something we call cloud smart. What I see is that it's really an architecture: how you build your apps, how you design your infrastructure, and the various components needed to support those apps. So the question in front of us is this: we went through that cloud chaos phase with cloud native applications, and now the next generation of apps is coming with AI and generative AI. Are we going to make the same mistakes we made in the past? Or can we learn from those mistakes and go straight to the cloud smart architecture we're talking about? That, in our mind, is the real opportunity here.

We talked about this at our SuperCloud 3 event, where we were just talking about security and architecture. And Amanda, we were talking before we came on camera about the role of the cloud architect being very important, because in the keynote the comment was that the runtime for multi-cloud is what's happening. That's the key thing. In your analyst briefing yesterday, and I have a little tech dog whistle, my ears pick up when I hear certain words, you said VMware's good at scheduling, the scheduler. That's an operating system word. You're scheduling something; that's an operating model. So we're seeing runtime, scheduler, we're good at I/O. The words being used by the VMware management team, the senior staff, and the technical community are operating system words. That's essentially what's happening here with multi-cloud. It's not a multi-vendor collection of applications or stuff, it's not a closet of goodies. It's a fully running system.

Absolutely. You need those controls in place so you know where your workloads are going to go. But you talk about the cloud architect layer; there's also the platform engineer layer. If you're missing that platform engineering layer, you're missing the team that provides the PaaS, the team that provides the CI/CD pipelines, the ones that interact with the developers. When organizations are missing that layer, they have developers managing Kubernetes. That's an infrastructure thing, right?
The developers don't need to be in there; we want the infrastructure teams to do that, so the infrastructure folks or cloud operators can level up into platform engineering. And then that scheduling you're talking about, the operations, et cetera, happens across that platform engineering layer for the cloud native applications. And you can still leverage your cloud operations and multi-cloud infrastructure, the super cloud, for your traditional apps too, in addition to enabling platform engineering.

And I think this notion of operating systems is an interesting one, right? We understand an operating system in the traditional sense as managing the hardware and so forth, and people have made that analogy out to the cloud. What we're seeing now, though, is really an expansion of that concept. The operating system is there to handle the infrastructure, but what we consider to be infrastructure is changing. What Amanda is pointing to is really important: a lot of these platform engineering techniques and concepts used to be considered outside the domain of infrastructure. They were not part of a traditional operating system; they used to call it middleware, right? But now you see that it actually is that infrastructure type of thing. So I think it's interesting how the evolution of the space is allowing us to come in, standardize, and take away some of that complexity.

Well, the operating system thing is very nuanced. It's semantics, but I use it loosely to describe systems thinking. It's not an operating system in the sense of software on a disk on a server like Linux, but it is an operating system as a concept: a system, not a monolith. So, for example, there are consequences when you change something in a system, and you have subsystems.

Yes, yeah.

You have all kinds of other connective tissue. And that's what people are doing as they zoom out, saying, okay, what do I have to do? So it comes back to what you were saying before: do we make the same mistakes twice? When I was a young fool in high school, I would jump off cliffs into a quarry, right? I would take those risks. But you're a mature company, and you worry about standards, you worry about legal compliance.

Luckily, there was no video back then.

So chances are history is going to repeat itself, right? You kind of have that expectation. So what do you think you have to do so that maybe history doesn't repeat itself as much, to bring that innovation in sooner? Kit, maybe you can start, and then Amanda can share some thoughts.

So, and I'm not sure how far this metaphor of you jumping off cliffs carries here, but we as an industry went into cloud without really realizing the types of challenges we were going to see, or that we needed a multi-cloud architecture from the outset. It took us ten years, really, as an industry to figure that out, as we started seeing the proliferation of clouds, data, and apps sort of everywhere. So as it stands today, at the dawn of this new age in the industry around AI and generative AI, we now know about multi-cloud.
We've learned the lessons: we have to think about architecture, and about architecting solutions in a way that gives us the optionality and flexibility that multi-cloud can enable. So I think we have the knowledge now. As we start building these systems, we need to make sure multi-cloud is foundational in how we build them.

Yeah, I absolutely agree. We've learned it, and organizations are partnering with VMware to ask, how do I do this properly? Since we're already engaged with them and already working with them, it's just a natural extension. But what else is interesting about GenAI and AI/ML is data gravity, and where the data is. Because some organizations weren't as successful going to public cloud as they thought they might be, since they didn't have the multi-cloud architecture or multi-cloud strategy, a lot of their important data is still on-prem. So for them to use that data to train models, and to fine-tune them where the data is created, especially at the edge for various industries, running from the edge to the on-prem private cloud to the public cloud is simply a requirement of where that data is generated and where it needs to be analyzed.

And that's why AI will run across clouds.

Yes.

When we first started thinking about the idea of super cloud, I thought, okay, an app will maybe run on separate clouds, and then I said, nah, that's not going to happen. Now I'm thinking maybe it is going to happen. Technically, will that happen, do you think?

I think so. You're going to see a couple of things. First, there's a proliferation of open source models, and they're actually very, very good at what they do, especially when you do a little extra fine-tuning for your environment. What you're going to see within the industry is a battle between the giant models, such as a GPT-4 with hundreds of billions of parameters, versus these open source models, of which you're going to have more; they're going to be smaller but more targeted, right? We certainly want to support all of these, but we think a lot of customers are going to use more of these smaller, targeted models spread out everywhere. Some clouds will have certain data, and some apps might use certain models; other clouds, different apps, different data, different models; on-prem, different apps, different data, different models. So you're going to see that very much multi-cloud architecture start to get set up.

Yeah, and another thing to think about is the cost of running these models. There's a great analogy: if I ask ChatGPT, how do I make toast, that's a dollar, right? But if I have a domain-specific model that's much smaller and uses far fewer resources, and I ask that model how to make toast, maybe that's two cents. The reason that analogy is interesting is that organizations are looking to offload cognitive load from people, to augment the human so they can do things better and faster. But there needs to be a cost decision point: if it costs me more than my humans to do the offload, do I even use it for that business case, right?
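That cost decision point is easy to make concrete. Here's a minimal back-of-the-envelope sketch; every price and volume below is an invented assumption for illustration, not a figure from the conversation:

```python
# Illustrative back-of-the-envelope cost comparison (all numbers are
# made-up assumptions for the sketch, not real pricing).

QUERIES_PER_MONTH = 100_000

# Hypothetical per-query inference costs.
LARGE_MODEL_COST = 1.00      # "a dollar" for the giant general model
SMALL_MODEL_COST = 0.02      # "two cents" for the domain-specific model

# Hypothetical fully-loaded human cost for handling the same query.
HUMAN_COST = 2.50

def monthly_cost(per_query: float, queries: int = QUERIES_PER_MONTH) -> float:
    """Total monthly spend for a given per-query cost."""
    return per_query * queries

for label, cost in [("large model", LARGE_MODEL_COST),
                    ("small domain model", SMALL_MODEL_COST),
                    ("human baseline", HUMAN_COST)]:
    print(f"{label:>20}: ${monthly_cost(cost):>12,.2f}/month")

# Offloading only makes sense when the model undercuts the human baseline.
```

At these invented rates, both models undercut the human baseline, but the small domain model does so by two orders of magnitude, which is the economic case for targeted models.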
There's an interesting analogy there with supply chains and history. Think about when you have breakfast in the morning: the eggs come from one place and the banana comes from South America. If you had to do all of that yourself, you couldn't afford it, but all these specialists come together in a supply chain and I can have my breakfast for ten bucks or whatever it is.

And it goes further than that, right? We've had some really great conversations over the past couple of days here. Yesterday in our executive summit we had a great panel discussion about this, and one of the questions we raised to the executives in the room was: how do you balance the risk of deploying models, especially chat models, which as we know can say terrible things if you don't control for that, against the risk of not doing it and being left behind competitively? One of the interesting things that came up relates to this idea of multiple models. One avenue for putting controls on what a model can say is to have another model review the output of the first one and decide: is this totally off the deep end, or is it okay? And you can imagine models helping to classify data for other models, or models taking the output of another model and doing quality control on it. So I do think you're going to see a proliferation of these things, and really an ecosystem of models working together.

And maybe the opposite direction too: I have domain-specific models for various lines of business and various functions, and I have a super model, kind of like a super cloud, that can route the query, as in, I need to go to this specific model for that query. The tech is there today, so that super model can actually do that better than humans.

I don't know if it can do it better, but we're definitely working on it internally. It's moving toward there.

Yeah, I don't know. It could do more, obviously.

I think it really depends. The quality is not quite there yet. In the most general sense, probably not, but if you narrow the scope, I think it can. That gets back to Amanda's point: if you define the scope appropriately, you can have a much smaller model that's much easier to train and much less costly to run, and that still has really great functionality, on par with the biggest models.
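To make those two ideas concrete, the guardrail review and the super-model routing, here is a minimal sketch. Everything in it, including the model names, the routing table, and the `classify`, `generate`, and `review` helpers, is a hypothetical stand-in rather than any particular product's API:

```python
# Minimal sketch of model routing plus a guardrail review pass.
# All model names and helper functions are hypothetical stand-ins.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    generate: Callable[[str], str]  # prompt -> completion

# Hypothetical domain-specific models, one per line of business.
def underwriting_model(prompt: str) -> str:
    return f"[underwriting answer to: {prompt}]"

def claims_model(prompt: str) -> str:
    return f"[claims answer to: {prompt}]"

ROUTES = {
    "underwriting": Model("underwriting-small", underwriting_model),
    "claims": Model("claims-small", claims_model),
}

def classify(query: str) -> str:
    """The 'super model' in front: picks a domain for the query.
    Stubbed as a keyword check; in practice this would itself be a model."""
    return "claims" if "claim" in query.lower() else "underwriting"

def review(answer: str) -> bool:
    """Guardrail: a second model reviews the first one's output.
    Stubbed as a trivial check for the sketch."""
    return "off the deep end" not in answer

def answer(query: str) -> str:
    model = ROUTES[classify(query)]   # route to the domain-specific model
    draft = model.generate(query)     # generate a draft answer
    if not review(draft):             # guardrail pass on the output
        return "Escalating to a human reviewer."
    return draft

print(answer("What is the status of claim 1234?"))
```

The design point worth noting is that the guardrail is just another model (here a stub) sitting on the output path, so it can be tightened or swapped without touching the domain models.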
So do you think autonomous vehicles will ultimately drive better than humans?

I heard an autonomous vehicle drove into wet cement, and one went into a puddle the other day. So I don't know; my truck didn't want to go in there.

We just published a power law in our Breaking Analysis showing a long tail: the proprietary models at the head of the curve, and then pretty much a straight line down. There's the power law, the power law of AI. It's like the rise of the old record industry, and then it has a long tail. So it's size of model against domain specificity on the x-axis. We're predicting, and we want your thoughts, that the torso will grow and shift up and the head will get smaller; more people are going to move up the tail, because of the surge of open source and all the blockers in corporate America. Obviously there's enthusiasm from the boardroom to the dorm room, but the tail is rising fast. So for the long tail of apps there, you just talked about how LLMs and foundation models are going to integrate. We see that too, most likely via APIs. Hey, DevOps, right? So we're living in a DevOps age of super AI. If that's the case, you're going to see a fatter torso and neck, because the proprietary models can't carry the load; they have blind spots in the data.

And they shouldn't, right? You trained a model for something very specific, so we know what it is, and we just have to make sure we're using it properly. There doesn't always need to be that super model in front of it. If I have one line of business that does underwriting for an insurance company, I know that's just what that model is going to do. My insurance company has other things around claims, customer service, et cetera, and I'll have different models for those, and there's no need to route between them. But if I'm in a different type of business, where people have different areas of expertise that they need to pull together as a whole, then I will need that super model to route around them.

And Kit, we talked about this data fusion again at SuperCloud 3, fusing data together. We just put the words together, not like it's a discipline, but fusion is fusion. So if you believe that, it's got a DevOps trajectory. Just go back to the history of DevOps, to infrastructure as code. How did that start? Okay, so now we're where we are today. You said in the analyst briefing that apps have got to land somewhere. So the vision is this: if apps are going to be AI native, with developers who don't want to deal with the infrastructure just coding away, do you shift left or right? What does shifting even mean? If you have AI native in the app, where does it land, and what does the architecture look like to support a developer who just wants an abstracted layer that does all the heavy lifting? You just want to code. What does that look like? Because AI is going to force a new shift in architecture.

It is. It is.

What does it look like?

So, and thank you for the graphic, by the way, I think it's something that's still evolving; I don't know if we've totally figured it out as an industry. But you're right. When you look at cloud native apps and the scalability aspects there, there was this notion of separating the scale-aware code from the scale-agnostic code. The idea was that a few super hardcore low-level devs deal with the scale-aware code and all its complexity, and the rest of the folks don't need to know about scale. They have some guardrails around how they operate, but they can write business logic in a much more simplified manner because of that. And I think you're going to see something similar evolve with large language models, and with AI more generally. One of the amazing things about large language models is that you don't necessarily need to be an expert to use them; you've got this natural human language interface. That being said, natural human language is the programming language of these models, so you do sometimes need some help. What we're starting to see is the advent of smaller, more specific models that help take what you want and turn it into the right prompts for a larger language model. Again, more of this multi-shot prompting model, right?

Yeah, because a lot of models won't use any prompting other than the human's prompt, or they might do something slightly wrong. You say some English words, but those words are very fuzzy. By setting the thing up properly, you can do it right, and you can have a model help you do that, so you don't need to be an expert.
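A minimal sketch of that pattern, a small helper model refining a fuzzy request into a structured prompt before it reaches the large model. Both model functions here are stubs invented for illustration; in practice each stage would be a real inference call:

```python
# Sketch: a small "prompt helper" model sits in front of a large model.
# Both models are stubbed; each would be a real inference call in practice.

def small_helper_model(user_request: str) -> str:
    """Rewrites a fuzzy, natural-language request into a structured prompt.
    Stubbed with a template; a real helper would be a small fine-tuned model."""
    return (
        "You are a claims-processing assistant.\n"
        f"Task: {user_request}\n"
        "Answer concisely, cite the policy section, and refuse if unsure."
    )

def large_model(prompt: str) -> str:
    """The big general-purpose model that receives the refined prompt."""
    return f"[large-model completion for prompt: {prompt!r}]"

def ask(user_request: str) -> str:
    refined = small_helper_model(user_request)   # stage 1: refine the prompt
    return large_model(refined)                  # stage 2: generate the answer

print(ask("is water damage covered?"))
```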
Amanda, if you apply DevOps principles to this concept, what has to happen next?

Well, there's a lot of focus today when you're deploying applications, and we'll assume an AI/ML workload is an application for this analogy, on where the users are. Geographically, where do I need to put it? What performance do I need? What cost do I need to meet? What about security, compliance, et cetera? But what also needs to be factored into DevOps now is: where is the data? Where is the data I'm training on? Where is the data I'm fine-tuning on? Where is the data being ingested and analyzed, where I'm doing the inferencing? And that's going to require a different type of architecture.

I think that might trigger a lot of distributed systems, distributed application requirements, right?

Yeah. Well, I look at AI as, to some degree, a specialization of cloud native applications. It's inherently distributed, it's got all these properties, and it oftentimes runs on Kubernetes nowadays, leveraging all the great things we've learned about cloud native. And you're right, what becomes interesting about it is the data-specific requirements, and the amount of compute required as well. At that scale, networking becomes a really big thing for communication, especially during training. So it gets to be a specialized aspect, but it still rests on the same bedrock of DevOps we've been building on for ten years now.

Yeah. Hey, distributed computing's back. Dave, our prediction years ago was that the cloud is just a distributed computing model that includes on-premises. If we wait long enough, everything we say will come back. Remember when cloud was raging and the on-premises debate was on? We talked about private cloud a lot back then, and the edge. That's essentially going to be the cloud operating model; that was pretty obvious, and it's happening. I think AI throws a nice wrinkle into this because it accelerates as well as highlights things. So we're going to see interesting things.
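Amanda's point, that data location now has to sit alongside users, performance, cost, and compliance in the deployment decision, lends itself to a small sketch. This is a toy placement scorer; the sites, numbers, and weights are all invented for illustration:

```python
# Toy placement scorer: pick where an AI workload should run based on
# where its data lives plus cost/latency/compliance. All values invented.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    data_gb_local: float   # how much of the training/inference data is here
    cost_per_hour: float   # hypothetical compute cost
    latency_ms: float      # latency to the workload's users
    compliant: bool        # does this site satisfy data-residency rules?

SITES = [
    Site("edge-factory", data_gb_local=800, cost_per_hour=6.0, latency_ms=5,  compliant=True),
    Site("onprem-dc",    data_gb_local=400, cost_per_hour=4.0, latency_ms=20, compliant=True),
    Site("public-cloud", data_gb_local=50,  cost_per_hour=3.0, latency_ms=40, compliant=False),
]

def score(site: Site) -> float:
    """Higher is better. Data gravity dominates: moving the data is usually
    more expensive than moving the model, so local data weighs heavily."""
    if not site.compliant:
        return float("-inf")  # hard constraint, not a trade-off
    return site.data_gb_local * 1.0 - site.cost_per_hour * 10 - site.latency_ms * 2

best = max(SITES, key=score)
print(f"Place workload at: {best.name}")   # -> edge-factory
```

The compliance check is modeled as a hard constraint rather than a weighted term, since data-residency rules are usually not a trade-off.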
So to wrap it up: what would you look for if you're someone out there? What would you advise your ecosystem of practitioners and customers? What should they look for to see the trend line? What are the markers on the path toward digital transformation with AI, and multi-cloud? What are the key markers to pay attention to?

You know, I would start with understanding the business. As technologists, we're sometimes a little removed from the business, and this is not the time to be removed from it, because every organization is coming to say, I need to do GenAI, I need an AI/ML solution. So really help the business understand: do you need GenAI? Is it even necessary for this use case, or do we need other technology? But also, practitioners and technologists should work with others across their field and understand how their business is using GenAI, so they can be proactive, come to the business with suggestions, and make sure the technology is in place to solve for that before the business comes to them.

And I would suggest, first of all, that business-tied use cases are absolutely critical. Then once you find one of those, or a few of them, there's a tendency to rush headlong into these things, to just do it as fast as you can. And you do want to be fast, but getting back to our earlier conversation, you also want to be thoughtful. You don't want to get yourself back into that cloud chaos state. So think about multi-cloud, think about a cross-cloud architecture, and think not just about how you'll solve the problem for the next few months, but how you'll set yourself up for the years ahead. You're going to accumulate technical debt over time, but you can avoid a lot of it by making the right moves early on.

Kit, Amanda, thank you for coming on and sharing here at theCUBE, extracting the signal for everybody. And congratulations on all the great momentum you have with cross-cloud and multi-cloud.

Thank you for having us. Great to be here.

Love chatting with you, and the audience loves it. Thanks for coming on.

Thank you so much.

All right, more CUBE coverage right after this short break. 13 years at VMware Explore, theCUBE's continued coverage. We'll be right back after this short break.