Hello, welcome to SuperCloud 3. I'm John Furrier, host of theCUBE here in Palo Alto for our third edition of SuperCloud. This is where security and AI meet; the hottest topic in AI is security and data. We're here for a keynote fireside chat with Kit Colbert, VMware CTO. Kit, great to see you. Thanks for coming on. Well, thanks for having me again. You've been a leader in the cross-cloud, SuperCloud, multi-cloud effort for VMware and in the industry. Lots of change since SuperCloud 1. A lot more visibility on this layer, this interaction. Multi-cloud is a much more used term with more meat on the bone, so to speak. Cross-cloud has been getting a lot of traction, so congratulations on that. But now as it goes to the next level, you're seeing gen AI enter the scene, which is bringing up the conversation of automation. You've got CodeWhisperer from AWS. You've got Copilot. People are writing code. You're starting to see a lot more action on the dev side, impacting the operational aspect of data. This is a huge thing. There's a lot of hype in gen AI. What are you hearing? Yeah, well, a couple of things. So I think you're right. It's been not quite a year since our first SuperCloud event, which I really think helped us set the stage across the industry and start the conversation that we're continuing now. And I think your point is accurate that we have seen the industry coalesce around this concept, that there's a certain architecture you need to adopt in order to be successful in this multi-cloud era that we're in. And I think what's interesting is that, ever since ChatGPT came out in November of last year, there's been tremendous interest across the industry. I mean, this was very much like an iPhone moment. You saw it and you can never unsee it. You can never go back to the way things were before. And so I think what we're seeing across the board is every company trying to figure out, okay, how do I realize the value of generative AI?
And how do I do so without necessarily falling into some of the same traps I've fallen into before when the previous super hot technology came out, right? And so I do think that we have this interesting coalescing of this cross-cloud or multi-cloud architecture with generative AI. You know what's interesting? VMware, if you go back to the roots of VMware, it was a disruptive technology, came out of nowhere, virtual machines. That created a market. Amazon with cloud, that created cloud computing. And now generative AI has got its own market. Each of these inflection points has the same parameters: a lot of developer action, operational impact, meaning changing the game of how companies work. And so now we're in this mode where cloud's gone to gen AI at scale. You've got two types of customer profiles, the startups or developers, and then the enterprises that are already out there. So you have people going to use gen AI for their existing stuff and for net new capabilities. So you're kind of changing the airplane at 35,000 feet if you're an existing player, and then you've got the new entrants coming in. So it's an interesting dynamic. How does gen AI get valued? A startup can clearly go after an opportunity, but for a big company, how do you tackle this? You've got to integrate it in, you've got to see some low-hanging fruit. What's the playbook? How do you see this creating value? Well, I think it's a great point because what you see is something like gen AI, given the importance of it, really creates an inflection point for the industry. And I think when you find these inflection points happening in the history of business, what you see is that they provide tremendous opportunities for startups to get in with a very different, or you could argue better, value proposition.
And so I think for a lot of established enterprises, what they're starting to think about is, okay, I thought I had a solid moat, but how could that get disrupted by this new gen AI technology? And how can I as an established enterprise think about how to defend against some of these startups coming in there? So I think everyone realizes there's huge potential. I think there's also a lot of discovery happening around how to best realize that potential. And I think the other thing that I hear a lot of concern about from large enterprises is on the security side of this. I want to go in, I want to move quickly to take advantage of this technology, but how do I also ensure that I'm not driving greater risk or damaging myself in that way, right? Like, am I potentially leaking proprietary data by leveraging gen AI? Am I potentially doing some sort of IP contamination through the output of the gen AI system? Some of these things are unknown at this point. So I do see both the excitement as well as the risk management side of it, and folks trying to balance those two things. And you see companies like Amazon, for instance, doing this with Bedrock; they have the licensing guarantee with the Titan models. They also have the ability to do training on EC2 GPU clusters. So there are costs associated with doing it, for one. The question I want to ask you is, how does that relate to SuperCloud? Because remember, SuperCloud and multi-cloud are kind of coming together. Is AI going to help in the short term push us forward operationally towards SuperCloud? That's a really good question. I think it's going to be one of those things where, I can see it going both ways. You saw my smirk, right? I can make arguments in both directions. Typically what you find is that when folks see a new technology like gen AI, what we're seeing now, they sort of go all in. They're not necessarily thinking about what comes next.
They're saying, hey, I just need to get this thing solved, get something in production, start realizing business value from it. But then typically what they find is that, oh, I've kind of painted myself into a bit of a corner. It's inhibiting my ability to scale, my ability to really take advantage, let's say in this case, of multi-cloud. So I do see some thoughtfulness going into these discussions. People are thinking about, okay, what sort of architecture should I be taking here? What sort of dependencies should I be taking, and on whom? And how do I ensure that I still get the choice of location that's super important? So just as another quick data point, a lot of large enterprises are definitely using public cloud for gen AI, but at the same time, they want to know they have a path back to on-prem. It's actually really, really interesting, both for things like, can I do it more effectively and cost-efficiently on-prem, but also from the security and risk standpoint that I mentioned before as well. So there's definitely a lot of thought going in there. And I think the jury's still out on how it will unfold. You know, I hear two schools of thought. On one hand, democratization, that's Databricks' big message, you know, democratizing development. Obviously open source is driving a lot of value. I'm not going to poo-poo that, it's a great message. No, I think it's amazing how fast the open source community is moving on this and how fast it's evolving fundamentally. Yeah, and I think that's where the canary in the coal mine will come from. I think that's the answer. And then on the other hand, I hear a lot of people saying, it's not that easy, and it's not. There are some technical challenges to get AI into the cloud and then bring in the scale for hybrid and then obviously SuperCloud. And so I have to ask you, do you see the trend being more solution architecture?
Because the conversation I'm getting into on that hand is that it feels like the solution architect conversations from years ago. Okay, how do we lay it out? There's a lot more holistic systems thinking going into gen AI than just saying, okay, just spin up some servers and let's get some GPUs. That's dev, that's a dev test, kind of more of a sandbox. But once you want to put something into production, I see a lot more conversations that are serious around architecture, playbook, privacy, compliance and governance, kind of foundationally setting that layer and then enabling the value. What are your thoughts on the whole conversation? Well, I think it goes back to your previous point, which is that you see some folks sort of rushing headlong without necessarily thinking about that broader architecture discussion, whereas you have other folks that are thinking very much about it. And I think one of the amazing things is, I mean, how long has it been? Seven, eight months since ChatGPT sort of came on the scene and caught everyone by surprise. But what you're seeing very quickly is this evolution of what the right sort of reference architecture looks like for a gen AI or large language model type of system. And you're starting to see, you know, just an explosion of startups in the space. And so these people are starting to come in. And I think the key thing, kind of like what we've been talking about with a cross-cloud architecture, is how do you ensure that reference architecture is fundamentally cross-cloud in nature? I think what you find, and what we found at least from other domains, like traditional cloud native apps, if you will, like Kubernetes, is that it doesn't actually take that much more effort to make it cross-cloud. You just have to plan ahead a little bit, right? And really be thoughtful around the dependencies on the APIs, et cetera, that you take. So that's really the direction it's going.
And that, as I said, gives me some confidence that we can actually approach and take advantage of gen AI while also ensuring that people have the flexibility and choice that comes with multi-cloud. I want to get your thoughts on, is AI a next-gen workload, or is it a bolt-on feature natively in the application? I can see it going both ways. It's kind of an "it depends" question, but how do you see that playing out? Because AI workloads might be a big thing going forward. Yeah, so I think they are really a next-gen workload. It's not just a bolt-on to, let's say, anything else you do with Kubernetes, right? Clearly you can build an AI large language model on top of Kubernetes, and you should, it's great, right? But the thing is, there's so much additional specialization that's needed there. Look at things like prompt engineering or things like vector databases, all the tooling that goes around that, that's all highly specialized. And so I look at it as a new type of application. And I think because of the scale that you're going to see, the fact that you're going to see AI and large language model based apps being like a quarter, maybe even a third, of the application portfolio for many companies in, let's say, three to five years, you do need to treat it as its own type of app and really get a lot of specialization around supporting it. What's VMware's view? I know you guys do a lot of R&D in data and AI, and operationally you have a great client base. They're all looking at this future cloud, SuperCloud, cross-cloud, as that next-gen VMware, that VMware Explore world that's emerging. And it's going to look kind of the same but different. You're going to have more microservices, you mentioned Kubernetes, but now you've got the notion of a changing data infrastructure to support gen AI. This data products layer, as I call it, is emerging, where data is a product. I mean, VMware could have their own LLM, right?
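The specialized tooling mentioned above, vector databases in particular, comes down to similarity search over embeddings. A minimal sketch of the idea, using toy bag-of-words vectors in place of a real embedding model; the `VectorStore` class and `embed` function here are illustrative, not any particular product's API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Tiny in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs = []  # list of (text, embedding) pairs

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def query(self, text: str, k: int = 1):
        # Rank stored documents by similarity to the query embedding.
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

store = VectorStore()
store.add("vSphere integrates Kubernetes for container workloads")
store.add("Vector databases index embeddings for similarity search")
print(store.query("similarity search over embeddings", k=1)[0])
```

A production system swaps the toy pieces for a real embedding model and an indexed store, but the query path, embed, rank by similarity, return top-k, stays the same shape.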
So who knows, everyone's going to have their own large language models and foundational models. And then you've got the developer, who at the end of the day is coding. How do you see that threading together? Infrastructure, data products, and the data developer, as I call it. So here's what I'll say, kind of give you a little bit of a sneak peek of what we're going to be talking about at Explore, but of course not give away too much here. The general point of view, as I mentioned, is that we do see the gen AI and large language model space as a new type of application. And yet, while it does require some new sets of capabilities that we just talked about, at the same time, it does leverage a lot of the existing things that are already out there, right? You need infrastructure, infrastructure that's optimized for these sorts of applications. How do you manage your costs across all these locations? How do you do governance and automation? How do you do security, right? Networking, networking is a huge thing, especially high-speed networking for these sorts of apps. So a lot of our current product portfolio we believe we can bring to bear to support customers doing large language models. We want to do that, of course, across clouds. But I think a big focus for us starting off is, how do we create a simple turnkey solution for on-premises environments? Because again, this is one thing we hear from customers: there aren't good, simple, cost-effective solutions there. So you're going to see a big focus from us there. You know, one of the things we talked about when we talked about hybrid and edge was, you move your compute to the data. With cross-cloud and SuperCloud, the conversation shifts to, move your workload to the data. Because now data is much more agile, much more programmable if you get a lot of the stuff done in the foundation layer. Federated catalogs, I've heard, is a big topic. Data fabrics.
Horizontally scalable infrastructure to keep track of the checkboxes, the compliance and the governance. Once that's kind of set up, another layer emerges to handle the data that needs to be available for applications for the AI to work. Remember, AI is only as good as the data, not just the software anymore. Well, I think, and this is a key part of our perspective, your data is sort of spread out all around. I mean, you certainly have some in the data center, some in the public cloud, but a lot of the data that's getting created, or will get created over the next few years, is going to be out at the edge. And so that's another big focus for us, or a reason for our focus on-prem, is this notion that we want to, in your words, bring the workload, bring the infrastructure to where the data is. And so we're trying to support customers to give them that flexibility to say, okay, maybe I build a model and even train it in one place, but where am I actually going to serve it? Where am I running inference? Where am I bringing in context information? Maybe that's out at the edge. And so I think having that sort of flexibility is key. The second point, we talk about data. We've been doing a lot of different things in the data space for a while. And one of the new initiatives we have is this notion of what we call the Data Services Manager. And the exact point there is to provide better flexibility to customers around data, to help them provide access to their different data sources in a more dynamic way. You mentioned VMware Explore is coming up and you didn't want to reveal too much. I got the sense there's going to be some kind of AI discussion in there, large language and foundation models. Makes a lot of sense because the developers and the operators are going into a frenzy over this stuff. It's going crazy and it's legit. It's hyped up right now, but people are sinking their teeth into it.
And it's much more of a sustainable trend than, say, a hyped-up quick-turn market. Or something like, you know, blockchain or Web3, which I think do have a lot of applications, but may have been a bit overhyped compared to what they can do. There wasn't enough reality there. Right, actual use cases. I think the long game will prove those will work. But here, AI is impacting immediately. There's low-hanging fruit out there. Absolutely. Configuration, even from a developer productivity standpoint. Developer productivity. I mean, the other big one we hear about is, how do you provide a large language model to support your customer service representatives? So that if a customer calls in with questions, these folks can very quickly search the knowledge base, figure out what they need to get an answer, get a quicker time to value. We see a lot of it also, whether it's marketing or social, in creating copy. So there are a lot of these use cases, and they're effective and can be leveraged today. And data is distributed. We're going to see a lot more distributed computing paradigms coming in. I want to get your thoughts on this, because for this SuperCloud event this year, this episode three, we got a lot of inbound. We put a call-for-speakers page out there, and we got a lot of people from the VMUG and VMware ecosystem who wanted to submit talks. So it's clearly resonating with your customer base. This isn't just a VMware direction you're taking; it has another 10-to-20-year horizon, and I'm seeing your ecosystem responding to that. I mean, the VMs were a great run, and it's continuing to be relevant. But there's almost another 20-mile march out there in the industry, where people can see a career path operating networks and operating data like they did with vSphere. So it's a whole other level of career path where people are going to start to settle in. And that's been a huge focus for us.
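The customer-service use case mentioned above, search the knowledge base and answer from it, is essentially retrieval-augmented prompting. A hedged sketch, with simple keyword overlap standing in for a real search index and a hypothetical prompt template; the knowledge-base entries and function names are made up for illustration:

```python
# Tiny stand-in for a support knowledge base.
KNOWLEDGE_BASE = {
    "reset-password": "To reset a password, use the self-service portal and verify via email.",
    "billing-cycle": "Billing runs on the first of each month; prorated charges apply mid-cycle.",
}

def best_article(question: str) -> str:
    # Keyword overlap stands in for a real search index or vector lookup.
    q_words = set(question.lower().split())
    def score(item):
        _, text = item
        return len(q_words & set(text.lower().split()))
    _, text = max(KNOWLEDGE_BASE.items(), key=score)
    return text

def build_prompt(question: str) -> str:
    # Retrieved context is injected ahead of the question; an LLM call would follow.
    context = best_article(question)
    return (
        "You are a customer service assistant.\n"
        f"Context: {context}\n"
        f"Customer question: {question}\n"
        "Answer using only the context above."
    )

print(build_prompt("How do I reset my password?"))
```

The point of the pattern is the last line of the template: grounding the model's answer in retrieved, company-controlled context rather than whatever the base model happens to know.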
So we have, I don't know how many actually, VMware administrators out there, I would guess in the high hundreds of thousands, maybe low millions. But a big focus for us is, how do we help them carry that skill set forward? There's a lot of work we did, for instance, deeply integrating Kubernetes with vSphere. So you have that same sort of management model in vCenter, but you can now extend it to these cloud native applications, applications that have been containerized. And we want to continue to drive that forward with AI-based and gen AI based applications as well. And I think the corollary there is not just within vSphere itself but across all of our product portfolio: as we go to this cross-cloud architecture, really helping them to expand their skill set to support this very dynamic environment. And they're the operators of the networks, of the environments, of the computing. I think the aperture of operations is interesting. DevOps started it, then it's DevSecOps with security. The next question is, when is it DevSecDataOps? And so we're seeing data kind of weave in, in a similar pattern to security. Shifting left became a huge wave of value because now developers can be, at the point of coding, embedding security policies that are given to them as guardrails from ops. Exactly. And that's more efficiency, developer productivity, better supply chain security. Now the data question is, does that look the same to you for data? Absolutely, it has to be. I think it's unquestionable. So the reality is, when we're talking about data, in some ways you're getting into even bigger risks for a company. As you know, there are all sorts of compliance and regulatory requirements across many different businesses in many different industries.
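The shift-left pattern described above, ops handing developers security policies as guardrails they can check at the point of coding, is often implemented as policy-as-code. A minimal sketch; the guardrail rules and config fields below are hypothetical examples, not any real tool's schema:

```python
# Hypothetical ops-provided guardrails, evaluated at commit or build time.
GUARDRAILS = {
    "require_labels": ["team", "data-classification"],
    "forbid_public_ingress": True,
}

def lint_deployment(config: dict) -> list:
    """Return guardrail violations for a deployment config, empty if clean."""
    violations = []
    labels = config.get("labels", {})
    for required in GUARDRAILS["require_labels"]:
        if required not in labels:
            violations.append(f"missing label: {required}")
    if GUARDRAILS["forbid_public_ingress"] and config.get("ingress") == "public":
        violations.append("public ingress is not allowed")
    return violations

config = {"labels": {"team": "data-platform"}, "ingress": "public"}
print(lint_deployment(config))
```

The same shape extends naturally to data: ops defines the rules once (required classification labels, allowed data sources), and developers get immediate feedback instead of a late-stage review.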
I mean, of course there are highly regulated ones like banks and healthcare, but across any of them, if you're dealing with things like payment information or personally identifiable information, these sorts of things become really, really critical. And so while you may want to say, hey, let's go a bit fast and loose, get something out there in the gen AI or LLM space, the reality is, if you're not really thoughtful about what data you're using to train the model or where that data has come from, as well as the impacts of what data you're putting into the model, then you can have huge repercussions from a risk standpoint. Bill Walsh, the famous football coach, was asked why he was so successful. He said, you know, I just control the inputs and let the scoreboard take care of itself. He had Joe Montana, Jerry Rice. Controlling the inputs is critical QA with anything. In the football example, obviously the talent drives the score; here, data drives the value. This is becoming another operational linchpin of distributed computing. As a CTO, I'm sure you agree, but other CTOs out there, and other data architects or data engineers, whatever we call them, are going to be thinking about this problem. What's your advice as SuperCloud continues to push, cross-cloud continues to push, this multi-cloud phenomenon, multi-vendor? It's going to look different. What's the advice to CTOs and solution architects out there? Well, here's what we're trying to do. Because, by the way, we are both a producer of software to support customers and companies doing gen AI, and we are also doing it ourselves internally, right? So we're also learning these lessons as the rest of the industry does. And so I think a key point that we talked about earlier is developing those guardrails, having really clear policies. You know, we've got kind of a one-pager that we put out for everyone internally around what the responsible use guidelines for gen AI are.
Not to mention our ethical guidelines. We've got pretty high standards around that. So things like, on the ethics side, explainability, transparency, fairness, trying to reduce bias, these sorts of things. But on the responsible use side as well, being really thoughtful about what data goes in, what sort of systems we can or cannot use, et cetera, et cetera. So I think part of that, getting back to your inputs concept, is kind of defining that sandbox. Okay, here are the guardrails. Go ahead and play inside of that. That's fine, you know, have at it. But we also need to make sure that we protect the company appropriately. And so I think that's what it comes back to. How do you ensure that you're protecting the company appropriately, while at the same time enabling your developers and employees to go as fast as they can within that framework? You've got different needs for consumption. You've got cloud as an enabler. And you've got enterprise needs. This is going to be interesting. I mean, all that coming together, and with the developers leading the charge, I've never seen anything more exciting than something developer-led right now. And you saw the same movement when SaaS started with cloud. That was clearly a step function from a startup trying to provision a data center, buy a box and then test everything. Instead, get on your PC, put it on the cloud, put your credit card down, and you're the next Dropbox or Airbnb. New brands are going to emerge. This is a big wave. What are some of the things that you might see out there? Just from Kit's perspective, put the Kit Colbert CTO hat on, looking out at the landscape. What might emerge that no one might see? I mean, there's just kind of some wild stuff coming out. Like some of the things I've heard about recently, I don't even know the names of some of these companies, but there's one that will just connect onto a Zoom call and listen to your meeting as you're having a meeting with colleagues.
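The sandbox-plus-guardrails idea described above can be made concrete as a pre-flight policy check on anything a developer sends to a gen AI model. A minimal sketch, assuming simple regex rules for obvious PII; a real responsible-use policy would go much further, and the function names here are hypothetical:

```python
import re

# Hypothetical policy: block obvious PII before a prompt leaves the sandbox.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list:
    """Return the names of policy rules a prompt violates."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def guarded_send(prompt: str):
    # Pre-flight check: refuse to call the model if the policy is violated.
    violations = check_prompt(prompt)
    if violations:
        raise ValueError(f"Prompt blocked by policy: {violations}")
    return "sent"  # placeholder for the actual model call

print(check_prompt("Summarize this ticket from alice@example.com"))
```

Developers stay fast because the check runs automatically on every call; the company stays protected because the policy, not each individual, decides what leaves the sandbox.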
And then what it does is take a transcript and summarize it, create bullets for action items and all that, and send it out, right? And so these are really small ways in which work is going to get automated, and where I think we can create a better work environment and be more efficient. But honestly, some of the biggest transformations that are going to happen will be surprises, things we can't even think of. And so I think that's really what goes back to what we were talking about before. To some degree, I as a CTO have got a bunch of ideas, but I'm not necessarily on the ground in the trenches, doing the job of coding day to day or answering customer service queries. And so part of it is, how do you enable those folks to see a problem and say, you know what, maybe there's a large language model that might be able to solve this? You know, it's opportunity recognition. You nailed it. It's a human-AI relationship. And then ultimately it's the constraints and the solution, putting it together, all coming together. This is kind of a Cambrian explosion. Absolutely. It's going to take VMware to the next level. Looking forward to seeing you at Explore. Yeah. Thanks for coming on SuperCloud 3. Oh, thanks for having me. All right, Kit Colbert here, fireside chat keynote, SuperCloud 3, where security and AI meet. This is the next generation cloud connecting everything together. Humans plus AI is better than AI alone. There are a lot of constraints and solutions available, but you've got to figure out which models, how you're going to build these apps. These are new workloads with major impact to cloud operations. This is SuperCloud 3. Thanks for watching.