Okay, welcome back to SuperCloud 3. We're live in Palo Alto for a live event, and we'll mix in some pre-records and some simulive segments. I'm John Furrier with Dave Vellante. SuperCloud 3 is security plus AI, and we have a live remote, high degree of difficulty. John Roese is the CTO of Dell Technologies and a distinguished CUBE alumni. John, great to see you. Thanks for remoting in live to the Palo Alto studio. Dave's here live too.

Hey John, good to see you, man. Good to be here virtually.

It's live, virtual, all fun. We'll keep dialing these events in more and more; it's fun to get them going. SuperCloud 3 is security plus AI. You've been on many times talking about the trends that are important to the industry, and Dell participates in them as a big player, with number-one market share across the board in equipment, from servers to the edge. You guys are the leader in this area. Where are these cloud trends going from your perspective? AI is a big part of it. How are you going to participate in that? Let's start with how you see the big market trends at the 30,000-foot level.

Yeah, we could spend all day on this. There are many trends, but the most topical ones right now sit at the intersection of AI, cloud, and security, in no particular order. Obviously, multi-cloud architectures are becoming much more relevant. People are realizing that there is no single cloud infrastructure that makes up the composite of their entire IT capacity, and even small companies that may start with a single cloud end up building multiple instances of their environment in that cloud, or they move from cloud to cloud. So I think trend number one, independent of AI or security, is that we are in a world where the composite infrastructure of any business is a collection of cloud infrastructure: public, private, on-prem, off-prem, edge, telco, whatever it happens to be. We've gotten to that point. Now the trend is really: how do you make order out of that chaos? How do you not end up with a bunch of silos? How do you make your data move across them? How do you treat them as a system? That's an early shift. Not everybody has figured it out. Not all companies know how to do it. But if you look into the future and ask where you want to be in, I don't know, three years, you want to be able to take advantage of the compute diversity and capability of the collective cloud ecosystem, and you want to do it in service of your business as a platform. That is multi-cloud. So that's number one.

Number two is the gigantic trend happening right now around the acceleration and adoption of AI technology, specifically large language models and more advanced AI. I've recently been quoted saying something along the lines of: the entire cloud ecosystem was not engineered to run this stuff. We've been running AI in cloud environments, but not at this scale, and not as the primary workload. So there is a trend that says even if you get your multi-cloud right, and you built it in a perfect way to run containerized code, web services, and other things, that is not the same workload as AI. An AI workload could be orders of magnitude more intensive. It could be extremely expensive to run if you put it in the wrong place. And so multi-cloud discipline, knowing which parts of the AI system should run in which part of your multi-cloud, has profound consequences if you get it wrong. To give you an example: if I develop an AI algorithm, I can do it anywhere.
In fact, I would recommend you do it in the toolchains of any of the public clouds. They're quite good, they're easy to use, they're simple, and you get started really quickly. Check: okay, we've done the academic phase somewhere. Then we decide, okay, I have this algorithm, but I need to train it. I've selected, say, Mosaic or whatever large language model, but I want to make it an enterprise thing. Ah, well, then it gets a little more complex. Can I move my data to that environment? Is it a public instance, or do I need a private instance? Do I have the money to pay for a large-scale training event, which can be very, very expensive? And more importantly, is this the only time I'm going to do it? Or am I going to constantly retrain this model forever as part of my business model, which means I may not want to be paying on the drip for that. I may want to do it somewhere that's a little more predictable. So again, for training, you'll have to make a decision. And then you get to the fun one, which is inferencing, which says: now I'm going to flow data through this machine that can run at machine speed and consume IT resources at machine speed. Where do I put that? I would make an assertion that you probably want to put it close to wherever the data it's ingesting lives, so that it runs really efficiently, maybe not on the other side of the internet. But even if you put it at an edge, you have decisions. Do I put it in a colocated environment, or actually in my factory? These are all the same kinds of decisions you'd make in building an e-commerce application, but with AI, it's just three orders of magnitude bigger. So there's lots of thinking right now, not only about building multi-clouds, but about how we make multi-cloud work in the era of AI that we're about to enter.

So it's complicated, John.

Yes.

Good to see you again. Thanks for coming on. Okay, so you've got multi-cloud complexity. You bring in AI; multi-cloud security, cross-cloud security, and cloud economics are complicated, and multi-cloud economics makes it even more complicated. So my question is: have you seen a change in patterns in how customers decide where to place workloads?

Yes. In fact, it was a reaction to one very simple thing: their budgets got blown up. Their bills were more than they expected; their utilization was higher than they anticipated. And so about a year ago, we started to see at least the big companies get much more deliberate about the decision of where they would run their workloads. It used to be very much: keep the developers happy, keep the line of business happy. If they want to do it, let them do it, and then try to clean up the mess. And then they realized the mess can actually blow up the economics of your company. So we did see a pretty significant shift. It happened after people experienced a surprise bill, a budget overrun. And I think that's led to a more deliberate use of cloud, and it's a good thing. I'm not advocating for any one cloud or another; put the workload in the best place, economically and technologically. But it turns out that until recently, a lot of people didn't really do that level of analysis. They kind of put it there and then assumed they'd figure out how to afford it or make it work. And to be candid, it hasn't really slowed down innovation at all, and the companies doing this at least have a more predictable cloud experience. And as they go into this AI era, where the bills for building AI systems are quite large compared to what they were doing before, they at least have the mindset of: I have to contemplate affordability, I have to contemplate the best economics, I have to contemplate things like regulatory compliance as part of that cloud decision about placing a workload. So not everybody's there, but you've been looking at this a long time too: three years ago it was blindly put it wherever and deal with it later. That's not the tone anymore; people understand the consequences.
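To make that develop, train, and inference placement logic concrete, here is a minimal sketch of the kind of scoring a team might apply. Everything in it (the environment names, hourly costs, and weights) is a made-up assumption for illustration, not Dell tooling or a real pricing model.

```python
# Hypothetical illustration of the develop/train/inference placement
# trade-offs discussed above. All costs, weights, and environments are
# invented for the sketch.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    hourly_cost: float          # assumed $/hour for the capacity this stage needs
    data_is_local: bool         # is the data this stage consumes already here?
    predictable_billing: bool   # owned/reserved capacity vs. pay-per-use
    managed_tooling: bool       # convenient hosted dev toolchain available
    can_train_at_scale: bool    # enough accelerator capacity for big training runs

def placement_score(env: Environment, stage: str, retrains_forever: bool) -> float:
    """Higher is better. Encodes the rules of thumb from the discussion:
    develop wherever the tooling is best, train where the data and budget
    allow, run inference close to the data it ingests."""
    score = -env.hourly_cost                  # cheaper is better, all else equal
    if stage == "develop" and env.managed_tooling:
        score += 100.0                        # public-cloud toolchains make starting easy
    if stage == "train" and retrains_forever and env.predictable_billing:
        score += 50.0                         # constant retraining favors fixed costs
    if stage in ("train", "inference") and env.data_is_local:
        score += 100.0                        # avoid shipping data across the internet
    return score

envs = [
    Environment("public-cloud", 40.0, data_is_local=False, predictable_billing=False,
                managed_tooling=True, can_train_at_scale=True),
    Environment("colo", 25.0, data_is_local=True, predictable_billing=True,
                managed_tooling=False, can_train_at_scale=True),
    Environment("factory-edge", 10.0, data_is_local=True, predictable_billing=True,
                managed_tooling=False, can_train_at_scale=False),
]

for stage in ("develop", "train", "inference"):
    candidates = [e for e in envs if e.can_train_at_scale] if stage == "train" else envs
    best = max(candidates, key=lambda e: placement_score(e, stage, retrains_forever=True))
    print(f"{stage}: {best.name}")
```

With these toy numbers the lifecycle lands in three different places (public cloud to develop, colo to train, edge to infer), which is exactly the deliberate, per-stage decision being described.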
I think that's a good call-out on the old cloud model, which was a lot of portability discussions, workload management: check, check, check, you guys have been there, done that. On the AI side, I love your angle on this whole dynamic, the power dynamics between cost, performance, and stage of evolution, as the builders come in, then the hosting and running of the runtime, and then you operationalize AI. It's the same wine, new bottle. So you've got sustainability, you've got as-a-service, you've got workload portability issues, and then you've got cost and AI. So contextual, behavioral, and now you've got training and inference. So I have to ask you: do you see any legitimate player, or any way customers can compete with NVIDIA, for instance? There's a lot of NVIDIA demand right now, shortages, and you've got cloud services offering their own chips, Inferentia from Amazon, for instance. You guys have relationships with Intel, NVIDIA, AMD, all these guys. Are there any alternatives on the GPU side that are going to help move the needle?

Yeah, look, we're in a privileged position. We're kind of Switzerland. We don't own VMware anymore; they're just one of our partners, a good partner, and we also work with all the cloud hyperscalers and all the semiconductor players. So we like them all. And legitimately, when you understand the AI ecosystem, what you realize very quickly is that there is an extreme diversity of AI workloads, even among large language models. I mean, running RoBERTa as a chatbot in front of some automation function, you can do that on a single server at an edge. It doesn't require a lot of capacity. Building a giant chatbot for your entire enterprise, a general-purpose LLM, might need much larger infrastructure. And so we're actually feeling pretty good. There's NVIDIA, which is probably the best-executing company in the semiconductor space right now on accelerators, but there are other places and other entities that can fill gaps or address other layers of the AI stack. By the way, a lot of the edge AI and the simpler AI use cases you can run on a CPU; you don't need an accelerator. We have plenty of examples of that. In fact, you can run them on a Precision workstation or a PC. So we don't see it as there being only one semiconductor layer for this. In fact, semiconductor diversity works in our favor for two reasons, well, three reasons if we throw in supply. The third one is you might not be able to get the parts, so having supply diversity is pretty important. Number two is performance. You don't necessarily need the performance of the top-end NVIDIA chip to do something that doesn't require it, and quite frankly, you'd probably benefit from having it on a smaller-footprint part. And the last is power.
You mentioned it earlier. Customers constantly ask us: is this AI evolution we're about to go through going to break the planet? Is it going to create more energy demand than we can keep up with? Is it going to basically work against every sustainability metric? Our belief is, look, you have a fairly wide range of diverse accelerators, and they vary in their effective MIPS per watt for an AI function. An NVIDIA chipset is more of a general-purpose processor. It has pretty good power and pretty good performance; it's good on all dimensions. But there are specialized chips emerging, neuromorphic processors and some of the four-bit-precision parts that are out there, and even some of the hyperscalers have optimized silicon; a TPU is different from an NVIDIA GPU. And it turns out that for a specific workload, choosing the semiconductor may not be a function of price or even availability. It may be that it's the most energy-efficient place for you to run your code. Today that might not be the primary reason people choose an infrastructure, but over time, especially in Europe and in developing markets, sustainability is a very important criterion in selecting how you do IT. So we actually do see silicon diversity. We give NVIDIA big kudos for being out in front and bending the curve on performance, but there are a lot of players in that pack, and they're picking up other areas. AMD just made some nice announcements, and Intel obviously has the Gaudi chipsets, with Gaudi 2 coming out. The hyperscalers are providing different chipsets, and the TPUs continue to exist. Overall, between the accelerator ecosystem and the CPU ecosystem, we have a pretty good pool that gives us lots of choice across these dimensions of availability, performance, and environmental impact. And, back to what Dave said, it's complex and you're going to have to pick and choose. But if you want a sustainable AI strategy, you're going to have to get into the detail of not just picking which cloud or which IT architecture you run on, but whether you can run this on a more efficient semiconductor that will, over time, result in much less environmental impact from your AI activity. So it's an interesting race right now, but it's definitely not a one-horse race. It's an ecosystem of semiconductors all moving in the same direction.
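That "MIPS per watt" framing reduces to a simple selection rule: among the parts fast enough for the workload, pick the most energy-efficient one. Here is a hedged sketch; the accelerator names and the throughput and power figures are invented placeholders, since real numbers vary enormously by model, precision, and chip generation.

```python
# Hypothetical accelerator selection for the sustainability point above.
# Throughput and power figures are invented placeholders, not benchmarks.
accelerators = {
    # name: (relative inference throughput, watts drawn under load)
    "general-purpose-gpu": (1000.0, 700.0),  # fast at everything, power-hungry
    "inference-asic": (600.0, 150.0),        # narrower, but efficient for this job
    "server-cpu": (50.0, 250.0),             # fine for small edge models
}

required_throughput = 400.0  # what this particular workload actually needs

def efficiency(name: str) -> float:
    """Work per watt for this workload: the 'MIPS per watt' framing."""
    throughput, watts = accelerators[name]
    return throughput / watts

# Keep only parts that can keep up, then pick the most energy-efficient one.
viable = [name for name, (tput, _) in accelerators.items() if tput >= required_throughput]
best = max(viable, key=efficiency)
print(f"most efficient viable part: {best} ({efficiency(best):.2f} work units per watt)")
```

With these toy numbers the specialized part wins even though the general-purpose GPU is faster, which is the sustainability argument for silicon diversity in miniature.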
John, I'd like to get your perspective on how you see customers using AI, specifically as it relates to security, because we're talking about supercloud, AI, and security at this event. When you talk to customers about how they're using AI, they go right to: it's helping us write code, summarization, ideation, writing marketing copy. So obviously they're using it; ChatGPT has just overwhelmed the discussion. But I'm interested in where IT is in terms of understanding the use cases for AI specifically as it relates to things like zero trust.

Yep. Kind of left behind in the dialogue are some of these really important areas like security. Let me give you a couple of examples. One of the most interesting ones is that generative AI systems are really good at automating content creation. If you actually look at the behavior inside a SOC, at somebody doing security work as an analyst, a big portion of their time goes to generating reports and content about events that happened. They publish what they do, effectively. So one of the things we're excited about, in a very general sense, is generative AI copilots that automate the content production documenting a security event and do all that rudimentary work. That might actually bend the curve in freeing up time and help solve some of the human-capacity issues we have in just finding people to operate security environments. So that's kind of a nuanced one.

The second point, though, is that we do have a problem right now. Not to scare everybody, but if you look into the future, the application of AI in security and IT is potentially going to lose if we don't start to think differently about the relationship between good and bad actors. What I mean is that today, most of the application of AI in the security space is around copilots: a human in the middle, surrounded by a bunch of automation that makes them more effective. Now, that's good. It does make them more effective. That's the good guys. You know what the bad guys are doing? They're taking the attack and fully automating it. There's no human in the loop. It is literally going to be a race between a machine and a person with a few machines helping them. Now you do the math: which one of those can move faster? Which has more capacity? At RSA this year, this was a big discussion. We know we need a human in the loop; we know we need humans to make decisions. But the bad guys don't, and if the bad guys don't do that, we're going to have a performance mismatch. So one of the biggest challenges we have right now in security and AI is figuring out how to shift more of the full security function into automation with machine intelligence, because that's what the bad actors are doing, just to stay at parity with them. And that's a really traumatic experience for people, because how do you trust the system?

Now, you brought up zero trust, and one of the things zero trust does to enable that is this: instead of perpetually reacting and having a human interpret events, a zero-trust environment says everything is authenticated and understood. Policies are about defining the known-good behavior, not preventing the known-bad behavior. And because you have very authoritative behavior that you're instructing the infrastructure to allow, you have the ability to put AIs in place that just do that. They know what the rules are, they just run, and humans don't even see what they're doing until they've done something. That becomes much more predictable. It's much easier to ask an AI to understand the known and enforce it than to interpret the unknown and act on it. So zero trust is a key technology that architecturally lets us push AI into a more front-and-center position around automation, and the reason we have to do that is that the bad guys are going to attack us with machines, not people.
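The "define the known good, not the known bad" idea is what makes unattended enforcement tractable, and it fits in a few lines. Below is an illustrative deny-by-default evaluator with invented identities and resources; it sketches the concept, not Project Fort Zero or any shipping policy engine.

```python
# Illustrative zero-trust policy check: the policy enumerates known-good
# behavior, and everything else is denied, so an automated enforcer never
# has to interpret unknown-bad activity. All names here are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    identity: str   # authenticated principal making the request
    action: str     # what it is trying to do
    resource: str   # what it is trying to do it to

# Known-good behavior, stated positively. Anything absent is implicitly denied.
ALLOWED = {
    ("build-pipeline", "read", "artifact-store"),
    ("build-pipeline", "write", "artifact-store"),
    ("inference-svc", "read", "feature-db"),
}

def enforce(req: Request) -> bool:
    """Deny by default: no interpretation of the unknown, only enforcement
    of the known, so a machine can run this unattended at machine speed."""
    return (req.identity, req.action, req.resource) in ALLOWED

print(enforce(Request("inference-svc", "read", "feature-db")))   # True
print(enforce(Request("inference-svc", "write", "feature-db")))  # False: never allowed
```

Because the policy is a closed allowlist, the enforcer never has to classify novel behavior; anything outside the set is simply denied.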
That's a great point. I want to ask a quick follow-up question. How does what you just said impact the customer's decision to move from public cloud to hybrid multi-cloud? Because, again, that's more surface area. Okay, I get the zero-trust piece, and like you said, the bad guys are going fast. That's a pro game. The speed between college ball and pro ball is two different things, as we've been saying on theCUBE. So you're talking about pro-level security, where the velocity is there, and it's hard to compete if you're slow. So how does it impact workload placement and portability choices? What does that mean for multi-cloud choice?

Yeah, well, it turns out that in the multi-cloud dialogue, one of the decisions you have to make is which pieces of your architecture are not inseparably bound to a particular cloud. Multi-cloud is not just a collection of clouds. It's a collection of clouds plus the things that turn them into a system. And this is a debate we've been having; we have a strong opinion. We think certain storage layers ought to be horizontal. We definitely think edge ought to be a common platform. We think things like cyber recovery ought to be horizontal. But an interesting one is zero-trust security: the control plane for security. If you want to do multi-cloud right, and you want to be able to do what I just talked about and automate and control it as a machine, you cannot have a collection of security control planes, one in each of your clouds. So for three very important functions, you almost have to make a conscious decision to treat them as an overlay: identity management, policy management, and threat management and detection. Those have to be an independent, authoritative control plane over any infrastructure you use. And that's true whether it's a hybrid environment, a single cloud extending to the edge, or a complex multi-cloud system, if you want to do zero trust. We're building a full-on zero-trust implementation that nobody's ever built before, called Project Fort Zero. But if you ever want to get there, one of the things we tell people right now is: get your control plane in order. If your control plane isn't separable and authoritative across your whole collection of infrastructure, not bound to each individual one, you'll never really be able to describe end-to-end behavior. And if you can never describe the end-to-end behavior, then back to the previous point, you will never fully automate it, because all you'll be doing is automating silos and then reconciling behavior between them. So the path people need to take to even be able to compete runs through the control plane. Identity, policy, and threat management, whether you do zero trust or not, have to exist across your cloud estate, not as a function of the individual clouds themselves.
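Architecturally, the overlay being described comes down to one authoritative identity, policy, and threat plane with thin per-cloud adapters beneath it. Here is a minimal sketch under that assumption; every class and method name is hypothetical, chosen only to contrast an overlay with per-cloud control planes.

```python
# Sketch of a security control plane as an overlay: identity, policy, and
# threat decisions are made once in a single authoritative place, and
# per-cloud adapters only translate enforcement. All names are hypothetical.
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Thin per-cloud translation layer; it holds no policy of its own."""
    @abstractmethod
    def apply_rule(self, rule: str) -> None: ...

class PublicCloudAdapter(CloudAdapter):
    def apply_rule(self, rule: str) -> None:
        print(f"[public-cloud] translated and applied: {rule}")

class OnPremAdapter(CloudAdapter):
    def apply_rule(self, rule: str) -> None:
        print(f"[on-prem] translated and applied: {rule}")

class OverlayControlPlane:
    """The single authoritative control plane. A rule is stated once and
    pushed to every estate, so end-to-end behavior can be described and
    automated instead of reconciled silo by silo."""
    def __init__(self, estates: list):
        self.estates = estates
        self.rules = []

    def define_known_good(self, rule: str) -> None:
        self.rules.append(rule)
        for estate in self.estates:  # same rule everywhere, no per-cloud fork
            estate.apply_rule(rule)

plane = OverlayControlPlane([PublicCloudAdapter(), OnPremAdapter()])
plane.define_known_good("identity:inference-svc may read feature-db")
```

The contrast is with keeping a separate control plane inside each cloud, where the same intent must be restated per silo and the silos reconciled afterward, which is exactly what blocks full end-to-end automation.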
Okay, we've got one more. Dave, one more question.

Sorry, I thought you had one more. No. I've got to ask you, John: based on what you just said, with all the technical debt, all the inertia, and all the innovation on the technology side from the technologist community, ultimately, who does AI benefit more, the attackers or the defenders?

In the security world today, it benefits the attackers. We don't like to talk about it, but it allows them to move faster, at a speed and scale we've never seen before. We're already seeing that. Defensively, we've used it too. We do great work on fraud detection and event correlation with AIs, and that's kept us treading water. But over the long term, again, if the fight is between a machine and a person with a few machines helping them, and it's a volume fight, because that's what cyber is about these days, you're going to lose. And so we've got to find a path to being comfortable shifting more of the work into the machine layer. By the way, if that sounds like a broken record, it's the narrative with all AI: we've got to find a comfortable way to let it run our supply chain, run our finance systems, or do customer support. That's the shift. Let the AI take over in a way that we trust, rather than us just sitting in the middle of the process.

I think I saw that episode on Star Trek years ago, the AI war. Final question, real quick; we've got about a minute left. When you talk to top CISOs and the CEOs of, say, big banks, you're in there giving them the future, and they ask you the question: as a Dell customer and a long-term competitive strategist who has watched you guys compete over the years, where will you be in a few years? Why will you be around and still be the Dell Technologies we know? What answer do you give them?

Yep. Well, specifically for multi-cloud, AI, or security: those three domains are not solved with a single product. You cannot solve an AI problem with a single box. You cannot solve it with a single technology, or even a single cloud, interestingly enough. And one of the things we've done strategically as a company is turn ourselves horizontal. We said, not only do we have a broad line from PCs to servers to hyperconverged to APEX offerings, but we also have an ecosystem. If you saw Dell Technologies World this year, it was like a parade of CEOs: the CEO of Microsoft, the CEO of NVIDIA, the CEO of Red Hat, all showing up saying, hey, we work great with Dell. Because if you think about it from a customer perspective, your job right now isn't about picking a specific technology. It's about herding this really complex ecosystem of big, giant companies that all have their own opinions and don't necessarily work well with each other. It turns out they work well with one company, and that company is Dell. So we find ourselves in a privileged position: if you're trying to solve an AI problem or a multi-cloud problem, or even navigate zero trust, you've got to have an anchor somewhere, and we've carefully positioned ourselves to be that kind of foundation, horizontally, across all of these ecosystems. So hopefully we'll go on that journey with customers and help them build a collection of clouds working as a system to be the platform of their IT environment. And I think that's playing out in real time as we speak.

John, always a pleasure, always a masterclass with you. Thanks for coming on theCUBE. We really appreciate you sharing the insight and the data here on security plus AI at SuperCloud 3, our third edition. Again, thanks for remoting in, and we'll see you soon.

Yeah, great. Thanks very much for having me.

SuperCloud 3: up next we've got the EVP of Cisco Security, Jeetu Patel, and Tom Gillis from Cisco. SuperCloud 3, stay with us.