Hi everybody, welcome back to VMware Explore 2023. We're live here, and we've got a super panel coming off our super studio with a bunch of thought leaders from IBM. Scott Baker's here; he's the chief marketing officer and VP of the IBM infrastructure business. To my left is Pete Brey, the global product executive for IBM, and Brandon Mann is the product management leader for Storage Fusion at IBM. What's on that shirt, Brandon? What do you got there? We got a little watsonx representation here. In our super studio event, the IBM Storage Summit that we had last month actually, wow, it seems like just yesterday, we had a fantastic day-long event, with Vincent Hsu on talking about watsonx, sort of educating and hopefully inspiring people about the future of data and storage and what's possible in AI. And so Scott, I mean, it's been an amazing summer, the busiest summer I can remember, right? But we're here at VMware. You guys have had a long, long relationship with VMware. How's the show been for you? So I think the show has actually been really great. The foot traffic has been pretty amazing. I think the graphics that we've seen here have really upped the game from what I've seen previously, but more so the content. It was really interesting to see the focus, to your point, around generative AI as one of the foundational, no pun intended, topics that they're having here, and how you can take advantage of what virtualization or containers can do for the purchases that you make in the infrastructure to host those AI-based workloads, especially as you begin to think about the workload all the way down to the bottom of the stack. Yeah, and the big three themes that we've been talking about here all week: Broadcom, obviously, the M&A piece; multi-cloud, which is near and dear, Pete, to your heart; and of course AI.
And as it relates to the M&A front, we've said, look, under Broadcom, they'd be much more focused. We don't expect that VMware or Broadcom is going to go out and buy some AI company like Databricks did or Snowflake did. Rather, they're going to look to partners like you guys, right, Brandon? I mean, that's where potentially you can add value. If you think about what VMware has announced, they've got a lot of work to do. Private AI, that's sort of your wheelhouse. So how do you see that playing out, and what role do you think IBM can play with customers? Yeah, I mean, I think there's just a lot of excitement over generative AI overall, and it's going to be a momentum shifter in the industry in general. And I think each one of the companies is going to have a part to play in that. IBM, definitely, from an infrastructure standpoint, we're going to have a big part in it. And then obviously with watsonx, we have a huge aspect of what we're trying to accomplish from an AI-for-business standpoint, right? So a lot of the same topics that were discussed in the keynote, I think they're right in line with a lot of the pieces that we're focused on, and I think there's a lot of opportunity for partnership there. And Pete, the other big theme is multi-cloud. We kind of showed up and everybody had multiple clouds, and, oops, now we've got to clean this up. Obviously, you with your Red Hat heritage know a lot about that, and customers are starting to get more deliberate about solving that multi-cloud complexity. VMware obviously is a partner and wants to play there, but customers need to simplify so that they can invest in new initiatives like AI. Absolutely, and we're seeing that.
We're seeing the maturity level as people approach these problems: the multi-cloud problem, but even thinking beyond that toward modernizing, and then moving toward transforming their businesses, using multi-cloud but growing into that next true hybrid cloud, or supercloud, as you like to talk about. I do. But also the data and AI side of it, and the collision of those two, moving toward transforming these organizations and how they operate. That's what's really interesting and what's happening right now. Scott, one of the things we talked about in our super studio event, the IBM Storage Summit that we had in our Palo Alto studio in July, was ransomware. It was. And we see data protection as an adjacency to cybersecurity. I mean, it's not that storage is turning into a cybersecurity company or division right now, but it's a key component. And one of the things, let me make a couple of observations and get your thoughts. Pre-pandemic, it was all about DR. Once every 10 years we're going to have a hurricane or a fire or some disaster; the probability is low, say one in 10 years, but the impact is really bad. Now, all of a sudden, pandemic comes, post-pandemic, and the probability of a ransomware attack goes through the roof, and the impact is potentially just as bad, maybe worse. So you've got this adjacency: ransomware protection, resiliency, whatever you want to call it. And you talk to customers about things like the NIST framework and the MITRE framework, and they sound really good, but customers struggle to operationalize them. But your ransomware solutions and what you're doing with customers are an example of how they can operationalize at least one piece of their cybersecurity strategy. They can test it, they can test recovery, they can sort of check that box, if you will, and really get to the other parts of their estate. Are you seeing that customers are aware of that and are beginning to operationalize ransomware recovery to add business resiliency?
Well, you know, look, let's connect the two topics together, right? You just talked about AI as being heavily data intensive. Pete actually talked about the importance of mobility, maybe of the application stack as well as the data itself. And now you're bringing in this topic around ransomware, right? So what a perfect storm to come together. One of the things that we did release coming up to this event was the notion of being able to do inline data corruption detection: actually using entropy to determine behavioral changes in the data set itself as it's transferring between the host and the back-end storage array, and then responding appropriately, whether that is to alert someone or kick off an automated runbook response. So as you begin to think about these large language models, which I think are interesting topics to bring up, as businesses try to operationalize AI, you're going to see more focused repositories of data, where the information they're using to train the model is going to be verified and validated for accuracy and veracity, maybe even monetized in terms of its relevance to the business itself. And I think those become the new attack surfaces that ransomware will go after, because I don't have to take the large language model down to have an ill effect, if you will, if I can just inject bad data into it. And so the ability to use entropy to detect behavioral changes in the pattern of how data moves from the host running that AI model to the back-end infrastructure, and then take appropriate action, that's going to set infrastructure vendors apart from the run-of-the-mill vendors out there doing the traditional "hey, we make big storage components for you." At IBM, our focus is: don't just host the AI, bake the AI into the actual infrastructure itself. But entropy is winning right now. You've got this randomness.
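The entropy signal Scott describes can be sketched in a few lines. This is a minimal illustration of the general technique, not IBM's actual implementation; the block size and the 7.5 bits-per-byte threshold are assumptions chosen for the example:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(block: bytes, threshold: float = 7.5) -> bool:
    """Encrypted (or compressed) data is near-random, so its entropy
    approaches 8 bits/byte; a sudden jump across many blocks in a write
    stream is a classic ransomware tell."""
    return shannon_entropy(block) >= threshold

# Typical text sits well below the threshold; random bytes sit just under 8.0.
plaintext = b"the quick brown fox jumps over the lazy dog " * 100
random_like = os.urandom(4096)
```

A real inline detector would track entropy per volume over time and only alert, or kick off a runbook, on a sustained behavioral shift rather than a single high-entropy block, since legitimately compressed or encrypted workloads also score high.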
It's almost like people have said to me, ChatGPT is actually getting worse. I'm like, well, yeah. Right, I mean, the digital gene pool is only so big for ChatGPT, and after a while you'll start getting inbred responses from that model. So embedding AI into infrastructure to make multi-cloud run better and infrastructure run better is kind of a no-brainer. As well, you guys have a lot of experience in data. We built a model, I guess you would call it, of a power law for AI, where the vertical axis was the size of the model, so you've got these big models like ChatGPT and Llama, et cetera, and the horizontal axis was domain specificity. And the premise is you're going to have a long tail, and a lot more of those. A lot of these power laws have no torso, but we think with open source you'll have that torso. But again, this seems to be an area where IBM is going to be very strong: that domain specificity, smaller models, very focused, maybe more controllable, driving very specific business value. Thoughts on that? Yeah, so I mean, obviously that's a big play of what we're trying to do with watsonx.ai: really bringing the AI to your business domain, right? So being able to take those large language models, but then bringing them in-house and doing fine-tuning on your own data. And we're very much involved in that from the storage and infrastructure aspect, in that all of that is run on OpenShift, right? All of our watsonx. So we have the entire stack there, from our Fusion offering, which includes Fusion HCI, inclusive of GPUs, so customers can run that whole stack and incorporate it in their own data center, within the security of their own walls, and bring in their data to fine-tune those large language models, whether it's Llama or other models that are coming out, and make them more attractive and differentiated from other companies.
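Dave's "power law for AI" chart can be written down as a pair of simple functions: model size falls off with domain specificity while the number of models grows, giving the long tail. The functional forms and constants below are purely illustrative, not fitted to any real data:

```python
# Toy power law for AI models. As domain specificity s (>= 1) increases,
# model size decays as s^-alpha while the count of distinct models grows as
# s^beta: a handful of giant general models on the left, a long tail of
# small, focused models on the right. Open source is what fills the "torso"
# between the two extremes. All constants are invented for illustration.
def model_size_b_params(s: float, c: float = 500.0, alpha: float = 1.5) -> float:
    """Rough parameter count (billions) of models at specificity s."""
    return c * s ** (-alpha)

def model_count(s: float, k: float = 2.0, beta: float = 1.2) -> float:
    """Rough number of distinct models at specificity s."""
    return k * s ** beta
```

The point of writing it this way is just that both curves are monotone: as you move toward domain-specific territory, each model shrinks but there are many more of them.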
And your strategy is to sort of be all of the above, right? You're going to have your own AI, you'll work with other open source tooling; optionality for customers is the theme. I want to confirm that, right? That's the strategy. Yeah, I think you heard it right. Our partnership with Hugging Face, I think, very much is leading the charge around openness. And then you're going to do that across clouds, right? When we started playing around with supercloud, I used to think, all right, at some point the applications are going to run on multiple clouds, single global instance. And then I sort of pivoted, swung my pendulum brain, and said, eh, maybe that's not going to happen, it's too complicated. Then AI comes in and it's like, well, maybe it is, actually. Yeah, and it's interesting how we might be able to use AI to work around some of the physical dimensions of that, you know, just speed-of-light problems, right? And being able to chop it up and, you know, take advantage of the hybrid cloud, the multi-cloud approach. I think that's really the direction things will head. Yeah, and so you've now got the edge coming in as well. Right, well, I think it's important to define what the edge is. Yeah, so let's talk about that. It seems like the public cloud players have the advantage of, you know, speed, innovation, all that, and then the sort of hybrid cloud players, if I can call them that, like I put you guys in that camp, have the advantage of understanding legal compliance, industry expertise, you know, true enterprise chops. And then edge is this Wild West. So how do you think about edge? How do you define it? Some people say, well, it's not necessarily a place; it's sort of what is and isn't available at the edge. How do you think about it?
You know, for me, a lot of times when we have conversations around edge-based computing, I like to think of it more as data center edge: how far out can you extend the data center to the point at which the data can not only be collected but also be processed, away from the core infrastructure that makes up whatever organization you're supporting. Other people you talk to may extend that edge all the way out to a sensor sitting on a tower, collecting information. But the thing that's really important for me is that the first point of collection and processing, to me, is truly the edge of the business, right? Otherwise, you just get one machine telling another machine, hey, the temperature is this and I'm okay, right? But the moment you can collect that data and process it, that's the key: then you can determine whether value can be driven from it, or it's just noise and we don't need it. I think that kicks off your data and AI workflow to then determine, does that data need to come back into the business? Okay, so let me unpack that a little bit. So you're saying a lot of times the data is ephemeral, who cares, let it go. Other times you're inferencing with AI and you need to take action. That's right. And so you need local processing power, you need low latency to do that, obviously low cost, low power. And I think, I'm inferring, you're saying that's when you would persist the data. The determination of persistence needs to occur at the first point of ingest and processing. That's what I believe, right? And then that goes back somewhere, cloud or a data center, for the model, for the training, right? That's exactly right. Otherwise, if you amass every bit of information, you're just asking the AI that you're training, or the model you're trying to put together, to sift through all of that noise.
So the more you can do to clean the data up and really assign value or relevance to that information, that's when the initial stages of AI kick off, where it's making a determination: based on how I've been trained, is this information valuable to the business? And then that kicks off this information supply chain that other models are going to work from. And then that modeling occurs somewhere, data center, cloud, and then goes back out, and you have that virtuous cycle. Yep. I believe so. Do you think? So most of the AI today, I'm going to make a statement, tell me if I'm wrong. Most of the AI today is modeling that's done in the cloud, model training. And over time, that's going to flip, and much more activity is going to occur, let's call it inferencing, at the edge, however we define that. Right. There's going to be actually more data flowing. I'm not sure there's going to be more data persisted, but maybe there will be. But it's almost like that will flip, where the activity will be distributed and there will be more inferencing happening at the edge, maybe even by the end of the decade. You guys, what do you think about that? Does that make sense to you, or am I nuts? Yeah, no, it makes a lot of sense when you think about it from an efficiency standpoint. Everything Scott is saying about making those decisions at the point of ingest and doing your inferencing at the edge, I can totally see that model developing. Yeah, and think about what you said: a lot of it has begun in the cloud. And if you think about that, the access to large amounts of infrastructure, and likely large amounts of data, makes it much more convenient to do that in the cloud versus trying to create the same experience on-prem.
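The ingest-time triage the panel describes, deciding at the first point of collection whether a reading is noise to drop or a signal worth persisting and sending back for training, might look something like this hypothetical sketch. The sensor schema and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float

def triage(reading: Reading, expected_c: float = 21.0, tolerance_c: float = 2.0) -> str:
    """Decide persistence at the first point of ingest and processing.

    In-range readings are the 'temperature is this and I'm okay' chatter the
    panel says can be let go; out-of-range readings are worth persisting and
    shipping back to the data center or cloud for model training."""
    if abs(reading.temperature_c - expected_c) <= tolerance_c:
        return "discard"
    return "persist"

# Only the anomalous reading survives the edge filter.
batch = [Reading("tower-1", 21.4), Reading("tower-1", 29.8), Reading("tower-2", 20.2)]
kept = [r for r in batch if triage(r) == "persist"]
```

A real edge pipeline would replace the fixed threshold with an inference call against a locally deployed model, but the shape is the same: filter at ingest, and only what carries value flows upstream.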
At IBM, from an infrastructure perspective, we like to talk about deploying cloud architectures on-prem, or infrastructure that behaves like a cloud would and connects easily to different kinds of clouds. But the ability to spin up very rapidly a huge object-based back-end storage environment on-prem that's not already servicing some other workload might be a little bit difficult for businesses. So will that flip-flop? I think it will the moment you begin to isolate and create data that has true relevance for whatever you're trying to do from an AI perspective. On-prem, there's absolutely no reason for any business to try to recreate the Internet just to have a large language model; that wouldn't make any sense whatsoever. But I think what AI is going to do is force higher degrees of data hygiene and responsibility for data onto people and organizations that haven't necessarily had to be as thoughtful about it previously. And you guys have a lot of assets that you can bring to bear, whether it's infrastructure, file systems, AI, or partnerships and assets that you own, like Red Hat. How does that all come together for your customers to drive value, from an infrastructure standpoint and across IBM, you know, globally? You know, I know a couple of guys at the table who have good opinions about that. I think Brandon could kick us off. Yeah, so I mean, I think what most customers are looking for is a solution, right? They're not looking for an individual bag of parts. They're looking for: how do I accomplish my end goal? How do I drive business value? And that's through the entire end-to-end stack and process. And I think that's really where IBM thrives, right? We have the infrastructure. We have software. We have consulting. And we have great partnerships.
So among all of those, you can bring them together and really offer a differentiated solution for customers, and that's what they're looking for, right? That integration point, a full solution versus individual pieces. And it's so true, Brandon. And unfortunately, a lot of those individual piece parts get purchased for whatever reason, because, oh, it's best of breed, or it's the shiny new toy, and then somebody's got to go clean it up. Anything you'd add to that? I mean, you talked about entropy earlier. The challenge right now is the noise: assembling these solutions, building them, and then maintaining them over time is very challenging. And that's one of the things we're really focused on: building the complete solution but making it simple to consume. There's just so much change happening on all fronts, with respect to AI, with respect to generative AI, with respect to even the modernization that's happening. The technology is changing too quickly; the skills aren't keeping up. So the demand to make it simple and easy to consume is readily there, and that's why, with the breadth of portfolio that we have, we're really focused on how we make it simple and democratize it. Yeah, I would extend that to say radically simple. How do we make it radically simple for people? The faster you can deploy the appropriate type of infrastructure component, whether it's physical or software-defined, the better. And in fact, these two gentlemen here have done a tremendous job, even on the watsonx base, in making sure that we include available capacity for people, via Ceph or via the Fusion foundation services, so that when you buy into watsonx, for example, you don't have to worry about where you're going to run this thing. The storage is baked in; it's ready to go for you. And I think that speaks to what Brandon said around solutions for people.
Certainly there are parts when you look at them on the table, but it's our job to make sure we're presenting those useful kinds of scenarios where customers can pull those parts together and actually deploy them to solve real-world challenges. The move to reduce IT labor has been going on forever, but it really started in earnest when we began thinking about converged infrastructure. We started to attack it then, but really the goal is to eliminate it. Completely agree. I mean, I think successful infrastructure for AI is infrastructure that is invisible. And it's hard for me to say that. It actually hurts the back of my throat. You twitched when you said that. But I mean, it's true, because the more you force the organization, from an AI workload perspective, to think about the back-end infrastructure, and keep in mind there is a responsibility to think about how you design it, but if you have to constantly be aware of it and constantly, you know, pander to it, then it sort of ruins the value that AI is supposed to bring to the business. And if you can achieve that invisibility, it actually gives you a foundation to deliver that business value on top. Ultimately, I think the biggest nut is going to come down to reducing these mundane tasks, reducing the need for people to do things they don't want to do. And does that mean, you know, job cuts? Maybe, maybe not; that's not really the point. The point is it's going to allow organizations to be much, much more productive. We're actually already seeing it a little bit in the productivity numbers. Do you remember when the PC revolution came, productivity went through the roof? We've been starved for productivity growth in the last decade globally, and we're starting to see it pop up a little bit. That's the promise of AI. It's got to deliver on that promise, you know? If it doesn't, then there are going to be some real challenges, economically, maybe socially.
I don't know. Yeah, you've got to be careful, Dave. We may come to this table one day and there's just a screen in your place with questions that pop up for us to respond to. That's why you're smirking. Well, you know, if that happens, then there's going to be a role for me somewhere. That's right. I'd better get creative. Guys, thanks so much. Yeah, thank you. This was a fantastic panel, a follow-on to our Super Studio IBM Storage Summit, which is on thecube.net. Go to thecube.net for all of our videos, siliconangle.com for all the news. We're live here. You're watching theCUBE from VMware Explore 2023, day two. Dave Vellante, John Furrier, Lisa Martin. We'll be right back.