Welcome back to SuperCloud, our fourth SuperCloud, and with me is my colleague and friend Stu Miniman, Senior Director of Market Insights with Red Hat, and of course a former superstar on theCUBE. Stu, good to see you again. How are you doing?

Hey, great to be here. Thanks for having me on.

You're welcome. So SuperCloud is happening in Palo Alto on October 24th. Of course, this is a prerecord; we're going to drop this into the stream. As you know, it's a live event, and people come into the studio, all friends of theCUBE, local Palo Alto people, and we get the ecosystem speaking. So let's speak. The theme here is AI and the transformation of industries. But let's go back to the beginning, because you don't love the term SuperCloud, but we talk about multi-cloud complexity. What are you seeing out there? Is it still largely monocloud with a little bit of peripheral stuff, by whatever, M&A or happenstance? Or is there really a multi-cloud trend happening?

Yeah, so, you know, Dave, I have an aversion to the term sometimes just because I've got the scars from all the battles: nobody liked the term cloud, and when you talked about hybrid cloud or multi-cloud, what did that really mean? Big data, same thing. When I talk to customers, Dave, what's the reality today? Change is constant. They have stuff in their data centers. They have stuff in colos. They're using more than one public cloud, and of course AI is making things even more distributed, because the edge is even more important. So applications are definitely distributed, and when it comes to the use of cloud, there's this interesting dynamic. On the one hand, the reason I go to a specific public cloud isn't to get the cheapest compute at volume.
It's that they've got hundreds of cool services that I want to take advantage of, but the more you take advantage of proprietary services, the more you're stuck on what they're doing. So I want to take advantage of things, but I also want the flexibility. A year ago, if I was making a plan, I wasn't thinking about OpenAI, and Microsoft, Google, and Amazon are constantly building new things. So most customers have multiple clouds. They're still figuring out how much to use those services and make things specific to an environment, versus what we've been working on at Red Hat since the early days: first we put Linux in all the clouds, and then we put OpenShift, our Kubernetes, in all the clouds, in the data center, and out at the edge, to give you that consistent experience everywhere. So it's a balance, because containers actually allow you to take advantage of a lot of those underlying capabilities, as opposed to the generation before, PaaS, where I could sit on top of environments but I kind of got the least common denominator if I was using those other services.

So you want best of breed, okay, but the developer wants, to the extent possible, a consistent experience. He or she doesn't want to have a different experience to secure the cloud or deploy in the cloud, and of course that's often the case. And what about compliance, single sign-on, identity, and so forth? Those are all sort of different. But AI, the AI heard around the world as I say sometimes, created a real rush to best of breed, which has really been OpenAI and Microsoft.

Yeah, and Dave, I'm glad you brought up that developer experience. What we hear from the developer community so much is that developers have cognitive overload, and how can I get somebody ramped up faster to do what they need?
Our friend Brian Gracely used to say: if I have a developer who's worked on one cloud for years, and you say, hey, I need you to go work on this other environment, it might be easier for them to switch jobs and keep their stack than to go learn the new one. The OpenShift that we offer helps with that. But another big trend that's been happening is that whole platform engineering discussion, where I want to be able to build the golden path for my developers, help make sure they have access to all the pieces of tooling that they want, and that should be able to live in multiple clouds, in multiple environments.

Explain platform engineering for me: where did it evolve from, and what problem does it solve?

Yeah, great question. A lot of times we look at this and say platform engineering sounds a lot like DevOps, and we've had more than a decade of DevOps, where a lot of that was the philosophy of how we build things: I go from waterfall to agile, to bringing dev and ops together. The reality for most customers is you still have somebody who needs to handle the operations of an environment and somebody who does the development. Amazon themselves actually has the person that builds it run it, but Amazon is a special environment. For most companies, you want to simplify operations. So that is where platform engineering comes in: I treat the platform like a product. Many companies actually have a product manager running it who says, here's what I define, here are the interfaces you have, we can give you T-shirt sizes of how to do things, and therefore when you come in with a project, I have a portal that I can go into. In the cloud native space, there's a very popular project called Backstage, which came out of Spotify, that helps in that environment.
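The "T-shirt sizes" idea Stu describes can be made concrete with a small sketch: the platform team publishes a handful of pre-approved resource profiles, and a developer picks one instead of hand-tuning infrastructure. The profile names, values, and function here are hypothetical illustrations, not taken from OpenShift, Backstage, or any real product.

```python
# Toy sketch of platform-engineering "T-shirt sizes": a platform team
# curates a few blessed resource profiles; developers choose a size
# rather than writing raw infrastructure specs. All names and values
# below are made up for illustration.

SIZES = {
    "S": {"cpu": "500m", "memory": "512Mi", "replicas": 1},
    "M": {"cpu": "1",    "memory": "2Gi",   "replicas": 2},
    "L": {"cpu": "2",    "memory": "4Gi",   "replicas": 4},
}

def render_deployment(app: str, size: str) -> dict:
    """Expand a T-shirt size into a Kubernetes-style Deployment spec."""
    profile = SIZES[size]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app},
        "spec": {
            "replicas": profile["replicas"],
            "template": {
                "spec": {
                    "containers": [{
                        "name": app,
                        "resources": {
                            "requests": {
                                "cpu": profile["cpu"],
                                "memory": profile["memory"],
                            }
                        },
                    }]
                }
            },
        },
    }

spec = render_deployment("checkout", "M")
print(spec["spec"]["replicas"])  # 2
```

In a real platform, a portal such as Backstage would sit in front of something like this, so the developer never touches the spec at all.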
We've been working in that space at Red Hat for about a year now to help simplify that and let companies take advantage of it. So again, it's really making that platform piece simpler, and developers then become consumers of that environment. There's a little overlap with what we used to do with DevOps, and it sounds a bit like SRE, but SRE is usually running a service, so they don't just own the platform, they're running the applications themselves. So that's, hopefully, a simple explanation to help with the whole platform discussion.

Thank you for that. And I want to bring it back to AI; it's kind of the theme here. I don't know if you saw, we published the Cube Power Law. It takes a page out of the old music business, where very few labels basically own the market and then there's this long tail, and we're saying there's a similar dynamic here. We've shown the chart a number of times this week, but for Stu's benefit: we're saying open source is pulling that torso up. And the other thing we're saying is you've got the big monsters, the OpenAIs, Amazon, Vertex AI from Google, et cetera, the consumer giants, maybe not so much Amazon, but big, big LLMs. It's the size of the model on the vertical axis and the model specificity on the horizontal axis, and you've got this big long tail. Kind of makes sense, right? So my question is, when you talk to customers, I mean, most of the action is in the cloud today, but are they talking about doing stuff on-prem because they're concerned that, well, that's where the data is, and that's where I'm worried about IP leakage? And of course the edge; very clearly AI inference is going to happen at the edge. What are you hearing from customers in terms of how they're deploying AI and where they're thinking about it?
Yeah, so Dave, I talk to customers in a lot of industries, but when I talk to my FSI customers, the financials, the banks, the insurance companies, absolutely they're concerned about their data. Are they using public cloud? Yes, but they'll say: let me take an OpenAI model and train it on my data, behind my firewall and under my control, so that everything I'm doing stays there. I'm sure you've seen it; every enterprise has put out the memo, don't put your proprietary data into ChatGPT, and some people have ignored it. You want to be really careful there. So absolutely, yes, we see that they're going to want it under their governance, under their guardrails, so that they can do what they want.

So one of the big themes of SuperCloud 4 is this industry transformation, the impact of AI. You mentioned financial services. I feel like financial services are pretty astute technically, right? And they've got a lot of data. Of course, crypto for a while looked like it was going to disrupt financial services; FinTech is sort of still a buzzword, but it seems like the big banks are just so big they can co-opt everything. So maybe financial services gets disrupted more by rising interest rates than by tech. But it seems to me like manufacturing, with all the geopolitical tension going on between the US and China, the move toward India, the on-shoring, and then autonomous manufacturing; I mean, the automobile industry, you've tracked Tesla for a while, as have we, and you're seeing how that's disrupting automotive, although all the manufacturers are leaning that way. What are you seeing in terms of industry transformation, industry disruption, any patterns that emerge when you talk to customers?
Yeah, it's interesting. When you think about data, remember a number of years ago, Dave, it felt like we talked about GDPR constantly. These days, every time I talk to Europe, it's sovereignty. Every country wants the data staying in their country, and that's been a big shift in how I think about my data, because the power of AI should be the more data I have access to and can train on. I'm curious how sovereignty will push against the trend of AI; will that shift what's happening there? Because AI absolutely is a megatrend that should disrupt a number of industries, and every company is trying to figure out exactly how to take advantage of it. It's interesting, if you saw it from Red Hat, one of the first pieces we have: our Ansible Automation team has been working on how we build Ansible playbooks easily, just by typing in natural language, and it spits one out. Underneath that is the LLM, which our friends at IBM are doing, and underneath that actually is OpenShift. So OpenShift is the infrastructure for that. So when I think about SuperCloud, when I think about large language models, we're an underpinning layer, just because the platform that we've built, containers, Kubernetes, scalability, automation, can live in a lot of these environments. AI has been a great workload for us, and it should be a great tailwind for us.

And in the spending data from ETR, I still see a lot of OpenStack, and that's about sovereign cloud, right? I mean, Walmart's one of our guests; they have enough resources to do their own open source development, but their triplet model has OpenStack on-prem, and they use Google and Azure. Funny, they don't use Amazon, but...

Yeah, Dave, I've talked to Walmart; you and I did a Cube gig once, and they used to have two main environments, OpenStack and z/OS.
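The natural-language-to-Ansible-playbook idea Stu mentioned above can be sketched as a toy: a plain-English request maps onto a playbook-shaped structure. In the real product that mapping is done by an LLM; the lookup table, function name, and request strings below are purely illustrative and not any real Red Hat or Ansible API.

```python
# Toy sketch: turn a natural-language request into an Ansible-playbook-
# shaped structure (a list of plays). A real system would use an LLM
# here; this hard-coded template table just illustrates the shape of
# the output. All names below are hypothetical.

TEMPLATES = {
    "install nginx": {
        "name": "Install nginx",
        "hosts": "webservers",
        "become": True,
        "tasks": [{
            "name": "Install nginx package",
            "ansible.builtin.package": {"name": "nginx", "state": "present"},
        }],
    },
}

def generate_playbook(request: str) -> list:
    """Return a playbook (list of plays) for a recognized request."""
    play = TEMPLATES.get(request.lower().strip())
    if play is None:
        raise ValueError(f"no template for: {request!r}")
    return [play]

playbook = generate_playbook("Install nginx")
print(playbook[0]["hosts"])  # webservers
```

The point of the sketch is the interface, typed English in, a runnable playbook structure out, which is what makes the LLM layer (and the OpenShift platform underneath it) invisible to the operator.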
So they're using their mainframes there, and they do a lot of containers, lots of Linux in that environment. So yeah, Walmart's always an interesting one.

And of course they don't use Amazon because of the retail competition. You can't blame them. And talk about disruption: we have another guest on this week from SiMa.ai. They basically do an ML system on chip, all about the edge. They're doing autonomous systems, they're doing robots, very low power, extremely strong performance-per-watt metrics. And you know where we stand on Arm-based systems and the disruption. So that whole edge story has really, I think, exploded. The semiconductor content in edge today, these sort of edge-like applications, is like $40 billion. It's enormous, and you have vehicles and so forth, and it's supposedly going to increase by four to five X by the end of the decade. So that's going to trickle in. I really feel like that's where the blind spot is going to be in the enterprise. It's going to disrupt the traditional computing environment. Not that that's going to go away; we don't think it's going to go away, but the general purpose computing environment is going to shrink as a percentage of the total, and you're going to see accelerated computing, as Jensen likes to call it, or intelligent computing, everywhere. Do you agree with that? Do you hear that from customers?

Yeah, it's interesting. We talked specifically about AI for financial services, but financial services aren't the ones usually pushing out to the edge. Industrial and telecommunications are big use cases. Healthcare actually has a lot of interesting use cases. Retail. And AI inferencing is something we've seen; we've got an open source project called MicroShift where we're asking, how do we get even smaller than Kubernetes?
And we've got a big partnership with ABB, the robotics company, and with Lockheed Martin, which uses it on their drones for AI inferencing while they're in flight.

MicroShift, it's called?

Yeah, MicroShift is the open source project we've had for a number of years. It's been productized with RHEL for Edge in something we call Red Hat Device Edge.

Very cool. So there's a lot going on there in the world of hybrid. You're in cloud native, public cloud, obviously you're on-prem, you're pushing out to the edge. Is it a pipe dream that you can have a consistent experience across all of those?

No, no, it's a big thing, because what's interesting is, look, we partner really closely with the public cloud providers. If you look at what they've been doing from a hybrid standpoint, how do they take their footprint and push it closer and closer? Take AWS: they created Wavelength and Local Zones and then Outposts. We can live right on EC2 in AWS, and we can actually put our software on all of those other flavors too. So from our standpoint, it's that same consistent environment; we can live in the data center, the public cloud, and the edge. And when the public cloud providers put out more hardware footprints for hybrid, the software can just go along with it most of the time.

But isn't that all the Amazon stack? You just described their hardware.

Just as I said, we live on EC2 in the public cloud. So yes, it's their hardware, but we're just another piece of software on top of that stack they offer in the public cloud. We're just another native service, and we can fit in those other environments too.

You remember the work we did on true private cloud. The whole concept was to substantially recreate the experience of the public cloud on-prem, and it's kind of taken a decade to really get there. We were talking about it in 2010 at VMworld, you remember.
Are we there yet?

What's interesting is, I talk about our managed service that we have with AWS, and a big discussion point recently has been, right, what about the data center? HPE GreenLake has been making progress for the last few years; Dell has their APEX program, and we announced a partnership there at Red Hat Summit earlier this year. I almost call it the cloud native next generation of VxRail, which has been a real winner in the data center. So if we can take that cloud native stack into that kind of footprint, that's a huge opportunity.

So it's going to be interesting to see how the cloud guys respond to that. All the cloud guys have an edge strategy, right? And of course they all have AI strategies. But we do see this sort of quasi-equilibrium happening, where people are being more circumspect about where they actually do work. There was a rush to the cloud during the pandemic, and now it's, okay, let's think about where the best economics are, where the lowest latency is; physics still matters. And again, I think the edge blows this whole thing sky high; it's a real curveball. Last thoughts, Stu. I mean, SuperCloud, you're still not bought into the term, but conceptually you can buy in, right?

If I zoom out a bit and think about what we've been doing for the last 20 years, it's really about distributed architectures. I remember talking to Martin Casado when he was with Nicira, pre-VMware acquisition; that was the discussion we were having. While you think about centralization, when we had that pendulum swing of everybody going to the public cloud, really cloud is very distributed today. And customers, well, we know one of the laws of IT is nothing ever dies, so they have so many different pieces.
We really try to give that consistency for security teams, for operations, and for developers, who are so important these days, across all of these environments. The edge is absolutely a critical place where we're trying to make sure that consistency is there, but it has to have automation built in, because I'm not going to have the resources, and I'm probably not going to have the network connectivity. Unfortunately, the laws of physics and some of the others, remember Pat Gelsinger's three laws that we had? It's even more so out at the edge, because I've got limited resources and probably no personnel, and if I do have personnel going out there, they're probably at a low skill level. So we have to have things that are hardened out of the box, easily installed, and easily managed, because otherwise, when we get to fleets of devices, highly distributed, we can't take advantage of them.

No truck rolls, as automated as possible, and low maintenance. Stu, thanks so much for supporting SuperCloud 4. Great to have you, great to see you in studio.

Thanks so much, Dave.

You're welcome, all right. Keep it right there for more action from SuperCloud 4, live from our Palo Alto studio, with myself, Rob Strechay, and John Furrier, right back after this short break.