Good afternoon, Supercomputing Nerds, and welcome back to Denver, Colorado. My name is Savannah Peterson, and we are here live for four days from Supercomputing 2023, joined by my fabulous co-host, John Furrier. John, how was your lunch? How's your afternoon? Lunch was great, and again, the pace of play here at this conference is very fast. It's got the AI theme, but it's also blending younger generations, and it's basically a systems, I would call it a systems architecture, hardware meets software, reconfiguration, and it's still the same game. Hardware, middleware, apps, storage, networking, servers, all kind of coming together, kind of refactored, and we've got a great guest here as we get into some of the elements that are being used to build a new AI environment. AI hardware is all the rage, and Dell's been a leader in moving fast, establishing a strong position. We love our friends at Dell, and without further ado, Bill Leslie. Welcome to the show. Thanks so much for being here with us. So you are on the HCI team, the human-computer interaction team. This show is kind of the ultimate celebration of humans and computers interacting. That's right. What's it like for you to be here with our community? No, this is a blast, first time here with you guys live, so I'm really enjoying myself, and really just being around the floor, seeing all of the universities that are out here, seeing more GPUs than I've seen in my life, it's fantastic. Likewise. I saw, instead of Got Milk, I saw Got GPUs as one of the t-shirts, and I thought, wow, we are really having a GPU party. I love the clever HCI human-computer interface play, because the hyper-converged infrastructure we had in the last session really ties in the personalization aspect, because one of the things coming out of this show is that it's not just a hardware show, or a supercomputer chip show, it's actually showing outcomes around the new model of LLMs.
Hyper-converged, storage and servers coming together. That was, you know, a generation ago, maybe a generation and a half. Now it's like, okay, it's in the play, but it actually has to deliver precision, yet deal with broad sets of data, and this is where I think the AI infrastructure piece leverages the best practices of HCI, but the game is changing. So give us your take on how, you know, the hyper-converged market's changing. Yeah, so one of the things that we saw with VxRail when it was introduced about seven years ago is it was really about how do we offload some of the day-to-day tasks, the mundane things that take up our time, right? And how can we automate, orchestrate, do those regular, recurring things so that IT teams can go out and actually create new ways of delivering their services? Little did we know we'd be where we're at today, where now AI is really radically changing that game in many ways with some of the large language models and things of that sort as well. I just want to jump in for a second there, because I'm curious, you've been in the industry for a while. Were you anticipating this type of jump with AI right now? Were you able to see this in your crystal ball, or has it all kind of come upon us? I think about four years ago, we started to see more and more of the GPU introductions that would take on new workloads, right? We'd been seeing the VDI use case with GPUs forever. And when we started to see, okay, we're going to start doing some things that might be some machine learning, some dabbling here and there on some workloads, we started thinking, okay, what else can we be doing to offload the next level of that? And one of the things we've been doing with VxRail along the way is building in capabilities where we're actually deploying those GPUs in the first-run experience, and managing the day-two experience with lifecycle management.
Those things that sound easy, but can be kind of hard if you're not focused on them all the time. Talk about the workload changes around the end-to-end, because one of the things, again, that's coming out of this, and this came up at KubeCon and the Linux Foundation around Kubernetes, is as these clusters start to get deployed, you've got to start thinking about storage and the interconnects around it. And then you've got to look at overall throughput and workloads, when tokens are being delivered if you're talking about LLMs, or retrieval models. So you now have this view that it's not just the shiny new toy, the GPU or the model or the foundation model, it's an end-to-end workflow. VxRail and vSAN, all these, I won't say old technologies, but the ones that were pre-"AI infrastructure," have some of those things in place. They thought about workloads end to end. AI, though, is going in the same direction. So what's the new workload dynamic that AI is addressing, and how are you guys flexing VxRail and the technology? Yeah, one of the things, John, that we're seeing VMware push the envelope on is their new architecture with vSAN ESA, right? They moved to a single tier, they removed that cache drive. This did a few things to help improve not only performance, but also improve the cost profile of the environment. Two key themes of the entire show. You don't often get the two-for-one type of situation. And what we were starting to see even in the previous generation of vSAN OSA, their original storage architecture, was it was starting to top out some of these networks, 25-gig networks. Well, we wanted to really see what vSAN ESA could do when we pushed the envelope with 100-gig networking, which is small fry compared to some of the things that we're seeing here this week. But it's that next jump in evolution of performance and speeds that are really needed for those next-gen workloads.
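As a rough back-of-the-envelope on that 25-gig to 100-gig jump, here's a minimal sketch. The 0.9 efficiency factor is an assumed protocol-overhead fudge for illustration, not a published vSAN number:

```python
def usable_gbytes_per_sec(link_gbits: float, efficiency: float = 0.9) -> float:
    """Convert a nominal link rate in Gb/s to rough usable GB/s.

    efficiency is an assumed overhead factor (headers, retransmits);
    real-world numbers vary by protocol and workload.
    """
    return link_gbits / 8 * efficiency

for gbits in (25, 100):
    print(f"{gbits} GbE ~ {usable_gbytes_per_sec(gbits):.2f} GB/s usable")
```

Roughly 2.8 GB/s of usable bandwidth versus 11 GB/s, which is why a single-tier, NVMe-heavy architecture starts to saturate a 25-gig fabric long before a 100-gig one.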
How do we have those pipelines for all of that data that's going to be massively flowing across the network with HCI and similar types of technologies? Networking never used to be huge, but I asked a couple of guys and gals on theCUBE yesterday: if you could optimize for one of two things, more compute, more GPUs, or more networking? Most people right now are into the GPUs, but they want more networking. They want faster networks, they want interconnects, because what's going on around the GPUs and the CPUs is now the big conversation here. It is, and actually one of the things that we've seen with our partner Intel is their new AMX technology, the on-chip accelerators. It's incredible what it's doing for some of these AI workloads. Compared to the previous generation, you're getting like 3x the performance for the BERT-type models that are out there, or image classification models with ResNet-50. This is just a game changer with what you're getting included in the 4th Gen Xeon Scalable procs. It's making it a reality where all of this confluence of tech is coming together for these next-gen workloads like AI. I mean, high-performance networking should be a show in and of itself, because there's a lot of networking going on. Maybe that'll be next. I mean, I was talking with some of the storage folks, and the conversation is it's not about storage anymore, because storage is everywhere. You've got to store stuff. It's about computing, accelerated networking. So it's a workload conversation, not so much a point solution, because we're already penetrating with storage. Well, I think we sometimes overlook where the compute is going to as well. We've got a couple of customers that have hundreds, thousands of sites. So when you start dealing with 1,500, 10,000 nodes across all of these distributed locations with the compute, guess what? You've got to do AI there. You've got to be doing the visual queuing, the inferencing.
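For anyone wondering whether their box actually exposes those AMX accelerators, here's a minimal, Linux-only sketch. The feature-flag names (amx_tile, amx_bf16, amx_int8) are the ones the kernel publishes in /proc/cpuinfo on 4th Gen Xeon Scalable parts:

```python
def has_amx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Check whether the CPU advertises Intel AMX support.

    On 4th Gen Xeon Scalable processors the Linux kernel exposes the
    amx_tile / amx_bf16 / amx_int8 feature flags in /proc/cpuinfo.
    Returns False if the file is missing (e.g. non-Linux hosts).
    """
    try:
        with open(cpuinfo_path) as f:
            info = f.read()
    except OSError:
        return False
    return any(flag in info for flag in ("amx_tile", "amx_bf16", "amx_int8"))

print("AMX available:", has_amx())
```

Frameworks that sit on oneDNN (PyTorch, TensorFlow) pick AMX up automatically for bf16/int8 paths, so this check is mostly useful when deciding whether CPU-only inference is worth benchmarking before adding GPUs.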
What are customers doing in retail locations? It's not always about the biggest supercomputer you can get. It's about how you can get that compute right there at the edge, and optimize that action that you need, that workload or that batch or whatever that is. Let's talk about it. You brought up customers there. Are you sensing, and I feel like you can almost feel it in the room, there's a lot of FOMO going on. Is that a conversation that you're having with your customers? How are you helping them navigate this entry if they're not in yet? So one of the things that we're starting to do, at least in the VxRail space, is see what you can get out of the box. You might be amazed by what you can get with just the AMX chips that are part of the 4th Gen procs. And then you can decide how much you need to go with the GPUs, right? If that 3x improvement is good enough for their environment, stick with it. If you need more than that, go for it, right? A lot of our platforms support GPUs. You can be GPU-ready if you need to. We see about 25% of our customers right now starting to use GPUs here and there in some of those nodes. So it's going to be a big change for us going forward. That's a significant percentage. I mean, I'm not surprised, and I bet it's going to be a hundred percent, or just about, pretty soon. Right? So you have modern databases out there. We've got multi-cloud, what we call super cloud, emerging. You're starting to see NVIDIA-enabled players like CoreWeave. That's an interesting power dynamic. You're going to start to see entrepreneurial activity come out. You guys see this end-to-end workflow. So I've got to ask you, when you think about the cloud relationship, we're seeing a lot of on-premise action. We had a little debate with Vultr; he thinks there's more repatriation going on than our numbers say, but he's in the data center business. So, you know, he's a cloud service provider.
So I'm sure he says that, but I think the repatriation numbers don't show the true trend, which is it's not about taking workloads off the cloud. It's the net new build-out going on. And this is the hybrid on-premise edge conversation we've been having for over a year. Now with AI, that's highlighted. Can you share your reaction to that? Do you agree? And if so, what other commentary can you add on top of it? Yeah, I think one of the things we've seen with VxRail, and we've been doing this with VMware Cloud Foundation on VxRail, is getting that most consistent operating model. With what you have in the cloud and on-prem, how do you drive consistency? And Dell has introduced a couple of new offerings in our Apex Cloud Platform so that we can now drive a consistent cloud with Microsoft Azure, with our ACP for Microsoft Azure offering. We've also been doing similar with Red Hat OpenShift, right? And we're going to continue to push into those new forays, because customers need the same experience. What they're getting in the public cloud, bring it back on the ground. How do you make sure that you're delivering the same type of operating environment in both locations? Savannah, you know, we've been talking a lot about Dell here. Obviously they're sponsoring the booth, and we want to shout out to Dell Technologies for enabling us to do our show here. But you mentioned VMware, which used to be owned by Dell, but now they're going to be bought by Broadcom. So that's going to close. But if you look at VMware, we had reporters in Barcelona for their show. It was packed. And it's not because they have vSphere, that's their install base. There's a ton of excitement around the AI enablement coming out. So how do you see VxRail, VMware, and, with faster Ethernet, obviously Broadcom's involved in that too, by the way. So you kind of have a new configuration of players in that game to give the lift to old-school vSphere. And so AI helps legacy, right?
And so the excitement of VMware, I'm not sure it's because of vSphere, okay? vSphere is an install base. It's like an operating standard, but they're excited about what's going to happen beyond vSphere. I think what you're seeing in some of that excitement is how easy VMware makes it to work with that existing stack of components and software, right? And then it's adding in the new workloads, whether it's the AI inferencing models or visual models or large language model type things. When you make it easy, like what they're doing with VMware Private AI, right? That's setting it right there on the stack that you're already familiar with. VxRail is right there as part of the Cloud Foundation environments, so you just start building and going. Bring that new workload set into the environment. I agree with the CEO, he's very technical. He took over the helm. Man, what a pivot they made, Savannah. They really added that on there. And they got people excited. We were involved in that with our super cloud project. But I think this points to architecture, okay? And if you look at the enterprise now, we've had this same exact conversation at KubeCon. Platform engineering is the hottest thing in that cloud native world. Basically, that means that's the modern IT architecture. What is that going to look like? Because you guys have a position with VxRail in a lot of instances, a lot of customers. You've got vSAN, you've got the VMware view, you've got the networking. There are 20,000 customers. What's the architectural conversation going on in the enterprise right now to make sure they don't miss the AI wave? Yeah. Well, and thank you for mentioning our 20,000 customers. They've been fantastic. An impressive number. And inside the VMware ecosystem of HCI, Dell is about two-thirds of that install base. So we are right there in lockstep with what VMware is doing, and has been doing with Tanzu, and previously Pivotal before that.
And the Tanzu-ready architecture is actually still based off of VxRail. So the platform underneath that infrastructure layer is set so that customers can just start using it. And it's really quite wonderful to see how many customers are adopting Tanzu as well as other Kubernetes frameworks with VxRail, whether it's EKS Anywhere with Amazon or some of the SUSE ones. It's really quite phenomenal what all we're doing there. And our 20,000 customers is extremely impressive. I see here in my notes, 277,000 nodes. We are not talking about small amounts here. That is very much at scale. And I think it's super impressive. And to be able to get those people and those customers onboarded to achieve their goals that quickly, that ease-of-use conversation has been such a hot theme here. Everyone wants to get from zero to 60 as quickly as possible. You mentioned just before the cameras went live that you have a nine-year-old son, Andrew. Yes. And since we're talking about making things easier, I am very curious, how do you explain supercomputing and what's going on with AI to Andrew? You know, it's fun having him in the room with me because I work from home. He's actually written out resumes before using the words that we use, not knowing how they go together, but he actually strings it together pretty well, because when you break it down to the simplicity of how these things work together, right? You've got to be able to use your brain. That's the compute in the system. You've got to know how to assemble the equation, because he's starting to put together the math equations. This is kind of what HCI is doing at that infrastructure layer with the software. And then it's, okay, now, how do we make you smarter? That's where some of the AI stuff comes into this, right?
So when we make it very tangible for a young one, and we've got student groups that are around on the show floor today, it's really fun to see their eyes light up when those dots get connected on just how simple it can be. Yes, they're very complex things, but when we can make it simple for them, it's really just quite wonderful to see. You know, I like Savannah bringing up the human-computer interaction angle because, you know, that example is not only the younger generation, it's how they're going to work. So the interface of AI that we've all seen with ChatGPT checks the box there. Your son's never going to learn data structures, okay? He doesn't have to. This stuff's going to be a co-pilot for him, and he can stand up Kubernetes clusters with voice activation, potentially. Load that VxRail. That's what he's going to be doing by age 10. Programmable. At this rate. I was blown away when I saw 3D printing. Can you imagine an IT environment where you just say, deploy VxRail across this environment and manage the data pipelines from XYZ data sets? That's coming. It'll be here before we know it. And one of the things that we've been doing with VxRail is expanding all the types of flexibility that we have with deployments, right? We have compute-only nodes that we call dynamic nodes. So if you just need to add additional compute to an environment, you can go and load your dynamic nodes. They have a personality of sorts for what that first-run experience needs to look like. We have satellite nodes for just a single-node vSphere edge deployment, right? And in those scenarios, we just say, hey, this is what I'm going to be, go map yourself back in. Our implementation teams, and our customers in some cases, are installing these on their own. It's really going to evolve as we can build more and more of that AI into it. And into more of the Dell products as well. That's why I was saying about the workflows earlier, because you guys have done the work.
And I've been following the work you guys have been doing at VxRail and now Apex Cloud. You guys have done the work, because you had to do the workflow and lay that out. Now you have scale and automation and AI coming around the corner. Well, with those 277,000 nodes that we've got deployed, we've learned a lot from those customers. What are the typical things that they run into? What are the challenges? How do we automate that out so the next customer doesn't have to hit the same pain point that maybe one has already hit before? How do we improve the lifecycle management that we're constantly improving upon? We do something like 800,000 hours of testing on every single major release. There's not a lot of customers that can have 800,000 people hours and lab hours and equipment. That's the benefit of coming with these purpose-built systems, like VxRail for VMware, like our Apex Cloud Platforms for Microsoft Azure and also Red Hat OpenShift. I mean, I think AI is going to be a friend to you guys because of the tailwind, because of the existing install base, because of the data, because of the easing up on all that hard heavy lifting on configuration. If software continues to go down the coding route, I mean, Shopify's entire headless system is now generated by AI code, we heard on theCUBE, the glue layers could be filled in by AI. So when you start to think about integrations, this is going to be exciting. It's a lot of fun. Some of the things that we haven't even talked about yet are what you can do with the APIs that we're building in with these. Now you can actually codify the infrastructure, and the management of that infrastructure. What happens when AI starts to be utilized in combination with that? Now you don't even necessarily need to have a software developer. You can use AI to give you the first pass at what that API instruction set's going to be.
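That codify-and-verify flow can be sketched in a few lines. Everything below is hypothetical: the field names and schema are illustrative only, not the actual VxRail API. It just shows the idea of sanity-checking an AI-drafted payload before it touches infrastructure:

```python
import json

# Hypothetical field schema for a VxRail-style "add node" request;
# the real API differs -- this only illustrates rejecting an
# AI-generated payload that invents or mistypes fields.
SCHEMA = {
    "hostname": str,
    "management_ip": str,
    "node_count": int,
}

def validate_payload(raw: str) -> dict:
    """Parse an AI-drafted JSON payload and reject unknown or
    mistyped fields (a cheap hallucination check before submission)."""
    data = json.loads(raw)
    for key, value in data.items():
        expected = SCHEMA.get(key)
        if expected is None:
            raise ValueError(f"unknown field: {key}")
        if not isinstance(value, expected):
            raise ValueError(f"{key} should be {expected.__name__}")
    return data

draft = '{"hostname": "vxrail-07", "management_ip": "10.0.0.7", "node_count": 1}'
print(validate_payload(draft))
```

The point isn't the schema itself but the gate: the model drafts, a deterministic check vets, and only then does the request go to the cluster.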
Verify it, make sure it's not hallucinating, and then you can actually have a faster implementation of new infrastructure along the way, right? I'm laughing because you're basically giving a masterclass, and we have our own AI now, so that's actually going into the transcript. That'll be in the AI. And Savannah, you know that my view is I'm pro-AI, and I'm anti-AI regulation. So that's my view on AI. John likes to live-train our AI while we're sitting here on set. Why don't we train some AI? What is VxRail's strategy for AI? Well, I think you're going to have to wait a little bit more on that. Right now it's making sure that our customers can enable AI workloads in their environments, using GPUs, making sure that they've got the right type of networking readiness for that, so that whatever is thrown at them, they're going to be able to run it in their environment with VxRail. I like it. Well, okay, I guess we'll have to be patient, but I'm going to ask you one final question, Bill, because it's your first time and you've been such a stellar guest. We're certainly going to have you back on theCUBE. What do you hope next time we sit down together that you can say then that you can't say now? Well, I'd like to say we have a lot more customers, and that my son is actually doing some of the things you talked about. All right, Andrew. I think that would be a blast. But I really think I'd like to see VxRail still at the forefront of the conversation, making life easy for our VMware friends and customers that are trying to deploy AI workloads on vSAN and vSphere. Well, that's absolutely fantastic, Bill. Thank you so much for your wonderful insights. I hope that Andrew is proud watching at home right now. And John, thank you for the great questions, as always. And thank all of you for tuning in to our wall-to-wall live coverage, four days here at Supercomputing 2023 in Denver, Colorado.
My name's Savannah Peterson, and you're watching theCUBE, the leading source for emerging tech news.