From Las Vegas, it's theCUBE. Covering VMworld 2018. Brought to you by VMware and its ecosystem partners. Welcome back to the VM Village inside the Mandalay Bay Convention Center, which is hosting over 20,000 people for VMworld 2018. With me is John Troyer. I'm Stu Miniman, happy to welcome back to the program two CUBE alumni. We spoke to them a couple of months ago at Dell Technologies World, and we're back for VMworld. Ashley Gorakhpurwalla is the president and general manager of servers and infrastructure systems with Dell EMC, and with him is Ravi Pendekanti, who is the SVP of product management and marketing for PowerEdge servers. Gentlemen, great to see you, and thanks for joining us again. Great to be here. Thanks for having us. All right, so in the keynote this morning, there were a whole bunch of things that run on servers, leverage servers, infrastructure solutions. Ashley, where are we starting? Are we talking about hyper-converged first? Sure, let's go there. All right, so if you did watch the keynote, which was very exciting, so full of stuff that we ran a little bit late, VxRack was really highlighted, and VxRail is really the preeminent integrated HCI solution out there. With other Dell Technologies assets, we can put something together which is pre-integrated, pre-validated, backed up by one support call, one integrated experience, and is really the on-ramp to IaaS for both private and multi-cloud. So Pat highlighted that for us, and if you look at where we've been, we're about 10 quarters into VxRail. We've hit a few milestones we can't yet talk about in our blackout period, but the momentum, if you remember, in Q1 we were doubling in size, and that's going to continue going forward. So it's been a very, very fantastic uptake by our customers. Yeah, Ravi, many moons ago I was a product manager, and if somebody had come to me and said, well, now I have Dell servers, I have VMware, I have the EMC pieces, what are you going to build?
With the HCI market that you had, that's a nice little toolbox. And I think when the acquisition happened, we knew that this was one of the areas, but give us a little insight into what you've learned over these couple of years and what's really delivering for customers. You know, we just published some research on our Wikibon side. Yes, indeed. When we look at what we call the true private cloud, which includes HCI and some of the CI and some of the management on the full solution set, it's not surprising that the Dell family sits at the top in revenue. No, absolutely, you've basically asked that as a true product manager, should I say. A couple of things. Number one, if you really go back in history and start tracing some of the things that have happened, first and foremost, the advent of software-defined everything is picking up these days. For various reasons: whether it is lowering TCO, ensuring that you're able to deploy your systems and infrastructure faster, or ensuring that you're able to keep up with changing workloads. And I bring that up very deliberately because, again, maybe 10 years ago nobody would talk about some of the workloads we talk about frequently today. Pat talked about not just the traditional workloads; he talked about things like AI and ML. There's not a single company you can talk to these days that doesn't care about stuff like that, or about edge. These are all telling us that there's a new set of workloads to deal with. So I would say the healthy confluence of the servers of the past with what we have in the VMware assets, and for that matter the rest of the Dell Technologies assets, gives us an opportunity to go ahead and serve our customers in a very different way than what we could have imagined some time back. Ravi, it's been interesting.
We're at an interesting point, I think, for servers and systems in general, right? On one hand, people are doing much more consumption of engineered systems, things like VxRail and VxRack: they're put together, show me the whole solution, show me the power of the whole solution. You've done a lot of the engineering work: systems, software, high-end hardware together. On the other hand, I think a lot of people here are kind of hardware geeks, or at least started as hardware geeks. And there you kind of want to say, well, what's new about PowerEdge, what's going on in that world, and kind of geek out about the actual parts inside, because there's a lot of real engineering in there as well. Can you talk to us a little bit about MX and what's going on? You're absolutely right. First and foremost, two things. When you talk about software-defined anything, we on the server and infrastructure systems side always believe that your software has to run on something. It doesn't run in the ether. The whole point is, that's where you've got to look at the right hardware partner to come in and help you run your infrastructure. Whether it is your platform as a service or software as a service, all of these run on a platform. That's where PowerEdge comes in, and then to augment that further, we have talked about PowerEdge MX, the most recent entry into the PowerEdge brand, where we have come up with a very new entrant with a modular infrastructure architecture that brings together the compute, the networking, and the storage side. And the best part of this is we have architected it in such a fashion that it's a first of its class with no midplane. Think about it. There's no midplane in the design, which gives you a tremendous amount of flexibility and the agility you need to go back and take care of some of the things you just talked about, John.
Yeah, Ashley, help us unpack that a little bit, because everybody says, well, we're going to be future-ready in the next generation, but blade servers were supposed to have some kind of flexibility. What's different now about PowerEdge MX? Why is this so important for your platform, and how does it compare with other things that we've seen in the market over the last couple of years? Sure, sure. As Ravi said, we've been waiting for the announcement for a while. We previewed a little bit of it at Dell Technologies World the last time we talked, and we said, wait till we come back. So here we are. And we're about a year in from the 14G announcement, having shipped over a million 14G servers already. And it's been a series of announcements. If you remember, just in the spring we were talking about the R840 and R940, really releasing four-socket powerhouses around machine learning, AI, accelerated compute. This really complements that side, where with our MX launch now we're starting a new generation of infrastructure that is really a capstone to 14G, because it offers a few things. First of all, as Ravi alluded to, if you're going to build an infrastructure, customers are going to invest in it. If you invested in our M1000e blade system, we promised you three generations of technology; you're on generation five, going to six. It's really, really well engineered for the future. We have a history of being able to see what we need to do to accommodate that, whether it's expanding thermal, expanding power, manageability, networking. And with what Ravi talked about, the lack of a midplane, the subtraction is really the addition, because now, whether it's Gen-Z, whether it's other future buses that people haven't quite talked about yet, but we will when we come back, there's the ability to mix and match the technologies that come, not only on the compute side, but the storage side.
It's really important that you have a futuristic view of the world, because today we're talking about things like disaggregation, kinetic infrastructure, composability, and what we mean is bounded by what we can do today. Sure, we can carve up the resources and give you a virtual machine. We can segment a server. We can logically set up a network. We can even give you direct-attached storage or a logical portion of it. But we haven't yet got the technology to fully disaggregate all the resources so that we're down to that kind of level of utilization and management. Yeah, in the keynote this morning, Pat got a good chuckle when he talked about building a chipset, when he was at Intel, that was going to be ready for what we now call AI. There are many in the industry that I hear say, I'm the best storage, infrastructure, server, or anything for AI, and they'll be told, oh come on, you're all using the same Intel chipset, you're all on this next generation. How do you really build something for it? How do we respond to that? Why are the Dell solutions differentiated and best suited for these next-generation kinds of workloads? I'll take it, and Ravi, then you can add on a few things. One, it's interesting that you would say we all have the same Intel technology. When we think about AI going forward, it's really a move towards heterogeneous compute. And I think it's going to be about that differentiated portion of the technology. We were canvassing it the other day: there are over a thousand companies that believe they are developing some sort of technology, whether it be software frameworks or hardware, to accelerate or offload those transactions. My personal hope is that not all thousand make it and we have to support a thousand, because that M-by-N matrix is pretty deep. But we think there's quite a bit to go yet for it to converge down on something where customers are saying, look, this is the solution I want and need. Today, what you need to do is prepare for that.
And so we have everything from investments in some of these companies, to partnerships with the companies, to building out some of these ourselves. And it's going to come in the form of software acceleration, ASICs, FPGAs, GPGPUs, instructions within the chipsets. Every day there's a new processor or ASIC that is going to formulate its way there. But I also think it's important for us to think about AI and machine learning not just as connecting offload and heterogeneous compute, but in form factors people can use. And so we've really concentrated on: where do you do data analysis? Where do you do training? Perhaps more in the core. Where do you do decision-making, machine learning, and transactional work? Maybe at the edge. And that's where we're using our different technologies to make sure we have the appropriate pipeline between those. Yeah, I completely agree. And in fact, one of the key things, to build on what Ashley just rightly mentioned, is that at the start the thought was that we could just go do some deep neural networking. But that's not the whole picture. There are so many different flavors to it. For each application, there are different kinds of neural networking algorithms and frameworks. Do you pick TensorFlow? Do you pick Caffe2? Pick your favorite framework, for that matter. Is it an RNN, is it a CNN, is it a DNN? My point being that there are so many permutations and combinations beyond just the processor that need to be brought in. And that's what we are doing here at Dell Technologies: ensuring that we actually get a holistic view of the entire ecosystem. We believe it's the ecosystem that really makes a big difference in how successful any organization will be in coming to market with a real solution. I wanted to jump in, actually. Ashley, you used the E word, which is edge, right, this year.
And the nice thing is, any time there's a word like edge that has a lot of different meanings, it means something interesting is going on there. When I think of PowerEdge servers, and also VxRack, I think of data centers, but I know there are a lot of other places; we saw a lot of edge this morning in the keynote, right? Wind turbines and ships and warehouses. So how are you all looking at remote compute, whether that's a ship or a warehouse or a turbine or someplace that's not a big air-conditioned data center? And how does the Dell server and systems story fit in there? Sure, so some of this is going to be about form factor, in that there's just not a data center at the end of a dirt road at the base of an antenna, and so we've got to accommodate that. What's great about the journey we've been on is that about 12 years ago we started a group called DCS, where we were dealing with building hyperscale, and that's trickled down into the mainstream. What's also trickled down through there is that we ended up having to hire material-science people, people who knew how to do ruggedized engineering, people who were doing HVAC and power, really the best in their field at building things like modular data centers and ruggedized compute, understanding relative humidity, which sounds benign but is really the killer, as opposed to temperature, when it comes to putting compute out into a harsh environment. And we're also working, of course, with products such as Vertex, which we introduced quite a while ago, which can take different environments. For instance, it's on ships today, going across the ocean as a data center. So we've been pulling all that IP forward towards the day where the environment can be maybe one rack in size, not a modular data center, but scaling down. I think when we started, we thought, oh my God, we've got to scale all this up and get bigger and bigger and bigger.
It's actually come full circle to where a lot size of one rack is where people are going to start to see a sweet spot, and we've got the ability to wrap all that kind of technology around it. Then it gets to the compute side, where we're able to bring in the technology around material science. We've been doing it for quite a while, and if you haven't been doing it, it's going to take you a while to get there. And then finally, think about the form factor: how much power do you need? What's the right form factor? Do you really need a workhorse there, or do you need just enough compute? And really, what's important is going to be the fabric of the technology to extend the data back. And then Ravi and team have done an awesome job over the last three years bringing us into data center management leadership. We absolutely are the leaders in securing and managing assets within a data center. Now we're extending that kind of capability out of your data center to the edge, and that's really, really powerful. All right. A very valid point, which is why we brought in OpenManage Enterprise Modular, which is, again, we think, the first of its kind. All right, well, Ashley and Ravi, the thing I think we've highlighted here is that people who hadn't been watching thought that servers had just kind of played out and commoditized. I wrote an article like four years ago about how the hyperscale players hyper-optimize what they're doing. And thank you for going through so many things, from the edge to rugged to what's happened in the data center. As you said, in the software-defined world, it's all got to live somewhere, and in some form factor, compute and servers underlie all of it. So once again, Ashley and Ravi, thanks so much for joining us. Thank you.
For John Troyer, I'm Stu Miniman, lots more programming here, wall-to-wall coverage of VMworld 2018. Thank you for watching theCUBE.