Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners.

And welcome back, we are live here in Las Vegas, we're in the Sands right now, day two of Dell Technologies World 2018. I'm John Walls, along with Stu Miniman, and it's a pleasure now to welcome Ashley Gorakhpurwalla, who is the President and GM of Server and Infrastructure Systems at Dell EMC. Ashley, good afternoon to you.

Thank you, and great pronunciation of my last name.

Well, thank you very much. Not an easy thing to do. I worked hard on that, how about that? Stu and I were just talking briefly with you, what a cool exhibit floor, right? I mean, there's just a lot of interest. What have you seen out there that's kind of caught your eye so far?

Oh, we brought in a lot of customers this time to show their outcomes. I'm a car guy, so you know where I headed straight to. My son would love the F1 setup with the gaming and virtual reality. Topgolf is a great VxRail customer. We have a goal-scoring demo where you try to beat the AI and see if you can score a goal. I mean, there are some very cool demos back there.

Yeah, and then overall, I'm just curious about your thoughts on the show, because that's a part of it. There's a lot of client relations you're doing here, business relations.

Yeah, we're only about halfway through, but so far I get very, very positive energy. I don't know if you caught or already talked to Michael after the keynote, but certainly Michael was on fire at the keynote, and I really, really enjoyed the discussion later with Dr. Chip and Jeffrey Wright about, you know, how technology connects to helping people. A lot of times engineers, stuck in a lab, looking at R&D, trying to figure out a problem, lose sight of what they're doing.
Great opportunity for the team to see that and kind of expand and understand, you know, where their technology's going, what it's doing for the world, what the impact is that they're having.

So, Ashley, your team's been real busy leading up to this, seeing some of the new products in the announcements. Before we get into those, though, your role expanded a little bit since the last time we talked. We talked to Tom Burns yesterday, as the group formerly known as VCE, which turned into CPSD, was split into some pieces, and HCI is now under your domain.

That's right. In addition to our server businesses, which are kind of the mainstream PowerEdge business, our extreme scale business, our OEM business, we had a reorganization to really unlock the potential that we have in a great product set. That product set, before it came into my organization, was already number one. It's a position of strength, and what we're trying to do is accelerate from that. If you think about the HCI marketplace, I think you have to be in the server business to win in the HCI business. I don't envy anyone trying to do this from a position of weakness or trying to adopt other people's technology. Our supply chain, our reach, our global services and support, and then the underlying ability to invest in the server technology and beyond, and to differentiate and innovate on top of that, is what it's going to take to win, maybe not tomorrow, but in the future as HCI takes off. We wanted to accelerate that by shortening the decision-making loop, making it one mission for the team. In addition, a quick call-out to the storage and data protection platform engineering team, who also came into my group to really put our best hardware, platform, and systems engineers from servers and from data protection and storage together, and kind of create a powerhouse of R&D.

Yeah, actually, it's not surprising to us.
From our research side at Wikibon, we actually called it ServerSAN, because it was really taking the functionality, what customers wanted as a business outcome from the SAN, and pulling it closer to the server. But at its core, it's really about software. One of the things that has struck me in the last two years, comparing this to EMC Worlds in the past, is that Dell now is what I used to see at Dell World, which was Dell as a platform that lots of things live on. So there are lots of software storage partners that live inside of Dell, there are HCI partners, and of course you've got a broad portfolio from the Dell families, and then OEMs and other partners that fit there. So it makes sense that HCI comes into your team, because you've got that platform at the server and it grows from there.

Yeah, if you circle back to just the legacy Dell world, perhaps, we were much more platform oriented, infrastructure at our heart, bringing that value prop to our customers. And I've said it before: I think if you give any segment or capability enough time, a standard, kind of open infrastructure hardware platform wins. It may not be a server, but it's going to look something like a server going forward, and the specialization and the value move into the IP stack and into the software. So you'd better be a company that can do the scale of a standards-based platform, and you'd better have the IP, the specialized stacks, as we do in our VMware stacks and in our IP stacks around data protection, storage, and networking. You can see where Michael's kind of putting those two together. It's not a tomorrow thing, but five, ten years from now. We've seen it in the carrier space, we've seen it in storage. Everywhere you go, the commoditization curve takes us to standards-based infrastructure and IP in the software.

Yeah, you made an interesting point there, that it might not necessarily be a server.
Give us, if you could step back for a second, the state of compute: there's compute in the cloud, there's compute at the edge, there's compute all over the place. A few years ago, it was like, ah, it's all going white box and undifferentiated. And in the public cloud, I'd say there are probably more SKUs of compute than if I went to Dell and picked from the catalog. Whether that's a good or a bad thing, you could probably have some insight on. But give us your view on the state of compute in the industry today.

Sure. If I think back ten years, when we started our business with the hyperscalers, building those infrastructure-as-a-service, multi-tenant public clouds, there really wasn't any other choice. You either did it in a legacy mode with your IT, maybe slightly modernizing, but you were still probably siloed, with storage admins, networking admins, compute admins, or you went cloud, and it was such a different experience. Since then, what customers have said consistently is: why am I having to make that choice? I either go to this rent version, which is very expensive as I scale up, or I own it, and it's different. Multi-cloud, hybrid cloud, private cloud, however you want to instantiate it, and something like hyper-converged infrastructure just didn't exist. They didn't have a choice. Now, with a push of a few buttons, you can scale up your infrastructure on-prem or in a hosted environment that is fairly seamless, and now you have that portability.

Yeah, and I'm sorry, actually, I wasn't trying to poke at the cloud piece. It's compute-at-edge use cases, which are a little bit different from traditional servers, and what's happening with the blade market. We definitely need to talk about the new PowerEdges, and there's the MX we're going to cover too, but it was really about the form factors of servers.

You bring up a good point.
It's maybe emerging, so there's probably a little bit more hype than reality behind it, but there are going to be billions of sensors, trillions of sensors, or things that create data outside of data center environments. That's where all the data's going to be produced, and that's where decisions are going to be made. Today, the theory is it has to go back somewhere, although I don't think any of us are getting in an autonomous car if it has to talk back to a data center to decide what to do. So there are already examples of what I would call edge compute. But what if your data center has to live at the base of a cell tower, at the end of a 30-mile dirt road, where someone only visits 45 days apart and they're not an IT individual? How do you extend that infrastructure, that management domain, that security domain? How do you bring it all the way out there? How do you ruggedize it? Well, you're probably going to start with a company that's been doing fresh-air cooling, with something like 13, 14 billion server hours now of operating in fresh-air environments. We understand how to bring that environment out there, the way we've been working on remote management, lights-out management, our security. I'll give you another emerging trend that's going to come out of that. Just at the time when we're going to extend our environments out of the safety of the data center, we're also going to go back to stateful compute, where persistent memory, non-volatile memory, storage-class memories are coming and security paradigms are already shifting. We're getting ahead of that with our customers: what if it wasn't just a hard drive you had to protect, but almost everything in that edge device? So the form factors will change, the connectivity will change, but what we know is you're likely to gather as much data as you can. You'll throw some of it away because it won't be useful. Right now, there's a sensor telling this building that these lights are on; until they go off, it's not useful data.
But in a car, it's very useful data. Some of that data will go back, and it'll get trained on, because humans won't be able to take in all this data. You'll need a machine; you can't write the algorithm ahead of time, you have to learn something. That goes as IP into the edge, and then decisions will be made at that stage.

Before we head off, we've talked about some new products, and you've alluded to them a little bit. You've had a launch this week; just run us through that, if you would, real quick.

We had a few things. It's nice to have a new baby to talk about. Sure, it's pretty exciting, and it really does stem from what we just talked about. So if I start on the PowerEdge side: if you have a strategy to help your customers with that digital transformation, from cloud to data center and core all the way to edge, you can start to see why we're launching certain products and why they have certain technologies and innovations in them. Starting with the R940xa, Extreme Acceleration. We might have to rename it if you watched the keynote; Jeff called it Extreme Performance. He is the boss, so I think it's XP now, but we'll keep it at Extreme Acceleration for now. That really is about training on large data sets very quickly in database environments. You want host to GPGPU to be a one-to-one ratio, and you want large data sets to be local, so you need massive storage, 32 drives, for instance. And you need the capability, again, to bring the tenets of security, manageability, and the ecosystem with it. So, very excited about that one. I think there are some use cases we're just not even ready for. We already have the technology today to put eight FPGAs in that system, direct connect, and there are very few workloads, or even talent in the customer set, able to enable that. But you've got to get there first with the technology, to allow that innovation to happen, and we want to stoke that.
Then, on the R840, this really was about the fact that once you get the data in, you're going to have to make decisions. You still need that processing power. Maybe you don't need 20,000 cores in the box like an R940xa, maybe you need a little bit less. But you do need massive storage, localized NVMe direct connect; there's more direct connect than in any server in the industry, I think, period. And it's really about streaming those analytics, making those real-time choices. So it really fits into the strategy that we're undertaking.

All right. Ashley, the last thing I wanted to cover is a bit of a preview that you showed at the show, the PowerEdge MX: modular infrastructure, no midplane, should be a lot more upgradeable. So are we beyond where blade servers have gone? Do you consider this to fit into what some call composable infrastructure? How would you position it compared to some of the other ones?

Yeah, it's just a sneak peek, but I'll tell you how we think about it. Is it a blade server or not? I'm not sure that's a question we've even considered. It's a form factor that we think is really necessary for the future. We're putting our research into a journey where we want to get to the point where you can fully utilize the resources that you bring into your environment, whether they be in your environment or someone else's. Today, so much is stranded, connected to a CPU; it's just the architecture that we have today. Whether it's memory, storage-class memory, persistent memory, GPGPUs, heterogeneous compute, FPGAs, or ASICs, memory semantics and I/O semantics have to leave the box. Then we can get to things like pooled resources that can be utilized, unbound, put together, then composed, if you want to use your word, or really just aligned around a workload, then retired and put back in the pool. APIs and software: we're starting to build that out.
It's starting to emerge from certain management orchestration layers we have today, but we're going to need that fabric. And so, as you know, we're actually showing a Gen-Z demo here today; we're starting to build that fabric that has the latency, almost memory-like latencies, from load to store and usage, and has the memory semantics that go all the way through, from the CPU all the way out to memory, so that all of a sudden the node no longer traps and strands the resources. How do you do that? You'd better have an architecture that treats everything in the box, not just the compute part, as a first-class citizen for power, for thermals, for management. Second thing: if you have a midplane, you have a point of failure, but you're also not upgradeable to these fabrics that are coming and these capabilities that are on the horizon, some of which are not even in silicon or in a lab just yet. So when you build infrastructure, let me call it infrastructure for a second, people want it as an investment. I think that's the part we've talked about. There's a lot more to come, so the team's excited to get it out there. I tried to hold them back a little bit, but we cheated a little bit and showed it.

A little demo goes a long way. Ashley, thanks for being with us. Thanks for telling the story. We appreciate the time. Look forward to seeing you down the road.

I appreciate it. You bet. Thanks, guys.

Back with more; we are live here in Las Vegas at Dell Technologies World 2018.