From around the globe, it's theCUBE with digital coverage of Dell Technologies World. Digital experience brought to you by Dell Technologies. Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios with continuing coverage of Dell Technologies World 2020, the digital experience. We've been covering this event for over 10 years. It's virtual this year, but there's still a lot of great content, a lot of great announcements and a lot of technology being released and talked about. So we're excited to dig a little deeper with our next two guests. First off, we have Paul Perez. He is the SVP and CTO of the Infrastructure Solutions Group for Dell Technologies. Paul, great to see you. Where are you coming in from today? Yeah, I'm coming in from Austin, Texas. Austin, Texas, awesome. And joining him, returning to theCUBE, he's been on many times, Kit Colbert. He is the vice president and CTO of VMware Cloud for VMware. Kit, great to see you as well. Where are you joining us from? Thanks for having me again. I'm here in San Francisco. Awesome. So let's jump into it and talk about Project Monterey. You know, it's funny, I was at Intel back in the day, and all of our project code names used to leak out and become the product names. It's funny how these little internal project names get a life of their own, and this is a big one. And, you know, we had Pat Gelsinger on a few weeks back at VMworld talking about how significant this is, and kind of this evolution within the VMware cloud development: it's kind of past Kubernetes, past Tanzu, past Project Pacific, and now we're into Project Monterey. So first off, let's start with you, Kit. Give us kind of the basic overview of what Project Monterey is. Yep. Yeah, well, you're absolutely right.
What we did last year, we announced Project Pacific, which was really a fundamental rethinking of VMware Cloud Foundation with Kubernetes built in, right, Kubernetes at the core, a core part of the architecture. And the idea there was really to better support modern applications, to enable developers and IT operations to come together and work collaboratively toward modernizing a company's application fleet. And as you look at companies starting to be successful there, starting to run these modern applications, what you found is that the hardware architecture itself needed to evolve, needed to update, to support all of the new requirements brought on by these modern apps. And Project Monterey is exactly that. It's a rethinking of the underlying hardware architecture of VMware Cloud Foundation. So you can think of it this way: Project Monterey, excuse me, Project Pacific is really kind of the top half, if you will, the Kubernetes consumption experiences, great for applications. Project Monterey comes along as the second step in that journey, really being the bottom half, fundamentally rethinking the hardware architecture and leveraging SmartNIC technology to do that. It's pretty interesting, Paul. You know, there's a great shift in this whole move from infrastructure driving applications to applications driving infrastructure. And then we're seeing, you know, obviously the big move with big data. And again, I think as Pat talked about in his interview, with NVIDIA being at the right time, at the right place with the right technology, and this, you know, kind of groundswell of GPU and now DPU, you know, helping to move those workloads beyond just kind of where the CPU used to do all the work. This is even, you know, kind of taking it another level. You guys are the hardware guys and the solutions guys. As you look at this kind of continuing evolution, both of workloads as well as infrastructure, how does this fit in?
Well, Jeff, how this fits in is modern applications and modern workloads require modern infrastructure, right? And Kit was talking about the infrastructure overlay that VMware is awesome at developing. I was coming at this from the emergence of data-centric workloads and some of the implications of that, including silicon diversity, heterogeneous computing, and the need to disaggregate, to be able to combine many resources together as opposed to trying to shoehorn something into a mechanical chassis. And if you disaggregate, you have to be able to compose on demand. And when Kit and I started comparing notes, we realized that we were hunting on a convergent trajectory, and we decided to team up and partner. So it's interesting, because part of the composable philosophy, if you will, is to break the components of compute, storage and networking down into as small pieces as possible, and then you can assemble the right amount when you need it to attack a particular problem. But you're talking about a whole different level of bringing the right hardware to bear for the solution. When you talk about SmartNICs and you talk about GPUs and DPUs, data processing units, and even FPGAs and some of these other things, you're now starting to offload a lot of work from the core CPU to some of these more appropriate devices. That said, how do people make sure that the right application ends up on the right infrastructure? That is, if it's appropriate, using more of a Monterey-based solution versus more of a traditional one, depending on the workload. How's that going to get all sorted out and routed within the actual cloud infrastructure itself? That's probably back to you, Kit. Yeah, sure. So I think it's important to understand kind of what a SmartNIC is and how it works in order to answer that question. Because what we're really doing, to kind of jump right to it, I guess, is giving an API into the infrastructure.
And this is how we're able to do all the things that you just mentioned. But what is a SmartNIC? Well, a SmartNIC is essentially a NIC with a general-purpose CPU on it, really a whole CPU complex, in fact, kind of a whole system, a whole server, right there on that NIC. And so what that enables is a bunch of great things. So first of all, to your point, we can do a lot of offload. We can actually run ESXi on that NIC. We can take a lot of the functionality that we were doing before on the main server CPU, things like network virtualization, storage virtualization, security functionality, and move that all off onto the NIC. And that makes a lot of sense, because really what we're doing with all those things is handling different sorts of I/O datapaths: as the network traffic comes through, doing automatic load balancing, firewalling for security, delivering storage, perhaps remotely. And so the NIC is actually a perfect place to put all of these functionalities, right? You can not only move them off the core server CPU, but you can get a lot better performance, because you're now right there on the datapath. So I think that's the first really key point: you can get that offload. But then once you have all that functionality there, you can start doing some really amazing things, like this ability to expose additional virtual devices onto the PCI bus. This is another great capability of a SmartNIC. So when you plug it physically into the motherboard, it's a NIC, right, you can see that. And when it starts up, it looks like a NIC to the motherboard, to the x86 system. But then via software, you can have it expose additional devices. It could look like a storage controller, or it could look like an FPGA, really any sort of device. And you can do that not only for the local machine where it's plugged in, but potentially for remote machines as well, with the right sorts of interconnects.
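To make the idea concrete, here is a toy Python model of what Kit describes: a card that boots looking like a plain NIC, absorbs offloaded infrastructure services, and then exposes additional virtual devices to the host via software. All class and device names here are illustrative assumptions for the sketch; real SmartNICs do this in firmware and PCIe mechanisms, not Python.

```python
# Conceptual model of a SmartNIC: offload of infrastructure services plus
# software-defined exposure of virtual PCI devices. Purely illustrative.

class VirtualDevice:
    """A device the host sees on its PCI bus."""
    def __init__(self, kind):
        self.kind = kind  # e.g. "nic", "storage-controller", "fpga"

class SmartNIC:
    def __init__(self):
        # At power-on the card presents itself to the host as a plain NIC.
        self.exposed = [VirtualDevice("nic")]
        self.offloaded_services = []

    def offload(self, service):
        # Move an infrastructure service (networking, storage, security)
        # off the host CPU and onto the NIC's own CPU complex.
        self.offloaded_services.append(service)

    def expose(self, kind):
        # Via software, present an additional device to the host,
        # even though physically there is only the NIC.
        dev = VirtualDevice(kind)
        self.exposed.append(dev)
        return dev

nic = SmartNIC()
nic.offload("network-virtualization")
nic.offload("storage-virtualization")
nic.expose("storage-controller")
nic.expose("fpga")
print([d.kind for d in nic.exposed])  # ['nic', 'storage-controller', 'fpga']
```

The point of the sketch is the asymmetry: the host only ever sees `exposed` devices, while the services actually doing the work run on the NIC itself.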
And so what this creates is a whole new sort of cluster architecture. And that's why we're really so excited about it, because you get all these great benefits in terms of offload, performance improvement, security improvement, but then you get this great ability to get very dynamic disaggregation and composability. So, Kit, how much of it is the routing of the workload to the right place, right? Say it's super data-intensive and it wants a lot of GPU, versus actually better executing the operation once it gets to the place where it's going to run. Yeah, it's a bit of a combination, actually. So the powerful thing about it is that in a traditional world, when you run an application on a server, that app can really only use the devices local to that server. Yes, there is some newer stuff like NVMe over Fabrics where you can remote certain types of storage capabilities, but there's no real general-purpose solution to that yet. Generally speaking, that application is limited to the local hardware devices. Well, the great part about what we're doing with Monterey and with the SmartNIC technology is that we can now dynamically remote, or expose, devices from other hosts. And so where that application runs matters a little bit less now, in the sense that we can give it the right sorts of hardware it needs in order to operate. If you had, let's say, a few machines with FPGAs, normally an app that needed one of those FPGAs had to run locally on one of those machines, but now it can actually run remotely, and you can better balance out things like compute requirements versus specialized accelerator requirements. And so I think what we're looking at, especially in the context of VMware Cloud Foundation, is bringing that all together. We can do the scheduling, figure out the best host for a workload to run on, based on all these considerations.
And then if we are missing, let's say, a physical device that it needs, well, we can remote that and sort of fill that gap. Right, right. That's great. Paul, I want to go back to you. You just talked about coming at this problem from a data-centric point of view, and you're running infrastructure, and you're the poor guy that's got to catch the asymptotic, the giant up-and-to-the-right exponential curves of data flow and data quantity. How is that impacting the way you think about infrastructure, designing infrastructure, changing infrastructure and kind of future-proofing infrastructure, when you know that just around the corner is 5G and IoT, and we've seen nothing yet in terms of the data flow? Yeah, so Jeff, I come at this from two angles, right? One that we talked about briefly is the evolution of the workloads themselves. The other angle, which is just as important, is the operating model that customers are wanting to evolve to. And in that context, we talk a lot about how cloud is an operating model, not necessarily a destination, right? So one way to get at what Kit was talking about is that in data-centric computing, you have a separation of control plane and data plane, where the data plane runs on optimized silicon, GPUs, FPGAs, offload engines, and the control plane can run on stuff like x86 and ARM. The nice thing about SmartNICs is that SmartNICs have ARM cores, so you can implement some data plane and some control plane there. And they can also be the gateway, because you talked about composability. What has been done up until now is early and coarse-grained, right? We're carving software-defined infrastructure out of predefined hardware blocks. What we're talking about is making GPUs resident on a fabric, persistent memory resident on a fabric, and NVMe over a fabric, and being able to compose computing topologies on demand to realize an application's intent. We call that intent-based computing. Right.
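The scheduling-plus-remoting idea that Kit and Paul describe, pick the host with the compute headroom, then compose any missing devices from elsewhere on the fabric, can be sketched as a few lines of Python. The host names, device names and the `place` helper are all hypothetical, a minimal illustration of the composition logic rather than any real VMware Cloud Foundation or Dell API.

```python
# Toy model of intent-based composition: choose a host for a workload,
# then "remote" any devices that host lacks from donors on the fabric.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpus_free: int
    local_devices: set = field(default_factory=set)

def place(workload_cpus, needed_devices, hosts):
    """Pick the host with the most free CPU, then satisfy missing
    devices by borrowing them from other hosts over the fabric."""
    candidates = [h for h in hosts if h.cpus_free >= workload_cpus]
    if not candidates:
        raise RuntimeError("no host with enough CPU")
    target = max(candidates, key=lambda h: h.cpus_free)
    remoted = {}
    for dev in needed_devices:
        if dev in target.local_devices:
            continue  # already local, nothing to compose
        donor = next((h for h in hosts
                      if h is not target and dev in h.local_devices), None)
        if donor is None:
            raise RuntimeError(f"device {dev} not present on the fabric")
        remoted[dev] = donor.name  # expose donor's device to the target
    return target.name, remoted

hosts = [
    Host("host-a", cpus_free=16, local_devices={"gpu"}),
    Host("host-b", cpus_free=64, local_devices=set()),
    Host("host-c", cpus_free=8, local_devices={"fpga"}),
]
# host-b has the CPU headroom; the gpu is remoted from host-a
# and the fpga from host-c.
print(place(workload_cpus=32, needed_devices={"gpu", "fpga"}, hosts=hosts))
```

The design point mirrors the conversation: the app no longer has to land on the machine that physically owns the accelerator, so compute and specialized devices can be balanced independently.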
Well, just to follow up on that too: cloud as an attitude, or as an operating model, or whatever you want to say, not necessarily a place or a thing, has changed things. I mean, how has that shifted your infrastructure approach? Because you've got to support old-school, good old data centers, you've got some stuff running on public clouds, and now you've got hybrid clouds and multi-clouds, right? So we know, from being out in the field, that people have workloads running all over the place. But they've got to control it, and they've got compliance issues and a whole bunch of other stuff. So from your point of view, as you see the desire for more flexibility, the desire for more infrastructure-centric support for the workloads people want to run, and the increasing amount of those that are more data-centric as we move to hopefully more data-driven decisions, how has it changed your strategy, and what does it mean to partner and have a real nice formal relationship with the folks over at VMware? Well, I think that regardless of how big a company is, it's always prudent, and that's how I approach my job, right? Architecture is about balance and efficiency, and it's about reducing contention. And we like to leverage industry R&D, especially in cases where one plus one equals three, right? In the case of Project Monterey, for example, one of the collaboration areas is in improving the security model and being able to provide more air-gapped isolation, especially when you consider that enterprises want to behave as service providers internal to their companies, and therefore this is important. And because of that, I think that there's a lot of things we can do between VMware and Dell, blending hardware and, for example, assets like NSX in a different way that will give customers higher scalability and performance and more control.
Beyond VMware and Dell EMC, I think that we're partnering with, obviously, the SmartNIC vendors, because the SmartNICs present the gateway to those data plane assets, but also with companies that are innovating in data-centric computing, for example, NVIDIA. Right, right. And I think that what we're seeing is, while NVIDIA has done an awesome job of targeting their capability at AI and ML workloads, what we realize is that applications today depend on platform services, right? Up until recently, those platform services have been databases, messaging pipes, Active Directory. Moving forward, I think that within five years, most applications will depend on some form of AI/ML service. So I can see an opportunity to go mainstream with this. Right, right. Well, it's great you bring up NVIDIA, and I'm just going to quote one of Pat's lines from his interview. He talked about Jensen from NVIDIA actually telling Pat, hey Pat, I think you're thinking too small. I love it. Let's do the entire AI landscape together and make AI and ML enterprise-class workloads first-class citizens on VMware and Tanzu. So I love the fact that Pat's been around a long time, an industry veteran, but still kind of accepted the challenge from Jensen to really elevate AI and machine learning, via GPUs, to first-class citizen status. And the other piece obviously that's coming up is edge. So it's a nice shot of adrenaline, and Kit, I wonder if you can share your thoughts on that, in kind of saying, hey, let's take it up a notch, a significant notch, by leveraging a whole other class of compute power within these solutions. Absolutely. Yeah, so I mean, I'll go real quick. I mean, it's funny, because not many people have ever challenged Pat by saying he doesn't think big enough, because usually he's always blown us away with what he wants to do next. But I think it's, you know, it's good though. It's good to keep us on our toes and push us a bit, right? All of us within the industry.
And so I think a couple of things. You know, to go back to your previous point around cloud as a model, I think that's exactly what we're doing: trying to bring cloud as a model even on-prem. And it's a lot of these kind of core hardware architecture capabilities that we need to enable, the biggest one in my mind being an API into the hardware, so that applications can get what they need. And going back to Paul's point, this notion of these AI and ML services, you know, they have to be rooted in the hardware, right? We know that in order for them to be performant, for them to support what our customers want to do, we need to have that deeply integrated into the hardware, all the way up. But then it also becomes a software problem. Once we've got the hardware solved, once we've got that architecture locked in, how can we, as easily as possible, as seamlessly as possible, deliver all those great software capabilities? And so, you know, you look at what we've done with the NVIDIA partnership, things around the NVIDIA GPU Cloud, and really bringing that to bear. And so then you start having this really great full-stack integration, all the way from a very powerful hardware architecture that, again, is driven by API, to the infrastructure software on top of that, and then all these great AI tools, toolchains and capabilities with things like the NVIDIA NGC. So that's really, I think, where the vision's going. And we've got a lot of the basic parts there, but obviously a lot more work to do going forward. Right. Yeah, I would say that, you know, look, Jeff, we're on a journey, right? We want this journey, however, to happen very fast.
And initially, what we'll focus on is disaggregating, I think, two things: disaggregating infrastructure services so there's no contention with applications, the customer's actual workload applications, you know, and also enabling offload of the data path onto optimized silicon. Over time, just like we have vMotion over a wide area, there's an opportunity to do something like that with mobility. You think about the progression from bare metal to VMs to containerized environments, and containerized environments are way more dynamic and more ephemeral, right? They expect hardware to be just as dynamic and composable to suit their needs in real time. And I think that's where we're headed. Right. So let me throw in a monkey wrench in terms of security, right? Now this thing is much more flexible, it's much more software-defined. How is that changing the way you think about security and baking security in throughout the stack? Go to you first, Kit. Yeah, yeah. So this actually enables a lot of really powerful things. First of all, from an architecture and implementation standpoint, you have to understand that we're really running two copies of ESXi on each physical server now. We've got the one running on the x86 side, just like normal, and now we've got one running on the SmartNIC as well. And so, as I mentioned before, we can move a lot of those networking, security, et cetera, capabilities off to the SmartNIC. And what this is going toward is what we call a zero-trust security architecture, this notion of having real defense in depth at many different layers and in many different areas.
While obviously the hypervisor and the virtualization layer provide a really strong level of security, even when we're doing it completely on the x86 side, now that we're running on a SmartNIC, that's additional defense in depth, because the ESXi on x86 doesn't really know about, and doesn't have direct access to, the ESXi running on the SmartNIC. So the ESXi on the SmartNIC can be this kind of more well-defended position. Moreover, now that we're running those security functionalities directly on the datapath in the SmartNIC, we can do a lot more with them. We can run a lot deeper analysis, you know, talk about AI and ML, bring a lot of those capabilities to bear here to actually improve the security profile. And finally, I'd say there's this notion of distributed security as well: you don't necessarily want these individual chokepoints on the physical network, but instead you distribute the security policies and enforcement to everywhere a server is running, i.e. everywhere a SmartNIC is. And that's what we can do here. So it really takes a lot of what we've been doing with things like NSX, but now connects it much more deeply into hardware, allowing for better performance and security. Yeah, you know, on our side, a common attack method is to intercept the boot of a physical server. And, you know, I'm actually very proud of our team, because the US National Security Agency recently published a white paper on best practices for secure boot, and they picked our implementation of hardware root of trust and secure boot as the reference standard, right? Moving forward, imagine an environment where even if you gain control of the server, that doesn't allow you to change files or updates. So we're moving the root of trust to be in that air-gapped domain that Kit talked about. And that gives us way more capability for zero-trust operations. Right, right. Paul, I've got to ask you, I had Sam Burd on the other day, your peer who runs the PC group.
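The distributed-security idea Kit outlines, push one policy to every SmartNIC so enforcement happens wherever a server lives rather than at a central chokepoint, can be sketched like this. The classes, rule format and default-deny behavior are assumptions made for the sketch; this is a conceptual model, not the actual NSX implementation.

```python
# Sketch of distributed policy enforcement: one policy, many enforcement
# points (one per SmartNIC), instead of a single network chokepoint.

class SmartNicEnforcer:
    """Per-host enforcement point running on the SmartNIC."""
    def __init__(self, host):
        self.host = host
        self.rules = []

    def apply_policy(self, rules):
        # Each NIC holds a full copy of the policy for local enforcement.
        self.rules = list(rules)

    def allow(self, src, dst, port):
        # Default-deny: traffic passes only if an explicit rule matches.
        return (src, dst, port) in self.rules

class PolicyManager:
    """Central control plane that distributes policy to every NIC."""
    def __init__(self, enforcers):
        self.enforcers = enforcers

    def distribute(self, rules):
        for e in self.enforcers:
            e.apply_policy(rules)

nics = [SmartNicEnforcer(f"host-{i}") for i in range(3)]
mgr = PolicyManager(nics)
mgr.distribute([("web", "app", 8443)])

# The same decision is reached at every host; there is no single
# chokepoint an attacker can route around.
print(all(n.allow("web", "app", 8443) for n in nics))  # True
print(any(n.allow("web", "db", 5432) for n in nics))   # False
```

Note the separation: policy is defined once centrally but enforced locally on every datapath, which is what moving these functions onto the SmartNIC makes cheap.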
Well, he's a little bit higher up. A little bit higher than you? Okay, well, I just promoted you, so that's okay. But it's really interesting, because we were talking about how it was literally like 10 years ago that the death-of-the-PC article came out, when Apple introduced the tablet. And, you know, we talked about what phenomenal devices PCs continue to be and how they evolve. And then it's just funny how that now dovetails with this whole edge conversation, when people don't necessarily think of a PC as a piece of the edge, but it is a great piece of the edge. So from an infrastructure point of view, you know, to have that kind of presence within the PCs, and potentially that intelligence, and again, this kind of whole other layer of interaction with the users, and an opportunity to define how they work with applications and prioritize applications. I just wonder if you can share how nice it is to have that, you know, kind of in your back pocket, to know that you've got a whole other layer of visibility and connection with the users beyond just simply the infrastructure. So actually, within the company we develop within a framework that we call core, edge, multi-cloud, right? Core data centers, enterprise edge, IoT, and then off-premises, because it is a multi-cloud world. And within that framework, we consider our Client Solutions Group products to be part of the edge, and we see a lot of benefit. I'll give an example of a healthcare company that wants to do real-time analytics, regardless of whether it's on a laptop or maybe moved to a backend data center, right? Whether it's at a hospital clinic or at a patient's home. It gives us a broader innovation surface and a lot of synergy. And actually, a lot of people may not appreciate this, but the most important function within Sam's team, I consider to be the experience design team.
So being able to design user flows and customer experience on top of the technology is very powerful. That's great. That's great. So we're running out of time, and I want to give you each the last word. You've both been in this business for a long time. This is brand new stuff, right? Containers aren't new, Kubernetes is still relatively new and exciting, Project Pacific was relatively new, and now Project Monterey. But you guys are, you know, multi-decade veterans in this thing. As you look forward, what does this moment represent compared to some of the other shifts that we've seen in IT, you know, generally, but in kind of the consumption of compute and this application-centric world that just continues to grow? I mean, software is eating everything. We know it. You guys live it every day. Where are we now? And, you know, what do you see, maybe, I don't want to go too far out, over the next couple of years within the Monterey framework? And then if you have something else generally, you can add that as well. Paul, why don't we start with you? Well, I think, look, on a personal level, and humility aside, I have a long string of very successful endeavors in my career. When I came back to Dell a couple of years ago, one of the things that I told Jeff Clarke, our vice chairman, was, hey, Dell Technologies is a big canvas and I intend to paint my masterpiece. And I think that, you know, Monterey is part of that, and what we're doing in support of Monterey is also part of that. I think that you will see our initial approach focus on core data centers. I can tell you that we know how to express it at the edge, and we also know how to express some of it even in a multi-cloud world. So I'm very excited, and I know that I'm going to be busy for the next few years. Kit, to you. Yeah. So, you know, it's funny.
You talk to people about SmartNICs, especially folks that have been around for a while, and what you hear is, hey, you know, people were talking about SmartNICs 10 years ago, 20 years ago, that sort of thing, and then they kind of died off. So what's different now? And I think the big difference now is a few things. First of all, the core technology of SmartNICs has dramatically improved. Second, we now have a powerful software infrastructure layer that can take advantage of it. And finally, applications have a really strong need for it, again, with all the things we talked about, the need for offload. So I think there are some real fundamental shifts that have happened over the past, let's say, decade that have driven the need for this. And so this is something that I believe strongly is here to stay. You know, both we at VMware as well as Dell are making a huge bet on this. And not only that, not only is it good for customers, it's actually good for the operators as well. So whether this is part of VCF that we deliver to customers for them to operate themselves, just like they always have, or it's part of our own cloud solutions, things like VMware Cloud on Dell, this is going to be a core part of how we deliver our cloud services and infrastructure going forward. So we really do believe this is kind of a foundational transition that's taking place. And as we talked about, there is a ton of additional innovation that's going to come out of it. So I'm really, really excited for the next few years, because I think we're just at the start of a very long and very exciting journey. Awesome. Well, thank you both for spending some time with us and sharing the story, and congratulations. I'm sure a whole bunch of work from a whole bunch of people went into getting where you are now. And as you said, Paul, the work has barely just begun. So thanks again. Thank you. All right. He's Paul, he's Kit, I'm Jeff.
You're watching theCUBE's continuing coverage of Dell Technologies World 2020, the digital experience. Thanks for watching. We'll see you next time.