Hello and welcome to this special CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We have a conversation around AI for the enterprise and what it means. We've got two great guests: John Fanelli, vice president of virtual GPU at NVIDIA, and Maurizio Davini, CTO of the University of Pisa in Italy, a practitioner, customer and partner. We've got VMworld coming up and a lot of action happening in the enterprise. John, great to see you. Maurizio, nice to meet you remotely, coming in from Italy for this remote conference.

John, thanks for having us on again.

Yeah, nice to meet you. I wish we could be in person, face to face, but that's coming soon, hopefully.

John, we were just talking before we came on camera about AI for the enterprise. The last time I saw you in person was in a CUBE interview, and we were talking about some of the work you were doing in AI. It's gotten so much stronger and broader in NVIDIA's execution and the success you're having. Set the table for us: what is the AI for the enterprise conversation frame?

Sure. So we've been working with enterprises on how they can deliver AI, explore AI, or get involved in AI in a standard way, in the way that they're used to managing and operating their data center, running on top of their Dell servers with VMware vSphere, so that AI feels like a standard workload an IT organization can deliver to their engineers and data scientists. The flip side of that, of course, is ensuring that engineers and data scientists get the workloads positioned to them, or have access to them, in the way that they need them. So it's no longer a trouble ticket you have to submit to IT and count the hours or days or weeks until you can get new hardware. By pulling it into the mainstream data center, IT can enable self-service provisioning for those folks. So we make AI more consumable and easier to manage for IT administrators, and for the engineers and data scientists we make it easy to get access to those resources so they can get to their work right away.

Quite a bit of progress in the past two years. Congratulations on that. And it's only the beginning, it's day one. Maurizio, I want to ask you about what's going on as the CTO of the University of Pisa. What's happening down there? Tell us a little bit about what's going on. You have the Center of Excellence there. What does that mean? What does that include?

You know, the University of Pisa is one of the biggest and oldest in Italy. To give you some numbers, it's around 50,000 students and 3,000 staff between professors, researchers and other university staff. We look after the operation of the data centers and especially the support for scientific computing. This is our daily work, and it takes a lot of our time, but we are able to reserve a percentage of our time for R&D. That is where the Center of Excellence comes in. We are always looking into new kinds of technologies that we can put together to build new solutions, to do next-generation computing, as we always say. We look for the right partners to do things together. At the end of the day, it is work that is good for us and good for our partners, and it typically ends up in a production system for our university. So it's the evolution of the scientific computing environment that we have.

Yeah, and you guys have a great track record and reputation for R&D, testing software and hardware combinations, and sharing those best practices.
With COVID impacting the world, we certainly see it on the supply chain side. And John, we've heard Jensen, your CEO at NVIDIA, talk in multiple keynotes now about software, about NVIDIA being a software company. You mentioned Dell and VMware. COVID has brought this virtualization world back, and now hybrid. Those are words we used to use mostly inside the tech industry; now you're hearing hybrid and virtualization kicked around in the real world. So it's ironic that VMware and Dell and theCUBE, and eventually all of us together, are doing more virtual stuff. With COVID impacting the world, how has that changed you guys? Because software is more important. You have to leverage the hardware you've got, whether it's Dell or in the cloud. This is a huge change.

Yeah, so as you mentioned, organizations and enterprises are looking at things differently now. When tech folks think about hybrid, we always think about how the different technology works. What we're hearing from customers is that hybrid effectively translates into two days in the office and three days remote, in the future when they actually start going back to the office. So hybrid work is actually driving the need for hybrid IT, or the ability to share resources more effectively, and to think about having resources wherever you are. Whether you're working from home or you're in the office that day, you need to have access to the same resources. And that's where the ability to virtualize those resources and provide that access makes the hybrid part seamless.

Maurizio, your world has really changed. You have students and faculty; things used to be easy in the old days, physical, this network, that network. Now virtual is there. You must really be seeing an impact.

Yeah, as you can imagine, we have a big impact on every part of the IT offering, from designing and deploying new networking technologies to new kinds of operations. We found that we were no longer able to do bare metal operations directly. But from the IT point of view, we were, how can I say, prepared, in the sense that for the last three or four years we have run a parallel environment. We have bare metal and virtual. So, as you can imagine, we have a traditional bare metal HPC cluster, DGX machines, multi-GPU systems and so on. But in parallel we have developed a virtual environment that at the beginning was, as you can imagine, used for traditional enterprise applications or VDI. We have a significant Horizon farm with NVIDIA vGPU for remote desktops and remote workstations, which we are using, for example, to deliver virtual classrooms and virtual workstations. That was the typical operation we did in the virtual world. But on the same infrastructure we were able to develop first HPC in the virtual world, so virtualization of HPC resources for our researchers, and in the end an AI offering and AI software for our researchers. You can imagine our virtual infrastructure as a sort of whiteboard, where we are able to design new solutions in a fast way without losing too much performance. And in the case of AI, we will see that the performance is almost the same as bare metal, but with all the flexibility that we needed in the COVID-19 world and in the future world too.

So a couple of things there; I want to get John's thoughts as well. You mentioned performance, you mentioned hybrid, virtual. How do VMware and NVIDIA fit into all of this as you put it together? Because you bring up performance, and that's now table stakes.
Scale and performance are really on the table. Everyone's looking at it. John, how do VMware and NVIDIA fit in with the university's work?

Sure. So when it comes to enterprises, or mainstream enterprises beginning their initial foray into AI, performance and scale, and also ease of use and familiarity, all come into play when an enterprise starts to think about it. And we have a history with VMware working on this technology. In 2019 we introduced our Virtual Compute Server with VMware, which allowed us to effectively virtualize the CUDA compute driver. At last year's VMworld, in 2020, the CEOs of both companies got together and announced that we were going to bring our entire NVIDIA AI platform to the enterprise on top of vSphere. And we did that starting in March this year, with the introduction of VMware's vSphere 7 Update 2 and the early access, at the time, of NVIDIA AI Enterprise. We have now gone to production with both of those products, so customers like the University of Pisa are now using our production capabilities.

Whenever you virtualize, particularly in something like AI where performance is really important, the first question that comes up is: does it work, and how quickly does it work? Or, from an IT audience, how much did it slow down? So we've worked really closely from an NVIDIA software perspective and a VMware perspective, and we really talk about NVIDIA AI Enterprise with vSphere 7 as optimized, certified and supported. The net of that is we've been able to run the standard industry benchmarks for single-node as well as multi-node performance with roughly a 2% degradation in performance, depending on the workload, which of course varies. Effectively you're trading that small amount of performance for accessibility, ease of use, and even things like vRealize Automation for self-service for the data scientists. And that's how we've been pulling it together for the market.

Great stuff. Well, I've got to ask you, people have that reaction about the performance. I think you're being polite in how you said that; it shows the expectation is kind of skeptical. So I've got to ask you, the impact of this is pretty significant. What is it that customers can do now that they couldn't do, or didn't feel they could do, before? Because if the expectation was, well, does it work, does it go fast? It works, but performance is always a concern. What's different now? What's the bottom-line impact on what customers are doing now that they couldn't do before?

Yeah, so the bottom-line impact is that AI is now accessible for the enterprise across what I'll call their mainstream data center. Enterprises typically use consistent building blocks like the Dell VxRail products, where they use servers that are a common standard across the data center. And now, with NVIDIA AI Enterprise and VMware vSphere, they're able to manage their AI in the same way they're used to managing their data center today. So there's no retraining, there are no separate clusters, there isn't a shadow IT. This really allows an enterprise to deploy AI efficiently and cost-effectively, and because there's no real performance degradation, without compromising what their data scientists and their researchers are looking for. And then the flip side is, for the data scientist and researcher, using some of the self-service automation I spoke about earlier, they're able to get a virtual machine today that maybe has half a GPU. As their models grow and they do more exploring, they might get a full GPU, or two GPUs, in a virtual machine, and their environment doesn't change because it's all connected to the backend and storage. So for the developer and the researcher it's seamless. It's really a win for both IT and for the user. And again, the University of Pisa is doing some amazing things in terms of the workloads they're running, and they are validating that performance.
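As an aside for readers who want to sanity-check that kind of number themselves, below is a minimal sketch of a single-node throughput probe that could be run identically on bare metal and inside a vSphere virtual machine. The ResNet-50 model, batch size and iteration counts are illustrative assumptions, not the standard industry benchmarks John refers to.

```python
# Minimal single-node training-throughput probe (illustrative only; not
# the actual industry benchmarks referenced in the conversation).
# Assumes PyTorch and torchvision are installed and, ideally, that a GPU
# is visible to the VM (e.g., through an NVIDIA vGPU profile) or host.
import time
import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.resnet50().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

batch, iters = 64, 50
images = torch.randn(batch, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (batch,), device=device)

def step():
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

for _ in range(5):  # warm-up iterations before timing
    step()
if device.type == "cuda":
    torch.cuda.synchronize()

start = time.time()
for _ in range(iters):
    step()
if device.type == "cuda":
    torch.cuda.synchronize()
elapsed = time.time() - start

print(f"{batch * iters / elapsed:.1f} images/sec on {device}")
```

Comparing the images/sec figure from the virtualized and bare metal environments gives a rough view of the kind of overhead being described.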
Maurizio, weigh in on this; share your reaction to that. What can you do now that you couldn't do before? Could you share your experience?

So in our experience, of course, if you go to your data scientists or researchers, the idea of sacrificing performance for flexibility is not so well accepted at the beginning. It's okay for IT management; as John was saying, you have people who know how to deal with the virtual infrastructure, so nothing changes for them. But at the end of the day, we were able to test with our data scientists and researchers, and the performance was almost the same, really around 95% of bare metal, for our internally developed workloads. So we are not dealing with benchmarks; we have workloads that are developed internally and applied to healthcare, music generation, and some other unusual projects that we have inside. And we were able to show that the performance in the virtual world and on bare metal was almost the same, with the addition that in the virtual world you are much more flexible. You are able to reconfigure everything very fast, and you are able to design solutions for your researchers in a more flexible and effective way. We were able to use the latest technologies from Dell Technologies and NVIDIA: the latest PowerEdge servers, the latest GPUs from NVIDIA, the latest network cards from NVIDIA like the BlueField-2, and the latest switches, to set up an infrastructure that at the end of the day is a winning platform for our data scientists.

It's a great collaboration. Congratulations, it's exciting. Get the latest and greatest and get the new benchmarks out there, new playbooks, new best practices. I do have to ask you, Maurizio, if you don't mind me asking: why look at virtualizing AI workloads? What was the motivation?

For the sake of flexibility, because in the last couple of years the AI resources are never enough. If you go with bare metal installations only, you are going into a world that is developing very fast, but of course you cannot afford all the bare metal infrastructure that your data scientists are asking for. So we decided to integrate our virtual infrastructure with AI resources in order to be able to use them in different ways, in a more flexible way. Of course, we have two parallel worlds. We still have the bare metal infrastructure and we are growing it, but at the same time we are growing our virtual infrastructure, because it's flexible, because our staff are happy with how the platform behaves and they know how to deal with it. They don't have to learn anything new, so it's a sort of comfort zone for everybody.

I mean, no one ever got hurt virtualizing things, and it makes things go better and faster, building on the workloads you have.
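To illustrate the seamlessness John and Maurizio describe above, here is a hedged sketch of how a researcher's code can stay unchanged as the virtual machine grows from a fractional vGPU profile to one or two full GPUs. The ResNet-18 model and the DataParallel fallback are placeholders chosen for the example, not the University of Pisa's actual workloads.

```python
# Illustrative sketch: the same training entry point works whether the
# VM is backed by a fractional vGPU profile, one full GPU, or two GPUs.
# The model and the DataParallel choice are assumptions for the example.
import torch
import torchvision

def build_model() -> torch.nn.Module:
    model = torchvision.models.resnet18()
    n_gpus = torch.cuda.device_count()
    if n_gpus >= 1:
        model = model.cuda()
        if n_gpus > 1:
            # Spread each batch across however many GPUs the VM exposes.
            model = torch.nn.DataParallel(model)
    return model

model = build_model()
print(f"GPUs visible to this VM: {torch.cuda.device_count()}")
```

The point is simply that resizing the VM's GPU allocation on the IT side does not force the data scientist to rewrite anything on their side.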
John, I've got to ask you, you're on the NVIDIA side and you see this up close. Why do people look at virtualizing AI workloads? Is it the unification benefit? AI implies a lot of things. It implies you have access to data; it implies that silos don't exist. That's hard. Is this real? Are people actually looking at this? How is it working?

Yeah, so again, for all the benefits that AI brings, AI can be pretty complex. It's complex software to set up and to manage, and with NVIDIA AI Enterprise we're really focusing on ensuring that it's easier for organizations to use. For example, I mentioned we introduced our Virtual Compute Server, vCS, two years ago, and that has seen some really interesting adoption and some enterprise use cases. But what we found is that at the driver level it still wasn't accessible for the majority of enterprises. So what we've done is build on that with NVIDIA AI Enterprise, and we're bringing in pre-built containers that remove some of the complexity. AI has a lot of open source components, and trying to ensure that all the open source dependencies are resolved, so the AI developers, researchers and data scientists can actually do their work, can be complex. So we've brought in these pre-built containers that let you do everything from your initial data preparation and data science, using things like NVIDIA RAPIDS, to your training using PyTorch and TensorFlow, to optimizing those models using TensorRT, and then deploying them using what we call NVIDIA Triton Inference Server, which is our inference server. So it's really about helping that AI workflow become accessible, something an enterprise can manage as part of their common core infrastructure.

Yeah, having the performance and the tools available is just a huge godsend. People love that; it makes them more productive. And again, it scales up existing stuff. Okay, great stuff, great insight. I have to ask, what's next with this collaboration? This is one of those better-together situations, and it's working. Maurizio, what's next for your collaboration with Dell, VMware and NVIDIA?

For sure, we will not stop here. We are just starting to work on new things, looking for new developments, looking for the next pieces to come. You know, the virtual world is something that is moving very fast, and we will not stop here, because the outcome of this work has been very big for our research groups. And as John was saying, the fact that the whole software stack for AI is simplified is something that has been accepted very well. Of course, as you can imagine, researchers develop new things, but for people who need an integrated workflow, the work that NVIDIA has done in developing software packages and containers that give the end user the capability of running their workloads is really something that a few years ago was unbelievable. Now everything is really easy to manage.

John mentioned open source, obviously a big part of this. A quick follow-up, if you don't mind: are you going to share your results so people can look at this and have an easier path to AI?

Oh yes, of course. All the work that is done at the IT level at the University of Pisa is meant to be shared. So, as much as we have time to write it down, we are trying to find ways to share the results of the work that we are doing with our partners, Dell and NVIDIA. So for sure it will be shared.
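Picking up on the container workflow John outlined above (data preparation with RAPIDS, training with PyTorch or TensorFlow, optimization with TensorRT, and deployment with Triton Inference Server), here is a minimal, hedged sketch of what the final deployment stage can look like from a client application's point of view. The server address, model name and tensor names are placeholder assumptions, and it presumes a Triton instance is already running and serving a model.

```python
# Minimal client-side sketch for a model served by Triton Inference Server.
# Assumes `pip install tritonclient[http]` and a Triton instance already
# running on localhost:8000. The model name "my_model" and the tensor
# names "input" / "output" are placeholders, not a real deployed model.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy FP32 request matching the placeholder model signature.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
request_input = httpclient.InferInput("input", list(data.shape), "FP32")
request_input.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[request_input])
print(result.as_numpy("output").shape)
```

In practice the container images for each stage would come from the pre-built catalog John describes, but the calling pattern above is what the consuming application sees.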
We'll get that link into the comments. John, your final thoughts on the collaboration with the University of Pisa and Dell, VMware and NVIDIA. Where does this all go next?

Sure. So with the University of Pisa, we're absolutely grateful to Maurizio and his team for the work they're doing and the feedback they're sharing with us. We're learning a lot from them in terms of things we can do better and things we can add to the product, so that's a fantastic collaboration. I believe Maurizio has a session at VMworld, so if you want to learn about some of the workloads, they're doing things like music generation, COVID-19 research, and multi-node deep learning training. There's some really interesting work there, and we want to continue that partnership with the University of Pisa, again across all four of us: the university, NVIDIA, Dell and VMware.

And then on the tech side, for our enterprise customers, one of the things we didn't speak much about: I mentioned that the product is optimized, certified and supported, and I think that support cannot be understated. As enterprises start to move into these new areas, they want to know they can pick up the phone and call NVIDIA or VMware or Dell and get support for these new workloads as they're running them. We're also continuing to think beyond the developer side of things, which we spent a lot of time on today. The flip side, of course, is that when those AI apps, or AI-enhanced apps, are available, and pretty much every enterprise app today is adding AI capabilities, along with all of our partners in the enterprise software space, you can think of NVIDIA AI Enterprise as having a runtime component. So as you deploy your applications into the data center, they're automatically going to take advantage of the GPUs that you have there. And so we see this future, as you're talking about the collaboration going forward, where the standard data center building block still remains something like a VxRail 2U server, but instead of just being CPU, storage and RAM, it's going to be CPU, GPU, storage and RAM. That's going to be the norm, and every enterprise application is going to be infused with AI and be able to take advantage of GPUs in that scenario.

Great stuff, AI for the enterprise. This is a great CUBE conversation, and just the beginning; we'll be having more of these. Virtualizing AI workloads is real, and it impacts data scientists, the compute, the edge, all aspects of the new environment we're all living in. John, great to see you. Maurizio, good to meet you, all the way in Italy; looking forward to meeting in person. And good luck on your session. I just got a note on the session at VMworld: it's session 2263, I believe, so if anyone's watching and wants to check that out, we'd love to hear more. Thanks for coming on, appreciate it.

Thanks, John, for having us.

Thanks, Maurizio.

This has been a CUBE conversation. I'm John Furrier, your host. Thanks for watching. We'll talk to you soon.