From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners.

I'm Stu Miniman, and welcome back to theCUBE's coverage of VMworld 2020, our 11th year doing the show, of course the global virtual event. And what do we love talking about on theCUBE? We love talking to customers. It is a user conference, of course. So I'm really happy to welcome to the program, from the University of Pisa, the Chief Technology Officer, Maurizio Davini, and joining him is Thierry Pellegrino, one of our CUBE alumni. He's the Vice President of Workload Solutions and HPC with Dell Technologies. Maurizio, Thierry, thank you so much for joining us.

Thanks, Stu. Thanks to you.

All right, so Maurizio, let's start. The University of Pisa, obviously, everyone knows Pisa, one of the famous cities, iconic out there. We all know that histories in Europe run a little bit longer than those of the venerable institutions here in the United States, which go back a couple hundred years. I have to imagine the University of Pisa has a long, storied history. So before we dig into all the tech, give our audience a little bit of what they'd find if they looked the university up on Wikipedia. What's its history?

So, the University of Pisa is one of the oldest in the world, because it was founded in 1343 by a Pope. We were authorized to do university teaching by a Pope during the late Middle Ages. It's not the oldest, of course, but it's one of the oldest in the world. It has a long history, but it has never stopped innovating. Pisa has always been good at innovating, whether in teaching, or now in the technology applied to remote teaching, computation, and scientific computing.
So we never stop innovating, and we always try to leverage new technologies and new approaches to science and teaching.

Yeah, one of your historical teachers, Galileo, taught at the university, so a phenomenal history. Help us understand, you're the CTO there, what does that encompass? How many students are there? Are there certain areas of research being done today, before we get into the specific use case?

So, consider that the University of Pisa is a campus in the sense that the university faculties are spread all over the town. A medieval town like Pisa poses a lot of problems from the infrastructure point of view, so we have worked a lot in the past to adapt a medieval town to the latest technological advances. We now have 50,000 students, and consider that Pisa is a general-purpose university, so we cover the sciences as well as letters, engineering, medicine, and so on. Over the last 20 years, the university has put a lot of effort into building an infrastructure able to develop and deploy the latest technologies for the students. For example, we have a private fiber network covering the whole town: 65 kilometers of dark fiber that belongs to the university, and four data centers, one big and three small, connected today at 200 gigabit Ethernet. We have a big data center, big for an Italian university, of course, not for a US university, plus all the infrastructure for enterprise services and scientific computing.

Yeah, Maurizio, it's great that you've had that technology foundation. I have to imagine the global pandemic, COVID-19, has had an impact. What's it been like? How is the university dealing with things like work from home? And then, Thierry, I'd love your commentary too.

Of course, no one was ready. We were hit by the pandemic, and we had to adapt our service offering, transforming from in-person to remote services.
So we did a lot of work, but thanks to the technology we had chosen, we were able to serve almost 100% of our curriculum studies remotely. We had done a lot of work in the past moving to virtualization, enabling our users to work remotely, whether on workstations, PCs, remote laboratories, or remote computation. So virtualization had already shaped our services, and of course, when the pandemic hit, we were almost ready to transform our services from in-person to remote.

Yes, I think it's true, like Maurizio said, nobody was really preparing for this pandemic. Even for Dell Technologies, it was an interesting transition. As you can probably imagine, a lot of the way we connect with customers is in person, and we've had to transition to connecting with customers remotely and digitally. We've also spent a lot of our energy trying to help the HPC and AI community fight the COVID pandemic. We've made some of our own clusters, which we use in our HPC and AI Innovation Center here in Austin, available to genomic research and other organizations fighting the virus. It's been an interesting transition. I can't believe it's already been over six months, but we've found a new normal.

Maurizio, let's get specifically into how you're partnering with Dell. You've got a strong background in the HPC space, working with supercomputers. What is it that you're turning to Dell and their ecosystem to help the university with?

So we have a long history in HPC, of course, not the biggest HPC, like what's done in the US or in the biggest supercomputer centers in Europe. We have several systems doing traditional HPC that are based on Dell Technologies' offerings. We typically host all the kinds of technologies that are available now, of course not at large scale but at small-to-medium scale, which we offer to our researchers and students.
We have a strong relationship with Dell Technologies, developing solutions together to bring the latest technologies to scientific computing. And this has helped a lot with the research that has been done during this pandemic.

Yeah, and Maurizio is being humble, but every time we have new technologies to be evaluated, of course we spend time evaluating them in our labs, but we make it a point to share the technology with Maurizio and the team at the University of Pisa. That's how we find some of the better usage models for customers and help tune configurations, whether on the processor side, the GPU side, the storage, or the interconnect. And for the topic of today, of course, with our partners at VMware, we've had some really great advancements. Maurizio and the team are what we call a center of excellence. We have a few of them across the world, where we have a unique relationship, sharing technology and collaborating on advancements. And recently, Maurizio and the team have even become one of the VMware certified centers. So it's a great marriage for this new world where virtual is becoming the norm.

Well, Thierry, you and I had a conversation earlier in the year when VMware was really rolling out their full GPU suite, and it was a big topic in the keynote; Jensen, the CEO of NVIDIA, was up on stage. VMware was talking a lot about AI solutions and how this is going to help. So bring us in. You work with a lot of the customers, Thierry. What does this enable for them, and how do Dell and VMware bring those solutions to bear?

Yes, absolutely. Here's one statistic I'll start with: would you believe that, on average, only 15 to 20% of GPUs are fully utilized? So when you think about the amount of technology at our fingertips, especially in a world today where we need that technology to advance research and scientific discoveries, wouldn't it be fantastic to utilize those GPUs to the best of our ability? And it's not just GPUs.
I think the IT industry has leveraged virtualization to get the maximum number of cycles out of CPUs, storage, and networking. Now you're bringing the GPU into the fold, and you have better utilization and also flexibility across all those resources. So what we've seen is a convergence between the IT world, which was highly virtualized, and this highly optimized world of HPC and AI, because of the resources out there. Researchers, data scientists, and companies want to be able to run their day-to-day activities on that infrastructure, but then, when they have a surge in need for research or data science, use that same environment and seamlessly move things around.

Yeah, well, Thierry, I do believe your stat. The joke we always have with anybody from a networking background is that there's no such thing as eliminating a bottleneck, you just move it. And if you talk about utilization, we've been playing the shell game for my entire career: let's optimize one thing, and then, oh, there's something else we're not doing. So it's so important. Maurizio, I want to hear from your standpoint on virtualization and HPC and AI types of uses there. What value does this bring to you, and what key learnings have you had in your organization?

So, we as a university are big users of VMware technologies, starting from traditional enterprise workloads and VDI. We started from there, in the sense that we have a quite significant installation covering almost all the services the university provides to our internal users, whether personnel, staff, or students. At a certain point, we decided to try to understand whether VMware virtualization would be good for scientific computing as well. Why? Because at the end of the day, what our internal users ask for is flexibility: flexibility in the sense of being fast to deploy, being fast to reconfigure, and having the latest bits on the software side, especially in AI research.
At the end of the day, we designed the VMware solution as, I can say, a whiteboard. We have a whiteboard, and we are able to design new solutions on this whiteboard and deploy them as fast as possible. At the end of the day, what we face in IT is not a request for maximum performance. Our researchers ask us for flexibility; they want to have the maximum possible flexibility in configuring the systems. How can I say it? We can deploy a small test cluster on the virtual infrastructure in minutes. Or we can use GPUs inside the infrastructure to test a new algorithm for deep learning. And we can use fast storage inside the virtualized environment to see how certain internally developed algorithms can leverage the latest bits in storage, like NVMe. This is why, at a certain point, we decided to try virtualization as a base for HPC and scientific computing, and we are happy.

Yeah, I think Maurizio described it: it's flexibility. And of course, if you think about optimal performance, you're looking at bare metal. But in this day and age, as I stated at the beginning, there's so much technology, so much infrastructure available, that flexibility at times trumps raw performance. So when you have two different research departments, two different parts of a company, looking for an environment, no two environments are going to be exactly the same. So you have to be flexible in how you aggregate the different components of the infrastructure. And then think about today; it's actually fantastic. Maurizio was sharing with me earlier this year that at some point, as we all know, there was a lockdown. You can't really get into a data center and move cables around or reconfigure servers to have the right ratio of memory to CPU to storage to accelerators. Having been at the forefront of this enablement has really benefited the University of Pisa and given them the flexibility they really need. Wonderful.
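Thierry's utilization argument can be sketched with a simple back-of-the-envelope model. This is purely illustrative: only the 15-20% baseline utilization figure comes from the conversation; the per-team demand numbers, pool size, and helper function below are hypothetical.

```python
# Hypothetical model of GPU pooling via virtualization (illustrative only).
# The ~15-20% dedicated-GPU baseline is quoted in the interview;
# every number below is an assumption for the sketch.

def pooled_utilization(num_gpus: int, demand_hours: list, hours_per_day: float = 24.0) -> float:
    """Fraction of available GPU-hours consumed when demand from many
    teams is scheduled onto a shared, virtualized pool."""
    capacity = num_gpus * hours_per_day
    used = min(sum(demand_hours), capacity)  # the pool absorbs demand up to capacity
    return used / capacity

# Dedicated model: each team owns one GPU but uses it a few hours a day.
team_demand = [4.0, 6.0, 3.0, 5.0]           # daily GPU-hours per team (assumed)
dedicated = [d / 24.0 for d in team_demand]  # per-GPU utilization when unshared
print(f"dedicated avg utilization: {sum(dedicated) / len(dedicated):.0%}")   # 19%

# Shared pool: two virtualized GPUs serve the same aggregate demand.
print(f"pooled utilization (2 GPUs): {pooled_utilization(2, team_demand):.0%}")  # 38%
```

The point of the sketch is the ratio, not the absolute numbers: the same 18 daily GPU-hours of demand sit at roughly 19% utilization on four dedicated cards but roughly double that on a two-GPU shared pool, which is the consolidation effect virtualizing the GPUs is meant to capture.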
Well, Maurizio, my understanding is that you're giving a presentation as part of the activities this week. Give us just a final glimpse of what you want your peers to take away from what you've done.

What we have done is something very simple, in the sense that we adapted some open-source software to our infrastructure in order to enable our system managers and users to deploy HPC and AI solutions quickly and easily on our VMware infrastructure. We started with a sort of POC: we designed a test infrastructure early this year and then moved quickly to production, because we were happy with the results. And so this is what we present: there are a lot of ways to deploy virtual HPC, but we went for a simple, open-source solution, thanks in some parts also to our friends at Dell Technologies, which enabled us to do the work and now go into production. And as Thierry said before, it helped us a lot during the pandemic, given that we were stuck at home too.

Wonderful. Thierry, I'll let you have the final word. What things are drawing customers to really dig in? Obviously, there are cost savings. Are there any other things that this unlocks for them?

Yeah, I mean, cost savings; we talked about flexibility; we talked about utilization. You don't want to have a lot of infrastructure sitting there just waiting for a job to come in once every two months. And then there's the world we live in. We all live our lives here through video conferences, or at times through the interface of our phones. Being able to have this web-based interaction with a lot of infrastructure, at times the best infrastructure in the world, makes things simpler and easier, and hopefully brings science to the fingertips of our data scientists without their having to worry about knowing every single detail of how to build up that infrastructure.
And with the help of the University of Pisa, one of our centers of excellence in Europe, we've been innovating, and everything that's been accomplished for the University of Pisa can be accomplished by our customers and our partners around the world.

Thierry, Maurizio, thank you so much for sharing, and congratulations on all you've done building out that center of excellence.

Thank you. Thank you.

Stay with us, lots more coverage from VMworld 2020. I'm Stu Miniman, and as always, thank you for watching theCUBE.