Artificial intelligence: the words are full of possibility, yet to many it may seem complex, expensive, and hard to know where to get started. How do you make AI real for your business? At Dell Technologies, we see AI enhancing business, enriching lives, and improving the world. Dell Technologies is dedicated to making AI easy, so more people can use it to make a real difference. You can adopt and run AI anywhere, with your current skill sets, with AI solutions powered by PowerEdge servers and made portable across hybrid multi-clouds with VMware. Plus, solve I/O bottlenecks with breakthrough performance delivered by Dell EMC Ready Solutions for HPC Storage and Data Accelerator, and enjoy automated, effortless management with OpenManage systems management, so you can keep business insights flowing across a multi-cloud environment. With an AI portfolio that spans from workstations to supercomputers, Dell Technologies can help you get started with AI easily and grow seamlessly. AI has the potential to profoundly change our lives. With Dell Technologies, AI is easy to adopt, easy to manage, and easy to scale. And there's nothing artificial about that. From the CUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. Hi, I'm Stu Miniman, and welcome to this special launch with our friends at Dell Technologies. We're going to be talking about AI and the reality of making artificial intelligence real. Happy to welcome to the program two of our CUBE alumni: Ravi Pendekanti, senior vice president of server product management, and Thierry Pellegrino, vice president of data-centric workloads and solutions and high-performance computing, both with Dell Technologies. Thank you both for joining. Thanks, Stu, for having me. So, you know, across the industry, AI has been this huge buzzword.
But one of the things I've actually liked, one of the differences I see when I listen to the vendor community talking about AI versus what I saw too much of in the big data world, is this: it used to be, oh, there's the opportunity, and data is so important. And yes, that's real, but it was a very wonky conversation, and the promise, the translation of what this meant to the real world, didn't always connect. And we saw many of the big data solutions fail over time. With AI, and I've seen this in meetings with Dell, the talk is about those business outcomes, about how AI is helping make things real. So maybe we can start there. I know there are some product announcements and things we're going to get into, but Ravi and Thierry, talk to us a little bit about the customers that you've been seeing and the impact that AI is having on their businesses. Sure, Stu, I'll take a shot at it. A couple of things. Start looking at, for example, the autonomous vehicles industry, or the manufacturing industry, where people are looking at building better tools for anything they need to do on their manufacturing floor. With autonomous vehicles, you've got Zenuity, a joint venture between Volvo Cars and Veoneer, which is using our whole product suite, right from the hardware to the software, to do multiple iterations of ensuring that the software and the hardware come together pretty seamlessly, and, more importantly, ingesting probably tens of petabytes of data to ensure that we've got the right training algorithms in place. So that's a great example of how we are helping some of our customers today, moving away from just a modeling scenario to something that customers are able to use right now. Yeah, and if I can add one more.
Eni, one of our, we call them more partners than just customers, in Italy. In the energy sector, they've been really driving innovation with us. We just deployed a pretty large 8,000-accelerator cluster with them, which is the largest commercial cluster in the world, and what they're focusing on is digital transformation and the development of energy sources. That's really important nowadays; the planet's not getting younger, and we have to be really careful about the types of energy that we utilize to do what we do every day. They've put a lot of innovation into it, and we've helped set up the right solution for them. We'll talk some more about what they've done with that cluster later during our chat, but it is one of the tangible examples, a deployment that is being put to good use to help with AI. Great. Well, I love starting with some of the customer stories. Really glad we're going to be able to hear from some of those customers a little bit later in this launch. But Ravi, maybe give us a little bit as to what you're hearing from customers, the overall climate in AI. Obviously there are so many challenges facing people today, but specifically around AI, what are some of the hurdles they might need to overcome to be able to make AI real? I think there are two important pieces I can share with you. Number one, as much as we talk about AI and machine learning, one of the biggest challenges customers have today is ensuring that they have the right amount and the right quality of data to go out and do the analytics per se. Because if you don't, it's GIGO: garbage in, garbage out. So one of the biggest challenges our customers have today is ensuring that they have the most pristine data to work on, and that takes quite a bit of effort.
Number two, a lot of times, one of the challenges they also have is having the right skill set to go out and execute the AI part of the work. I think those are the two big challenges we hear of, and that doesn't seem to be changing in the very near term, given the fact that, and I think Forbes recently had an article on this, less than 15% of our customers probably are using AI and machine learning today. So that speaks to the challenges, and the opportunity ahead, if I may. All right, so Ravi, give us the news. Tell us the updates from Dell Technologies. How are you helping customers with AI today? Going back to one of the challenges I mentioned, which is not having the right skill set: one of the things we are doing at Dell Technologies is making sure we provide not just the products, but also the Ready Solutions that we are working on with, for example, Thierry and his team. We're also working on validated configs, also called reference architectures. The whole idea behind this is that we want to take the guesswork out for our customers and actually go ahead and prescribe things that we have already tested to ensure that the integration is right. There's a right-sizing attribute, so they'll know exactly the kind of product they have to pick up, and not worry about finding the time and the resources needed to get there. Those are probably the two biggest things we are doing to help our customers make the right decision and execute seamlessly and on time. Excellent. So Thierry, maybe give us a little bit of a broader look as to Dell's participation in the overall ecosystem when it comes to what's happening in AI, and why this is a unique time for the industry. Yeah, I mean, Stu, I think we all live it. I'm right here in my home, trying to ensure that business continues to operate.
And it's important to make sure that we're also there for our customers, right? The fight against COVID-19 is changing what's happening, around the quarantines, et cetera. So Dell, as a participant not only in the AI world that we live in and in enabling AI, is also a participant in a lot of the communities. We've recently joined the COVID-19 High Performance Computing Consortium, and we've also made a lot of resources available to researchers and scientists leveraging AI in order to make progress towards a cure and potentially a vaccine against COVID-19. As examples, we have our own supercomputers in the lab here in Austin, Texas, and we've given access to some of our partners; TGen is one example. At the beginning of our chat, I mentioned Eni. They had barely deployed the cluster with us earlier this year when COVID-19 started hitting, so they've done what's the right thing to do for the community and humanity: they've made the resource available to scientists in Europe. And TACC, just down the road here, which has the largest academic supercomputer, which we deployed with them too, is doing exactly the same thing. So this is one of the real examples that is very timely, and it's happening right now; we hadn't planned for it, but we're there with our customers. The other piece, and this is probably going to be a trend, is that healthcare is going through an explosion of data. Ravi mentioned it at the beginning. You're talking about 2,000 exabytes, about 3,000 times the content of the Library of Congress. It's incredible, and that data on its own is useless. I mean, it's great, we can put it on our great Isilon storage, but you can also see it as an opportunity to get business value out of it, and that's going to require a lot more resources with AI.
So a lot happening here that's real. And if I can get into more of the science of it: because it's healthcare, because it's the industry, we see now that our family members at VMware, part of the Dell Technologies portfolio, are getting even more relevance in the discussion. The industry is based on virtualization, and VMware is the number one virtualization solution for the industry. So now we're trying to weave the reality of the IT environment in with the needs of AI and data science and HPC. You will see that VMware just added the Kubernetes control plane to vSphere, and we're leveraging that to have a very flexible environment where, on one side, we can do some data science, and on the other side, we can go back to running enterprise-class software on top of it. All of this is great, and we're capitalizing on it with validated solutions and validated designs, and I think that's going to put a lot of power in the hands of our customers, all of it based on their feedback and their asks back to us. Yeah, if I may add, just to build on that interesting comment: we are actually looking at, and very shortly we'll be talking about, the ability to, for example, preload vSphere on all our servers. Again, that essentially means we're going to cut down the time our customers need to go ahead and deploy on their sites. Yeah, excellent. There's definitely been very strong feedback from the community; we did a bunch of videos around the vSphere 7 launch. You know, Thierry, we actually did an interview with you a while back at your big lab. Jeffrey got to see the supercomputers behind what you were doing. Maybe bring us inside a little as to some of the new pieces that help enable AI. It often gets lost on the industry; it's like, oh yeah, well, we've got the best hardware to accelerate or enable these kinds of workloads.
So bring us in as to the engineering solution sets that are helping to make this a reality today. Yeah, and truly, Stu, you've been there, you've seen the engineers in the lab, and that's more than AI being real; that is doubly real, because we spend a lot of time analyzing workloads and customer needs. We have a lot of PhD engineers in there, and what we're working on right now is kind of the next wave of AI and HPC enablement. As we all know, the consumption model, the way that we want to have access to resources, is evolving: from something that is directly in front of us, the one-to-one ratio, to a one-to-many ratio once virtualization became more prevalent. And GPUs historically have been allocated on a per-user basis, or sometimes, with a slightly modified view, more than one user per GPU. But with the addition of Bitfusion to the VMware portfolio, and Bitfusion now being part of vSphere, we're building up GPU-as-a-service solutions through a VMware Validated Design that we are launching. That's going to give more flexibility, and the key here is flexibility. We have the ability, as you know, with the VMware environment, to bring in some security, some flexibility through moving the workloads, and, let's be honest, some ties into cloud models, and we have our own set of partners; we all know the big players in the industry too. But it's all about flexibility and giving our customers what they need and what they expect in the world we live in today. Yeah, Ravi, I guess that brings us to one of the key pieces we need to look at here: how do we manage across all of these environments, and how does AI fit into this whole discussion between what Dell and VMware are doing with vSphere, pulling in new workloads? Stu, actually, there are a couple of things. There is really nothing artificial about the real intelligence that comes through with all the artificial intelligence we're working on.
And so one of the crucial things I think we need to ensure we talk about is that it's not just a product here or a storage product there. The crucial thing is that we are looking at it from an end-to-end perspective: everything from ensuring that we have the right workstations, to the servers, to the storage, making sure that it's well protected, all the way through working with an ecosystem of software vendors. So first and foremost, there's the whole integration piece, making sure we realize the ecosystem. But more importantly, it's also ensuring that we help our customers by taking the work out. Again, I can't help but emphasize that there are customers who are looking at different points of entry. For example, somebody may be looking at an FPGA, somebody might be looking at GPUs. FPGAs, as you probably know, are great because their price points and their thermal, should I say power, needs are a lot lower than GPUs'. But on the flip side, there's a need for a set of folks who can actually program them, which is why they're called field-programmable gate arrays. My point in all this is that it's important that we provide the right end-to-end perspective, making sure that we are able to show the integration, show the value, and also provide the options, because it's really not a cookie-cutter approach where you can take a particular solution and think that it'll fit the needs of every single customer. It doesn't, even within the same industry, for that matter. So the flexibility that we provide, all the way to the services, is truly our attempt at Dell Technologies to make the entire gamut of solutions available for the customer to go out and pick and choose what serves their needs best. All right, well, Ravi and Thierry, thank you so much for the updates. We're going to turn it over to actually hear from some of your customers, talk about the power of AI, and hear from their viewpoint how real these solutions are becoming. Love the play on words there about
enabling real artificial intelligence. Thanks so much for joining. After the customers, looking forward to the VMware discussion. We want to put robots into the world's dullest, deadliest, and dirtiest jobs. We think that if we can have machines doing the work that puts people at risk, then we can allow people to do better work. Dell Technologies is the foundation for a lot of the work that we've done here. Every single piece of software that we develop is simulated dozens or hundreds or thousands of times, and having reliable compute infrastructure is critical for this. A lot of technology has matured to actually do something really useful that can be used by non-experts. We try to predict when a system fails, we try to predict illness in patients, turn things into images. At the end of the day, we now have machines that learn how to speak a language from zero. Everything we do at Epsilon is centered around data and our ability to get the right message to the right person at the right time. We apply machine learning and artificial intelligence so that, in real time, you can adjust those campaigns to ensure that you're getting the most optimized message. Zenuity is a joint venture between Volvo Cars and Veoneer. Our pure focus is automated driving and advanced driver assistance systems. Zenuity is really based on safety and how we can actually make lives better. Where you typically get bored and distracted in the car, if we can take those kinds of situations away, it will bring accidents down about 70 to 80%. What I appreciate with Dell Technologies is the overall solution that they deliver to us. Being able to deliver the full package has been a major differentiator compared to their competitors. All right, welcome back. To help us dig into this discussion, happy to welcome to the program Krish Prasad, senior vice president and general manager of the vSphere business, and Josh Simons, chief technologist for the high-performance computing group, both of them with VMware.
Gentlemen, thanks so much for joining us. Thank you for having us. All right, Krish, when VMware made the acquisition, everybody was looking at what this would do for the space; of course, we're talking GPUs, we're talking about things like AI and ML. So bring us up to speed on the news today, as to what VMware is doing with the Bitfusion technology. Yeah, today we have a big announcement. I'm excited to announce that we are taking the next big step in our AI/ML and modern application strategy with the launch of Bitfusion, which has now been fully integrated with the vSphere 7 platform, and we will be releasing this very shortly to the market. As you said, when we acquired Bitfusion a year ago, we showcased its capabilities at the VMworld event, and at that time we laid out a strategy that positioned Bitfusion as the cornerstone of our platform capabilities in the AI/ML space. Since then, we have had many customers take a look at the technology, and we have had feedback from them as well as from partners and analysts, and the feedback has been tremendous. Excellent. Well, Krish, what does this then mean for customers? What's the value proposition that Bitfusion brings to vSphere 7? Yeah, if you look at our customers, they are in the midst of a big journey in digital transformation, and basically what that means is customers are building a ton of applications, and most of those applications have some kind of data analytics or machine learning embedded in them. In the hardware and infrastructure industry, this is driving a lot of innovation. So you see the advent of a lot of specialized accelerators: custom ASICs, FPGAs, and of course GPUs, being used to accelerate the special algorithms that these AI/ML-type applications need. Unfortunately, customers run most of these specialized accelerators in a bare-metal kind of setup, so they are not taking advantage of virtualization and everything that it brings with it. So with Bitfusion, launched today, we are
essentially doing for the accelerator space what we did for compute several years ago, and that is essentially bringing virtualization to the accelerators. But we take it one step further: we give customers the ability to pool these accelerators and essentially decouple them from the server. So you can have a pool of these accelerators sitting on the network, and customers are able to target their workloads at them and share the accelerators, get better utilization, drive a lot of cost improvements, and, in a sense, have a smaller pool that they can use for a whole bunch of different applications across the enterprise. So it's a huge improvement for customers, and that's the tremendous positive feedback that we are getting, both from customers as well as our staff. Excellent. Well, I'm glad we've got Josh here to dig into some of the pieces. Before we get to you, though, Josh: Krish, part of this announcement is the partnership of VMware and Dell, so tell us about the partnership and the solutions for this launch. Yeah, we have been working with Dell in the AI and ML space for a long time; we have a good partnership there. This just takes the partnership to the next level, and we will have future solution support on some of the key AI/ML-targeted Dell servers, like the C4140 and the R740. Those are the servers that we will be partnering with them on and providing solutions on top of. Excellent. So Josh, we've watched for a long time as various technologies were said to be not a fit for a virtualized environment, and then VMware does what it does: makes sure the performance is there, all the options are there. Bring us inside a little bit; what does this solution mean for leveraging GPUs? Yeah, so actually, before I answer that question, let me say that the Bitfusion acquisition and the Bitfusion technology fit into a larger strategy at VMware around AI and ML that I think matches pretty nicely with the overall Dell strategy as well, in the sense that we are really focused on delivering AI/ML
capabilities, or the ability for our customers to run their AI and ML workloads, from edge to core to cloud. That means running on CPUs or running on hardware accelerators like GPUs, whatever is really required by the customer. In this specific case, we're quite excited about the Bitfusion technology because it really allows us, as Krish was describing, to extend our capabilities, especially in the deep learning space, where GPU accelerators are critically important. So what this technology really brings to the table is the ability, as Krish was outlining, to pool those hardware resources together, and then allow organizations to drive up the utilization of those GPU resources through that pooling, and also increase the degree of sharing that's supported for the customer. Okay, Josh, take us in a little bit further as to how the mechanisms of Bitfusion work. Sure, yeah, that's a great question. Think of it this way: there is a client component to Bitfusion and a server component.
The server component is running on a machine that actually has the physical GPUs installed in it. The client machine, which is running the Bitfusion client software, is where the user, the data scientist, is actually running their machine learning application; there's no GPU actually in that host. What's happening with the Bitfusion technology is that it is essentially intercepting the CUDA calls being made by that machine learning application, remoting those CUDA calls over to the Bitfusion server, and then injecting them into the local GPU on the server. We call it interposition, the ability to remote these CUDA calls, but it's actually much more sophisticated than that: there are a lot of underlying capabilities being deployed in terms of optimizations to take maximum advantage of the networking link that sits between the client machine and the server machine. Given all of that, once we've done it with Bitfusion, it's now possible for the data scientist to consume multiple GPUs, single GPUs, or even fractional GPUs across that interconnect using Bitfusion technology. Okay, maybe it would help to illustrate some of these technologies if you've got a couple of customer examples. Sure. One example would be a retail customer I'm thinking of, a grocery chain, that is deploying a large number of video cameras in their stores in order to do things like watch for pilfering, identify when store shelves should be restocked, and even look for cases where, for example, maybe a customer has fallen down in an aisle and someone needs to go and help them.
Those multiple video streams, and the multiple applications consuming the data from those streams and doing analytics and ML on them, would be perfectly suited for this type of environment, where you would like multiple independent applications running but have them efficiently share the hardware resources of the GPUs. Another example would be retailers who are deploying ML-powered checkout registers to help reduce fraud by customers who are buying things with fake barcodes, for example. In that case, you would not necessarily want to deploy a single dedicated GPU for every single checkout line. Instead, what you would prefer is a pooled set of resources that each inference operation occurring within each of those checkout lines could consume collectively. Those would be two examples of the use of this kind of pooling technology. Okay, great. Josh, last question for you. Is this technology only for GPUs, and can you give us a little bit of a look forward as to what we should be expecting from the Bitfusion technology? So currently the target is specifically NVIDIA GPUs with CUDA. The team, even prior to the acquisition, had done some work on enablement of FPGAs, and had also done some work on OpenCL, which is a more open standard for device access. So what you will see over time is an expansion of the Bitfusion capabilities to embrace devices like FPGAs; the domain-specific ASICs that Krish was referring to earlier will roll out over time. But we are starting with the NVIDIA GPU, which totally makes sense, since that is the primary hardware acceleration engine for deep learning currently. Excellent. Well, Josh and Krish, thank you so much for the updates. To the audience: if you're watching this live, please join the CrowdChat now; it's time to ask your questions and participate.
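The client/server mechanism Josh describes, intercepting API calls on a GPU-less client and remoting them to a host that owns the physical device, can be sketched in miniature. This is a purely illustrative toy with hypothetical class and call names; it is not the Bitfusion implementation, which interposes real CUDA calls and heavily optimizes the network transport.

```python
# Toy sketch of "interposition and remoting": a client-side proxy intercepts
# calls that would normally hit a local GPU library and forwards them to a
# server that owns the device. All names here are hypothetical.

class GpuServer:
    """Runs on the host that owns the physical GPU (simulated here)."""
    def __init__(self):
        self.allocations = {}
        self.next_handle = 0

    def execute(self, call, *args):
        # Dispatch a remoted call against the local (simulated) GPU.
        if call == "malloc":
            handle, self.next_handle = self.next_handle, self.next_handle + 1
            self.allocations[handle] = bytearray(args[0])
            return handle
        if call == "memcpy_h2d":
            handle, data = args
            self.allocations[handle][:len(data)] = data
            return len(data)
        raise ValueError(f"unsupported call: {call}")

class GpuClientProxy:
    """Runs on the GPU-less client; every call is intercepted and remoted."""
    def __init__(self, server):
        self.server = server  # stands in for the network link

    def __getattr__(self, name):
        def remoted(*args):
            return self.server.execute(name, *args)
        return remoted

server = GpuServer()
gpu = GpuClientProxy(server)
buf = gpu.malloc(8)               # allocation happens on the remote server
n = gpu.memcpy_h2d(buf, b"abcd")  # data shipped over the "link"
```

The application code only ever talks to `gpu`, which is the essence of interposition: the call site is unchanged while execution moves to wherever the hardware lives.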
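The checkout-lane example above is essentially fractional allocation from a shared device pool: many small inference workloads share a few GPUs instead of each owning one. A minimal sketch of that bookkeeping, with hypothetical names and capacities, might look like:

```python
# Toy sketch of fractional GPU pooling: short inference jobs reserve a
# fraction of whichever pooled device has room. Numbers are illustrative.

class GpuPool:
    def __init__(self, num_gpus, capacity_per_gpu=1.0):
        # remaining fractional capacity per device
        self.free = [capacity_per_gpu] * num_gpus

    def acquire(self, fraction):
        """Reserve `fraction` of the first GPU with room; return its index."""
        for i, cap in enumerate(self.free):
            if cap >= fraction:
                self.free[i] = round(cap - fraction, 6)
                return i
        raise RuntimeError("pool exhausted")

    def release(self, gpu_index, fraction):
        self.free[gpu_index] = round(self.free[gpu_index] + fraction, 6)

# Eight checkout lanes each need a quarter of a GPU for inference,
# so two pooled GPUs serve all eight lanes:
pool = GpuPool(num_gpus=2)
leases = [pool.acquire(0.25) for _ in range(8)]
```

A real scheduler would also handle contention, queuing, and release on job completion; the point here is only that pooling lets utilization approach capacity instead of dedicating one device per workload.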
If you're watching this on demand, you can also go to crowdchat.net/makeaireal to see the conversation that we had. Thanks so much for joining us. Thank you very much. Thank you. Dell EMC OpenManage Mobile enables IT administrators to monitor data center issues and respond rapidly to unexpected events, anytime, anywhere. OpenManage Mobile provides a wealth of features within a comprehensive user interface, including server configuration, push notifications, remote desktop, augmented reality, and more. The latest release features an updated AR interface, power and thermal policy review, emergency power reduction, and internal storage monitoring. Download OpenManage Mobile today.