Jeff Frick: Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Austin, Texas, at the Dell EMC High Performance Computing and Artificial Intelligence Labs. The lab's been here for a long time. As you can see behind us, and probably hear, there are racks and racks and racks of some of the biggest, baddest computers on the planet. In fact, I think number 256, we were told earlier, is just behind us. So we're excited to be here, really, as Dell EMC puts together pre-configured solutions for artificial intelligence, machine learning, and deep learning applications, because that's of growing concern and growing importance to all the business people out there. So we're excited to have the right guy on the show. He's Thierry Pellegrino, the VP of HPC and business strategy and a whole bunch of stuff. You're a pretty busy guy.

Thierry Pellegrino: I'm busy, but you can see all those servers. They're very busy too. They're humming.

Jeff Frick: Absolutely. So, just your perspective: the HPC part of this has been around for a while. The rise of machine learning and artificial intelligence as a business priority is relatively recent, but you guys are jumping in with both feet.

Thierry Pellegrino: Oh, absolutely. I mean, HPC is not new to us. AI, machine learning, deep learning is happening; that's the buzzword. But we've been working on HPC clusters since back in the '90s. And it's great to see this technology, this best practice, getting into the enterprise space, where data scientists need help. Instead of looking for one processor that will solve it all, they look to the knowledge of HPC and what we've been able to put together, and apply it in their field.

Jeff Frick: Right. So how do you delineate between HPC and, say, the AI portion of the lab? Or is it all on a continuum? How do you slice and dice it?

Thierry Pellegrino: It's all in one place, and you see it all behind us. In this area in front of us, we try to get all those servers put together and add value for all the different workloads.
So you get HPC, SAP, SQL, AI, ML, DL, all in one lab.

Jeff Frick: Right. And they're all here.

Thierry Pellegrino: They're all here, from the legacy applications, though they probably don't like to be called legacy applications, all the way to the newest and greatest.

Jeff Frick: Exactly. The old stuff, the new stuff.

Thierry Pellegrino: And actually, you know what, something you don't see is that we're also looking at where the technology is going to take all those workloads. AI, ML, DL is the buzzword today, but down the road you're going to see more applications, and we're already starting to test those technologies in this lab. So it's past, present, and future.

Jeff Frick: Right. So one of the specific solutions you guys have put together is for deep learning using the new NVIDIA technology. I wonder if you could talk about that; we hear about NVIDIA all the time. Obviously, they're really well positioned in autonomous vehicles, and their GPUs are taking data centers by storm. How is that going? Where do you see some of the applications outside of autonomous vehicles for the NVIDIA-based solutions?

Thierry Pellegrino: Oh, there are many applications. I think the technology itself is proving to solve a lot of customer problems, and you can apply it in many different verticals, many workloads again. You can see it in autonomous vehicles. You can see it in healthcare and life sciences, in financial services and risk management. It's really everywhere you need to solve a problem and you need dense compute solutions. And NVIDIA has one of the technologies that a lot of customers leverage to solve their problems.

Jeff Frick: Right. And you're also launching a machine learning solution based on Hadoop. We've been going to Hadoop Summit and Hadoop World and Strata for eight, nine years, since 2010. And it's kind of funny, because the knock on Hadoop has always been that there aren't enough people, it's too hard, it's just a really difficult technology.
So you guys are really taking, again, a solutions approach with Hadoop for machine learning, to deliver either a whole rack full of gear or a spec that customers can build at their own place.

Thierry Pellegrino: No, absolutely. That's one of the three major tenets we have for the solutions we're launching. We really want it to be a solution that's faster, so performance is key. When you're trying to extract insights from your data set, you really need to be fast. You don't want it to take months; it has to be within countable measures. So that's one of them. We also want to make it simple. A data scientist is never going to be a PhD in HPC or any kind of compute technology, so making it simple is critical. And the last one is that we want to have this proven, trusted-advisor feel for our customers. You see it around you: this HPC lab was not built yesterday. It's been here showcasing our capabilities in the HPC world, our ability to combine the Hadoop environment with other environments to solve enterprise-class problems and bring business value to our customers. And that's really where we think our differentiation comes from.

Jeff Frick: Right. And it's really a lab. I mean, you and I are both wearing coats right now, and there's gear stacked up all over the place, of every shape and size. And I think what's interesting is that when we talk about the sexy stuff, the GPUs and the CPUs and Hadoop, there are a lot of details that make one of these racks actually work. It's probably integrating some of those lower-tier things and making sure they all work seamlessly together, so you don't get some nasty bottleneck on an inexpensive part that's holding back all that capacity.

Thierry Pellegrino: Oh, absolutely. You know, it's funny you mention that. We talk to customers about the technologies we're assembling, and contrary to some web-tech-type companies that just look for any compute at all costs
and will just stack up a lot of technologies because they want the compute, in HPC-type environments, or when you try to solve problems with deep learning and machine learning, you're only as strong as your weakest link. If you have a server, or a storage unit, or an interconnect between all those that is really weak, you're going to see your performance go way down. And we watch out for that. And you know, the one thing that you alluded to, which I just wanted to point out: what you see behind us is the hardware. The secret sauce is really in the aggregation of all the components and all the software stacks. Because AI, ML, DL are great, easy acronyms, but when you start peeling back the layers, you realize that it's layers and layers of software, which are moving very fast, where you don't want to be spending your life understanding the interoperability requirements between those layers and worrying about whether your compute and your storage solution are going to work. You want to solve problems as a data scientist. And that's what we're trying to do: give you a solution, an infrastructure plus a stack, that's been validated and proven, so you can really get to work.

Jeff Frick: Right. And even within that validated design, for a particular workload, a customer might need a little bit more I/O at relative scale, a little bit more storage, a little bit more compute. So even within a basic structured system that you guys have spec'd and certified, customers can still come in and make little mods based on what their specific workload is.

Thierry Pellegrino: You've got it. We're not at a phase in the acceptance of AI, ML, and DL where things are cookie-cutter. It's still going to be a collaboration. That's why we have a really strong team working with our customers directly, trying to build a solution for their problem.
If you need a little bit more storage, if you need faster storage for your scratch space, if you need a little bit more I/O bandwidth because you're in a remote environment, all those characteristics are going to be critical. And the solutions we're launching are not rigid. They're a perfect starting point for customers who want to get something running right away, if they feel like it. But if you need a solution that's more pointed, we can definitely iterate. And that's what our team in the field, and all the engineers you've seen today walking through the lab, are there for. We want to be a consultant, a partner, designing the right solution for the customer.

Jeff Frick: Right. So Thierry, before I let you go, just one question from your perspective: when you're out talking to customers, how has the conversation around artificial intelligence and machine learning evolved over the last several years? It's really moved from kind of a cool science experiment, or all the HPC stuff with government or weather or heavy lifting, into a boardroom conversation, a priority and a strategic imperative going forward. How is that conversation evolving when you're out talking to customers?

Thierry Pellegrino: Well, you know, it has changed. You're right. Back in the '60s, the science was there, but the technology wasn't. Today, we have the science, we have the technology, and we're seeing all the C-level decision makers really wanting to find value in the data they've collected. And that's where the discussion takes place. This is not a CIO discussion most of the time. And what's really fantastic, in my mind, contrary to a lot of the technologies that have grown up, like big data, cloud, and all those buzzwords, is that here we're looking at something that's tangible.
We have real-life examples of companies that are using deep learning and machine learning to solve problems, save lives, and get our technology into the hands of the right folks so they can impact the community. It's really, really fantastic. And that growth is set up for success. And we want to be part of that.

Jeff Frick: Right. It's just the continuation of this democratization trend. You know, give more people more data, give more people more tools, give more people more power, and you're going to get innovation. You're going to solve more problems. And it's so exciting.

Thierry Pellegrino: Absolutely. Totally agree with you.

Jeff Frick: All right, Thierry. Well, thanks for taking a few minutes out of your busy day, and congrats on the innovation lab here.

Thierry Pellegrino: Thank you so much.

Jeff Frick: All right. He's Thierry Pellegrino, I'm Jeff Frick. We're at the Dell EMC HPC and AI Innovation Labs in Austin, Texas. Thanks for watching.