Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in a very special place. We're in Austin, Texas at the Dell EMC HPC and AI Innovation Lab. High-performance computing, artificial intelligence. This is really where it all happens, where the engineers at Dell EMC are putting together these ready-made solutions for the customers. They've got every type of application stack in here, and we're really excited to have our next guest. He's right in the middle of it. He's Michael Bennett, Senior Principal Engineer for Dell EMC. Mike, great to see you.

Great to see you too.

So you're working on one particular flavor of the AI solutions, and that's really machine learning with Hadoop. So tell us a little bit about that.

Sure, yeah. The product that I work on is called the Ready Solution for AI: Machine Learning with Hadoop. That product is a Cloudera Hadoop distribution on top of our Dell PowerEdge servers. And we've partnered with Intel, who has released a deep learning library called BigDL, to bring both traditional machine learning capabilities and deep learning capabilities to the product. The product also adds a data science workbench released by Cloudera. This tool allows the customer's data scientists to collaborate and provides them secure access to the Hadoop cluster. We think, all around, it makes a great product that lets customers gain the power of machine learning and deep learning in their environment while also reducing some of those overhead complexities that IT often faces with managing multiple environments, providing secure access, things like that.

Right. Because the big knock on Hadoop has always been that it's just hard. It's hard to put in. There aren't enough people. There aren't enough experts. So you guys are really offering a pre-bundled solution that's ready to go.

Correct.
Yeah, we've built, you know, seven or eight different environments going in the lab at any time to validate different hardware permutations that we may offer of the product. And we've been doing this since 2009, so there's a lot of institutional knowledge here at Dell to draw on when building and validating these Hadoop products. Our Dell services team has also been going out, installing and setting these up, and our consulting services has been helping customers fit the Hadoop infrastructure into their IT model.

Right. So is there one basic configuration that you guys have, or have you found there's like two or three different kinds of standard use cases that call for, you know, two or three different kinds of standardized solutions?

We find that most customers prefer the PowerEdge R740xd. This platform can hold twelve 3.5-inch form factor drives in the front, along with four in the midplane, while still providing four SSDs in the back. So customers get a lot of versatility with it. It's also won several Hadoop benchmarking awards.

And do you find, when you're talking to customers or you're putting this together, that they've tried it themselves? They've tried to stitch together and cobble together the open source and proprietary stuff, all the way down to network cards and all this other stuff, to actually make the solution come together, and it's just really hard, right?

Right, exactly. What we hear over and over from our product management team is that their interactions with customers come back with customers saying it's just too hard. They, you know, get something that's stable, and then they come back and they don't know why it's no longer working. They have customized environments that each developer wants for their, you know, big data analytics jobs, things like that. So, yeah, overall, we're hearing that customers are finding it very complex.

Right. We hear that same thing time and time again.
And even though we've been going to Hadoop Summit, Hadoop World, and Strata since 2010, you know, the momentum seems to have been a little slower in terms of the hype. But now we're really moving into heavy-duty, real-time production, and that's what you guys are enabling with this ready-made solution.

Yeah, with this product we focused on enabling Apache Spark in the Hadoop environment. And that Apache Spark distributed computing has really changed the game as far as what it allows customers to do with their analytics jobs. No longer are we writing things to disk; instead, multiple transformations are performed in memory. And that's also a big part of what enables the BigDL library that Intel released for the platform to train these deep learning models.

Right. Because Spark enables the real-time analytics, right? Now you've got streaming data coming into the system, versus the batch processing that was kind of the classic play of Hadoop.

Right. And not only do you have streaming data coming in, but Spark also enables you to load your data in memory, perform multiple operations on it, and draw insights that maybe you couldn't before with traditional MapReduce jobs.

Right, right. So what gets you excited to come to work every day? You've been playing with these big machines, you're in the middle of kind of nerd nirvana, I think, with all the servers and spinning disk. What gets you up in the morning? What are you excited about as you see AI get more pervasive with the customers and these solutions that you guys are enabling?

You know, for me, what's always exciting is trying new things. We've got this huge lab environment with all kinds of equipment. So if you want to test a new iteration, let's say tiered HDFS storage with SSDs and traditional hard drives, you can throw it together in a couple of hours and, you know, see what the results are.
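The Spark point above, chaining transformations in memory instead of writing each stage's output to disk the way classic MapReduce pipelines do, can be illustrated with a small conceptual sketch. This is plain Python, not actual Spark code; the function names and the dict standing in for disk storage are illustrative only.

```python
# Conceptual sketch: MapReduce-style staging (each stage materializes its
# output to "disk" before the next stage reads it) versus Spark-style
# chained in-memory transformations. Illustrative code, not a real API.

def mapreduce_style(records, stages, storage):
    """Apply stages one at a time, writing every intermediate dataset
    into 'storage' (simulating disk I/O between MapReduce jobs)."""
    data = list(records)
    for i, stage in enumerate(stages):
        data = [stage(r) for r in data]
        storage[f"stage_{i}"] = list(data)  # intermediate result hits "disk"
    return data

def spark_style(records, stages):
    """Compose the stages and apply them in one in-memory pass per record;
    no intermediate dataset is ever materialized to storage."""
    out = []
    for r in records:
        for stage in stages:
            r = stage(r)
        out.append(r)
    return out

if __name__ == "__main__":
    stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
    disk = {}
    a = mapreduce_style(range(5), stages, disk)
    b = spark_style(range(5), stages)
    print(a == b)     # same results either way
    print(len(disk))  # but the MapReduce path materialized 3 intermediate datasets
```

Both paths compute the same answer; the difference is that the staged version pays storage traffic for every intermediate result, which is the overhead Spark's in-memory model avoids.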
If we want to add new PCIe devices, like FPGAs for the inference portion of, you know, deep learning development, we can put those in our servers and try them out. So I enjoy that, on top of the validated, thoroughly worked-through solutions that we offer customers, we can also experiment, play around, and work toward that next generation of technology.

Right, because it's any combination of hardware that you basically have at your disposal to try, to fit together and test and see what happens.

Right, exactly. And this is my first time actually working at an OEM. So I was surprised that not only do we have access to anything you can see out in the market, but we often receive, you know, test and development equipment from partners and vendors that we can work with and collaborate on, to ensure that once the product reaches market, it has the features that customers need.

Right. That's the one thing that trips people up the most: some simple little switch configuration that you think is a minor piece of something, and it always seems to get in the way.

Right, or switches in general. I think that, you know, people focus on the application, because the switch is so abstracted from what a developer, or even somebody troubleshooting the system, sees. So oftentimes it's some misconfiguration or some typo that was entered during the switch configuration process that throws customers off, or has somebody scratching their head wondering why they're not getting the kind of performance they thought.

Right. Well, that's why we need more automation, right? That's what you guys are working on.

Right, yeah, exactly. Keep the fat-finger typos out of the config settings.

Right. Consistent, reproducible. None of that "I did it yesterday and it worked; I don't know what changed."

All right, Mike. Well, thanks for taking a few minutes out of your day, and don't have too much fun playing with all this gear.

Awesome. Thanks for having me.

All right.
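The automation point made above, keeping fat-finger typos out of switch configs by generating them instead of hand-typing them, can be sketched in a few lines. This is a minimal illustration in plain Python: the template syntax below is generic, not tied to any particular switch OS, and the port names and MTU value are made-up examples.

```python
# Minimal sketch of config automation: render every switch port block from
# one template and one data source, so each block comes out identical and
# reproducible. Generic, illustrative config syntax; not a real switch OS.

PORT_TEMPLATE = """interface {name}
  description {desc}
  mtu {mtu}
  no shutdown"""

def render_ports(ports, mtu=9216):
    """Generate one consistently formatted config block per port entry.

    A typo can only exist in one place (the template or the data),
    instead of being retyped per port.
    """
    return "\n!\n".join(
        PORT_TEMPLATE.format(name=p["name"], desc=p["desc"], mtu=mtu)
        for p in ports
    )

if __name__ == "__main__":
    ports = [
        {"name": "ethernet1/1", "desc": "hadoop-worker-01"},
        {"name": "ethernet1/2", "desc": "hadoop-worker-02"},
    ]
    print(render_ports(ports))
```

Because the rendered text is a pure function of the port list, rerunning it always produces the same config, which is exactly the "consistent, reproducible" property the conversation lands on.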
He's Mike Bennett. I'm Jeff Frick. You're watching theCUBE from Austin, Texas at the Dell EMC HPC and AI Innovation Lab. Thanks for watching.