From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation.

Hey, welcome back, everybody. Jeff Frick here with theCUBE. We are doing a special presentation today, really talking about AI, and making AI real, with two companies that are right in the heart of it: Dell EMC as well as Intel. So we're excited to have a couple of CUBE alumni back on the program. We haven't seen them in a little while. First off, from Intel, Lisa Spelman. She is the corporate VP and GM for the Xeon and Memory Group. Great to see you, Lisa.

Yep, good to see you again too.

And we've got Ravi Pendekanti. He is the SVP of server product management, also from Dell Technologies. Ravi, great to see you as well.

Good to see you, Jeff. And Lisa, of course.

Yes, so let's jump into it. So yesterday, Ravi, you guys announced a bunch of new AI-based solutions. I wonder if you can take us through that.

Absolutely. So one of the things we did, Jeff, was we said it's not good enough for us to have a point product; we talked about a whole portfolio of products, everything from our workstation side to the servers to the storage elements, and things we are doing with VMware, for example. Beyond that, we are also obviously pleased with everything we are doing on bringing the right set of validated configurations, reference architectures, and ready solutions, so that the customer really doesn't have to go ahead and do the due diligence of figuring out how the various integration points come together in making a solution possible. Obviously, all this is based on the great partnership we have with Intel, using not just their CPUs but FPGAs as well.

That's great.
So Lisa, obviously everybody knows Intel for your CPUs, but I don't think they recognize all the other stuff that can wrap around the core CPU to add value around a particular solution set or problem set. I wonder if you can tell us a little bit more about the Xeon family and what you guys are doing in the data center with this interesting new thing called AI and machine learning.

Yeah, thanks, Jeff and Ravi. It's amazing to see the way artificial intelligence applications are growing in their pervasiveness. You see it taking off across all sorts of industries, and it's being built into just about every application coming down the pipe. So if you think about needing to have your hardware foundation able to support that, that's where we're seeing a lot of the customer interest come in, and not just, of course, on Xeon, but like Ravi said, on the whole portfolio and how the system and solution configurations come together. So we're approaching it from a total view of being able to move all of that data, store all of that data, and process all of that data, and providing options along that entire pipeline. And within that, on Xeon specifically, we've really set that as our cornerstone foundation for AI. It's the most deployed data center CPU around the world, and if every single application is going to have artificial intelligence in it, it makes sense that you would have artificial intelligence acceleration built into the actual hardware, so that customers get a better experience right out of the box, regardless of which industry they're in or which specialized function they might be focusing on.

It's really wild, right? Because in the process you always move to your next point of failure. So having all these accelerants, these ways that you can carve off parts of the workload, parts of the intelligence, so you can optimize better, is so important.
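The acceleration Lisa describes as "built into the actual hardware" is, at bottom, low-precision arithmetic: quantizing tensors down to int8 so each instruction moves and multiplies more numbers at once. Here is a back-of-the-envelope numpy sketch of that idea; it is illustrative only, not Intel's actual implementation, and all sizes and data are made up:

```python
import numpy as np

def quantize(x):
    """Map an fp32 array onto int8 with a single scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 64)).astype(np.float32)
weights = rng.standard_normal((64, 8)).astype(np.float32)

qa, sa = quantize(activations)
qw, sw = quantize(weights)

# Integer matmul, accumulating in int32 the way int8 hardware paths do,
# then dequantize back to fp32 with the combined scale.
int32_acc = qa.astype(np.int32) @ qw.astype(np.int32)
approx = int32_acc.astype(np.float32) * (sa * sw)

exact = activations @ weights
rel_err = np.abs(approx - exact).max() / np.abs(exact).max()
print(f"max relative error from the int8 path: {rel_err:.3f}")
```

The error stays small because neural-net inference tolerates low precision; the hardware win is that the integer multiply-accumulate runs in fused vector instructions.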
And as you said, Lisa, and also Ravi, on the solution side, nobody wants general AI just for AI's sake. It's a nice word and an interesting science experiment, but it's really in the applied AI world where we're starting to see the value and the application of this stuff. And Ravi, you had a customer you wanted to highlight, Epsilon. Tell us a little bit about their journey and what you guys did with them.

Sure. I mean, if you start looking at Epsilon, they're in the marketing business. And one of the crucial things for them is to ensure that they're able to provide the right data, based on the analysis they run, for whatever the customer is looking for. And they can't wait for a long period of time; they need to be doing that on a near real-time basis. That's what Epsilon does. And what really blew my mind was the fact that they actually service, or send out, close to 100 billion messages. Again, that's 100 billion messages a year. So you can imagine the amount of data they're analyzing, which is petabytes of data, and they need to do it in real time. And that's all possible because of the kind of analytics we have driven into the PowerEdge servers, using the latest Intel Xeon processors coupled with some of the technologies from the FPGA side, which again allows them to go in and analyze this data and serve it to their customers really rapidly.

You know, it's funny. I think martech is kind of an under-appreciated world of AI and machine-to-machine execution, right? Think of the amount of transactions that go through when you load a webpage: it actually IDs who you are, puts a marketplace together, sells a spot on that page to an advertiser, and then lets the ad in. It's really sophisticated, and, as you said, massive amounts of data going through. So, pretty interesting stuff. And if it's done right, it's magic. And if it's not done right, then people get pissed off. So you've got to have a sharp tool.

You got it.
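That marketplace-on-page-load flow can be pictured as a real-time auction. Here is a toy Python sketch of the second-price shape most ad exchanges use; the bidder names and prices are made up, and real exchanges layer identity resolution, timeouts, and fraud checks on top of this:

```python
def run_auction(bids):
    """bids: dict of bidder -> bid in dollars. Returns (winner, price),
    where the winner pays the second-highest bid (second-price rules)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]      # highest bidder wins the slot
    _, price = ranked[1]       # ...but pays the runner-up's bid
    return winner, price

bids = {"brand_a": 2.40, "brand_b": 3.10, "brand_c": 1.75}
winner, price = run_auction(bids)
print(f"{winner} wins the slot and pays ${price:.2f}")
```

The second-price design is what makes it rational for bidders to bid their true value, which is why the whole exchange can clear in the milliseconds a page load allows.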
I mean, this is where, as I say, it can be garbage in, garbage out if you don't act on the right data, right? So that is where I think it becomes important. And also, if you don't do it in a timely fashion and you don't serve up the right content at the right time, you miss the opportunity to go ahead and grab the attention.

Right, right. So Lisa, back to you. There's all kinds of open source stuff happening in the AI and machine learning world, so we hear about TensorFlow and all these different libraries. How are you guys embracing that world as you look at AI and its development? You've been at it for a while; you're involved in everything from autonomous vehicles to the martech we discussed. How are you making sure that these things are using all the available resources to optimize these solutions?

Yeah, I think you and Ravi were just hitting on some of those examples of how many ways people have figured out how to apply AI now. Maybe at first it was really driven by just image recognition and image tagging, but now you see so much work being driven in recommendation engines and in object detection for much more industrial use cases, not just consumer enjoyment. And also those things you mentioned, where personalization is a really fine line you walk: how do you make an experience feel good-personalized versus creepy-personalized? That's a real challenge, and opportunity, across so many industries. And so open source, like you mentioned, is a great place for that foundation, because it gives people the tools to build upon. I think our strategy is really a stacked strategy that starts first with delivering the best hardware for artificial intelligence. And again, Xeon's the foundation for that, but we also have milliwatt-type processing for out at the edge, and then all the way through to very custom, specific accelerators in the data center.
Then on top of that is the optimized software, which is going into each of those frameworks and doing the work so that the framework recognizes the specific acceleration we built into the CPU, whether that's DL Boost, or recognizes the capabilities that sit in that accelerator silicon. And then, once we've done that software layer, this is where we have the opportunity for a lot of partnership: the ecosystem and the solutions work that Ravi started off by talking about. AI isn't easy for everyone. It has a lot of value, but it takes work to extract that value. And so partnerships within the ecosystem, to make sure that ISVs are taking those optimizations, building them in, and fundamentally can deliver to customers a deployable solution, are the last leg of that strategy. But it really is one of the most important, because without it you get a lot of really good benchmark results, but not a lot of happy customers.

Right. I'm just curious, Lisa, because you kind of sit in the catbird seat. You guys are at the core, under all the layers, running data centers, running these workloads. How do you see the evolution of machine learning and AI, from the early days when it was science projects and really smart people on mahogany row, to now, when people are talking about trying to get it to a citizen developer, or really a citizen data scientist, and exposing the power of AI to business leaders or business analysts, if you will, so they can apply it to their day-to-day work? How do you see that evolving? Because you not only were in it early, but you get to see some of this stuff coming down the road in design wins and reference architectures. How should people think about this evolution?

It really is one of those things where, if you step back, the fundamentals of AI have actually been around for 50 or more years.
It's just that the changes in the amount of computing capability that's available, the network capacity that's available, and the fundamental efficiency that IT and infrastructure managers can get out of their cloud architectures have allowed this pervasiveness to evolve. And I think that's been the big tipping point that pushed people over. Of course, AI went through the same thing that cloud did, where you had maybe every business leader or CIO saying, hey, get me a cloud, and I'll figure out what for later. Get me some AI; we'll figure out if we can make it work. But we're through those initial use cases and starting to see business value derived from those deployments. And I think some of the most exciting areas are in the medical services field. Especially if you think of the environment we're in right now, the amount of efficiency, and in some cases the reduction in human contact that you could require for diagnostics, customer tracking, and the ability to follow an entire patient history, is really powerful, and represents the next wave in care and in how we scale our limited resource of doctors, nurses, and technicians. And the point we're making about what's coming next is where you start to see even more mass personalization and recommendations, in a way that feels not spooky to people but actually comforting, because they take value from it and it allows them to immediately act. Ravi referenced the speed at which you have to utilize the data. When people can immediately act more efficiently, they're generally happier with a service. So we see so much opportunity, and we're continuing to address it across, again, that hardware, software and solution stack, so we can stay a step ahead of our customers.

Right, that's great. And Ravi, I want to give you the final word, because you guys have to put the solutions together and actually deliver them to the customer.
So not only the hardware and the software, but any other ecosystem components that you have to bring together. I wonder if you can talk about that approach, and how it's really the solution at the end of the day, not specs, not speeds and feeds. That's not really what people care about. It's really a good solution.

You're absolutely right, Jeff, because at the end of the day, it's like this: most of us probably use the ATM to withdraw money, but we really don't know what sits behind the ATM. My point being, all you care about at that particular point in time is to be able to put your ATM card into the machine and get your dollar bills out, for example. Likewise, when you start looking at what the customer really needs, what Lisa hit upon is absolutely right; what they're looking for, as you've said, is the whole solution side of the house. So our mantra for this is very simple. We want to make sure we use the right basic building blocks, ensuring that we bring the right solutions using three things. First, the right products, which essentially means that we need to use the right partners to get the right processors in, the right GPUs in. Then we take it to the next level by ensuring that we can either provide ready solutions or validated reference architectures. The idea being that there is a sausage-making process that the customer now doesn't need to go through. In a way, we have done the cooking and we provide a recipe book; you just go through the ingredients, and then you're off to go get your solution done. And finally, there might be help that customers still need in terms of services. That's something else Dell Technologies provides. The whole idea is, if customers want to go ahead and have some help deploying the solutions, we can also do that via services.
So that's the way we approach it: providing the right building blocks, using the right technologies from our partners, making sure that we have the right solutions that our customers can look at, and finally, if they need deployment help, we can do that via services.

Well, Ravi, Lisa, thanks for taking a few minutes. That was a great tee-up, Ravi, because I think we're going to go to a couple of customer interviews enjoying that nice meal that you prepared with that combination of hardware, software, services and support. So thank you for your time, and great to catch up.

Yeah. Thank you.

All right, let's go ahead and run the tape.

Hi, Jeff. I wanted to talk about two examples of collaboration with partners that have yielded real output through HPC and AI activities. The first example I wanted to cover is with the NeuroMod team up in Canada. With that team, we collaborated with Intel on tuning algorithms and code in order to accelerate the mapping of the human brain. We have a cluster down here in Texas called Zenith, based on Xeon and Optane memory. And we were able, with that customer, with the three of us, our friends at Intel, the NeuroMod team, and the Dell HPC and AI innovation engineering team, to accelerate the mapping of the human brain. So imagine patients playing video games or doing all sorts of activities that help understand how the brain sends signals in order to trigger responses in the nervous system. And it's not only a good way to map the human brain; think about what you can do with that type of information in order to help cure Alzheimer's or dementia down the road. So this is really something I'm passionate about: using technology to help all of us, and all of those who are suffering from those really tough diseases.

I'm Julie Boyle.
I'm the project manager for the Courtois NeuroMod project, and the idea is actually to scan six participants really intensively, in both the MRI scanner and the MEG scanner, and see if we can use human brain data to get closer to something called generalized intelligence. What we have in the AI world is systems that are mathematically and computationally built. Often they do one task really, really well, but they struggle with other tasks. A really good example of this is video games. Artificial neural nets can often outperform humans in video games, but they don't really play in a natural way. An artificial neural net playing Mario Brothers beats the system by kind of gliding its way through as quickly as possible, and it doesn't collect the pennies, for example. And if you played Mario Brothers as a child, you know that collecting those coins is part of your game. And so the idea is to get artificial neural nets to behave more like humans, so that we have this transfer of knowledge. It's just something that humans do really, really well, and very naturally. It doesn't take 50,000 examples for a child to know the difference between a dog and a hot dog. One you eat, one you play with. But an artificial neural net can often take massive computational power and many examples before it understands that. Video games are awesome because when you play a video game, you're doing a vision task constantly. You're also doing a lot of planning and strategic thinking, but you're also taking decisions several times a second, and we record that. We try to see, can we predict from brain activity what people were doing? We can reach almost 90% accuracy with this type of architecture. Yu Zhang is the lead postdoc on this collaboration with Dell and Intel. She's working on a model called graph convolutional neural nets.

We have been working with two computing systems, to compare how the performance goes.
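The graph-convolution idea behind that decoding model can be sketched in a few lines of numpy. Here each node is a brain region, edges are connectivity, and one graph convolution mixes a region's signal with its neighbours' before a linear readout guesses the action; every size, weight, and label below is a random placeholder, not the NeuroMod team's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_features, n_actions = 8, 16, 4

# Symmetric adjacency with self-loops, normalized as D^-1/2 A D^-1/2,
# the standard GCN propagation matrix.
A = (rng.random((n_regions, n_regions)) < 0.3).astype(float)
A = np.maximum(A, A.T) + np.eye(n_regions)
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

X = rng.standard_normal((n_regions, n_features))   # per-region brain signals
W1 = rng.standard_normal((n_features, 32)) * 0.1   # graph-conv weights
W2 = rng.standard_normal((n_regions * 32, n_actions)) * 0.1  # readout

hidden = np.maximum(A_norm @ X @ W1, 0.0)          # graph conv + ReLU
logits = hidden.reshape(-1) @ W2                   # flatten, linear readout
predicted_action = int(np.argmax(logits))
print("predicted action class:", predicted_action)
```

Training would fit W1 and W2 against recorded gameplay actions; the point of the graph structure is that brain regions borrow evidence from their connected neighbours rather than being treated as independent pixels.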
The lab relies on servers that we have internally here, so we have a GPU server. But what we really rely on is Compute Canada, and Compute Canada was just not powerful enough to run the models that she was trying to run. So it would take her days, weeks; it would crash; she would have to wait in line. Dell was visiting, and I was invited into the meeting, very kindly. And they told us that they had started working with a new type of hardware to train artificial neural nets. Dell is using traditional CPUs, pairing them with a new type of memory developed by Intel which they call Optane. There are also new CPU architectures that are really optimized to do deep learning. So all of that sounded great, because we had this problem: we ran out of memory. The innovation lab, having access to experts to help answer questions immediately, that's not something to discount.

We were able to train AlexNet within 20 minutes. But before, if we did the same thing on the GPU, we needed to wait almost three hours to finish one epoch. We were able to train the traditional convolutional neural net.

Dell has been really great, because anytime we need more memory, we send an email, and Dell says, yeah, sure, no problem, we'll extend it. How much memory do you need? It's been really simple from our end. And I think it's really great to be at the edge of science and technology. We're not just doing the same old; we're pushing the boundaries. Often we don't know where we're going to be in six months.

Let's face it, in the big data world, computing power makes a big difference. The second example I'd like to cover is one that I will call the Data Accelerator. That's a partnership that we have with the University of Cambridge in England. There we partnered with Intel and Cambridge, and we built what was at the time the number one IO500 storage solution.
And it's pretty amazing, because it was built on standard building blocks: PowerEdge servers, Intel Xeon processors, and some NVMe drives from our partners at Intel. What we did is build up this system with a very smart and elaborate software code that gives ultra-fast performance for our customers who are looking for a fast front-end scratch to their HPC storage solutions. We're also very mindful that this innovation is great for others to leverage, so the software code will soon be available on GitHub. And as I said, this was number one on the IO500 when it was initially released.

Within Cambridge, we've always had a focus on opening up our technologies to UK industry, where we can encourage UK companies to take advantage of advanced research computing technologies. And we have many customers in the fields of automotive, oil and gas, and life sciences that find our systems really help them accelerate their product development process. My name is Paul Calleja. I'm the director of research computing at Cambridge University. We are a research computing cloud provider, but the emphasis is on the consulting and the processes around how to exploit that technology, rather than the bare resources. Our value is in how we help businesses use advanced computing resources, rather than the provision of those resources. We see increasingly more and more data being produced across a wide range of verticals: life sciences, astronomy, manufacturing. So the Data Accelerator was created as a component within our data-centric compute environment. Data processing is becoming a more and more central element within research computing. We're getting very large data sets, the traditional spinning-disk file systems can't keep up, and we find applications being slowed down due to lack of data.
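One common answer to that slow-disk problem is a burst buffer: stage a job's hot data onto fast NVMe scratch before the job needs it, and copy results back afterwards. Here is a minimal Python sketch of that copy-in/copy-out shape; the paths and policy are placeholders, and a production system like the one described here is driven by the job scheduler rather than hand-run:

```python
import shutil
from pathlib import Path

def stage_in(inputs, scratch_dir):
    """Copy input files from bulk storage to fast scratch;
    return the scratch-side paths the job should read from."""
    scratch = Path(scratch_dir)
    scratch.mkdir(parents=True, exist_ok=True)
    staged = []
    for src in map(Path, inputs):
        dst = scratch / src.name
        shutil.copy2(src, dst)   # copy2 preserves timestamps for later sync
        staged.append(dst)
    return staged

def stage_out(staged, bulk_dir):
    """Copy results back to bulk storage and free the scratch space."""
    bulk = Path(bulk_dir)
    bulk.mkdir(parents=True, exist_ok=True)
    for f in staged:
        shutil.copy2(f, bulk / f.name)
        f.unlink()               # scratch is a scarce, shared resource
```

The real engineering is in what this sketch leaves out: deciding which data to pre-stage, overlapping the copies with other jobs, and keeping the scratch and bulk copies consistent.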
So the Data Accelerator was born to take advantage of new solid-state storage devices, and to work out how we can have a staging mechanism: keeping your data on spinning disk when it's not required, and pre-staging it on fast NVMe storage devices so it can feed the applications at the rate required for maximum performance. So we have the highest AI capability available anywhere in the UK, where we match AI compute performance with very high storage performance, because for AI, high-performance storage is a key element to getting the performance up. Currently, the Data Accelerator is the fastest HPC storage system in the world, and we are able to obtain 500 gigabytes per second read/write, with IOPS up in the 20 million range. We provide advanced computing technologies that allow some of the brightest minds in the world to really push scientific and medical research. We enable some of the greatest academics in the world to make tomorrow's discoveries.

All right, welcome back. Jeff Frick here, and we are excited for this next segment. We're joined by Jeremy Rader. He is the GM of Digital Transformation and Scale Solutions for Intel Corporation. Jeremy, great to see you.

Hey, thanks for having me.

I love the flowers in the backyard. I thought maybe you ran over to the Japanese garden or the Rose Garden, right? Two very beautiful places to visit in Portland.

Yeah, you only get them for a couple weeks here, so we hit the timing just right.

Excellent. All right, so let's jump into it. This conversation really is all about making AI real. And you're working with not only Dell, right? There's the hardware and the software, and a lot of these smaller solution providers. So what are some of the key attributes that need to come together to make AI real for your customers out there?

Yeah, so it's a complex space. So when you can bring the best of the Intel portfolio, which is expanding a lot, it's not just CPU anymore.
You're getting into memory technologies, network technologies, and, a little less known, how many resources we have focused on the software side of things: optimizing frameworks and optimizing these key ingredients and libraries that you can stitch into that portfolio to really get more performance and value out of your machine learning and deep learning work. And so what we've really done here with Dell is start to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that ISV partner, that software vendor, where we can really stitch together and bring the most value out of that broad portfolio, ultimately easing the complexity of what it takes to deploy an AI capability. So a lot going on there, bringing the three-legged stool of the software vendor, the hardware vendor, and Dell into the mix, and you get a really strong outcome.

Right. So before we get to the solutions piece, let's stick a little bit in the Intel world. I don't know if a lot of people are aware that, obviously, you guys make CPUs, and you've been making great CPUs forever, but there's a whole lot more that you've added around the core CPU, if you will, in terms of access to libraries and ways to really optimize the Xeon processors to operate in an AI world. I wonder if you can take us a little bit below the surface on how that works. What are some examples of things you can do to get more from your Intel processors for AI-specific applications and workloads?

Yeah, well, there's a ton of software optimization that goes into this. Having the great CPU is definitely step one, but ultimately you want to get down into the libraries, like TensorFlow. We have data analytics acceleration libraries.
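One concrete way those optimized frameworks reach user code: recent TensorFlow builds can route operations through Intel's oneDNN kernels, toggled by an environment variable that must be set before the framework is imported. A minimal sketch (this assumes a TensorFlow build that honors the flag; current x86 builds enable it by default, so the explicit opt-in shown here is mostly for older builds):

```python
import os

# Opt in to oneDNN-optimized kernels. This must happen before
# `import tensorflow`, because the flag is read at import time.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf   # now picks up the oneDNN-optimized kernels
```

The appeal of this pattern, which is Jeremy's larger point, is that the model code itself does not change; the acceleration is swapped in underneath the framework.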
That really allows you to get, again, under the covers a little bit, and look at how we get the most out of the kinds of capabilities that are ultimately used in machine learning and deep learning, and then bring that forward and enable it with our software vendors, so that they can take advantage of those acceleration components and ultimately move to less training time, or it could be a cost factor, right? Those are the kinds of capabilities we want to expose to software vendors through these kinds of partnerships.

Okay, that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of: that there are a ton of these optimization opportunities that you guys have been leveraging for a while. So, shifting gears a little bit, AI and machine learning are all about the data. And in doing a little research for this, I actually found you on stage talking about some company that had, I wrote down, 315 petabytes of data, 140,000 sources of that data, and, I think, a not-great quote of six months' access time to get that data and actually work with it. And the company you were referencing was Intel. So you guys know a lot about data and managing data, everything from your manufacturing to obviously supporting a global IT organization, with a lot of complexity and secrets and good stuff. So what have you leveraged, as Intel, in the way you work with data and getting a good data pipeline, that you're now able to put into these solutions that you're providing to customers?

Well, it's absolutely a journey, and it doesn't happen overnight. That's what we've seen at Intel, and we see it with many of our customers that are on the same journey that we've been on. And so this idea of building that pipeline really starts with: what are the kinds of problems that you're trying to solve?
What are the big issues that are holding you back as a company, or where do you see that competitive advantage that you're trying to get to? And then ultimately, how do you build the structure to enable the right kind of pipeline for that data? Because that's what machine learning and deep learning are: that data journey. So really, a lot of focus around how we can understand those business challenges and bring forward those kinds of capabilities, all the way through to where we structure our entire company around those assets. And then, ultimately, some of the partnerships that we're going to be talking about, with companies that are out there to help us really squeeze the most out of that data as quickly as possible, because otherwise it goes stale real fast, sits on the shelf, and you're not getting the value out of it. So yeah, we've been on the journey. It's a long journey. But ultimately we can take a lot of those learnings and apply them to our silicon technology, to the software optimizations that we're doing, and ultimately to how we talk to our enterprise customers about how they can overcome some of the same challenges that we did.

Well, let's talk about some of those challenges specifically, because I think part of what knocked big data, and Hadoop if you will, off the rails a little bit was that there's a whole lot that goes into it besides just doing the analysis. There's a lot of data prep, data collection, data organization, a whole bunch of things that have to happen before you can actually start to do the sexy stuff of AI. So what are some of those challenges? How are you helping people get over these baby steps before they can really get into the deep end of the pool?

Yeah, well, one is you have to have the resources. So, do you even have the resources?
If you can acquire those resources, can you keep them interested in the kind of work that you're doing? So that's a big challenge, and actually we'll talk about how that fits into some of the partnerships that we've been establishing in the ecosystem. It's also that you get stuck in this POC do-loop, right? You finally get those resources, and they start to get access to that data we talked about. They start to play out some scenarios, they theorize a little bit, maybe they show you some really interesting value, but it never seems to make its way into full production mode. And I think that is a challenge that has faced so many enterprises that are stuck in that loop. And so that's where we look at who's out there in the ecosystem that can help more readily move through that whole process: the evaluation, proving the ROI, the POC, and ultimately moving that capability into production mode as quickly as possible. That, to me, is one of those fundamental aspects: if you're stuck in the POC, nothing's happening from this. It's not helping your company. We want to move things more quickly through that.

Right, right. And let's just talk about some of these companies that you're working with, where you've got reference architectures: DataRobot, Grid Dynamics, H2O just down the road, and Iguazio. A lot of companies we've worked with here at theCUBE. And I think another part that's interesting, and again we can learn from the old days of big data, is generalized AI versus solution-specific AI. And I think where there's a real opportunity is not AI for AI's sake; it really has to be applied to a specific solution, a specific problem, so that you have better chatbots, a better customer service experience, better something.
So when you were working with these folks and trying to design solutions, what were some of the opportunities you saw to work with them so that you now have an applied AI application, or solution, versus just AI for AI's sake?

Yeah, I mean, that could be anything from fraud detection in financial services, or, taking a step back and looking more horizontally, back to that data challenge: if you're stuck at "hey, I've built a fantastic data lake, but I haven't been able to pull anything back out of it," who are some of the companies out there that can help overcome some of those big data challenges, and ultimately get you to where you don't have a data scientist spending 60% of their time on data acquisition and pre-processing? That's not where we want them, right? We want them building out the next theory. We want them looking at the next business challenge. We want them selecting the right models. But ultimately they have to do that as quickly as possible, so they can move that capability forward into the next phase. So really it's about that connection: looking at those problems or challenges in the full pipeline. And these companies, like DataRobot and H2O and Iguazio, are all addressing specific challenges in the end-to-end. That's why they've bubbled up as ones that we want to continue to collaborate with, because they can help enterprises overcome those issues more quickly, more readily.

Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. It's a great company. You guys have been around forever. I worked there many, many moons ago. That's a story for another time, but really appreciate it, and...

I'll interview you. We'll go there.

All right, so super, thanks a lot. So he's Jeremy, I'm Jeff Frick. Now it's time to go ahead and jump into the crowd chat. It's at crowdchat.net slash make AI real.
We'll see you in the chat and thanks for watching.