From around the globe, it's theCUBE. Covering HPE Discover Virtual Experience, brought to you by HPE. Artificial intelligence, with Intel's Monica Livingston. Hey, Monica, welcome to theCUBE. Hi, Lisa, thank you for having me. So AI is a big topic, but let's start just understanding Intel's approach to artificial intelligence. Yeah, so at Intel, we look at AI as a workload and a tool that is becoming ubiquitous across all of our compute solutions. We have customers that are using AI in the cloud, in the data center, at the edge. So our goal is to infuse as much performance as we can for AI into our base platform, and then where acceleration is needed, we have accelerator solutions for those particular areas. An example of where we are infusing AI performance into our base platform is the Intel Deep Learning Boost feature set, which is in our second-generation Intel Xeon Scalable processors. This feature alone provides up to 30X performance improvement for deep learning inference on the CPU over the previous generation. And we are continuing to infuse AI into our base platform with the third-generation Intel Xeon Scalable processors, which are launching later this month. Intel will continue that leadership by including support for bfloat16. bfloat16 is a new format that enables deep learning training with similar accuracy but essentially using less data, so it increases AI throughput. Another example is memory. Both inference and training require quite a bit of memory, and with Intel Optane persistent memory, customers are able to put a larger pool of memory closer to the CPU. Where that's particularly relevant is in areas where data sets are large, like imaging with lots of high-resolution images, such as medical diagnostic or seismic imaging. We are able to run some of these models without tiling.
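As a quick aside on the bfloat16 format Monica mentions: it keeps float32's sign bit and 8-bit exponent (so the same numeric range) but only the top 7 mantissa bits, which is how it trades precision for throughput. A minimal NumPy sketch of that truncation, for illustration only:

```python
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to bfloat16 precision (round toward zero).

    bfloat16 keeps float32's sign bit and 8-bit exponent but only the
    top 7 mantissa bits, so it covers the same numeric range in half
    the bytes -- precision drops to roughly 2-3 decimal digits.
    """
    bits = x.astype(np.float32).view(np.uint32)
    # Zero the low 16 bits: 16 dropped mantissa bits out of 23.
    return (bits & 0xFFFF0000).view(np.float32)

x = np.array([3.14159265, 1e30, -2.5e-20], dtype=np.float32)
bf = to_bfloat16(x)
# Very large and very small magnitudes survive (same exponent range),
# while each value is only accurate to ~1 part in 128.
```

Real hardware rounds rather than truncates, but the range-versus-precision trade-off shown here is the point.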
And tiling is where, if you are memory constrained, you essentially have to take that picture and chop it up into little pieces, process each piece, and then stitch it back together at the end, and that loses a lot of context for the AI model. So if you're able to process that entire picture, you get a much better result; that is the benefit of having that memory accessible to the compute. So when you are buying the latest and greatest HPE servers, you will have built-in AI performance with Intel Xeon Scalable and Optane persistent memory. A couple of things that you said in there piqued my interest: a 30X improvement in performance. You talked about that with respect to Deep Learning Boost? 30X is a huge factor. And you also said that your solution, from a memory perspective, doesn't require tiling, and I heard context. Context is key; you have to have context for the data to be able to understand and interpret and make inferences. So talk to me about some of those big changes that you're releasing. What were some of the customer compelling events, or maybe industry opportunities, that drove Intel to make such huge performance gains in second generation? Right, so second generation, these are the processors that are out now, so these are features that our customers are using today. Third generation is coming out this month. But for second generation and Deep Learning Boost, what's really important is the software optimization, and the fact that we're able to use the hooks that we built into the hardware, but then use software to make sure that we are optimizing performance on those platforms. And it's extremely relevant to talk about software in the AI space, because AI solutions can get super expensive. You can easily pay two to three X what you should be paying if you don't have optimized software, because then what you do is just throw more and more compute and more and more hardware at the problem, but it's not optimized.
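The tiling fallback Monica describes at the top of that answer, chopping an image into pieces, processing each, and stitching the results back together, can be sketched as below. The `model` callable is a hypothetical stand-in for a real inference call; the key limitation is that it only ever sees one tile's worth of context at a time:

```python
import numpy as np

def process_tiled(image: np.ndarray, tile: int, model) -> np.ndarray:
    """Run `model` over an image in fixed-size tiles and stitch the output.

    This is the memory-constrained fallback: each tile is processed
    independently, so the model never sees context outside the tile's
    borders. With enough memory near the CPU you would instead call
    model(image) once, on the whole picture.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = model(patch)
    return out

image = np.arange(64.0).reshape(8, 8)
identity = lambda p: p  # hypothetical stand-in for an inference call
tiled = process_tiled(image, 4, identity)
```

For a model that actually uses spatial context (unlike the identity stand-in), the stitched result differs from whole-image processing near every tile boundary, which is the context loss described above.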
And so what's really impactful is being able to run a vast number of AI applications on your base platform. That essentially means you can run them in a mixed-workload environment together with your other applications, and you're not standing up separate infrastructure. Now, of course, there will be some applications that do need separate infrastructure, that do need appliances and accelerators, and for those we have a host of accelerators: we have FPGAs today for real-time, low-latency inference, and we have our Movidius VPUs for low-power vision applications at the edge. But by and large, if you're looking at classical machine learning, analytics, or deep learning inference, that can run on a base platform today. And I think that's what's important in ensuring that more and more customers are able to run AI at scale. It's not just a matter of running a POC in a back lab; you do that on the infrastructure that you have available, not an issue. But when you are looking to scale, the cost is going to be significantly important. And that's why it's important for us to make sure that we are building in as much performance as is feasible into the base platform, and then offering software tools to allow our customers to see that performance. Okay, so we've talked about the technology components, performance, memory, what's needed to scale on the technology side. Let's then look at the business side, because we know a lot of customers in any industry undertake AI projects and run into pitfalls where they're not able to even get off the ground. So, converse to the technology side, what is it that you're seeing? What are the pitfalls that customers can avoid on the business side to get these AI projects designed and launched? Yeah, so on the business side, I mean, you really have to start with a very solid business plan for why you're doing AI.
And it's even less about just the AI piece; you have to have a very solid business plan for your solution as a whole. If you're doing AI just to do AI, because you saw that it's a top trend for 2020 so you must do AI, that's likely not going to result in success. You have to make sure that you understand why you're doing AI. If you have a problem that could be easily solved with data analytics, use data analytics. AI should be used where it's appropriate and where it provides true benefit, and I think if you can demonstrate that, you're a long way toward getting your project off the ground. And then there are several other pitfalls, like data: do you have enough data? Is it close enough to your compute to be accessible and feasible? Do you have the resources that are skilled in AI that can get your solution off the ground? Do you have a plan for what to do after you've deployed your solution? Because these models need to be maintained on a regular basis, so some sort of a maintenance program needs to be in place. And then infrastructure. Costs can be prohibitive a lot of times if you're not able to leverage a good amount of your base infrastructure, and that's really where we spend a lot of time with customers: trying to understand what their model is trying to do and whether they can use their base infrastructure. Can they reuse as much of what they have? What is their current utilization? Do they maybe have cycles in off times, if their utilization is diurnal and during the night they have lower utilization? Can you train your models at night rather than putting up a whole new set of infrastructure that likely will not be approved by management? Let's be honest. And I imagine that that is all part of the joint go-to-market strategy that HPE and Intel have together, to have conversations like that with customers to help really build a robust business plan.
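The night-time training idea mentioned a moment ago, using off-peak cycles on existing infrastructure instead of buying new hardware, comes down to a simple off-peak gate. A minimal sketch; the window boundaries here are hypothetical, and in practice they would come from your own utilization monitoring:

```python
from datetime import datetime, time

# Hypothetical off-peak window for a diurnal workload: assume business
# utilization drops between 22:00 and 06:00. Real values would come
# from monitoring data, not a hard-coded guess.
OFF_PEAK_START = time(22, 0)
OFF_PEAK_END = time(6, 0)

def can_train_now(now: datetime) -> bool:
    """Return True when we're inside the off-peak window, i.e. when a
    training run can borrow spare cycles from existing infrastructure."""
    t = now.time()
    # The window wraps past midnight, so it's "after start OR before end".
    return t >= OFF_PEAK_START or t < OFF_PEAK_END
```

A scheduler would poll this (or a real utilization metric) before launching training jobs, and pause or checkpoint them when the business day starts.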
Yeah, so HPE is fantastic at consulting with customers from beginning to end, looking at solutions, and they've got a whole suite of storage solutions as well, which are crucial for AI. And Intel works together with HPE to create reference architectures for AI, and then we do joint training as well. But yes, talking to your HPE rep and leveraging your ecosystem, I think, is incredibly important, because the ecosystem is so diverse and there are a lot of resources available, from ISVs to hardware providers to consulting companies that are able to support with AI. So Monica, the ecosystem is incredibly important, but how do you work with customers, HPE and Intel together, to help a customer, whether it's in biotech or manufacturing, build an ecosystem of partnership that can help the customer really define the business plan of what they want to do, get that cross-functional collaboration, buy-in, and support, and launch a successful AI project? Yeah, it really does take a village, but both Intel and HPE have an extensive partner network. These are partners that we work with to optimize their solutions; in HPE's case, they validate their solutions on HPE hardware to ensure that they run smoothly. And for our customers, we have the ability to matchmake with partners in the ecosystem. Generally the way it works is, in specific segments, we have a list of partners that we can draw from, and we introduce those to the customer. The customer generally has a couple of meetings with them to see which one is a better fit, and then they go from there. But essentially it is just making sure that solutions are validated and optimized, and then giving our customers a choice of which partners are the best fit for them. Last question for you, Monica.
We are in the middle of COVID-19, and we see things on the news every day about contact tracing, for example, and social distancing, and a lot of what's talked about on the news is human contact tracers, people being involved in manual processes. What are some of the opportunities that you see for AI to really help drive some of these? Because time is of the essence, yet there's the ethics issue with AI, right? Yes, yes, and the ethics issue is not something that AI can solve on its own. Unfortunately, the ethics conversation is something that we need to have more broadly as a society: from a privacy perspective, how are we going to be mindful and respectful while also being able to use some of the data to protect society, especially in a situation like this? So contact tracing is extremely important. In areas that have a wide system of cameras installed, it's something that is doable from an algorithmic perspective, and there are several partners of ours that are looking at that. Actually, the technology itself, I don't think, is as insurmountable as the logistical aspect, the privacy and ethical aspects, and the regulation around it, making sure that it's not used for the wrong purposes. But certainly with COVID, there is a new set of AI use cases, and contact tracing is obviously one of them. The other thing we are seeing is that companies are adapting a lot of their existing solutions that use AI to account for COVID. Companies that have surveillance systems, for example, that were doing facial recognition in metro stations or stadiums or banks, are now adding features to their systems to detect social distancing, or to detect whether somebody is wearing a mask.
The technology itself, again, is not that difficult, but the implementation, the use, and the governance around it, I think, are a lot more complex. And then I would be remiss not to mention remote learning, which is huge now. I think all of our children are learning remotely at this point, and being able to use AI in curriculums, being able to really pinpoint where a child is having a hard time understanding a concept and then give them more support in that area, is definitely something that our partners are looking at, and it's something that I see with my children and the tools that they're using. So instead of reading to their teacher for their reading test, they're reading to their computer, and the computer is able to pinpoint some very specific issues that maybe a teacher would not see as easily. And then, of course, the teacher has the ability to go back through and listen and make sure that there weren't any issues with dialects or anything like that. So it's really just an interesting reinforcement of the teacher-student learning, with the added algorithmic input as well. Right, a lot of opportunity is going to come out of COVID, some of it maybe more accelerated than the rest, because as you mentioned, it's very complex. Monica, I wish we had more time. This has been a really fascinating conversation about what Intel and HPE are doing with respect to AI. We'll have to have you back, as this topic is just too big, but we thank you so much for your time. Thank you. For my guest, Monica Livingston, I'm Lisa Martin. You're watching theCUBE's coverage of HPE Discover 2020. Thanks for watching.