Live from Las Vegas, it's theCUBE, covering HPE Discover 2017, brought to you by Hewlett Packard Enterprise.

Okay, welcome back everyone. We are live here in Las Vegas for SiliconANGLE Media's CUBE exclusive coverage of HPE Discover 2017. I'm John Furrier, with my co-host Dave Vellante. Our next guest is Kirk Bresniker, Fellow and VP, Chief Architect at Hewlett Packard Labs, and Natalia Vassilieva, Senior Research Manager at Hewlett Packard Labs. Right? Yes. Okay, welcome to theCUBE, good to see you. Thank you. Thanks for coming on. I really appreciate you guys coming on. One of the things I'm most excited about here at HPE Discover is I always like to geek out on the Hewlett Packard Labs booth, which is right behind us. If you go to the wide shot, you can see the awesome display. There are two things in there that I love. The Machine is in there, and I love the new branding, by the way, the pyramid coming out of that, the phoenix rising out of the ashes. And also the memristors, which are really game changing. This is underlying technology, but what's powering the business trends out there that you guys are doing the R&D on is AI and machine learning, and software is changing. So, what are your thoughts? As you look at the Labs, you look at the landscape, and you do the R&D, what's the vision?

So, one of the things that is so fascinating about the transitional period we're in: we look at the kinds of technologies that we've had to date, and certainly they span a whole part of my career arc, and yet all these technologies we've had so far are all getting about as good as they're going to get. You know, the Moore's Law semiconductor process steps, general-purpose operating systems, general-purpose microprocessors. They've had fantastic productivity growth, but they all have a natural life cycle, and they're all maturing. And part of The Machine research program has been, what do we think is coming next? And really what's informing what we have to set as the goals is the kinds of applications that we expect, and those are data-intensive applications. Not just petabytes, exabytes, but zettabytes, tens of zettabytes, hundreds of zettabytes of data out there in all those sensors out in the world. And when you want to analyze that data, you can't just push it back to an individual human. You need to employ machine learning algorithms to go through that data, to cull out and find those needles in those increasingly enormous haystacks so that you can get that key correlation. And when you don't have to reduce and redact and summarize the data, when you can operate on the data at that intelligent edge, you're going to find those correlations, and that machine learning algorithm is going to be that unbiased and unblinking eye that finds the key relationship that will really have a transformational effect.

I think that's interesting. I'd like to ask you one follow-up question on that, because it reminds me of back in my youth, around packets and buffers and speeds and feeds. At some point there was a wire-speed capability: hey, packets are moving, and you can do all this analysis at wire speed. What you're getting at is data processing as fast as the data is coming in and out. If I get that right, is that kind of where you're going with this? Because you have more data coming, I mean, a potentially infinite amount of data coming in. The data is going to be such high velocity.
How do you know what a needle looks like? Or even... And I think that's key, and that's why the research Natalia has been doing is so fundamental: we need to be able to process that incredible amount of information and be able to afford to do it. And the way you will not be able to afford it at scale is if you have to take that data, compress it, reduce it, select it down because of some predetermined decision you've made, transmit it to a centralized location, do the analysis there, and then send back the action commands. We need that cycle of measurement, analysis, and action to be microseconds, and that means it needs to happen at the intelligent edge. And I think that's where the understanding comes in of machine learning algorithms that you don't program, you train, so that they can work off of this enormous amount of data; they voraciously consume the data and produce insights. That's where machine learning will be the key.

Natalia, tell us about your research in this area. Curious, your thoughts?

Yeah, so we started to look at existing machine learning algorithms and at their limiting factors in today's infrastructure, the things which don't allow machine learning algorithms to progress fast enough. One of the recent advances in AI is the appearance, or revival, of artificial neural networks: deep learning. There is a lot of hype around those types of algorithms. Every speech assistant you use, Siri on your phone, Cortana, or Alexa from Amazon, all of them use deep learning to train their speech recognition systems. If you go to Facebook and it suddenly proposes that you tag the faces of your friends, that face detection and face recognition is also done with deep learning. So that's the revival of artificial neural networks. Today we are capable of training good enough models for those types of tasks, but we want to move forward. We want to be able to process large volumes of data, to find more complicated patterns, and to do that we need more compute power. Today, the only way you can add more compute power is to scale out. There is no compute device on Earth today which is capable of doing all the computation on its own; you need many of them interconnected together, all crunching numbers for the same problem. But at some point the communication between those nodes becomes a bottleneck. You need to let the neighboring node know what you achieved, and you can't scale out anymore: adding yet another node to the cluster won't lead to a reduction of the training time. With The Machine, with the memory-driven computing architecture, where all data sits in the same shared pool of memory and all compute nodes have the ability to talk to that memory, we don't have that limitation anymore. So we are looking forward to deploying those algorithms on that type of architecture, and we envision significant speedups in training. It will allow us to retrain the model on new data as it arrives, and not do training offline anymore.
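The scaling wall Natalia describes can be made concrete with a rough model: in synchronous data-parallel training, the compute time per step shrinks as you add nodes, but the gradient exchange (all-reduce) time does not, so speedup eventually flattens and reverses. This is a minimal sketch; every constant in it is an illustrative assumption, not a measurement from The Machine or any HPE system.

```python
# Rough model of synchronous data-parallel training: compute per step
# divides across nodes, but the gradient all-reduce does not, so adding
# nodes eventually stops helping. All constants are assumptions chosen
# only to illustrate the shape of the curve.

COMPUTE_TIME_1_NODE = 1.0    # seconds of compute per step on one node (assumed)
ALLREDUCE_BASE      = 0.05   # fixed synchronization overhead per step (assumed)
ALLREDUCE_PER_NODE  = 0.02   # extra communication cost per added node (assumed)

def step_time(nodes: int) -> float:
    """Time for one synchronous training step across `nodes` workers."""
    compute = COMPUTE_TIME_1_NODE / nodes               # perfectly parallel part
    comm = ALLREDUCE_BASE + ALLREDUCE_PER_NODE * nodes  # serial communication part
    return compute + comm

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16, 32, 64):
        t = step_time(n)
        print(f"{n:3d} nodes: {t:.3f} s/step, speedup {step_time(1) / t:.1f}x")
    # With these numbers the speedup peaks around 8 nodes and falls below
    # 1x by 64: communication dominates. A shared memory pool, as Natalia
    # describes, removes the explicit gradient exchange from this model.
```

Under these assumptions the speedup tops out near 3x and then degrades, which is exactly the "adding another node won't reduce training time" behavior she points to.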
So how does this all work? When HP split into two companies, Hewlett Packard Labs went to HPE and HP Labs went to HP Inc. So first question: what went where? And second question: how do you decide what to work on?

In terms of how we organized ourselves, obviously things that were around printing and personal systems went to HP Inc. Things that are around analytics, enterprise hardware, and research went to Hewlett Packard Labs. The one thing that we both found equally interesting was security, because whether it's personal systems or enterprise systems, we all need systems that are increasingly secure, given the advanced persistent threats that are constantly assaulting everything from our personal systems up through enterprise and public infrastructure. So that's how we've organized ourselves.

Now, in terms of what we get to work on, we're in an interesting position. I came to Labs three years ago. I used to be the Chief Technologist of the Server Global Business Unit, so I was in the world of big D, tiny R. Natalia and the research team at Labs were out there looking out 5, 10, 15, or 20 years: huge R. And we would meet together occasionally. I think one of the things that's happened with our Machine advanced development and research program is that I came to Labs not to become a researcher, but to facilitate that communication: to bring in the engineering and supply chain teams, that technical and production prowess, and the experience from our services teams, who know how things actually get deployed in the real world. I got to sit them at the bench with Natalia and the researchers, and I got to make everyone unhappy, hopefully in equal amounts. The development teams realized we're going to make fantastic progress and products, both conventional systems as well as new systems, but it will be a while. That's why we had to build our prototype, to say, no, we need a constructive proof of these ideas. At the same time, with Natalia and the research teams, who were always looking for that next horizon, that next question, maybe we pulled them a little bit closer and got a few answers out of them rather than just the next question. So that's part of what we've been doing at the Labs: understanding how we organize ourselves, and how we work with the Hewlett Packard Enterprise Pathfinder program to find those little startups who need that extra piece of something we could offer as that partnering community. It's really a novel approach for us to understand how we fill that gap. How do we still have great conventional products? How do we enable breakthrough new-category products, and have them in a timeframe that matters?

So, a much tighter connection between the R and the D. And then, okay, when Natalia wants to initiate a project, or somebody wants Natalia to initiate a project around AI, how does that work? Do you just submit an idea and it goes through some kind of peer review? How does it get funded? Take us through that.

I'll give my perspective, and I'd love to hear what you have from your side. For me, it's always been organic. The ideas that we had on The Machine, for me, my little thread, one of thousands that combined to get us to this point, started about 2003, when we were midway through BladeSystem c-Class, a category-defining product, an absolute home run in defining what a blade system was going to be. And you're partway through that, and you realize you've got a success on your hands, and you think, nothing gets better than this. And then you start to worry: what if nothing gets better than this? And you start thinking about that next set of things. Now, I had some insights of my own, but when you're a technologist and you have an insight, that's a great feeling for a little while, and then it's a little bit of a lonely feeling.
No one else understands this but me, and is it always going to be that way? And then you have to find that business opportunity. So that's where talking with our field teams, talking with our customers, coming to events like Discover, you see business opportunities and you realize: my ingenuity and this business opportunity are a match. Now, the third piece of that is a business leader who can say, you know what, your ingenuity and that opportunity can meet in a finite time with finite resources. Let's do it. And really that's what Meg and the leadership team did with us on The Machine.

Kirk, I want to shift gears and talk about the memristor, because I think that's the showcase item everyone's talking about. Honestly, The Machine has been talked about for many years now, but the memristor changes the game, and it goes back to old-school analog, right? We were talking about the kind of performance that we've never seen before. So it's a completely different take on memory, and this brings up your vision, and the team's vision, of memory-driven computing, which some are saying can scale machine learning, because now you have data response times in microseconds, as you said, and provisioning containers in microseconds, which is actually really amazing. So the question is: what is memory-driven computing, what does that mean, and what are the challenges in deep learning today?

All right, so I'll do the memory-driven computing and you'll do the machine learning. What I think of as memory-driven computing is the realization that we need a new set of technologies, and it's not just one thing. If it could have been "can't we just do dot, dot, dot," we would have done that one thing. So this is a more holistic approach, looking at all the technologies we need to pull together. Now, memories are fascinating, and our memristor is one example of a new class of memory.

And it's doing it differently too, it's not like...

It's doing it differently, it's changing the physics. You want to change the economics of information technology? You change the physics you're using. So here we're changing physics, and whether it's our work on the memristor with Western Digital and the Resistive RAM program, whether it's the phase-change memories, whether it's the spin-torque memories, they're all employing a new physics. What they all share, though, is the characteristic that they can continue to scale. They can scale in the layers inside of a die, the dies inside of a package, the packages inside of a module. And then when we add photonics, a transformational information communications technology, now we're scaling from the package to the enclosure to the rack, across the aisle, and then across the data center, with all that memory accessible as memory. So that's the first piece: large persistent memories. The second piece is the fabric, the way we interconnect them so that we can have great computational, memory, and communication devices available on industry open standards. That's the Gen-Z consortium. The last piece is software: new software, as well as adapting existing productive programming techniques and enabling people to be very productive immediately.
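To give a flavor of "all that memory accessible as memory," here is a minimal sketch of the load/store programming model that large byte-addressable persistent memory commonly exposes on Linux today: you map a region into the address space and read and write it in place, with no block I/O in the path. This is a generic illustration, not The Machine's actual software stack, and the file path is hypothetical.

```python
# Minimal sketch of the load/store programming model for large persistent
# memory: map a region into the address space and access it in place.
# Generic Linux illustration only; the path below is hypothetical and this
# is not The Machine's actual software stack.
import mmap
import os

PATH = "/mnt/pmem0/shared_pool.bin"   # hypothetical DAX-mounted pmem file
SIZE = 1 << 30                        # map 1 GiB of the pool

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)                # size the region before mapping

with mmap.mmap(fd, SIZE) as pool:
    # Stores are ordinary memory writes: no read()/write() syscalls,
    # no serialization, no copies through a block layer.
    pool[0:5] = b"hello"
    # Loads are ordinary memory reads; on persistent media the data
    # survives process exit (after the appropriate cache flushes, which
    # real pmem libraries such as PMDK handle for you).
    assert pool[0:5] == b"hello"

os.close(fd)
```

The design point the sketch illustrates is that the same pattern keeps working as the pool grows from one module to rack scale, which is what the fabric and photonics pieces Kirk describes are meant to enable.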
Before Natalia gets into her piece, I just want to ask a question, because this is interesting to me. Sorry to get geeky here, but this is really cool, because you're going analog with signaling. So, going back to the old concepts of signaling theory, you mentioned neural networks. It's almost a hand-in-glove situation with neural networks. So here you have the next question, which is connecting the dots to machine learning and neural networks. This seems to be an interesting technology game changer. Is that right? Am I getting this right? I mean, what does this mean? Is that close?

I'll just add one piece, and then I'll hand it to Natalia, who's the expert on the machine learning. For me, it's bringing the right ensemble of components together: memory technologies, communication technologies, and, as you say, novel computational technologies, because transistors are not going to get smaller for very much longer. We have to think of something more clever to do than just stamp out another copy of a standard architecture.

Yes, and you asked about the challenges of deep learning. If we look at the landscape of deep learning today and the set of tasks solved by those models, we see that although there is a variety of tasks, most of them are from the same area. We can analyze images very efficiently, we can analyze video, so that's all visual data. We can also do speech processing. There are examples in other domains with other data types, but there are many fewer, so there is much less knowledge of which models to train for those applications. I think one of the challenges for deep learning is to expand the variety of applications for which it can be used. We know that artificial neural networks are very well suited to data with many hidden patterns underneath, multi-dimensional data like data from sensors. But we still need to learn what the right topology of neural network is for that, and what the right algorithm is to train it. So we need to broaden the scope of applications which can take advantage of deep learning.

Another aspect, which I mentioned before, is the computational power of today's devices. If you think about the well-known analogy between artificial neural networks and our brain, the size of the models we train today is much, much smaller than the analogous thing in our brain, by many orders of magnitude. It has been shown that if you increase the size of the model, you can get better accuracy for some tasks and process a larger variety of data. But in order to train those large models, you need more data and you need more compute power, and today we don't have enough compute power. Actually, I did some computation: in order to train a model which is comparable in size to our human brain, and to train it in a reasonable time, you would need a compute device capable of performing 10 to the power of 26 floating point operations per second.

Can you repeat that again?

10 to the power of 26. We are far, far below that point now.
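Natalia's 10^26 figure comes from her own calculation, whose inputs the interview doesn't give. For a sense of how such an estimate is typically assembled, here is one back-of-envelope sketch; every input is our own assumption, chosen only to show the shape of the reasoning, so the result illustrates the method rather than reproducing her number.

```python
# Back-of-envelope estimate of the sustained compute rate needed to train
# a brain-scale neural network in "reasonable" time. All inputs below are
# assumptions for illustration; Natalia's own inputs aren't given in the
# interview.

SYNAPSES         = 1e15              # rough synapse count in a human brain (ballpark)
FLOPS_PER_WEIGHT = 6                 # forward + backward cost per weight per sample (rule of thumb)
TRAINING_SAMPLES = 1e12              # assumed number of training examples seen
TRAINING_SECONDS = 30 * 24 * 3600    # "reasonable time": one month

total_flops = SYNAPSES * FLOPS_PER_WEIGHT * TRAINING_SAMPLES
required_rate = total_flops / TRAINING_SECONDS

print(f"total training work:     {total_flops:.1e} FLOPs")
print(f"required sustained rate: {required_rate:.1e} FLOP/s")
# ~2.3e21 FLOP/s with these inputs. Different, equally defensible choices
# for the sample count and the training window move the answer several
# orders of magnitude, which is how estimates in the 1e26 range arise.
```

The takeaway matches hers either way: every plausible set of inputs lands many orders of magnitude beyond what any machine today can sustain.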
All right, so here's the question for you guys. All this deep learning source code is out there. It's open bar for open source right now. All this goodness is pouring in. Google's donating code, you guys are donating code. It used to be that you had to build your code from scratch, and maybe borrow here and there and share in open source. Now it's a tsunami of greatness. So I'm just going to build my own deep learning. How do customers do that? It's too hard.

You're right on point; that's the next challenge which I believe is out there, because we have so many efforts to speed up the infrastructure, and we have so many open source libraries. So now the question is: okay, I have my application at hand, what should I choose? What is the right compute node for deep learning? Everybody uses GPUs, but is that true for all models? How many GPUs do I need? What is the optimal number of nodes in the cluster? We have a research effort to answer those questions as well.

And a breathalyzer for all the drunk coders out there. Open bar. I mean, a lot of young kids are coming in. This is a great opportunity for everyone.

In all seriousness, we need algorithms for the algorithms. And I think that's where it's so fascinating. We think of some classes of things, like recognizing handwriting, recognizing voice. But then we want to apply machine learning algorithms to the volume of sensor data, so that not only every item we manufacture, but every factory, could be fully instrumented, with machine learning understanding how it could be optimized. And then, what are the business processes feeding that factory? And what are the overall economic factors feeding that business? Instrumenting that, and having this learning, this unblinking, unbiased eye, examining it to find those hidden correlations, those hidden connections that could yield a much more efficient system at every level of human enterprise.

Data is more diverse now than ever. Sorry, Dave, to wrap up. I mean, voice, you mentioned. You see Siri, you see Alexa; voice is one data set. Data diversity is massive. So more needles, and more types of needles, than ever before.

So in that example that you gave, you need a domain expert, and there are plenty of those. But you also need a big brain to build the model and train the model and iterate, and there aren't that many of those. Is the state of machine learning and AI going to get to the point where that problem will solve itself? Or do we just need to train more big brains? What are your thoughts?

Actually, one of the advantages of deep learning is that you don't need that much effort from the domain experts anymore for the step which was called feature engineering: what you do with your data before you throw a machine learning algorithm at it. The pretty cool thing about deep learning and artificial neural networks is that you can almost throw raw data at them. And there are examples out there of people without any knowledge of medicine winning a drug discovery competition by applying deep neural networks, without knowing all the details about the interactions between proteins. They're not domain experts, but they were still able to win that competition just because the algorithm is that good.
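Natalia's point that deep learning largely removes the feature-engineering step shows up in how little task-specific code a model needs: raw inputs go straight into the network, which learns its own representation. Here is a minimal sketch in PyTorch on synthetic stand-in data (so it runs self-contained); the data, sizes, and hidden rule are all invented for illustration.

```python
# Minimal sketch of "throw (almost) raw data at the network": no hand-crafted
# features; the layers learn the representation. Synthetic stand-in data is
# used so the example is self-contained and illustrative only.
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in "raw" signals: 1024 samples of 64 sensor readings each, with a
# hidden rule (class = sign of a random linear pattern) the net must discover.
X = torch.randn(1024, 64)
true_pattern = torch.randn(64)
y = (X @ true_pattern > 0).long()

model = nn.Sequential(        # raw input -> learned features -> decision
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2%}")  # high, with zero feature engineering
```

The domain knowledge that used to go into designing features is absorbed by the network itself, which is why non-experts can compete on tasks like the drug discovery example she cites.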
Kirk, I want to ask you a final question before we break in the segment, because I spent nine years of my career at HP in the 80s and 90s. It's been well known that there's been great research at HP; the R&D has been spectacular, though some would say too much R, not enough applied D. You mentioned you're bringing that to market faster. So the question is, what should customers know about Hewlett Packard Labs today? Your mission. Obviously the memory-centric work is a key thing. You've got The Machine, you've got the memristor, you've got a novel way of looking at things. What's the story that you'd like to share? Take a minute to close out the segment: Hewlett Packard Labs' mission, and what to expect from you guys in terms of your research, your development, your applications. What are you guys bringing out of the kitchen? What's cooking in the oven?

I think for us, we've been given an opportunity: an opportunity to take all of those ideas that we have been ruminating on for five, ten, maybe even fifteen years, all those things where you thought, this is really something, and to build a practical working example. We just turned on the prototype with more memory and more computation addressable simultaneously than anyone has ever assembled before. And I think that's the real vote of confidence from our leadership team. They said, no, the ideas you guys have, this is going to change the way the world works, and we want to see you given every opportunity to make that real and make it effective. And I think everything that Hewlett Packard Enterprise has done to focus the company on being that fantastic infrastructure provider and partner is enabling us to get this innovation out and make it meaningful. I've been designing printed circuit boards for 28 years now, and I must admit, that's intellectually stimulating on one level, but when you actually meet someone who's changing the face of Alzheimer's research, or changing the way we produce energy as a society, and who has an opportunity to really create a more sustainable world, then you say, that's really worth it. That's why I get up and come to Labs every day: to work with fantastic researchers like Natalia, with great customers, great partners, and our whole supply chain. The whole team coming together is just spectacular.

Well, congratulations. Thanks for sharing the insight on theCUBE. Natalia, thanks very much for coming on. Great stuff going on. We're looking forward to tracking the progress and checking in with you guys. It's always good to see what's going on in the Labs. That's the headroom, that's the future, that's the bridge to the future. Thanks for coming on theCUBE. Of course, more CUBE coverage here at HPE Discover. We've got the keynotes coming up, Meg Whitman on stage with Antonio Neri, a CUBE alum. Back with more live coverage after this short break. Stay with us.