Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners.

Welcome back to the Sands Convention Center in Las Vegas, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host, Justin Warren. This is day one of our coverage of AWS re:Invent 2019. Naveen Rao is here. He's the corporate vice president and general manager of the Artificial Intelligence Products Group at Intel. Good to see you again. Thanks for coming to theCUBE.

Thanks for having me.

You're very welcome. So what's going on with Intel and AI? Give us the big picture.

Yeah, the very big picture is that the world of computing is really shifting. The purpose of what a computer is made for is shifting. From its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence. We took a divergent path and built applications for productivity, but now we're coming back to that original intent. And that touches everything Intel does, because we're a computing company; we supply computing to the world. So everything we do is impacted by AI, and will be in service of building better platforms for intelligence at the edge, intelligence in the cloud, and everything in between.

That's really come full circle. When I first started in this industry, AI was the big hot topic, and Intel's ascendancy was really around personal productivity. But now we're seeing machines replacing cognitive functions for humans. That has implications for society, and there's a whole new set of workloads emerging that's presumably driving different requirements. So what do you see as the infrastructure requirements for those new workloads? What's Intel's point of view on that?
Well, let's focus on the cloud first. Any machine learning algorithm typically has two phases. One is called training, or learning, where we iterate over large data sets to fit model parameters. Once that's done to the satisfaction of whatever performance metrics are relevant to your application, the model is rolled out and deployed; that phase is called inference. These two are actually quite different in their requirements. Inference is all about the best performance per watt: how much processing can I fit into a particular time and power budget? Training is much more about flexibility, being able to explore different types of models and train them very, very fast. When this field started taking off in 2013, 2014, training a model typically took a month or so. Those models now take minutes to train, but the models have grown substantially in size, so we've crept back up to a couple of weeks of training time. Anything we can do to reduce that is very important.

And why the compression? Is that because of just so much data?

It's data, and...

And you have to act on it?

Yeah, it's the sheer amount of data, the complexity of the data, and the complexity of the models. A rough measure of that complexity is the number of parameters in a model. Back in 2013, models had maybe 10 or 20 million parameters, which was very large for a machine learning model. Now they're in the billions; one or two billion parameters is the state of the art. To give you some bearings on that, the human brain has on the order of three to five hundred trillion parameters, so we're still pretty far away from that.

So we've got a long way to go.

Yeah. So one of the things about these models is that once you've trained them, they do things. But understanding how they work is hard; these are incredibly complex mathematical models.
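As an aside, the two phases described here can be shown with a toy example. The model and numbers below are invented purely for illustration (a plain NumPy linear regression), not anything specific to Intel's or AWS's stack: training iterates over the data set to fit parameters, while inference is a single cheap pass with the frozen parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 plus a little noise (made-up values).
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.05, size=200)

# --- Training phase: iterate over the data set to fit parameters ---
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X[:, 0] + b          # forward pass over the whole batch
    err = pred - y
    w -= lr * 2 * (err * X[:, 0]).mean()   # gradient of mean squared error
    b -= lr * 2 * err.mean()

# --- Inference phase: apply the frozen parameters to new input ---
def infer(x):
    return w * x + b

print(f"learned w={w:.2f}, b={b:.2f}")  # should land near 3 and 1
```

The asymmetry Rao describes is visible even here: training loops over the data hundreds of times, while inference is one multiply and one add.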
So are we at a point where we just don't understand how these machines actually work? Or do we have a pretty good idea that when a model is trained to do this thing, this is how it behaves?

Well, it really depends on what you mean by how much understanding we have. At one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brains. We trust that there's a process in place that has tested them enough. Take the neurosurgeon cutting into your head: there's a system where that neurosurgeon had to go through a ton of training and be tested over and over again, and now we trust that he or she is doing the right thing. The same thing is happening in AI. Some aspects we can bound, where I have analytical methods for measuring performance. In other places it's not so easy to measure performance analytically, so we have to do it empirically, which means we have data sets and we ask: does it stand up to all the different tests?

One area where we're seeing that is autonomous driving. Autonomous driving is a bit of a black box, and the situations one can encounter on the road are almost limitless. For a 16-year-old, we say go out and drive, and eventually you learn it. The same thing is happening now for autonomous systems: we have training data sets where we ask, do you do the right thing in these scenarios? And then we trust that it will probably do the right thing in the real world.

We know that Intel has partnered with AWS on autonomous driving with the DeepRacer project, and I believe the grand final is on Thursday. It was announced on theCUBE last year, and there's been a whole bunch of competitions running all year, basically training models that run on an Intel chip inside a little model car that drives around a racetrack.
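For readers curious what "training a model for the racetrack" looks like in practice, DeepRacer entrants mostly shape the car's behavior by writing a reward function that the simulator calls on every step. Here is a minimal sketch in that style; the parameter keys follow AWS's published examples, but treat the whole function as an illustrative toy rather than a competitive entry:

```python
def reward_function(params):
    """Toy DeepRacer-style reward: prefer staying near the center line.

    `params` is the dict the simulator passes in; the keys used here
    ('track_width', 'distance_from_center') follow AWS's documented
    examples, but the banding thresholds below are made up.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward bands: the closer to the center line, the higher the reward.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # probably off the track: near-zero reward
```

This is the "train it like you'd coach a learner" interface: competitors never write steering logic, they only say which outcomes deserve reward, and reinforcement learning does the rest.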
So, speaking of empirical testing of whether or not it works, lap times give you a pretty good idea. What have you learned from the experience of having all of these people go out, learn how to use these AI models on a real live race car, and race around a track?

Several things. One is that when you turn a number of developers loose on a competition, you get really interesting results; people find creative ways to use the tools to try to win. I always love that process. I think competition is how you push technology forward.

The tool side is actually more interesting to me. We had to come up with something simple enough that a large number of people could get going on it quickly. You can't have somebody spend a year just getting the basic infrastructure to work, so we had to put that in place. And that's still an iterative process. We're still learning which knobs we can expose, which areas of innovation we allow the user to explore, and where we lock things down to make it easy to use. That's the biggest learning we get from this: how AI can be deployed in the real world, and what's really needed from a tool chain standpoint.

Can you talk more specifically about what you each bring to the table in your collaboration with AWS?

AWS has been a great partner. Obviously, AWS has a huge ecosystem of developers of all kinds: web developers are one sort of developer, database developers are another, AI developers are yet another. We're partnering to empower that AI base. What we bring, technologically, is of course the hardware. Our CPUs are AI-ready now, with a lot of software that we've been putting out in open source, plus tools like OpenVINO, which make it very easy to start running AI models on our hardware.
And so we tie that into the infrastructure that AWS is building for something like DeepRacer, and then help build a community and an ecosystem of developers around it.

I want to go back to the point you were making about black-box AI. People are concerned about that; they're concerned about explainability. Do you feel like that's a function of just the newness, and we'll eventually get over it? I could give so many examples in my life where I can't really explain how I know something, but I know it and I trust it. Do you feel like it's sort of a tempest in a teapot?

It depends on what you're talking about. If you're talking about the traceability of a financial transaction, we need that, maybe for legal reasons. We require it even for humans: you've got to write down everything you did. Why did you do this? Why did you do that? So we actually want traceability for humans, too. In other places, I think it really is about the newness. Do I really trust this thing? I don't know what it's doing. Trust comes with use; after a while, it becomes pretty straightforward. That was probably true for the cell phone. I remember the first smartphones coming out in the early 2000s. I didn't trust how they worked, and I would never do a credit card transaction on one. Now I take it for granted. I've done it a million times and never had any problems, right?

It's the opposite of social media for most people.

Well, maybe. Let's not go down that path. I quite like Dr. Kate Darling's analogy from the MIT Media Lab, which is that we already have AI and we're quite used to it: it's called dogs. We don't fully understand how a dog makes a decision, and yet we work with them every day.

That's right, in collaboration with humans. And a dog does jobs a human sort of could do, but doesn't want to. I don't particularly want to go and sniff things all day long.
So having AI systems that can actually take over some of those jobs, that's kind of great.

Exactly. And think about it like this: if we can build systems that are tireless, that we can give more power and they just keep going, that's a big win for us. The dog analogy is great, because my eventual goal as an AI researcher is to make the interface for intelligent agents be like a dog: you train it like a dog, reinforce the behaviors you want, and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric.

Can you talk about GANs? What does GAN stand for, and what does it mean?

Generative adversarial networks. You can think of it as two competing sides solving a problem. Say I'm trying to make a fake picture of you, one that, I don't know, makes you look like you have no hair, like me. You can usually spot a Photoshop job; you can kind of tell, oh, that's not so great. So one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. You have two neural networks working against each other: one is generating stuff, and the other one is judging, is it fake or not? And they keep improving each other. This one tells that one, no, I can tell. This one goes and tries something else. This one says, no, I can still tell. Once the discerning network can't tell anymore, you've built something that's really good. That's the general principle: two things fighting each other to get better and better at a particular task.

Like deepfakes.

I use that example because it's a relevant case, and that is in fact where deepfakes came from: GANs.

Okay, and wow, obviously relevant with 2020 coming up. I want to ask you how far we can take AI, a two-part question. How far can we take AI in the near to mid term?
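Before the conversation moves on: the generator-versus-discriminator loop Rao describes can be sketched in a few lines. The example below is a deliberately tiny 1-D "GAN" in plain NumPy, with invented learning rates and a made-up target distribution; real GANs use deep networks, but the adversarial dynamic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

# "Real" data the generator must learn to imitate: numbers around 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: x_fake = a*z + b, initialized far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), guesses "real or fake?".
w, c = 0.0, 0.0
lr, n = 0.05, 128

for _ in range(3000):
    z = rng.normal(0, 1, n)
    xr, xf = real_batch(n), a * z + b

    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * ((dr - 1) * xr + df * xf).mean()
    c -= lr * ((dr - 1) + df).mean()

    # --- Generator step: move fakes toward where D says "real" ---
    df = sigmoid(w * xf + c)
    a -= lr * ((df - 1) * w * z).mean()
    b -= lr * ((df - 1) * w).mean()

fake_mean = (a * rng.normal(0, 1, 10_000) + b).mean()
print(f"generated mean ~ {fake_mean:.2f} (target 4.0)")
```

The two updates literally fight: the discriminator step sharpens its real/fake boundary, and the generator step uses that same boundary's gradient to drag its samples across it, until the fakes sit on top of the real data and the discriminator can no longer tell.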
Let's say within our lifetimes. And then, how far should we take it? Maybe you can address some of those thoughts.

So, how far can we take it? We often hear the sci-fi narrative of building killer machines and so on. I don't think that's actually going to happen anytime soon, for several reasons. One is that we build machines for a purpose. They don't come from an embattled evolutionary past like we do, so their motivations are a bit different; they're really purpose-driven. Also, building something as general as a human or a dog is very hard, and we're not anywhere close to that. I talked about the trillions of parameters a human brain has. We might be able to get close to that number from an engineering standpoint, but we're not close to making those trillions of parameters work together as coherently, or as efficiently, as a human brain does. The human brain does it in 20 watts; to do it today would take multiple megawatts. So it's not something that's easily found just lying around.

Now, how far should we take it? I look at AI as a way to push humanity to the next level. Let me explain what that means. The simple equation I always write down is this: people say, oh, radiologists aren't going to have a job. No, no, no. What it means is that one radiologist plus AI equals a hundred radiologists. I can take that person's capabilities and scale them almost freely to millions of other people. It increases the accessibility of expertise; we can scale expertise. That's a good thing, and it solves problems like the ones we have in healthcare today. That's where we should be going with it.

A good example, and probably part of the answer, is this: when will machines make better diagnoses than doctors? In some cases that probably exists today, though not broadly. But that's a good example, right?

It is, but it's a tool.
I look at it as giving a human doctor more data on which to make a better decision. What AI really does for us is remove the limit on the amount of data we can base decisions on. As a human, all I can do is read so much, hear so much, touch so much; that's my limit of input. If an AI system out there is listening to billions of observations and presenting the data in a form I can make better decisions on, that's a win. It lets us move science forward, and move the accessibility of technologies forward.

So keeping that time frame I mentioned, our lifetimes, however you want to define that: when do you think, or do you think, that driving your own car will become obsolete?

I don't know that it will ever be obsolete, and I'm a little biased on this, because I actually race cars.

Me too, and I drive a stick.

I race them semi-professionally, so I don't want that to go away. But it's the same as horses: we don't need to ride them anymore, but we still do for fun. So I don't think it will completely go away. What I think will happen is that commutes will change; we'll use autonomous systems for those. Five to seven years from now, we'll be using autonomy much more on prescribed routes. It won't completely replace a human driver even in that time frame, because it's a very hard problem to solve in the completely general sense. So it's going to be a gentle evolution over the next 20 to 30 years.

Do you think AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country anyway, onshore manufacturing?

Perhaps. I was in Taiwan a couple of months ago, and you're seeing that already. Things that were much more labor-intensive before, because of economic constraints, are becoming more mechanized using AI. AI as inspection: did this machine install this part correctly?
So you have an inspector tool and a machine building the device; you can think of it a little bit like a GAN, right? This is happening already. And I think that's one of the good parts of AI: it takes away the harsh conditions humans had to work in before to build devices.

Do you think AI will eventually make large retail stores go away?

As long as there are humans who want immediate satisfaction, I don't know that they'll completely go away. Some humans enjoy shopping.

Some people like browsing.

Yeah, it depends how fast you need to get it.

And then my last AI question: do you think traditional banks will lose control of the payment systems as a result of things like machine intelligence?

Yeah, I do think there are going to be some significant shifts there. We're already seeing many payment companies automating several aspects of this and reducing the friction of moving money: moving money between people, moving money between different types of assets, like stocks and bitcoin. And AI is a critical component that people don't see, because it's what lets you make sure, first of all, that you're doing a transaction that makes sense. When I move from this currency to that one, I have some sense of what a real number looks like, and it becomes much harder to defraud. That's a critical element in making these technologies work, so you need AI to make it happen.

All right, we'll give you the last word. Maybe talk a little bit about what we can expect from AI going forward, or anything else you'd like to highlight this year.

I think we're at a really critical inflection point. We have something that basically works, and we're going to scale it and scale it to bring on new capabilities. It's going to be really expensive for the next few years, but then we're going to throw more engineering at it and start bringing the cost down.
Then I see this starting to look a lot more like a brain: intelligence everywhere, at various levels, with very low-power ubiquitous compute at the edge and very high-power compute in the cloud, bringing these intelligent capabilities everywhere.

It's great having guests like you. Thanks so much for coming on theCUBE.

Thank you for having me.

You're really welcome. All right, keep it right there, everybody. We'll be back with our next guest. Dave Vellante for Justin Warren. You're watching theCUBE, live from AWS re:Invent 2019. We'll be right back.