Live from Austin, Texas, it's theCUBE, covering South by Southwest 2017. Brought to you by Intel. Now, here's John Furrier.

We're at South by Southwest in Austin, Texas. This is SiliconANGLE's theCUBE, our broadcast, where we go out and extract the signal from the noise. I'm John Furrier, and I'm here with Naveen Rao, the Vice President and General Manager of the Artificial Intelligence Solutions Group at Intel. Welcome to theCUBE.

Thank you, yeah.

So we're here with a big crowd at the Intel AI Lounge. Okay, so that's your wheelhouse. You're the General Manager of AI Solutions.

That's right.

What is AI?

Well, AI has been redefined a few times through its history. Today it generally means applied machine learning: basically, ways to find useful structure in data and do something with it. It's a tool, really, more than anything else.

So, obviously, AI is a mental model people can use to understand kind of what's going on with software. Machine learning and IoT are hot areas in the industry, but it really points to a future world where you see software tackling new problems at scale. Cloud computing, and what you guys are doing with the chips and software, is now creating a scale dynamic. Similar to what Moore's Law has done for devices, you're starting to see software impact society. So what are some of those game-changing impacts that you see and that you're looking at at Intel?

Well, there are many kinds of thought labor that many of us would characterize as drudgery, right? For instance, if I'm an insurance company and I want to assess the risk in 10 million pages of text, I can't do that very easily, right? I have to have a team of analysts run through it and write summaries. These are the kinds of problems you can start to attack. So the way I always look at it is: what the bulldozer was to physical labor, AI is to data, right? This is the thought labor.
We can really get through much more of it and use more data to make our decisions better.

So what are the big game-changing things going on that people can relate to? Obviously, autonomous vehicles is one we can all look at and say, wow, that's mind-blowing. Smart cities is one where you say, oh my God, I'm a resident of a community: do they have to change the roads? Who writes the software? Is there a budget for that? Autonomous, sorry, smart homes: you see Alexa from Amazon, you see Google with their home product, voice bots, voice interfaces. So the user interface is certainly changing. How is that impacting some of the things you guys are working on?

Well, in terms of the user interface changing, I think that has an entire dynamic on how people use tools, right? These are things where, the more people use them, the more pervasive they become, and we start discovering these emergent dynamics. Take the iPod, for instance. Storing music in digital form on a small device was around before the iPod, but when it was made easy to use, that sort of gave rise to the smartphone, right? So I think we're going to start seeing some really interesting dynamics like that.

One of the things that I liked about this past week in San Francisco: Google had their big cloud event, and they talked a lot about this. By the way, Intel was on stage with the new Xeon processor, up to 72 cores. Amazing compute capabilities. Cloud computing does bring that scale together. But you start thinking about how data science has moved into using data, and now you have a tsunami of data, whether it's taking an analog view of the world and now having multiple data sets available. So you can connect the evolution: okay, a lot of data, now you have a lot of data plus a lot of data sets, plus almost unlimited compute capability. That starts to draw out the picture a little bit.
It does, but actually there's one thing missing from what you just described: our ability to scale data storage and data collection has outpaced our ability to compute on it. Computing on it is typically some sort of quadratic function, something that grows faster than linearly with the amount of data, and our compute has really not caught up with that. A lot of that has been about focus. Computers were really built to automate streams of tasks, and the idea of going highly parallel and distributed is somewhat new. I mean, it's been around a lot in academic circles, but having a real use case to drive it home and build technologies around it is relatively new. And so we're right now in the midst of transforming computer architecture into something that becomes a data inference machine, right? Not just a way to automate compute tasks, but to actually do inference and find useful inferences in data.

And so machine learning is the hottest trend right now that kind of powers AI, but there's also some talk in thought-leader circles around learning machines, machines learning from data, or however you want to call it, and that brings out another question. How do you see that evolving? Because do we need to have algorithms to police the algorithms? Who teaches the algorithms? I mean, you bring this human aspect into it. Right, so how does a machine become a learning machine? Who teaches the machine? I mean, it's crazy.

Let me answer that a little bit with a question. Do you have kids?

Yes, four.

Does anyone police you on raising your kids?

Kind of, a little bit, but not much.

Not much.

They complain a lot.

So I would argue that it's not so dissimilar, right? As a parent, your job is really to expose them to the right kind of data, biased or not biased, as much as possible. Experiences are exactly that. I think this idea of shepherding data is extremely important.
And you know, we've seen it in solutions that Google has brought out. There are these little unexpected biases, and a lot of those come from just what's in the data. AI is no different from regular intelligence in that way, right? It's presented with certain data, it learns from that data, and its biases are formed that way. There's nothing inherent about the algorithm itself that causes the bias, other than the data itself.

So are you saying, Naveen, that exposing it to more data is actually probably a good thing?

It is. I mean, exposing it to different kinds of data, diverse data, right? To give an example from the biological world: children who have never seen people of different races tend to see something new and unique and tease it out, like, oh, that's something different. Whereas children who are raised around many diverse face types are perfectly okay seeing new diverse face types. It's the same kind of thing in an AI, right? It's going to hone in on the trends that are common, and things that are outliers it's going to call out as such. So having well-balanced data sets, the way we collect that data, the way we sift through it and actually present it to an AI, is extremely important.

So one of the most exciting things that I geek out on, obviously, is autonomous vehicles. Not that I'm a gearhead or a car buff, but look at what it encapsulates technically: a 5G overlay, sensors all over the car, software powering it, augmented reality and mixed reality kind of coming into it, and an interface between consumers and their real world in a car. Some say it's a moving data center; some say it's also a human interface to the world as people move around in transportation. So it kind of brings up the AI question, and I want to ask you specifically, because Intel talks about this a lot in their demos.
What is Intel actually doing with the compute, and what are you guys doing to accelerate this and create a good, safe environment? Is it just more chips? Is it software? Can you take a minute to explain what Intel's doing specifically?

Intel is uniquely positioned in this space because it's a great example of a full end-to-end problem. We have in-car compute, we have software, we have interfaces, we have actuators, though that's maybe not Intel's strong suit. Then we have connectivity, and then we have the cloud. Intel is in every one of those things, right? And so we are extremely well positioned to drive this field forward. Now, you ask what are we doing? Is it hardware, is it software? Yes, it's all of it. This is a big focus area for Intel now. We see autonomous vehicles being one of the major ways that people interact with the world, with locality between cars and interaction through social networks and these kinds of things. So this is a big focus area. We are working on the in-car compute actively; we're going to lead that. 5G is a huge focus for Intel, as you might have seen at Mobile World Congress and other places. And then the data center, right? We own the data center today, and we're going to continue to do that with new technologies that actually enable these solutions, not just from the pure hardware or primitive perspective, but from the hardware-software interaction and the full stack.

Interesting. And so most people think of Intel as a chip company, and obviously you guys abstract away complexities and put them into silicon, obviously you get that.
And at Google Next this week, one thing I was really impressed by was TensorFlow, the machine learning algorithms in open source. You guys are optimizing the Xeon processor to offload, well, not offload, but kind of take on that work. Is this kind of the paradigm Intel looks at, that you guys will optimize for the highest performance in the chip where possible and then let the software be more functional? Is that a guiding principle, or is that a one-off?

So I would say that Intel is not just a chip company. We make chips, but we're a platform solutions company. We sell primitives at various levels. And so in certain cases, yes, we do optimize software that's out there, because that drives adoption of our solutions, right? But in new areas like the car, for instance, we are driving the whole stack. It's not just the chip, it's the entire package, end to end. And with TensorFlow, definitely. Google is a very strong partner of ours, and we continue to team up on activities like this.

We're here with Dr. Naveen Rao, the Vice President and General Manager of Intel's AI Solutions Group, breaking it down for us. This end-to-end thing is really interesting to me, so I want to just double-click on that a little bit. It requires a community to do that, right? It's not just Intel. Intel's always had a great rising-tide-floats-all-boats kind of concept over the life of the company. But now, more than ever, it's an API world. You want to see integration points between companies. This becomes an interesting part. So can you talk to that point about how you guys are enabling partners to work with this? And if people want to work with Intel, how do they do it, from a developer on up? How do you guys view this community aspect? I mean, I'm sure you'd agree with that, right?

Yeah, absolutely. Working with Intel can take on many different forms. We're very active in the open source community.
I mean, the Intel Nervana AI solutions are completely open source. So we're very happy to enable people in open source and help them develop their solutions on our hardware. But also, the open source is there to form that community and actually give us feedback on what to build. The next piece is kind of one click down: if you're actually trying to build an end-to-end solution like you're saying, you've got a camera. We're not building cameras, but these interfaces are pretty well defined. In general, what we'll do is select some partners that we think are high value-add, work with them very closely, and build stuff that our customers can rely on, right? Intel stands for quality, really. And so we're not going to put Intel branding on something unless it conforms to some really high standards. And that's, I think, a big part of the power here. Now, it doesn't mean we're not going to enable people that aren't our channel partners; they're just going to be enabled through more of a standard set of interfaces, software or hardware.

Naveen, in the final couple of minutes we have left, I want to ask you to kind of zoom out and look at the coolness of the industry right now. Given your background, where you got your PhD and your research topics, and now heading up AI Solutions, you probably see a lot of stuff. Go down the what's-cool-to-you list and share with the audience some of the things we should pay attention to, or even cool things we might not be aware of. What are some of the coolest things out there that you can share?

Well, as for sharing new things, we'll get to that in a second. One of my favorites is AlphaGo.
I know this is maybe a cliché, but as an engineering student in CS in the mid-90s studying artificial intelligence back then, or what we called artificial intelligence, Go was just off the table, right? That was less than 20 years ago. At the time, it looked like such an insurmountable problem, like the brain was doing something so special that we were just not going to figure it out in my lifetime. So to actually see it done is incredible, right? To me that represents a lot. So that's a big one.

Interesting things that you may not be aware of are other use cases of AI. We see it in farming, right? This is something we take for granted: we go to the grocery store, we pick up our food, and we're happy, right? But the reality is that's a whole economy in and of itself, and scaling it as our population scales is an extremely difficult thing to do. We're actually interacting with companies that are doing this at multiple levels. One is at the farming level itself: automating things, using AI to determine the state of different crops and actually taking action in the field automatically. That's huge, right? This is backbreaking work that humans don't necessarily want to do.

And it's important, too, because people are worried about the farming industry in general.

Absolutely. And what I love about that use case of applying AI to farming techniques is that by doing it we actually get more consistency and better yields. And you're doing it without any additional chemicals, no genetic engineering, nothing like that, right? You're just applying the principles we already know, better. And so that's where we see a lot of wonderful things happening: it's a solved problem, just not at scale. How do I scale this problem up? In many instances I can't, like I talked about with the legal documents and trying to come up with a summary. You just can't scale it today. But with these techniques, we can.
And so that's what I think is extremely exciting: any interaction where we start to see scale at that kind of level.

And new stuff?

New stuff. Well, some of it I can't necessarily talk about, but in the robot space there's a lot happening. I'm seeing a lot in the startup world right now. We have a convergence: the mechanical part is becoming cheaper and easier to build with 3D printing, the maker revolution, all these kinds of things happening, which our CEO is really big on. That, combined with these techniques becoming mature, is going to produce some really cool stuff. I mean, we're going to start seeing the Jetsons kind of thing, right?

It's kind of neat to think about, really. I don't want to clean my room. Like, hey robot, go clean my room, right? I love that.

I love that too, right? Make me dinner. Make me a gourmet dinner. That'd be really awesome, right? So we're actually getting to a point where there's a line of sight. We're not there yet, but I can see it in the next 10 years.

Yeah, so the fog is lifting. All right, final question, just on more of a personal note. Obviously you have a neuroscience background, and you mentioned AlphaGo is cool. But the humanization factors are coming in. We mentioned ethics before we came out; we don't have time to talk about the ethics role. But as societal changes happen with these new impacts of technology, there's real impact, whether it's solving diseases, in farming, or finding missing children. There's some serious stuff really being done. But the human aspect of converging with algorithms and software at scale, your thoughts on that? How do you see it, and how would you talk to it? A lot of people are trying to put this in a framework to advance sociological thinking, to bring sociology into computer science in a way that's relevant. What are some of your thoughts there? Can you share any color commentary?
Well, I think it's a very difficult thing to comment on, especially because there are these emergent dynamics. But I think what we'll see is that, just as social networks have in some ways interfered with and in some ways actually helped our interactions with each other, we're going to see that more and more, right? I mean, we can have AIs that are kind of filtering interactions for us. A positive of that is that we can actually understand more about what's going on in our world, and we're more tightly interconnected. You can sort of think of it as higher-bandwidth communication between all of us, right? When we were in hunter-gatherer societies, we could only talk to so many people in a day; now we can do more, and so we can gather more information. The bad things are, maybe, that things become more impersonal, or that people have to start doing weird things to stand out in other people's view. Like, you know, there are all these weird interactions.

It's kind of like Twitter.

A little bit like Twitter, right? I mean, you have to say ridiculous things sometimes to get noticed, right? So we're going to continue to see that. We're already starting to see it, as you pointed out. And I think that's really where the social dynamic happens: in how it impacts our day-to-day communication.

Dr. Naveen Rao, great conversation here inside the Intel AI Lounge. These are the kind of conversations that are going to be on more and more kitchen tables across the world. I'm John Furrier with theCUBE. We'll be right back with more after this short break.

Thanks, John.