Live from Austin, Texas, it's theCUBE. Covering South by Southwest 2017. Brought to you by Intel. Now, here's John Furrier.

Okay, we're back live here at the South by Southwest Intel AI Lounge. This is theCUBE's exclusive coverage of South by Southwest with Intel, hashtag IntelAI. We're here with amazing guests from Intel. Our next guest is Dr. Dawn Nafus, who's with Intel, and you are a senior research scientist. Welcome to theCUBE.

Thank you.

So you've got a panel coming up, but you also have a book, and you're looking at the democratization of AI. And we had a quote yesterday that AI is the bulldozer for data. What bulldozers were in the real world, AI will be that bulldozer for data, surfacing new experiences. This is the subject of your book. What's your take on this, and what's your premise?

Right, well, the book actually takes a step way back. It's actually called Self-Tracking. The panel is AI for Everyone, but the book is on self-tracking. And it's really about actually getting some meaning out of data before we start talking about bulldozers. So right now we've got this situation where there's a lot of talk about how AI is going to sort of solve all our problems in health, and there's a lot that can actually get accomplished. But the fact of the matter is that people are still struggling with the basics, like, what does my Fitbit data actually mean? Right, so there's a real big gap. And I think part of what the industry has to do is not just sort of build new great technologies, which we've got to do, but also start to fill that gap with sort of data education, data literacy, all that sort of stuff.

So we're kind of in this first generation of AI data. You mentioned wearables, Fitbits. So people are now getting used to this. This sounds like this integration into lifestyle becomes kind of a dynamic.

Yeah.

How are people grappling with this? What does your research say about that?
Well, right now with wearables, frankly, we're in the classic trough of disillusionment, right? For those of you listening, I don't know if you have sort of wearables in drawers right now, but a lot of people do. And it turns out that folks tend to use them maybe about three or four weeks, and either they've learned something really interesting and helpful or they haven't. And so there are actually a lot of people who do really interesting stuff to kind of combine it with symptom tracking, location, other sorts of things, to actually really reveal the sorts of triggers for medical issues that you can't find in a clinical setting, right? It's all about being out in the real world and figuring out what's going on with you. So then when we start to think about adding more complexity into that, which is the thing that AI is good at, we've got this problem that there are only so many data sets that AI is actually any good at handling, right? And so I think there's going to have to be a moment where people themselves actually start to say, okay, you know what, this is how I define my problem, right? This is what I'm going to choose to keep track of. And some of that's going to be on a sensor and some of it isn't. And sort of really intervening a little bit more strongly in what this stuff's actually doing.

You mentioned the Fitbit, and we're seeing a lot of disruption in these areas, innovation and disruption, same thing. Good and bad potentially, but obviously autonomous vehicles is pretty clear. Everyone knows about Tesla's traction, and it's a hot trend. But you mentioned Fitbit, that's a healthcare kind of thing. AI might seem to be a perfect fit into healthcare, because of all these alarms going off and all this data flying around. Is healthcare a low-hanging fruit for AI?

I don't know if there's any such thing as low-hanging fruit in this space. But certainly, if you're talking about actual human benefit, right?
That absolutely comes to the top of the list, right? And we can see that in both formal healthcare and clinical settings, and in sort of imaging for diagnosis. Again, I think there are areas to be cautious about, right? Making sure that there's also an appropriate human check, and that there are mechanisms for transparency, so that when there is a discrepancy between what the doctor believes and what the machine says, you can actually go back and figure out what's actually going on. The other thing I'm particularly excited about, and this is why I'm so interested in democratization, is that health is not just about what goes on in clinical care, right? There are right now environmental health groups who are looking at a slew of air quality data that they don't know what to do with, right? And a certain amount of machine assistance to sort of figure out, you know, signatures of point-source polluters, for example, is a really great use of AI. It's not going to make anybody much money anytime soon, but that's the kind of society that we want to live in, right?

There's a social good angle for sure, but I'd like to get your thoughts, because you mentioned democratization, and it's kind of nuanced depending upon what you're looking at. Democratization with news and media is what you saw with social media. Now you've got healthcare. So how do you define democratization in the context that you're excited about? Is it more freedom of information and data? Is it getting around gatekeepers and siloed stacks? I mean, how do you look at democratization?

All of the above. I'd say there are two real elements to that. The first is making sure that, you know, people who are going to use this for more than just business have the ability to actually do it, right?
And have access to the right sorts of infrastructure to really do it, whether it's the environmental health case, or there are actually artists now who use natural language processing to create artwork. And people ask them, why aren't you using deep learning? It's like, well, there's a real access issue, frankly. It's also on the side of, if you're not the person who's going to be directly using data, democratization to me means being able to ask questions of how this stuff's actually behaving, right? So that means building in mechanisms for transparency, building in mechanisms to allow journalists to do the work that they do.

Sharing, potentially.

I'm sorry?

And sharing more data as well.

Right, absolutely. I mean, frankly, we still have a problem right now in the wearable space of people even getting access to their own data, right? There's a guy I work with named Hugo Campos who has an implanted cardiac defibrillator, and he's still fighting to get access to the very data that's coming out of his heart, right?

Is it on SSD? I mean, is it in the cloud? I mean, where is it?

It is in the cloud. It's going back to the manufacturer, and there are very robust conversations about where it should be.

That's super exciting. So this brings up the whole thing that we've been talking about since yesterday, and we've had many statements on theCUBE, that there are all these new societal use cases that are just springing up that we've never seen before. Self-driving cars with transportation, healthcare access to data, all these things. What are some of the things that you see emerging, tools or approaches that could help either scientists or practitioners or citizens deal with this new critical problem solving that needs to apply technology? Because, I mean, I was talking just last week at Stanford with folks that are looking at gender bias in algorithms. Something I never would have thought of.
That's an outlier, like, hey.

Oh no, it's not.

What?

No, but it's one of those things where, okay, let's put that on the table. There's all this new stuff coming on the table.

Yeah, yeah, absolutely.

What do you see? How do we solve that? What approaches?

Yeah, there are a couple of mechanisms, and I would encourage listeners and folks in the audience to have a look at a really great report that just came out from the Obama administration and the NYU School of Law. It was called AI Now, and they actually proposed a couple of pathways, right? To sort of making sure we get this right. So, a couple of things. One is, frankly, making sure that women and people of color are in the room when this stuff's getting built, right? That helps. And, as I said earlier, things will go awry. Like, they just will. We can't predict how these things are going to work. So catching it after the fact, and building in mechanisms to be able to do that, really matters. There was a great effort by ProPublica to look at a system that was predicting criminal recidivism, right? And what they did was they said, look, it is true that the thing has the same failure rate for both blacks and whites, but some hefty data journalism and data scraping and all the rest of it actually revealed that it was producing false positives for blacks and false negatives for whites. Meaning that black people were predicted to commit more crime than white people, right? So we can catch that, right? And when we build in more systems of people who have the skills to do it, then we can build stuff that we can live with.

This is exactly the point of democratization, I think, that fascinates me, that I get excited about. And it's almost intoxicating to think about it technically and also, you know, societally.
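The statistical point in the ProPublica example, that a model can have the same overall error rate for two groups while distributing false positives and false negatives very differently between them, can be sketched with a few lines of Python. The counts below are invented purely for illustration; they are not ProPublica's actual figures.

```python
# Hypothetical confusion-matrix counts for two groups scored by the same model.
# tp/fn/tn/fp = true positives, false negatives, true negatives, false positives.
groups = {
    "group_a": {"tp": 40, "fn": 10, "tn": 20, "fp": 30},
    "group_b": {"tp": 20, "fn": 30, "tn": 40, "fp": 10},
}

def rates(c):
    """Return (overall error rate, false positive rate, false negative rate)."""
    total = c["tp"] + c["fn"] + c["tn"] + c["fp"]
    error_rate = (c["fp"] + c["fn"]) / total   # all misclassifications
    fpr = c["fp"] / (c["fp"] + c["tn"])        # wrongly flagged as high risk
    fnr = c["fn"] / (c["fn"] + c["tp"])        # wrongly cleared as low risk
    return error_rate, fpr, fnr

for name, counts in groups.items():
    err, fpr, fnr = rates(counts)
    print(f"{name}: error={err:.2f} fpr={fpr:.2f} fnr={fnr:.2f}")
```

Both groups come out with a 0.40 overall error rate, yet group A's errors are mostly false positives (0.60 vs 0.20) while group B's are mostly false negatives, which is exactly the kind of disparity that a single aggregate accuracy number hides.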
There are all these new things that are emerging, and the community has to work together, because it's one of those things where there's no, well, there may be a board of governance out there. I mean, who is the board of governance when it's done? It really has to be community-driven. And NYU's got one, and there are examples of communities that are out there that people can participate in, or...

Yeah, absolutely. So, I think that, you know, certainly collaborating on projects that you actually care about, and sort of asking good questions about whether this is appropriate for AI or not, is a great place to start, or reaching out to people who have those technical skills. And the engineering professional association actually just came out a couple of months ago with a set of guidelines for developers, you know, the kinds of things you have to think about if you're going to build an ethical AI system. So they came out with some very high-level principles. Operationalizing those principles is going to be a real tough job, and we're all going to have to pitch in, and I'm certainly involved in that. But yeah, there are actually systems of governance that are cohering, but it's early days. It's a great way to get involved.

So I've got to ask you the personal question, in your efforts with the research and the book and all your travels: what are some of the most amazing things that you've seen with AI that are out there, that people may know about or may not know about, that they should know about?

Oh, gosh. I'm going to reserve judgment. I don't know yet. I think we're too early on the curve to be able to talk about sort of the magic of it. What I can say is that there is real power when ordinary people who have no coding skills whatsoever, and frankly don't even know what the heck machine learning is, get their heads around data that is collected about them personally, right?
That opens things up. You can teach five-year-olds statistical concepts that are learned in college with a wearable, because the data applies to them. So they know how it's been collected. It's personal.

Yeah, they know what it is already. You don't have to tell them what an outlier is, because they know, because they were that outlier. You know what I mean? They're immersed in the data.

Absolutely. And I think that's where the real social change is going to come from.

I love immersion. It's a great way to teach kids, but the data is key. So I've got to ask you, with the big pillars of change going on, and at Mobile World Congress I saw Intel in particular talking about autonomous vehicles heavily, smart cities, media and entertainment, and the smart home. I'm trying to get a peg, a comparable, of how big this shift will be vis-a-vis, I mean, the '60s revolution when chips started coming out, the PC revolution and server revolution, and now we're kind of in this new wave. How big is it? I mean, in order of magnitude, is it super huge? All of the other shifts combined? Are we going to see radical configuration changes?

You know, I'm an anthropologist, right? So everything changes and nothing changes at the same time, right? We're still going to wake up, we're still going to put on our shoes in the morning, right? We're still going to have a lot of the same values and social structures and all the rest of it that we've always had. So I don't think in terms of plonk, here's a bunch of technology, now that's a revolution, right? It's like a dialogue, right? And we are just at the very, very baby steps of having that dialogue. But when we do, right, the people in my field call it domestication. These things become tame, they become part of our lives, we shape them and they shape us, and that's not radical change, right? That's the change we always have.

That's evolution.
So I've got to ask you a question, because I have four kids, and I have this conversation with my wife and friends all the time, because we have kids, digital natives, that are growing up, and we also see a lot of workplace domestication, people kind of getting domesticated with the new technologies. What's your advice, whether it's for parents to their kids, or kids that are growing up in this world, whether it's education? How should people approach the technology that's coming at them so heavily, in an age of social media where all voices are equal right now and yet more filters are coming out? So it's pretty intense.

Yeah, yeah. I mean, I think it's an occasion where people have to think a lot more deliberately than they ever have about the sources of information that they want exposure to, right? The kinds of interactions and mechanisms that actually do and don't matter. Thinking very clearly about what's noise and what's not is a fine thing to do, right? So yeah, probably the filtering mechanisms have to get a bit stronger. I would say, too, there's a whole set of practices. There are ways that you can scrutinize new devices for where the data goes, right? And often kind of the higher-bar companies will give you access back, right? So if you can't get your data out again, I would start asking questions.

All right, final two questions for you. What's your experience been like so far at South by Southwest? And where is the world going to take you next in terms of your research and your focus?

Right, well, this is my second year at South by Southwest. It's hugely fun. I am so pleased to see just a rip-roaring crowd here at the Intel facility, which is just amazing. I think this is our first time as Intel proper. Having a really good time. The Self-Tracking book is on the bookshelf over in the convention center if you're interested. And what's next is we are going to get real about how to make these ethical principles actually work at an engineering level.
Yeah, computer science meets social science, happening right now, Intel powering amazing here at South by Southwest. I'm John Furrier, you're watching theCUBE. We've got a great set of people here on theCUBE. Also, a great AI Lounge experience, great demos, great technologists, all about AI for social change. I'm with Dr. Dawn Nafus of Intel. We'll be right back with more coverage after this short break.