From Cambridge, Massachusetts, it's theCUBE, covering the MIT Chief Data Officer and Information Quality Symposium 2019, brought to you by SiliconANGLE Media.

Welcome back to MIT, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with Paul Gillin. My co-host, Tom Davenport, is here. He's the President's Distinguished Professor at Babson College and a CUBE alum. Good to see you again. Tom, thanks for coming on.

Glad to be here.

So this is, let's see, the 13th annual MIT CDOIQ. Thirteen's been lucky.

Yeah, it sure has this year. Our seventh, I think, so maybe we'll offset.

So, you gave a talk earlier: should we be afraid of the machines, or should we embrace them?

I think we should embrace them, because so far they are not capable of replacing us. When we hit the singularity, which I'm not sure will ever happen, and which certainly isn't going to happen anytime soon, we'll have a different answer. For now they're good at small, narrow tasks, and not so good at a lot of the things that we do. So I think we're fine. Although, as I said in my talk, I have some survey data suggesting that at large US corporations, a substantial number of senior executives, more than half, say they would like to automate as many jobs as possible. That's a little scary, but fortunately for us humans, I think it's going to be a while before they succeed.

Well, we had a case last year where McDonald's employees were agitating for an increase in the minimum wage, and management used the threat of roboticizing the hamburger-making process, which can be done, to get them to back down. Do you think we're going to see more of that, where AI is used as a threat?

Well, I haven't heard too many other examples. I think for those highly structured, relatively low-level tasks it's quite possible, particularly if we do end up raising the minimum wage beyond the point where it's economical to pay humans to do the work. But I would like to think that if we gave humans the opportunity, they could do more than they're doing now in many cases. And one of the things I was saying is that, with some exceptions, most companies are not yet retraining their workers. Amazon recently announced they're going to spend $700 million to retrain their workers to do things that AI and robots can't, but that level of commitment is very rare. So I think it's time for companies to start stepping up and asking, how can we develop a better combination of humans and machines?

Well, the work by Brynjolfsson and McAfee, which is a little dated now, definitely suggests there are some things to be concerned about. Of course, their prescription was ultimately an optimist's one: education and so forth. But the key point there is that machines have always replaced humans, and now it's happening with cognitive functions, and you see it everywhere. You drive to the airport and it's electronic billboards now, not some person putting them up; the kiosks, et cetera. But you've used the term "paving the cow path": we don't want to protect the past from the future. So to your point, retraining and education are the opportunity here, aren't they? And the potential is enormous.

Well, and let's face it, we haven't had much in the way of productivity improvement in the US or any other advanced economy lately.
So we need some, I guess, replacement of humans by machines. But my argument has always been that you can handle innovation better, and you can avoid the sort of race to the bottom that automation sometimes leads to, if you think creatively about humans and machines working as colleagues in many cases.

Well, you remember in the PC boom, someone, I forget who, I thought it was the Fed chairman, maybe Greenspan, said you can see progress everywhere except in the productivity numbers.

Oh yeah, that was actually an MIT professor, Robert Solow, who won the Nobel Prize.

But then shortly thereafter there was a huge productivity boom. So is there maybe something pent up?

Well, God knows. Everybody's wondering. We've been spending literally trillions on IT, and you would think it would have led to productivity gains, but certain things, like social media, I think reduce productivity in the workplace. We're all chatting and talking and Slacking all over the place. Maybe that's just not conducive to getting work done.

It depends what you're doing with that social media, right, Paul? If you're in our business, it's actually a good thing.

It's phenomenal to see political coverage these days, which almost entirely consists of reprinting politicians' tweets.

Exactly. I guess it's made life easier for them, although those poor reporters sitting in the White House waiting for a press conference are not doing very well.

No, well, there aren't many reporters left, of course. In your consulting work and your academic work, where do you see AI being used most effectively in organizations right now? And where do you think that will be three years from now?

Well, the general category of use case is what I sometimes call boring AI. It's data integration, one thing that's being discussed a lot at this conference. It's connecting your invoices to your contracts to see whether we actually got the stuff we contracted for. It's doing a somewhat better job of identifying fraud, and doing it faster. All of those things are quite feasible; they're just not that exciting. What we're not seeing is curing cancer or fully autonomous vehicles, the really aggressive moonshots we've been trying at for a while and just haven't succeeded at.
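[Editor's note: to make "boring AI" concrete, here is a minimal sketch of the invoice-to-contract reconciliation described above. The field names, sample records, and threshold are hypothetical, and the fuzzy matching uses only Python's standard library rather than any particular vendor's tooling.]

```python
# A minimal, hypothetical sketch of "boring AI": matching invoice lines to
# contract lines and flagging discrepancies for human review.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two free-text item descriptions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reconcile(invoice_lines, contract_lines, threshold=0.6):
    """Pair each invoice line with its best-matching contract line and
    return the exceptions a human should review."""
    exceptions = []
    for inv in invoice_lines:
        best = max(contract_lines, key=lambda c: similarity(inv["desc"], c["desc"]))
        if similarity(inv["desc"], best["desc"]) < threshold:
            exceptions.append((inv, None, "no matching contract line"))
        elif inv["qty"] > best["qty"] or inv["unit_price"] > best["unit_price"]:
            exceptions.append((inv, best, "billed above contracted terms"))
    return exceptions

# Invented sample data: the invoice bills more units, at a higher price,
# than the contract allows, so it gets flagged.
invoices = [{"desc": "USB-C docking station", "qty": 12, "unit_price": 189.0}]
contracts = [{"desc": "Docking station, USB-C", "qty": 10, "unit_price": 175.0}]
for inv, matched, reason in reconcile(invoices, contracts):
    print(f"{reason}: {inv['desc']}")
```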
What if we use AI as the rubric for all this new, cool stuff that's coming out? So considering all these new technologies, AI, blockchain, new security approaches, when do you think machines will be able to make better diagnoses than doctors?

Well, in a very narrow sense, in some cases they can do it now. But take a radiologist, one of the doctors I think is most at risk from this, because they don't typically meet with patients and they spend a lot of time looking at images. It turns out that the lab experiments saying AI is better than human radiologists tend to be very narrow, and what one lab does is different from what another lab does. So it's going to take a very long time for this to make it into production deployment in the physician's office, and we'll probably have to have some regulatory approval. The lab research is great; getting it into day-to-day reality is the problem.

Okay, so staying in this context of digital as a sort of umbrella topic, do you think large retail stores will largely disappear?

In some sectors more than others. For things that you don't need to touch and feel before you order them, certainly; even that is obviously happening more and more in online commerce. What people are saying will disappear next is the human at the point of sale. We've been talking about that for a while in grocery, though not much has been achieved yet in the US. Amazon Go is a really interesting experiment; every time I go in there, I try to shoplift, but it seems to be able to prevent me. It took a while, and now they have 12 stores, so it's not huge yet. But I think if you're in one of those jobs where a substantial chunk of the work is automatable, then you really want to start looking around and thinking, what else can I do to add value to these machines?

Do you think traditional banks will lose control of the payment system?

No, I don't, because the fintechs you've seen thus far keep getting bought by traditional banks. So my guess is that people will want that certainty. And the funny thing about blockchain: we say in principle it's more secure because it's spread across a lot of different ledgers, but people keep hacking into Bitcoin, so it makes you wonder. I think blockchain is going to take longer than we thought as well. In my latest book, which is called The AI Advantage, I started out talking about Amara's Law, from Roy Amara, a futurist. It's not nearly as well known as Moore's Law, but it says that for every new technology, we tend to overestimate its impact in the short run and underestimate its impact in the long run. So I think AI will end up doing great things. We may have sort of tuned it out by the time it actually happens: oh yeah, we finally have autonomous vehicles, we've only been talking about them for 50 years.

All right, last one. One of the Democratic candidates, one of the 75 Democratic candidates, last night mentioned a Chief Manufacturing Officer. Do you see automation actually swinging the pendulum and bringing manufacturing back to the US?

I think it could, if we were really aggressive about using digital technologies in manufacturing: 3D printing, digital twins of every device, and so on. But we are not being as aggressive as we ought to be, and manufacturing companies have been kind of slow, and I think somewhat delinquent, in embracing these things. So I think they're going to lose the ability to compete. We'd have to really go at it in a big way to bring it all back.

We've got an election coming up. There was a lot of concern following the last election about the potential of AI: Twitter chatbots, deep fakes, technologies that obscure or alter reality. Are you worried about what's coming in the next year?

In that capacity? No, that could never happen, Paul. We could never see anything like that.

Deep fakes I'm quite worried about. I know there are some organizations working on how we would certify an image as being real, but we're not there yet. My guess is that, certainly by the time the election happens, we're going to have all sorts of political candidates saying things they never really said, through deep fakes and image manipulation.

Scary. What do you think about the call to break up big tech? What's your position on that?

I think it's a self-inflicted wound. We just saw, for example, that the automobile manufacturers decided to get together; even though the federal government isn't asking for better mileage, they said, we'll do it, we'll work with California and the states that are more advanced. If big tech had said, we're going to work together to develop standards for ethical behavior and privacy and data and so on, they could have prevented some of this. Unless they change their attitude really quickly, and I've seen some of it, Salesforce people are talking about the need for data protection standards, I think they're going to get legislation imposed on them and maybe get broken up. It's going to take a while, and it depends on the next administration, but they're not being smart about it.

I'm sure you see a lot of demos of advanced AI technology. Over the last year, what has really impressed you?

You know, I think the biggest advances have clearly been in image recognition. The big problem with image recognition is that you need a lot of labeled data. One of the reasons Google was able to identify cat photos on the internet is that we had a lot of labeled cat images in the open source ImageNet database. But the ability to start generating images, to create synthetic labeled data, I think could really make a big difference in how rapidly image recognition improves.

What do you mean, synthetic?

We wouldn't have to have somebody going around taking pictures of cats. We'd create a bunch of different cat photos, label them as cat photos, and build variations into them. Unless we have a lot of variation in images, recognition breaks down; that's one of the reasons we can't use autonomous vehicles yet, because images differ in the rain and the snow and so on. We're going to have to have synthetic snow and synthetic rain in those images, so the GPU chip still realizes that's a pedestrian walking across there, even though the image is fuzzed up. Right now, just a little bit of variation in an image can throw off the recognition altogether.
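[Editor's note: the synthetic labeled data Davenport describes is essentially data augmentation. Below is a toy, hypothetical sketch in Python with NumPy; the "snow" and "rain" effects here are crude noise overlays for illustration, whereas production pipelines use far more realistic rendering or generative models.]

```python
# A toy sketch of synthetic labeled data: one real labeled image fans out
# into several synthetic variants, all of which inherit the original label.
import numpy as np

rng = np.random.default_rng(0)

def add_snow(img: np.ndarray, density: float = 0.02) -> np.ndarray:
    """Whiten a random fraction of pixels to mimic snow speckle."""
    out = img.copy()
    out[rng.random(img.shape[:2]) < density] = 255
    return out

def add_rain(img: np.ndarray, streaks: int = 40) -> np.ndarray:
    """Draw short vertical light-gray streaks to mimic rain."""
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(streaks):
        x = int(rng.integers(0, w))
        y = int(rng.integers(0, max(1, h - 12)))
        out[y:y + 12, x] = 200
    return out

# A random array stands in for a real photo; the "pedestrian" label is
# hypothetical. Each augmented copy keeps the label it started with.
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
label = "pedestrian"
synthetic = [(add_snow(image), label), (add_rain(image), label)]
print(len(synthetic), "synthetic variants, all labeled", label)
```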
All right, Tom, hey, thanks so much for coming on theCUBE. It was great to see you. We've got to go.

My pleasure. Thanks for having me.

Always good to catch up with you; you're welcome. All right, keep it right there, everybody. We'll be back from MIT CDOIQ in Cambridge, Massachusetts. Dave Vellante with Paul Gillin. You're watching theCUBE.