From Cambridge, Massachusetts, it's the Cube, covering the MIT Chief Data Officer and Information Quality Symposium 2019, brought to you by SiliconANGLE Media. Welcome back to MIT and Cambridge, Massachusetts, everybody. You're watching the Cube, the leader in live tech coverage. This is MIT CDOIQ, the Chief Data Officer and Information Quality Conference. I'm Dave Vellante with my co-host Paul Gillin. Professor Stuart Madnick is here, longtime Cube alum, longtime professor at MIT, soon to be retired, but we're really grateful that you've taken the time to come on the Cube. It's great to see you again. Well, it's great to see you again. It's been a long time since we've worked together, and I really appreciate the opportunity to share our experience here at MIT with your audience. Well, it's really been fun to watch this conference evolve. We're full, and it's really amazing; we have to move to a new venue next year, I understand. And we talk about the data explosion all the time, but one of the areas you're focused on and are going to talk about today is ethics and privacy, and data causes so many concerns in those two areas. So give us a highlight of what you're going to discuss with the audience today, and we'll get into it. One of the things that makes it so challenging is that data has so many implications to it, and that's why the issue of ethics is so hard to get people to reach agreement on. We talk to people regarding medicine, and with the idea of big data and AI, to be able to really identify causes, you need massive amounts of data, but that means more data has to be made available. As long as it's everybody else's data, not mine. Well, not in my backyard, if you will. So you have this issue where, on the one hand, people are concerned about sharing their data; on the other hand, there are so many valuable things we gain by sharing data, and getting people to reach agreement is a challenge.
Well, one of the things I wanted to explore with you is how things have changed. Now, back in the day, you, and Paul as well, were very familiar with the Department of Justice and FTC issues regarding Microsoft, and it wasn't so much around data; it was really around browsers and bundling. But today you see Facebook, Google, and Amazon coming under fire, and it's largely data related. Liz Warren, last night again: break up big tech. Your thoughts on similarities and differences between the monopolies of yesterday and the data monopolies of today? Should they be broken up? What are your thoughts on that? Well, let me broaden the issue a little bit, if you will. I don't know the demographics of your audience, but I often refer to the characteristics of millennials. Take millennials in general: I ask my students this question, how many of you have a Facebook account? And almost everyone in the class has one. So you realize you've given away a lot of information about yourself. It doesn't really occur to them that that may be an issue. I was told by someone that in some countries where Facebook is very popular, that's how they coordinate kidnappings of teenagers from rich families. They track them; they know they're going to go to this basketball game or that soccer match, and they know exactly where they're going afterward. That's the perfect spot to kidnap them. So I don't know whether the students think about the fact that when they're putting things on Facebook, they're putting so much of their life at risk. On the other hand, it makes their life richer and more enjoyable. And so that's why these things are so challenging. Now, getting back to the issue of the breakup of the big tech companies: one of the big challenges there is that in order to do the great things that big data has been doing and the things that AI promises to do, you need lots of data.
Having organizations that can gather it all together in a relatively systematic and consistent manner is so valuable. Breaking up the tech companies, and there are some reasons why people want to do that, also interferes with that benefit. And that's why I think it's got to be looked at really carefully, to see not only what gain may be made by breaking them up, but also what losses or disadvantages we're creating for ourselves. So an example might be that it makes the United States less competitive vis-a-vis China in the area of machine intelligence. That's one example. The flip side of that is that Facebook has every incentive to appropriate our data to sell ads. So it's not an easy equation. Well, even ads are a funny situation. For some people, having a product called to your attention that you actually really want but never knew of before could be viewed as a feature. So in some cases the ads could be viewed as a feature by some people and, of course, a bit of an intrusion by other people. Well, sometimes when we search Google, right, Paul, we're looking for the ad on the side. No longer, it's all ads. Why? Go ahead. I wonder if you see public sentiment changing in this respect. There's a lot of concern, certainly at the legislative level now, about misuse of data, but Facebook usership is not going down. Instagram membership is not going down. The indication is that ordinary citizens don't really care. That's been my impression. I don't have all the data, maybe you've seen some, but just anecdotally, in talking to people and the work we're doing, I agree with you. I think most people, and it may be a bit dramatic, but I was at a conference once and someone made a comment that there has not been the digital Pearl Harbor yet. No, there's not been some event that was so onerous, so overwhelming, that people remember the day it happened, kind of thing.
And so these things happen, there may be a little bit of press coverage, and you're back on your Facebook account or Instagram account the next day. Nothing is really dramatic. I mean, individuals may change now and then, but I don't see massive changes. But you had the Equifax hack two years ago, 145 million records; Capital One just this week, 100 million records. I mean, that seems pretty Pearl Harbor-ish to me. Well, it's funny. We were talking about that earlier today regarding different parts of the world. I think in Europe, in general, they really seem to care about privacy. In the United States, they kind of care about privacy. In China, they know they have no privacy. But even in the US, where they care about privacy, exactly how much they care about it is really an issue. And in general, it's not enough to move the needle. If it does, it moves it a little bit. How about the time when they showed that smart TVs can be broken into? Smart TV sales did not budge an inch. Not much. How many people even remember that big scandal a year ago? Well, now, to your point about Equifax, just this week Equifax came out with a website where you could check whether or not your credentials were compromised. It's a new product. And if they were. And mine has been. As had mine, as had my wife's, as has Stu's. So you had a choice: free monitoring or $125. So, I mean, and then we went, okay, now what? Life goes on. It doesn't seem like anything really changes. And we were talking earlier about your 1972 book about cybersecurity, and how many of the principles that you outlined in that book are still valid today. Why are we not making more progress against cyber criminals? Well, two things. One thing is, you've got to realize, as I said before, the caveman had no privacy problems and no break-in problems. But I'm not sure any of us want to go back to the caveman era, because you've got to realize that for all these bad things, there are so many good things that are happening.
Things you can now do with your smartphone you couldn't even visualize doing a decade or two ago. So there's so much excitement, so much forward momentum, autonomous cars and so on, that these minor bumps in the road are easy to ignore in the enthusiasm and excitement. Well, and now as we head into the 2020 election, you know, it was fake news in 2016. Now we've got deep fakes. We've got the ability to really use video in new ways. Do you see a way out of that problem? I mean, a lot of people look at blockchain. You wrote an article recently on blockchain: you think it's unhackable? Well, think again. What are you seeing? Well, I think one of the things we always talk about when we talk about improving privacy and security in organizations, the first thing is awareness. Most people are only aware for a small amount of time that there's an issue, and it quickly passes from mind. The analogy I use is industrial safety. You go into almost any factory and you'll see a sign over the door that says, 520 days since last industrial accident, and then a subline, please do not be the one to reset it to zero. And I often say, when's the last time you went to a data center and saw a sign that said, 50 milliseconds since last cyber attack or data breach? And so it needs to be something that is really front of mind for people. And we talk about how to make awareness activities, both in companies and in households. And that's one of our major movements here, to try to make people more aware. Because if you're not aware that you're putting things at risk, you're not going to do anything about it. Last year at SiliconANGLE we contacted 22 leading security experts and asked them a simple question: are we winning or losing the war against cyber criminals? Unanimously they said we're losing. What is your opinion of that question? I have a great quote I like to use. The good news is the good guys are getting better.
Better firewalls, better cryptographic codes. But the bad guys are getting better faster. And there are a lot of reasons for that. I won't dwell on all of them, but we came out with an article talking about the dark web. And the reason why it's fascinating is, if you go to most companies that have suffered a data breach or a cyber attack, they'll be very reluctant to say much about it unless they're really compelled to do so. On the dark web, they love to brag; it's their reputation: I'm the one who broke into Capital One. And so there's much more information sharing; they're much more organized, much more disciplined. I mean, the criminal ecosystem is so much superior to the chaotic mess we have here on the good guys' side of the table. Do you see any hope for that? There are services, IBM has one and there are others, that sort of anonymize security data, enabling organizations to share sensitive information without risk to their company. Do you see any hope on the collaboration front? Well, as I said before, the good guys are getting better. The trouble is, at first I thought the issue was that there wasn't enough sharing going on. It turns out we identified over 120 sharing organizations. That's the good news and the bad news. There are 120, so IBM has one and there are another 119 more to go. So it's not very well coordinated sharing that's going on; that's just one example of the challenges. Do I see any hope in the future? Well, in the more distant future, because the challenge we have is that there'll be a cyber attack next week of some form or shape that we've never seen before, and therefore we're probably not well prepared for it. At some point I'll no longer be able to say that. But I think the cyber attackers and breaches and so on are so creative, they've got another decade or more to go before they run out of steam. Well, we've gone from hacktivists to organized crime, now nation states, and you start thinking about the future of war.
I was talking to Robert Gates about this, the former defense secretary, and my question was, well, don't we have the best cyber? Can't we go on the offense? He goes, yeah, but we also have the most to lose. Our critical infrastructure, and the value of that to our society, is much greater than some of our adversaries'. So we have to be very careful. It's kind of mind-boggling to think about. Autonomous vehicles is another one. I know that you have some visibility on that. And you were saying, help me if I get this right, that the technical challenges of actually achieving quality autonomous vehicles are so daunting that security is getting pushed to the back burner. And the irony is, I had this conversation when I was a visiting professor at the University of Nice about 12, 14 years ago, and that was before autonomous vehicles were even on the radar. But they were doing what they called automotive telematics. And I realized at that time that security wasn't really a top priority. I happened to visit an organization doing real autonomous vehicles now, 14 years later, and the conversation was almost identical. Now, the problems they're trying to solve are harder than the problems they had 14 years ago. Much more challenging problems. And as a result, those problems dominate their mindset, and the security issues are kind of, we'll get around to them. If we can't get the car to drive correctly, why worry about security? Well, what about the ethics of autonomous vehicles? We were talking about that, yeah. You're programming it: if you're going to hit a baby or a woman, or kill your passengers or yourself, what do you tell the machine to do? It seems like an unsolvable problem. Well, I'm an engineer by training, and possibly many people in the audience are too. I'm the kind of person who likes nice, clear, clean answers. Two plus two is four, not 3.9, not 4.1. The school up the street, they deal with that. The trouble with ethics issues is they don't tend to have a nice, clean answer.
Almost every study we've done that has these kinds of issues in it, where we have people vote, the vote is almost always spread across the board, because every one of the options is a bad decision. So which bad decision is least bad? What's an example that you use in your class? Well, the example I use in my class, and we've been using it now for well over a year in a class I teach on ethics, is: you are the designer of an autonomous vehicle, so you must program it to do everything. And the particular case you have is, you're in the vehicle, and it's driving around a mountain in the Swiss Alps. You go around a corner, and the vehicle, using all its sensors, realizes that straight ahead in the right-hand lane is a woman pushing a baby carriage, and on the left, just entering the crosswalk, are three gentlemen. Both sides of the road have concrete barriers. So you can stay on your path and hit the woman with the baby carriage; veer to the left and hit the three men; or take a sharp right or a sharp left, hit the concrete wall, and kill yourself. And the trouble is, every one of those is unappealing. Imagine the headline: car kills woman and baby. That's not a very good thing. There actually is a theory of ethics called utility theory, which says it's better to save three people than one. So therefore you don't want to hit the three men; that's the worst. And then the idea of hitting the concrete wall may feel magnanimous: well, I'm just killing myself. But then, as the designer of the car, shouldn't your number one duty be to protect the owner of the car? Yeah, yeah. And so what people basically do is close their eyes and flip a coin, because they don't want any one of those options. It's not an algorithmic response; it doesn't lend itself to one. I want to come back, before we close here, to the subject of this conference. Exactly. You've been involved with this conference since the very beginning. How have you seen the conversation change since that time? Well, I think it's changed in two ways.
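The utility-theory framing described in the classroom example can be sketched as a toy cost-minimization. To be clear, the option names and casualty counts below are hypothetical illustrations of the dilemma as discussed, not how any real autonomous vehicle is programmed:

```python
# Toy sketch of utility theory applied to the Swiss Alps dilemma above.
# All option names and casualty counts are hypothetical illustrations only.

def least_bad(options: dict) -> str:
    """Return the option with the fewest expected casualties."""
    return min(options, key=options.get)

# The three maneuvers from the classroom scenario.
options = {
    "stay_course_hit_carriage": 2,   # the woman and the baby
    "veer_left_hit_pedestrians": 3,  # the three men
    "hit_barrier_kill_occupant": 1,  # the car's owner
}

print(least_bad(options))  # prints "hit_barrier_kill_occupant"
```

Pure casualty-minimization picks the barrier, which is exactly the tension the discussion raises: the "least bad" utilitarian answer tells the car to sacrifice its own owner, and no choice of weights resolves the dilemma cleanly.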
First, as you know, this is a record-breaking group of people we're expecting here. Close to 500, I think, have registered, so it's clearly grown over the years. But also the extent to which, whether it was called big data or is called AI now, this is something that was not quite on the radar when we started the conference series, I think over 15 years ago. So clearly it's become something that is not just something we talk about in the academic world; it's becoming mainstream business for corporations more and more. And I think it's just going to keep increasing. I think so much of our society, so much of business, is so dependent on the data in every way, shape, or form that we use it and have it. Well, it's come full circle. As Paul and I were talking about at our open, this conference kind of emerged from the ashes of the back office, information quality, and then, like you say, big data and now AI. And guess what? It's all coming back to information quality. Exactly. Lots of data that's no good, or that you don't understand what to do with, is not very helpful. Well, Dr. Madnick, thank you so much. Oh, it's a pleasure. I've really loved being here all these years, and I really want to thank you for that. And I want to thank you guys for joining us and helping to spread the word. Thank you. It's been our pleasure. Everybody, Paul and I will be back at MIT CDOIQ right after this short break. You're watching the Cube.