Good evening, everyone. Welcome to the Berkman Klein Center event series. My name is Dan Jones. I'm with the Berkman Klein Center, and before we get started I just wanted to flag a couple of upcoming events. Next week is the wrap-up of our 2018-2019 event series. On Monday we're helping with the launch of David Kaye's book Speech Police; that will be at the Cambridge Public Library on Monday night. The next day we have Christo Wilson speaking about auditing for bias in resume search engines, right here at lunchtime on Tuesday. And that evening we'll be helping with the launch of Mary Gray's new book Ghost Work at the Harvard Book Store. You can find out more about all those events and RSVP on our website, cyber.harvard.edu, and we hope to see you at some of them. I'll mention one more housekeeping item: this event is being live-streamed and archived on video and audio, so be aware that your voice and visage will be retained for posterity on the internet, especially when we get to question time. And without further ado, I'll turn it over to David Weinberger and Joi Ito. Thank you.

David and I go way back to the early days of blogging, when most of the bloggers knew each other. We were coming up with trackbacks, and then corporate America figured it out and started trying to turn the blogosphere into a corporate marketing machine. David and a few of his pals got together and wrote this thing called The Cluetrain Manifesto, which was a bunch of the bloggers trying to explain to these marketing people that this thing was about conversations. It wasn't about marketing; it wasn't about any of the paradigms they had before. It had a huge impact on me, and I think on the blogosphere, and it was a very prescient document; even when I go back to it today, I wish that more people had adhered to it. And then he wrote another great book called Small Pieces Loosely Joined.
You will start to see that his book titles become really great images in themselves, but a lot of people just read the title and figure that's all they need to read. Small Pieces Loosely Joined is kind of how I imagine the web, but the book is really about how the web is kind of a weird place, and why we feel comfortable there. The next book, also a wonderful one, was Everything Is Miscellaneous. Again, a very great title; you can be reductionist and say, okay, I get what that means. But it was really about how, once the world gets knocked into a bazillion pieces, we think about how to order it. And then Too Big to Know, another book where you can say, oh, okay, I get what that book probably means, but it was really about how knowledge is moving from books to networks. Did you do this when you were at the library? Yeah, so this was really, I think, inspired by some of the work at the library. And I think Everyday Chaos is also inspired by some of the open library work that David did. I'm not going to give this one away, because he's going to talk about Everyday Chaos. Everyday Chaos sounds a little pedestrian, but it's far from pedestrian; it's quite deep, and maybe controversial to some. So I'll let David present some of his views, and then we'll have a conversation.

Thank you, Joi. I think I need to turn on my microphone. Thanks, Joi. That was actually a remarkably good, incredibly brief summation of a series of books, so thank you. First of all, I have a question. And first I have a statement: thank you very much for coming out. The question is, and nothing's going up, this is a sense of the room, that's all: how many of you, one way or another, work as machine learning researchers or are in some way in that field? A smattering.
When in doubt, you probably should raise your hand for that. And among the rest of you, how many feel like you have at least some sense of machine learning, you've been following some of what's going on? Raise your hands there. That's at least half. And I suspect that a bunch of you are not putting up your hands because you're afraid I'm going to cold call you and ask you to prove it. Which I was contemplating, to tell you the truth, but I won't. Okay. So this is the first time I've given this book talk. It means a lot to me that it's being done at the Berkman Klein Center, Berkman, excuse me, I'm showing my age. It's a fine center where I've been a fellow or something for about 15 years, and it feels very much like my intellectual home. I'm going to tell you about the book basically as quickly as I can, though it's going to be longer than you want, so that Joi and I can talk and then we all can talk. I want to begin with a warning. The warning is that I am actually going to say positive things about the Internet and positive things about AI. I understand that there are some negative things going on on the net, and that there are deep and important issues around bias and fairness and other really crucial things when it comes to AI. The book talks especially about the issues around AI. It tries to make sure that readers know these are real and pressing problems, but it's not about those issues. So what I'm going to talk about tonight is, I think, overall quite positive, even though it's unusual these days to be positive about AI, much less the Internet. There are wonderful books and research on those problems being done at the Berkman Klein Center, at the MIT Media Lab, which Joi heads, and other places. The book, rather, has a hypothesis, which is that the future has changed.
By which I do not mean that the things of the future are changing, because that's always the case. It's not about the hyperloop as of 1912. Rather, the book wants to explore the idea that our idea about how the future happens has changed. Not the content, but the way that the future works. There are seats if you want to sit down, or you can stand; it's really up to you. Really none of my business, when you come right down to it. So this is the premise of the book: how we think the future happens has changed. One way of thinking about our idea of the future, in an informal way, and by the way, the domain here is the West, is that we sometimes think about the future as this broad field of possibilities. It's our job, we think, to figure out which of those possibilities we want. And we know that as those possibilities come towards us, as the future comes towards us, the possibilities start falling away until there's only one left. And if you've done things right, then you win: you get the possibility that you wanted. But the basic motion of the future is a narrowing of possibilities. And so our basic strategy has been to anticipate those changes and to prepare for them, to try to move the future in our direction. And this obviously works very, very well. There's a tremendous price that we pay for the anticipate-and-prepare strategy, but we don't notice it, because we've been doing it literally since the Paleolithic, since the first time we chipped a flint ax in preparation for the next day, anticipating that we would need it. So we've been doing this a long time. The costs of this strategy are enormous. We end up overpreparing, mispreparing, underpreparing, and we take that for granted because that's just the way it is. We will continue doing this.
I'm not arguing that we're going to give up on anticipating the future, because if we did that, the next bus would hit you, because you wouldn't look both ways before crossing the street. Absolutely we're going to continue doing this. But we are also now doing other things that may be reshaping our fundamental idea of how the future works. So I want to look at two big segments, as the book does. One is things happening on the net that I believe have already conditioned us to accept that the future happens far more chaotically than we used to think. And second, AI has come along and, now that we've gotten used to this sort of chaos, is giving us a model by which we can start to understand it. There are still seats; I feel bad watching you stand, but you can if you want to, of course. Okay, so that's the basic structure of the book, and very much so of this talk. So let's start with the net. I'm going to try to go as fast as I can, because one way or another you are Berkman Klein-compatible people, and I'm certain that most of you are very well versed in what's going on on the net. The net is letting us succeed in a chaotic environment. For all the problems on the net, we still go on it, we still get stuff done. I think we would all acknowledge that it's transformed almost every aspect of our lives, mainly for the better, but in some ways horribly as well. We are basically succeeding with chaos on the internet. And so we end up in this chaotic, weird environment that is the internet, doing weird things that we don't even notice are weird anymore, because we've gotten used to them. But I think there's a thread connecting a whole bunch of them. So I'm going to point to two connected threads here and give you just one example, really briefly, of each. The first example is minimum viable products.
How many of you are familiar, I'm not going to call on you, with the concept of the MVP? About half, okay. So you are all probably using something that started out as an MVP. An MVP is a product, launched almost always on the net, that provides only the barest minimum of functionality that people will pay for. So if it's Dropbox: you can use your files anywhere. That's pretty important functionality, but that's basically all it started with. And then over time, you watch what users actually do with your product, what they complain about, what they're talking with one another about, what they want from it. You measure, when you can, how they're actually using it. That gives you a really solid sense of what they want and need, and you start building up the features. This is a very successful strategy. It is a very new strategy, and it's entirely antithetical to the most basic assumption we've had about product design, which is that you get to design a product and launch it once, and that's it, so you'd better get it right. The prototype of this is Henry Ford's Model T of 1908. They sold 15 million copies, well, we don't say copies, that's an online thing, 15 million of these Model Ts over the course of 19 years while making basically no changes, because Henry Ford anticipated users' needs so well that they didn't have to change anything. That is the model of design by anticipation. The minimum viable product is something thoroughly different. It purposely holds off from anticipating what people will want from the product and do with it. It holds possibilities open rather than trying to select the one right path. And it's not just MVPs; there's a whole range of things: agile development, and, from even before the rise of the internet, on-demand manufacturing.
Unconferences, which some of you are familiar with, where the agenda is not written before the conference; it's written by the people attending, when they get there. No anticipation of what people are going to want to talk about. Unconferences are awesome, by the way. So these are all examples. They're thoroughly weird, we take them for granted, and they're all holding off from anticipating and holding possibilities open. The second example, a slight change from that theme, is open platforms, or open APIs, application programming interfaces. A company has a product, and they're selling it the way they sell products, but these days they may well also open up a platform that enables any developer with access to the web to use some of the functionality built into that product, maybe even some of the data, we hope carefully anonymized and privacy-preserved, to build extensions to the product, to change the way it works so it suits their needs or the niche needs of users, to integrate it into other products, into their workflow. The company purposefully holds back. The company builds the product that it thinks is going to be successful and adds on to it over time. But it also opens up these platforms because it recognizes that it cannot anticipate all the uses to which the product might be put, all the niche uses which, even if it recognized them, it couldn't afford to build. Slack, the well-known messaging app, is just one example. It's a little noteworthy because Slack actually created an $80 million fund to actively encourage people to build applications that integrate Slack into other products so it becomes part of the workflow. Open platforms don't just hold possibilities open. They make more possibilities possible. You can do more things, and companies go to some expense to make that so. And it's not just open platforms.
It's also games: computer games have been doing this since the early 1980s. Open source, open access: these are all ways of enabling more possibilities to be made. And I want to call out one particular way of doing this, which is interoperability. Those two people, John Palfrey and Urs Gasser, Berkman Klein folks, wrote a pretty good book about it called Interop. Interoperability is when a product, or a part of a system designed for that system, is also capable of being used in other systems, in unanticipated ways. This makes every piece of what you build generative; that is, it can create new stuff, it can be used to put together new stuff. He also wrote a pretty good book that talks about generativity, The Future of the Internet and How to Stop It. Sorry, thank you, Joi: Jonathan Zittrain, who is also Berkman Klein-connected, I believe. Interoperability is especially important because not only is it generative, it is at the very heart and essence of the internet, which, if nothing else, is a way of making information interoperable: you can send it anywhere and do anything you want with it. The internet is an interoperable system. So all these things, far from anticipating and trying to narrow the possibilities, try to make more possibilities. The fact is that anticipation is not just paleolithic; it's also reductive. It's a way of reducing the possibilities. This guy, Joi, has written a really important essay, now in a book, called Resisting Reduction. This is the move away from anticipation as a form of narrowing. I think we have already moved, in many ways, to various forms of unanticipation: purposely holding back our expectations for the future and trying to open up more of it. In fact, one could say that for the past 25 years the globe has been involved in a worldwide attempt to make the world more and more unpredictable.
As if that were on purpose, and I think it absolutely was, and is, on purpose. That is exactly what we've been doing for 25 years: everything we can to make the world more unpredictable, which means we are in effect reversing the flow of the future. It's not time travel; rather than seeing the future as a narrowing, we now also frequently bend our efforts to open it up. One might say that the imperative, in companies, in other organizations, in our individual lives, in our social groups, has become to make more future. That's part one: the internet has changed our practice in ways that affect how we think the future works. Now AI is reframing it. This is the hypothesis of the book. When I say AI, I mean, as I think many people do these days, machine learning. I'm going to try to explain machine learning in a moment, but let me tell you one of the premises going forward: we understand ourselves through our technology, and we have done this historically. We only started feeling under pressure, like we needed to let off steam, during the steam age. And very rapidly in the 1950s, at the beginning of the information age, we suddenly experienced information overload. We could feel the information tingling through us and felt made dizzy by it. We worried about it. It was a thing. We get sort of stuck and we say, "I'm processing, I'm processing," as if your brain were a computer. We understand ourselves through our tools, especially the tools that we name an age after, like the age of information or the steam age. I don't make predictions, but I think I'm about to. It seems pretty clear to me that we are now at the beginning of the age of AI, or maybe the age of machine learning. I don't know what it's going to be called, but machine learning is an epochal technology.
And if we are now entering that age, if machine learning is going to be the dominant technology, and if it's the case that we understand ourselves through our dominant technology, then we can start to ask: what will the world look like, or what does it already begin to look like, if we start understanding ourselves and our world through the model of machine learning, just as we have with information and with steam and with clockworks, et cetera? That's actually what's behind much of this book. So I'm going to try to give you the world's fastest explanation of machine learning. Those of you who are in the field, please refrain from catcalling. I know I'm getting this wrong. I am not a computer scientist; I'm a writer. So I am certainly going to get this wrong. In traditional programming, we like to start with conceptual models. If you are working on a program for predicting sales for your business, you will first think about which factors affect sales, and then what the relationships among them are. Certainly the number of salespeople very likely affects sales: more salespeople, maybe more sales. But that also has an effect on costs; it's going to drive up the cost of customer support, say. And do you have enough leads? What's the relationship there? Maybe you need to increase marketing to get more leads, et cetera. So you have a set of the factors that affect sales and how they interact. That's a conceptual model. Then you write a working model in software that instantiates it. And if what I just described sounds like a spreadsheet, absolutely: that's what a spreadsheet is. It's a really easy way of programming a computer, and it's exactly what you do: here are the factors, and here are the relationships, the formulas, the equations that connect them. And obviously this works really, really well.
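To make that concrete, here is a minimal sketch of the spreadsheet-style approach as code. Every factor, number, and function name is hypothetical, invented purely for illustration; the point is that a human writes every relationship down by hand, which is exactly what a spreadsheet's formulas are.

```python
# A toy "conceptual model" of sales, in the spirit of a spreadsheet:
# we pick the factors and write the relationships ourselves.
# All names and numbers are hypothetical, for illustration only.

def projected_sales(num_salespeople, leads, marketing_spend):
    """Hand-written formula: the conceptual model made executable."""
    leads += marketing_spend * 0.01           # more marketing -> more leads
    deals = min(leads, num_salespeople * 20)  # each rep can close ~20 deals
    return deals * 500                        # assumed average deal: $500

def projected_costs(num_salespeople, marketing_spend):
    salaries = num_salespeople * 4_000
    support = num_salespeople * 300           # support cost scales with staff
    return salaries + support + marketing_spend

revenue = projected_sales(num_salespeople=10, leads=150, marketing_spend=5_000)
costs = projected_costs(num_salespeople=10, marketing_spend=5_000)
print(revenue, costs, revenue - costs)  # → 100000 48000 52000
```

Because every relationship is explicit, the model is fully explainable: you can point at the line that says marketing buys leads and ask whether it is right.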
Machine learning doesn't do that. Machine learning takes the data but largely throws out the set of relationships among the pieces that we humans think are there. I should say: we like it when our conceptual model and our working model are the same. In our history we don't always achieve that, but when we do, we really, really like it, for good reasons, I think. Machine learning is a case where we just sort of throw out the conceptual model. "Throw out" is too strong, but basically we throw out the conceptual model: we take the data, we take the categories it comes in, we put it into these buckets, and then we let the machine do its statistical magic, iterating and iterating, finding statistical relationships, correlations, among the pieces of data, without having any idea about how things go together or which things are causal. These are correlations among what can be millions of data points, each connected to perhaps a thousand others, more or less, depending on the sort of neural network you're building. And those relationships the machine is drawing have weights, roughly the strength or likelihood of each relationship. You get this enormously complex network, which does not look anything like the artist's conceptualization on the slide, in which lots and lots of particular points are connected to lots and lots of particular points. And when you want to know, okay, let me see what the sales projections are going to be, you put in the data and out come results. And we use these; they don't always work, but we use them because often they do work.
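By contrast, here is a hedged, toy sketch of the machine-learning move: instead of writing the relationships down, we feed in made-up (salespeople, marketing) observations and let the machine iterate until it finds weights that fit. The data, learning rate, and iteration count are all invented for illustration; a real system would have vastly more data points and weights, and nonlinear connections between them.

```python
# Toy "learning" loop: a single linear unit whose weights are found by
# iterated adjustment (gradient steps), not by a human-written formula.
# The training data is fabricated purely for illustration.
import random

random.seed(0)

# (salespeople, marketing) -> observed sales, made up for this sketch.
data = [((5, 1.0), 27.0), ((10, 2.0), 54.0), ((8, 0.5), 41.0), ((3, 3.0), 21.0)]

w = [random.random(), random.random()]  # weights start as meaningless numbers
for _ in range(1000):                   # iterate, iterate, iterate...
    for (x1, x2), y in data:
        pred = w[0] * x1 + w[1] * x2    # the machine's current guess
        err = pred - y
        # nudge each weight against its share of the error (a gradient step)
        w[0] -= 0.005 * err * x1
        w[1] -= 0.005 * err * x2

# The learned weights are just numbers that happened to fit the data; the
# model never "knew" what a salesperson or a marketing dollar is.
print([round(v, 2) for v in w])
```

With two weights you can still read meaning back into the result; with millions of weights connected a thousand ways each, as in the networks described above, that reading-back is exactly what becomes impossible.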
They give us more accurate predictions, or faster predictions, or better classifications than we could manage. They're doing things either faster, better, or cheaper than we can, and these are things that we think of as cognitive activities. Every day, I mean literally every day, there are new applications of this technology. Many of them are weird and surprising, but they work, and for many of them, at least initially, we don't know why they work. There are so many connections that if you sat down and tried to figure out where the different points connect, you would spend years following meaningless data points and come out of it still not knowing how it worked. In some instances you can extract generalizations and begin to see how it works, and there's lots and lots of great work being done to make these systems more explicable. But as it stands right now, some of the systems we use simply cannot be understood. People look at them and examine them and say, I don't see why, I don't get it. But they work. That's a surprise in our history. These things work, we use them, we're getting tremendous benefit, we are at the beginning of a spurt of innovation, but it comes at a price; it's a wrenching change. It is giving us a new model of models, which for me, and for the book, is the important and interesting thing. The types of models it builds are different from the sorts of models we build for ourselves. The sorts of models we traditionally build for ourselves are laden with meaning. We like it when we have general principles. We can say, okay, here's the general principle about how a business works, or how anything we're modeling works. Here are the general principles. We can apply particulars to them. We can explain particulars by looking at the general principles.
We can predict on the basis of those general principles, and they are understandable. We understand them; that's how we use them. And at least a part of us says the general principles are where the truth is. The rest is just sort of data; it's transient, you feed it in. The permanent truth is in the principles. In these new models, it's really not like that. A model is a connection of particulars, and these particulars are connected complexly and densely and delicately, so a small change in one of them can cause a gigantic change, a ripple effect, a butterfly effect if you will. And at least some of these models we don't understand, and there's controversy about whether we can get to the point where we will always understand them. Right now, we can't. So I want to look at some things that change. I'm going to warn you this is very speculative, but if we're going to start understanding ourselves in terms of those models, here are some things that maybe will change, or are already changing. We can begin to see the change already. The first is strategy. Strategy is a pretty new thing in the world, in the way that we think about it. Military strategy is really a 19th-century invention; with business strategy, you're talking decades. In order to have an idea of military strategy, to be able to write books about it and teach it and fight battles using it, you have to have an idea that it's a fairly stable system, stable enough that there are some general laws. They're not as stable as Newton's laws of gravity, but nevertheless there are some general principles, stable and law-like, that we can come to know and educate ourselves on, and we can understand them. Our experience on the net says, well, maybe life isn't quite like that. In the past 10 or 15 years, the idea of business strategy, which as I say is already a pretty new idea, has come under very potent and heavy criticism.
A book like The Black Swan by Nassim Nicholas Taleb, a very important book, basically says that your strategy is very likely to be disrupted at any moment by things beyond your control. The black swan shows up, falls on your head, and crushes your business, crushes your supply chain. You never know. Strategy is not as stable as we thought. The only thing that's surprising to me is that we needed to hear this, but we did, and this has become a very important book within the world of business. Likewise, Rita Gunther McGrath is among the people saying: be careful of committing to large-scale strategies, because there's so much change. Pay attention to the small changes around you, look for opportunities and risks, and try to address them early. Some people have started calling this type of approach, this wariness of big, thunderous strategies, minimum viable strategy. I actually thought I invented that, and then I Googled it. I did not invent the term, but I like it. This also, I think, helps explain why we hear such ridiculous terms from the Valley, terms we all find ridiculous, like disruption and pivoting and move fast and break things. I'm not advocating for the language. Nevertheless, the fact that this is the language of the Valley tells us something about the nature of strategy: it has changed. That these things can be talked about as strategy is really, really weird. It indicates perhaps that we are moving away from the old picture a little bit. This is inessential, but I'm going to say it anyway. Plato is the first one who explicitly distinguishes strategy from tactics. What he means by tactics is pretty much what we would call logistics. And the go-to example he uses to explain what he means by strategy, the analogy in the first instance, is musicians who are making up tunes: improvisation.
Not exactly our idea of strategy, but I think we're seeing indications that we are actually going back to that notion of strategy. The second thing that maybe changes, that we may already see changing, is decision-making, where our model, our image, has been: well, you're at a crossroads. You've got two choices to make, which is hugely reductive. Or maybe it's five, or you have ten options, but you've already done the work of narrowing the future down to a literal handful of options. And then, in a large organization, the person at the top makes the decision, and we consider that activity to be quite heroic. No spoilers here, okay? It's just Jon Snow, no comment, but it's heroic activity. We know that one of the explicit reasons corporate hierarchies were invented in the middle of the 19th century was to limit the flow of information. The person at the top of a large corporation can't know everything, so you need a set of lieutenants who at each step reduce the amount of information; they're competent, and they reduce it to what one person can decide. And, you know, that works if you think decisions are best made by reducing and throwing out as much information as you possibly can. When the web started, people were worried about information overload; that was the topic. For like 15 years, from the middle of the 1990s to at least the early 2000s, this was the topic: information overload is going to destroy us. And I don't know about you, but I have not heard anybody complain about information overload in, I'm going to say, a few years. I hear other complaints about information all the time: people aren't getting the right information, there's fake news. These are serious issues, of course. But the desire I see all around me is people feeling like they're not getting enough information.
I think we have gotten acculturated to what we used to think of as information overload, which, I'm going to say, I think is actually a healthy thing. If it's the case that we need as much information as we can get in order to make strategic decisions, the same is true for decision-making generally. So we see organizations moving consciously towards distributed decision-making, where the local expert gets to make the decision because she's the one who knows the most; that's what makes her the local expert. And if the local expert can't decide, then you escalate it, and you keep escalating it if you have to until you get to the top. This is very much the Wikipedia model, so I'll use the founder as the example. Where'd he go? Okay, there he is. Jimmy Wales has said that by the time a decision gets to him, if it works its way up the Wikipedia hierarchy, and Wikipedia absolutely has a hierarchy, that means the entire community was unable to resolve the question. And if the community was unable to resolve it, it's very likely because it's not a resolvable question: the pro and the con are equally balanced. And so Wales says that in most instances, when it gets to him, his job is simply to toss a coin, because he is not a local expert. He does not know as much about 18th-century French literature as the local expert does. So by the time it gets to him: coin toss. And this guy has said something very similar: that often, when a decision gets to him, it has gone up through an administration of experts, and very likely there's no way to decide. That's not always the case; I don't mean to say presidents always flip the coin. I sometimes wish that were the case, because then 50% of the time we'd be getting good results. But sometimes it is the case.
And it takes a certain modesty as a leader to say that. It's not heroic to be the coin flipper, even though in some instances that's exactly what the job should be. Okay, the next thing that changes is explanations. Let's say you're driving on a back road, you get a flat, you want to know what happened, you look, and yep, okay, there it is: a nail. That's a really good explanation of a flat. But it's a particular type of explanation. It's a very common type, a sine qua non: but for the nail, you wouldn't have gotten the flat. Perfectly good explanation; I'm not arguing against it. All I want to say is that it's not the only sine qua non in this situation. You were on that road because you were late and it was a shortcut; if you hadn't been late, you wouldn't have hit the nail. If you hadn't swerved to miss the bunny, you wouldn't have hit the nail. If metal were softer than rubber, you wouldn't have gotten a flat. If pointy things didn't penetrate better, you wouldn't have gotten the flat. If we didn't care about going places fast, you wouldn't have gotten it. If we didn't live in an economy, a capitalist economy, that serves our needs in this particular way, you wouldn't have gotten the flat. If gravity weren't in effect, you would not have gotten the flat. If space aliens had finally arrived and vacuumed up all the surface metal because they have a rust-based metabolism, you would not have gotten the flat. All of these are sine qua nons. Everything had to happen for you to get that flat. But we look at the nail and we say: that's the explanation. Why? There's a really simple, really good answer. The nail is the explanation because, in that scenario, it's the only thing you can change. You can't get rid of gravity. You can't go back in time and say, okay, I'm going to run over the bunny and not get the flat.
You can't do those things, nor should you. But you can take the nail out. And so that becomes the explanation. Of course, I'm not arguing against this. All that I'm saying is that explanations are tools. They are not always the best tool, despite what some regulators seem to think. And they always hide more than they reveal. And I'm pretty sure I'm going to regret saying "always," because one always regrets saying "always." But I'm going to stick with it. This type of explanation hides more than it reveals. It's fine. It's a tool. It does what we need from it. But if we are now in a position, thanks to the new sorts of models that we're experiencing, thanks to our life on the Internet, where we are in this chaotic environment, if we are coming to accept inexplicability not as an exception, not as a flaw, but as a part of the landscape. It's not a failure to understand; we understand through explanations that hide more than they make clear. If we get used to inexplicability as a part of the landscape, then we are headed for disruption on a Copernican scale, as I think we are. Because Western culture, and you remember, I scoped this only to the West, Western culture begins with a covenant. A covenant that says humans are the special creatures that are able to understand their world, to some degree. In one tradition it's that God made us in his, sorry, "his," we'll talk about that later, in his image, by which nobody means God looks like a person, but rather that we were made as creatures who were able to understand and appreciate some of God's creation. In the Greek tradition, the same word that names the order and the beauty of the cosmos, logos, is the word for the human ability to apprehend that beauty and order. This has been fundamental to our idea of who we are and why we're here. There's no point in being the rational animals if the world isn't rational.
But our experience of the internet, our increasing reliance on inexplicable models, models that don't rely upon general principles, that connect particulars to particulars and may not yield general principles, or may not yield general principles that humans can understand: that breaks the covenant. It says that this technology is "thinking," it doesn't think at all, right, so heavy air quotes on this, but is "thinking" about the world differently than we are, and we're using it because it is "thinking" better, at least in those instances, for those purposes, than we are. Maybe our thinking is a tool. We have known this for a long time. This is not news, but it's coming to prominence because now our machines are letting us succeed with inexplicability. The complexity of these models, their particularity, is at the heart of how they work, and those two things do not yield themselves easily to human thought. It may be that the world is more chaotic than we thought, and now we can see that we've succeeded with it on the net, because we have technology that enables us to make predictions and classify and do stuff like that, but the world may be less like our understanding of it than we thought. The universe does not owe us an explanation, and if it gave us one, we couldn't understand it. We may be less at home in the universe than we were able to acknowledge before. That is, I think, a scary prospect. It has dangers for sure, but in my heart I also think it's an evolutionary step. I think it's a step forward for us. It's a step toward maturity as a species to be able to recognize that. It's a painful transition for sure, and a dangerous one, because we are going wrong in many ways and will continue to. We're just at the beginning of it, though, and I think that it is actually a way closer to truth. And to the humility that comes with awe. Thanks. Thank you, David.
So that was my 20-minute talk. So this morning, as I was thinking about this talk, I was Googling around, and I read a paper called "Explanation as Orgasm" by Alison Gopnik. And for those of you who haven't read it, basically she's a developmental psychologist and neuroscientist, and she says that just as we have orgasms to induce us to reproduce, we have this orgasmic love of epiphanies, of explanation, that induces us to learn. And so the reason little babies feel joy and laugh when something unexpected happens is that it forces us, it's hard wiring to induce us, to learn, and the desire for explanation runs at the level at which we desire reproduction, and it's hard work. So either your explanation has to give us a lifetime's worth of orgasmic epiphany, that we will never have explanations, or you are going to deprive us of orgasmic explanations. But to turn this into a question... That is an awesome choice. I need to think about it. But I think we've talked about the different responses people have, so in my book I had this principle, practice over theory, and my faculty peers hated it. They said, no, no, it should be the opposite, theory over practice. Are you calling for the end of theory? No, just the end of orgasms. No. And before I respond... You can respond to the first part too, if you want. There's a connection between explanations and humor and the joy of both that is ripe for exploration. I haven't read the orgasm piece, so I don't want to comment on it in any serious way, just sort of cheap shots, expressing my discomfort with the entire concept. Although I think it sounds pretty good. So no, I'm not calling for the end of theory. I like science. All anti-vaxxers raise your hand and then leave. I like science. I want more of it. I want evidence-based argument. But, so it's got to, you know... We live in a world that... We think we live in a rule-based universe. And we do. I'm not gonna...
No intention of arguing with Newton, right? Or any other people proposing well-documented universal laws. So we think about the world as consisting of... The reality is in those principles. But then we get up and we go to work. And we cross the street, and even in this rule-based world, we have no way of knowing that there's gonna be a candy bar wrapper over here and somebody wearing a Beyoncé t-shirt over there, and that there's a red car with one light out, and that we're gonna miss the light, and we're gonna get to work and the coffee's gonna be warm or hot or cold, or the canister is broken, or our co-worker's borrowed our cup and forgot to wash it and she's wearing mismatched socks, or whatever. Everything in our lives, everything in this room, is an accident in that regard. But that's not necessarily new, right? No, it's not new. Okay. So I'll get to the new part. No, you're absolutely right. We have in the West metaphysically invested reality in what's permanent and real: the universal laws that we are remarkably able to discover, those are the real things. All that I'm suggesting is that if our experience on the net is of succeeding with chaos and cherishing chaos, and if we now have a model that in some way instantiates that, it doesn't start with rules, it may not yield rules, but it takes individual pieces and connects them in statistically useful ways, we hope; if that becomes the model of how the world works and how the future happens, then maybe, instead of saying mere accidents, real laws... Nobody ever says mere laws, mere universal laws. There's a value judgment there, a metaphysical value judgment, where we have preferred the eternal and the permanent and the laws over the experience of our own lives.
And if these changes enable us to change that weighting, not disinvesting from the importance of universal laws, when we can find them it's fantastic, but to give more validity and importance to the accidents, the contingencies, the dust of our own lives, then that's a change. So I do buy the importance of accidents. I mean, there are a number of scholarly studies about innovation and discovery showing that the majority of interesting things are discovered while looking for something else. So I think serendipity makes sense. But the question gets back to how we live our lives, right? Because, and this gets back to the orgasm thing, I have a two-year-old, and another thing that Alison describes is that the number of hypotheses tested per minute by an infant is higher than by a research scientist, right? And so what's interesting is that our lives are continuously hypothesize, test, learn. And that loop happens because of the love of epiphany, right? But you're not saying to give up on that. A/B testing in a way is that, right? Yes. And our addiction to causal explanations, although you kind of tease the causal part: is it about the timeline and the grandeur of the stuff? Because at a micro level, do you agree that learning comes from testing causal hypotheses? That's certainly a big part of it. Yeah. Okay, keep going. Yeah, and so I guess, is it a time-scale thing? Because I think even strategy, if it's very short, minute by minute, and that's what a lot of Silicon Valley is, is it still strategic, but strategic at a much faster clock speed? Or is that not the way you would frame it? Well, let's go back to that, because that's an important point too. So there may well be, and I assume there are, reasons why the patterns that machine learning discovers are patterns. And it's certainly the case that each piece of the confetti that falls on the Thanksgiving Day Parade is governed by causal laws.
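The hypothesize-test-learn loop being discussed here, of which A/B testing is an instance, can be sketched in a few lines. This is my own toy example, not anything from the talk: the variant names and conversion rates are invented.

```python
import random

# Toy A/B test: hypothesize that variant B beats A, test by random
# assignment, learn by comparing observed conversion rates.
random.seed(2)
TRUE_RATE = {"A": 0.10, "B": 0.12}   # hidden ground truth (made up)

def run_trial(variant):
    """One visitor sees `variant`; returns True on conversion."""
    return random.random() < TRUE_RATE[variant]

counts = {"A": [0, 0], "B": [0, 0]}  # [conversions, visitors]
for _ in range(20000):
    v = random.choice(["A", "B"])    # random assignment is the "test" step
    counts[v][0] += run_trial(v)
    counts[v][1] += 1

rates = {v: c / n for v, (c, n) in counts.items()}
winner = max(rates, key=rates.get)
print({v: round(r, 3) for v, r in rates.items()}, "-> ship", winner)
```

With 20,000 visitors the sampling noise on each rate is far smaller than the 2-point gap, so the loop reliably "learns" the better variant without any theory of why B converts better.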
The reason I shrank for a moment from the testing of causal hypotheses, although that's fine, is that there's been a line of thought over at least the past few decades that says: let's assume that human thinking, human reason, is not aimed at truth but at evolutionary objectives. That we are optimized, so to speak, not for discovering truths but for surviving, for having lots of kids who then will... So the fact that a kid is testing lots of hypotheses doesn't necessarily mean that she's developing statistical patterns about how things interoperate. Language itself, so there's a mix here. Words are generalizations. Words are a single thing that cover instances that in every case are different, one way or another. We can't survive without words. I am not suggesting that we go non-linguistic. Rather, I am suggesting that we may be in for a change in the broad strokes of how we think about how the world works. We certainly want science to continue. I'm sorry, did I terminally offend any anti-vaxxers? We certainly want science to continue. We want more funding for it. But what do we think that it's doing, what do we think that we are doing? I think Newton certainly knew this, I think all scientists do, except the ones who are going to disagree with me, I guess. There are universal laws. The data is incredibly complex. Applying those universal laws to each little tiny piece of dust and piece of confetti is impossible. We take that for granted. That's our condition. I'm not disagreeing with that. I'm suggesting that we take it more seriously. We say that, and then we turn our attention back to the universal laws and say that's what's real. We'll say, yeah, they're real, but so is the dust and the confetti.
So is it like the miscellaneous thing, where we focus on the stuff that we understand, but the majority of important stuff is unexplainable? Is that the way you would say it, that it's a distribution? Yeah. And I guess to put my machine-learning-tussle hat on, my personal opinion is that there's an interesting battle going on, I think, between the statistical-correlation machine learning people and the emerging causal people, like Judea Pearl's "why" book, right? The Book of Why? The Book of Why. There's a famous statistician and criminologist named Berk who said that if sunspots and shoe size correlate with crime, we should use that data. And there's definitely a fairness issue there. And then there is the question: is correlation equivalent to cause, and should we use correlation? I think the thing is that you get lazy. Machine learning is very good at making predictions based on correlation, and a lot of humans then stop, because it's difficult to ask causal questions. And you deal with causality a lot, sine qua nons, in your book. So the question, and it's a nuance, and by the way, when you asked me to do this, you knew that I wasn't going to throw you softballs, right? Are you saying that we should give up? Are you saying that it's going to be harder? Are you saying that we should try hard? Where are you on this causal and counterfactual thing? What are you telling us as a community about how we should deal with causality? It's a simple little question. First of all, I think every Westerner, at least, believes that underneath all of this there's causality. If there are correlations that keep holding and holding and holding, that are more rigorously tested, then maybe there's some causality we don't understand. It may be very indirect between sunspots, shoe size, and crime.
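The sunspots-and-shoe-size worry is easy to reproduce synthetically: give two variables a hidden common cause and they correlate strongly even though neither causes the other. A minimal sketch of my own, with purely illustrative labels:

```python
import random

# A hidden confounder Z drives both X ("sunspots") and Y ("crime"),
# so X and Y correlate strongly with no causal arrow between them.
random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

z = [random.gauss(0, 1) for _ in range(5000)]    # hidden common cause
x = [zi + random.gauss(0, 0.5) for zi in z]      # "sunspots"
y = [zi + random.gauss(0, 0.5) for zi in z]      # "crime"

r = pearson(x, y)
print(f"correlation(x, y) = {r:.2f}")  # strong, despite no x -> y causation
```

The correlation is genuine and would forecast well in the short run, which is exactly Berk's point; but intervening on X would change nothing about Y, which is exactly the causal camp's point.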
That seems pretty unlikely, but I am not a computer scientist. From what little I know, it sounds like, including from the Pearl book, there are good reasons to incorporate what we know about causality into machine learning systems, when it makes sense. I mean, why throw all of that out? Except that machine learning can turn up correlations that a strictly causal analysis would miss. So you're saying "and"? Yeah. Don't throw away the unexplainable, but don't give up on explanation. Yes, and you inserted quickly the fact that there are some ethical issues here as well, and I know that you know this very well, as many people in the room do. Those are really, really serious. When you are arresting people of color because there's a statistical relationship between the sneakers they wear and whatever, you really have to do some careful thinking. In fact, you have to stop doing that. That's pretty clear. So the ethical issues that ring this are actually central to how we apply this stuff. It has to be done very, very carefully. I would point out that there's an interesting, I'm blanking on his name now, Timothy Morton, a philosopher like you. I'm not a philosopher. You are? No. Once a philosopher, always a philosopher. But he has this thing called object-oriented ontology, and he's working with indigenous communities, because I think one of the interesting things is that a lot of climate-related issues come from a very reductionist approach to and understanding of climate. And indigenous communities tend to have a more nuanced belief system where unexplainable but intuitively understood things are practiced. And there's, I think, an interesting movement to try to incorporate that, whether it's in talking about Eastern medicine or about the understanding of the climate by indigenous communities. So among anthropologists and philosophers there's an interesting approach to trying to bring the unexplainable into context.
And I wonder, have you thought about that, and is it connected? Because, let me lean back: I think that we've had a very reductionist period through the industrial revolution where we wanted all the modalities to look the same. We wanted organization, we wanted order. And now machine learning and the internet have shown that the world is chaotic. But I think a long history with nature actually invites you to connect with it. Or even just society. It turns out to be messy. And I think it was actually the engineers and the economists who wanted to quantify everything. And if you talk to the humanities and the social scientists, they might argue that it's not quantifiable. And the philosophers are often in service of the economists and the engineers, helping them justify the reduction with platitudes. Sorry, I'm getting off on my own little tangent. But I guess, do you think this is a new thing with computers and the internet, or is it just re-understanding something that may have been around before we got into this engineering mode? When are the softball questions coming? No, those will come when I open it up. No, they're not going to throw you softballs. So there's been a tension for a long time between these two approaches. One is what today we would call statistical, but even before, you would note correlations between things. So the dog star appears and you know that the banks of the Nile are going to overflow, and you have no idea why. You don't understand about gravity and what causes the tug that causes the overflow. But there's a correlation, and it turns out to be causal underneath, and we accept it. It's a simple statistical one. The founding fathers of the U.S., a bunch of them, used to track the weather every day, like a weather log, without any theory about whether an early frost was foretold by the withering of the chrysanthemums or whatever. I'm pretty much making this up.
But they would note this and then try to find correlations, and that went on for hundreds of years. It's only right at the beginning of the 20th century that a model of the weather was formulated, by a guy whose name I never remember, Bjerknes, I think. Anybody know? You can Google it later and correct me. He said, well, okay, you know what? Weather, the atmosphere, works on Newtonian principles which you can understand. There are seven factors: there's moisture, speed, heat, whatever. Seven factors. We can use this. And he came up with a model that was law-based, reductive in its way, and eventually it worked; initially not so much, and the early computers took over a day to calculate a single day's forecast. At the same time, there were statistical models getting increasingly sophisticated. So this tension between the two has been going on for a long time under different names. It's only now that we have machines that are capable... so, everything on the surface of the earth affects the weather a little bit. It doesn't care, but it affects it. Only now do we have the sensors that are able to gather enough data, and machines that are able not just to process the data, but to process it in its relatively chaotic form, looking for correlations that the seven principles, which of course are now more numerous, would miss entirely, and we're getting results. You all know that your weather reports have gotten far more accurate, more precise, and longer-term in the past few years than they had been for hundreds of years. Well, for a hundred years. There's been amazing progress in weather forecasting because it's machine learning now. We now have machines that are able to calculate and compute and connect more and more stuff, in ways that frequently we cannot understand, that are giving us better results. We are living in this world every time you use your phone: when you're reading your email, the spam filtering is machine learning.
The suggestions on your music system, the type-ahead when you type, the routes that it's drawing for you: it's all machine learning. And in some instances we don't know exactly how it's working, but it's working really well, well enough that we're not using the old tools. As this begins to sink in, we say, yeah, no, it's fine. We don't understand. We don't need to. The routing is working fine. It doesn't matter if we don't know how it works. The spam filters are amazing. We don't know why it's picking out the word "beyond" in connection with that, but who cares? It's working. And as for our medical care: we go in for treatment, and we're not very far away at all from the doctor telling you, well, you look pretty good. Everything's fine, except the machine learning system says that you're at risk of developing type 2 diabetes in five years. What do you want to do about it? A 62% chance. And you say, what? Exercise? Don't eat carbs? The doctor says, well, it's machine learning. We can't tell you exactly in this case, but we do know it's a 62% chance. And you're going to do something about it or not. We're going to be faced with this over and over and over again, and we are going to be constantly reminded, properly, about ways in which inexplicability is unacceptable. In the judicial system, for example, I think there's a strong case to say we do not want any inexplicable systems. There are going to be accidents. Uber is going to run over another pedestrian, and we sort of know why that one happened in Arizona, but we don't know; it's too complex. But we're going to get back in the car and let it drive us, because fatalities will have dropped. The standard thing to say is 90%. Say they drop 50%. We're going to be in the car. The planes land using computers. It's fine until it isn't.
So every time these things prove inexplicable, because they screwed up, perhaps, or because they're creating injustice, it's going to drive home to us that, oh, we're living with and benefiting from machines that we don't understand. That's going to further convince us that the world is chaotic, because we can benefit from that level of particularity, refraining when necessary from formulating general principles. It's great when you can formulate them, but if you can't understand what is working, I think it may change our attitude toward not just those predictions, but our understanding of how complex life is. Starting in the early '60s, I guess late '50s, with Silent Spring, we started to understand that there are ecosystems. It was a new notion. The word only goes back a couple of decades before that, but ecosystem, what an interesting idea. And the notion that everything is connected: we all accept that now, we understand it, we take it for granted. That's being driven down a level now. It's not just the broad sweeps, the course of the river; it's the falling of ash, it's the way a leaf skitters down a driveway. These are now all examples of a different world. I'm going to add one last question and then open it up, and this will be an easier one. So you said that the leader just flips a coin if everybody tells the truth all the way to the top. I think at least in my organization, everybody hides a little bit of the truth, and by the time it gets to me there's no truth left. But either way, with either zero truth or absolute truth, is leadership overrated? Do we need leaders anymore? Should we just replace them with a coin flipper? Is that what you're saying? That's exactly what I'm saying. No, it's not what I'm saying. I mean, it is absolutely overrated. It's toxic in many instances. The gender politics of it by itself should make that clear.
And the reaction against leadership has been going on now for a long time. We can certainly trace it back to the hippie era, but we can go back further; it has a long history. We both admire and love our leaders and are very, very skeptical of and worried about them. So my point is not that all decisions ought to be a coin toss. It's that to make decisions, we need more information than ever. So rather than reducing, let's see if we can make decisions that take advantage of as much information as we can. And there's another part I didn't talk about, but I can be very brief about it, which is the idea behind strategy. Do you know the phrase, we're going to put all the wood behind the arrow? This is from, what's his name, Scott McNealy, who was CEO of Sun Microsystems, which was a computer company before most of you were born, oh my god. And talking about strategy, he said we're going to put all the wood behind the arrow, which is a phrase that sort of caught on, because it is manly and it's phallic and it's just great. So we begin with orgasms and end with phalluses, you and me. Sorry, that really didn't come out right. It's a phrase that assumes that with strategy you have to reduce; it's a zero-sum game and you have to put all of your resources behind the one thing that you choose, which makes it very high risk, of course, but sounds very impressive, and you're doing everything possible. And this whole military image of decision making is generally entirely inappropriate anywhere except in the military, and maybe not even there, I don't know. There's actually really interesting stuff going on in the military around leadership; they have been really one of the most enlightened groups when it comes to leadership. And it's based upon this zero-sum idea that you only get to choose one direction. Well, yeah, often that's the case, and you have to make those decisions, but increasingly now, you can go in multiple directions.
You can set up an open platform and pursue many possibilities, or let the world pursue the possibilities that you can't. You can make more future, and in that situation, leadership is not what we used to think it was. It is far more generative; it is more about enabling thriving, enabling growth, and nourishing. And, I'm going to say, the sorts of things that you actually do as head of the Media Lab, where you enable more possibilities. You also do a budget, which requires getting all the wood behind a few arrows. I'm considering changing my title to custodian. That's actually what I do. No, you're a gardener. Nothing wrong with being a custodian. All right, thank you. I want to open it up now. We have, sorry, only about 10 minutes left, but does somebody have a microphone? Somebody with an easy question is way more important. So the person with the microphone picks the next person. And I have a microphone. Is it on? Okay. Yeah, I think it's on. Okay. My question is sort of a comment, and I want a response to it. And that is that all this data that we're looking at is not reality. It's a measurement of reality. And so it's always incomplete. And so making decisions on incomplete data is always probabilistic. So I think a lot of what we're dealing with is that as life gets more complex, it becomes more probabilistic than deterministic. So there is a determinism in it, but it's not absolute. And one of the things I see with chaos and accepting chaos is that some people accept it and a lot of people don't. Maybe a comparable amount, maybe more. And maybe we end up with people who run countries who don't really believe in chaos, but believe in things that aren't necessarily fact-based. That would never happen. Right. So I'm just saying it's maybe more complicated than that; I'm just saying there's still a reality underneath it. I'm a scientist, I'll admit that up front, and I expect I have this faith.
I've thought about this a lot too, because I'm a little bit spiritual about it, but we scientists have faith that there's an explanation underneath everything. And we don't necessarily need to have that explanation all the time. And I've thought about this in the context of religion, and in what you talked about regarding indigenous peoples' beliefs, which are based on the data that they have sorted together, though they don't know what's behind it. So it works over a certain time scale, but not necessarily over a different time scale. And so there are problems with this moving forward. It's not straightforward. And we're not really dealing with it correctly yet, and when we think we are, we're probably wrong. So what do you think about that? I'm going to hold back on talking about spirituality. I'll let Joey do that if you want, but I want to thoroughly endorse the rest of what you're saying. I don't disagree with this, but okay. So, data. I'm going to annoy a computer scientist, I think, but data is a construct. We decide what we want to measure because we think it matters, and we determine how we're going to measure it, in what units, and how accurate we need it to be. From my point of view, and I mean this pretty much literally, information is what we read off the dials that we've created to measure the stuff that we care about. And then we go to a machine learning system and we put in the data that we think is relevant. So there's human touch all around machine learning. It is not just looking out over the universe, gathering data, and making its pronouncements.
We're feeding in the stuff that we think matters, and so that's likely to be the normal sort of stuff that shows up in a hospital medical record, and that seems pretty reasonable, and it very well may not include the local environmental conditions in your community, which may turn out to be absolutely, really important. These are systems, and then we decide what we want to optimize them for, and that's a human decision. How accurate do we insist they be? When are we going to take their advice? Who's going to decide what the thing should be optimized for? Who's going to decide that this is a useful set of data, and this is a set of data that we think probably doesn't contain hideous biases, the hideous biases that human society is heir to? These systems are thoroughly drenched in the human, and so the results that we get are affected by that, negatively, because we're going to miss some stuff, because we didn't realize that some factors affect things: that sneakers, shoe size, does affect health or whatever. There are going to be factors we're going to miss. You've got to wonder about this for weather, for example, and there are going to be factors that throw things off. These are always more human than I have made them out to be, so I'm really glad you made your comment. These are probabilistic systems, absolutely. And machine learning computer scientists, all they deal in is probabilities. That's what they want out of it. Every answer is probabilistic: a 62% chance you're going to get type 2 diabetes in the next five years. I think that is a huge step forward for us, though. If we are able to start asserting, with every proposition, what we think its probability is, maybe not numerically, then maybe we can talk together better, make more sense out of things together, because if you don't, you just end up asserting at each other. So I actually really like the probabilistic nature of it.
My profession is one that you just identified as a perverter of philosophers. I'm an econometrician. That was him. And I worry about machine learning that is so based in correlation. A very simple example: there was an outbreak of flu, a bad epidemic, in Russia. And the peasants did a correlation, and they found that the outbreak was worst where there were the most doctors, because the doctors had been sent by the central authority. They killed all the doctors as a way of curing the influenza. So we teach our students that yes, if you merely want to forecast in the short run, throw everything in and you get your correlation, but you do not know causality. You do not know how to change what's going to happen. So without theory, mere correlation is useless. And I remember one professor teaching me: knowledge without confidence is futile. So I do worry that you can cause chaos by depending on correlation. So that's a really important point, and I don't think there's any gainsaying it. So I don't mean to gainsay it with what I'm about to say, which is: absolutely, but machine learning generally is not doing those simple correlations. It's taking a huge amount of data, not the five things that we think might affect each other, like doctors and measles. So there's a system called Deep Patient that Mount Sinai Hospital did in, I think, 2015, a deep learning thing, where they took 700,000 patient records, each of which had about 500 data points, and threw them into the mix. It's deep learning, which means that it was just numbers, and it found correlations among the numbers without being told ahead of time: these are doctors, these are patients, these are symptoms, these are illnesses, these are medicines. Just numbers.
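The "just numbers in, predictions out" setup described for Deep Patient can be mimicked in miniature. This is entirely synthetic and my own construction, not Mount Sinai's system (which used deep unsupervised learning rather than the plain logistic regression sketched here): a model fit on anonymous numeric records learns to predict an outcome without ever being told what any column means.

```python
import math
import random

# Toy stand-in for "records as anonymous numbers": the model never learns
# what any column means, only which numbers co-vary with the outcome.
random.seed(1)
N_FEATURES = 20

def make_record():
    x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
    # Hidden ground truth the model is never told: columns 3 and 7
    # drive the outcome; the other 18 columns are noise.
    logit = 1.5 * x[3] - 2.0 * x[7]
    p = 1 / (1 + math.exp(-logit))
    return x, 1 if random.random() < p else 0

train = [make_record() for _ in range(4000)]
test = [make_record() for _ in range(1000)]

# Plain logistic regression by stochastic gradient descent.
w = [0.0] * N_FEATURES
for _ in range(30):
    for x, y in train:
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i in range(N_FEATURES):
            w[i] += 0.05 * (y - p) * x[i]

correct = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1)
    for x, y in test
)
print(f"accuracy on unseen records: {correct / len(test):.2f}")
```

The fitted weights end up concentrated on the two informative columns, so the model predicts well on unseen records; yet nothing in the pipeline ever says what column 3 or column 7 "is," which is the inexplicability being discussed.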
It's just numbers, and it found correlations among these millions of data points that enable it to make better, more accurate predictions for some diseases than humans do, and to predict, always with some degree of accuracy, the onset of diseases that humans simply cannot yet predict. And in some of these instances we don't know why; we don't know why it thinks what it thinks, but it turns out to be probabilistically correct. That's not a case where there's a simple correlation between doctors and influenza and you go do the stupid, horrible thing. This is a case in which there's so much data, so many data points, that it can't be fathomed, but the result is a more likely outcome.

But I think those edges of explainability are where scientists will now go to try to create those understandings, so I think as an exploratory tool it's actually quite exciting, and I wouldn't want to give up looking for the causes. I do not want to give up; any progress we can make in explicability is good. I don't think you were saying that, but I was just reinforcing that that edge is very interesting. And please pass the mic to somebody raising their hand. If you have the mic, please keep it and ask your question.

Thank you so much. You had mentioned complexity, and I think you were suggesting that by having open systems you can deal with complexity, because you're not prescribing something in advance. I wanted to ask: is it possible to explain something which has no cause, if machine learning is using correlation? Are we trying to solve the wrong problem, trying to explain something which is directly using correlation? And to what extent should we seek causality? You gave the example of the nail, and you said it could be the gravity, it could have been avoiding the rabbit.
In a machine learning system you would have to collect all that information, and I'm wondering: does trying to find a cause require machine learning to collect so much data, and what would the implications be of over-collecting information in order to find a cause, if a cause can be found?

I was asking for softball questions, and you have let me down. I think that explanations are a tool, and, by the way, I said this too quickly in the talk, they're not always tools, but it's a very common sort of explanation. If somebody says, why do you do this thing, and you say, well, because every time I do it, another thing happens, and it works, and that's right, then you've given the explanation that was the tool the person needed. Why were you doing it? Because, in my experience; I don't have a theory, perhaps I'm a child.

I would just make sure that we clarify the causality piece: the sine qua non and the counterfactual are slightly different, and the counterfactual is an explanation. So, an example in the diabetes case, Sandra Wachter has a great paper about counterfactuals, and, to your point with the nail, what you want to know is: what was the smallest thing you could change to have changed the outcome? If you are diagnosed with a 62% chance of diabetes, what you want to know is what small thing you can change to lower that. If it's that, were you to weigh 10% less, your likelihood of diabetes would go below 50%, that's what you want to know. So I think counterfactual explanations are helpful: they don't explain the underlying theory, or even maybe the cause, but they may be able to tell you something you can do about it. I think that's slightly different from cause. We call this shortest distance: if you've got all these variables, what you're looking for, for this person, is the shortest-distance change they could make to have changed the outcome. And I think that's a category of explainability that some computer scientists are pushing, especially in law and automated decision making.

Yes, and so I'll make a silly example, because with "drop 10% of your weight" we already have a theory about the relation of weight to diabetes. So let's say it's actually "eat three cornflakes a day." We don't know why, but people who eat half a bowl of cornflakes, we don't know why. And it's got to be something you can act on: "be born Japanese" is not helpful, because you can't change it. So it may be that eating three cornflakes a day works. It's not exact, and that's fine, good, do that, that's a really good thing to know, but it's not exactly what we would normally mean by knowing.

That's kind of my point. Yeah, I understand, and that's fine, then don't eat the cornflakes, but we still don't know exactly why. And it may be, I'll give you a harder example in a minute, that the reason we don't know why is that the cause of this thing, in this case diabetes, is so enormously complex. It's like asking what the cause of a war is. Typically you get something like: the Archduke was assassinated. But so many other things had to be in place; without them you just have a dead Archduke, you don't have World War I. It's not exactly what we think of, in that case, as an explanation. We're told in our schools that that was the cause, but it really was not; we need the economists to help us, we need the geologists to help us, everybody. Some things are so deeply caused that even though you can find a sine qua non, a shortest-distance thing to do, eat the cornflakes, it's not really an explanation.

And to make this really hard: as many people here may know, there are a couple of important papers that say, in the counterfactual case, if you want to know whether the system discriminates against women, and you didn't get the job because the machine learning system discarded your application, or you didn't get the loan, or whatever, then rerun the
application and change just one thing: change your gender. And if it comes out differently, then you know the system is biased. That seems like a pretty good test. But it's not exactly an explanation, if you really wanted to know why it did that. The fact that that one thing changed the outcome is the thing we need to know; it's the nail in the tire. It's not an explanation of why that happened, and the why may be far more complex than a simple cause. But you fixed it, and explanations often are about fixing a problem; that's what they're there for. And if you can do that with the three cornflakes, you can do it with a correlation that you don't understand but that seems to work. Much of life, we've done exactly that.

Does machine learning have to be decentralized, by law? Is the alternative too horrible to think of? So centralized machine learning would be one entity in the world that has all the machine learning, or Google has too much, Facebook has too much, Amazon has too much. So, disclosure: I work part time for Google, I'm a writer in residence for a while in the machine learning research group, and I do not speak for Google. Next disclaimer. There's a case made outside of politics, which is the political and social effects you're asking about, that says if you can get machine learning systems to share their results, and even perhaps share their models, you can start to knit together systems that could give you better predictions. So we have weather systems that have what we think of as weather data, but they may not, I don't know, they may not be connected to industrial information, et cetera, and maybe if they were, you would get better predictions. I don't know enough to be able to tell you what the dangers are of doing that type of distributed, federated system; I am not entitled to have an opinion. I think I would worry more about how any of these centralized systems are
used, and who's making the decisions. I'll take Google, why not. Who's making decisions about what the machine learning system should be optimized for in the YouTube recommendation engine? It could be optimized simply to get more people to click and watch more things longer, in which case it's very likely to recommend horrible, soul-destroying stuff that is addictive eye candy. Just as with Facebook: if it's optimized for maximum clicks, as it seems to be, because that's how they get their ad money, then that seems to have some pretty bad social effects. For autonomous cars, do we want to optimize for safety, for comfort, to save energy, for faster speeds, for shorter travel times? There are conflicts between these different goals. Who gets to decide? And I'm not going to say anything novel when I say I agree with those who say that these decisions need to be informed by the people who are affected by them, all of the stakeholders, which is a very large group of people. These decisions have such large social effects that they should not be made, at least not entirely, by commercial entities. That seems really clear to me. I do not want the autonomous vehicle makers deciding what their cars are optimized for, because, for example, they will be very inclined to optimize them to save the lives of the passengers in their own cars at any expense, and then you get a system that does not minimize the total number of fatalities, just the fatalities of people who can afford BMWs. So I think there's lots of room for regulation, and for making sure that the decisions are made with the input of the people who are affected and with societal interests in mind. I don't have great ideas about how you do that, but I'm really glad that there are lots of people working on it. And how much trouble am I in for this?

Well, I'll take that part and send it to Larry and to Elon. I think we're out of time, but thank you for being a good sport and for the great answers, David. Thank you.

Thank you for the wonderful question.
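The two counterfactual tests that came up in the Q&A, the shortest-distance change ("what is the smallest thing you could change to flip the outcome?") and the single-attribute flip ("rerun it with just the gender changed"), can be sketched on a toy linear scorer. Everything here, the model, the weights, and the feature meanings, is invented for illustration; real counterfactual methods, such as the one in Sandra Wachter's paper, optimize a distance-penalized objective over an actual trained model:

```python
def score(x, weights, bias):
    """Toy linear decision score; the decision is 'yes' when score > 0."""
    return bias + sum(w * v for w, v in zip(weights, x))

weights = [0.8, 0.5, -0.3]     # invented weights for three made-up features
bias = -0.6
applicant = [1.0, 0.9, 0.5]    # hypothetical applicant, currently a 'yes'

def smallest_single_change(x, weights, bias, threshold=0.0):
    """Shortest-distance counterfactual restricted to moving one feature:
    for each feature, compute how far it must move for the score to hit
    the threshold, and return the feature needing the smallest move."""
    best = None
    for i, w in enumerate(weights):
        if w == 0:
            continue  # this feature cannot change the score at all
        delta = (threshold - score(x, weights, bias)) / w
        if best is None or abs(delta) < abs(best[1]):
            best = (i, delta)
    return best

def single_attribute_flip_test(x, i, weights, bias, threshold=0.0):
    """Rerun the model with one 0/1 attribute flipped (e.g. a gender field);
    if the decision changes, the model is relying on that attribute."""
    flipped = list(x)
    flipped[i] = 1 - flipped[i]
    return (score(x, weights, bias) > threshold) != (score(flipped, weights, bias) > threshold)

feature, delta = smallest_single_change(applicant, weights, bias)
print(f"smallest single-feature change: move feature {feature} by {delta:+.3f}")
print("decision depends on feature 0:", single_attribute_flip_test(applicant, 0, weights, bias))
```

As in the transcript's discussion, neither test explains why the model decides as it does; each only identifies a change that would have changed the outcome, which is often all the affected person needs.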