Hi, good evening everybody. Welcome to Making Sense of the Digital Society, again in this empty theater, the Hebbel am Ufer. Thank you for having us once again. My name is Tobi Müller. I'm a freelance journalist of Swiss background here in Berlin, and I am the moderator of this series. It's going to be a little shorter than usual, at least compared to when we did it with an audience; well, it is live tonight, just without one. I think the time frame is about 75 minutes and not, you know, those two hours it sometimes turned out to be in the past. So, good evening everybody, again, it's probably quite safe to say, at home. Within the last two months of a rather loose lockdown in Germany and contact restrictions all over the world, I think the idea of intelligence has been contested. Maybe it has even shifted somewhat, at least in the public sphere. For the moment, I mean human intelligence. Most of us have listened closely to doctors, preferably to epidemiologists, to economists, to politicians, not so much to artists. I have to mention that, since we're in a theater here, but even some of them eventually got some more airtime. We listened to people who seemed fit to say something about how to cope with the pandemic, how to change its course, and what to expect in the near future. But hardly anything is as hard to do at the moment as predicting that future, because so many conditions seem to change faster, at least we think that, than they had changed up to the outbreak. This is not based on a peer-reviewed paper, of course; I'm only your moderator for this evening. But I think there is one thing that holds true for all the smart communicators in this crisis: they have taken into account the category of uncertainty. They point toward the fact that they cannot know for sure at the moment, that they are working on it, that we are in the process of knowing. Is that something uniquely human? Well, I'm not sure. I even doubt it.
But we have a guest here tonight who is going to tell us a lot more about the role of humans in an age of intelligent machines. And I have a hunch that when we talk about artificial intelligence, AI, we first talk about intelligence in general. But more about our distinguished guest in a minute. Thank you again for making this possible: the two main agents hosting this venue, the Humboldt Institute for Internet and Society here in Berlin and the Federal Agency for Civic Education. And of course, the theater itself. The structure of this evening is, roughly, as it usually is: a short introduction, then the central piece, the talk of our guest, and we're going to have a rather short one-on-one conversation after that. And then you at home, or wherever you are watching this show tonight, are able to ask your questions. We have a tool called Slido. I think you should see it on your screens where this is streamed now, on the respective websites of the hosts, and also on ALEX TV here in Berlin. With Slido you can ask your questions, and you're even able to vote on the questions. So Christian Grauvogel, who organized these events so well here, is going to read them out to us during our conversation. There's also Twitter, with the usual hashtag, digital society, if you want to comment there or ask some questions, even after this evening. So back to our guest. While some of the outlooks on the European role in the tech power race have been rather pessimistic in our series, here and at other venues, she, US-born and raised, with a British passport since 2007, says, I quote, the EU is currently leading the world with the GDPR, the General Data Protection Regulation, the Datenschutz-Grundverordnung in German. So that sounds like a piece of good news in this context, and we sure need that at the moment.
And from all that I was able to glimpse from her work, our guest is very precise in differentiating terms that sometimes have been mixed up a little bit here in this series, probably by me: terms like algorithm and AI, mere math, or machine learning. Our guest joined the Hertie School, a private graduate school here in Berlin specializing in public policy and international affairs, as Professor of Ethics and Technology this February. Her research focuses on the impact of technology on human cooperation and on the governance of AI and ICT, information and communications technology. Before coming to Berlin, she spent 18 years mostly in the UK, on the computer science faculty at the University of Bath, one of the top 10 universities in Great Britain. She holds degrees in psychology, behavioral science and artificial intelligence from the University of Chicago, the University of Edinburgh, and the Massachusetts Institute of Technology, MIT. Apart from her academic career, she has also worked as an AI consultant, that was, I think, in '95, for Lego. She has also advised legislators about AI and robotics. And at least since her PhD, she has been reminding many of us, for the last 20 years or more, not to anthropomorphize artificial intelligence, which means not to think of AI as human. But let us hear it in her own words, in this talk titled The Role of Humans in an Age of Intelligent Machines. We are very pleased she is with us tonight, despite the empty theatre. Please welcome Joanna Bryson. You need an applause track. Okay, I'm turning my back to the emptiness. Hi. Well, thanks very much for having me here. It's a great honor to be meeting you. And I look forward to meeting you when there's more of you again. So why don't I go and get started? I'm going to do this talk in three parts, because I think really a lot of the concern is about the jobs, right? Artificial intelligence and jobs.
So when we think about what it is that we're afraid of and worried about with AI, part of it is: will we be able to find a place for ourselves? Then I'm going to talk in a little bit larger frame about intelligence and ethics and, of course, survival, as promised in some of the tweets I saw about this. And then finally I'll make a few recommendations. So can AI replace people? You might think the answer is obvious, because we've seen jobs be taken. But, well, I want to give you a really great case. Just a few months ago, I was asked to give the keynote at the Norwegian radiologists' annual conference. I mean, who knew there was an annual meeting of Norwegian radiologists? But there is, and I knew why they invited me. So let's have a look at this. It's actually more animated than I usually do. Yeah, there was an elephant in the room, and here it is. "Geoff, perhaps a similar question, if you may. What do you think is the exciting work to come?" Okay, so... "Sorry. Let me start by just saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't yet looked down, so it doesn't realize there's no ground underneath it. People should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists, because it's going to be able to get a lot more experience. It might be 10 years, but we've got plenty of radiologists already." All right. So four years ago, Geoff Hinton said there was no reason to train any more radiologists ever, because we knew that within five years, maybe 10, they would all have their jobs taken. In fact, there are more radiologists now than there were four years ago. And the reason is that they each actually produce more value. They're all using AI, and he was slightly wrong.
Part of the reason they had me there was not because they were terrified of Geoff Hinton, but rather because they were all using AI and they thought it was cool and they wanted to know about this ethics part of it. So in fact, AI has increased the value of radiologists. So, sorry, the slide has moved on. Right, I was going to start out with a couple of very quick definitions, because it's only a half-hour talk. There are many definitions of intelligence, but I'm going to use one that goes back more than a century, and it's the one I was taught when I was an undergraduate learning behavioral science at the University of Chicago. Intelligence is the capacity to do the right thing at the right time. And you can think of that also as a form of computation, because what you're doing is you're taking this information, right now I'm sort of ignoring that information over there or this information back there, but I'm taking this information here and trying to generate the correct action, even though this is actually quite an unusual situation. That's what intelligence is about. Now, you might think there's more to intelligence, that it's about souls or life or something else. But I think by using this straightforward definition, which was originally invented so that we could talk about which animals were more intelligent than others, we can start asking interesting questions, like: what is it about intelligence that enhances human value? We can take these pieces apart. All right, so that's the definition of intelligence I'll use for the whole talk. What is artificial intelligence? It's really easy. Artificial intelligence is intelligence in an artifact. Surprise. What is an artifact? It's something that's been deliberately built. So it is not something that scientists discovered under a rock or that came from space or something. It is the subset of all the stuff that translates perception to action that humans are responsible for producing. We're the ones who did it.
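To make that "perception to action" framing concrete, here is a toy sketch of my own (not from the talk): a thermostat is about the simplest artifact that counts as intelligent under this definition, because it maps a perceived state of the world to the right action at the right time. The function name and thresholds are my own illustrative choices.

```python
def thermostat(temperature_c, setpoint_c=20.0, band_c=0.5):
    """A minimal 'intelligent' artifact in the talk's sense:
    it translates a perception (the measured temperature)
    into the right action at the right time."""
    if temperature_c < setpoint_c - band_c:
        return "heat_on"    # too cold: act to warm up
    if temperature_c > setpoint_c + band_c:
        return "heat_off"   # too warm: stop heating
    return "hold"           # within the comfort band: do nothing

# Example: at 18 degrees it turns the heat on, at 22 it turns it off.
```

Trivial as it is, it already shows the two key points above: the intelligence is in the artifact because humans deliberately built the mapping, and the mapping consumes real physical signals, not abstractions.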
All right, so let's go back to talking about employment again. Let's pretend that I wrote, and I do actually still write programs occasionally, some software that made teachers twice as good. What would that mean for employment? Well, actually, there's nothing in that sentence that tells us about employment. We could either have twice as good schools, where the kids get twice as good an education, or we could have half as many teachers. There's nothing in the software itself that determines that, right? This is a political decision, a normative decision. It's something where we need to choose what we think is important to do. But there are big differences, right? If you choose to go with fewer teachers, then you have fewer people with jobs, but maybe you have a higher average quality of teachers. So when people first invented PCs, when Microsoft or whoever got the PCs down cheap enough that they cost something like half a year's salary for a secretary, people started buying one PC and firing two-thirds of their secretaries, keeping the best secretary, because that person could now do much more work with all the software helping them do their job. But those people who were fired didn't all necessarily stay unemployed. So the question here is: we may have higher average quality teachers, or, and this is not an exclusive or, it could be both, we may wind up with fewer whistleblowers. This isn't such a good example from teaching, although there is some whistleblowing going on right now in UK teaching. But just think of it as having a stripped-down organization that gives you a simpler control or management problem. Some people say that's what they want to do. Of course, some people say they want to save money. But when you make those sorts of decisions, you may also wind up with much less diversity. Now, of course, if you're careful about whom you choose, you might still wind up with a similar level of diversity.
And then the other thing I talk about often, but won't talk about in depth here, is fragility. Fragility is when the organization can't handle as many challenges as it could before, right? And a particular kind of fragility that bothers me, which isn't on the slides, but I'll go into it anyway, is the rigidness you get because you've computerized and you've made a computer form the only interface to the organization. If you have humans that can talk to other humans, then you can solve problems. But if you have your customers only working through a computer form, or if you hire some people to type into the computer form for your customers but still take away the liberty of humans solving problems together, that's a kind of fragility too. You find that you can't solve a problem that the customer has. All right. So that was the perspective of the management, or of the society that's trying to decide how much to invest in education or whatever. Let's look at the other side. One of the things that people have said for a while is that when you have communication technology, and this is so true of the moment, where you work becomes less determined, right? And so the way unions used to organize is challenged by this precarious nature of jobs. I know that I'm looking at translators right now, and I know that the translating industry has massively changed recently: it's less likely that translators are working for a single organization, more likely that they're freelance. So this is something that makes it harder to use traditional unionization tactics. I think this should be fixable, because after all, it is communication technology. So I think we will find ways to organize our societies again. And it does seem that we are already starting to see that in some kinds of platform work, where we are seeing strikes and things like that.
However, of course, these workers are exposed to a lot of surveillance that maybe wasn't experienced before, or maybe it was, in factories. Okay. It's certainly true that it's amazing to come to Germany and see the ways in which labor is integrated all the way through the organization, up to the executive. And also in the United States, in some states, workers' rights are being supported; it depends on the state, on the legislature. So this is, again, something that is not entirely about just the workers and their union. It is also about the wider society and how the laws have been written to support and integrate them. Okay. Let's talk about wages. I think this is really the place where the coyote is over the edge of the cliff, because sometimes wages change slowly, even though the real thing that changed them happened a while ago. Now, David Autor, going back to those secretaries I was talking about who lost their jobs when we got PCs: David Autor in 2015 wrote a fantastic paper, which I still recommend, called "Why Are There Still So Many Jobs?" And he pointed out, among other things, that we have more artificial intelligence than ever, and we also have more jobs than ever. So it's not a simple relationship, or at least not a simply bad relationship. But there's something else that could be happening that's a problem, and that is that AI may be increasing inequality. And part of that is because it may be making it harder to have a range of possible wages. And it is believed, although this is a tricky part of economics, that this is part of the problem with inequality, and also with polarization, if you don't have social mobility, if you don't have a smooth range of wages. I'll go into an example really soon. Yeah, in fact, I'll do the examples now. So let's take this one.
One of the things that Autor documented, in fact with colleagues at Boston University, was that there are actually more bank tellers now than there were before automated teller machines. That's contrary to people's intuitions, and in fact, if you look at a particular bank branch, there will be far fewer tellers than there used to be. But the consequence of that was that it made the bank branches cheaper, and so the banks were able to have more branches, which the customers wanted. There was demand for closer bank branches that took less time to get to, and so you wound up with many more branches and actually more tellers, making slightly more money, because they weren't doing the easy jobs anymore. They weren't just counting money very often, right? However, that wasn't the only reason it was cheaper. It was also that with that smaller number of people per branch you didn't need so many bank managers, and they were the ones that were really bringing in the money, right? Now, you could say, well, who needed those guys? And they were guys: they were mostly white males, at least in North America, and I actually learned this from the Royal Bank of Scotland, so I guess in Scotland too. But they were probably the more talented people in the organization. They were reasonably able to work with people, and there was always someone in the community who had worked their way up, so that if something disastrous happened, like a tree falling on the school, they might be able to be the philanthropist who comes in and chips in. So there was someone around with a little more money. That might be something that's essential to the economy. Another example: people worry that driverless cars will take all the driving jobs, but in fact, right now, there aren't enough people. It's hard to hire truck drivers.
Almost all the truck drivers, in the United States at least, are old and want to retire, and nobody wants to replace them. So you might think that means, well, we need driverless trucks, but I would argue that we're already in a situation where AI has taken over driving. Not entirely, it's not completely driverless, but remember that definition I gave of intelligence a few slides ago, right? Power steering is already translating the situation into action. In the 1950s and 1960s, a truck driver had to be a physically strong person, right? And they also had to be good with maps, and good at bookkeeping, and they had to be organized. Now, it's not trivial to drive a truck today, but it's not as hard as all those things. So more people can become professional drivers more quickly, and the consequence is that they get paid less, and therefore you can't find enough drivers, right? They're being paid half as much as they were in the 1970s. Okay. Let's talk a little bit more about that "fewer people with jobs" thing. I find it weird that I have to make this argument. Maybe I don't in Germany, but I have been in contexts where people were saying: get rid of the people, you can't rely on the people, they're dangerous, all right? Okay, it was one of the very few times I spoke in Washington, D.C., but anyway. So one reason you want people is diversity, and not just because it's politically correct, but because it's useful, all right? There's something called Fisher's Fundamental Theorem of Natural Selection, but this isn't just about biology; machine learning also works a lot like evolution does, right? And evolution goes faster the more variation you have, okay? The rate of adaptation is proportional to the variation, right? Why? Because in natural selection you're selecting the fittest, and for that you need variety; and if the world changes, you need to be able to grab some of that variety, all right?
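Fisher's point can be checked numerically. The following is a toy sketch of my own, not from the talk: under fitness-proportional selection where offspring inherit their parent's fitness exactly, the gain in mean fitness per generation equals the variance in fitness divided by the mean fitness, which is Fisher's Fundamental Theorem in its simplest form. All names and the example population are illustrative.

```python
def mean_after_selection(w):
    """Mean fitness of the next generation when each individual
    is chosen as a parent with probability proportional to its
    fitness, and offspring inherit fitness exactly."""
    return sum(x * x for x in w) / sum(w)

def fisher_gain(w):
    """Fisher's prediction for the per-generation gain in mean
    fitness: variance in fitness divided by mean fitness."""
    n = len(w)
    mean = sum(w) / n
    var = sum((x - mean) ** 2 for x in w) / n
    return var / mean

population = [1.0, 1.2, 0.8, 1.5, 0.5]
mean = sum(population) / len(population)
# The simulated gain and Fisher's formula agree exactly:
# more variance in the population means faster adaptation.
assert abs(mean_after_selection(population) - mean - fisher_gain(population)) < 1e-9
```

The design point is the one made in the talk: a population with zero variance cannot adapt at all (the gain is zero), which is why stripping diversity out of an organization or a dataset slows its response to change.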
And we've seen this elsewhere too: various people have gone out and looked at diversity on boards and things like this, and found, again, that as you increase diversity, you get better performance from a corporate board, right? You just have more ideas. So less variation means less robustness for addressing underlying change, as I said before. That's one reason you want a bunch of people around who think differently from each other, and not just to be politically correct. We need privacy, tolerance and diversity, right? We also need to make sure that people can say what they're thinking and stick out, as well as be different. Otherwise, there's no point. Okay. What about accountability? This is something that people don't think about as much. They think: why don't we make the robot responsible for itself? Why don't we make the robot pay tax? Why don't we make the robot a legal person, right? But the problem is that the way we've set up justice, which makes sense for us, is actually more about dissuasion than recompense. That means you're trying to make people do the right thing. You're trying to find ways for society to work together. You're not trying to get something back, some kind of restitution, when somebody's done the wrong thing; if people do the wrong things, you're already losing, okay? All of the ways that we try to dissuade people from doing the wrong things are based on what it's like to be a human: losing time out of your life because you're in jail, losing social status, losing wealth, losing liberty, right? And you could say, well, we'll make the robot care about those things, right? But the extent to which humans care about those things is systemic, and it's not just us. It's dogs and guppies and sheep too. If you separate a sheep from its flock, it's terrified. If you separate a guppy from its school, it can die of stress.
This is systemic, and it's because biology "knows" that we are social animals that need to have this kind of status, and so we're driven to it. We're not going to build that kind of systemic level of aversion into a well-designed artifact. Remember the A part of AI? Now, if you could magically upload a human brain, then yes, you would have something that suffers. But we can talk about that in Q&A; I cut those slides out of this talk. Any system that we buy is built and designed, and if it's safe and secure and something that people are willing to have liability for, then it's also modular. And so if you put the suffering in, you could take the suffering back out. Or a hacker could take the suffering back out. So it's not going to work the same way people work. And we can already see this when we make certain kinds of artifacts: corporations. There are perfectly good reasons why we have limited liability and why we have legal personality for corporations. But you can go too far in limiting the liability: since the 1990s or so, it's been overextended, and we've seen this with shell companies. Basically, they're set up only to fail, because nobody has invested in seeing the thing work. They're only founded as a means to do money laundering and to evade accountability. So AI as a legal person would be the ultimate shell company. Nobody would care if it failed, and so it would be set up to fail. It's just asking for corruption. Okay, and if you want to read more about this, we had a paper in 2017 about it with a couple of colleagues who, unlike me, are law profs. I am not a law professor. All right, so that was AI and jobs. Let's go a little further into these things now. So going back to the definitions: I already said this about intelligence. And, yeah, this important thing: it isn't math. You can use math as part of doing computation, but my point is that computation is not an abstraction. It's not something that's perfect and eternal.
It's actually something physical that has consequences, right? Being able to act requires physicality. Computation takes time, space, and energy. People often forget this. That's why you go out and buy a new computer with more memory: that's the space where the computing is being done, right? So some people think: well, that's not what I was talking about when I was worried about AI. I was worried about agents. And again, there are lots of definitions of agent. If you're reading Donald Davidson, I think he's dead, then yes, okay, that's one thing you might care about. But there's another, simpler definition of agent, which we learn even in chemistry: an agent is something that changes the world. And I agree with you, a robot changes the world. A mobile phone is a kind of robot. It's an AI device that changes the world. It buzzes at you when you're late, right? Something like that. Okay. What we actually care about is moral agency. I don't know what the word for that is in German. Moral agent is a philosophical technical term for whoever is responsible for their actions, right? And it's determined by a society. You might think we all agree about this, but different societies have different ideas about how old you have to be to be an adult. In some societies, it isn't about how old you are; it's about whether your father is still alive and what gender you have, right? These are the kinds of things that make you an adult, a moral agent, in some societies. We can't even agree about how old you have to be to consent to sex, or how old you have to be to fight in a war. There isn't universal agreement about these things, right? Moral patiency is the other thing we really care about: what are we obliged to take care of, all right? And these two things brought together are called moral subjects. I'll skip over that; we mostly care right now about agency and patiency.
And the question is: are we obliged to put AI into one of those positions, all right? Well, all of those things are just definitions. You can find them in dictionaries. You can find other ones too. But that's coherent. Now, this is the thing I say that drives some people up the wall. I would argue that there's nothing more fundamental to ethics than who your moral agents are and who your moral patients are. And since I've already demonstrated that people have different ideas about who's a moral agent and who's a moral patient, that implies that ethics itself is also created by society. In fact, I would say that having ethics is almost identical with the creation of a society, the sustaining of a society, right? It's the definition of a society. And one of the nice things about that is that it explains why so many of the things we consider ethical are what they are. Some of them clearly involve near-universal things like theft and murder. But some of them have to do with what we wear, right? What clothes we wear and in what circumstances, right? When you're willing to get naked, you know, like in the parks here in Berlin. That doesn't happen in the Midwest of America, right? But defining your identity, having ways to define your identity and signal it with your music and your culture and your language, is one of the things that sustains a society. And so there is a moral component if you want to keep your society together, right? All those things matter, all right? And so, yeah, I think I'm going a bit slow. There's no clock back there either. All right, and I already mentioned this, so I'll keep going. Right, so concepts like responsibility and intention are things that allow us to co-construct who we are, all right? These, again, are not things that science discovered. They're ways that we think about how we relate to each other. So the humanities discovered them. They're useful concepts.
I'm not saying that they weren't discoveries, but they're not something you go out and do an experiment about, okay? So governance and enforcement are public goods, right? This set of responsibilities, this idea of what we do and how we enforce these things: these are all forms of public good. And this is a little tag I add in case there's anybody from industry watching right now: this is why we pay tax, right? Or should be paying tax. We need to find ways to help make sure that the system is working and stable, all right? And that includes giving up some revenue to invest in infrastructure, all right? All right, so I'm now going to go into a little bit about how intelligence is changing the world, coming to the security and power things. If you believe, and this is kind of different from what I said earlier, that intelligence is something you can take apart, so that there's action, there's perception, there's motivation, there's memory, there's learning and reasoning, right? If you can take all those pieces apart, and if you get a big textbook on AI, it'll have those in different chapters, right? Then I would argue that every machine, and especially writing, can be thought of as an example of AI. These are all ways that we have used artifacts to drive action in an enhanced way from our own motivations, okay? So by this definition, there's been something like 12,000 years of AI; writing is 10, 12,000 years old. Now, you might have heard of this problem, the intelligence explosion, or superintelligence, right? There's a concern, actually I. J. Good said this was great, that it would be the final invention, because machines will then invent themselves and we can just stay home and drink or something, I don't know. Whatever we're doing now, right? But anyway, Nick Bostrom made it sound like a bad thing when he wrote Superintelligence. So there's this concern that if you have machines that learn how to learn, there will be an exponential increase in their capacities, right?
And that's one of the worries; and now we can all read exponential curves, right? So I made this argument that we already have AI. My argument is that we, together with AI, are the intelligence explosion. And here's my exponential curve, okay? It's an exponential on an exponential: look at the y-axis there. There's an exponential y-axis, and the curve is still going up exponentially, right? Except for the occasional plague, right? So this is amazing: 10,000 years ago, there were more macaques than hominids. The monkeys, which evolved more recently, were doing better. Now look at us, right? Okay. So one of the things that Bostrom said when he said that this is going to be really scary is that there will be unintended consequences. So even if we designed the AI really well and we know what its goals are, and it's trying to do the goal that we thought it was doing, because we're letting it think for itself, weird side effects might happen. Like a planet might be turned into paperclips just because we asked the machine, the AI system, to organize a few offices, okay? So where are the unanticipated consequences? Well, I would argue that that increase in the number of humans has been accompanied by a decrease in the number of every other large animal, okay? And there's lots of documentation of this, including more recent papers, but this one has the prettiest graphs. And you can see, as humans got to the different continents, there were these big drops. Interestingly, there's actually more biomass now than there used to be, but it's almost entirely things we eat and play with, in terms of large mammals at least. And if you have trouble thinking about exponential graphs, here's the XKCD version from some years ago. I think this is terrifying, right? The dark gray in the middle is all the land mammals that are humans. And then the lighter gray is all the stuff we eat or play with, right? And the green is all the wild animals left.
There are a lot fewer elephants now than when this was drawn, right? The data, I think, was taken from the noughties. All right. So I would argue that one of the unanticipated consequences is a challenge to our sustainability. And another one is inequality, right? These are things that we didn't really mean to have happen as a consequence of us doing well, but they did, right? Let's talk a little bit more about inequality. So this is a graph from one of my collaborators and some of his collaborators; the one I'm working with is Nolan McCarty. And it shows, I'm sorry, this is American data, but we do have a lot of this for other countries too, that the level of political polarization, which we're all worried about right now, is correlated with inequality. And, let's see if I remember, the dark line is the inequality: it shows what percentage of income is going to the top 1% of earners. That was extremely high before World War I, right? And then it went down, and then it came back up again. All right. So why was it so high? If I went into more graphs, I could show you that it was coming up at that point. What was bringing it up in the late 19th century? Well, we're not sure. This is work in progress, and I'll show you some more of it soon. But we think it might be that when you introduce a new technology that reduces the cost of distance, you naturally get fewer winners, and those fewer winners get all the money, right? So if we were going to go out to the pub, which we can't do, and there's hardly any of us, we wouldn't go to the best pub in Berlin, and certainly not the best pub in the world. We'd go to a pretty good pub that's nearby, right? But if all of a sudden everybody can send their beer on bicycles all over Berlin, it may be that fewer pubs are going to win, right? And then you have fewer pubs getting more money, all right?
So people have been going around on the talk show circuit talking about the great decoupling in 1978, when wages plateaued but productivity kept going up. And what I love about this graph is that it shows there wasn't a great decoupling, or there sort of was, at that angle where the inequality starts going up again. But there was a great coupling. And this is the United States, so what happened was, after one World War and one financial crash, enough of the elite signed up with the proletariat that FDR was able to bring in the New Deal. And that brought things down: first they brought down the political polarization, so you can see the political polarization of the Congress was brought down first, and then they were able to bring down the inequality. The rest of the world got to that point in 1945 when they signed the Bretton Woods Agreement and said, okay, we can't allow this kind of mayhem to happen. What happened after World War I did not work, right? We need to make sure that we don't have transnational wealth extraction, for example; that's something that causes lots of inequality. And so that was banned for a few decades. Unfortunately it's been re-innovated. So as I mentioned, if we're going to solve these problems, we need not only that most people notice there's a problem, but some powerful people as well. Not everybody has to realize it, but enough that we can get a coalition together and fix things. I'm going to skip over that part right now. All right, so why would this be? We know that there's a correlation, but we don't know why. I'm doing some work on this; it's not even peer reviewed yet. But anyway, we have a mathematical model trying to understand this correlation, and we make a couple of assumptions. One is that, as I said before, when you have diversity, you have a higher expected outcome, right? So you know that diversity is a positive thing and you're more likely to do well.
But, and now this is the thing we're adding that hasn't been done before, we're also assuming that with more diversity, you don't know what's going to happen. So even if the expected outcome is higher, the variation in the outcome is wider. You might do really well or you might do really badly. Whereas if you cooperate only with people like you, then you have a better idea of what the outcome is going to be, even though it's a lower outcome. And so what happens then is that if the economy is doing badly, if things are going badly for you and you don't have enough wealth and you can't keep your family together, then you reduce the amount of risk you're willing to take, even though you're lowering your own expected outcome. So there's this cycle, this declining cycle. And the figure here, I don't know how clear it is. We started the model out at sort of 50-50, so the y-axis here is the probability of success when you work with the out-group; we're assuming working with the in-group always succeeds. And then what you're seeing is that when there's a decline in the quality of the environment, so when you're less likely to be able to support yourself if something goes terribly wrong, then you get this conservative push towards only interacting with the in-group. So we're calling that polarization. The interesting thing is that if the environment gets really bad, then you know that just working with other people like yourself is not going to be enough, and so you gamble for redemption, and then you do get this re-collaboration again at the end. Anyway, you can go read this on arXiv and see if you think it's right. Hopefully we'll get it into a journal at some point. So I want to say that these problems of sustainability and inequality do not only happen to humans. And they don't actually only happen to apes either, but again, this is my short version and I love the picture of the apes. All species have these kinds of problems.
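A minimal sketch of that trade-off (toy numbers and a decision rule of my own devising, not the paper's actual equations): out-group cooperation has a higher expected payoff but a wider spread, so whether an agent risks it depends on whether the worst case is survivable.

```python
IN_GROUP = 1.0                   # safe, known payoff from working with people like you
OUT_MEAN, OUT_SPREAD = 1.5, 2.0  # diverse partners: higher expected payoff, wider spread

def chooses_out_group(wealth, environment):
    """Gamble on diversity only if the worst case is survivable,
    or if the safe in-group option can't save you anyway ("gamble for redemption")."""
    survives_worst_case = wealth + environment + (OUT_MEAN - OUT_SPREAD) >= 0
    survives_in_group = wealth + environment + IN_GROUP >= 0
    return survives_worst_case or not survives_in_group

# Same agent, declining environment: cooperate, then polarize, then gamble for redemption.
for environment in (0.5, -0.5, -2.0):
    print(environment, chooses_out_group(wealth=0.6, environment=environment))
# prints: 0.5 True / -0.5 False / -2.0 True
```

The middle case is the "conservative push": the environment isn't catastrophic, the in-group still keeps you afloat, so the risky-but-richer out-group gamble gets abandoned, which is what the model labels polarization.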
It was this huge thing when Jane Goodall first documented chimpanzees having a war, and everybody thought, no, no, that's something about humans. It's like, no, unfortunately not. So chimpanzees can be living together for a long time, they're successful, they split up, and then they fight and one troop wipes out the other. And this has now been documented by lots of people. And it's explained, again, by there not being enough resources. That's when the chimpanzees start attacking the other chimpanzees: when they don't have enough food for their own troop. So, but what about people? People don't only split into two groups. And I think this is what's really interesting about us. So let's say, these are not gendered, incidentally; there are no secondary sexual characteristics drawn here. Okay, so if you're lucky, well, we all have parents. Okay, so hopefully you have a family. You certainly have neighbors. You may not have noticed them much recently, but now all of a sudden we are localized again. So you have neighbors. If you're lucky, you get to hang out with them. You hopefully have coworkers. And you may have other people that you hang out with too. So human identity is more complicated than chimpanzee identity. And there are a lot of different ways that we can take these things apart. And I think part of what's going on, in fact, one of the things people argue, is that political polarization is the wrong name; you should be thinking about identity politics. People choosing or picking an identity and saying that, oh, my religion or my ethnicity or the length of my last name or something is what matters. It may actually be a survival strategy that we had for dealing with hard times, unfortunately, by fighting and going for the resources when we felt like there weren't enough resources. So this is like the opposite of public goods, right? If you can, you want to build up public goods and allow more people to survive.
But if you don't see a way to do that, then you start competing and you pick a team. So that's the basic idea. All right, so what should we do now? Hopefully, of course, we're going to be producing public goods and making it so that we can all get through this to the other side. And not just this, all kinds of this. There are a lot of crises going on right now. There's a lot of focus, of course, understandably, on the pandemic. But don't forget about the assaults on democracy and, of course, the climate crisis. We can't forget about these things that are making it very hard to do what we do. So, yeah, society is richer than it's ever been before, right? There was a prediction in the early 20th century that people would only work sort of 15 hours a week or something by the year 2000. And actually, if we wanted to live like the average person lived in 1920, we wouldn't have to work more than a couple of months a year, right? But nobody's happy with that anymore. We like air conditioning. Oh, maybe not in Berlin. Sorry. Computers, computer games, televisions, health insurance. Again, you don't have to pay for that here. Okay, so one suggestion that people have made is that we redistribute all that wealth through something like basic income. And one concern some of us have, and I'm one of those people, is that if we just do that, then we lose some of those links I was showing, those interactive links. A lot of people have talked about the value to yourself that you feel when you have a trade, when you have something that people value. But I think that, again, that value to yourself is really the recognition of your value to others. That if you have wealth, you want to employ people. If you have a skill, you want people to employ you. And you can do both, of course. And so it's one of the things that creates a fabric. Now, of course, there may be other things that create the fabric, like the localization.
But we've been able to ignore a lot of those other things. Now, Guy Standing is one of the people who have argued strongly for basic income, and he managed to find funding, and other people found funding, and they've done some experiments, and they've shown that in communities that got basic income, it really did bring those communities together. And I think you have to take that seriously. On the other hand, when it is only one sub-community, you have to realize, of course, that that gives the community an identity. So it's not surprising that that might bring them together. And there's also the concern about the longer-term consequences, the kinds of curves we saw after socialism, that would come through entitlement. Although doing well for a generation is always a good thing too. The other big concern is that people aren't seeing that we already are getting a massive basic income. We have all these nice things, not just the theaters but also the streets and the personal security, and that you can walk around without a sword. As Hobbes said, you would never leave the house without a sword, would you? So we no longer walk around with swords, and we have nice roads and we have people to collect our garbage and we have water that comes into our house. All these things are being provided as a basic income already, but they are differentiated. Some people get more help than others. Some people really cannot get through life in contemporary society without additional help. And so that is another problem with this model. So let's talk a little bit about government. What is government for? Again, in the tech industry, a lot of people are not sure. But I was fortunate to hear this one guy lay it out. I probably say his name wrong, he's French. In fact, apparently when the EU was like seven guys in a room, he was the French guy. So anyway, he defined it this way. Look, the first thing a government does is pick problems to solve. The second thing is allocate resources to solve that problem.
And then the third thing is stabilize that solution. And he was very concerned that that third thing was what got disrupted after 2008. Both corporations and countries have governments, right? There's a lot of discussion with AI about self-regulation. Yes, absolutely, regulate yourself as a company. But we have governments to coordinate all those other infrastructure things that we've forgotten about, that we ignore: the streets and the health insurance and things like that. So we want governments to help us police our sectors, and we want our corporations to help us police our governments, right? We want to try to combat corruption in every way we can, right? It's a cooperative effort. And every aspect of good governance makes these interactions easier. So what I told the Norwegian radiologists at this point was: hey, you guys don't have a problem. Norway has, at least historically, been the go-to example of how not to experience a resource curse, how not to get that huge amount of inequality, right? So when they found oil, they said the government is going to invest this and it's going to benefit all the people, right? So wealth is distributed, it's not captured by an elite, at least historically, right? And I'll show you some statistics really quickly. This is again slightly confusing: high inequality is far out on the x-axis, and the y-axis is how many health and social problems you have. And again, this is 2011; I think you would find the UK way up on health problems and inequality right now. But anyway, here's Norway and there's America and here's Germany, doing pretty well. So inequality was increasing. I didn't mention this, but actually you don't want inequality to be zero. Economies don't seem very strong when everybody has exactly the same thing.
We do seem to like to have some kind of motivational pressure, or, as I mentioned before, the people who do a little better having more assets so that they can help other people. I don't speak Norwegian, but apparently the blue line is the number of people in Norway who think they pay too much tax, right? So they're happy with the arrangements, with being a high-tax, low-inequality society. In contrast, my other, my adopted country, in the 15 years running up to last year, has just had equality absolutely plummet, and London just kind of missed it. I saw weird things happening, but it's hard to put all the pieces together. I lived in the Southwest in a few towns like Bath, and I think it's called Chareau. There were a couple of places that were super shiny and all the hipster stuff was coming there, but lots of towns were just collapsing. And again, this is about assortment: all the people with money go to the pretty places and they leave a bunch of places behind. So anyway, just the hunger, the people... you know, I have a friend in Wales who says they can't even keep the schools open. Of course, again, right now nobody has their schools open, but they were having trouble keeping their schools open five days a week. They were cutting back to four and even three and a half days a week in the schools in the rural parts of the country. So this is the real tragedy of Brexit. And the interesting thing is it's not regulatory capture by oil, but apparently by the financial sector. But it still has the same kind of consequences: if all the rules are set up so that you don't have taxes to pay, then you can't take care of people adequately. All right. So what should we do? I think I sort of hinted at where I'm going to go. I've only got two more slides. I'm probably going too slow. I'm sorry. It's a weird audience. All right. So AI will make many of us, at least, better at doing our jobs, just like the Norwegians. It already is.
This is like, why am I talking about the future? We've been doing this since the 90s. I can spell now; I didn't used to be able to spell. So we're all getting better at doing our jobs. That could mean that there will be lower entry barriers to doing a job, and that can result in lower salaries. All right. So whether the benefits from these kinds of things get distributed isn't anything to do with AI. It's about governance. And that's why I went to a governance school, because I realized, if I'm talking about AI ethics all the time, that's where I'm actually going to find out how to solve the problems. Successful governments right now fall under huge propaganda attacks. And that's why I still think the crisis of democracy and governance is one of the biggest ones that we're dealing with right now, because without effective governments, it's hard to solve the other problems. I don't even need to name the countries. So I'm just setting up as a hypothesis that some powers don't like the tobacco regulation, or money-laundering regulation, or the strength of NATO or the EU. And maybe they don't like the fight against climate change, that we have this almost global agreement that we should do something about climate change. Have you seen this graph before? Do you know what this is? You're such a small audience. No one's going to volunteer anything. They're all just sitting back there. All right, I'll tell you what it is. It's the expected change of GDP as a consequence of global warming. All right. So when you are pushing through policies that are going to increase global warming, accelerate climate change, who's benefiting? All right. So that review is in the Technology Review from a couple of years ago. But this line is not; this is a separate thing. One definition of war is dismantling your enemy's infrastructure such that they lose autonomy with respect to you. That is, you get to have your way, they don't get to have their way.
And I wouldn't be surprised if 100 or 150 years from now, people treat this period as a war that was already going on, because we are doing these kinds of assaults. So, last slide. AI is not magic. It's something that people make, and it's something that we change things with. We author AI, and we author the laws that govern it, and we author the laws that govern our economy. Humanity and society cannot be replaced easily. Remember that one slide about dissuasion and justice? It goes beyond that. It's aesthetic. So what does it even mean, how can we have machines value the things we value? Everything humans care about comes down to a bunch of apes trying to survive on the planet. So it's not sensible to think about machines replacing us in those endeavors. Employment may not only be about meeting basic needs, as is well established, nor only about specialization and redistribution, although that's important too, but also about generating social connections, diversity and broad security. And although I expressed skepticism about universal basic income, I do think there's real work that needs to be done on thinking more about how we share wealth through the system, given how the system provides the materials to produce the AI and other kinds of technologies that are benefiting so many people, or so few people so much. All right. And then finally, this is an implication: I would recommend that technology, including the law, should be used to increase transparency and accountability, not to confuse those, because that's the way we'll be able to get on top of this problem and convince people to come on top of the problem. So we really need to be working more on helping people understand the system. All right. So thank you for your attention. You need that clap track again here. Oh, it's not my laptop session. Look, I was going slow. How much time did I take? 50 minutes. So a big round of applause from your homes.
I hope, for this very rich and also very vivid talk. Thank you so much. Thank you. Joanna, let me start right away with inequality, actually, because you drew this big, great historic arc, almost, from the inequality of the late 19th century brought about by technology: by railroads, by the news industry, by the telegraph, technology like this. And then you stated what reduced this kind of inequality. You jumped to the 30s, to the New Deal, FDR, and of course it would be cynical to say, but it was also a war that changed a lot. And in Europe after that, inequality was greatly reduced, at least for 30, 40 years. Then it started to go up again, especially in Germany. I mean, those are not worldwide figures, but in Germany inequality has gone up in the last 20 years. So how do we fix this now? Because as you said, what fixed it in the past was policy, was governance, right? But where does technology come in there? And could you also say that railroads or roads or highways, autobahns, sort of solved some of those problems, bringing the goods to the people in the regions and so forth? So what's the mix there between technology solving the problems it brought about, and policy? Right. Well, let me go back. There are some really interesting points you brought up there, including the war. I know I'm not supposed to talk about the war. No. People have written books and books about how you only drop inequality when there's a war. And I don't think so; again, that's why I love that graph of the United States. The British brought down inequality at about the same time. The interesting thing was that they were both implementing a German model. So the Germans did do some very interesting things immediately after World War I. But a lot of these people had been talking about the problems of governance increasing in the late 19th century.
So they were already starting to look at the social welfare programs for solutions. Again, Germany was working on that too. So I definitely grant: when there is a huge destructive war, then, and if you remember, there was a spike in the graph, the rich lose a bunch of stuff. That's part of the reason that they sign up. But at the same time, maybe I'm just being optimistic, I don't believe a war is required; it may just be the dramatic place where you can really see the inequality coming down. And I should say, most of the world has become more equal. We've done an incredible job since 2000 of reducing inequality globally. And that's not just China, and it's not just China and India. It's also a lot of Africa, most of Africa. We've moved like 5 billion people. So again, not just China: 5 billion people have been moved out of extreme poverty in the last 20 years. However, the OECD countries have mostly been increasing in inequality. And again, you get this small number of actors who have a lot of power, who don't need a lot of coalitions, and then you get more entropy. But I wanted to say something. Oh, France. France actually has... This is not just Macron. This is like 15 years ago: the inequality was building and then they brought it down, but they still are getting social disruption. Maybe that's just how they are. But maybe it is, again, the kinds of political thoughts that we see. Or maybe there are still problems. On the other hand, China has massively increasing inequality, and yet total social stability. So it seems that... And this was true in our model, too. So in our model, inequality does increase the polarization too. I showed you that if the economy goes down, polarization increases. Inequality does the same, but only when the top gets all the assets, gets all the money, so that the bottom are no longer able to survive, basically. So then they're in the same situation as in a falling-apart economy.
Germany and China both seem to be examples of places where the bottom have been brought along well enough that, even though the top is moving away, they're still relatively supportive of their governments. But it is a dangerous situation, because if you can't keep doing that distribution, then you might not be long-term stable. You mentioned those four or five billion, I'm not sure, of people that have been moved out of extreme poverty in the last 15 years. Now, if you take that as an example again, where does AI technology come in there, and what is due to policy that's been changed? That's a super interesting question, and a lot of these things are under research. I mean, one of the things that seems to be true is that effective altruism is actually more effective than the old, not-data-led altruism. So there are a lot of people, like the Gates Foundation, like the Peter Singer thing, that go through and use data and say what's actually working. One theory is that it was the drop in charity redistribution that was going to the elites, who were then maintaining the inequality. So if you just gave money, the old Western model, to a country's leadership, then they tried to keep the poverty that was bringing in the money. And so that wasn't as effective as, say, having China come in and say, we just want to share materials, and then paying people to do the mining or whatever. That may have been a more effective thing. But my favorite possibly-good-news AI story, besides the data-driven philanthropy, is just having information. So mobile telephones, and people being able to know the weather, you know, and what the value of the products that they're growing is. Again, that information had been cut off from people, but information technologies have become cheaper and more ubiquitous.
Again, that's not so much AI, although predicting the weather is an incredible exercise involving enormous numbers of AI models, right? Again, something people don't think about, but it used to be done without AI; now it's mostly done with computers and AI and things like that. So anyway, getting the devices out to the people, you know, they hopped over and never even got landlines, but with mobile phones, you get one mobile phone per village and you change hundreds of people's lives. I think it's very great, listening to you, how you actually are able to change some widespread misconceptions some of us have about AI. And yet I do have to ask another question; maybe it touches inequality. You know, the problem is not jobs, you say. AI does not actually cause job losses; it causes wage loss, right? It's wages, not jobs. So that's one thing. I can relate to that. But the other question would be: what can AI actually do to solve the wages problem? Is there anything it can do? Are we on the policy side again there? Or is there nothing we can do? I think that... Well, I mean, like I mentioned, again, it's not AI itself that's acting. It's us who developed the technology industry. I'm anthropomorphizing again. But yes, like I said, the radiologists were actually making more money because there was more demand for them, because they were more effective because they had AI. So there will be industries and disciplines like that, where that will drive wages up. I do think that we need to be thinking hard about... And maybe, like I said, universal basic income isn't quite the answer. But again, Germany does a really great job of investing in the arts, making sure... I don't know if all of Germany is like Berlin. I used to live in Mannheim too. But having all the little shops, the boutiques, it's very hipster here. In most places of Berlin, but not all. Yeah, no, that's true. That's true. Not every part of Berlin, but there's at least...
It's bikeable. It's bikeable, that's true. We do have a participation tool, I forgot to mention it at the beginning of our conversation, called Slido; there's a hashtag, Digital Society. I have a bunch of questions here that I really would like to pose, but it's your turn. So if there's no question, I'm going to go on. There are a lot of questions now. I think we start mixing them a little bit. Christian, is there anything on Slido for us? Yes, there are a lot of questions, and we apologize in advance if we can't pose all of them. I will start with two questions regarding AI definition and AI discourse. So one question from Martin Schmidt was: what difference is there, for you, between AI and the computer? The bank teller ATM example is a good example of computerization. Should I bring in the next one? They're related, yes. Or if not related, it comes from a bit of a different direction, but it also goes into the question of AI discourses. There was a question: why do current discourses tend to discuss AI as objective and neutral, as opposed to human-made and flawed with all the biases of humanity? Okay, all right. So those are different; I'll start with the first one. So why do I use such a simple definition of artificial intelligence? Well, there's a long history of this. Apparently, John McCarthy was the first person who said that thermostats are AI, and by the definition I gave, plants are intelligent. They detect light and they grow towards it, but they're not very intelligent, right? Why do I make it so broad? It's because I find it the easiest way to talk about the moral obligations and to be clear, basically. We can talk about systems that are able to learn, and learn to learn, and we could call those cognitive if we want, but we still understand that these are systems that have been built and developed.
I think at this stage you will not find digital technology that doesn't have AI in it, even if by AI you mean machine learning or something like that, the adaptive capabilities. Again, it is ubiquitous. It is everywhere. And so it's actually hard... The OECD people, they've got a job to see where AI has spread and whether its uptake is sufficient. Good luck. It's going to be harder and harder to differentiate and tell where the AI starts and the computing stops. And so that's why, from the ethical perspective, I find it more useful to really focus on software, on software development, on the clarity of who did what. What person did what? When did the person decide that it was able to go out? Did you follow due diligence? Just like any other manufactured artifact. It's not trivial, but there are perfectly good means to do these things. In fact, in the automotive industry, when they build AI, they follow pretty good DevOps, which is development and operations. They can demonstrate that they did the right things. But that's because they already had lots of obligations for making sure things work; the airline industry too. Now that we know that social media can make governments fall, we need to start treating it more like a critical system. And now that we know that search is kind of a social medium, that it has huge impacts, the ordering of results and things like that, then we need to set up a system for checking these kinds of things. So anyway, I am using this broad language for that reason. Now let's go back to the prejudice, why you don't get this neutrality. All technology, I mean, again, you look at sociology: all technology expresses, to some extent, the power structures. Although, again, we just talked about how the mobile phone completely overturned the power structures. And Slido is overturning power structures. People not in the room are able to ask questions, right?
Although maybe not completely overturning them. Maybe the upvoted questions are the ones that use better English. So there's some opportunity cost of where you happen to have been born or what education you got. So it's difficult to entirely throw over every structure, but they do change and warp. On the prejudices of artificial intelligence: one of my other 2017 papers was the one in Science that showed that sexism and racism are uploaded with language. And actually that paper was listed not under AI, but under Cognitive Science, because what that really tells us is that we all absorb our history when we learn words. The fact that the machine is learning the words isn't somehow that different from people hearing the words, right? So that stuff, again, it's embedded in the word "programmer" that most programmers are male. Humans will come in with that set of expectations just like the machine will come in with those sets of expectations. But when we have technology, one of the metaphors I like to use is that it's like the difference between having your own children biologically versus adoption. Certain kinds of people get really angry and say, why should AI be held to a higher standard? So for example, don't let driverless cars drive unless they're a hundred times safer than a human driver, right? People say, well, why? But it's exactly because we've gotten to this level where we've written down the instructions. We didn't just sort of socially evolve and intuit and biologically evolve into the situation. Suddenly we're in a situation where the government's involved, corporations are doing things deliberately, and so of course they can be held to a higher standard. And so with AI we can go back and do a better job.
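The measurement behind that Science paper boils down to comparing cosine similarities between a target word's vector and two sets of attribute vectors. A toy version, with made-up three-dimensional vectors standing in for real embeddings (which have hundreds of dimensions learned from text), looks like this:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive means the word leans toward A; negative means toward B."""
    mean_a = sum(cosine(word, a) for a in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(word, b) for b in attrs_b) / len(attrs_b)
    return mean_a - mean_b

# Made-up toy vectors, purely illustrative, standing in for real embedding rows.
male   = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]  # e.g. "he", "man"
female = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]  # e.g. "she", "woman"
programmer = [0.7, 0.2, 0.3]                  # a career word absorbed from biased text

print(association(programmer, male, female) > 0)  # prints True: toy vector leans male
```

With real embeddings trained on ordinary web text, this kind of test recovers exactly the associations the speaker describes: the history embedded in the words comes along for free.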
And a lot of people are saying they are. Again, I can't get this documented, but when I go and talk to people in HR departments, they're saying: with AI we're getting the kind of diversity we've been failing to find before. And the reason they won't go on the record about that is that they're afraid of getting sued, because they didn't see the diversity that was there in the entry pool before they trained systems to help them find it. People who wanted to do the right thing just couldn't. They couldn't flip through and shut off their brains, but with AI you can carefully program shutting off parts of the brain. If that makes sense. Please go on with another question before I pick it up. Maybe we follow up with the questions about inequality and diversity, which you already mentioned. So there was one question from Es Salman: can you weigh the value of diversity against the bias problem? Can you please highlight this point, and how to convince decision makers that this is important for our evolution? Well, all I do is that when people ask me to do things, I try to go. But that is why we're trying to get that paper done. Again, that was something that seemed really important right now. But yeah, Fisher's fundamental theorem of evolution has been known since the 1930s. And unfortunately, again, because of things that happened in the 1930s, you can't talk about biology at political science meetings very much. But it's getting to the point... I've literally been told once in a political science department: you can't bring in psychology here, because psychology involves biology and biology involves genetic determinism. Like, what? But no: what we do does matter, incidentally. I'm not a determinist. And I've now lost myself in the weeds. Where was I trying to go there? Oh yeah, all you can do is just try to keep doing what we are doing here, everybody.
We're all playing on Slido and keep getting the information out there. I guess you could join a board and try to make sure that decent biology is taught.

I think that's a very interesting point. I'd like to take it a little bit further, or try to, when it comes to diversity. AI is very good at actually increasing diversity and cutting out the bias that is in language, because language reiterates what's been iterated before, all the time.

Wait, I said that people have been successful in reducing the bias with AI. Remember, it's just a tool, right?

Yes, but it can be fixed with AI, if I understand you correctly. If programmed right, it can be fixed or engineered more easily than what can be done with language, because engineering culture is much more difficult than engineering a machine, I would assume.

Well, a machine is an example of culture. But the point is that yes, if those are your priorities, then you can actually use AI to increase diversity and try to reduce bias. You can absolutely also use it to increase bias, and that's why one of the things I'm trying to communicate is that we need to be able to see the documentation. This is standard if you're building software: you have documentation about why you built things, where you built them, who wrote the code. So if there is something that's giving an unfair result, you can see: was that what was paid for in the first place? Was that introduced by a rogue hacker? Was that introduced by somebody evil that you didn't know was bad, a bully that was working for the company?
I mean, you need to be able to see that. People sometimes ask, aren't we going to need a Hippocratic oath for programmers or something like that? I think that's the wrong metaphor, because yes, programmers are professionals, but we don't have a one-to-one relationship with them like we more or less do with doctors or lawyers, so those aren't the professions to think of for liability insurance or whatever. I think it's much more like a bank: there can be a rogue trader who loses a lot of money, but hopefully both the trader and the bank get in trouble for letting the trader do that, and then the trader goes to jail if he did something illegal, hopefully. Or similarly, the architecture metaphor: it isn't just the architect themselves, and it isn't just one builder or planner or whatever. There's a whole hierarchy of people trying to make things work, there's the government inspecting things and giving planning permission, and there's licensing of architects before they can even get out of school. So those are the kinds of models we should be moving software toward, that level of professionalism, more like architecture. I guess I shouldn't say more like finance.

Oh, that's like your call for transparency, as I understand it, right? In engineering, in AI engineering, so to speak. There's another question, maybe it's a problem of definition, I'd like to ask you about. When we talk about diversity, at least we have the possibility of doing it better. There is the possibility, I know it can be used in all kinds of different ways, as you just explained. But you also talked about the correlation of polarization and inequality, and then you said that identity politics is sort of a fight for resources in that climate of polarization and inequality. So where would you differentiate identity politics, which is something that comes
quite close to the idea of diversity, right? That this is actually a fight that has to be fought by many people in order to broaden the conversation.

Right, okay. So again, this is an area we're currently researching. Some things I can speak on with a lot of authority; this is something we're still working on, so I just wanted to say that. But one of the things that at least a couple of papers have shown is that when you have high levels of inequality, then you often have large single donors giving a lot of money, and especially the most extreme politicians tend to be getting a lot of money from large single donors with so-called eccentric views, socially unacceptable views, right? And so the paper basically said, oh, the politicians are absorbing the eccentric views of the donor, and that's why high inequality leads to polarization: because they're being pulled out toward this eccentric view. Okay, I think we could actually go a little further, and again, this is me with my theoretical biology head on. If you have an enormous amount of money and you want to sort of buy a politician, how do you know you're the only person who bought that politician? Right? So you need to get that politician to say something absolutely insane on television, you know, like, oh yeah, it makes sense for Russia to take Crimea or something like that, so that you know whose team people are on, basically. And obviously I shouldn't keep picking on Russia, there are a lot of problems in the world, but the point is that getting people to say very socially unacceptable things is a means for them to show that they really care about their team membership. So I think it isn't just chance that they have these eccentric views, but that is a test of loyalty
that the very rich people are using. And then for the other people, identity politics comes down to: look, I can pay this huge penalty to prove that I'm on this team, so you guys please don't leave me behind. So people are picking the team they think they'll fit into and that they have a chance of winning with. I think that's what's going on, but like I said, this is speculative, you know, informed, but speculation.

Yeah, okay, let's go back to Slido before we wrap this up. Some questions about the EU? Please, Christian.

So we have another question which goes in the direction of inequality; it was one of the top-voted questions, so we have to pose it. Do we finally need to introduce an automation tax to decrease inequality?

That's one selection, one possibility, I think. So, one thing I don't like is the idea of taxing robots, because robots aren't countable; again, that's anthropomorphism. I can look in this room and see exactly how many people are in it, right? If it was a bigger room, I'd have to count them. But with a robot, if you write a tax on automation or robots or whatever, I can then, as a programmer, change the robot so that it evades the tax. That even happened famously with the window tax: there was a tax based on how many windows you had in your home, and people changed how many windows they had. AI is way easier to change than homes, so I don't think that's the right way to think about it. I'm in favor of wealth taxes, where you see industries that just have insanely more money than others, just sitting on their books, on their market capital. In fact, I'm working now a bit with the European Commission on this stuff, like recognizing massive market dominance. I mean, it's not that hard. And then say, okay, when there's massive inequality, then there's inadequate governance. It is the job of the government to make sure that you solve problems for a community, and you're not
solving the problems adequately if some people have nothing and some people have an awful lot. So the best thing I've seen, well, there's a digital tax that people are working on here. I don't think we should pick on digital; I think it should be a general transnational tax. We didn't quite get on top of the petrochemical industry, which until what, 15 years ago, had the top market cap, and we obviously still have issues with finance too, and pharma. There are a lot of organizations where it basically makes sense that they're selling stuff all over the world; they don't need to sell only in one country anymore, and so a single government can't do the kind of redistribution it used to be able to do. That's one of the reasons I love coming back to the EU and the GDPR. It demonstrated, I mean, people said this was gonna be a disaster, everybody's gonna pull out, they're not gonna want to deal with this. And the only ones that did not deal with it were little tiny local newspapers in the United States or something like that; even mid-sized newspapers paid the cost to make sure they could have Europeans hitting their sites, right? Because there are a lot of people here with a lot of money, and nobody was gonna walk away from that. And then the weird thing is that Google and Microsoft, those are the ones I talked to the most, a little bit Facebook, they were saying, oh, this is gonna be such a disaster, it's not gonna work, you're just driving people away. And then they're like, oh, this is actually great. We can now do business with the whole of Europe much more easily; there's like a single API to the whole of Europe. And it's like, what did you think those guys were doing? The EU does things slowly, but they do them well. They were not just trying to cause trouble; that was part of the whole package that actually made it easier to do digital business.
There were clear rules. Another thing: I've had Chinese colleagues say to me, oh, we love the GDPR, it tells you exactly how much it costs if you do the wrong thing, you know? Some people get really angry when they hear that. They think, we don't want to have a price on our privacy or whatever. But the point is, that's the point of arms negotiations: not to prevent all war, but to clearly communicate how much people care about things and what penalty they'd be willing to pay, to specify that so people understand what does and doesn't matter. So I think the EU did a brilliant job there. Now they've been blocked from doing the digital tax, which I think is a mistake on the part of the people blocking it, because again, it would make everything work better if we could go forward with this easily. But Ireland and Luxembourg are not gonna agree. And the EU is great for EU things, for Europe, you know, geophysical things. But what if we just put together another coalition of the willing? Include maybe Malaysia and Indonesia, and we could get a lot of people, maybe all of Africa would sign up to this, and say: if you want to do work here, then you need to have some kind of proportional redistribution, such as the Germans and French and British have been suggesting.

Thank you for staying with us so long; we're not done quite yet. But since you're with us tonight, I hope you're with us somewhere, I think one very last question from Slido or Twitter before we wrap this up for good.

Yes, we have one last question, which already goes in the direction of the last question Tobi Müller always poses in this series. It goes as follows: does AI spread a US-biased cultural value perspective? Is AI the new round of cultural imperialism?

Well, I guess it depends what you mean by AI.
It has so far been facilitating those kinds of perspectives in some places, but I think Big Tech is recognizing, again, that they want to do business all over the world and that they want to find ways to make this work. And let's not forget that there are a lot of other people using and creating AI, so we can get overexcited about these really large companies that we hear about and forget that the Eurozone alone is one of the top three economies globally. At the end of World War II, the GDP of the United States was greater than that of the rest of the world combined. That's just nuts, right? It was amazing. And then we were raised hearing, oh yeah, the Cold War; it's like, it wasn't fair. That was a very unfair war, the Cold War. Anyway, now the situation is that the combined GDPs of China, America and Europe are more than 50 percent. So that's a significant shift. And again, this might be about digital more than AI, but we are equalizing now, and that is part of what's creating a lot of this disruption. But anyway, the point is that the EU itself, and clearly a lot of this is a digitally led economy, is doing well, it's doing fine, and it's doing it with a patchwork of lots and lots of small companies, just like the EU itself is a bunch of smaller countries, right? So I don't think it has to be that way. Of course, there are consequences of everybody using Facebook, or everybody using Google or Bing. You make Microsoft people really happy if you remember to say Bing. But there are people using, well, not many. I think there was a surge in people using WeChat, and then people sort of figured out that they're afraid of the surveillance aspects of that. But China is now, I think, the largest trading partner of the vast majority of the planet. So it's hard to say that there's only been one consequence of that. And America is a fusion of an awful lot of cultures, and I think it will continue becoming more diverse. So.

Thank you, Christian.
Well, most of the questions I wanted to ask about the European Union and the GDPR, the General Data Protection Regulation, have been asked, and where they've not been asked, you've covered them in your answers. I also wanted to ask you questions about schools: do we need more efficient teachers, or twice as good schools? Do you think the pandemic is going to be a digital push in that? But I really would prefer to take the bird's eye view for the very last question. Maybe it is stupid, maybe it is too simple, maybe it's too broad, but let me try it anyway. We've talked so much about ethics, about moral agents and patients, and the role of humans. And if I understood you correctly, you say, well, our role is the same as ever. We have obligations to ourselves, to our families, to our friends, to our neighbors, be they nations or the actual neighbors where we live. So would it be okay to say that we don't need specific AI ethics, but just ethics?

Yeah, I think that's definitely what it's coming to be. I wouldn't say that our role is exactly the same as it's ever been, because we do have so much more power and so much more knowledge. You know, 2,000 years ago, well, somebody made the Sahara, in theory through agriculture, but they weren't able to figure that out. We might be able to stop ourselves from making the whole world into the Sahara Desert this time. But yes, AI ethics is just ethics, right? AI is just an expression, as I said, of all the technology we use, and the fundamental problems are the fundamental problems.

Thank you so much, Joanna Bryson. Thank you for being with us, for your excellent talk, for answering all those questions. Thank you for watching us. It will be online, I think, about a week from now, if you wanna check again. See you next time, have a good evening. Thank you, interpreter, also for bearing with us for such a long time.
So a big hand of applause for everybody. Thank you. Cheers.