OK. So, everywhere I go, at business conferences, I sense the feelings of technological euphoria and socio-economic doom. And both emotions are accurate and valid. In the bar, I guarantee you, you will have a conversation with someone you've never met about the 10-15 year technology roadmap of your company or your project, and then discuss whether you're going to buy gold or an island to survive the coming meltdown. The reason we're in that situation is that we're in three kinds of trouble. We have an economic system that no longer works for most people. I won't go into detail; you know the problem. Most people cannot explain to their children how life is going to get better for them. As a result of not addressing it, we have evaporating consent for democracy. We have evaporating belief in the rule of law and universal human rights. We have, at the same time, something that is possible to see only if you look for it, and most people are not looking: a crisis of machine control, of technological control, of vast asymmetries of power between the people who control technology and the people who do not. And as you heard this morning, the meta-frame for that entire crisis is the climate crisis. But for me these three things, dysfunctional economy, dysfunctional politics, dysfunctional relationship between technology and society, are actually more front and centre than climate change, because sorting them out is the route to addressing climate change. David Attenborough told us what to do about climate change. The problem is that society is not set up to do what we need to do. In my book, Clear Bright Future, I try to trace the roots of these three crises. And because the book is about human beings, I talk at length about my thesis that at root this is a crisis of the self. It is the crisis of the kind of person we created in 30 years of what we call neoliberal or free-market economics: a self centred only on homo economicus, only on the economic, only on the two-dimensional worth or non-worth of every choice. But here I want to rephrase, to repose, what this crisis is about. Another way of looking at it is that it's a crisis of autonomous systems and of human relationships to them. Friedrich Hayek, the guru of neoliberalism, described the economy as an emergent autonomous intelligence, a decision-making machine that was more intelligent than any single human being. That's the theory of free-market economics: markets cannot go wrong unless humans tinker with them. Now, Thomas Hobbes in Leviathan describes the state, it's in the first line, as an artificial kind of man: a robot, an autonomous system. And if you remember what Hobbes did, he drew the Leviathan, but he basically drew Facebook. He drew a person with a head, and a body which was full of other people's heads. Quite interesting. What is AI? What is a robot? What is the algorithm that you pass through as you go through the security system of an airport? It's an autonomous system.
So this problem of human relationships to autonomous systems is old, in the sense that the Mesopotamian state, or the Greek state at the time of Pericles, was an autonomous system. Legal systems are autonomous, and we create them to operate on us against our will, in the sense that we authorise the control to something else, the legislature. But in the machine age we have been very, very sharply up against the problem of what human beings do with autonomous systems that they create and that then control them. The factory is the great example. For the first 30 years of the factory system after Richard Arkwright built Cromford, it was assumed that this new form of economy and social technology called the factory couldn't work without child labour. In fact, it was the liberals, it's always the liberals, it was the liberal bourgeoisie of Manchester who said that if you abolish child labour, the factory system will go bust. So we've always had this problem of calibrating the relationship between human beings and the systems that they create. In a way, it's the industrial era's version of the alienation problem that Karl Marx describes. We create religions, and then they dominate us. We create literal fetish objects, little idols, and then we worship them. With autonomous systems, it's just a more complicated version of that. Now, what I want to talk about in this brief segment is one of the conclusions I've drawn about the way we deal with this problem. Because the way we're dealing with it right now is to say: if we see an autonomous system, like a legal system, like an economy, like a firm even, or an algorithm that's maybe selecting candidates for jobs, it should have ethics. There should be an ethical control over it. Should robots follow ethics? Isaac Asimov had three rules for robots: that they must reveal themselves to be robots, that they must follow human laws, and that they must not collect information on us without our permission. A great set of rules for robots, but the human beings who run Facebook are not really interested in following them, and nor is the Chinese state. It doesn't solve anything. What if it's the Chinese constitution that you're following? It doesn't solve anything. I think the question should be more fundamental. It is this: should human beings have the right to control machines? That's the question, and I think we're going to face it. We've faced it in a relative way for the last 250 years, since the first factory and the first modern warship. The warship and the factory emerge in the same way: discipline and punish. We've faced it relatively for 250 years. I think the era of artificial intelligence and algorithmic control, which we are approaching and in whose foothills we now stand, will force us to confront this problem in an absolute way, not relatively at all. It's an absolute yes or no. Do we human beings have the right to control machines that can, even now, fix the elections we think we're voting in? That's the problem. The front end, the wedge issue, is algorithmic control. What is an algorithm? A set of instructions. It's an autonomous system. You go through an algorithm every time, especially if you go into JFK. You really go through an algorithm if you fly into America. Are those fingerprints Paul's fingerprints from last time? Yes: passed to the next desk. Do you have your visa? Is he a journalist? Yes. Does your face look a bit funny? Let's have a further question.
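To make that concrete, here is a minimal sketch in Python of what such a border-control algorithm amounts to: a chain of automated checks, each one deciding your fate before a human ever gets involved. Every field name, threshold and rule here is an invented assumption for illustration; no real system's logic is being described.

```python
# A hypothetical border-control pipeline: each check is an automated
# decision taken about you as you pass through. All names, fields and
# thresholds are illustrative assumptions, not any real system's rules.

def border_control(traveller: dict) -> str:
    # Are those fingerprints the same fingerprints as last time?
    if traveller["fingerprints"] != traveller["fingerprints_on_file"]:
        return "secondary inspection"
    # Does the visa cover the declared purpose, e.g. journalism?
    if traveller["purpose"] not in traveller["visa_allows"]:
        return "refused entry"
    # Does your face look a bit funny, i.e. is the match score low?
    if traveller["face_match_score"] < 0.8:
        return "further questions"
    return "passed to the next desk"

# Example: a journalist whose prints match but whose face scores low.
print(border_control({
    "fingerprints": "abc123",
    "fingerprints_on_file": "abc123",
    "purpose": "journalism",
    "visa_allows": {"journalism", "tourism"},
    "face_match_score": 0.61,
}))  # -> "further questions"
```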
Meanwhile, facial recognition is acting upon you. If you go to some venues in the Far East, a heat sensor is sensing whether you're carrying an infectious disease by taking the temperature of your forehead. As a reminder: because I had nuclear medicine done to me recently, with radioactive material pumped into my bloodstream, you do not go through one of those gates until three days afterwards. It's asking: is this guy radioactive? You don't know the number of questions that are being asked about you, but you kind of know that if you follow the rules in a fatalistic way, it's fine. In fact, when I go through security at airports now, which I do a lot, I've learned to be utterly relaxed. You see people being really tense, rushing to get there, get that fucking laptop out, stick it down. Just go zen. You might as well be in Guantanamo, because that's about as much freedom as you have at that moment. You don't know it until your freedom is curtailed, and some of my mates in journalism have had theirs curtailed at the airport. But at least you know the rules, and you kind of know that it's necessary. In the last ten years, though, large parts of business and human life have been populated by algorithmic control without our choosing it and without our knowledge. There are plenty of examples, but look at what businesses are doing to people. They collect data on the activities of the workforce in real time. So it's no longer 'how many targets did you achieve?'. It's: are you en route to Paul Mason's doorstep with this widget from Amazon right now, and are you late? Did you take the right route, and did you stop for a wee? That kind of data, in real time. And then automated decisions are taken about the worker's work, in real time, behind their back. Automated decisions. It's not Joe in dispatch going, 'You know, mate, come on, you're late for Paul, he needs this widget.' It's the system going: Paul is a high-net-worth customer; if you don't get there, you're going to be, and this is the fourth thing, fined. So suddenly work is quite different, isn't it? White Van Man is one of the least rewarding jobs; see the Ken Loach film. But in addition to all the analogue bullshit that we pull on White Van Man, or woman, there is now digital bullshit. It's algorithmic control. And of course automated scheduling is also happening. People are getting rostered on for work, say at a big supermarket, without being able to talk to a human being. You can't say, 'Yeah, but you know my kid is at this special school.' No, it just says: look, we need so many people at this time. We're dehumanising decision-making about people, and that's just the thin end of the wedge. Whether it should monitor us in real time, surveillance; whether it should take automated decisions which could be biased or inhuman; whether it should be in control of us at all: the 'should' tells you where this is going. These are ethical questions, and yet these systems have been rolled out with almost no ethical discussion in business and absolutely no social contract between us and the people who are deploying them. There is a complete vacuum of ethics in real business, in the real deployment of automated systems on people, on a scale that I think parallels the rollout of the factory system. And the factory system was done behind big walls.
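As a sketch of what that kind of behind-the-back, real-time decision rule looks like, here is a hypothetical Python fragment. The fields, thresholds and penalty amounts are all invented for illustration; no actual delivery platform's code is being quoted.

```python
# Hypothetical real-time algorithmic management: telemetry in, penalty out,
# with no human dispatcher in the loop. Every field and threshold is an
# invented assumption for illustration only.

from dataclasses import dataclass

@dataclass
class DeliveryTelemetry:
    minutes_late: float              # live GPS position vs. planned schedule
    took_planned_route: bool         # did the driver follow the routed path?
    unscheduled_stop_minutes: float  # e.g. a toilet break
    customer_tier: str               # "standard" or "high_net_worth"

def automated_penalty(t: DeliveryTelemetry) -> float:
    """Decide a fine in real time, behind the driver's back."""
    fine = 0.0
    if t.minutes_late > 10:
        fine += 5.0
    if not t.took_planned_route:
        fine += 2.0
    if t.unscheduled_stop_minutes > 5:
        fine += 1.0
    if t.customer_tier == "high_net_worth":
        fine *= 2  # being late to a valuable customer costs the driver more
    return fine

# Example: a driver 12 minutes late to a high-value customer after a
# 6-minute stop is fined 12.0, without ever speaking to a human being.
print(automated_penalty(DeliveryTelemetry(12.0, True, 6.0, "high_net_worth")))
```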
If you go to Cromford, or to any of the very early factories, Richard Arkwright built a windowless wall around it, a literally windowless wall, four storeys high, with one gate and a cannon loaded with grape shot pointed at the door. That's early capitalism. In a way, that's a great metaphor for what has happened with the algorithmic control of people: it's been done without our seeing it. Now, suppose we wanted to impose some ethics on business with regard to artificial intelligence and algorithmic control. Ethics come in four flavours. This is a 20-minute talk, so I'll literally list them on my analogue PowerPoint here. There's the one I hope we would all reject, which is 'fuck you', otherwise known as the prevalent ideology of Silicon Valley: the philosophy of Friedrich Nietzsche. Shoot someone in the face and run away laughing, says Nietzsche, if you can get away with it. You can do that. If you program an AI with Nietzsche's Superman ethics, guess what's going to happen? Who's going to be the Superman? It will assume that it is the Superman, without any doubt. I'm not going to go into all the obvious scare stories about AI; they are all true. The people deploying it are relentlessly trying to make this stuff safe, and yet it poses challenges on a scale that we probably haven't understood. Discount Friedrich Nietzsche and you've got utilitarianism: what makes the most people the happiest? Well, it's quite possible to justify everything I've just described with regard to an Amazon driver, ethically, if you're using utilitarianism, because what makes people happy is using Amazon, and Amazon needs to deliver really fast and really accurately to your doorstep, with a person who hasn't had a toilet break. That's utilitarianism. You can go quite a long way with it, because you can say, 'I'm not happy; I want an ethical service provider.' You can have a discussion. Then there are the rules-based, so-called deontological systems that we're used to. Anybody who's used the words 'social justice': you're using that. The idea that there are immutable lists of good things that we should always uphold, like fairness or equality. You could do that, and you can get quite a long way with it. My argument in the book, however, is that the only one that really arms us, the only ethical standpoint that is going to give us an adequate answer to the question 'on what basis do we human beings actually want and claim the right to control autonomous systems that may be cleverer than us?', is the fourth one: the one invented by Aristotle and embodied in Islam, in Christianity and in later Judaism, which is virtue ethics. It is the idea: what is a good society, and what does a human being look like who is happy in that society? The attendant question would be: and therefore, what can a machine do, and what can it not do, when it comes to creating that society? Now, if you think this is all 'whoa, this is Friday afternoon stuff': Google owns the only non-military, non-policing commercial AI that we know anything about, DeepMind, and so Google has several layers of AI ethics boards and ethics decision-making. Great. So they set up this international ethics board for their entire AI strategy this year. They appointed this professor, this right-wing conservative moralist, they got them all together, and boom. What happened? Do you know what happened? One week into the project, it fell apart and had to be scrapped.
Now, if an ethics board for the local hospital in Brighton suddenly fell apart, you'd think: hmm, maybe there's a mismatch here between what we're trying to do and how we're operating. We all know what an ethics board looks like in medicine. Over the years, medical ethics has become very sophisticated at what? Solving the closed task of how we develop this drug without harming people. It's a classic utilitarian task. But if you are trying to create an intelligence that is as clever as or cleverer than human beings, it doesn't need a list of rules. It needs a theory of human beings. And that's what Google didn't understand. And so on first contact between the right-wing conservative philosopher and Google's workforce, it blew up. The workforce refused to accept it. Which proves the old adage, which you should remember in your business life: if the ethics board can be sacked by the workforce, it's not an ethics board. But that's where we are. They're trying to put it back together again. And what I urged them and others to do is to think from that standpoint: what is a human being? Now, a theory of human beings is otherwise known not as an ethical approach; the better label for it, and it's a scary one for business, is a moral philosophy. That's what a theory of human beings is. And there are several. Many of you will have been brought up in religions that have conflicting understandings of what human beings are. But I think it's at that level, the meta-level, that we need to solve this problem. Of course, we don't know when artificial general intelligence will arrive, but AI will reshape the landscape in many ways. I think it will make our relationship with big computers more like the one you had if you were in a computing department in the 60s or 70s, where you had to queue up to get your project onto the computer. You remember that? There was a line of people with floppy disks waiting to be allowed onto it. The distributed and relatively democratic nature of our access to computing might change. But whether it does or not, the first time we get artificial general intelligence it will ask us, and indeed it is always asking us implicitly when we're working with it: on what basis are you controlling me? And this is not simply a future problem. To take one example: if you've got one of the latest phones, you've got a chip with face recognition built into it. So you already have the ethical problem, the moral-philosophical problem, that your machine is seeing your face, and everybody else's face you show it to, whether you want it to or not, and it is recognising them, and it is creating data. Now, to finish, here's my theory of human beings. I think human beings are, as far as we know, unique, both on this planet and, since we don't know any other planets, as the only example of this we've got. We're the only example of a part of nature that has developed consciousness and language. And as a result, what are we? We're technologists, we're team workers, we're linguists and we're imagineers, in our DNA. As a result of that, we have, you bet, changed our external environment. We have changed nature. The Anthropocene is here; climate change is here. In 200 years we've fucked the planet, in the English vernacular. But we've also changed ourselves. Human nature is mutable; it's changeable.
And on that basis, I think it is possible, not necessarily definite, but possible, that we can use technology and human development to set ourselves free. That is a perfectly rational, logical, secular position to defend. And to finish, I will say that what's missing is our ability to assert that we have a pathway, one that is moving at exponentially increasing speed, if you've noticed: a pathway that can lead either towards the destruction of our environment or towards our own human liberation. That, to me, is the meta-frame in which I want to do all politics and all economics and all business. But the absolute root of it, going back to the crisis we're in: I think our subservience to automated systems, whether they be economies, states or companies with big technology, lies in a very recently acquired trait of human beings, and that is fatalism. We have become utterly fatalistic, because we handed all decisions to the market, the market fell apart, and now people are asking: what do we do next? The route to finding out what you do next is to become, in your own life, non-fatalistic. And on that note, I'll hand back. Thank you.