[Introduction delivered in Welsh: the host welcomes the audience in the room and online and introduces the speaker, Professor Vallor, noting her background as a professor of philosophy at Santa Clara University.]

I thank you all for coming out and restarting a little bit of perhaps normal philosophical life in the community again, bit by bit. Let's hope it sticks around. I'm really excited to be speaking with you today. I just want to say that it might help to contextualize a little bit the way that I'm going to deliver this lecture. Much of the lecture is spoken and not reproduced on the slides. I find that presentations that are fully encapsulated on slides kind of take people out of the listening mode often, but at the same time slides are a really important element for accessibility and for people to be able to track the main themes of the talk. So I have slides, but you'll see for certain periods of the talk that the slides won't be advancing and I'll just be developing the argument through my speech.

I'm going to begin with a little prelude, as it were, to kind of set the scene. My talk is about the thoughts the civilized keep, and I'm going to share a screenshot of a tweet that actually was the inspiration for this talk. I've blurred out the information of the account and the author, but the tweet simply said: my dad told me that he makes decisions now by emailing himself a potential plan and reading Gmail's suggested auto-responses to it to determine if it's a good idea or not. He followed up: his dad told him, if Gmail's automated response says "that's a plan," you know you're on to something. And of course I think this is tongue in cheek, right, but it reminded us all, I think, of the slippery slope that we may be on in a world where the commercial development of artificial intelligence, and I want you to imagine an asterisk after artificial intelligence, because artificial intelligence isn't really intelligent at all, but nonetheless it can mimic intelligence in ways that can invite us, I think, to turn off our own. I'm going to argue that that is a very real danger. I'm going to argue that it's not an unprecedented one, but it's coming to us today in a new form. And I'm going to argue that we don't have to give in to that, and that we can even retain the advantageous elements of artificial intelligence technology and other computing and automation technologies without granting it this power to step into the space of thought that we ourselves rightly claim. I'm going to also begin with a provocation from the philosopher Alfred North Whitehead who, in his An Introduction to Mathematics in 1911, wrote this. He said it is a profoundly erroneous truism that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case.
Civilization advances, he said, by extending the number of important operations which we can perform without thinking about them. Well, if Alfred North Whitehead meant that in 1911, we might imagine he would be very happy today and would see us as very civilized indeed. Note first, of course, that Whitehead is thinking in the context of mathematical reasoning, and the increasing sophistication of mathematical techniques and techniques of formal logic and other kinds of analysis that allow us to build ever more sophisticated forms of thought on top of what we might think of as automated foundations: not automated by computers in this era, but automated by something that actually inspired the development of artificial intelligence, which really took off in the 1950s when scholars realized that the tools of formal logic could be implemented in computational architecture. But I want us to question this claim from Whitehead, or at least recognize that it has to be profoundly qualified, because I don't think it can be true that simply extending the number of important operations which we perform without thinking about them can be the engine of civilization. So the question behind my talk is: what thoughts do the civilized keep?

So here's a list of questions that today your phone can probably answer for you. Or some app loaded onto your phone. Or Google can answer it for you. And without you really having to do much but passively accept the answer. Now, I don't see a particular worry about us handing over any of these kinds of thoughts or questions to automated systems. Maybe in the Q&A some of you will want to push back on that. But I'm willing to let these questions go. I'm happy to let them go, in fact. But here are some questions I'm not so happy to let go. What is a fair outcome of this decision? What does this child need? How can I treat this person with respect? What does beauty look like? How should I spend my days? Does this person deserve their freedom? Who should lead us? Should this person be allowed to live or die? Each of these questions is currently the target of someone who is developing, or who has already deployed, algorithmic systems for decision making and decision support that aim to answer these questions for us. And this is what leaves me deeply unsettled and indicates, I think, that we have a coming fight on our hands to claim for ourselves, or perhaps reclaim, the space of reasons, and particularly the space of moral reasons. Philosophers in the audience will recognize these phrases. They come from Wilfrid Sellars and John McDowell and a number of other philosophers who use these terms. If you haven't heard these terms before, I will go into them in a moment and explain the context and what I mean when I talk about the space of moral reasons. And even "is this a good plan?", from our friend's tweet, is something perhaps we ought to be able to answer for ourselves.

Okay, so in this talk I'm going to set the stage first in an introduction. I'm going to talk about that space of moral reasons and why it's vital for human intelligence. I'm going to talk about why AI, I think, presents a threat to the space of moral reasons. And then I'm going to talk about how we might make moral space in AI-driven decision systems. So I'm not suggesting that we necessarily need to eliminate the use of these systems in our institutions or societies, but I think we need to design them in a radically different way, a way that makes more moral space for us to think in. Okay.
So much of today's media obsession with artificial intelligence and the power of algorithms obscures the fact that algorithms, as finite series of steps for generating solutions to a given cognitive problem, are nothing new. Both the term, which first appears in late medieval Latin, named for the 9th-century mathematician al-Khwarizmi, and its reference class, which includes the earliest mathematical procedures for counting, addition and division, all of these long predate modern computing practices. So we've had algorithms for a very, very long time. But modern computing has invested today's digital algorithms, especially those embedded in AI-driven and automated decision support systems, with vastly expanded social power. Today increasingly sophisticated algorithms constrain and shape what we read, watch and hear online, who we are invited to meet or date, what medical treatments we're advised to undergo, who will hire us, how the justice system will treat us, and where we're allowed to live. Further social constraints and influences from AI and decision system algorithms are projected in almost every sphere of culture, governance and commercial activity. Since new mechanisms of social power are philosophically significant, we should not be surprised to find that these developments raise a host of important political, epistemological and ethical questions. Among them are questions about the opacity of the social mechanisms dependent on these new computing techniques. For it's become increasingly challenging to understand exactly when, how, or by whose authority these algorithms effect their profound influences on our lives. This lack of transparency in a "black box society", to borrow the title of Frank Pasquale's excellent 2015 book on this subject, raises profound ethical questions about justice, power, inequality, bias, freedom and democratic values in an AI-driven world. The problem is rendered especially complex given its multiple and overlapping causes: proprietary technology, poorly labeled and curated data sets, the growing gap between the speed of machine and human cognition, and the inherently opaque and unpredictable behavior of many deep machine learning processes, which may prevent even their programmers from understanding an AI system's internal operations, interpreting its results, or assessing its reliability in complex interactions with other social and computational systems. So these are just some of the different types of opacity that we face today with AI. A lot of times in AI ethics we talk about AI opacity as if it's one thing, but it's actually a multi-dimensional problem. The uses of AI are opaque. Think about the ways that the uses of data by Cambridge Analytica took us by surprise. The scale and complexity of AI is often opaque. High-frequency trading algorithms are almost impossible for human beings to monitor in real time at any level of close detail. Proprietary algorithms make the design of AI systems opaque. Deep learning algorithms make even the internal structure and functioning and reasoning of these systems opaque. Think about the powerful AlphaGo and AlphaZero systems built by DeepMind, whose power comes with that cost of opacity. Or think about the opacity of who owns things like botnets on the internet that are deployed in deceptive contexts, presenting themselves as radically different identities, and that might be state sponsored or sponsored by some other organization for malicious purposes.
We often don't know whether products generated by AI, like deep fakes or synthetic media, represent anything real or true, so they're opaque in their veracity. Algorithms like COMPAS, which I'll talk about later, which is used in the United States in the context of evaluating the risk posed by criminal defendants and making bail decisions, are opaque in terms of their justification and their fairness. Recommender algorithms are opaque in terms of their social effects. We still don't really understand what they're doing to us and our relationships and our societies. Products like Tesla's Autopilot show repeatedly how opaque they are in terms of their limitations. We govern technologies like this by car crash, literally, and in other cases metaphorically: we only really learn what's going on in the system when people die. And we have opacity in terms of the authenticity of some of these systems. We're often marketed to in ways that encourage us to imagine that robots, for example, can read our emotions and express their own. And many people are taken in by this kind of deception. So I want to think about opacity in these richer ways.

But my focus today is an aspect of the transparency problem, or the opacity problem; these are basically the same issue, just framed differently. My focus is a problem not yet widely discussed among philosophers and other academics, and not discussed much in media and public policy circles. And this is the prospect that the growing opacity of AI systems and their uses may result in a severe and ethically troubling contraction of what philosophers have called the space of moral reasons. In 1956, the philosopher Wilfrid Sellars spoke of knowledge taking place in what he called a logical space of reasons, of justifying and being able to justify what one says. This means that to be knowers, and more broadly to be intelligent in the ways that humans are characteristically understood to be, our minds must function as more than inert repositories of facts or true statements about the world. Our minds must be active and competent movers within the space of reasons. This means that we must be active reason givers and actively responsive to the reasons of others. We must be sensitively and skillfully attuned to the standards of evidence and appropriateness that govern the exchange of reasons. We must be at home within the space of reasons. As Robert Brandom later explains in his account of this space and its social meaning, quote, knowledge is intelligible as a standing in the space of reasons, end quote. It's worth pointing out that very recently the philosophers Bert Heinrichs and Sebastian Knell argued, in their paper "Aliens in the Space of Reasons", that it's important to recognize that the kinds of computational tools that today commonly receive the label of artificial intelligence are wholly incapable of standing with us in the space of reasons. One of the most powerful and dangerous aspects of the type of AI systems that rely on complex machine learning models is that they can derive solutions to certain kinds of well-defined knowledge tasks in a manner that entirely bypasses this open logical space, taking a narrow mathematical route from task to solution that's incapable of reason giving or reason considering along the way, in the way that we can recognize and participate in. Now, sometimes that's a good thing. That narrow mathematical route to the solution is sometimes exactly why these systems can solve problems that we can't.
Recently, DeepMind's team helped figure out how to contain a nuclear fusion reaction for far longer than scientists had been able to figure out how to do in a kind of reactor called a tokamak. That's the kind of thing that we want AI to do: to take that mathematical route directly from the problem to a solution that might have taken us lifetimes to reach. As I've described in a recent popular magazine article, however, when we think about things like the natural language model GPT-3, which some of you might have heard of, which reproduces apparently sensible speech, stories, poems, songs, answers to questions, it can fool you, at least for a short time, into thinking that you're talking to an intelligent speaker of our language. But that is a really powerful illusion. I'd like to compare it to the difference between a human climbing a mountain by a well-marked and secure route that others can follow, and that's what we do when we think with one another, and a robot being flung from the mountain's base to the peak of the mountain by catapult. Keep in mind that in this metaphor, the mountain stands for a problem in the real world. While the robot and the human may reach the same place, and the robot may well arrive faster, only the human knows the mountain, and only the human can get back down or guide others up its flank. And I think this is really important, and I think it's particularly important when we're talking about the kinds of questions that I showed on that second slide, in a way that differs from the kinds of questions we might ask about how to contain nuclear fusion reactions.

Philosophers such as John McDowell, in 1996's Mind and World, adapted Sellars' metaphor to the domain of ethical or moral reasoning, so that we may speak of a space of moral reasons within which morally capable and responsible adults are able to move and are invited to act. This space of moral reasons may be understood in several ways. It may be seen as a cognitive space in which a moral agent enjoys the psychological freedom to reflect upon morally salient facts, values, possibilities, principles, consequences, and ideals that might inform and motivate his or her actions. It may alternatively be viewed as a public space in which morally salient facts, values, possibilities, principles, consequences, ideals, etc. can be entered into a moral discourse, one that can inform and motivate the decisions of moral agents within a given community of actors. Finally, we can fuse the psychological and public conceptions and see them as mutually constitutive elements of the context in which moral agents are enabled to choose and act. In all of these formulations, the metaphor of space is intended to represent a temporally and discursively open, yet structured, horizon of moral thinking and choosing that allows an agent to assume responsibility for his or her moral action, to be potentially responsive to moral reasons communicated by other moral agents or to other new moral information in the agent's environment, and to be capable of morally justifying her actions to herself and others. Preserving the space of moral reasons is essential to being at home in the moral world, to seeing morality as an inextricably meaningful feature of one's life and the life of others. Thus, the space of moral reasons can be threatened by a range of causes that make it harder for humans to be at home with moral thinking.
Historically, these threats have ranged from authoritarian ideologies that encourage us to leave the task of moral thinking to our God or our leaders, all too often presented as a package deal, to cynical, jaded philosophies in which moral thinking is seen as mere politics by other means, perhaps the wasted effort of the hopelessly naive. Fortunately, history also contains in its pantheon of great souls, philosophers, theologians, activists, artists and others, many voices of resistance to these threats. Philosophers of the European tradition will cite the golden age of Athens and Rome, the Enlightenment era in Europe, and various civil and human rights movements of the 20th century as times in which the space of moral reasons had to be defended and held open, often at great human cost. But there are many more stories of resistance to be told, and not only in the West. But today we face a new threat to the space of moral reasons and a need for a new resistance.

After first unpacking the space of moral reasons a little bit further, I'll then review several ways in which the space of moral reasons is at risk of shrinking, both cognitively and in public life, as a result of our growing reliance on increasingly opaque machine decisions. The risks of such reliance include shrinking public and cognitive spaces for reflection upon our actions and their moral status; shrinking space for moral appeals regarding the rightness, goodness or appropriateness of AI-generated judgments; space for moral attribution of responsibility for such judgments and their consequences; and space for the use of moral imagination in considering and weighing alternative patterns of moral reasoning and judgment. And I'll go into each of these in greater detail. I'll illustrate each of these types of contraction of the space of moral reasons with concrete examples of AI-driven decision systems used in jurisprudence, human resources and law enforcement, where such use to effect or mediate human decisions of moral consequence may already be seen. I'll close by asking how more ethically informed design and use of AI decision support systems might allow us to hold open the space of moral reasons, or even help us to enlarge that space in personal and public decision practices.

Okay, we've already said that the space of moral reasons has both a cognitive and a public dimension. To unpack these further, remember again that the space of moral reasons is a metaphor: while a person's moral reasoning does take place within a well-defined physical space, i.e. his or her spatially extended brain, and public moral reasoning takes place in physical and virtual spaces of its own, these are not the kinds of space we're referring to. Using the spatial notion of extension as a metaphorical bridge, we can understand moral reasoning as a process that's extended in at least two ways, temporally and discursively. The temporal extension of moral reasoning should be easy to grasp. It takes a duration of time for a human to reason about anything, but moral reasoning in particular is time dependent. First, consider those theories of moral psychology that emphasize two distinct time scales and associated cognitive mechanisms for moral decision making. The fast mechanisms of moral analysis, as posited by Jonathan Haidt, Daniel Kahneman and others, are driven not by explicit reasoning, but by emotionally laden social intuitions that produce rapid and motivationally compelling judgments.
Whereas the slow mechanisms of moral analysis are higher-order, but often motivationally weaker, processes of conscious moral reasoning. These allow for careful consideration of the strength of evidence, the logical coherence of argument, and consistency with norms, values or ideals to which one is explicitly committed. We don't have to delve into the ongoing controversies about the merits of this particular model of fast versus slow moral decision making. Even its defenders, who are often justly criticized for giving the personal and social force of moral reasoning too little credit, acknowledge that explicit or slow moral reasoning is important and essential for a healthy society. Jonathan Haidt has acknowledged in a debate with his critics that reasons matter. Reasons produce movement in social mores, even if he continues to insist that the emotional ground of the debate must first be cleared of opposing fast intuitions if this moral movement is to happen. Moreover, Haidt celebrates the norms of, quote, reason giving and responsiveness to reasons, end quote, that define slow moral reasoning processes. He says, I wish such norms could be sprinkled into the water supply in Washington. And I suppose we wouldn't mind if they were sprinkled into the water supply near Number 10 either.

Moral reasoning, then, of the type that observes the norms of evidence sensitivity, logical coherence and consistency, reason giving and reason responsiveness, takes time: both clock time, because moral reasoning happens more slowly in the brain than does moral intuition, and experienced time. It requires that we perceive ourselves as having an open horizon of time to think things through, to contemplate, to ruminate, consider, compare, locate, inspect and trace the relevant connections in our situation. Imagine yourself deciding to actually sit down and really think hard about a profound moral problem in society or in your own personal life. Now imagine someone setting a running timer on the table. It doesn't really matter, does it, whether the timer has five minutes on it, or 15, or 30. You might have only needed five minutes to reach a sound conclusion, but your perception of a closed and inflexible temporal horizon will disturb and confound your reasoning process anyway. This temporally extended horizon of the space of moral reasons intersects with another, discursively extended one. Of course, there's the trivial fact that the temporal space of moral reasons is increased during moral discourse with other persons or groups, because I must wait for my reasons and evidence to be considered by others, for my objections to be answered, and for my interlocutors to articulate their own reasoning, evidence and objections. Yet even when moral reasoning is done by an individual sitting alone, it remains a socially mediated and discursive process. Because the reasons that we're drawn to consider, insofar as they concern moral life, that is, life with others, always have a social context and a social meaning. The language of moral thought always projects social, political, cultural and epistemic distance between my reasons and the reasons of others. To reason rightly about moral matters, I have to remain acutely aware of the spaces between what I have, what I know, believe, need, want and feel, and the often very different things that are had, known, believed, needed, wanted and felt by the other humans involved in the moral situation that I'm reasoning about. This is why narcissists are generally terrible moral reasoners.
They can't readily perceive such distances and discontinuities between themselves and their reasons and those of others, or grasp their importance. The space of moral reasons, then, is a temporally and discursively extended space in which moral thinking can, so to speak, stretch out and do its work, both in the psychological and in the public context. It enables us, then, to be at home in the moral world, to see moral experience as an essential feature of one's life with others. Preserving a sense of myself as a moral being requires this space to be held open, because otherwise I might act morally if my fast processes of moral intuition are sufficiently reliable in that particular case, but I will not have consciously assigned these decisions or actions a place in the moral narrative that anchors the sense of myself as a moral being, a creature who confronts morally significant things in the world and makes deliberate efforts to respond to them in moral ways. For a virtue ethicist like myself, it's obvious that preserving space for moral reasons, both cognitive and public space, is an essential prerequisite for the acquisition of the virtue of practical or moral wisdom, what Aristotle called phronesis. And that's a critical component of moral self-cultivation in general: the holistic understanding of the field of moral community and one's particular place in it, which Aristotle, Confucius and the Buddha all saw as required for living well. That holistic understanding can't be obtained without sustained opportunities to practice stretching out one's mind and speech with others in the shared space of moral reasons.

The space of moral reasons also enables essential features of moral functioning in society. As I've said, first, it allows an agent to assume responsibility for his or her moral action and for others to confidently attribute responsibility to her. As noted above, the space of moral reasons allows us to place our moral decision-making within a personal narrative, to take ownership of it. The space of moral reasons also allows us to be potentially responsive to moral reasons communicated by other moral agents, or to other new moral information in the agent's environment. Because as long as the process remains extended, there's time for new or revised moral information to be entered at any stage, allowing us to back up and make the necessary adjustments to our assumptions, values and inferences. Even an extended moral judgment that's been finished can be revisited, retraced and modified with hindsight, just as I can retrace the steps of a hike I took yesterday and take an improvised detour. Or, as Haidt, Kahneman and others note is often the case, I can take an earlier moral decision that I made on the basis of raw reactive intuition and use moral reasoning to expand it, to give it the extension and volume it originally lacked. In many cases, this is done simply to invent a convincing fiction to assure myself or others that I had good, well-considered reasons for what I did. But it can just as easily be done with authentic remedial intent, to give a quick emotional decision a careful moral audit when time and calmer passions allow, allowing me post-hoc insight into the ways in which my raw moral intuition served me and others well or poorly. Finally, and related to the previous observation, the space of moral reasons allows me to morally justify my actions to myself and others.
Perhaps even more importantly, it allows others the reasonable expectation that they may demand such justification from me. If I've not been allowed the space to think about what I do in the moral realm, or what my society does, I cannot offer any reliable evidence that I or my society should have done it or should continue to do it. If most others in my society are equally foreclosed from the space of moral reasons, then both I and my fellows are left at the mercy of moral luck if we are to have any hope of a good life together. And in general, we're ill advised to leave our fortunes to luck if we have any reasonable means of steering them well.

Okay, so now we're going to take a little turn, and we're going to take a little turn into science fiction. We're going to talk about Isaac Asimov and Multivac. Some of you may have read Isaac Asimov's stories. For those of you who haven't, I'm going to dive into one of them and I'll give you the context that you need. It'll become clearer later how this is relevant. In the 1955 short story "Franchise" by Asimov, we meet Norman Muller, an office drone with a clerkly soul, I love those words, who in the story's imagining of the year 2008 is selected by Multivac, the artificially intelligent arbiter of American democracy, to represent the electorate in choosing the next president of the United States. By means of an impressive body of calculations that are opaque even to Multivac's human handlers, the supercomputer Multivac has determined that in this particular year of 2008, it is the very ordinary mind of Mr. Norman Muller of Bloomington, Indiana, that can provide it the best window into the collective will of the electorate. As Multivac's system administrator John Paulson perfunctorily explains, by interviewing Muller, Multivac will be able to declare with great mathematical precision the winning presidential candidate, just as Multivac declares with unassailable predictive accuracy the result of all elections, national, state and local. And of course, given Multivac's predictive power, quote, elections aren't the only thing it's used for, end quote. Yet Muller's role in the election is not to express to Multivac his own personal judgment of who ought to be president. That wouldn't be democracy after all, but just the rule of one. Instead, by posing to Muller a series of seemingly arbitrary questions about matters as banal as the price of eggs, while monitoring Muller's biometric data alongside his answers, Multivac is able to calculate, quote, certain imponderable attitudes of the mind, end quote, characteristic of the average American voter at that precise historical moment. In the story, we're not given any express reason to question Multivac's predictive powers or its security from tampering or corruption. Still, the system administrator repeatedly emphasizes to Muller their shared civic duty to maintain secrecy about the details of the process, so that the workings, especially the human parts, are insulated from outside pressures. Multivac, then, is part of an algorithmic decision system that involves many human technicians and administrators and human inputs like Norman Muller, yet which is massively opaque. This opacity is reinforced on multiple levels.
First, the computational power and knowledge base of Multivac simply exceed the grasp of human thought by a great degree. Second, the system architecture and internal logic of Multivac's AI are not isomorphic to human reasoning. After all, by what inferences would you discover the imponderable attitudes and minutiae of someone's political mindset by asking them what they think about the price of eggs? Or whether they favor central incinerators, which is another question Multivac asks? Nor are the scales of judgment comparable. Multivac performs a statistical analysis of correlations within a massive pool of data about virtually all known facts, whereas human voters are, by nature of our cognitive limitations, far narrower in our knowledge and reasoning. Finally, the overarching algorithmic decision system of which Multivac's algorithm is a central part is largely obscured from public view. In the story, American voters know that Multivac calculates the election results on the basis of an interview with a single representative individual. But how Multivac conducts the interview, and how the results are calculated, to the limited extent that these are understood by the human system architects, are tightly kept secrets of the national security infrastructure. And these themes appear throughout Asimov's writing when he contemplates the prospects of artificial intelligence. In "The Last Question" he writes of Multivac: its operators had only a vague notion of the general plan of relays and circuits, long since grown past the point where any single human could possibly have a firm grasp of the whole. Multivac was self-adjusting and self-correcting. It had to be, for nothing human could adjust and correct it quickly enough or even adequately enough. So they, the humans, attended the monstrous giant only lightly and superficially, yet as well as any men could.

Fourteen years after the date of Asimov's projection, American elections are still carried out by individual voters, albeit with the help of Russian bots and Macedonian teenagers cranking out political fiction in fake news farms. But each of the forms of opacity that Asimov envisioned in "Franchise" exists today in AI-driven decision support systems in wide use, most critically those computational systems dependent upon deep learning and unsupervised machine learning algorithms, which pose special difficulties for reliable human interpretation, validation and auditing. AI decision support systems today are used to identify terrorist threats and targets in voice, image, email, social media and SMS data; to assign risk scores to defendants in bail, sentencing and parole evaluations; to determine where and when law enforcement personnel are most likely to encounter certain crimes; or to diagnose cancers and recommend personalized treatment plans. Other systems calculate how likely you are to fit into the corporate culture and remain with the company to which you have applied. Or how close a match a stranger is to your romantic preferences. Or how likely you are to repay the business loan you're seeking. Or the chances that your kid will thrive at a selective private school. These are the sorts of decisions that govern how well or poorly our lives go. Yet in none of these systems can the average user, or in some cases even the system regulators, programmers and administrators, grasp precisely how the decision is being carried out or what salient factors are driving the results.
In the "Franchise" story, we're led to question how the political franchise of the voter can possibly be preserved under such conditions of algorithmic opacity. But we have to notice that one of the most disturbing aspects of the "Franchise" scenario, and of our present reality, is the contraction in the space of normative reasoning that it fosters, both in the personal and the public domain. Norman Muller has no cause to explicitly think through his own political judgments. First, because he reasonably assumes he'll never be the one American chosen to be directly consulted by Multivac. And second, because even when he is in fact the one chosen, Multivac doesn't need to explicitly ask him for his personal opinions about politics or the good of the union, much less ask him to account for those opinions with reasons. On the public level, there's a similarly superfluous character to political discourse in the "Franchise" story. It still happens, of course, insofar as, in the story, politicians still campaign and voters still form opinions. But the causal link between explicit public reasoning for those opinions and the final vote, even the political need for explicit public reasoning, is obscure. Do the voters perceive any need to persuade their neighbors, or even to account for the reasons behind their own opinions, when Multivac can correctly predict everyone's votes by entirely indirect and opaque means? In the story, the background assumption is that Multivac's decisions are, if not perfect, at least as accurate as the tallying of millions of actual human votes, and far less costly and cumbersome. The same sort of justification is given for the use of AI decision support systems in human institutions today. No one thinks that any computer operating today can actually grasp the moral, legal or political gravity of drone targeting decisions, sentencing recommendations or loan decisions, much less reason about them. But if an AI system can make decisions that are just as reliable as those made by humans who do reason about them, only faster and more cheaply, then the logic of efficiency invites us to let the reasoning drop out of the process as a now unnecessary human excrescence of analog decision making. It doesn't help that the present quality of public reasoning and decision making, as evidenced on Twitter and Facebook and in traditional venues such as Congress and Parliament, invites the consolation of cynical despair. Maybe humans are just not cut out for moral reasoning after all. And at this point, let's be honest, how much worse could the machines really do?

What do we lose by giving in to that logic? First, we lose public and cognitive spaces for reflection upon our actions and their moral status. Consider the example of predictive policing algorithms, which are increasingly marketed by vendors as using the power of AI to deliver new data-driven insights about patterns of criminal activity. Until recently, the Chicago Police Department in the United States used a tool called the Strategic Subject List, generated by an algorithm that determines the risk that a particular individual will be a victim or perpetrator of gun violence. Persons who ranked high on the list received a precautionary visit from a police officer and/or social worker offering assistance. Now let's set aside the doubts that critics raised about the effectiveness of the algorithm used by CPD, which eventually led to its discontinuation in 2020.
The algorithm's lack of transparency was still a grave issue, and not just because similar technologies continue to be deployed in cities around the world without public notice or discussion. Even when the use of the system is not secret and there's an opportunity for public debate, that debate is still stifled by the inherent opacity of the algorithms themselves. As American Civil Liberties Union representative Karen Sheley noted about the Strategic Subject List, quote, we don't know all the factors that can put someone on the list, and CPD hasn't made public the algorithm that they use to put people on the list, end quote. Since that algorithm was proprietary, neither the police officer nor the social worker making the visit could know exactly how the person got on the list, nor could they explain the system's reasoning to the person they were visiting. That means there's no basis for any of them to reflect on the quality or justice of the reason for the interaction. For example, whether it's because the resident has a long criminal history, or just happens to live in a poor neighbourhood, or lives in a wealthy neighbourhood but has many police contacts for driving while black. Now, were the algorithm transparent and public, with clear weightings for each risk factor or combination thereof, officers using it, or members of the public, might be invited to reflect upon the relative legitimacy or fairness of those different reasons for intervention. The results of those reflections might then be shared and debated with relevant parties. Likewise, a more transparent process gives the person receiving the visit a rational basis for reflecting upon its value. Should they welcome it and take it seriously? Should they dismiss it as a nuisance or police harassment? Should they make some life changes? Should they try to get their name removed from the list? How can these possible responses be appropriately evaluated when the opacity of the algorithm tightly constrains the space to reason about their presence on that list?

Moreover, this contraction of the space of reasons impedes personal or public moral appeals regarding the rightness, goodness, or appropriateness of algorithmically mediated judgments, which is of course why institutions seeking to evade criticism have a strong interest in keeping that space closed off. A prime example of this is the use of proprietary algorithms in judicial decisions. As revealed in a 2016 investigative series by ProPublica, many states in the United States employ risk algorithms for bail granting and other judicial decisions. Risk scores for individual defendants are often given directly to judges and parole boards with no transparent analysis of their basis or their limitations and failure modes. Neither judges nor defendants nor reporters can typically gain access to the algorithms themselves. Nevertheless, the ProPublica team was able to demonstrate that the outputs of Northpointe's COMPAS algorithm for predicting recidivism risk in criminal defendants showed clear signs of racial bias, falsely predicting black defendants to be re-offenders at almost twice the rate of white defendants. The COMPAS survey instrument doesn't ask about the defendant's racial background, so the bias comes into the analysis via proxies for race, as happens all the time with these algorithms. Further scholarly analysis has suggested an inevitable design trade-off in these kinds of algorithms between racial parity in false positives and parity in true positives.
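To make the trade-off just mentioned concrete, here is a minimal numeric sketch. The numbers are invented for illustration and are not the COMPAS data or anyone's real figures; the point is only the arithmetic. If two groups have different underlying rates of re-offence, then a risk score that is equally "accurate" for both groups, in the sense of equal precision and equal hit rates, must produce different false positive rates for the two groups.

```python
# Illustrative only: invented base rates, not real recidivism data.
# If two groups differ in base rate, equal precision (PPV) and equal true
# positive rate (TPR) force unequal false positive rates (FPR).

def false_positive_rate(base_rate, ppv, tpr):
    """FPR implied by a group's base rate, the score's precision (PPV)
    and its hit rate (TPR). Follows from Bayes' rule:
    FPR = base/(1-base) * (1-PPV)/PPV * TPR."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

groups = {"group A": 0.50, "group B": 0.30}   # hypothetical re-offence rates

ppv = 0.70   # 70% of people flagged "high risk" actually re-offend (both groups)
tpr = 0.60   # 60% of actual re-offenders are flagged (both groups)

for name, base in groups.items():
    fpr = false_positive_rate(base, ppv, tpr)
    print(f"{name}: base rate {base:.0%} -> false positive rate {fpr:.1%}")

# Roughly: group A ~25.7%, group B ~11.0%. Equal precision and equal hit
# rates, yet very unequal rates of being wrongly labelled high risk.
```

Nothing in the sketch depends on how the score is built; the asymmetry falls out of the differing base rates alone, which is why the scholarly analyses describe it as a design trade-off rather than a fixable bug.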
To put that trade-off more simply: you can have a kind of racial equity in terms of the algorithm's accurate predictions of high risk, or you can have racial equity in terms of the algorithm's wrong predictions, but you can't have parity in both. Now, if that's true, here's an opportunity for substantive debate about due process, the presumption of innocence, and the particular social harms and costs of false positives that tilt towards defendants of color. But this debate was walled off, because Northpointe refused to share the details of its proprietary algorithm that would confirm the researchers' suspicions about its design limitations. This also hampers the ability of any defendant to present a reasoned argument that its use in their particular case introduced bias, while foreclosing the ability of critics to propose and publicly reason with the company or lawmakers about an appropriate remedy.

The opacity of AI-driven decision systems also impedes the space for moral attribution of responsibility for such decisions and their consequences. Now, return to the Multivac scenario for just a moment and contrast it with recent American elections. For better or worse, the moderate transparency of American voting patterns enables somewhat reasoned, if often chaotic and discordant, public discourse about which social factors, groups, and events are most responsible for a given outcome. We can draw on turnout data, exit polls, local vote totals, voter interviews, and other indicators to analyze that. Now, debates about the influence of COVID-19 policies, or rural Christian voters, or disaffected millennials, or Trump-voting or Trump-fearing voters, or resentful Bernie Sanders loyalists, or the disenfranchised working class, or white supremacists, or QAnon conspiracies on social media, these may not produce immediate social cohesion or reconciliation, but it's fortunate that these clumsy analog voting practices, unlike the fictional Multivac, still afford a space for public and private reasoning about the causes and merits of our moral and political choices. In fact, the lack of consensus in American and UK voting discourse itself reveals important truths about just how deeply our political and moral visions have splintered. Contrast this with the highly opaque use of Cambridge Analytica's illegitimately obtained Facebook user data to mount a secretive operation on behalf of the Trump and Brexit campaigns to manipulate voter behavior through algorithmic targeting of our psychological vulnerabilities: a calculated campaign of cyber political psyops, a term they borrowed from intelligence and military agencies. Regardless of the actual impact of the targeting on the US and UK results, for which Cambridge Analytica representatives originally claimed credit, we learned that their algorithms were loaded with an unprecedented trove of private Facebook user data, including data of users' unwitting friends, which was then deliberately weaponized by well-paid psychologists in an attempt to subvert the function of open civic discourse and to encourage voter detachment from the very powers of explicit reasoning that can produce informed political choices. Of course, the algorithms remain proprietary black boxes, blocking informed public reasoning about their effects. Did their targeting persuade us, or manipulate us and deceive us? Did the targeting merely feed our existing political convictions and motivations, or did it distort them? We don't know.
Compare such subversive uses of opaque algorithmic power with the less controversial but equally opaque algorithmic models increasingly used for large-scale corporate and institutional decision making, such as the sort of hiring software now used by HR departments in most large organizations. In 2016, the Harvard Business Review estimated that up to 72% of resumes were being weeded out by an algorithm before a human ever got to see them. In 2018, 67% of hiring managers and recruiters surveyed on LinkedIn said they were using AI to filter out job applicants and save time. Now, there are theoretically social and economic advantages to hiring by algorithm. In theory, hiring algorithms could promote a more diverse and well-qualified workforce by bypassing irrelevant factors that human evaluators commonly favor or disfavor but that don't reliably correlate with candidate quality, such as European- or male-sounding names. In practice, however, what hiring algorithms usually do is reflect, perpetuate, and even magnify harmful human biases that have been embedded in their training data. A recent Harvard Business School report from last year stated that automated resume screening and candidate evaluation systems were contributing to what they called a broken labor market, with millions of well-qualified applicants being screened out and discarded by algorithmic filtering that's often corrupted by historical bias. If, for example, a machine learning algorithm is trained on data about previous workers in a given industry and learns that male engineers in the training dataset were recruited more often and promoted more quickly, the algorithm is likely to unfairly favor male candidates for engineering jobs going forward. Unless specifically programmed to avoid that pitfall, which is far easier said than done with machine learning algorithms, the system built on that algorithm will not consider that the past data is likely to reflect historically ingrained but unjust social biases against women engineers. The system simply cannot tell the difference between a good reason for a person being hired or promoted and a bad but widely used reason. That's because it's not standing in the space of reasons with us at all. And this was precisely how Amazon ended up in 2018 with a recruiting algorithm that it had to scrap, because it had learned to downrank applicants who had attended women's colleges or who had served as the president of a women's chess club, rather than just a chess club, which actually boosted your score. The same can happen for past hiring biases based on economic class, region of origin or prestige of university background. Consider one popular service, HireVue, which uses opaque proprietary algorithms to analyze video interviews of job candidates and project traits of personality and fit with the company based on existing HR data. If a candidate is rejected by the system based on poor fit with the company norms, how confident can we be that this is related to a job-relevant personality trait, such as quickness to anger or deceptiveness, as opposed to the algorithm's marking of an unusual muscle tic, a regional accent, culturally specific facial expressions or gestures, body mass, or external signs of age or disability? All of these are unethical and/or illegal reasons to discriminate against an otherwise highly qualified job candidate, and yet we have no way of knowing whether HireVue's algorithm discriminates on these bases.
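The mechanism described above, in which a model that never sees a protected attribute nonetheless reconstructs historical bias through a proxy, can be shown in a few lines. This is a toy sketch with fabricated data, not Amazon's or HireVue's system, and the feature names are purely hypothetical.

```python
# Toy illustration of proxy discrimination. All data are synthetic; no real
# hiring system or dataset is represented. The model is never given gender,
# but because the historical "hired" labels were biased against women, it
# learns to penalise a feature that merely correlates with gender.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

is_woman = rng.random(n) < 0.5                        # never shown to the model
experience = rng.normal(5, 2, n)                      # genuinely job-relevant
womens_college = is_woman & (rng.random(n) < 0.4)     # proxy correlated with gender

# Historical hiring decisions: driven by experience, but biased against women.
historical_score = 0.8 * experience - 1.5 * is_woman + rng.normal(0, 1, n)
hired = (historical_score > np.median(historical_score)).astype(int)

X = np.column_stack([experience, womens_college.astype(float)])
model = LogisticRegression().fit(X, hired)

print("learned weight on experience:       %+.2f" % model.coef_[0][0])
print("learned weight on women's college:  %+.2f" % model.coef_[0][1])
# Typical result: a clearly negative weight on "women's college". The model has
# recovered the historical bias via a proxy, with no notion of the difference
# between a good reason and a bad but widely used reason for the pattern.
```

The point of the sketch is the one made in the talk: nothing in the fitting procedure asks whether the pattern it finds is a justifiable basis for the decision, because the procedure does not operate in the space of reasons at all.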
Now someone might rightly note that human interviewers routinely discriminate on such bases, and this is true. But at least we don't assume humans to be objective analysts, and we're able to ask them about their reasons. A human hiring committee can hold a member's feet to the fire to explain the basis of their reflexive dislike or distrust of a candidate that others on the committee find highly qualified, and can discount the member's judgment if they fail to give good reasons supporting it. No such process of critical discourse in the space of reasons can take place between a human hiring manager and the HireVue algorithm.

The space of moral reasons is thus essential for reflecting upon, appealing, and holding ourselves and others accountable for personal, social, moral and political choices. Yet the space of moral reasons also has an important forward-looking function, namely enabling and encouraging the use of moral imagination in considering and weighing alternative patterns of moral reasoning and judgment. To reason about my past moral choices is always to invite the moral counterfactual. What if I had done that instead? What if I had chosen or said that instead? Often this begins in the discursive space of reason giving and reason demanding that takes place between the self and the inquiring others, or the self and its conscience, or the public and the public conscience. Why didn't you, or we, do this? Why don't you, or we, want that or value that? To answer such questions we often have to construct an alternative history in which different motivations and thoughts might have led to a different decision. I, or we in the case of public reasoning, may determine that these alternative motivations and reasons are ultimately unacceptable and incoherent, and thus that I or we could not have, and still would not and should not do, that other thing. But often moral learning and growth takes root in the space of moral imagination, where I or we realize that other, better choices were available to us through other and better patterns of reasoning, feeling and valuation. Here I or we may resolve next time to reason better and do better, to give more discerning sentences, to serve and protect our community more reliably, to hire more fairly or to vote more responsibly. Yet as part of an AI-driven decision system in which the reasoning is the machine-automated and opaque part, reducing humans in the system to mere inputs and passive messengers or recipients of outputs, here the critical space of moral reasons becomes constricted to the point of vanishing, and with it are lost possibilities for meaningful moral reflection, appeal, responsibility and imagination.

Now, while it's often impossible to design predictive algorithms and other AI decision support systems to be maximally fair and accurate across all criteria and contexts, many researchers have suggested ways to make their outcomes more accurate, reliable and fair: from attending carefully to undesirable if unintended social effects that might need to be mitigated, such as disparate impact on protected classes, to shifting the burden of uncertainty from impacted groups to decision makers, in order to incentivize AI designers and users to seek out better and more relevant data with which to train their algorithms.
As useful and important as such recommendations may be, they do not directly address either the broader social and ethical questions raised by algorithmic opacity in decision support systems, nor the specific concern I've raised about preserving the space of moral reasons for human beings. For even if designers and users can be incentivized to promote better, fairer social outcomes in the use of AI systems, this might still be done without any concerted effort to make the systems themselves more transparent to users or the public, or to foster personal and public engagement by human reasoners in decision processes of moral and political gravity. Ethically informed design and use of AI-driven decision support systems will therefore require more than fair and beneficial outcomes. Not even transparency alone will suffice if entry to the space of moral reasons is blocked by other means. Ethical use of AI decision support will in many areas require explicit social recognition of, and attention to, the intrinsic value of high-quality human engagement in moral and political thought and discourse.

In practical terms, that means asking new questions of every proposed expansion or new form of AI-driven decision support. Questions such as: what existing processes of personal and public moral reasoning does this system constrain or duplicate? What, if anything, necessitates or justifies these constraints or duplications? Because I grant that there will be cases when there is justification for automating a process that may even have morally significant dimensions. We can imagine cases where we have sufficient reason to automate a morally important decision. I will grant that as a possibility, maybe even a likelihood. But today we're not even asking what that justification is. Can the decision system be designed or used to integrate rather than constrain the space for human moral reasoning? Do we have to move the humans out of the space of moral reasoning in order to get the benefit of this automation? And if we don't have to do it, where and how can we integrate human moral decision making, and with what additional resources? Finally, how might the computational power of this decision system be used to make additional space for moral reasoning? Perhaps by building into software or institutional decision procedures discrete stages for human reasoning about the moral and political implications of the algorithm's inputs, outputs and effects, or better yet, about the weightings and design of the algorithm itself. So we could build decision systems to be initiators, hosts and mediators of personal and public moral reasoning. We could look to build AI decision systems that usefully elicit, track and highlight patterns of human moral reasoning: to alert us to emerging consensus, novel insights, pernicious tropes, fallacies, equivocations, key turning points. We might try to build AI decision systems that create more times and places for human moral reasoning to happen. Systems that invite more of the relevant stakeholders into that space. That provide an open digital record and public library of that reasoning. And that help hold us accountable for having done it. These possibilities today remain unexplored.
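As one very small sketch of what such a discrete stage for human reasoning might look like in software (a hypothetical illustration only, not an existing product and not the definitive design being proposed here): the algorithmic recommendation is held in a pending state until a named human reviewer records written reasons for accepting, modifying or rejecting it, and those reasons become part of an open, auditable record.

```python
# Hypothetical sketch: a decision-support wrapper that refuses to finalise an
# algorithmic recommendation until a human supplies recorded reasons. The class
# and field names are invented for illustration.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    case_id: str
    model_recommendation: str   # e.g. "deny bail", produced by an upstream model
    model_factors: dict         # whatever the model can disclose about its inputs
    reviewer: str = ""
    human_outcome: str = ""
    justification: str = ""
    timestamp: str = ""

audit_log: list[Decision] = []  # stands in for an open, queryable public record

def finalise(decision: Decision, reviewer: str, outcome: str, justification: str) -> Decision:
    """The recommendation has no effect until a human records their reasons."""
    if len(justification.split()) < 20:
        raise ValueError("A substantive written justification is required "
                         "before this recommendation can take effect.")
    decision.reviewer, decision.human_outcome = reviewer, outcome
    decision.justification = justification
    decision.timestamp = datetime.now(timezone.utc).isoformat()
    audit_log.append(decision)
    return decision
```

Nothing in this is technically demanding; the design choice is simply to treat the pause and the recorded human reasons as first-class outputs of the system, rather than as friction to be engineered away.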
In conclusion, while Asimov's Multivac scenario may not be in our immediate electoral future, very close cousins of it are already taking shape in many other areas of personal and public decision making about morally and politically significant matters of health, justice, labor, finance, education, family and community life. In more and more of these domains, the personal and public space of moral reasons is contracting as the power and projected social and economic utility of sophisticated machine algorithms expands. I want to acknowledge that the space of moral reasons has been constrained before and in other ways. Of course, by priests, by kings, by oligarchs and family elders who would gladly substitute their moral and political judgments for ours. And by bureaucrats who endlessly invent analog means of rendering such judgments opaque. But at the heart of the modern Enlightenment lies Immanuel Kant's urgent call to dare to think for ourselves. A call answered in part by the rise of modern public education and liberal democracies that sought to expand the space of moral reasons and its privileges to the greater share of humanity. And I think we see considerable evidence today, unfortunately in the United Kingdom particularly, of a retreat from that mission. Today we risk surrendering that inheritance to algorithms embedded in helpful AI agents that, unlike tyrants and oligarchs, appear to us not as self-interested oppressors, but as benign and neutral servants of our will. Yet in closing up the space of moral reasons by making human operations in that space seem increasingly superfluous, inefficient and unreliable, their impact on the moral and political maturity of humanity may be no less retrograde. Fortunately, this future is not set for us. Those who would fight to protect and expand the space of moral reasons have a long history of resistance to learn from. And those who came before us would tell us that the prize has always been worth fighting for. Thank you.

Thank you very much. We will take a short break for five minutes to get a chance to catch a breath. For those of you online, if you have any questions, please use the Q&A tab to type your questions in. For those of you that have questions in the audience here, Tresha will be running the mic for us in a few minutes.

I remember as a schoolboy picking up a book in the library by Asimov, and he talked in the book, I forget the name of the book now, but he talked about coal, King Coal, and how it was driving the machine world, how machines were being invented and coal was fuelling these machines. And he talked in the book about this machine world, this industrial world that we lived in, this revolution that was taking place just before the First World War, or just after the First World War. And then we had a change after the First World War, a switch to oil and gas. Now I remember when I worked in the Middle East, I was working with a lot of computer software. It started off with cards and then it progressed to stuff you could handle. Now in the oil and gas industry, they have all kinds of algorithms to pinpoint oil and gas reserves and how much is estimated.
But aren't we just seeing a kind of evolution into the future, a future that we cannot predict as human beings and that we're just building around us? We're seeing this future and change happening in our lives, our life journeys.

It was a little difficult for me to hear the question, so I'm going to answer the question that I thought I heard, and you can correct me or ask a follow-up question if I misheard you. But when you talk about evolution, I think that is a slippery metaphor for what we're seeing. The evolution of technosocial systems is not like biological evolution. It doesn't have, by any means, a sort of predetermined direction of travel. It is steered at every step by human moral judgments and exercises of power. And there's nothing inevitable, as I mentioned in my earlier response, about its trajectory in any given case. In fact there are any number of examples of technologies that people promised were just the next evolution of X, Y or Z, encouraging us to think that these things were impossible to turn back or couldn't fail. I think, for example, of some of the particularly wasteful examples of investment in technology that have come through things like Elon Musk's promotion of the Hyperloop concept and the race to commercialize space and make space tourism a thing. Take Virgin Galactic: it was just yesterday that someone was talking about how much money they're bleeding and the near impossibility of this being anything like a financially viable business model. Hyperloop is basically a commercial failure. These things five years ago were being talked about as the inevitable evolution of transportation. And instead of money going into things like high-speed rail and sustainable pedestrian- and bike-friendly infrastructures, money is getting set on fire to build things that don't work or that nobody wants. There's nothing about that that looks like evolution. So I want to resist that narrative that the future will evolve necessarily as it should or must. It will go in any number of directions which we will be responsible for. And we may not all of us be empowered to shape that trajectory wisely, but that doesn't relieve us of the responsibility to do everything we can to exercise that responsibility.

I think there's a question from the gentleman in the next row back.

The prospect of increasing numbers of important decisions being made in an opaque manner is not an appealing one, but one way of counteracting that might be the idea of a right to an explanation. And I understand that to some extent there have been attempts to make that a legal right, perhaps in credit risk scoring and so on, in another age. Indeed, the European Union I think moved some way towards it a few years ago with the GDPR regulations. What's your view of that?

Yeah, that's a really great question. And I'll just say I'm working on a large project, actually two large projects, funded by UKRI on trustworthy autonomous systems, where I'm working alongside computer scientists and roboticists and others who are thinking about these questions, both from a technical side and from a sort of legal and moral side as well. So making AI systems or decision outputs explainable, even in a limited way, can be a really important way of addressing some of the concerns that I've raised here, no question. And we need to continue to pursue that kind of research. 
However, there is, I think particularly among lawmakers, a misunderstanding of what it means to generate an explanation for some of these systems. And I think the concept that I'm using here about the space of moral reasons is pertinent. There are certain kinds of explanations that we could give of how a particular model arrived at a particular score or output as the result of different kinds of weightings. We might even be able to give a counterfactual example that says, well, if you had changed these two variables, the decision wouldn't have been affected. So that's an explanation that the decision wasn't based on, let's say, race or gender. There are these kinds of limited explanations that we might be able to give. But not all explanations are interpretable in the space of human reasons. There are many kinds of explanations of what a model did that give us nothing useful on moral or political grounds, in terms of justification or in terms of being able to situate it against the values and other normative commitments that we want to live by. So some of these kinds of explanations are technical explanations that don't actually solve the problem that we have. So what I would say is that the right to explanation is not a silver bullet. If you made that a legal requirement, in some cases it would be impossible to satisfy. And in other cases, it could only be satisfied in ways that would be morally and politically of little value to us. But there are some limited cases where that can be one really vital tool in the toolkit for ensuring that the space of moral reasons is not closed off to us by the implementation of these systems.

I think that maybe ties into one of our virtual questions. How often do we ask moral questions of AI?

That's a really interesting question, because there have been some attempts to do what you might think of as a kind of moral crowdsourcing, by using AI decision systems that are trained on some data bank of moral judgments that people have made in particular situations. I won't call out the particular experiments that have attempted this. There have been a couple. They've come in for a fair amount of heavy criticism, for a couple of reasons. In general, moral judgments aren't the kinds of things that are best left to a majority opinion, especially a majority opinion where people are just asked to give a snap judgment without going through that reasoning process. So a lot of the attempts to train a moral machine to give the kinds of moral answers that humans would give are based on data that don't reflect any kind of moral reasoning at all, but are more like those fast moral intuitions, those snap moral judgments that I was talking about. One example of this looked at asking a trained system to predict what would be the right answer if you were an autonomous vehicle driving and you had to decide whether to hit an elderly person in a crosswalk or swerve and potentially hit a woman pushing a stroller, or something like this. First of all, it was a very crass way of designing the problem, but it also was basically asking people to just give their unreflective judgments of whose lives matter more. We know not to trust, or we ought to know not to trust, those kinds of judgments for many reasons. 
Those are exactly the sorts of judgments where we discount the value of disabled persons, or we discount the value of people over the age of 50, which I now am, and I'm quite sensitive to the fact that there are many policymakers in particular who don't think that some of us have as much value as others. And so my point is that many of the attempts to ask moral questions of AI systems are based on AI systems that are trained on exactly the kinds of moral judgments we don't want to be making. And the reason why is that you can't train them on moral reasoning. There is no semantic understanding of moral reasons that we know how to import into a dataset or a machine learning algorithm. All that it can do is make a statistical prediction based upon definite discrete values that are fed into it. And so we can tell it what people would have decided with respect to whether something was right or wrong to do, but we can't tell it why those were the right decisions or not. And so there really is at this point no way for these systems to do anything but replicate the frailest and weakest of the kinds of moral decision making that we ourselves do.

Before we go there, please, Trisha, I'm going to take another question online. At the end of Russell's Reith Lectures, the emphasis was on deciding what kind of society we want for ourselves. This seems very like your emphasis in the moral domain, but how do we agree on common moral values?

Yeah, and that's a really important question, because we have to ask who's the "we" in that sentence. Who's the "we" today that's deciding what the future looks like? Is the "we" that today has the power to shape where technology takes us in fact representative of the broader human family and its interests? We know the answer to that question is no. We wouldn't be setting piles and piles of money on fire to try to colonize Mars while our own planet is burning if, in fact, the "we" who are deciding what the future looks like with technology were representative of the broader interests of the human family. The vast majority of us would be left behind to burn, even if we were able to colonize Mars, which, if you know anything about the difficulties involved... I'm a bit of a space nerd. I've talked to a lot of space nerds who would love to terraform Mars, and they will tell you we absolutely cannot do that, not in any lifetime that would be meaningful, given the resource challenges that we have right now. It is a pipe dream. And maybe there will be some future era of human civilization in which that will be possible, but we will have to save ourselves from our current planetary crises in order to ever get to that stage of technological development. And so the question I think I want to turn back around to the questioner is this: we first have to begin to think about who is getting to decide what the future with technology looks like. And we have to have a more equitable and just distribution of the power to make those kinds of decisions, about the kinds of futures that people could build with technology. If we had a broader moral imagination for what technology could do, with our moral commitments leading it, we haven't even scratched the surface of those possibilities. And in fact, any time anyone actually develops an AI system that does something that's unequivocally good and unequivocally beneficial, people treat it like a unicorn, like this rare beast. It should be the opposite. 
It should be that everything that we're doing with technology is clearly something that helps us flourish together over the long term. And right now the balance is tipped the other way. What we're often using AI to do is to accelerate the processes that are already tearing us apart and impoverishing us. And what I want to do is not get rid of AI. I want to turn AI into the sort of thing that is reversing those trends, as opposed to accelerating them. I'm talking about things like political division, which algorithms and various kinds of exploitation of their power are aiding. I'm talking about climate change, which wasteful uses of data and machine learning, and things like cryptocurrency and Bitcoin mining, are accelerating. These sorts of things are accelerating the forces that are threatening to tear us apart and prevent us from flourishing together in the ways that better kinds of technologies could enable.

First one. Thank you for a wonderful talk. You, early on, said that these decision support systems aren't intelligent. How dangerous do you think it is that artificial intelligence has become the term? Does it incline us to trust them, and is it too late to change it?

That's a great question. I actually have been kind of around ever since this particular wave of AI started, about 10 or 15 years ago. Take IBM, back when it was promoting Watson, which right now is commercially considered kind of a sad story and a disappointing outcome; but back in 2015 there was conviction that IBM's Watson was going to be the thing that would take the place of our doctors, take the place of lawyers, and automate all these important decisions for us. But what was really interesting is that IBM at the time was pushing against the artificial intelligence description for Watson. They said, no, that's not what Watson does, and that's not what we should want it to do; we should think about it as augmentation of human intelligence. They pushed really hard, and they lost. They got beat by people who recognized that the AI term would capture, first of all, the hearts of all of the science fiction nerds like me who have grown up reading stories about AI and love thinking about what real AI would be. So there's an automatic affinity for the idea of AI that a lot of people have, and what we're being sold today, we're being told, is the prototype for that kind of AI where we have, you know, machine minds. And it's just not. It is not the early stages of what we call artificial general intelligence. It's on a totally different track from whatever might take us to something like artificial general intelligence or human-level AI. We haven't seen that; we haven't even seen the prototype of that yet. But it's very commercially powerful and seductive. So, in a sense, yes, I think we've lost that battle. If IBM couldn't win that battle, I don't know how the rest of us are going to turn things around. But what I want to say is that we can hold on to the understanding of what real intelligence is, and frankly, I don't even think intelligence is the word I want; I tend to use the word wisdom more. There's a certain kind of intelligence that I can grant to someone on whom I wouldn't turn my back for 30 seconds, right? A sociopath can be highly intelligent. 
And in fact, you can think about AI systems as comparable in some ways to sociopaths: that is, they lack affect, they lack moral understanding, and they have no ability to enter into the space of moral reasons with us. That's true of sociopaths as well. Fortunately, AI systems also tend not to be embedded with the destructive impulses, or the lack of self-control, that we often see in sociopaths, right? So there isn't an inherent malicious instinct or impulse in these kinds of systems, and they're far safer in that regard. But we might just turn things around by, instead of valorizing intelligence as a kind of narrow execution of a route from a problem to an optimized solution, talking about wisdom, which is a far richer concept, one that people have thought about for thousands of years. And I think we've let it slip out of common understanding and discourse; when you use the word wisdom, people look at you like, what? And I don't think we should accept that. So maybe that's the best answer I can give to your question.

That was a great question. I'll pass you to our last question. Thank you. You mentioned Isaac Asimov: he came up with the three laws of robotics, all about ethics, and then explored in many books how, even when you've got this simple framework, there are so many ways for it to fail. And it's easier to describe than you think. Tonight I think you've told us it's not just about coming up with the framework. There's all this opacity, and the fact that the machines can't come into our space of reasoning unless we design them to do so. Clearly we face challenges. Clearly there's a lot of work to be done in this area. I think you've made it very clear for us. Thank you very much.

Thank you so much.