So I've titled this presentation Beyond the ELSA Lab, because I'm continually involved in discussions about ELSA labs and about how we should collaborate in setting them up. I find them an important contribution to the AI research that's going on in the university and in the country more broadly. They also stimulate me to think about what can't be done in ELSA labs, whose job it is to do that work, and where it fits within the university. So I'd like to offer some observations on that, and hopefully people can give feedback on what they think.

So what makes AI a positive force in society? There's been a lot of discussion about this, obviously, over recent years with the huge growth in AI research. But predominantly we see these discussions tending to focus on technical and formal kinds of solutions: debiasing models, debiasing data, legal compliance, and obviously principles of fairness, accountability and transparency, as in FAccT (formerly FAT*), an ACM conference series that I've been involved in helping to organize. All of these focus on how to formalize solutions for good AI, particularly around issues of transparency, fairness and accountability.

And then last month a really interesting report was put out by the European digital rights organization EDRi, written by two scientists, on the idea of going beyond debiasing: the idea that AI matters for justice in ways that go far beyond bias or the lack of bias, and that consequently, when we think about debiasing, which is one of the primary focuses of the policy and regulatory conversation right now, we really don't get to the heart of what we mean when we say good AI.

Why is that? Well, here's an example from the Toeslagenaffaire, which doesn't involve AI at all but which illustrates what I mean when I talk about fairness versus justice. I have a quotation from Van Ettekoven, who is the most senior bestuursrecht (administrative law) judge at the Raad van State. He recently appeared on our television screens and apologized for the line that the Raad van State took on cases brought by the victims of the Toeslagenaffaire. And he said something very interesting. He said: administrative law judges have a responsibility to provide steadiness, that is, legal unity and legal certainty; we drive in marker posts when we handle these cases, because that is important, and we then hold fast to the chosen line.

The point he's making here is that in order to follow through on what the Raad van State saw as a responsibility to be fair to people, to be transparent, to be predictable, all of which are important principles in law, they kept, in a way, to the line of greatest resistance, by not believing any applications for redress from Toeslagenaffaire victims that were in any way capable of being brought into doubt. So if there was any vagueness in any of the applications, if there was any uncertainty, then people would be disbelieved. And what Van Ettekoven said was: we should not have disbelieved them; we should have erred on the side of justice, of believing people who were saying en masse that they had been wronged, rather than trying to keep the line of formal fairness on the part of the law. This is an interesting comparison with the way that we currently talk about AI, fairness and justice.

Here's an example from our own world in Dutch universities recently, where the VU and the UvA got together with the company Huawei from China to launch a new lab for developing recommender technologies for search engines, called the Dreams Lab.
Now this was interesting because it met with a lot of criticism, and, full disclosure, I was one of the people bringing some of that criticism, as part of a group that thinks about research funding in the Netherlands. The problems that we and many, many others raised were, first of all, that Huawei was proactively involved in developing the technology that's involved in the genocide of Uyghurs in Xinjiang province in China. Second, that researchers would be employed within the lab and therefore would contribute, however independently, to Huawei's business model; the existence of this lab would increase Huawei's profits, and thus its ability to contribute to China's genocidal policies. And third, an independent critique was also brought that the collaboration would produce proprietary software that would not be open, and therefore would also feed into Huawei's business model rather than into AI development per se.

Now what happened next was very interesting, because there was a response from the lab's organizers on completely different issues. They said: no, no, no, you can't bring this critique, because our scientific integrity is preserved; we're free to publish whatever we wish, and Huawei can't influence our research. Second: we check on national security issues with the intelligence services, and the Tweede Kamer has okayed this collaboration on the basis of its geopolitical implications. Third: the data will be localized in Amsterdam, so no data will be going from the Netherlands to China to be misused. And finally, and this was the most interesting thing for me, they said that a derailed societal discussion is the real risk here. So if we discuss this in a pluralistic and democratic fashion, there is a risk that the lab won't go forward, and that is something we should all be very worried about.

So this was interesting primarily because the critique was on the basis of human rights, and the response was on the basis of legal compliance, national security and scientific integrity. These two worlds weren't speaking to each other very clearly, and we hadn't managed to get a dialogue going until very recently, actually, when we managed to bring these concerns together and really have a conversation about it.

Here's another example. This is Timnit Gebru; some of you may recognize her. She was until recently a member of Google's AI ethics research group, and she got fired for attempting to bring to the FAccT conference a paper called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?". The criticism that Timnit brought in that paper with her co-authors was that large language models are problematic, but not for the reasons that she was employed to think about as an AI ethicist at Google. She said they're problematic for two reasons. One is because they constitute unsustainable computing practices which have implications for climate change, and those climate change effects will be felt by people who do not share the benefits of the computing that will go on: basically, those impacts will be felt by the poorest and least powerful countries and populations on earth, who also have the least technology access. This is a problematic disjuncture. And second, she said that by bringing massive linguistic corpora together to create these large language models (so basically Google reads in public-facing data from the internet, and from corpora collected over decades),
you are reflecting the world through these models in a way which will only reproduce the inequities already contained in public speech. Public speech on the internet is very often discriminatory, but it also reflects a world which is full of structural discrimination and structural problems of injustice. Large language models will necessarily reflect that world, and it is impossible to debias the data when the data is on this scale. So these language models are problematic for reasons that go beyond classic ethical preoccupations with regard to AI; they're problematic from a justice perspective that is global, and that is historical.

These examples are interesting because they bring up certain blind spots in AI ethics that we would all do well to think about. For example, the idea of upstream problems: there are structural inequities and injustices in the world, as Timnit pointed out, that models and applications will then reproduce or amplify. One classic example of this is the Tay chatbot created by Microsoft a few years ago, which had to be taken down within a day because it started reflecting, exactly as Timnit Gebru said large language models would, all of the discriminatory and violent language that was present on the web. There are also downstream problems, where the effects of models may not be visible to lab scientists: for instance, as Timnit pointed out, the climate change impacts of large language models. Third, the problem of innovation bias, which the EDRi report is very good on if you want to check it out: the assumption that optimization is actually appropriate for certain high-stakes functions. I'll come back to this in a moment. And there are institutional governance problems that become labeled as AI-related but are not in fact AI-related: for instance, the fact that the University of Amsterdam and the VU did not have human rights due diligence going on, and therefore completely missed the real problem with the Huawei collaboration while they were looking at compliance, security and data localization.

So, to finish up: if we want to move from issues of, basically, debiasing to concerns of justice, we might want to think about specific applications of technology and use these as benchmarks for where we might want to draw red lines, and for rethinking and reframing the questions that we're asking of artificial intelligence. One area, and I just mentioned this, is high-stakes interventions where challenges to the system are either impossible or ineffective. One example, as I put up earlier, was welfare decision-making and welfare allocation. Another is migration and asylum processes, where we're now using lie detection AI, for instance, among many other things; age detection AI is also present, and all sorts of biometrics. These are problematic because they focus on populations who are not going to be able to push back, who are not going to be able to make claims if they're treated unjustly, and who are not necessarily going to be able to know what the AI is doing, or to push back and make claims, as I said. Second, interventions that contravene scientific evidence. Here we come back to lie detection AI, which is a misnomer; it's actually phrenology. It is completely scientifically unproven, and there is no current evidence that lie detection is ever actually going to become reliable in the court system. Third, systems that optimize for what we might term illegitimate outcomes, for instance social media algorithms that polarize public debates.
And these can't really be adjusted not to do that, because they belong to what are essentially advertising platforms, where you want to get the most eyeballs and the most attention on content possible, and to make people stay online for as long as possible. It's well known that people stay online for longer when they're in a condition of outrage than when they're happy, so polarizing public debates is highly profitable. We might want to think about whether it's just, whether it's permissible, to optimize for that. And lastly, systems and infrastructures that impact negatively on our collective future. Here unsustainable computing is one example I've already talked about, but there's also precaritization in labor markets, and there's the marketization of things such as healthcare, which probably shouldn't be marketized, or which become dysfunctional when they are marketized, and various other areas that I think a lot of us can think of. I will stop there and ask for your questions.