Welcome to our OpenTalk series. My name is Teri Zatzuga and I'm the research lead of the AI and Society Lab. I lead an interdisciplinary research group that wants to find out how AI can serve the public interest. Pretty early in our research we understood that this question is so big and complex that we cannot solve it entirely by ourselves. So in this series of conversations we speak to people who bring in their experience and their research, to explore the limits and the potential of AI to serve the public interest. We really hope you enjoy these conversations.

I will talk about automated decision-making systems in the public sector, briefly about legal aspects and mainly about ethical aspects. I said automated decision-making (ADM) systems because we usually prefer that term over AI: it gives the system less power, in the sense that we strongly believe the final decision should always lie with a person.

We looked at opportunities for ADM in the public sector. The first that always comes to mind is an easier workflow within the administration. This is good for the people working there, because it makes things easier and more efficient, and it is good for the budget of the canton, because greater efficiency costs less money. But there is also a better quality of service: if I, for example, as a working mother apply for childcare support from the canton, it is very convenient if I can do this on the weekend or in the evening, and not just during the office hours of the public administration, which usually collide with my working hours.

Of course there are also challenges, and we looked at those as well. The big one is bias and discrimination, which need to be avoided. Then there is missing comprehensibility.
This is a problem for the people living here, because you do not always know what those systems do, or often not even when such a system is in place or how it works.

I will briefly say something about the legal framework. It is mandatory that you are informed when your data is being used; this is also in the law for the Canton of Zurich. The problem is that there is not always a law that guarantees this for ADM systems as well. This may be different in other parts of the world, but it is definitely something one should look at before deploying ADM systems in the public sector, because this needs to be guaranteed.

Then there is the question of transparency: there should be a public register of ADM systems in use in the public sector. Based on the legal framework in place, there is also a recommendation against full automation, because every decision taken within the public administration should always be taken by a person. This is a matter of accountability, which needs to be resolved: who is accountable if a chatbot gives out information that is not correct?

Now I'll talk about ethical questions. As you probably know, there have been numerous recommendations on how to deal with AI in the public sector. What they all share is that they give valuable advice for an ethically acceptable use of these systems. But some of these frameworks, we feel, perform some kind of calculation of something like an ethics score or a label. Many of these are based on questions that use categories such as "sometimes", which are very subjective. Another problem that we see very often is that the assessments of these systems are usually ex-post assessments: they happen when the system is already in place and live, at a point where usually not much can be changed about it. They are also very often a snapshot: the system goes live, it is assessed at that point, in the situation we have then,
and the assessment will not cover anything that has happened before.

So for our approach, we suggest accompanying a project already during the planning and test phase and also during operation, not just at the point where it goes live. We do not produce scores or labels; we try to react to potentially difficult aspects or problems as they appear. For that, we write a transparency report that documents every consideration made during all the different phases of a project, so that all the decisions and discussions are recorded somewhere, can be reconstructed, and are available for questions.

Another very important point concerns transparency, a word you hear everywhere whenever you discuss anything in this area. For us, transparency is not the goal in itself; it is a means to an end, something that helps us ensure that we meet the ethical principles we would like to meet.

Our open talks are open for collaboration. Contact us to get involved.