Welcome to our Open Talk series. My name is Teri Zabzuga and I'm the research lead of the AI and Society Lab. I'm leading an interdisciplinary research group that wants to find out how AI can serve the public interest. Pretty early in our research we understood that this is such a big and complex question that we cannot solve it completely by ourselves. So in a series of conversations we speak to people who bring in their experiences and their research to find out the limits and the potential of AI to serve the public interest. We really hope you enjoy these conversations.

When I introduce myself to people who work in different spheres and have nothing to do with our kind of bubble, the reaction to "algorithms for the common good" is: what does that mean? But if you give them a concrete example, they say: oh, this is so interesting. That's a real problem, we need to solve it. And can algorithms really work? This is why we think it's really important to work from the details up to the bigger picture.

But first, a bit about me: I have worked at the Bertelsmann Foundation for a little over a year. My background is in economics, and then I did public policy. So I am not so strong on the tech side, but more on the question of how tech and society connect. And yes, I'm very happy to show you a little bit about our work.

The kindergarten, or actually the distribution of childcare places, is a huge problem in Germany. Everybody knows somebody, especially in Berlin, who went crazy over the question of how to find a childcare place. The question is: can algorithmic systems help make the distribution of daycare places more efficient? But I think even more interesting is: can they make it more fair? If you have only a scarce number of places, you can think about how you distribute them. It is so unbelievably inefficient how these childcare places are distributed that we need to find a solution.
First of all, it's a very societal problem; it's really about people getting together at a table. You really have to find criteria and you need to make them explicit: what are the criteria that make a child more eligible for a place? Once you have criteria that count not only for one Kita but for several Kitas, like all of the surrounding Kitas, then the parents can submit their preferences. And then the matching algorithm comes into play and sorts out the lists by which the Kitas should give out their free places.

Lastly, there is the question of how this can actually lead to more fairness. They really have to figure out a criteria catalog, and that means it's just more transparent. If all the Kitas, the kindergartens, use the same criteria, it's also easier for parents to check why they did or did not get a place. Mostly they will ask why they did not get a place. And it can be less discriminatory: if they change something in the hierarchy of which children get accepted, they will have to explain why. So you can make changes to how the algorithm sorts out the priorities, and there can be good reasons, but you have to explain that in the room where the criteria are set. That can also lead to less discrimination.

How could you communicate and create trust in the algorithm if the decision criteria were set by machine learning and not by actual human beings?

Super interesting question, because people's attitudes toward what AI should or could do in society are exactly our field of interest. What we learned is that it's a lot easier to communicate about this case because it's such a simple algorithmic system. There's no AI in it. Some people really like to write that this is AI, because "AI in the kindergarten" is such nice wording, but it's just wrong. There's no AI in it.
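The pipeline described above, an explicit criteria catalog giving each Kita a priority ranking, parents submitting preference lists, and a matching algorithm handing out the free places, can be sketched in code. The transcript does not name the algorithm used, so this is a minimal illustration assuming a deferred-acceptance (Gale-Shapley style) matching, which is a common choice for such allocation systems; all names and data here are made up for illustration.

```python
def match_places(preferences, priorities, capacity):
    """Assign children to Kitas via a simple deferred-acceptance matching.

    preferences: child -> ordered list of Kitas (the parents' wishes)
    priorities:  kita  -> ordered list of children (from the criteria catalog)
    capacity:    kita  -> number of free places
    Returns a dict mapping matched children to their Kita.
    """
    unmatched = list(preferences)              # children still seeking a place
    next_choice = {c: 0 for c in preferences}  # index into each child's list
    accepted = {k: [] for k in priorities}     # tentative acceptances per Kita

    while unmatched:
        child = unmatched.pop(0)
        if next_choice[child] >= len(preferences[child]):
            continue  # child has exhausted their list and stays unmatched
        kita = preferences[child][next_choice[child]]
        next_choice[child] += 1
        accepted[kita].append(child)
        # The Kita keeps only its highest-priority children, up to capacity;
        # displaced children go back into the pool and try their next choice.
        accepted[kita].sort(key=priorities[kita].index)
        while len(accepted[kita]) > capacity[kita]:
            unmatched.append(accepted[kita].pop())

    return {c: k for k, kids in accepted.items() for c in kids}
```

Because acceptances stay tentative until everyone is placed or has run out of options, a child who applies late is not penalized for timing; only the shared priority criteria and the parents' own preference order decide the outcome, which is exactly the transparency argument made above.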
And we always say that, and sometimes we're heard and sometimes we're not. But from our experience, it's a lot easier to get people acquainted with the idea of using an algorithmic system in such a fragile environment if it's understandable. This is why we can always say: it's not the algorithm, it's you, and you decide what's happening. That makes it a lot more communicable.

Did you evaluate whether the algorithmic system is more efficient and fair?

On efficiency, I think the answer is yes, because it's kind of inherent to the algorithmic system itself. They thought about efficiency from the economics side when they implemented it, and they had really concrete demands for what the system had to do, and that was to create more efficiency. But talking about fairness always means looking at the status quo, and the status quo is completely unfair. It's unfair on so many levels. If there's just a teeny tiny bit more fairness because you lower the bar for how complicated it is to actually apply, that's just a really tiny thing you do in that system. But if that helps people get into the system more easily, then let's start there and see how else it works.

And maybe the other part, the broader space, is political. If you set up criteria, it's a very political question what your preferences as a society are, and this is of course a local, societal question. It's all about values. You might say, for example, that families that are poor, in the sense that they receive something like Hartz IV benefits, should get a place in the Kita no matter what. This could be a political decision or a societal decision; I don't know whether that would be lawful.
But you could say that this is our priority, and all the other parents who work full time but have a double income will have to find a different way. That's a political question, and it's about values. And this is where fairness is also decided. I think that's where it gets really interesting, because that's a societal question, and technology is really just one way of fixing one little part of it. The questions themselves are very political and societal.

Our Open Talks are open for collaboration. Contact us to get involved.