I'm Policy and Advocacy Manager and Project Manager for SustAIn at AlgorithmWatch. AlgorithmWatch is a non-profit research and advocacy organization committed to watching, unpacking, and analyzing automated decision-making systems and their impact on society. With SustAIn, we look at the impact that AI systems may have on society and on individuals in a sustainability sense. What I want to do today is give you a short introduction to the project and what we are doing. But I'm also really interested in hearing your perspective from the free and open source software community on the sustainability of artificial intelligence, and I really want to exchange ideas with you.

When I say "we", I mean AlgorithmWatch and our two partners: the IÖW, the Institute for Ecological Economy Research, and the Distributed Artificial Intelligence Laboratory at the Technical University of Berlin. We are funded by the German Ministry for the Environment. When we started this project, we could see an uptake in the discussion about AI for sustainability: how we can use AI or machine learning to arrive at more sustainable solutions, to build monitoring systems for sustainability issues, and so on. But we saw much less discussion about the sustainability of AI, even though we are becoming aware of the huge energy costs of AI systems, of bias in data sets, of the de-qualification of workers when AI systems enter the workplace, and of strong market concentration built on AI systems. So we are aware of all these problematic developments, but this lack of discussion about unsustainable developments also means that we don't have a discussion about how to improve things. And that is what we are aiming for.
We want to start that discussion and raise awareness of how we can make AI systems more sustainable, because there is huge potential here, and we need this discussion in order to benefit from it. From the examples of problematic developments I just gave, you probably already got the idea that we are not only looking at ecological sustainability. We follow a nested dependency model of sustainability, which means we look at ecological, social, and economic sustainability together. Overall, and this is our working definition: sustainable AI should respect planetary boundaries, because ecological sustainability is the basis, but it should neither endanger social cohesion nor aggravate problematic economic dynamics.

As I said at the outset, at the start we didn't even have an idea of what sustainable AI could mean, how we could measure it, or how we could go to an industry partner and say: this is sustainable AI, this is not. So in a first step, my colleagues reviewed the existing literature and did a meta-analysis of what indicators for measuring sustainable AI in an ecological, social, and economic sense might look like. Again, this is based on the nested dependency model with its three dimensions. Overall, we were able to identify twelve criteria across these three dimensions, and under each criterion you find a number of indicators, which we are now using for operationalization, to make the sustainability of AI measurable. To give you some examples: in the ecological dimension, we look at energy consumption, for instance.
Are there methods in place to optimize energy efficiency, for example by compressing models or through small-data approaches? Is indirect resource consumption considered: is the hardware sustainable, is it being reused or recycled, and what kind of energy mix is used in the data centers?

In the social dimension, take non-discrimination as an example. We would ask: for this AI system, do you identify potentially marginalized groups, and do you actually test specifically for them? There is also cultural sensitivity: are your systems designed to allow for retraining, so that they can be adapted to data sets from different cultural contexts and become more culturally sensitive?

In the economic dimension, we would ask, for example, how the implementation of an AI system changes working conditions for the people affected. Does it lead to de-qualification of workers, or does the organization offer retraining for workers affected by an AI system once it has entered their area of work? These are some of the criteria and indicators, just to give you a first glimpse.

Now I want to talk more concretely about the potential of free and open source software for the sustainability of AI, and I want to give you three examples. Again, from the ecological point of view, we can look at energy consumption. From a FOSS perspective, I think an interesting point where things align quite nicely is that we ask whether energy consumption is considered at all in the development process. And I think making pre-trained models available would be a great way to reduce energy consumption.
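The "test specifically for marginalized groups" indicator mentioned above can be made concrete with a small sketch: compute an evaluation metric separately per group and look at the gap, rather than reporting only one aggregate number. This is a minimal illustration, not our assessment methodology; the group labels, predictions, and data are entirely hypothetical.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Accuracy computed separately for each demographic group.

    `records` is a list of (group, prediction, label) tuples;
    the group names here are purely illustrative.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(records):
    """Largest accuracy difference between any two groups --
    a crude proxy for disparate performance."""
    acc = per_group_accuracy(records)
    return max(acc.values()) - min(acc.values())

# Hypothetical test results: the model is right 3/4 of the time
# for group A but only 1/2 of the time for group B.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0),
]
print(per_group_accuracy(records))  # {'A': 0.75, 'B': 0.5}
print(max_accuracy_gap(records))    # 0.25
```

An overall accuracy of 62.5% would hide exactly the disparity this indicator is meant to surface; the per-group view makes it visible.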
So this is a potential I see in the free and open source software community: making pre-trained models available. This might even be more resource-friendly than just releasing the training code or the training data under free licenses, because training time is still expensive and resource-intensive.

In the social dimension, we can look at the criteria of technical robustness and human oversight. Here we are interested, for example, in ensuring data quality: are data sets representative, fairly recent, complete, and so on? Open data sets obviously lend themselves better to auditing, so we have better oversight and a better basis for actually auditing data sets.

In the economic dimension, looking at market concentration and innovation potential: closed data pools lead to more market concentration, so open data pools contribute to a fairer market and promote innovation. They prevent lock-in effects, because more competitors can develop AI systems and offer comparable services when they can make use of open data pools.

To conclude these FOSS sustainability potentials: in my view, there are some alignments. Some principles of free and open source software align with our sustainability indicators, but many indicators are not specifically addressed by FOSS principles, and I'm really interested to hear your perspective on this. Do you see further potential when you think about the indicators and criteria we have identified? I would also be interested in your perspective on how to promote the sustainability of AI; we are aware that we need to raise awareness and also address the regulatory level. So, I'm really happy to get into a discussion and exchange, and happy to answer any questions you might have.
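The resource argument for sharing pre-trained models can be illustrated with a back-of-envelope energy estimate: device power times device count times wall-clock time, scaled by the data center's power usage effectiveness (PUE). Every figure below is an illustrative assumption, not a measurement from any real training run.

```python
def training_energy_kwh(gpu_count, gpu_power_watts, hours, pue=1.5):
    """Rough energy estimate for one training run, in kWh.

    Multiplies per-device power draw by device count and duration,
    then scales by PUE to account for data-center overhead.
    All parameters are hypothetical inputs for illustration.
    """
    return gpu_count * gpu_power_watts * hours * pue / 1000.0

# Hypothetical run: 8 GPUs drawing 300 W each for 120 hours.
one_run = training_energy_kwh(8, 300, 120)
print(round(one_run))  # 432 (kWh)

# If 50 teams reuse one shared pre-trained model instead of each
# training from scratch, 49 redundant runs are avoided.
saved = 49 * one_run
print(round(saved))  # 21168 (kWh)
```

The point is not the specific numbers but the multiplier: every redundant from-scratch training run repeats the full energy cost, while a model published once can be reused many times.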
Thank you for the presentation. I just had a question. You said FOSS doesn't align with the principles of sustainable AI. Could you give an example or elaborate further on where those discrepancies might be?

Yes, I probably didn't express myself very well if it came across like that. With the three examples I showed, I wanted to show that some FOSS principles align by default with the criteria we identified. For those, we can say a sustainability criterion or indicator is checked simply because it is a FOSS principle; they are the same thing. For other criteria, it does not hold by default: FOSS principles do not by themselves mean a sustainable AI system. For example, we have one criterion concerning bias where we...

I'm sorry, I'll have to interrupt you, but our time is unfortunately up. Maybe you can take ten seconds to wrap up your answer. Thanks a lot.

To wrap up: we are thinking about identifying stakeholders to include in the development process, and this is not necessarily a FOSS principle, but it could be part of a project. Thank you for the question. Thank you.