We are live. Hey, hello everyone, and welcome to the end-of-year conversation with trustees and staff of the Wikimedia Foundation. My name is Shani Evenstein Sigalov. I'm the vice chair of the Board of Trustees and the chair of the Community Affairs Committee, which is usually hosting these events for the community and with the community. Thank you to everyone who's joining us today and to those who will be watching later on. These calls are essentially for you, the community. It's a channel for us to share what the board is doing more transparently so you are up to date with our work, answer questions, have discussions on issues that are on people's minds, and basically get to know each other better and provide better communications between all the different stakeholders in our movement. I'm passing now to Elena to share a bit more about how the call will work exactly.

Thanks so much, Shani. Emerald, maybe we can take down the welcome slide. Perfect, thank you so much. Hi everybody, I'm Elena Lappin, lead movement communications specialist with the Wikimedia Foundation. As always, the call today is gonna be a 90-minute call, roughly broken down into an hour of topical updates and conversations on those topics, with question and answer on those topics, and then 30 minutes of open question and answer at the end of the call. So as we go, we'll have a section of updates on a particular topic and then Q&A on that topic, followed by the next topic with topic-specific Q&A, like that until we get to the end, where we can ask questions about anything pertaining to any of the topics discussed today or anything else that you might have on your mind. We encourage participants to ask questions throughout, here in the Zoom chat or in the YouTube chat. We're gonna be monitoring and queuing up those questions for both the topic-specific Q&As and the general Q&A at the end of the call. If you're here in Zoom with us, and I know there are a number of people here with us, welcome. We encourage you to speak live if you want to. If you decide you do want to ask a question live and unmute, you can raise your hand here in Zoom and we will add you to the queue and call on you so that you can unmute and ask your question live on the call. We do have a number of pre-submitted questions for this call. In order to get to as many questions and answers as possible, I've consolidated some of those pre-submitted questions. In consolidating questions, I always try to capture the full intent, the full meaning, the full spirit of the question, and try to ask it in a way that will get you the answer that you're looking for. But if you have any follow-up questions, feel free to ask those in the chat and we can get to those as well. So I just wanted to underscore that while you might not hear your question asked in the exact way you submitted it, I did really try to make sure that we got to the heart of every question. As always, if there are questions that we don't have time to answer during the call, we're gonna post those answers on the event Meta page afterwards, and we'll also be posting a call summary on the event Meta page so you can read up and share that with whoever you'd like. The meeting is being recorded. It's gonna be available on YouTube at the same link that we publicized for the stream, the same link that those of you watching are already streaming from live right now. And then we'll post the recording on Commons afterwards as well.
This meeting is covered by the terms of the Universal Code of Conduct, and we look forward to just having open discussion with all of you about some of the exciting topics we'll cover today and more. So with that, I'll pass it back to Shani to give us some board updates.

Well, before we get to the board updates, we do want to just say thank you and celebrate a few things since we last met. It seems like it's been a busy few months and there is a lot to celebrate. One thing is that Wikisource turned 20 and Wikidata celebrated its 11th birthday. Also, last November we celebrated a hundred million files on Commons. So a huge congratulations to all of these communities and the volunteers who've been enabling these projects. These are quite amazing milestones and we really appreciate all the diligent work and dedication going towards them. December specifically is also an important month for us because we're going to celebrate two big things. One, Wiktionary is going to celebrate its birthday (it launched in December 2002), and Wikivoyage turns 17. So a big happy birthday to both of them and to all of these communities. And the last thing on this thread is that in January, Ukrainian Wikipedia will be celebrating 20 years of activity. So again, happy birthday to all these communities and well done for all the work. And while there's a lot to celebrate and to be thankful for as we are wrapping up this year, we have also lost dear people in our community and we'd like to commemorate them. I'd like to ask Nat Tymkiv, who is our board chair, to say a few words in memoriam of the Wikimedians that we've lost in the past few months and then move to some board updates. So Nat, to you.

Thank you. So this is a really sad part. At Wikimania, we celebrated the contributions of many of the Wikimedians who passed away in the last year, and of course only the ones that we know of, who were publicly mentioned. And after Wikimania, we also lost a very dedicated Wikimedian, Richard, known as Nosebagbear. Some of you might know him, as he was involved in the Movement Charter Drafting Committee. I certainly worked with him in my work as a board liaison with that committee. And he's one of those kind, wise souls that you only realize you lost opportunities to talk with more after they are gone. Another of the recent confirmed deaths is a Palestinian poet and writer and a Wikimedian, and I'm not sure I'm going to pronounce it correctly: Heba Kamal Abunada. As a war refugee myself, I can just share and say that any words are going to sound cold and empty. It's really difficult to find something to say in these instances. As a Wikimedian at core, I can say that my way of dealing with any loss is trying to document and preserve knowledge. And I think that that's how the communities are dealing with that, with any losses, with any wars, with any destruction. We are trying to preserve things and try to make sure that they are not disappearing. Things, people, memories. And that's our mission. That's the extent of how I can describe it and how I can explain it to myself. And that's, I believe, how people are dealing with things, at least on the other end of the world, the wiki world.

I am going to move to board updates now. So our last meeting of the 2023 calendar year was yesterday. We dealt mostly with things connected to internal board-related work, but still, as a matter of trying to be transparent and trying to make sure that the things we are doing are not a surprise: we have appointed Lorenzo as another board liaison to the MCDC.
Shani stepped down and now Lorenzo is going to join me in liaising with this committee. We have also received updates about the board selection process, which is supposed to start in 2024 because there are some trustees whose terms are ending in the next year and who might be running. We usually create a separate group of trustees to work and liaise with the Elections Committee in order to not have a conflict of interest, or to try to mitigate it. And this Board Selection Working Group (sorry, the name changed) will consist of Dariusz, myself and Shaggy, as current voting members of the Governance Committee who do not have that conflict of interest, and also Esra'a and Kathy, our newest trustees, about whom I'm going to say a few words a bit later. We are also working on evaluating the work of board committees, and we are in the process of figuring out what we can change in board committees. These things, of course, are going to be published later as lessons learned, after we know what worked for us and what didn't.

This was also the last meeting of the year and of the term for Tanya, who has been a trustee for a few years already. She was the chair of the Audit Committee, and she is going to stay on as an advisory member of the Audit Committee to preserve the institutional knowledge for our newest trustee, Kathy Collins, who is also here on the call. We had been looking for a replacement for Tanya's role for a while. I have also been posting some updates on Meta and to Wikimedia-l. We tried to continue what we have tried before with pre-onboarding people who we see as trustees, be it community- or affiliate-selected or board-selected trustees. And with Kathy, after we identified her as a strong candidate and one that we would like to continue working with, we also had an opportunity to organize a trip to the North America conference in Toronto in order for her to get to know the community more: not only trustees, not only staff, but also the communities, because we believe that it is very important for a trustee to know everything that we can pass on and teach before a trustee actually gets into the role. So I am going to give the floor to Kathy. So maybe say hi.

Thank you. Hi, everyone. I'm pleased to join you today. I'm really excited and honored to be joining the board. When I was retiring from Rice University and thinking about next things, I was interested in an opportunity where I could contribute from my finance and governance experience, but also find an opportunity where I could be involved in important issues of the day, and also a learning opportunity for me, and the Wikimedia Foundation and the Wikipedia community seemed like a good fit, touching all three of those areas. I have been trying to learn about the Wikipedia world since I first started conversations exploring this opportunity back in the early summer, and I was excited to go to the Toronto meetings. I met lots of people and heard a range of issues that people either were concerned about or excited about and working on. I also met several people from Wiki Education, which was especially meaningful to me given my time at Rice University and on the board at Mount Holyoke, and I realized that a woman I know at Rice has been involving her class in writing and editing for Wikipedia. So things seem to come full circle in that way. So I'm very much looking forward to this opportunity. Thank you.

Thanks, Kathy.
Now that Kathy is a voting member of the Wikimedia Foundation Board of Trustees, we also hope that Kathy will be more involved in community events as well. For us, this is also an opportunity to talk with community members directly, but also to present things and explain things, decisions, processes, which is also important before the board selection process really starts in 2024. This past month, trustees also attended regional and thematic events, like, for example, the Wikimedia CEE Meeting in Georgia, and then we had Wikimedia Camp in India and WikiIndaba in Morocco, and the already mentioned Toronto conference and other events. Different communities, different trustees, but also just more opportunities to communicate and more opportunities to meet. And wherever possible, we also try to get trustees to present on something that they as volunteers are interested in or competent in, to share, and to use these opportunities to not only have tea and coffee, but also maybe share some knowledge and also learn, because there are also a lot of trainings being organized by different communities and different regional organizers. And I just want to give a shout-out to all organizers who are trying to figure out what capacities are missing that can be developed, because it's also important for volunteers who also happen to be trustees to get to know more things and learn.

We also have some other topics that you might want to hear some updates on later in the call, like, for example, the affiliate strategy: the report about Phases 1 and 2 of the process has already been published, and the link may be shared in the chat for those who haven't been following that process. At the moment, we, as board liaisons to AffCom, are also helping with the interviewing process. AffCom had 32 nominations, or applications, submitted, which is a lot for a committee, so it is also probably going to take us longer than we expected, but the decision is going to be made, as in previous years, by the voting members of AffCom who are not running and who do not have a conflict of interest, as it was before, but this time that process is more rigorous than it was in the past. Now, the other big thing is the Sister Projects Task Force. Vicky, here somewhere on the screen, is the lead on that work. This is also one of the pieces of work where we are trying to figure out, from the governance point of view, how we can actually create a process for opening new sister projects, but also closing them, trying to evaluate whether these projects are actually bringing impact, and that's the work that the group was tasked with. I think that's it with my boring updates, and I'm going to give the floor to Shani, as the chair of the CAC, the committee that is hosting this call, to talk about the plans and updates for the work of the year to come.

Thanks Nat, I'll try to keep it really short. This is just really a reminder that the Community Affairs Committee is a board committee that was set up in 2021 with a goal to, in a sense, bridge and create better communications between different parts of our movement, mainly the community and the Wikimedia Foundation. And we've been doing that in various ways and will continue to be doing that in 2024.
We have strategic goals like every board committee every year, and specifically for 2024, you'll continue to see these calls happening every quarter; we'll continue to be present at different conferences and events of the movement that are important and strategic; we will continue to support work done by the MCDC, the Movement Charter Drafting Committee, and AffCom; as Nat was saying, the work of the Sister Projects Task Force is happening under the CAC as well; and we'll continue to support our CEO and senior leadership in achieving the Wikimedia Foundation's goals with everything connected to the community. So this remains a channel for the board to improve the communications and bridges between the foundation and the community. And I hope that we can all see additional progress in the coming year. Those are some of the headlines. And if any of you have any specific questions about either what Nat shared or what I shared, please feel free to ask us in the open questions section a bit later on. I'm now forwarding the virtual stage to Lorenzo, as the chair of the Product and Tech board committee, to share some updates about that, together with Selena, our CPTO. So Lorenzo, go ahead, please.

Thank you, Shani. Hello. Just last week, we had a meeting of the Product and Technology Committee. It was the last one for this year. And we started talking about the things that we are going to discuss in the next year. It's been an opportunity for us to hear from Selena, the Chief Product and Technology Officer, about the key things that are on her mind in terms of strategy and planning, leadership and culture. And in the coming year, the plan is to continue those discussions and support Selena in her work. Overall, Product and Technology is a centerpiece in the annual plan for the current year. And there are many different objectives that the department is working on and that the board is overseeing. So I'd like to hand over to Selena to give us an update on what's happening with the foundation's Product and Technology work and the annual plan. Selena, over to you.

Thank you, Lorenzo. Hi, I am Selena Deckelmann. I'm the Chief Product and Technology Officer here at the Wikimedia Foundation. And I'm here today to talk about the Product and Technology work we've been doing under this annual plan, which focuses primarily on the needs of existing editors. This annual plan establishes Product and Tech as the largest focus area in terms of staffing and also budget. And we're also doing the essential work, like maintenance, upgrades and fixes, to keep all the wikis running, as well as the strategic work around improvements to the way that the software works. For that strategic work in particular, we've been collaborating with established editors to build based on the needs you're telling us about, the work that really improves your workflows and makes your experience on the wikis more straightforward and enjoyable. So just to name a few of those strategic projects focused on experienced editors, they include improvements to the New Pages Patrol and PageTriage software, which allows patrollers to evaluate new pages for publication. The improvements included the fixing of workflow-breaking bugs, the updating of deprecated code, and the rewriting of the new pages feed using Vue.js, which makes future fixes and improvements easier to make. We also put the newest version into production in October, and it's been further updated by volunteer developers to make use of Codex, our modern product design system. Thank you very much to the volunteers who are doing that. We also have been working on Edit Check, which will automatically warn newer editors when they have added information without a reference, encourage them to add a reference, and then check whether their reference comes from a dubious source. This project was inspired by several community wishes over the years, and we hope that this will help automate the process of teaching newcomers how to edit and reduce the burden on experienced editors to both teach and correct.
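For readers who want a concrete picture of what a check like this involves, here is a minimal illustrative sketch. It is not the actual Edit Check implementation, which lives in the MediaWiki and VisualEditor codebases; the function name and thresholds below are invented for illustration.

```python
# Minimal sketch of an Edit Check-style heuristic: flag newly added prose
# that contains no reference tag. Not the real implementation; names and
# thresholds here are invented for illustration.
import re

def added_text_needs_reference(added_paragraph: str) -> bool:
    """Return True if a newly added paragraph is substantial prose
    but cites no <ref>...</ref>, so the editor should be prompted."""
    has_reference = re.search(r"<ref[^>]*>", added_paragraph) is not None
    # Ignore trivial additions such as one-word fixes; the 25-word
    # threshold is an arbitrary stand-in for the real product's rules.
    is_substantial = len(added_paragraph.split()) >= 25
    return is_substantial and not has_reference

if __name__ == "__main__":
    new_text = (
        "The observatory was founded in 1897 and moved to its current "
        "site in 1934, where it operated continuously until 1999 before "
        "reopening as a museum in 2004 following a lengthy restoration."
    )
    if added_text_needs_reference(new_text):
        print("Edit Check would prompt: consider adding a reference.")
```

The real feature works on the edit inside the editor itself and pairs the check with user-interface prompts, but the core idea is the same: detect substantial new prose, then look for a citation.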
Edit Check is being tested on several small pilot wikis now. Along the same lines, we're working on bringing anti-vandalism bots to smaller language wikis that don't currently have them, so that the patrollers can focus on more complex problems. We're also working on Dark Mode, which has been a long-standing request in the community wishlist as a way to make reading and editing Wikipedia more comfortable and accessible visually. It's currently under development, and we'll be rolling it out along with the ability for both logged-in and logged-out users to choose their preferred font size. It's a really interesting stream of work, actually; a lot of caching discussions happening on the technical side. There are a lot more examples of improvements aimed at experienced editors, like patrolling on Android, Watchlist for iOS, upgrades to the Commons Upload Wizard and more. And I'm happy to talk about any of these things in more detail or take any questions. And so with that, I'll see if we have any questions in the chat or any questions live about these updates.

Thanks, Selena. We've got some celebration in the chat around Dark Mode; that's exciting for people. Let's leave a few moments to see if there are any questions in the chat here in Zoom or YouTube about any of this: about New Pages Patrol, Dark Mode, Edit Check, any of the other features that Selena mentioned, all of this work being done, of course, under this annual plan. We don't have any questions, so we can also move to the next part of the agenda, and anyone who does have questions upon further reflection, if you wanna check out the links we're dropping in chat, you can ask those questions as we go, and we'll take any questions or comments in the open Q&A section. It looks like we do have a question in the YouTube chat, but I think it's more relevant to the AI section of our call, actually. So I may hold that one until we get to the AI section of the call. So thank you so much for those updates, Selena. That was great. Let's move quickly to Rosie to talk a little bit about the Talking: 2024 initiative that we kicked off. I believe it was last month. Is that right?

Yes, that is right. Thanks, Elena. So I'm Rosie Stephenson-Goodknight. I've been a trustee since 2021. ArbCom, Biodiversity Heritage, WikiClubs, academic research, affiliate programming: what do these have in common? These are just some of the topics of discussion in the triangle of community, Wikimedia Foundation and the board during Talking: 2024. The importance of communication with our movement cannot be overstated. And to tell us more about Talking: 2024, I'll hand it off to Mehrdad and Maryana. Over to you.

Thanks so much, Rosie. What a great intro. Hi, everyone. My name is Mehrdad, and I work for the Movement Communications team at the Foundation. I have the pleasure of coordinating the Talking: 2024 series of conversations. As Rosie mentioned, Talking: 2024 launched in November and has been growing steadily.
It follows in the footsteps of Maryana's listening tour, which she led around this time about two years ago, before she started her position with us. And so this time around, in addition to listening, we're also sharing and really learning together. There are some big existential questions that still face the future of our movement, whether around technology or finance. And as always, they can only be resolved through sharing and learning together, like we always do, listening to each other and coming up with solutions. Board trustees, Maryana and other senior leaders from the Foundation have been talking to a cross-section of community members, and Rosie shared with us some of those highlights. So far, 40 conversations have happened with online editors, functionaries, affiliates, different committees, and the talking continues. We have a few more lined up for the rest of the year, and then we'll pick things up again in early 2024. We have connected with newcomers, established community members, editors, affiliate staff, you name it. So thank you so much to the community members who have signed up already, who have participated, and thank you to our leaders and trustees for making time for this, to engage and talk, which is so important. The conversations have so far been informative and, in particular, are setting the stage for the multi-year planning that will begin in 2024. I will share the link with you shortly if you're interested in signing up yourself or would like others to take part in these conversations. And if you have any questions, please let me know.

Thank you so much, Mehrdad. Since you mentioned the first listening tour, I just wanna acknowledge, since Delphine's here, that she facilitated what were at that time hundreds and hundreds of conversations in a short period. So thank you for that. I think we're really just trying to make it easy to talk. Sometimes it's not clear who to email, or if you wanna have a conversation with myself or with Selena or with the board, you don't always have to have a reason. We can just find time to engage on what's been happening in communities, the work that you're doing, the things you're seeing, the things you're worried about. I've had an opportunity to have conversations almost every day, and some days more than once. Today I had a very powerful conversation with a volunteer in Nigeria and learned a lot about their perspectives on how things have changed, not just in Nigeria, but in the kind of broader Sub-Saharan African context. I met with representatives from the Turkish affiliate and spent a lot of time understanding, again, not just their own local activities, but how they relate on a linguistic basis to many different regions of the world. So if you would like to find an hour to have a conversation, please sign up. And I think that as long as people are interested, we'll keep going. Thank you.

Thanks so much for the update on Talking: 2024. We'll see if we have any questions about Talking: 2024 here in the chat, but definitely an encouragement to everybody to sign up for one of those conversations. Maryana, maybe you can talk a little bit about how the inputs we're getting through all the conversations in Talking: 2024 are gonna be synthesized and then used for planning in the future. Like, what does that process look like?

Thanks for that, Elena. I mean, we typically are taking notes in each of the conversations and, as we go, kind of aggregating themes.
The Board of Trustees and the Foundation's senior leadership, alongside members of the Endowment Board and the Movement Charter Drafting Committee, are all gonna convene in early March. And so the hope is that we are able to bring some of the outputs of these conversations into those sessions and be able to communicate out to volunteers, I think probably towards the end of February, what some of the key themes are that we've heard. I think the good news is that it's a lot of topics that are expected, things that are on people's minds around the wikis themselves, a lot of questions and conversations around generative AI, the work we're doing in product and tech, the role of affiliates, how we're understanding external trends. And so I feel actually quite encouraged that we're talking about the kinds of topics that were already part of the strategic planning process. And I think that, well, I know that these conversations are gonna make that even better. Thank you so much.

Great, we're sharing some links for how to sign up. With that, I think we'll move to the next topic. So the final topic for our updates on our call today is around AI. I know it's a big-ticket item for our movement and for the world as a whole these days. So I wanna pass it over to the trustee, Luis, to open the conversation about AI. And then we're gonna have some updates from an affiliate perspective on what's going on with AI, and then talk to foundation teams about what the foundation is doing. And I hope then we can have a rich discussion about that here on the call. Luis, take it away.

Thank you, Elena. Hey, I'm Luis Bitencourt-Emilio, currently wrapping up my second year on the board, and it's been a particularly exciting past year. Generative AI has been top of mind for many of us. There have been a lot of questions we're wrestling with on how we as a movement can keep up: how can we make sure we stay on top of the trends in how people access and consume knowledge? How can we leverage generative AI as a tool that supports Wikimedia rather than just threatens it? We've been thinking a lot about these questions together, including during the last number of these conversations with trustees. It's been inspiring to see how different parts of our movement, from individuals, affiliates, content contributors and technical contributors to foundation teams, are leveraging and exploring that technology. As part of this holistic approach to these discussions, I wanna turn it over to Natalia to give a bit of an affiliate perspective by talking about the work happening at Wikimedia Poland. We'll then hear from foundation teams working on AI before opening the floor for discussion. Thank you very much.

Thank you for the invitation. I know it's not common for you to have a chapter perspective here, so I'm very glad to provide you with it. I'm Natalia, I'm the ED of Wikimedia Poland since November last year. And one of my first tasks here was to facilitate a multi-stakeholder process of creating a three-year strategy for the chapter. And I have to admit that back then, when we started talking about it, we did not foresee that AI was going to become such an important part of our landscape. I just want to briefly tell you about three perspectives: external, internal and the perspective of the community. The external perspective is that for the past six or seven months, we've been getting a lot of media attention, a lot of requests for interviews, podcasts, radio, press, anything. And it all concerns AI. And there's always this controversy.
The question is always whether ChatGPT is going to replace Wikipedia, or something like that. But we are using the occasion to talk about Wikipedia and Wikimedia in general, to change the narrative, to show that this is an opportunity for us, not a decline. So I think this attention is going to last for at least another couple of months. And during the conversation with Maryana and Lorenzo last Monday within the Talking initiative, it was suggested that I create an FAQ. And this is a great suggestion. I'm going to do that for sure, because with so many outside requests about Wikimedia and AI, it's going to be very useful. So I'm surely going to consult some of you before we make it public. The internal perspective is our daily job. We are quite a big chapter, 11 people, and we have started to experiment a lot with AI tools. It has turned out to be pretty effective for us. We made a decision that any software that we use, we want to use as effectively as possible. And since there are AI assistants, usually in the better versions of the software that we use, we're trying to make the most of them. So it has proved to be effective in grant writing, creating social media posts, enhancing our brainstorming, for example when we want to generate ideas for a project title, and making notes and summaries of our discussions. And we also pay a lot of attention to following the current state of things. We discuss a lot internally, we read articles, everyone has a different perspective on AI. And I think that's a food-for-thought exercise for the whole team that's going to be useful now and in the future. And the community perspective is that, from what I have observed so far, the more experienced editors (and the Polish community consists mostly of active editors who are more than 40 years old) are pretty conservative about it, not sure yet if they want to engage with AI and how. However, younger editors are very much into it. Just last month, we got a request from them to provide them with access to the professional version of DeepL, because they find it useful and they think it's something to support their activities. And we have at least one IT student editor who is very much into the topic, but he uses very technical, very hermetic language. So our communications manager is trying to work with him to help him create an easier narrative, so that the rest of the community can learn from him or be inspired. And next year, we want to do a couple of webinars with him so he can tell us about his ideas and his perspective. And that's it for me, thank you.

Thanks Natalia, that was really insightful. We do have a request here to publish the FAQ on the forum so that it is available for real-time translation. Let's hand it back to the foundation team then, to talk about what we're doing within the Wikimedia Foundation on AI.

Hi everyone, my name is Chris. I am the director of machine learning at the foundation, so this is all very relevant to me. I've been at the foundation for three and a half years, really focused on a couple of things that I'll talk about on the following slide. So the thing that my team, the machine learning team, focuses on is production machine learning models. That is any model that is out and live in the world powering features, powering things that are being used by users or by the community. Our focus is typically in three main areas. First, we build the infrastructure that runs the models. Originally we had an existing infrastructure called ORES, which a lot of you know, and over the past few years we have built a new infrastructure able to handle a wider variety of models, including larger models such as LLMs, and to scale out to a larger number of users. Right now that new infrastructure, which is called Lift Wing, is in production. It's currently getting about 600 requests a second, so it's serving real amounts of live traffic.
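For those curious what calling Lift Wing looks like from the outside, here is a minimal sketch against the public inference endpoint on the Wikimedia API Gateway. The endpoint pattern follows the published Lift Wing documentation at the time of writing; the model name (a revert-risk model, the kind that anti-vandalism tooling can build on) and the revision ID are assumptions for illustration, so check the current docs before relying on them.

```python
# Sketch of querying a model hosted on Lift Wing via the public
# Wikimedia API Gateway. Endpoint pattern per the Lift Wing docs at the
# time of writing; model name and revision ID are illustrative only.
import requests

LIFT_WING = "https://api.wikimedia.org/service/lw/inference/v1/models"

def score_revision(model: str, lang: str, rev_id: int) -> dict:
    """Ask a Lift Wing model to score a single revision."""
    response = requests.post(
        f"{LIFT_WING}/{model}:predict",
        json={"rev_id": rev_id, "lang": lang},
        headers={"User-Agent": "liftwing-example/0.1 (demo)"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Hypothetical English Wikipedia revision ID, purely for illustration.
    print(score_revision("revertrisk-language-agnostic", "en", 1234567890))
```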
The next thing is around model transparency. So we focus on model cards: if the foundation has a model in production, there is a model card about it. A model card is a set of public documentation about the model that not only describes how the model was made and how it should be used; it's also going to be a place where we put our human rights review for particular models, and a place where you can provide comments and feedback about models and ultimately govern the model as a community. And then the third area is trying to figure out some experiments with how we can actually help existing users of the site. So for example, working on a project where you can actually just chat with the Wikipedia docs about how you might edit a page, or ask a question and we'll generate a SPARQL query for you, returning the exact code for that SPARQL query. Ways to make the experience on the site better. And now I think I pass it to Isaac.

Yep, thank you, Chris. Hi everyone, I'm Isaac Johnson. I'm a senior research scientist on the research team at the foundation. And I'm going to talk to you about one kind of generative AI project that's typical of some of the sorts of work that my team is doing. What this project was around is recommending potential article descriptions to add to Wikipedia articles. Those descriptions then go to editors, and they can choose whether to add them or not. And the model that is doing this sort of work does this by looking at the first paragraphs in existing Wikipedia articles, looking at descriptions that might exist in other languages, kind of synthesizing all of that together, and then simplifying it down to an article description that matches the sorts of norms that the community has set around what descriptions should look like.
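As a rough illustration of the raw material such a model synthesizes, the sketch below gathers an article's lead extract and the short descriptions that already exist in a few language editions, using the public REST summary endpoint. The generation step itself (the fine-tuned language model Isaac describes) is omitted, and a real pipeline would resolve titles across languages via Wikidata sitelinks rather than reusing one title as done here.

```python
# Sketch of gathering the inputs a description-suggestion model works
# from: lead extracts plus existing short descriptions across languages.
# Reusing one title across wikis is a simplification; titles differ per
# language, and a real pipeline would follow Wikidata sitelinks.
import requests

HEADERS = {"User-Agent": "description-inputs-sketch/0.1 (demo)"}

def gather_description_inputs(title: str, langs=("en", "de", "fr")) -> dict:
    """Collect per-language lead text and any existing description."""
    inputs = {}
    for lang in langs:
        resp = requests.get(
            f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}",
            headers=HEADERS,
            timeout=30,
        )
        if resp.ok:
            data = resp.json()
            inputs[lang] = {
                "description": data.get("description"),
                "lead": (data.get("extract") or "")[:300],
            }
    return inputs

if __name__ == "__main__":
    for lang, parts in gather_description_inputs("Mount_Everest").items():
        print(lang, parts["description"])
```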
And this was a really exciting collaboration. It started with some external researchers, actually, whom we were able to connect with Jasmine Tanner on the Android team, where they already had some existing processes for adding article descriptions, but without AI support. And we were able to build out a pilot of this model on our cloud services and work with our legal team around evaluating potential harms. Then Android led a very nice pilot of the product this past year, and we pulled in some additional editors as well to help evaluate the system. And we'll be deploying it in the next few months on the ML platform, as Chris was just introducing, and have this model in production, which is very exciting. This is all to say there were a lot of folks involved in making this tool happen, but it was a very productive collaboration, I think, across these stakeholders. There are a few things I think are important to understand about this particular tool and why it's an exciting one to talk about. One is, we were very careful about the design of this and how we constrained it, and we did this for a variety of reasons. It supports editors by augmenting existing editing processes: as I was saying, Android already had processes where they're helping editors add these article descriptions; we were able to just make that simpler. Because we kept it well constrained, we will be able to deploy it to 25 language editions without putting massive strain on our infrastructure either, which is an exciting piece. And because it's a relatively straightforward model, it's easier to identify issues that might arise, explain the model to community members who are curious about how it might work, and even make adjustments if we need to. And generally, I think these sorts of constrained experiments with generative AI are really good, because they allow us to understand the impact of these tools within the projects while being constrained enough that the potential harms are limited as we experiment and learn about these things. And you might imagine other examples in this space, like recommending captions or alt text for images, or helping editors summarize or simplify content, and these are all sorts of projects that we're considering as we move forward too. And with that, I think I'll pass it on.

Hey, that's me. Hello, Maryana Pinchuk here. I am a product manager at the Wikimedia Foundation working on an initiative called Future Audiences. So you've just heard a little bit about how we're using AI to support our current audiences on our projects and building out the infrastructure for that. I'm gonna talk a little bit about how we're looking at AI products that are happening outside of our ecosystem, outside of our platforms, and how we're gaining a better understanding of some of the opportunities and risks in that space. So next slide, please. So Future Audiences is meant to be a small part of our annual plan product and tech resources that's devoted to doing quick, kind of opportunistic experiments to understand new technologies and new trends that might be impacting how people search for and get knowledge and create knowledge online. So when ChatGPT first launched, we were really interested and, frankly, very afraid that there could be a potential world where more and more people began to rely on ChatGPT and tools like it, not just for the kinds of tasks that Natalia was talking about, kind of productivity and helping with marketing and writing emails, but actually getting information. And we know that these tools are not super reliable for getting information; they can hallucinate and make stuff up all the time. So we tried to get a better understanding of this by building a plugin for ChatGPT, an experimental feature of ChatGPT at the time. And I think what we learned was, we saw in the high-level statistics that we follow on things like traffic and user activity that over the last year there has not been a significant drop in page views or traffic. But the additional context that we gained through building the ChatGPT plugin was that it became very clear that even very passionate users of ChatGPT were not necessarily relying on it for general knowledge queries. It wasn't replacing Google for them, or Wikipedia for them, as far as seeking out reliable information. And in fact, even the users of the Wikipedia ChatGPT plugin, most of them said that they still relied on Wikipedia on the web about as much as they relied on the information that they were getting from this plugin.
So I think all of these data points together really helped us to understand that, at least for now, the knowledge-search apocalypse is not nigh. I do not think our movement should be complacent in this area, because I think these technologies are evolving and changing so rapidly, but at least for the time being, it seems that consumers are a little bit more skeptical and wary of these kinds of tools. And I think that speaks to the second point, a really important point that we learned, again through surveying the users of the ChatGPT plugin, which is that they were using it because they didn't trust the information that was coming from ChatGPT on its own, some of which is actually coming from Wikipedia but without sources, right? So just knowing that the information they were getting was coming from Wikipedia specifically, and having links back to Wikipedia to learn more, helped them to build trust in this content. So I think where we're going next with this work is really trying to focus in on that specific opportunity, knowing that consumers are skeptical of AI and that, increasingly, the world is starting to be full of more and more questionable AI-generated content on all kinds of different platforms all over the web, and seeing if there's a way that we can bring what Wikipedia has really always been used for, this idea of a verification system, a human verification system, and use AI to actually make that simpler and faster. So next slide, please. So our next experiment that we're planning is to try to make a kind of experimental product around that: making a way for people who are reading information on a different platform, let's say Wired, allowing them to highlight certain claims that they might find suspicious, searching for those claims within Wikipedia, and presenting information back from Wikipedia kind of on the screen. So I feel like this is a really interesting way to streamline a process that is already sort of being done manually, through opening multiple tabs and going to multiple places to look for information, by kind of bringing it all together. And with the power of AI, it really allows us to create a much more natural, flowing experience with AI-augmented search. So we are interested in seeing whether people are interested in this kind of a plugin and how they use it, how it can better understand and parse the information from different platforms and vet it against Wikipedia. And we see this as an opportunity, perhaps in the future, to advocate for more platforms to add this functionality more natively. So you may be aware that platforms like Google are already kind of doing this. There are little contextual panels that pop up for certain kinds of features on YouTube; for example, state-sponsored media has a little pop-up with information from Wikipedia telling you that this is a state-sponsored media channel. So we're really interested in exploring this from a consumer-facing sort of perspective, but also maybe using this as an opportunity to advocate for these kinds of things being a regular part of what all platforms do to maintain credibility and trust with their users. So that's where we're going next. And I will pause there, and I think we're doing Q&A after this.
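To make the lookup step concrete, here is a rough sketch of the plain search half of what the experiment describes: take a claim highlighted on some other site and ask Wikipedia's standard search API for related article snippets. The real prototype layers AI-assisted parsing and ranking on top; nothing below is that product, just the ordinary MediaWiki search call it could start from.

```python
# Sketch of the plain search step behind a claim-checking plugin:
# send a highlighted claim to the MediaWiki search API and return
# matching article titles and snippets for the user to inspect.
import requests

def find_related_snippets(claim: str, lang: str = "en", limit: int = 3):
    """Search Wikipedia for text related to a claim."""
    response = requests.get(
        f"https://{lang}.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": claim,
            "srlimit": limit,
            "format": "json",
        },
        headers={"User-Agent": "claim-check-sketch/0.1 (demo)"},
        timeout=30,
    )
    response.raise_for_status()
    results = response.json()["query"]["search"]
    return [(hit["title"], hit["snippet"]) for hit in results]

if __name__ == "__main__":
    # Example claim, invented for illustration.
    for title, snippet in find_related_snippets(
        "YouTube labels state-sponsored media channels"
    ):
        print(title, "->", snippet)
```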
Thanks, Maryana. Yeah, that's right. We're gonna do a little bit of Q&A time on this AI bit. So thank you to the presenters. This was a really interesting way to explore the various angles of what we're doing. There are a few quick follow-up questions we've got in the chat about this. First question, this is probably for Isaac: is the Sindhi language included in the 25 languages you mentioned? And maybe this is an opportunity to talk about how those languages were selected in general.

Yeah, that's a good question. We'll share a link that includes the full list that folks can see for themselves. So the way these languages were selected: in this case, we're building on an existing pre-trained language model, and we're fine-tuning that to this use case of Wikidata descriptions. So we're to some degree restricted by what that previous pre-trained language model was trained on. And that really comes down to the amount of language data that is available on the internet, which usually corresponds with more or less the 25 largest language editions. That's not exactly the case, but with these language models, usually the larger the language edition, the easier it is to find language models that allow us to build tooling around it. And if I didn't answer that directly: no, Sindhi is not on the list. Apologies for that.

Perfect, yes, thank you. That was gonna be my next quick follow-up. Thank you for that answer. We have a question from earlier in the call, and I think all of you touched on it a little bit, but it's the general looming question that I think is on a lot of people's minds: in the future, will AI chatbots be allowed to edit Wikipedia, and what might that look like?

Yeah, I can take that one. So I think early on, when ChatGPT was released, we all sort of tried to reflect on what it meant. And we were getting this question a lot from a lot of reporters, from a lot of community members from all over the place: when is AI just gonna replace Wikipedia? And I think the answer that we all kind of collectively came to at the Wikimedia Foundation, and what I think will resonate, is that it is hard to imagine a human knowledge creation project that doesn't very clearly center humans at the heart of it. So when we're talking about deciding what counts as a reliable source, what counts as reliability even, or what kinds of information are neutrally presented, I think these are really human questions. And the reason why people love and trust Wikipedia is that it is a human-curated knowledge store and a summary of the secondary sources that exist around specific topics. So while I think AI can certainly make a lot of things a lot easier and more intuitive and faster and more friendly to do on our projects, when it comes to replacing humans entirely, just allowing the chatbots to run wild and edit and update, I think that would be a fundamentally different project from Wikipedia. And good luck to those who are gonna try that project. Maybe it'll work, but I just have a hard time imagining anyone wanting to go to that kind of a project or finding it reliable or trustworthy.

Thanks, Maryana. So, a kind of logical follow-up to that question, and I'm not sure if maybe, Jimmy, you wanna weigh in on this; I know you've thought a lot about this and talked a lot about this. What are the foundation and maybe even the larger movement doing to prevent harmful AI-generated content from being added to Wikimedia projects?

Yeah, I mean, I think this actually does tie in to the previous question as well, the question about whether AI will be allowed to edit Wikipedia.
And I think ultimately all of those kinds of decisions and those kinds of things come down to the open dialogue and discussion and discourse within the community. And so in the community, I would say right now there's a pretty wide consensus that ChatGPT, for example, would not be a very good Wikipedian: it makes stuff up, makes up references. Like, you can get banned for that very, very quickly. But unlike places like, say, Twitter or Facebook, where people can come in and start posting viral content that may be completely fake and so on and so forth, we're just a completely different model. And so I think our first line of defense against misinformation or disinformation that's AI-generated is our existing community and our existing community standards around reliable sources, around vetting everything that comes in. But I think there are some danger points. I mean, certainly if you are a casual editor, you might very well be deceived by a brand-new, say, science news website that doesn't look too clickbaity and seems plausible, and it's AI-generated, and that could fool us. I mean, we know throughout the history of Wikipedia, occasionally some annoying person will spend quite a bit of time generating some fake articles about ancient Chinese poetry or something. And unfortunately, AI may make it easier for people who are trying to do something like that. And so I think we're gonna have to be pretty diligent, but the truth is people have always tried to put bad information into Wikipedia. And I don't think there's any simple sort of single answer to this. You can't reliably identify content that's written by an AI. I mean, there are some claims that there's really good-quality watermarking now for images, which may make it possible to machine-detect AI-generated images. I'm skeptical that that's as good as some people hope it is, but I guarantee you that's impossible for text. You just can't; that's just not possible. Text is too fluid and there's not enough data to hide a secret sort of "I'm a robot" code in it. So ultimately, I just think that's on us, the community, really more than anything else.

Thanks, Jimmy, very thorough answer. We have a couple more questions here about the topic of AI. I think let's go to training first. I think this might be best positioned for Chris, but anybody on the foundation team, feel free to weigh in on this question. Are the machine learning and research teams planning to train and host our own Wikipedia and/or FOSS/free-culture large language models anytime soon? Is anything like that in the works?

Sure, I can answer that. Our focus is on the features and the products that make the experience on all wiki projects better. That's what we're focused on. And so sometimes that means a really small model, and we have the infrastructure to host that; we host hundreds of small models for various purposes. And sometimes those are larger models. And so when it comes to whether we are planning on doing it: well, one, some of the work that Isaac was talking about was exactly that. In fact, that's taking a model, fine-tuning it, and then deploying it. So we are actively doing it. The thing that I think we don't wanna do is say, okay, there's lots of hype around this, we should make a Wikipedia AI, and then sort of run with it. Instead, what we should do is focus on the features or the particular projects that make things better, right?
Is this something that volunteers need to make editing better? Is this something that would make readership better? Is this something that makes the experience on the site better? And then grab models and figure out whatever we need to do to make that happen. And in our work on both the research team, which Isaac is on, and the ML team, we're focused on that: we will use whatever models we need. We have the infrastructure in place to deploy those models. We can scale it out globally, however we wanna do it. And so that means that when there are needs for something, we can actually build against those. But we want there to be a specific use case, a valuable thing that gets deployed, you know, with the product teams, with the communications team, with community consultation. Everything we wanna do is walk down that process, as opposed to "it'd be really cool if we had a really, really big branded Wikipedia LLM, let's just go for it." I think the value in that would probably be much smaller.

Thanks, Chris. A final question here in the queue, about reuse and attribution, which is something that I think comes up often in our discussions across the Wikimedia movement, but that we haven't fully touched on yet on this call. So this might be something, Jimmy, maybe you wanna start us off on an answer for this, but I think probably foundation staff have some thoughts on this as well; I know this is something we've been discussing at length. The question is: how does the foundation plan to respond to AI reuse without attribution or compensation to our movement?

Yeah, I mean, I think other people are better suited to answer on the legal front as to what we can do. But I think one of the things that's important to remember is that we've always come, ultimately, from the free software movement; that's where the thinking originated. Everything in Wikipedia is free to copy, redistribute, and redistribute in modified versions, commercially or non-commercially. So people taking this gift that we've given to the world and using it to train AIs is part of our mission, in my view. And also, I'm actually really happy if AIs are training on Wikipedia and not just Twitter. Like, that's actually a good thing for the world. However, attribution is really important. That's also been a really core part of our values. It's certainly in our values, never mind the legal point of view: cite your sources, don't plagiarize, all of those kinds of values. And I don't think that legally the AIs are plagiarizing, but they're not citing their sources, and that is something that morally I think they should do. And how we best go about making that happen is both a technical and a legal question. I mean, certainly the technical piece of it is just that the technology, despite seeming to be quite amazing, is actually made in a way that sort of probabilistically generates the next word. The AI doesn't really even know where it got things from; it's just like a giant sort of matrix of probability. So I accept that it's a challenge for them. But anyway, those are my thoughts on that. Others can answer on the more detailed legal and technical questions and what the foundation strategy might be.

I can chime in a little bit from the policy and advocacy team perspective. Hi everyone, my name is Stan Adams. I'm the lead public policy specialist for North America for the foundation. Well, I can't say all that much about strategy.
And I think maybe some of our partners in Enterprise might actually know more about that piece of it. I can say a little bit more about the challenges we're facing here, and I think Jimmy's right that, in terms of a sort of direct attribution for an output from a generative system, those systems just don't work that way. And so if we're going to get attribution, it will be in a more general sense from the developer of a model, who might say "we relied heavily on content from Wikipedia in the development of this model" or something along those lines. I'll also share in the chat the comments that we submitted to the U.S. Copyright Office, which recently did a request for comments on copyright and generative AI, to give you a little bit better sense of the angle that we're coming at it from. Just to highlight one other sort of source of tension here, and why it's a little difficult for us to be very pushy on this issue: if we publicly push too hard on the attribution angle, the folks that are reusing our content are going to increase their emphasis on their uses being fair use under US copyright law, a doctrine which we also support. And so it could set up a sort of weird tension between our stance on fair use and the people reusing our content based on fair use. And so I think we collectively want to avoid getting into a fair-use fight as a fairly vocal proponent of the doctrine. And so, like I said, I think we're sort of working through other channels to get that attribution. And please look through our comments for more; I'm happy to answer more questions on those if you have them. Thank you.

I feel like I can add a little bit about the Wikimedia Enterprise component of this. This is the commercially available API product for high-volume, high-speed reuse. And part of that is definitely a revenue-generating project, to have major institutions who are using very high volumes of Wikimedia content contribute into the movement, rather than just using our resources that have been funded by donations; but it's also to improve the downstream reuse, visibility, accessibility and fluidity, because it's really hard currently. A lot of institutions, major search engines, and smaller institutions as well trying to compete with the big guys, simply can't use our content particularly easily, and can't identify when the result you're being served from their database is coming from Wikimedia, Wikidata or Wikipedia, et cetera, among all the various other pieces of data that they have as inputs. So part of the purpose of the Enterprise API is to make it easier for them to ingest the content, and therefore be able to say where it came from in their own databases, so that when they serve it to you as a customer, a search engine user, they can provide the attribution. In a lot of cases, when we're talking to them, these institutions say they would like to attribute better; they're not failing to attribute out of malice, but frequently out of a lack of ability to easily identify where the content came from. So hopefully, by providing a more commercially designed service to them, it makes the attribution component more easily followed: rather than a legal stick, it's a technical carrot.

I just had one more thing that I was gonna add, if that's okay, Elena. Okay, and thank you, Liam, for describing Enterprise for everyone; it's a helpful reminder of how we're approaching that work and how it's a little different from other organizations, I think in a helpful way.
So the thing that I wanted to mention: there's all of the opportunity in this space, and I think there's a lot that's very good that we can do ourselves with AI and machine learning technology. I think it is helpful to the world that they're using our data instead of, like, Twitter or X, as Jimmy said. And I also have some worries myself, and one of the worries that I have gets back to an article I published earlier this year that talks about some of the principles that we use when we think about the ways that we deploy this technology. Those principles, in short, are sustainability, equity and transparency. Maybe we can pull up the blog post and share it in the chat. The thing that I am most interested in advocating for is that companies using AI technology, which is built on this incredible gift that the world has given to the commons in the data that they train on, think about the sustainability of the data that they are using from the commons: how can we find ways of encouraging the people who have contributed so much to continue doing that, and also find new folks who are interested in contributing to the commons, and encourage that? So that's the area where I feel it intersects with the attribution issue, but it's not exactly that. It's a different kind of thing: just thinking about the ecosystem of contribution, and how do we all contribute to that sustainability over time? That's the issue that I'm most interested in advocacy around, because I think that's how we make all of the things that we work on last for a very long time. Thank you.

Thanks, Selena, and we have a link to the blog post here in the chat. Shani, do you wanna share a few closing thoughts on this topic, and then we'll move to the other questions that we have for open Q&A?

Sure. I just wanted to share very briefly that besides being highly interested in this topic because I'm a trustee, I'm also following this topic as a researcher, and I'm involved in three different research projects now on generative AI, one of which is exploring what's happening in our movement. And I just wanted to take a moment to say that I think it's quite inspiring to see how different levels in our organizations, or in our movement, are reacting to generative AI. And it's not for nothing that in this specific call today we've included both affiliates and Wikimedia Foundation staff to speak to you. It's because it will really require a holistic approach, and various levels of our movement working together, to really get it right. I appreciate that we have a limited budget and we need to think carefully about what we're investing in and how we are going about it. We can't throw millions of dollars at different trials like the big tech companies, and we have to be really thoughtful about what we choose to invest in and how it impacts other things that we are doing in the movement. So I think it will require a movement-wide cross-collaboration between different stakeholders and entities to really get it done. I think the Wikimedia Foundation is doing amazingly, the different teams you've just heard from, but I also love that we have different chapters and other affiliates experimenting and checking what could be done. And we have individuals who are not connected to any of these affiliates, experimenting with tools, with different workflows.
And I think all of these moving pieces together will help us move forward to the right place: hopefully to remain relevant and to continue to serve the world as we've been doing before, maybe slightly differently, but the mission is still there, and it's right. So I look forward to it; these are exciting times in that sense, and thanks to everyone who is contributing to this effort. We can't do it alone.
Thanks for those reflections, Shani. I think this discussion was great; thanks to everyone who submitted questions and who asked questions live. It was really insightful to hear about the multi-pronged approach that's happening throughout the movement on this. We'll get now to the other topics that were submitted for the open Q&A. The first one pertains to fundraising. As many of you may know, we are in our largest fundraising season of the year right now. So the question is: how is that campaign going so far, and what has the community collaboration process looked like for this year's largest fundraising campaign? I know we have Megan Hernandez on the call, so maybe, Megan, would you like to take that one?
Thanks, Elena, I'd be happy to. Hi, everyone. I'm Megan Hernandez, VP of Fundraising at the Wikimedia Foundation. As Elena said, we are in our biggest fundraising campaign of the year, which runs on English Wikipedia in the US and other major English-speaking countries. We are one week in, so I'm giving you a live update that will be changing as we get through the rest of December. But at one week, I'll say the campaign is going really well in terms of revenue; we are tracking right on projections, on track to hit our targets by the end of December. And collaboration has really been a major focus this year, even before the campaign started last week. The start of the fiscal year in July is when we usually begin preparations and pre-tests for this big campaign, and that's really when we kicked off the collaboration process with the community this year. We have a collaboration page on wiki, we've been having calls like this, and we've been speaking to many of you here as well as at Wikimania and at WikiConference North America just a few weeks ago; I'm recognizing a few folks who I saw in Toronto, and it's really nice to see you again here. I will just share the link to the collaboration page. We posted an update there just yesterday on how the campaign is going so far, and we're gonna keep that up throughout the month of December. I wanna express so much gratitude to folks who have been engaging there and helping to co-create the campaign this year. You will see, if you click through to yesterday's update, that we have a new part of the message that was submitted by a volunteer, and it is just so on topic with the conversation we were just having around AI. I think throughout the movement right now this is a big-ticket item, a big topic of interest for our Wikimedia community as well as our readers and donor community. So you'll see in there a new message that was submitted by Folly Mox, and that's running in banners right now. One other thing I'll share from our update yesterday is a really exciting part of the campaign that is new this year. So donors are reading Wikipedia, they see the banner, they decide to donate, and they go through the payment process.
And right after they finish making their donation, they reach a page that says thank you for donating, with various options for ways they can find out more information and get involved. This year, we have included a call to action, an invitation for people to edit, and we are seeing some folks starting to sign up and create accounts. We just looked at the one-week mark yesterday: we had 1,400 accounts created, and about 11% of those accounts, roughly 150 new editors, went on to make an unreverted edit within 24 hours of creating an account. So I am just thrilled to include this in the campaign this year as a way to deepen our engagement with donors, deepen our engagement with our community and get more editors. This was also a big topic that we talked about in Toronto a few weeks ago at WikiConference North America, and there was a lot of enthusiasm from folks there to try it out, see what we learn and see what opportunity there is. So that's what we're doing right now. Again, we'll keep the updates coming on the page, and if you are inspired by any of the conversations we had here, on AI or other topics, or just have ideas for messages that you wanna tell our readers this year, please go on there and share your ideas. And a big thank you to everybody who has already done that. We're gonna keep it up. So thanks, everyone.
Thanks, Megan. I'll turn our attention now to the topic of affiliate accountability. We had a few questions submitted on this topic for this call. The first question has to do with the values of the Wikimedia movement and with the use of the trademark. It's a bit of a compound question, but I think we can answer it all at once. The question is: should chapters bearing the Wikimedia name and trademark be held accountable to the values of the Wikimedia movement? Does the foundation hold chapters accountable to this, and how does that happen? So the "should" part comes first, and then the "how", the mechanisms for accountability, as the second piece of the question. Vicky, do you wanna take this one?
Yes, please. My name is Victoria Doronina, and I've been a trustee since 2021. I'm a member of the Community Affairs Committee, which is why I'm here, and also of the Product and Technology Committee, so I was happy to see all the people our committee works with. If we're talking about values: all Wikimedia organizations, including the foundation and all the affiliates, not only chapters but also user groups, should uphold Wikimedia movement values, because what is the point of them if they don't? When groups apply to be chapters, the first requirement is that they have a mission that supports Wikimedia values and our general mission of free knowledge. When they apply, the Affiliations Committee and foundation staff evaluate the applicants to make sure that they have the capacity to be good movement representatives, and then they receive chapter recognition from the foundation board. They are held accountable as an ongoing process, but it's mostly the members who should uphold the values. A chapter is an independent organization, so accountability is mostly internal; that allows them to be autonomous, and they have independent governance structures. Chapter members elect the chapter board, just like the Wikimedia Foundation has board members, for example, myself. And the chapter board is responsible for the activities of the chapter, making sure that they correspond to the values of the movement.
However, if there are issues with a chapter that cannot be solved by the governance structure of the chapter, there are steps that the Affiliations Committee and foundation staff can take to support the chapter. So basically, if you believe that a chapter is not doing a good job, you apply to AffCom, and AffCom, together with foundation staff, will try to help. If that fails, then, as a worst-case scenario, the foundation can revoke the status of the chapter. So first, self-governance; if that doesn't work, the foundation can help the chapter; and if that fails, it can lead to derecognition.
Thanks, Vicky. I think you touched on the next affiliate accountability question in the answer you just gave, but I do wanna make sure we get all angles of it, because I know it's important to the question asker. The next question is about the use of grant funds in general, acknowledging that the foundation is a major grantor to many affiliates. The question is: how does the Wikimedia Foundation ensure accountable use of grant funds, and, along those lines, ensure that affiliates are complying with fair employment standards and treating their workers fairly? How does the foundation deal with this?
Right, so affiliates are independent organizations, as I said, which means that they have to handle their own employment practices. And because each country's laws are different, the foundation often doesn't have expertise in what happens in a given country. However, if people repeatedly and insistently express concerns that an affiliate is failing to follow governance best practices, is not complying with the laws of its country, or is not fulfilling the conditions of its grants, then the foundation and AffCom will look into it, and if we discover that the concerns are true, we may help the affiliate get back on track. In general, the foundation is a major grantor to many affiliates and the steward of donors' money, the money being collected right now that Megan told you about, and the foundation is also accountable to all stakeholders. Therefore, the foundation will take steps to make sure that grants are spent responsibly, in the ways declared as the goals of those grants, and in ways that correspond to the movement's values. The foundation has regular contact with affiliate grantees, and it has audits and other mechanisms for checking on what is really happening. In some cases, when there are potential issues but we're not familiar with the country and somebody else can do this better for us, we employ a third party. So basically, the Wikimedia Foundation is looking after donors' money and will make sure that it is spent in ways that maintain the reputation of the Wikimedia movement and its values.
Thank you for that, Vicky. So we've got about four minutes left. I know we have two remaining pre-submitted questions; I'm gonna try to get to both in these four minutes, and then we want you to stick around, because we have our poll, like we always do, before we wrap the call. So maybe we'll get through these two questions quickly. The first one is about the Arabic Wikipedia administrators who were jailed. I know this has been a topic that we've spoken about on the mailing list.
The question is: how do you feel, and I assume "you" means the foundation here, about relaxing the privacy restrictions which prevent the foundation from leading a letter-writing campaign in support of the jailed Arabic Wikipedia administrators? I know we have Steven here on the call, our Chief Legal Officer. Maybe you'd like to weigh in on this one.
Hi everyone, I'm Steven LaPorte, and I'm the general counsel for the Wikimedia Foundation; that includes the legal teams as well as global advocacy and our teams that provide trust and safety and human rights work, so this is in the department that I work in. The question asked about privacy restrictions, but I wanted to note that privacy is only one of the principles we think about when examining something like advocacy or a letter-writing campaign. The more important principle, in my mind, is the safety of the people involved: that includes the people we intend to help by doing advocacy, as well as all of the other people who contribute to the Wikimedia projects. Being a volunteer-driven project means that there are lots of people who are affiliated or connected with Wikimedia, and I think we, especially at the foundation but broadly as a movement, owe it to everyone who contributes to think very carefully about their safety whenever we take actions. So when someone is jailed and we're trying to think about the best way to advocate for them, safety is the first question we analyze internally, and I think it's an important one for anyone to analyze if they're considering grassroots actions like a letter-writing campaign. I'll just say we take safety considerations extremely seriously. We consider them very carefully, we speak with experts, and experts do not always agree, so we may have to get second and third opinions and come up with the best course of action based on a multitude of different opinions. And we consider the safety not just of the people we're trying to help but of everyone else who is affiliated with Wikimedia and may be in the region that the advocacy is focusing on. All of this is discussed in more detail in a response that my colleague Maggie Dennis posted on Wikimedia-l; I think someone might be able to provide a link to that in the chat. That response goes into more detail on how we do these kinds of safety reviews, and it also provides links back to the human rights team, the experts within the foundation, with more detail about how they respond. So hopefully that gives you some sense of how we consider responding with advocacy campaigns, including letter writing.
Thanks so much, Steven, for that answer. The last pre-submitted question that we have is related to the upcoming board elections. I think, Darius, you might be best positioned to answer this one. Why is the proposal for the next board election giving shortlisting power to the affiliates?
Well, there are quite a number of reasons for that, but the main one is that we wanna reduce the burden on the voters and the candidates. When we have a large number of candidates, the burden is big on both sides. Also, having a large candidate pool means a lot of extra background checks for everyone involved, so there is that factor. Of course, the elections committee and the board selection working group will be discussing this and other possible ways to shortlist candidates. But for now, we think it makes sense to have the affiliates play an important role.
This is something we can easily ask them to do, and I think they are happy to play this role; they are part of the movement. So I think it makes sense all together. We are, of course, open to further community feedback in any form, and you can email us at askcac@wikimedia.org if you want to provide additional input; we'll be taking it into consideration for sure.
Thanks, Darius. I know we are one minute over, but if you can stick around, and I know a few people have had to jump off, we do still have a poll coming up. Just a reminder that the recording of this call is immediately available on the same YouTube link that people used to stream it; it's gonna be on Commons soon, and notes will be available on the Meta page. The askCAC inbox that Darius mentioned is also open for any questions, feedback or comments for the Community Affairs Committee throughout the year, so feel free to email it if you have any follow-ups for the board. I will pass now to Shani to run our poll and close the call.
Thanks so much, Elena. As Elena is sharing the link to the poll, I want to take a moment to wrap up and thank everyone for participating in this call and making it happen: first to Elena for facilitating, to the staff helping behind the scenes so that it all runs smoothly, to all the trustees, the senior leadership and the team members who came here, and mostly to you, the audience, who are watching and engaging with us. The poll is meant to help us improve the way we're doing these calls, so please tell us if it's working, and if not. I also want to say that tonight we're celebrating Chanukah. This is a Jewish holiday where we light a chanukiah, or menorah, and it symbolizes the victory of light over darkness, externally but also very much internally. With so much suffering around the world and so many threats, I just want to take a moment to wish us all a happier, safer and flourishing 2024. I hope this is a year when we all come together and continue to serve the world with free knowledge. Happy holidays, everyone. We see the results here on the screen, and we will continue to follow up; if you have any other concerns that you were not able to get answers to, please write to askcac@wikimedia.org and we will do our best to help you. Happy holidays and stay safe, everyone. Thanks, everyone. See you in 2024.