Welcome to January 2017's Metrics. My name is Josephine, I'm with the IT Help Desk. Today's theme is building our future: we're going to delve into finding ways that we can, as a community and WMF, build our future together. So here's our agenda for today. We have welcomes, our theme, our introduction. We're going to go over the movement update, then reconstructing MediaWiki history, then community capacity development, and then the movement strategy update, and afterwards we're going to do questions and discussion, and then move on to Wiki Love. So today we're going to be welcoming some contractors, interns, and volunteers. We have Allison with Legal, Omar with CE, Faizé with CE, Anthony with Product, JC with CE, Jean with Technology, and Veronica with CE. And we have a lot of anniversaries today. For one year, we have Deb, Jack, Nathaniel, and Chris. For two years we have Arian and Corey. For three years (wow, there are a lot at three) we have Sam Smith, Alex, and Giles. For four years we have Bruna and Doreen; for five years, Jody, Steven, and Andrew; and for a good six years, we have Dario. So next we're going to go over the movement update with Maria. Hello everyone. As was mentioned before, the theme of this meeting is building our future. My name is Maria Cruz. I am Communications Project Manager in the Community Engagement department, and I'm here to share a few stories about the movement. Next please. In the theme of building our future, one way of doing this is by enabling students to learn values of collaboration and cooperation with Wikimedia. One example of that is Corfupedia. Next slide please. Corfupedia is a project that was proposed by students to their teachers, after noticing there is not much content about Corfu on Wikipedia, in an introductory course about the encyclopedia.
It involves 60 students ages 12 to 68 from traditional and evening high schools: two junior high schools, one evening high school, and one vocational evening high school. That's why the age range is so wide. Another story from the movement, next slide please, is Wiki Speaks Languages: Wikipedia will talk to you, and it will teach you pronunciation. Next slide please. This is a project started by Shared Knowledge in Macedonia to document spoken examples of different languages. The goal is to enrich content on Wikipedia with multimedia files, and in this way increase the quality and educational value of the written text. They have been partnering with GLAM institutions, education institutions, and other Wikimedia affiliates as well, and they're trying to promote this initiative through the cooperation of Wikimedia Central and Eastern Europe. Next slide please. Next is the Hungarian Revolution of 1956 editing challenge. I thought of sharing this because I think the future will certainly be built through affiliate collaboration. Next slide please. This is a writing contest to commemorate the 60th anniversary of the Hungarian revolution. The challenge was taken on by affiliates of Wikimedia Central and Eastern Europe. It had seven participating countries, 390-plus articles edited, and an edit-a-thon in Hungary. The contest organizers have implemented valuable lessons from the Wikimedia CEE Spring writing contest. And finally, next slide please: during January, WikiIndaba 2017 took place. Next slide please. This is the regional conference for African Wikimedians to strengthen support and share knowledge within Wikimedia communities, both on the continent and in the diaspora. It took place in Accra, Ghana, from January 20 to 22, and it was supported through a Wikimedia Foundation grant. It had over 45 participants from 18 countries.
And among the highlights of the conference: increasing visibility and awareness of the global movement was identified as a key area for development in the region. You can read more by following that link. Next slide please. From the Foundation highlights, we wanted to share that the Inspire campaign on the gender gap published its final report. This is a qualitative report that looks into Inspire campaigns as a model for proactive grantmaking; this is a model created by the Community Resources team. The report offers key lessons learned from the first campaign, on the topic of increasing gender diversity on the Wikimedia sites. The Wikimedia Foundation received a $3 million grant from the Sloan Foundation, as you may have read on the Wikimedia blog, to enable structured data on Commons. The second annual #1Lib1Ref campaign kicked off this month as well; this campaign encourages librarians to get involved with Wikipedia by adding citations to articles. Next slide please. Many things happened in January as well. The Wikimedia Developer Summit took place from January 9 to 11, where volunteer and staff developers discussed six key topics, among which were creating a plan for the 2016 Community Wishlist top 10 ideas and how to grow the developer community. We also had our Wikimedia Foundation All Hands, hosted in San Francisco on January 12 to 13, where employees participated in sessions, connected in person, and discussed issues important to our work. And we also had an amazing talent show. Coming up in February 2017, next slide please: the movement strategy process, annual planning, and the board recruitment process. And that is it for the movement update. Thank you. So the next speaker we have is Dan. Yep, hi everyone. I'm Dan, I'm on the Analytics team, and I'm super excited to be here today showing you what we've been up to.
So, basically, in the theme of building our future, one of the things that our team, Analytics, does is try to make sure that we understand where we come from. They say hindsight is 20/20, but at the Wikimedia Foundation it's not always easy to answer simple questions. When we looked at the data, we found that a lot of the very fundamental things that we need to know about our past are actually pretty hard to answer. I'll give you an example: how many new editors have joined our projects, all of our projects, since the beginning? To answer this question right now, we have to write a complicated query. It has to hit three tables and go across 800 wikis; it's got a bunch of joins and subqueries; it's hard to read; and it ultimately takes five days to run. This is a problem, because it's information that we need to know. What we did is we brought all of that data from all those different places into one place. We patched it up, we cleaned it up, and we're calling it the Data Lake. Now when you ask the same question, how many new editors, we don't have to do any joins, we don't have to do any subqueries. We only look at one table, and it takes five minutes to answer the question. More than that, we built a process by which we can start with a question that's otherwise hard to answer, and build infrastructure on top of infrastructure that we've been building for years to make it easier to answer that question. So I'm going to go over a few questions that are easier to answer now with the data that we've already built, and then I want to show you ideas for what we can do going forward and how we can go about asking more interesting questions. So what if we wanted to know, of those new editors, how many of them were bots when they joined? Today, based on our database, we know which editors are bots right now. We don't know who was a bot back then, because we don't store that data.
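The before-and-after Dan describes, a pile of joins and subqueries fanned out across hundreds of wikis versus a single scan of one denormalized history table, can be sketched roughly like this. This is a toy in-memory example; the table layout, field names, and rows are illustrative, not the Data Lake's actual schema.

```python
import sqlite3

# A toy stand-in for the denormalized history table the talk describes
# (the real Data Lake lives in Hadoop, not SQLite).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mediawiki_history (
        wiki TEXT, event_entity TEXT, event_type TEXT,
        event_user TEXT, event_timestamp TEXT
    )
""")
conn.executemany(
    "INSERT INTO mediawiki_history VALUES (?, ?, ?, ?, ?)",
    [
        ("enwiki", "user",     "create", "Alice", "2015-03-01"),
        ("dewiki", "user",     "create", "Bob",   "2016-07-12"),
        ("enwiki", "revision", "create", "Alice", "2016-07-13"),
    ],
)

# "How many new editors joined, across all wikis?" becomes a single
# scan of one table: no joins, no subqueries, no per-wiki fan-out.
(new_editors,) = conn.execute(
    "SELECT COUNT(*) FROM mediawiki_history "
    "WHERE event_entity = 'user' AND event_type = 'create'"
).fetchone()
print(new_editors)  # -> 2
```

The design choice is the classic one: pay the join cost once, up front, when building the table, so that every subsequent question is a cheap scan.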
So we dug through the logs, we patched things up, and we historicized the concept of being a bot. We have two fields (some technical stuff here, but easy to understand): an event_user_groups field and an event_user_groups_latest field, right there in the same table with every edit. So we know whether the editor was a bot at the time of the edit, and whether or not they're a bot today. Instead of joining with other tables and trying to pull this information from hard-to-get places, it's right there in the same row; you don't have to do anything to go get it. Other things that are historicized, that we tracked down backwards through time, are things like what titles pages have had over time as they get renamed, redirected, and recreated, and what names users have had over time. These things are maybe not too interesting on their own, but what if we wanted to ask something like: do people who get reverted, or people who have harassment directed at them, become more likely to change their username to try to hide from that? This is data that's now letting us ask those kinds of questions, which are really important to us as we try to figure out what we want to do in the future. We also have some interesting new insights about our data that, again, are really hard to get if you're just looking at the current structure and digging through the tons and tons of data that's available without prioritizing, without looking at it from a question perspective. For example, we have this concept of a revision being productive, which means that someone made an edit and it was not reverted within 24 hours. To figure out whether a revision is productive today is a really hard process. In this work, it's just a field that says yes or no, right next to the edit. So you can use that and ask questions about the productivity of edits.
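Here is a minimal sketch of what these per-row historicized fields buy you. The field names event_user_groups and event_user_groups_latest come from the talk, but the rows and the record layout here are invented for illustration:

```python
# Each edit row carries the editor's groups at edit time, their groups
# today, and a precomputed "reverted within 24h" flag.
edits = [
    # (user,   groups at edit time, groups today,      reverted_within_24h)
    ("Alice",  ["user"],            ["user", "sysop"], False),
    ("FixBot", ["bot"],             ["bot"],           False),
    ("Carol",  ["user"],            ["bot"],           True),  # became a bot later
]

# Was the editor a bot *at the time of the edit*? It's right in the row.
bot_edits_then = [e for e in edits if "bot" in e[1]]

# Is the editor a bot *today*? Also right in the row -- no join needed.
bot_edits_now = [e for e in edits if "bot" in e[2]]

# "Productive" = not reverted within 24 hours, again just a per-row flag.
productive = [e for e in edits if not e[3]]

print(len(bot_edits_then), len(bot_edits_now), len(productive))  # -> 1 2 2
```

Note how the two group fields give different answers for Carol: her edit counts as a human edit historically, even though she runs as a bot today.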
What do we want to do next with this data? Actually, I guess, other things that we've tracked down: we have contextualized to each edit the registration date of the user, so you can ask questions about what people do relative to when they registered, and how many bytes a revision adds or removes, which is another thing that's a little tricky to get today, and so on. You can take a look at this; come ask us for more details. What we're doing with this data next is using it to update Wikistats, which is an amazing project that started from before the Wikimedia Foundation even existed and is a critical resource for our communities to do their work. We're going to update that with this data and, hopefully, lots of interesting new stuff that they haven't been able to use until we did this infrastructure work. We're going to publish this so that it's available for public research, in collaboration with the Labs team. We're going to make it available in an interface where it's really easy to slice and dice; I'll show you an example real quick. Going forward, even more, we're going to look at the revision text itself and parse that, because there are lots of really, really interesting things there; I'll give you an example. And most importantly, we want to show that we built a process where you can ask questions. So your questions, whether you're from the community or from the foundation, are welcome, and we're going to try to figure out how to build infrastructure to answer them. This is an example of Pivot. It's an interface that's available right now only internally, because of privacy concerns, but hopefully this data is going to be available externally, soon, in this shape as well. We showed this to a few different teams, like Fundraising and Reading, and everybody's super excited. It's really fast at getting you insights. This is showing the amount of content added per wiki, so you can see that wikidatawiki is getting a lot of really interesting work done.
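The "content added per wiki" view shown in Pivot boils down to a group-by over per-revision byte deltas. The numbers below are invented, and this only shows the shape of the aggregation, not how Pivot itself computes it:

```python
from collections import defaultdict

# Toy rows: (wiki, bytes added by a revision). The talk notes that byte
# deltas are precomputed per revision in the new dataset.
revisions = [
    ("wikidatawiki", 120),
    ("enwiki", 40),
    ("wikidatawiki", 300),
    ("dewiki", 15),
]

# "Slice and dice" here is just a sum grouped by the wiki dimension.
content_added = defaultdict(int)
for wiki, delta in revisions:
    content_added[wiki] += delta

# wikidatawiki comes out on top, matching the observation in the talk.
top_wiki = max(content_added, key=content_added.get)
print(top_wiki, content_added[top_wiki])  # -> wikidatawiki 420
```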
And the kinds of questions that we want to answer get at things that we haven't been able to get to before. How much work does our community do across all of our projects? One of the ways to measure it is to count the tags that they place where work needs to be done, right? That's how much work needs to be done. Citation needed is our classic tag. How many of these tags are there on Wikipedia, across other projects, and backwards in time? These questions are really hard to answer now, and getting the answers will allow us to do things like measuring the community's backlog, so we can include that in our decision-making. Importantly, we want to count what you need. So I hope this data inspires you and you come talk to us. You can reach us on IRC in #wikimedia-analytics, or on our mailing list, and we have all of our documentation on this stuff and more on Wikitech. Thank you very much. We need to see all these slides again. Yes, all right. Good evening. I'm here to talk about the Community Capacity Development pilot program, a program run by the Community Resources (CR) team. This is an experiment. It's a program that was built on the premise that there are certain capacities that all thriving communities need to have, and that some communities, for whatever reason, have not been able to develop or grow these capacities sufficiently, or have plateaued and can't quite get beyond a certain level of capacity. And further, that WMF can usefully intervene and help those communities with a targeted, limited-time project, partnering with a specific community to build a specific capacity, to kind of get them back on their way growing and developing that capacity. For this project, I would like to acknowledge and thank Anasuya, who approved it almost two years ago and had the vision and the clarity to see the need. And further, this project is cuteness-approved.
Much of it was supervised by Konkunchik here, a member of the Wikimedia Cuteness Association. So, how did we go about this? First, we conducted a whole bunch of research, with quite long community interviews with 17 different communities across the world, most of them emerging communities. Yes, I can give you examples: partnership building, media relations, community governance, on-wiki technical skills. All of them are necessary for all communities, and some communities have naturally grown those capacities and others not so much. This research phase was precisely designed to find out what are some key capacities that are relevant and useful for our communities. After that research phase, which was conducted a year and a half ago, we selected three emerging communities to pilot with: Brazil, with whom we worked on communications and media relations; the Tamil community in India, with whom we worked on on-wiki technical skills; and the Ukrainian Wikipedia community, with whom we worked on conflict engagement. So, already, an example of three different communities that identified three different key needs for development. Once we had those communities interested in working with us on this, we developed a curriculum for how to build that capacity for that community. The key factor in that was delivering the training in person, in that country, and in that language, using translators. Then it was time to evaluate the program, and that's where we are right now: we are done evaluating the program, so here I am telling you about it. After that, we will need to decide, now that this pilot is complete, what conclusions we draw and how we move forward. So, that's just the timeline. Not my style, really, but apparently photos are important. So, here is India. That was also, by the way, supervised by Cuteness. Here is Greg Varnum helping out with the communications training in Brazil.
In India, Yuvi was key to the success of the training. And I don't know who this guy is. Conflict management. So, does this work? The short answer is yes, this works. We are able to help communities build these capacities if we pay attention to that community and work on that capacity. The longer answer is: not only does it work, it also has additional beneficial side effects. So, I'd like to share a few lessons. First of all, this high-touch approach works. The communities really appreciated working with us and the level of attention that we paid. We repeatedly heard during our interview phase things like: nobody's ever asked us that; nobody ever cared how our community selects admins, for example. And the communities in this pilot did successfully level up, did successfully break new ground in their development in these respective capacities. What I'm stressing here is that in addition to our efforts to scale, and to do everything that serves everyone with the greatest multiplier, there are certain things, and this is one of them, specific capacities of specific communities, that benefit from a high-touch, high-interaction approach. We need to do both. This training was effective partly because it was in person and in their own language. Our post-training surveys and interviews have proven this time and again: people really appreciated the fact that we came to them and made it accessible in their language. The materials from those trainings are significantly reusable. It turns out that these needs are actually shared across large parts of the movement. For example, the conflict engagement materials we have developed: it is quite tricky to teach Wikipedians about conflict, because most of the standard curriculum on how to handle conflict is about people having conflict in person, whereas most of our conflicts are online, sometimes with pseudonymous people, et cetera. So that curriculum has turned out to be quite reusable.
I have personally given it already at four conferences, aside from the training in Ukraine. Likewise, the on-wiki technical skills training, which included tools demonstrations but also a thorough introduction to Wikidata, was already delivered at multiple conferences and was very well received. To share some concrete examples of the impact: the Brazilians, following the training (certainly correlated; the Brazilians are permitting me to say also caused), have revamped their website, which was defunct; have revived their blog and social media, and regularly contribute to and use those; and they've created a press kit. These are things that, for whatever reason, did not exist in Brazil before this training. The Tamil community now regularly engages with Wikidata. Before the training, there were zero contributors from the Tamil community, except for interwiki links, of course, which everybody had to kind of migrate to; there was no editing of Wikidata beyond interwiki links from Tamil editors. Now there is, and one of them, I don't know how, has amassed 200,000 edits to Wikidata, manually, not using bots, in under a year. The training took place in 2016. Yes, there are some quotations here. I won't read all of them, but people were saying: I was aware of Wikidata, but found it complicated and confusing to understand; now I think it's the future of Wikipedia. My mind was blown; I was inspired and started contributing massively. We need WMF to come to communities; the quality and depth of the training by experienced WMF staff can't be matched by outsiders. I don't know if it can't be matched, but anyway, a warm endorsement. One very veteran Wikimedian, with more than 10 years' experience in the movement, told me: I attended lectures about Wikidata many times, but not one engaged me and made me actually want to contribute; I was finally persuaded that I should invest time and go actively contribute to Wikidata. So now what? This was a strategic pilot. This was an experiment.
The experiment succeeded: this approach does work. The report, which is on Meta, recommends that we scale this up. Now that we know this works, scaling it up means working with additional communities and working on additional capacities. We've only worked on three capacities out of the six we had identified in the initial research, and we could probably work on a few more. Secondly, we should develop a kind of core curriculum, a kind of notion of what all communities should have, and then track that across communities, so that we know where different communities are along this core curriculum. It should be somebody's job to make sure our active communities are not left behind on adopting Wikidata or on using Lua; I don't see that it currently is. Someone other than the Community Resources team should care that the Tamil community was not using Wikidata at all. So I'm proposing, I'm recommending, that we track this, that we do pay attention to this, and then, within the constraints of resources and budgeting, see how we can help the greatest number of communities progress the greatest amount along this core curriculum. And finally, and this is with a view to scaling, we should identify some already effective trainers across the movement and empower them to deliver that training again and again, in their own communities and in other communities. One thing that was observed across all three pilot programs is that the quality of the trainer and the training matters a whole lot to the receptivity and effectiveness of the training. Some people are better than others at public speaking; some people are better than others at explicating Wikidata. For example, I have been told by the Wikidata team at Wikimedia Germany that my approach to explaining Wikidata impressed them, and they liked it so much that they are in fact adopting it in their own efforts to explain Wikidata.
The point is, once you find something that works, you need to empower it and make sure it has more chances of scaling across the movement. I cannot personally teach Wikidata to every single Wikimedian in the movement, but we can train trainers, make sure they're effective, and then send them out to do more trainings. So these three key recommendations are now before the foundation, and they imply increased resourcing for this program. Again, it was resourced at a pilot level; now WMF needs to decide whether and how to increase the resources. And the big question is: is WMF leadership interested in this? Now that we know this works, we need to decide whether we want to do this, how we want to do this, and what teams would be involved in doing this beyond the Community Resources team. That is again where we are right now; that decision is yet to be made. If you have more questions, want to get involved, or want to read some of the data and surveys, et cetera, it's all on Meta under CCD, Community Capacity Development, or you can write to me. Thank you. Next. Lisa, aren't you next? Is this strategy update cuteness-approved? Is this cuteness-approved? I think it's lacking. Can we go to the next slide? I think it's lacking a little on cuteness. Yeah, I mean, it could use some work, I guess; black and white slide. So this will be very brief. I think most of you, but perhaps not all of our listeners online, have met the strategy team. We just wanted to take the opportunity today to reintroduce, for some of you, and introduce, for those who haven't met them yet, our core strategy team, who are leading the movement strategy process. There's a combination here of new faces and familiar faces. The leaders of the group are Whitney Williams, Ed Bland, and our own Guillaume, and the project managers are Shannon Keith and Susie, who I think most of you know, who's worked with us for well over a year on strategy and annual planning work.
So I just want to introduce them; they are all working together this week in Seattle, and I'm sure they'll come out with lots to share with us. Thanks to everyone who participated in the workshops that we did at All Hands. A couple of members of this team were also over in Switzerland, meeting with some chapter executive directors and starting the conversation with them as well; a lot more to come from this team. And they are somewhere listening, participating remotely in this meeting as well. So welcome to all of them. Hi everyone, we'll chime in briefly from Seattle. Thanks very much for that warm introduction. This is Ed Bland. I'm sitting here on the couch with my colleagues: Guillaume, you know. Hi. Susie, you probably know. This is Shannon. Hi guys. Whitney's not with us today, but she's with us in spirit. So we're delighted to be involved in this project. Thank you, Assaf, that was very helpful for us here, and we've heard a number of other things. In terms of a quick background: I spent quite a lot of time in the corporate world, and have spent the last 10 years mostly in the nonprofit world, working with organizations that are part of movements to help them scale up and be more effective in those movements, the largest of which is the microfinance movement. So that's my background. And I can share that I've been working with Ed and Whitney and others on the WilliamsWorks team for several years now, collaborating with brands that reach large numbers of people and that are interested in engaging those people to do good in the world. And so it's a delight for Ed and me to be working with the Wikimedia communities on this project. We're excited to share some of what we've learned from working with others and some of the folks we know, and also to listen and really learn from you all, which we've done for the past couple of weeks, but we are early on and intend to do more. So, looking forward to that.
No, I mean, I think we're doing a lot of great work, and we will start posting that on Meta and on the mailing lists very soon. And we will also be available to answer questions at the end of this meeting, if you want, if you have any. Great, nothing really more to add. Very good. Thanks again for the invitation to participate in the Metrics meeting. Yeah, just while we have you: I know that many people have met you, but not everyone, and in addition to getting to know each of you individually, I'm wondering if you could give just a general description of WilliamsWorks as an organization, what your focus is, and what you're working on with us here, because I don't know that everyone is familiar with that side of things. Sure, are we off mute? Yes. So, WilliamsWorks as a firm: the reason we're in Seattle is that we're based here in Seattle, but we also have team members in other parts of the world, in Africa and Europe. Whitney Williams founded the firm 15 years ago. She used to work for Hillary Clinton in a fairly senior capacity, helping her to do a lot of travel and logistics around the world, meeting people, learning about problems, and helping to address those problems. She transitioned to creating this social impact consulting agency, and we've been helping for-profits, nonprofits, and individuals to do good in the world. Sometimes it's quite a shift when they are for-profits that don't really know how to give back and want a lot of direction. Sometimes they're individuals who have done well in their careers and now want to give back in a significant way and would like help and guidance doing that. And other times they're nonprofits that want to do more or have more impact in the world.
So we've been doing various projects with various organizations over the years, including the Bill and Melinda Gates Foundation early on, TOMS Shoes, and we've created nonprofits in East Africa and Central Africa, all sorts of brands that you would know, for-profit brands; feel free to check out our website, williamsworks.com. Let's see, this project allows the core group and various communities to pull up quite a few levels from your normal strategic work and think longer term, about a 15-year time horizon: where this movement that you're a part of is going, where you are going and where you want to go, and how you want to achieve high goals of spreading knowledge and freedom. So that's what we're involved with. We'll be publishing very shortly a project plan that will help everyone see the scope: how we're engaging different audiences and allowing lots of participation in the process of developing strategic themes that will help to guide the communities. Hope that was helpful. Feel free to ask questions in the Q&A if you have more specific questions. So we have a couple of questions from IRC; one was about Dan's talk, but I'm going to start with strategy and read one from Joel, which was mostly about Assaf's talk and strategy. He says: how should the foundation decide whether to spend more resources on expanding this program, the community capacity one, versus all the other things the foundation could do? I'm envisioning something like somebody giving a presentation like this and saying our pilot shows that this program has a cost-benefit ratio of between three and ten, and then that could be compared to other programs. Is the current strategy process coming up with anything that we could use to make an apples-to-apples comparison, especially between heterogeneous choices for bringing more knowledge to humanity, like new servers for Africa or training six medium-sized communities in Wikidata?
How do we make that choice? So, I'm not sure I understood everything, but I'm going to give you an answer and you tell me if that addresses the question. The phase of the strategy process that we're in right now, and that will last until around Wikimania, is trying to have a movement-wide discussion and to define a direction where we want the movement as a whole to go, and to try and align between the different actors of the movement and the partners. So it's not going to be about whether the foundation should devote resources to a specific program or a specific feature of the software; it's not going to be that level of discussion. It's going to be: as a movement, composed of the foundation but also affiliates and organized groups and individual contributors and potential readers, as the whole movement, in what direction do we want to go? Once we try to agree on that, in a few months, then we will start looking at how we translate that into action and who is going to do what. Then we will start talking about strategic plans with deadlines and roles and specific goals, and then we will start trying to prioritize those programs and how we assign the resources; that will come in the second phase of this project. Does that answer the question? Then the question about Dan's talk: will the revision text work include ORES scoring on past revisions? This could answer questions like: were past edits more or less damaging, as estimated by ORES? Have community standards gotten stricter over time, quantified by whether we revert more or less for the same level of damage? That kind of question. The answer is: it's definitely possible to analyze revision scores over time. We have to build a little bit more infrastructure, but that's exactly the kind of thing that we want to know, like what should we prioritize, what questions should we make easier to answer right now. So thanks for that; we'll follow up on it with Eric. Are there any questions in the office?
Yes, I have a question about the capacity building. I was curious, for the three sessions that you ran, did you have specific goals that you were trying to get that community to hit? And is the idea that once you've got all six, I think, of the topics that you have planned, that is going to be essentially the menu of topics and goals that you're going to be able to hit in one training set? The trainings did have goals, and some of them were concretely met; for others, there's a kind of growth towards them. Some are very hard to measure. The conflict management one in Ukraine was the hardest to measure, because it's a soft skill and it's very hard to measure the amount of conflict, or whether inevitable conflict is happening in a better way, on Ukrainian Wikipedia. There's no concrete evidence that we can point to, for example, that shows decreased conflict on Ukrainian Wikipedia, but how people individually behave within that conflict, and how comfortable they are operating in a conflict-ridden environment, has improved, and we have anecdotal and personal evidence to suggest that. Other things are easier to measure: Brazil's media development is objectively observable; Wikidata edits can be counted, et cetera. To your question about the capacities: the six that were identified in the research, six is kind of arbitrary. There could be 10 capacities; there could be 20. All of those six are crucial, and there are more. At the time, for example, we excluded some capacities that seemed to be addressed by other WMF efforts focused on organizational development of affiliates. Those efforts have since been suspended, so now it makes sense to actually revisit them as possibly part of the capacity development work. So that's not a closed menu of what we would offer.
Even within each capacity, when we say on-wiki technical skills: we focused on Wikidata and bot frameworks with the Tamils, but maybe other communities will say, no, we've got that covered, what we really want to understand is Lua, or ORES, or Labs. It's really interesting. Thanks. Thank you. Hi, this is Dario from Research. I wanted to say quickly, in response to that offer: yes, in fact, we may have ways now to measure, effectively, attacks and conflict, so we could talk about it. And I had an anecdote to share in response to the project that Dan was presenting, which is particularly telling about the reconstruction of article history. A few days ago, a few reporters were tweeting that the Wikipedia article about the 25th Amendment of the US Constitution was spiking in traffic, in response to something happening in the public debate in this country that is drawing a lot of attention. I tried to replicate this data, to figure out whether that was actually the case, and I couldn't. The reason is that there is a 25th Amendment article on Wikipedia, and there's a bunch of redirects left behind over the history of the article's many title changes. That's something that happens very, very often on Wikipedia. Long story short, our pageview API right now is unable to resolve these redirects, and we might be unable to tell the story of which topics are actually spiking without looking at the full history of an article's changes and title changes. So I just want to pitch this as a possible use case. I think having the entire article history reconstructed will allow us to have very high-quality data about traffic, not just edits. So I'm very excited about that. I said this on IRC, but I'll mention it here: we're super excited to solve the redirect confusion that has plagued many different questions with hard answers over the years. We super, super want to solve that. Thanks, Dario, for mentioning it. Okay, one more question from IRC. It's from Leila to Asaf.
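The redirect problem Dario describes can be illustrated with a small sketch. Assuming a redirect map is already available (it could be built from the MediaWiki API, or eventually from the reconstructed article history), folding pageview counts recorded under redirect titles into their targets is straightforward. The titles and view counts below are made up for illustration.

```python
def aggregate_pageviews(counts, redirects):
    """Fold pageview counts for redirect titles into their targets.

    counts:    {title: views}, as reported per-title by a pageview source.
    redirects: {redirect_title: canonical_title}; titles not in this
               map are treated as already canonical.
    """
    merged = {}
    for title, views in counts.items():
        canonical = redirects.get(title, title)
        merged[canonical] = merged.get(canonical, 0) + views
    return merged

# Illustrative numbers only: the traffic spike "hides" under a redirect title.
counts = {
    "Twenty-fifth Amendment to the United States Constitution": 5000,
    "25th Amendment": 42000,  # redirect title that readers actually hit
}
redirects = {
    "25th Amendment": "Twenty-fifth Amendment to the United States Constitution",
}
print(aggregate_pageviews(counts, redirects))
```

Looking only at the canonical title would show 5,000 views and miss the spike entirely; merging across redirects surfaces the combined 47,000.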
And just on the reasons people brought up that the pilot was not good in some way or another: did anyone come up to you and say that for reasons X, Y, and Z this kind of pilot would be a bad idea? And if so, what were the main points they had? No, nobody challenged this idea when it was presented and announced almost two years ago. When we presented the results of the research phase in Berlin, people had questions, but nobody said this approach strikes me as terrible or wasteful. I'm even surprised nobody really challenged the approach. Maybe it means everybody thought it was a great idea; maybe people were not paying attention. But the specific programs, in the communities where they were run, were very positively received. People did have criticism about some technical details. For example, one of the programs had a real-time interpreter who wasn't performing well enough and was distracting for people, because they had to kind of mentally keep correcting the mistakes and adapting the wiki terms, et cetera. But that's not a very interesting criticism; it's a technical detail that of course we will do better next time. So, interestingly, I've not had a lot of responses to our choice of model, to our hypothesis here. Nobody has seriously engaged with it. There were a few positive comments, and that was it. Are there any other questions? Any on IRC, or in the office? No? Okay. Well, next we can go ahead and move on to Wiki Love. I just wanted to say thank you to everyone in the Foundation that participated, including Jack, Zach, Asaf, Katherine, and Casey as well, who put a lot of effort into making it a really awesome conference. There were a lot of really good workshops and teaching, learning, and sharing, and I really, really appreciate everyone's efforts to make it a great success. So thank you. Hi, I wanted to give a shout-out to Maggie and the MSNC team in Community Engagement.
So a couple of weeks ago, as you know, we had the Dev Summit, and this was really my first opportunity to meet the developer community. I'm just so grateful for the work that they did, both to prepare that and to prepare me as well, to have a very positive engagement. I felt that the structure, the content, and the atmosphere in general at the Dev Summit were really, really positive and helped, I think, foster a very positive dialogue, and certainly helped me personally begin to get to know people in a positive way. So I really appreciate it. Big shout-out to Kim and Maggie and the rest of the team. I wanted to give some wiki love to everyone who worked on All Hands this year, which I know is a lot of people, and I'm not even going to attempt to name names, because I would leave really critical people out, but we all know who you are; you are all awesome. I really thought All Hands this year was a great mix of social time, team-building time, and substance, with a great location, great everything. And thanks to everybody who participated. I think people showed up in a really great way, really cooperatively, really sharing a lot, and I think the whole experience was just amazing. Thank you to everyone who worked on it and everybody who participated. Anyone else wanna share some wiki love? Okay, so I guess that's it for Metrics. Thank you.