Hi, everybody, and welcome to Editing the Wiki Way: software and the future of editing. Next slide, please. So before we get started, some logistics. Well, I think we just got a little turned around. OK, so we are here to talk about how the Foundation's product development teams and volunteers might evolve our software together to help newcomers make edits that fit our projects' policies and grow into active and productive contributors. We'll start with an introduction from staff. Then we're going to hear from a mix of volunteers and staff about some ideas for how we might answer this question. And then we're going to open it up to discussion and Q&A. Next slide, please.

So before we get started, just some logistics. First, please visit that etherpad link you see there. That's what we'll use to post questions. And then, for those who are interested afterwards, we're going to be in the Remo space to talk more with anyone who is interested. Next slide, please.

So we need to diversify the ways in which people can contribute to our projects, and we need new voices to identify these new ways. So if you are here watching now or listening later, and you may never have asked a question or entered an idea in a space like this, we hope that you'll consider doing so. Because if you do, we want you to know that all of us here will make sure that that thought is safely and warmly received, really because the movement depends on your voice being heard. Next slide, please.

All right, so here's how this is all going to go down. We're going to start with the issue that's brought us all together, the impact that this issue has on newcomers, and the state of the software today. We're then going to go through those ideas that I mentioned a slide before. And then we're going to open it up to discussion. Next slide, please.

Yes, you're probably wondering who this person who's been talking is. That's me. My name is Peter. I work as the product manager for the Editing team at the Wikimedia Foundation, and one of our core responsibilities is the visual editor. We relate to this problem insofar as we as a team are asking ourselves: what can we do to help newcomers be empowered to make edits that projects value? Next slide, please.

So the core issue is that the number and complexity of Wikipedia's policies are growing, and the current methods we have for educating newcomers and making them aware of these policies are just not cutting it. I think this point is really solidified by a recent study by the Art and Feminism project in collaboration with WikiCred, wherein they found that newcomers not only found the guidelines difficult to read and understand, but in many instances exclusionary and disallowing of the kinds of knowledge that people wanted to share. Next slide, please.

So the impact of this is pretty simple: newcomers get driven away. As our projects mature, we learn more and we embed those learnings into our policies. But a consequence of that is that our policies get longer, more nuanced, and they also remain far away from where edits are actually happening. So you make an edit, it doesn't quite fit, you get reverted, and you're driven away. Let's go to the next slide and we'll look at how this actually plays out.

So if you look at the screenshot to the right, this really is an actual story, and it plays out like this. This newcomer was, you know, browsing Twitter, and they noticed a fact about an event they're interested in.
Naturally, they went to Wikipedia, noticed that the fact wasn't represented, and decided to add it. Then they saw, oh, I should probably cite or reference this, so they clicked Citoid. They entered the URL for the tweet that evidenced that fact, and they published. Everything was pretty seamless. Next slide, please.

However, the interface didn't really prepare them for what would happen next, which is that that edit got reverted. And that edit got reverted because, in this particular case, a tweet doesn't qualify as a reliable source. So all of that energy and good faith that this person brought to make the edit was undone, and the editing interface didn't really do anything to set their expectations. And so that's really what we're here to try to understand more deeply and generate ideas around: how can that interface evolve to set the expectation for people that, hey, there are rules and guidelines that need to be followed, and we don't think the onus should be put on you to learn about them after the fact.

So with all of that said, I'm now going to hand it off to Marshall Miller, who is going to take you through an example of how he and the Growth team are thinking in this same space of evolving our interfaces to help set clear expectations for people who are new. Marshall?

Thanks. Hi, everyone. I'm Marshall Miller. I'm the product manager for the Growth team, and it is so awesome to be here at Wikimania finally with so many of you. The Growth team is the team at the Wikimedia Foundation that works on increasing the retention of new editors, and we do that through changing the software, most recently by building workflows that help new editors contribute while they learn and while they adjust to the wiki.

So what Peter's been talking about are these barriers: barriers that are technical, about how to use the tools, and barriers that are related to policies and wiki concepts, about how to understand what an acceptable encyclopedic edit is. All of these things are really challenging for newcomers. And what happens is there are a lot of people out there who try to contribute and could become great Wikimedians, but they never succeed at making any valuable contributions and they don't stick around.

So one approach the Growth team has been working on is thinking about what kinds of editing workflows could help these newcomers be successful and help them learn along the way. This idea we call structured tasks. The idea is that we can take certain editing workflows, certain common kinds of edits, and break them down into steps that guide and teach newcomers along the way, instead of requiring newcomers to figure it all out themselves. And I'm going to show you an example of the most recent one that we built. It's called "add a link."

The Growth team worked on this task by adding it to the visual editor, and it's an example of how we can extend the visual editor to allow people to edit in new ways that help them learn. The "add a link" task helps a newcomer add a wikilink to an article, which is a pretty simple edit for a lot of us, but is actually hard to figure out if you're brand new. And I'm going to show it to you on mobile. These screenshots are from a mobile device, because so many of the new people coming to our projects only ever use mobile devices to use the internet.

So among the features the Growth team built, here is the suggested edits feed, which is where a newcomer can choose an article to work on. And so first, they choose a task.
And then, after choosing a task to do, it explains the guidelines around doing the task. So for this link task, here's one of the screens that explains you need to make sure you're adding links only to concepts that need them, and not linking really common words, years, or dates. And then, when they get to the task, the way it works is that an algorithm offers words or phrases that might need to be made into wikilinks to other articles. So here, from this example on Czech Wikipedia, this blue highlighted word, it's asking: should this become a wikilink to this article? And the user can choose yes, no, or skip. And this is a way for them to successfully complete an edit with some guardrails around them, so that they can't get too far afield, while also learning some of the policies and guidelines about which words should become links. And then finally they publish it, see that they've added links to some words and not to others, and move on past that having learned something about editing.

So that's one example of a workflow that we built inside the Wikimedia Foundation, now being tested on a handful of wikis, that is a new kind of editing workflow that can help newcomers learn along the way. But we at the Foundation are not going to be able to think about and imagine all of the different kinds of workflows, all of the different kinds of policies that need to be taught. We need to be working with communities on this. And so that's the question we're trying to talk about today: how can we make it possible for people beyond just the Wikimedia Foundation to build some of these new editing workflows, to get into the visual editor and extend it in new ways? How can we empower community developers and communities to do that? So we wanted to brainstorm some of these ideas together, and now I'm going to turn it over to this set of demos from volunteers and another WMF staff member. And we're starting with user Valegiapo from Italian Wikipedia. So go ahead.

Hi everyone, I'm Valegiapo, and I've been editing Italian Wikipedia since 2019. Like every Wikimedian, I've been a newcomer too. And as a newcomer, I made some mistakes, mistakes that I repeated until someone pointed them out to me. Next slide, please. And that's where the bot I've been working on over the past few months comes in. Basically, what it does is check newcomers' edits, check whether there are mistakes, and, if there are, write them a message. That's easy, right? Next slide, please.

So let's see how this works. A newcomer is probably going to push the publish changes button. But, as Marshall said, this is just the editing software, and so no guidelines are given. So newcomers are just going to click the button and maybe go away. Next slide, please. So the bot is going to detect those edits and check whether there are mistakes. Next slide. If there are, newcomers are going to receive a message on their talk page, so they become aware of those mistakes and stop making them. Next slide, please. So I've encoded some of the most frequent mistakes I've found, and among those are links to disambiguation pages, links in section titles, external links in the body, and so on. This concept could be extended into a tool. Of course, I wasn't able to work on that, but I believe it's a really good starting point. Thank you.

Thank you, Valegiapo. Yeah, so that's really interesting that you're encoding these kinds of policy rules so that the bot can notice when they've been broken and then warn the newcomer.
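To make that idea concrete, here is a minimal sketch, in plain Python and regular expressions, of the kind of checks such a bot could run over the wikitext of a newcomer's edit. The function name, patterns, and sample text are illustrative assumptions and are not taken from Valegiapo's actual bot.

```python
import re

def check_newcomer_edit(wikitext: str, disambig_titles: set[str]) -> list[str]:
    """Return notes about common newcomer mistakes found in a piece of wikitext."""
    problems = []

    # Wikilinks pointing at known disambiguation pages, e.g. [[Mercurio]]
    for target in re.findall(r"\[\[([^|\]#]+)", wikitext):
        if target.strip() in disambig_titles:
            problems.append(f"Link to a disambiguation page: [[{target.strip()}]]")

    # Links inside section titles, e.g. == [[Storia]] ==
    for heading in re.findall(r"^=+(.+?)=+\s*$", wikitext, flags=re.MULTILINE):
        if "[[" in heading or re.search(r"https?://", heading):
            problems.append(f"Link inside a section title: {heading.strip()}")

    # Bare external links left in the article body instead of inside references
    body = re.sub(r"<ref[^>]*>.*?</ref>", "", wikitext, flags=re.DOTALL)
    if re.search(r"https?://", body):
        problems.append("External link in the body text instead of in a reference")

    return problems

# Example: an edit containing all three of the mistakes listed above
sample = "== [[Storia]] ==\n[[Mercurio]] is a planet. See https://example.com for details."
print(check_newcomer_edit(sample, disambig_titles={"Mercurio"}))
```

A real bot would then turn each note into a friendly talk-page message, which is exactly the after-the-fact step the next part of the discussion looks at moving earlier into the editing flow.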
But as you were kind of saying, this happens after they make their edit, and they may already have been reverted by the time they find out about these rules. And so I think something we've been talking about is: how can we get this information to them while they're editing? That would be a really great direction to take this thought. And so I want to turn it over to user Leaderboard from English Wikibooks to talk about their work on abuse filters and how those can be, or have been, used around that idea.

Hello, can you hear me? Yes. Okay. So I am Leaderboard, and my so-called home wiki is English Wikibooks. Next slide, please. Yeah, okay. My work is centred on abuse filters, which are a pretty powerful and flexible tool for handling spam and vandalism. For me, the main draw of abuse filters is that they offer an easy way to handle spam and vandalism and all the nonsense while allowing legitimate contributors to work freely. So unlike protecting a page, which is disruptive for everyone, abuse filters allow us to target only the problematic editors. Another useful draw is that they work in the background, so they only activate when they need to; the average user would not see them and would not be interfered with, would not be bothered. And the last one is that they are really effective against long-term abusers, which are basically users who just come with no intention other than to vandalize the wiki. It is very clear that they have no legitimate purpose, so they are not good-faith users. So the big benefit of abuse filters is that they allow us to handle those cases pretty effectively while letting legitimate contributors and normal users, especially new users, work without any issues. Next slide, yeah.

So this is an example of an abuse filter, a public abuse filter. It's mainly regex, which is short for regular expressions. This is a filter that's meant to catch users who try to do some silly vandalism and politely steer them away from it, so that they don't make that edit and they understand this is not a place to be doing vandalism edits. Next slide.

And these are some of the settings that can be configured on an abuse filter. So you have flags: you can set a filter to be hidden from public view, which is really important for filters that are targeted. For LTA filters, we normally keep this flag on, because it's an LTA filter and we don't want LTAs coming along and using the filter's details to their advantage. Then we also have a lot of actions that can be taken when the filter is hit. For example, we can just give a warning and allow the user to continue; so we can give a warning saying that your edit may not be constructive and you may want to recheck it. And the most common action is to just prevent the action and say, no, this edit has not been saved, or this edit has been disallowed for this reason. And in more extreme cases, we can decide to block the user from editing, which we rarely use, and only when there is good reason to use it, for example large-scale vandalism from a subset of users. And it's also important to know that when you try to block these, you will definitely have false positives, because a perfect filter does not exist. That is something we have to keep watch for, because it will always happen. There have been users affected by those filters; there is not a lot we can do about it, but that's the tough balance.
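As a rough illustration of the filter logic just described, a regex condition plus a warn or disallow action, here is a small Python-flavoured sketch of that decision flow. Real filters are written in the on-wiki AbuseFilter rule language, not Python, and the pattern, names, and thresholds here are made-up assumptions for the example.

```python
import re
from dataclasses import dataclass, field

# Illustrative pattern only; a real filter's regex is tuned to vandalism actually seen on the wiki.
SILLY_VANDALISM = re.compile(r"(?i)(hahahaha+|asdfgh|lol{2,})")

@dataclass
class Edit:
    user_editcount: int
    user_groups: list[str] = field(default_factory=list)
    added_text: str = ""

def silly_vandalism_filter(edit: Edit, already_warned: bool) -> str:
    """Mimic a warn-then-disallow filter aimed at obvious silly vandalism."""
    is_new_user = "autoconfirmed" not in edit.user_groups and edit.user_editcount < 10
    looks_like_vandalism = SILLY_VANDALISM.search(edit.added_text) is not None

    if not (is_new_user and looks_like_vandalism):
        return "allow"      # experienced users and normal edits never even see the filter
    if not already_warned:
        return "warn"       # show the polite message and let the user reconsider
    return "disallow"       # second attempt: reject the edit with an explanation

# Example: a brand-new account adding keyboard mashing to an article
edit = Edit(user_editcount=2, user_groups=["*", "user"], added_text="hahahahaha asdfgh")
print(silly_vandalism_filter(edit, already_warned=False))  # -> warn
print(silly_vandalism_filter(edit, already_warned=True))   # -> disallow
```

The point of the sketch is only the ordering: warn first, disallow on a repeat, and leave everyone the condition does not match alone.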
It's a tough balance to maintain: allowing filters to be hit when they need to be, while making sure they do not hit when they shouldn't. Next slide. And this is an example of a filter being hit frequently. As you can see, this filter was hit many times in rapid succession. What was actually happening was that there was a well-known LTA who was constantly attacking the wiki, and this filter managed to hold up against a lot of those edits while making sure that other contributors could continue without being affected. You can see the hits came within a few minutes of each other, because they were using multiple IPs, constantly changing their addresses, and that was something this filter was able to detect. Next slide. I think there was one more.

So this is what happens when a filter gets hit. This is a warning saying that, I mean, this is an error. It says that you have added an external link, and users may become confused, because they may be wondering: I just added a link to a site, what's wrong with that? But on this wiki we actually block external links from new users, because it's almost all spam. So we tell the user: we know you may have had good intentions, but we are forced to block external links because normally it's just spam, and if you did have a good intention, we are sorry for that, and please let us know through this page. And then we have a page where admins and reviewers can handle those reports, handle those false positives. There's a lot of this happening; the question is how to handle them. Next slide.

This is an example of a warning. So the message clearly says that the user was trying to redirect to a page that clearly does not exist. The filter caught this and politely asked the user: you may be redirecting to the wrong page; make sure that you know what you're doing. You may be doing it for fun, you may have accidentally made a typo, you might have tried to redirect to one page but ended up redirecting to another, you may have used a full Wikipedia URL, or you may have accidentally used the wrong type of link. All of those cases get caught by this filter, and the filter politely tells them: you may be doing the wrong thing, you may want to recheck your redirect. And the good thing about it is that it prevents users from accidentally redirecting to the wrong page and accidentally creating a mess as a result. Next slide. I think that's it, yeah.

And this is an example of what happens when a filter blocks an edit. For some filters we do not give a particular reason; this one is for a particular case of an LTA. So we just politely tell them that your edit has not been saved for some reason, and if you think that's a mistake, please let us know. And this is an example of what it looks like in the visual editor; the previous screenshots were from the 2010 wikitext editor, and this one shows what happens in the visual editor. I think that's about it for my thoughts, if I'm not mistaken.

Great. Yes, thank you so much, Leaderboard. It's really cool to hear from Wikibooks, which is a project we usually hear less about. And it's interesting to hear that abuse filters, as you can tell from the name, were originally meant to stop abuse.
But as you're showing us, they can become a tool that can also be used to nudge users, or warn them that they're on the verge of saving an edit that might not fit with policies. So thanks for showing us that additional way that communities can try to nudge newcomers; that's helpful for us to understand. So next, we're going to turn it over, advance the slide here, to Olga. Olga is a design intern at the Foundation. Please go ahead.

Yeah, hi, everybody. So yeah, I'm a design intern at the Foundation; I've done some work for the Growth and Editing teams, and I'm now on the iOS team. Next slide, please. So as we've heard Marshall and the other presenters talk about, it's difficult for newcomers to learn about policies. Oftentimes, if they do try to learn about them, there's a lot of text, and it's hard to get your footing and understand them. This policy check is a set of designs based off research that I did about the newcomer experience and how we can help newcomers with understanding policies: first of all, show them where they're going wrong, and also maybe use it as a learning opportunity for them to then learn more about the policy itself and about the Wikipedia ecosystem. And here, they do all that before they publish. Yeah, next slide, please.

Here, I'll just go through an example of what this tool could possibly look like. Again, these are just ideas; it's not in production or anything, and there are still a lot of questions that we haven't answered yet. But for instance, if a newcomer is editing or contributing to an article, they have this popup come up which says: would you like to activate this tool that will help you? It will show you, essentially, whether you're breaking any policies or guidelines and give you suggestions. And if you click activate, then, next slide, please, the text gets highlighted, and what this shows is that different colors indicate different policies you might be breaking, or different suggestions the tool wants to make. So if you're interested, for instance, in the highlighted green area at the end of a sentence, and you click there, next slide, please, you get this popup screen that shows you the suggestion it has for you. In this case, it's "add a citation." The actions the newcomer can then go forward with are either adding that suggestion or ignoring it, but also, below the buttons, there's text that says that adding citations is important for verifiability, which is an important policy on Wikipedia, and that if you proceed without citations you might be reverted. And in this case, "verifiability" and the word "reverted" are links, so if the person is interested in knowing more, they can learn from here. But there's also an opportunity to learn more at the bottom, which is the blue link. Next slide, please.

And if they choose to click on this link, they'll be brought to this other screen, which has different tabs. So first it could have just an explanation of the policy, for instance, and then it could have other tabs with general policies and guidelines. The idea behind this is to have condensed information that could be helpful to start introducing these concepts and words and terminology to the newcomer, and then also maybe have some editing tips with further links, and even have potential community resources surfaced on that screen.
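To give a rough sense of what could drive the "add a citation" highlight in these mockups, here is a hypothetical sketch of a pre-publish check. The design above is not implemented, and this crude heuristic, including its function name and sentence-splitting rule, is only an assumption about how such a highlight might be generated.

```python
import re

# Split after sentence-ending punctuation or after a closing reference tag.
SENTENCE_SPLIT = re.compile(r"(?<=[.!?])\s+|(?<=</ref>)\s+")

def sentences_needing_citations(added_wikitext: str) -> list[str]:
    """Return added sentences that carry no <ref>...</ref> tag of their own."""
    flagged = []
    # A production tool would use a proper parser and weigh the article's
    # existing reference density; this is only a crude heuristic.
    for sentence in SENTENCE_SPLIT.split(added_wikitext.strip()):
        if sentence and "<ref" not in sentence:
            flagged.append(sentence)
    return flagged

# Example: the second sentence would get the green "add a citation" highlight
new_text = (
    "The festival was first held in 2014.<ref>...</ref> "
    "It now attracts over 50,000 visitors each year."
)
for s in sentences_needing_citations(new_text):
    print("Suggest a citation for:", s)
```

Whether a check like this would run in the visual editor, the source editor, or somewhere else entirely is exactly the kind of open question Olga raises next.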
So it's kind of this initial stage from which the person can then explore new concepts and so on. So if they have read through this and understand what they're doing, and they actually choose to add the citation, next slide please, then it just becomes the classic add-a-citation screen, and they can add their citation there. However, if they choose to ignore the suggestion, the highlight essentially becomes gray, and then, when they go to publish, next slide please, they would get this popup that is, again, pretty much telling them: you've ignored a couple of suggestions; if you keep doing that, there's a high chance you might get reverted; are you sure you still want to do that? And when I was creating the designs, I thought it would also be nice to have some kind of potential follow-up. So in this case, it says: would you like to have these suggestions sent to your email so you can review them later? You can check that or not, and then continue publishing as normal. Next slide, please.

And as you see, that's just a quick demo of what this tool could possibly be like. But there's still a bunch of research that could be done in this area to clarify things like, for instance: when is the best time to introduce this policy check to an editor, so that it's not introduced too early, so they aren't frightened by it, but also not introduced too late? Also, how would this work with the source editor; should this kind of thing be in the source editor? And also, how can we integrate existing tools? For instance, Marshall was talking about suggested edits and the work the Growth team is doing; how can we integrate those things together? And that's it on my slides, thank you.

Great, thank you, Olga. That's such an interesting exploration, because it's an idea that would give a heads-up to newcomers before they save their edit that they might be running afoul of some policy, so that they don't end up being reverted later. And so the presenters we've heard from so far have all been talking about how we nudge these newcomers and get the policies across to them as early as we can in the editing process. For our last volunteer, we're going to hear from user Enterprisey, who's been thinking about a potentially entirely new kind of editing workflow that could make volunteers or newcomers more comfortable with policy. So over to you, Enterprisey.

Thanks, Marshall. So hi, I am user Enterprisey. I am from the English Wikipedia, and I write code that helps other editors, and hopefully newcomers, do their jobs more effectively and efficiently. Next slide, please. So today I will be presenting, with the aid of very professional mockups, my workflow for suggesting an edit, which is distinct from actually making an edit. What I realized a while ago was that editing is a very imposing barrier if you haven't ever done it, because if you go to some of our more polished articles, it really doesn't look like there's anything for you to add; it's a very bold statement to say, my sentence belongs in this seemingly highly polished article. The metaphor I used was: if you have a brick in your hands and you see an unfinished brick building, then everything's good and you know where to put your brick. But if you have a brick and you walk up to a skyscraper, like a big glass modern skyscraper, then of course there's no way for this brick to slide in at all.
And that's the situation a lot of editors say they end up in when they see one of our featured articles or good articles, or even one of our other articles that are quite long. So what I thought of was a way for people to suggest and propose edits that other editors can then implement. So next slide, please. So this is how you would bring up the tool. There's a new tab I envision between read and edit, suggestive of its position as an intermediate step between reading and editing. And the first step of the workflow is that you select text in the article and mark it as something you'll be making a suggestion about. Next slide, please. And that'll bring up a dialog with some options that are fixes people commonly make. So one option is that you have a source you would like the article to use, and that brings up the standard source picker. You may want to move the sentence to a better location, or you might want to tag it as maybe too technical or too confusing, or something like that. So next slide, please. Yeah, next slide. I've come up with some further design ideas for that. This is going to be a bit of a lift on the abuse angle, because it's a totally new thing, so moderation is going to be more difficult. So maybe we could restrict the sources that people can use. As you can see, in this whole dialog there are no free-form text fields. That's a big, important thing, because the last time we put those out there, if you remember the article feedback tool, that was difficult to moderate. All right, next slide, please. And then on the editor side, this is what they get when there's a suggestion: there's a popup on the right side of the article, and they can hover over that to view the suggestions that have been made and hopefully implement them. So that's all I've got.

Okay, great. Thank you, Enterprisey. So we're now going to move into the discussion part, and here's how to get involved. This, again, is the link to the etherpad, where anyone can type their questions into the etherpad. So please go ahead and start doing that. It could be your question, it could be a comment, or it could be an idea you have. You might say: on our wiki, we have a tool that does this, or we've wanted to build a tool that does this. And again, the central thing that we're trying to talk about here and get ideas going about is, for all of the community members out there: how can we change our software? How can we make it possible for you all to plug new kinds of workflows into the editing experience, extend the visual editor, and customize it to what your communities and your projects need? That's something that's been really difficult in the past, and many volunteers, like the ones on this call, have found ways to do those kinds of things through scripts or gadgets or abuse filters or bots. And we want that to happen more in the future, so that people are empowered to shape this experience. So Peter, I'm going to turn it over to you to ask this first question, which I think comes from the attendees.

Yes, thank you, Marshall. So we're going to take Mark Bauman's question first, and I want to pose it towards both Olga and Valegiapo. Mark asks: where can they learn more about the common mistakes? I think you were referring to the mistakes that Valegiapo encoded into BotTutor. So I'm going to pose this question in two ways. One, for Valegiapo: can you share what led you to prioritize that particular set of mistakes to start?
And then, Olga, afterwards I want to go to you and have you speak to why you chose the particular policy that you did for the demo. So Valegiapo, to you first.

Thank you. So basically I just started testing, and I set up the filter, let's call it that, the filter my bot is currently using in order to consider only newcomers' edits. And I just pasted all of the edits into a log. Then I took some time reading it, and I found some of the mistakes that were most common, and I also figured out myself other mistakes that a new user could make. However, if you need a full list of the mistakes my bot is currently checking for, you can find it in the etherpad; I also put it right above your question.

Awesome, thank you, Valegiapo. And then Olga, I remember you doing some research looking into the mistakes that newcomers make most often, and I think that informed the policy you ended up including in your demo. Can you maybe share a bit more about that research and the common mistakes that you found newcomers running into?

Yeah, sure. Thanks, Peter. The way that I went about it was going and looking through old research papers and seeing what information I could extrapolate from there. For instance, there was one paper that talked about the policies that were the most cited, so assuming those were the ones that were brought up the most. So instead of going only from what are the mistakes that newcomers make the most, I was also looking at what are the things that newcomers should be learning first, and what is important to learn to understand how to edit on Wikipedia. And that's how, after the long process of researching and looking through the papers, I came to look into the three core content policies, and then to pick out verifiability from those, because without verifying your sources and adding citations, the article isn't considered legitimate. And I thought that would be a very important thing to include and to bring up to newcomers on their first day of editing.

Great, thank you, Olga. So to follow up on this, I'm looking at the etherpad, and there are a few questions getting at the same thing, which is: for these kinds of policies, we all know that there are so many policies and guidelines on the wikis, more than we can count and more than we can put in a list. I guess this question is for Valegiapo and for Leaderboard. As you're thinking about writing a bot, as you're thinking about writing abuse filters, how hard is it to encode some of these policies into actual software? Are there ones that are really difficult to detect, ones that are easy to detect, and what could make it easier? Are there tools or documentation or something that you could use that would make it easier to know whether you're succeeding in encoding these policies into software? How about that goes to Leaderboard first.

Yeah, so my question is: by policy, do you mean policies on the wiki, on Wikibooks? Like, do you mean project-related policies, or do you mean some other kinds of policies?

So one thing that I was thinking about, I remember when we talked earlier, Leaderboard, you were talking about how difficult it is to test abuse filters, right? And that was a great example of something that makes it currently difficult for you to encode these kinds of policies into the filters. So could you tell us some more about how that could become easier?
Right, so the main difficulty when writing a filter, I mean, testing its correctness is probably one of the more difficult parts of a filter, because the current abuse filter interface gives us no way to figure out whether a particular test edit would trigger a filter or not. So all that you can do is check whether past edits would trigger the filter, and that too with some conditions; for example, deleted edits don't count. That is useful, but unfortunately the problem is that it does not help with more complex filters, such as when you're trying to test something like which IPs you need to target. And that's what I would say is a difficulty. The difficulty is that the abuse filter just gives you syntax; it does not give you any way of checking the correctness of the filter, that is, whether the filter does what is expected. Some kind of basic debugging tools, such as, for example, breakpoints and the common IDE tools, at least in a miniature form, would probably help me as a filter writer to check whether the filter is working as expected.

Great, that makes sense. And that's a really concrete learning that Peter and I can take back to our software teams: to remember that as we empower communities to extend our software, we need to also equip them with the right tools to monitor and test whether what they've added is working as expected. Peter, do you have the next question?

Yeah, I want to ask this question because we don't have an answer, and it's one that we'll need to collectively think about. The question that was asked was: how do we address the issue that policies are not used or implemented uniformly across different Wikipedias? The reason I wanted to draw attention to this particular question is that the way we're thinking about this is that, ideally, we develop some platform or system that allows individual projects to customize the kinds of notices and alerts and feedback that they surface to editors, in ways that conform with their projects. So that's number one. The second point that I think is inside this question, which is another excellent one that we don't have an answer for right now, is: what is the relationship, or the connection, between changes to policy and changes to the corresponding software that encodes that policy into the editing interfaces? I have no idea what the answer to that is right now, although I wonder if any of the other panelists have ideas around this particular thing.

Well, Peter, I'll just speak to your first question about how communities can customize the way that policies are implemented in software. There's one idea that the Growth team's been working on that I hope everyone can check out, and it's called community configuration. I'll put a link to it in the etherpad, but if you search for that on mediawiki.org, you'll see how we've tried to build a form that allows administrators on wikis to set the configuration for the Growth features that applies to all users. We think that's an idea that would allow each wiki to be empowered to customize the software to their culture and their needs.

I'm seeing a question in the etherpad that's interesting, about our language, and I think this comes from Carla. It points out that the word "revert", we're all very used to it, but it's a pretty technical-sounding word, and it's really more like being refused or undone.
And there's this question of how mindful we should be of the language in our software that conveys these policies or conveys what's happening. Wondering if anyone's thought about that; maybe Olga, you could speak to it from a design perspective, or Enterprisey, from the tools that you've built. You've put language into these tools; how do you think about what it should say?

I think the language is super important, because when editing and contributing you're coming into contact with language that you don't usually come into contact with in your day to day. So there's this bridging between, especially for newcomers, using the language that they'll often see if they do continue editing, getting them more comfortable with it, and having them understand what this language means, but also, as they're onboarding into this Wikipedia experience, making sure that the language we use when we introduce these tools is language they can comprehend very easily and can draw links from, you know, like, oh, a revert is kind of like an undo, or whatever. So I think it's a fine balance; we have to really pay attention to language, and I think it's something that should be at the forefront.

Yeah, language is very important. We have a lot of jargon floating around, and we periodically get complaints on the English Wikipedia about "notability" being a confusing word. And if people who have been steeped in the community for years and years are still getting confused by this, then imagine how the newbies feel.

That's really well put. We are getting close to our time, so I think we all just wanted to collectively say thank you for coming. We are going to mine the etherpad to make sure that every question is addressed, either in text or on the wiki page, and we will let you know where that happens. And then, for those who want to continue talking, Marshall, if you could maybe put the instructions back up. Oh yeah. So it's going to cut us off here in like 20 seconds. So if you want to continue the conversation, join us at building six, floor two, table A in Remo, or find us on the wikis, because this is just the beginning of this work, and we're going to need to hear from all different communities all over the world about how to do this well. Awesome. And thank you to all our speakers, who were just tremendous in collaborating on this. So thanks, everyone, and thank you for coming.