Do you hear music in my ears? I don't know if that means it hasn't started yet. Okay, maybe that. Okay. It's kind of an interesting experience, presenting without knowing who's there. But anyway, hi, my name is Peter Pelberg. I work as a product manager on the Editing team, and today I'll be presenting on behalf of our team, talking about how we might put policies in people's hands while they're in the midst of editing. So, oh, beautiful, thanks. We're going to talk about a project that first emerged in our minds two years ago, at Wikimania 2021. This talk covers how that idea has evolved and taken on actual form. Thank you very much for setting up the camera. So Edit Check, as we're calling it, is an effort to offer people actionable feedback about policies while they're editing. And the way Edit Check currently works is that it prompts people, specifically newcomers and people who are just learning to edit, to add sources when they forget to do so themselves. So this is a little prototype: you see someone adding new text. They go to hit publish. They didn't realize, or forgot, to add a reference, and the mobile visual editor prompts them to do so. They're presented with the normal citation experience, and then they're off to, hopefully, publishing their edit accompanied by a reference. And that's the first version. And this, let's see if I can advance. Here we go. This idea is a descendant of ideas that have been floating around the movement for at least five years, but like I was mentioning, it really took shape two years ago at Wikimania 2021. We had a session called "Software and the future of editing" where we brought together volunteers and staff members to ask this question: how might we evolve our software to help newcomers make edits that actually align with our policies and grow into thriving contributors?
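To make the behavior described above concrete, here is a minimal sketch, in Python, of the kind of condition such a reference check might test: did the edit add a substantial amount of new text without adding a citation? This is purely illustrative; the function name, the character threshold, and the simple `<ref` counting are all assumptions, not the actual Edit Check implementation in the visual editor.

```python
def needs_reference_prompt(old_text: str, new_text: str,
                           min_added_chars: int = 50) -> bool:
    """Hypothetical sketch: return True when an edit adds a substantial
    amount of new wikitext but no new <ref> tag, i.e. when a 'please
    add a citation' prompt might be shown before publishing."""
    added_chars = max(0, len(new_text) - len(old_text))
    added_a_ref = new_text.count("<ref") > old_text.count("<ref")
    return added_chars >= min_added_chars and not added_a_ref
```

A check like this would run just as the person hits publish, which is exactly the timing question the rest of the talk explores.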
Because one of our hypotheses was that policies were just kind of out of reach, or even outside the minds of people, when they're in the midst of editing our projects. And the thing that really brought this project into sharp focus for us was coming to realize that while we've done a bunch of really great work simplifying, technically, how to contribute, the actual policies and expectations and conventions haven't really made their way into the software fully. We've got inline comments, we've got edit notices, we've got abuse filters, there are template talk page messages. There are a bunch of messages on the periphery of our editing experiences, but in terms of actually affecting the steps and workflows themselves, they haven't fully made it all the way there yet. And that's really what this project is about, and it feels so crucial because we find this pattern where guidance comes at the wrong moment. It's either too soon, where you're like, forget the instructions, I'm just gonna click edit and do this thing, or it's too late: you've already published an edit that someone reverts, or you published a new article that someone deletes, and then you get an explanation on your talk page, and by that point you're like, am I really gonna invest the time and energy into learning how this thing works? It seems like there are so many instructions. And the impact of that is two-fold. Newcomers feel unwelcome and question whether Wikipedia is a place where they belong, and experienced volunteers repeat themselves and grow tired, because people keep coming to our projects making the same mistakes. How do we stem the flow of this? We just don't think this is working out for anybody. So what's happened since we named this issue and identified its impact? What have we been up to for the past couple of years?
We haven't solely been working on this, but we've gotten together as a team and sketched out a bunch of ideas, and I want to share with you how that demo you saw up front came to be. Over the past 11 months we've had a bunch of conversations on wiki. We've had 11 community conversations with volunteers around the movement, specifically focusing on English-speaking and French-speaking volunteers. The reason for that is that in this project we're really centering newcomers editing from within Sub-Saharan Africa, and most often those are the two projects they're contributing to most. And through this process, a couple of principles emerged that really guide how we approach the experience you saw. One is really embodying this idea of no firm rules: the interface is gonna invite people to make choices, and ideally the interface creates an expectation that you are empowered to decide whether the invitation it's presenting to you is relevant or not. Ideally it doesn't create an experience where people start to feel like, oh, I'm doing something wrong because this is telling me I should do this thing, or one that takes away your freedom and flexibility to do something different when you have a reason to think the edit you're making doesn't need to follow the convention the interface is suggesting. The other thing is encouraging interaction. One of the things we heard so often in our community calls, specifically with volunteers who have been traditionally excluded and underrepresented on our projects, is that oftentimes they make a contribution and it feels like there's no opportunity for a dialogue. This can come up when you're creating a new article and you don't even have an opportunity to articulate why something or someone might be notable; it just looks like, on its face, that content is deleted. And so ideally, with this interface, we're providing context on both sides.
People who are new gain the understanding and expectation that this is not like every other place on the internet where self-expression is centered. There are expectations. There are policies that help Wikipedia remain, and continue to grow to be, this vital resource that everyone can depend on. And on the side of experienced volunteers, it's helpful to know what motivated someone to make an edit, why they thought the contribution they're making may not necessarily need a citation. Maybe they're writing about someone, or on a topic, that has been, again, traditionally underrepresented within the sources our projects typically deem to be reliable, and shouldn't there be room in our patrolling conversations for edits of that sort? I'm not gonna go into each one, but it was really valuable to be in conversation to surface these principles that we can hold ourselves accountable to imbuing the software with. The other thing we learned is about trying to figure out: what is the right moment to offer people feedback? Like I mentioned, we seem to be really well-practiced at giving people a bunch of instructions up front, almost as obstructions, which, anyway, instructions up front. And then we're also pretty good at giving people feedback after the fact, once they've published an edit. So one of the things we were looking at in our design process is: when is the right moment to provide people feedback? Is it in the moment when they're looking over their edit before they hit publish, which you see on the left? Is it just as they hit publish, before they hit save, while they're writing an edit summary? Is it after the fact? Is it more likely that people are gonna respond to feedback if they're offered guidance about how they might improve an edit they just made? Those were some of the questions we have been asking ourselves.
And the hypothesis here is: can we time feedback so that it arrives in a way that feels encouraging and empowering, and not punishing? I don't know if this metaphor will track for y'all, but sometimes we talk on the team about how there's a difference between you cooking and the ingredient you want to add just arriving in your hand at the right moment, just as you need to, let's say, put it in the pan or the wok, versus someone laying out the whole recipe at once, at the top, before you've even chopped your vegetables. Are you gonna be able to remember that thing, or even want to know about an instruction that is so specific to a step that's further along? And so the idea is that if we can time feedback to come at the right moment and, crucially, equip people with tools to apply it, then maybe people will make edits they're proud of and that the projects actually find useful. I am looking at the time, and I kind of wanna talk, so I'm gonna cruise through these last ones. Two of the ways we're thinking about evaluating the success of this project, and these are by no means the only ways, are that we're basically trying to lower the proportion of new content edits that are reverted and increase the proportion of those edits that actually include references. And in terms of where we are now: I love this sketch by Pau, who's a designer on the Language team. If you've never met him, he made this amazing diagram of what the software development process looks like, or what it looks like in actuality, and that's what you're seeing. But anyway, we're in the refinement stage. We're gonna have an initial release hopefully pretty soon, and then we'll probably find ourselves somewhere back in that tangle, refining the feature based on what we learn about how people actually use it. And in terms of zooming out, how are we thinking about this year? This comes back to that moment thing I was referring to.
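The two success measures just mentioned (the proportion of new content edits that get reverted, and the proportion that include references) could be computed over a sample of edits roughly like this. A hypothetical sketch only: the field names and the shape of the edit records are invented for illustration, not the team's actual instrumentation.

```python
def edit_check_metrics(edits):
    """Compute the two illustrative success measures over a sample of
    new-content edits: the share that were reverted, and the share
    that included at least one reference. Each edit is assumed to be a
    dict with boolean 'was_reverted' and 'added_reference' fields."""
    total = len(edits)
    if total == 0:
        return {"revert_rate": 0.0, "reference_rate": 0.0}
    reverted = sum(1 for e in edits if e["was_reverted"])
    referenced = sum(1 for e in edits if e["added_reference"])
    return {
        "revert_rate": reverted / total,
        "reference_rate": referenced / total,
    }
```

Success for the project would show up as the first number going down and the second going up between cohorts with and without the check.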
We think that moderation or instruction happens right now either at the very beginning of an edit or at the end, after you publish. And with this project, we're really trying to sprinkle in this feedback in moments so that it arrives at the right time. So this year we'll be experimenting with a few checks; that's what these yellow bars represent. And we have a prototype that we've been talking with volunteers about for, I wanna say, six weeks now. There's been so much incredible feedback, which I'm happy to go into, and it's available for you to try. And that's pretty much what I had prepared. If any of this sparked a question, or if you're thinking "Peter, you said something and it wasn't fully clear what you meant," we can definitely go into that now. There are also two key ways you can remain updated and participate in future conversations, and the links are there. So yeah, if there's anything this brought to mind, like "oh, this is energizing," or "have you ever looked at abuse filters? This sounds very familiar," I'm here for it all. And thanks for deciding to spend your very scarce Wikipedia time with me and Edit Check. So I am gonna pause here. If there are any questions or reactions or feedback, let's do it. Hear me? Yeah, yeah, I can. This is a very surreal experience because I'm looking at your back, but yeah, I can hear you. There you go. Hi, I'm Johanna. I work at Wikimedia Deutschland, Wikimedia Germany, and I run the technical wishes survey there. And I just wanted to mention that in our last survey in 2022, there was a focus area up for vote called something like "give help while editing." So that sounds very similar, I think. And it actually made second place, so it was not the winner, but pretty close. So I would say there's definitely a need for that on German Wikipedia.
And I wanted to ask: the things that were mentioned for this focus area specifically were about typography, like stuff that gets reverted because, you know, people use the wrong quotation marks, for instance, or something else. And also some things that are specific to the creation of articles, like mistakes people typically make when they create a new article. Is this something that you see falling under this project somehow? That's a great question. And yes, yes, yes and yes and no. So I'll unpack that. Yes insofar as, conceptually, absolutely: the idea of providing feedback to people who are creating new articles absolutely falls within the idea of this project. I think no is maybe too strong of a word; whether we'll implement it as part of this fiscal year is TBD. Actually, if you were at the last session and saw Kirsten from the Growth team, that's something our two teams are talking about: is there a world in which there are checks presented to people who are creating new articles? I can imagine something like: it needs to have this minimum number of sections, it needs to have this minimum number of wikilinks based on the number of paragraphs, and each one of these should have references. So absolutely, you're right there. Yeah, cool, thanks for the question. Yeah, totally. Now this is weird for us, because I'm looking at you over there, but your face is actually on that side. But anyway, hey Peter. I'm gonna ask you a question, because this is a conversation we've been having. I wanted to talk about the side of moderator creativity: how you're thinking about how these things might not be foundation-defined checks but community-defined checks, and where that conversation is going for you. Yeah, great question Sam, I wonder where it came from.
But yeah, one of the things that Sam, Kirsten and myself have talked about is: how might we make something that enables volunteers to define checks for themselves? So, Johanna, if we were to abstract the question you asked, it's a bit of an if-this-then-that statement: if someone of this account age, with this number of edits, is attempting to create a new article, then that new article should have X, Y, and Z. And if any of the X, Y, and Z thresholds are not met, then present them with this message, or this feedback, or this call to action. And in an ideal world, if this project proves successful, that kind of configurability, that kind of creativity, is something that lives on the wikis, because personally, I don't think this project is super exciting if the Editing team and product teams need to write each one of these individual checks. So that's something we're trying to learn through the course of this project, and something Kirsten, Sam and myself are actively talking about: each of our teams is building these bits of functionality that, when you compose them together, could potentially create the kind of creativity that I think Sam was alluding to. AbuseFilter is one of the existing features we draw a lot of inspiration from, just because of how much expression it's unlocked for volunteers. The question is: can we deliver feedback of that sort that appears earlier and comes as more of a helping hand, and less as a "well, maybe not"? Anyway, I'll pause there. Yeah. Let's be done. Yeah.
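The if-this-then-that shape described above could be expressed as data rather than code, which is roughly what would let checks live on the wikis instead of in product teams. A toy sketch, assuming an invented rule schema (the condition names, thresholds, and message are all made up for illustration; nothing here reflects Edit Check's actual configuration format):

```python
def evaluate_check(rule, context):
    """Toy evaluator for a community-defined check expressed as data.
    'rule' maps condition names to minimum thresholds; 'context' holds
    the measured values for the current edit. If any threshold is not
    met, return the rule's feedback message; otherwise return None."""
    for key, threshold in rule["conditions"].items():
        if context.get(key, 0) < threshold:
            return rule["message"]  # a threshold not met: show the feedback
    return None

# A hypothetical new-article check, in the spirit of the example above.
new_article_check = {
    "conditions": {"section_count": 2, "wikilink_count": 3, "reference_count": 1},
    "message": "Consider adding sections, wikilinks, and at least one reference.",
}
```

Because the rule is plain data, volunteers could in principle author and adjust it on-wiki, much as AbuseFilter rules are authored today.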
If there's no question, something I've been trying to think through, and I don't know how much time we have, is a really good question someone brought up on the talk page just yesterday. This is me summarizing them, but they were saying: okay, if this is effective, you're gonna have more people adding references, but those references could be extremely unreliable, and an edit that adds new content with a reference can actually be harder to patrol than one that lacks a reference, because it's a less clear-cut case. So something I've been thinking through over the past day or so is: how will we detect if that's happening? Are we gonna need to rely on people manually reporting that that's the case? That's something I'm trying to sit with, or sift through: how might this not, in the short term, turn into actually more work for moderators, because there are less clear-cut cases of whether a new content edit is productive or not. Yeah. I have a quick thought about that, actually. Yeah, yeah, talk to me. One example you might look to from the past is the 1Lib1Ref editing campaign that I used to be involved in running, and that I think we still run from the Foundation. The idea was to encourage librarians to come to Wikipedia and add a citation to something that needed it. The campaign got a little bit gamified, right, so there were leaderboards, statistics, who was adding the most citations. And in some communities that we engaged for those new editors, we saw that behavior: there were shortcuts, or a misunderstanding about what a reliable source was. They were adding any source, because that meant they got their points on the leaderboard.
And so there are probably some folks involved with that campaign who will have interesting thoughts about how they tried to solve that problem, or how we try to encourage or educate those editors about what is and isn't a reliable source; that might be a useful place to go. Great spot, yeah, I hadn't considered that. Thank you, Sam. I will probably ask you in Slack at some point later who a good person to talk to about that would be. Cool, well, thanks, whoever is there. Thanks for coming. And I hope, what is it? I guess it's like 11:45 or 12:45, so maybe lunch is close, but yeah, I hope the rest of the day is great. Good to see everyone. And I think I close my laptop now and go to sleep. Thank you. All right, good night, y'all. Or have a good day, y'all.