All right, let's go ahead and get the October metrics meeting started. Today we are going to hear people talk, broadly, about ways that we have been connecting with, learning from, and soliciting contributions from our communities, contributors, and readers. We'll start with a welcome and some of the administrative stuff; I'll be taking care of that. Then we'll go to Siko and Maria for a community update. We'll hear from Kevin and Toby on metrics, we'll hear from Toby on a new feature, and then we'll have some research discussion and a product demo. Then it'll be back to all of you for Q&A, and you'll get to have your questions answered. We have a number of new people at the foundation this month, both contractors and requisition hires. So I'd like to welcome Ellen Lau, Boyana Dineva, David Lynch, Ed Earhart, Ellie Young, Jeff Elder, Julianne Girot, and Karen Brown as new staff members. And I'd also like to welcome to the foundation Frederick Bolduc, Gaten Goldberg, Hannah Hernandez, Jennifer Grace, Jonathan Yonikowski, Nancy Liao, and Thalia Chan. Also welcome. And we have a fair number of anniversaries, so I'd like to congratulate us, I suppose. But I'm only going to read out the names of the first column here. Many congratulations to Ariel Glenn, Trevor Parscal, Guillaume Paumier, Amir Aharoni, Rachel Farrand, Heather Walls, Aaron Halfaker, Oliver Keyes, Antoine Musso, and Gabriel Wicke. Please congratulate the others as well; I didn't want to abuse the privileges of the microphone, you see. And now I'm going to hand this over to Maria for a community update. Buenos días. Good morning all. I'm Maria. I work as community communications coordinator with the community engagement department. First, I wanted to thank all community members who share their stories online, because that is why we're here, working for them. So I'm going to share some of these stories.
In September, the Wikimedia Central and Eastern Europe Meeting took place. It is very important to bring Wikimedians together, because it gives them a chance to exchange knowledge and experience around different topics like advocacy, governance, and programmatic activities. 32 different countries took part in this meeting. And as you can see here in the pictures, these are two Wikimedians after a session on thanking users. One highlight of the meetup was the Wikipedia Education Program. It is one of the main gateways for engaging knowledge societies outside of the movement in the Wikimedia projects. And 25% of Wikipedia Education Programs are in Central and Eastern European countries right now. After the meetup, two new pilots of this program were announced. Moving on to the next story. The first women's edit-a-thon took place in Ciudad de México. It was a 12-hour edit-a-thon that engaged Universidad Nacional Autónoma de México in training educators to use Wikipedia in the classroom. Wikichallenges are popular in a number of communities. In September, Wikireto took place at Tecnológico de Monterrey, which engages students in creating content in a multitude of media. And there was a translation challenge on Spanish Wikipedia that engages the community with the content translation tool. The power of having a list of red links that people want to tackle together is great, and it's used for different purposes. Moving on to the next story. Maritime history is coming to Swedish Wikipedia thanks to the work of Wikimedia Sweden and its new partnership with the Swedish National Maritime Museum. They are working first on building the vocabulary around maritime history, which is very interesting. They are liberating an archive of vocabulary on Wikidata and then creating content around it by uploading photos and creating new articles. And speaking of Swedish Wikipedia, it reached 2 million articles in September.
Also, another Wikipedia hit 80,000 articles, another 30,000 articles, and Armenian Wiktionary 90,000 items. So without further ado, I will hand this over to Siko, who will talk about reimagining Wikimedia Foundation grants. Thanks, Maria. All right. Hi everyone. In August and September of this year, we ran a community consultation focused on our grants programs, and I'm going to talk you through a bit of what we learned from that experience. Our grants programs emerged organically over the last five years. What I mean by that is each of our programs was created at a particular point in time in response to particular needs. So we're sitting here in 2015, and grant making can feel a little bit like being on the forest floor of the Amazon rainforest. What I mean by that is: some of the key problems that we've identified over time, both as the staff who operate these programs, but also from talking to countless grantees and committee members who participate in the process, is that people with ideas don't always know how to get started, or, once their projects are demonstrating impact, how to grow and take that to the next level. We also see a lot of overlap, edge cases, and gaps between our programs. And finally, recognizing that our committees and our staff are often operating at maximum capacity, there isn't a lot of room to grow our scale. And also, recognizing that there are some single points of failure within those processes. So we wanted to take a step back, zoom out, stop thinking about individual programs, and look at our grant making as a whole: really understanding what it is we want to accomplish, and then figuring out what the structures are that will help us get there. So we came up with a few design principles. Number one, really clear points of entry and pathways.
Number two, aligning the amount of effort that it takes to apply for and report on any given grant with the amount of money you're talking about and the level of risk that's involved in a particular project or grant. And then again, larger funding amounts should be tied to larger amounts of impact. Okay. So you're kind of getting the rainforest metaphor here, right? But why should you care? The reason that I care, and you might as well, is that we're giving out more than $6 million of our donors' funds to communities all across the world. So having good processes that ensure we are making good use of these resources, and also getting them to the communities that need them most, is really worth thinking about. Okay. Now, I don't want you to pay too much attention to the details on this slide, because all of this has changed post-consultation. But this is where we started, right? We started with a general idea that was based, again, on staff experience and a number of conversations that we have had over the years with both grantees and volunteer committee members. But we wanted to test that idea. We didn't want to just decide it all, go forth, and potentially end up in the same place again a year or so from now. So we wanted to put it out to the community to really discuss and improve. And these are the goals. Firstly, we did want to communicate proposed changes early, because they have an actual impact on real people in our movement. Secondly, gathering wide input from as many people as possible who would be impacted by any of these changes. And then, based on that, we wanted to make improvements to our straw-person idea. Community consultations tend to be very public discussions on wiki. And that can be useful in some cases.
In this case, money can be a pretty sensitive topic. It brings up a lot for people. And we knew that we wanted to bring a really diverse range of voices to the conversation and collect as wide and broad an input as possible. So we decided to do something a little bit different this time. What we did is we offered three different ways that people could give feedback. We ran a survey where people could give anonymized input. We ran the on-wiki public discussion on Meta-Wiki. And then we also offered some smaller groups so that we could kick around ideas in deeper discussion. And then ultimately what we did is we took all of the responses that came from all of those channels, aggregated them together, qualitatively coded them, and looked at the themes that emerged as a whole. So this is what we found. And I think this is pretty interesting, right? Because it's sort of a test of this idea of how much we do in public and what the other ways are that we bring voices to the table. You can see here that most of the responses actually came from the survey. There was a robust on-wiki discussion, but it was really 34 people, and they brought a lot of good energy and ideas; but bringing in almost another 200 people via private channels, we felt, was really interesting and got us a lot closer to our goal. We also looked at diversity of respondents, again wanting to make sure that we have a wide range of voices being represented in these conversations. So this is the breakdown in terms of global north and global south. We also looked to make sure that we had perspectives from women as well as men. And then the country spread: over 100 countries participating in this conversation. And that, to us, is really valuable. And then again, looking at all Wikimedia projects.
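For readers curious what "aggregated them together, qualitatively coded them" can look like mechanically, here is a minimal sketch in Python. The channel structure and theme labels below are invented placeholders, not the team's actual codebook or data.

```python
from collections import Counter

# Hypothetical responses from the three feedback channels, each already
# tagged with the themes a human coder assigned to it. Labels invented.
survey = [{"themes": ["applications", "impact"]},
          {"themes": ["reporting", "applications"]}]
on_wiki = [{"themes": ["applications"]}]
small_groups = [{"themes": ["impact"]}]

def aggregate_themes(*channels):
    """Pool responses from every channel and count how often each coded
    theme appears across the whole consultation."""
    counts = Counter()
    for channel in channels:
        for response in channel:
            counts.update(response["themes"])
    return counts

themes = aggregate_themes(survey, on_wiki, small_groups)
```

The point of pooling before counting is that a theme raised quietly in the survey and loudly on wiki still ends up with one combined tally, so no single channel dominates the analysis.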
So the majority of voices were still coming from Wikipedia, but we did get a lot of input from folks on our sister projects as well, which is exciting. Okay, so what are some of the things that we heard in this consultation? Satisfaction with overall experience. There's not, I think, a lot that can be drawn from this slide. I look at this in some ways as a baseline for future years, right? As we make changes, we want to look at how that impacts people's experience over time. But the other thing to note is there's certainly at least a 20% opportunity here to improve. So we wanted to dig in a little bit to this idea of simplicity, or complexity, in our processes; wanting to really understand how many folks felt that engaging in a grants process was complicated, and if so, which pieces they were particularly stuck on. Overall, again, we see lots of room for improvement. About a quarter of folks found it difficult, so that's at least 25% room to improve. In terms of what people found easiest: Janice. Janice is our wonderful grants administrator. Let's hear it for Janice. So once a grant is approved, there's some paperwork you have to fill out, and then there's the actual financial transfer itself, and we need to give it up for finance as well for their help with that. And this is particularly significant because the grant making team has focused a lot in the past couple of years on our back-end processes: working with finance on distribution, working on cleaner agreements, and, in general, the back-end grants administration. So it's pretty exciting to see that that is working well. Now, what's not working well: applications are overwhelmingly a theme that you'll continue to see in the next few slides, and so it's an area that we're going to be increasingly focusing on.
And then the other one is quite large, right? You see right here: collecting global metrics for reporting. We've invested some in the tools that help facilitate that, but it's clear to me that we're not there yet, and there's some room for improvement there as well. So we asked people a little bit about their priorities: what was most important as we look at making upgrades and as we're thinking about our systems as a whole. And I think these results are also really interesting. Achieving impact came up really, really high. And that's something where, again, I wish we had done this same survey two years ago, because I bet that that has increased, right? As a movement, we've increasingly had a lot of conversations around impact, and to me this is an interesting proof of an outcome of that. The other two highly ranked ones, again, are applications. And something that my team found really interesting is that simplicity and speed of applications rank a lot higher in terms of people's priorities than even community participation in review. And, you know, we are participatory grant makers. We do our work in public. We involve the community in making our decisions. And so it's interesting to start to think about that balance: what are the places where we really do need strong community review and those more extensive processes, and what are the places where actually simplicity and speed of applications are the most important thing? All right. The resources team thinks about more than just sending money out the door. We think about the non-monetary resources that we're offering as well. And so we wanted to understand from folks: what is most important to you in terms of the non-monetary resources that we can offer? And also, which are the ones that we're doing the best at?
And so those three orange ones at the top basically represent to us the largest areas to improve. They're the areas that folks said were most important to them, and they're not the areas of highest satisfaction. So, looking at how we can offer more guidelines and resources to help people budget, for example; offering more connections to other teams, since we do often see ourselves as connectors as grant makers, and increasingly thinking about what systems we can offer to support that is really useful; and then resources for folks who are running online programs. Okay, so you remember that complicated slide I showed you at the start, the idea that I told you to forget? All in all, what we found was a general endorsement of that idea. Folks weren't opposed to the broad strokes of it, for the most part. But at the same time, we got a lot of suggestions for improvement, ways to tweak it. We had the community identifying about as many strengths as concerns. And so we decided to really use those concerns, put them into practice, and make some tweaks to improve the idea. And you can see here, these are just a few things that people said in response to the consultation. Not everybody agreed on everything, right? There were some folks who very strongly thought that the idea was very clear and would be wonderful for everyone, and there was absolutely the reverse opinion too; not surprising in a movement as big as ours. So this is where we've ended up now. Based on all of the input, we've made some adjustments. And I think one thing to think about as you look at this is the blue side versus the yellow side. The blue side is beginning to focus increasingly on that speed, simplicity, lower-risk, fewer-dollars angle. That doesn't mean there isn't a little more complexity between the two, but this is how, increasingly, we're thinking about it.
Whereas the yellow side is grants that tend to be larger, tend to be a bit more complex, and tend to involve a little more risk. And many of those do need deeper community review, right? A more extensive process, larger reporting requirements, and so on. So that's a bit of what we're looking at. For those interested in the details, some of the modifications that we made to the initial idea based on this: we had started with a broader events concept and recognized that it actually was more confusing than helpful in some cases, so we're moving over to the idea of conference and travel support, which is a lot more specific. Keeping project grants simple, right? One application, easy renewals, but continuing on the back end to focus on getting started with lightweight experiments and then growing them from there. We're adding in a new pilot, which we are just launching now, for annual plans, to begin to provide better support for user groups and some of our smaller affiliates: organizations who aren't currently being best served by the full-process annual plans that we have in place now. We had thought about a staffing limit for those, and based on community responses, we're going to pilot without that. So it'll be interesting to see how that goes. And then finally, prioritizing upgrades to support: applications being the first area that we're going to focus on, and then moving into reporting in general, including global metrics. This is the timeline. Again, we wanted to run this consultation and lay these plans early so that we had time to plan together with the communities that would be impacted. So we're starting with the annual plan grants pilot. Winifred is leading the charge on that right now, and we're excited to see what happens there. Based on what we learn from that, we'll be rolling more changes into the annual plan grants.
And it's not until July 2016 that we'll begin to transition project grants over to this new format as well. And with that, I just want to close with thanks to everybody who participated in this consultation. It takes a lot of time and energy to lend your voice to a discussion like this, and over 200 volunteers were right there with us. So big thanks. And now we go to Kevin for the metrics update. Right. Thank you, Siko. My name is Kevin Leduc and I'm the manager of the analytics team. This month, I'd like to focus on some page view data. From April to about mid-September, we had an average of 523 million page views per day. The red line shows the total page views across the summer. Desktop accounted for 58.1% of traffic, mobile web was 40.7% of traffic, and the apps were 1.2% of traffic. The blue line here gives you the scale of everything else relative to desktop. Some of the things that you can see here: it looks like a saw, a sawtooth pattern. That's because we get our highest number of page views on Monday, and then it wanes, and on Saturday we get our lowest number of page views. Another trend you can see here is a seasonal one: it dipped over the summer, and then in September, right after Labor Day, it peaked. So that's the back-to-school and back-to-work effect. Another way to break this down is to look at the location these page views came from. If we divide it by global north and global south, the global north accounts for 77.4% of our page views. Oh, and just a few notes here. This is where we switched over to HTTPS; that's the line. And it doesn't have a tremendous impact on our page views, except a little bit in the global south. So there's a problem with page views right now, and Toby's going to come up and address this with me. We have multiple sources of page views. Oh, yes, and thank you, Tilman, for making those graphs that I just showed. So we have different sources of page views.
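The platform split and the Monday-to-Saturday sawtooth Kevin describes can be computed from daily per-platform totals like this. The figures below are made-up stand-ins, chosen only to loosely echo the quoted 58.1/40.7/1.2 split, not real traffic data.

```python
# One illustrative week of page views (in millions) per platform.
# These numbers are invented for the sketch.
daily = [
    {"day": "Mon", "desktop": 330, "mobile_web": 230, "apps": 7},
    {"day": "Tue", "desktop": 325, "mobile_web": 228, "apps": 7},
    {"day": "Wed", "desktop": 318, "mobile_web": 225, "apps": 7},
    {"day": "Thu", "desktop": 310, "mobile_web": 222, "apps": 7},
    {"day": "Fri", "desktop": 295, "mobile_web": 215, "apps": 6},
    {"day": "Sat", "desktop": 270, "mobile_web": 205, "apps": 6},
    {"day": "Sun", "desktop": 285, "mobile_web": 212, "apps": 6},
]

def platform_share(rows):
    """Percentage of total page views contributed by each platform."""
    totals = {"desktop": 0, "mobile_web": 0, "apps": 0}
    for r in rows:
        for k in totals:
            totals[k] += r[k]
    grand = sum(totals.values())
    return {k: round(100 * v / grand, 1) for k, v in totals.items()}

def busiest_and_quietest(rows):
    """The weekly sawtooth: which day peaks and which bottoms out."""
    by_day = {r["day"]: r["desktop"] + r["mobile_web"] + r["apps"]
              for r in rows}
    return max(by_day, key=by_day.get), min(by_day, key=by_day.get)
```

With this toy week, the peak falls on Monday and the trough on Saturday, which is exactly the weekday/weekend shape described above.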
Wikistats, which is stats.wikimedia.org, uses a legacy definition of page views, and it's not as good at filtering out spiders, like Google's and Yahoo's spiders. Then we have custom reports, like the one I just showed you; those have better filtering of spiders, and they're generated ad hoc when needed. And then we have comScore, which also gives us page view data, but they base it on panels and then extrapolate to global numbers. So, as you can imagine, the fact that the core reading metric is displayed inconsistently is problematic. This has actually caused some fairly significant confusion, even at board level, which is a bummer. So Kevin and I have decided that we need to do something about this. We're going to work with some key stakeholders around the foundation and essentially convince them that the new page view definition is the right one. It is a lot more descriptive than the old definition, and it's the one we'd like to move forward with. And then we're working with Erik Zachte, who is the longtime creator and maintainer of Wikistats, to move his page view reports over to the new definition. We'll let the community know, and we'll decorate those reports with a line saying: here's where the old definition ends and here's where the new definition begins. But hopefully, well, not hopefully: our intention is to make sure that we have a consistent definition of page views, so that everywhere we display page views is the same and we can have apples-to-apples discussions. So now I'm going to pass the mic to myself to talk about a feature. We've got a lot of new people here, but do you remember? Actually, I'm going to talk a little bit first. This is really a journey to a feature. The feature is really cool, and I'm excited to show it. But I also want to talk about how we came to the feature and underline some of the ways in which we're thinking in the reading team.
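The difference between the legacy and new page view definitions is largely about filtering self-identified crawlers out of the raw request logs. A toy version of that filtering might look like the following; the pattern list is illustrative, not the official spider list used by the analytics team.

```python
# Substrings that mark a request as a self-identified crawler. Real
# page view definitions use a much larger maintained list; these are
# just examples for the sketch.
SPIDER_PATTERNS = ("googlebot", "slurp", "bingbot", "spider", "crawler")

def is_spider(user_agent):
    """True if the user agent self-identifies as a known crawler."""
    ua = user_agent.lower()
    return any(pattern in ua for pattern in SPIDER_PATTERNS)

def count_page_views(requests):
    """Count only requests that do not look like crawler traffic."""
    return sum(1 for r in requests if not is_spider(r["user_agent"]))

requests = [
    {"user_agent": "Mozilla/5.0 (Windows NT 10.0; rv:40.0) Firefox/40.0"},
    {"user_agent": "Mozilla/5.0 (compatible; Googlebot/2.1)"},
    {"user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0) Safari/601.1"},
]
```

Here `count_page_views(requests)` excludes the Googlebot request, which is exactly the kind of discrepancy that makes a spider-filtered count diverge from a legacy unfiltered one.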
And I think this echoes how people are thinking about products across the foundation. So, do you remember WikiGrok? It was an interesting experiment to see if we could use micro-contributions to populate Wikidata. It was a good idea, but ultimately we weren't able to get the data into Wikidata at any sort of acceptable level, so we stopped the project. But we didn't forget about it. We kept it in the back of our minds, because even though we weren't able to get data into Wikidata, the mechanic actually worked really well. People actually liked interacting with the WikiGrok mechanic on their mobiles. So we asked ourselves: can we use the mechanic in a different way? About the same time, we were trying to get more users into beta, and we were thinking about how we might do this. And somebody had the idea that we could use this WikiGrok mechanic to actually do that. So we had an idea, we put together an experiment, and we looked at the data. This is what we call working in the business. It's actually pretty exciting, right? I don't have the exact numbers, but as soon as we put that call to action up: boom. And then it maintained at a pretty steady level. So this is exciting, and I think this proved to us, twice now, that this mechanic actually worked. So, introducing QuickSurveys. We have another problem in readership, which is that we don't really know a ton about our readers, and we wanted to come up with a very light way of asking them questions. So in this heavily expanded version of the mobile interface, you see right here a quick survey: should the text be bigger? Using the knowledge that we gained from both experiments, we're pretty sure that this is going to be effective. And here's just a quick, I guess, engineering-manager mockup.
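One common way a lightweight survey mechanic like this decides which readers see the prompt is deterministic bucketing on a session token, so the same reader consistently gets (or doesn't get) the survey across page views. This is a hedged sketch of that general technique, not the actual QuickSurveys implementation.

```python
import hashlib

def in_survey_sample(session_id, sample_rate):
    """Deterministically map a session token to [0, 1) and show the
    survey only to the fraction of readers below sample_rate.
    session_id and sample_rate are hypothetical parameter names."""
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    # First 32 bits of the hash, scaled into [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    return bucket < sample_rate
```

Because the bucket comes from a hash rather than a coin flip, a reader isn't nagged on one page view and spared on the next, and the sampled fraction still converges to the configured rate across many readers.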
I kind of screwed up on the mobile thing, but you'll see it here. I think we're still working on exactly where it's going to appear, but it'll probably appear near the top on desktop and below the first paragraph on mobile. We're currently working with Abby's team and Dario's team and a couple of other folks to design the first survey, which is going to come out in Q2. We've let the community know about it, and we'll keep talking about it. We're really focused on understanding our readers. But we also understand this actually might be a pretty interesting feature for a lot of folks. So yeah, we're doing the MVP thing: we're focusing on one very specific problem, which is understanding our readers. But after we're done with that, we're surely interested in talking with other teams about how they can use it and how we can actually manage that moving forward. We're already having a few discussions, but the concern for us is that this is a great feature, but in managing it you're entering into CentralNotice-type coordination, and that's not really what the reading team is all about. But, you know, we'll get the feature right, and then we can talk about that. For more information, please feel free to reach out to me, Jon Katz, or Anne Gomez. Thanks. Oh, yeah, I'm Abby. I'm the lead design researcher at the foundation here. This year at Wikimania, the design research team did some research on mobile contribution, to see how people are contributing to Wikimedia projects on their mobile devices. So here's our table at Wikimania; that's Jonathan Morgan, our senior design researcher, sitting there. Basically, what we did was talk with a whole bunch of people and ask if they contributed on mobile at all. And if they did, we asked if they would mind giving us a demonstration of what they do. So, okay. Yeah. Can we go back to the first video? Yeah. Okay.
So I want to play a video here and just show you, from a user's perspective, what it was like, how he experienced the mobile contribution he was doing. So next slide, and play. Okay. So now you're in this section for existing projects. Yes. Okay. And tell me what you're going to do. So I'm going to add another question which allows a user to add a translated text in the browser. So, okay. So how would I do that? As I'm not familiar with the Markdown language, what I would do is I would copy-paste the things. The problem is the text was here, but I was not able to see. Ah, okay. I see. I see. And then when you scroll up and down, it finally appeared. No, it didn't appear. When I had typed two characters, I had to press backspace and then it appeared. And if we go down, this is bad. How do I go down with my keyboard? It's done. And we go to the last slide. And now I can go here. Let me paste it, and I can't see what the text is. Yeah. So if this was happening and you were actually trying to edit this on mobile, like, let's say somebody told you you had to get the information up there, and you were on a train or something, and this happened, what would be the next thing you would do? I would open my laptop and edit. You know, open your laptop. And if you didn't have your laptop with you, what would be the next thing you did? I would just add it to a to-do list and I would do it later when I go back home. He continues to try. So, okay, let's try it out. Ah, just when the keyboard is there, you can see the bottom part. Okay. Okay, I can't even see the first one. Again, it's another weird thing. It's very sad. We're feeling his experience. Copy it again. Yeah, let's copy it. Let's give it one more try. Okay, let's hope it works this time, or we'll just call it a day. Okay. So, like, copy it, and it goes to the bottom. Okay, why is this thing here? And as usual, I can't see what is at the bottom. So I'm going to close this and click the button. Press it.
And I'm going to place it here. Come on. Yay. So, what did you just do? So, I had to just press it down and I had to paste it. And I don't know why it wasn't working before. Okay, so this participant tried really hard. This was an 18-minute session. He did one task. He kept trying; he had perseverance. Not everyone's going to try that hard. And he was elated when he succeeded. So, all right. Next slide. Oh, that's me. Next slide. What do I... Oh, sorry. Okay. So, after we did analysis and reviewed all these videos, we also observed some familiar workflows being disrupted, like finding and using talk pages (you'll see a video in a minute about that) and finding the language switcher. There are three different contexts, and the language switcher is in the same place in both mobile contexts but in a different place than on desktop. So, finding things is a little different: knowing what context you're in, figuring out where to edit. There are no editing toolbars visible much of the time when people go to the app or the mobile web to try and edit. So, let's watch this clip. Here's another example. He's an experienced contributor also, and he is trying to add a comment to a talk page that he just created. So, if you could play the video. I would feel awkward editing without checking the talk page. He just said he'd feel awkward without checking a talk page before he edited. Are you on the mobile website? Is the audio connected? But it's hard for me to keep track, actually. So, now I want to find the talk page. I forget exactly how to do it because there's not an obvious interface. So, I think the way that I need to do it is... So, right now he's looking for the talk page. Swap out, because there's no specific way to say go to the desktop view. That's actually the reason I've stopped using the app. So, I can go to the history page. And I'm one of the people finally part of this. And now I'm logged in. So, now the mobile version.
And it's the mobile page history, which is awesome. But now I have to get back to the page itself. To see if I can... I still can't get to the talk page. For the reason I don't actually listen to notifications. There we go. Great. Great. No, but that's wonderful. The last time I used the talk page, I failed. Okay. So, in case you couldn't hear, he was looking for the talk page. He couldn't find it. And then when he found it, he said: oh, there it is. That's wonderful. Last time I wasn't successful. So, there had been an improvement and he was able to find the talk page, but it was still difficult to find. I also asked him: are you in the mobile web or in the app? And he said: I'm in the app right now. But then he also said: it's really hard for me to tell sometimes, actually. So, that describes the contextual confusion which we observed in a bunch of these 14 participants. So, slide. Thank you. This slide is just to demonstrate: we have the app, mobile web, and desktop, and doing language switching. In the mobile web and the app, it's very similar. You're on the article, and they look different. But then you scroll down a whole bunch (it's a really long article) to find language switching, and then you get a language picker. You can choose which language you want to go to, and if that article exists, then you'll get to the article. On the app, it looks like this. On mobile web, sometimes you go to the desktop version of another language wiki. And then here's desktop; it's a very different look, obviously. And people who have learned to edit and contribute to Wikipedia and change languages on desktop have a very strong mental model of the language switcher being down on the left. Several of the people were like: I don't know where the language switcher is. They were looking, and they're like: it's got to be down here.
They were looking in the left bar, the left menu, and places like that on mobile, because that's where their mental models have been built to quickly find that. You know, they've burned those paths in their brains to find that action. So, this is just demonstrating with one example that there are very different contexts for people to navigate back and forth. And people use all three contexts as contributors, because you can do some things in one and other things in another. Like, the reading experience is great on the app, but there are things that you have to go to desktop to do if you want to contribute on mobile. So, we're thinking that people will likely make more mistakes on mobile as it stands right now, as far as contribution goes. For example, there are no editing toolbars; people have to have the wikitext in their heads, plus all the back and forth and the difficulty in selection, and things like that. We're concerned about that. So here's one more video. I'll just let it play, but this is another person who's trying to make a comment on a talk page that he just created. So, if you could play the video. So, let's see, I made a mistake on the talk page. I took the page. Yes, I made a mistake. Then he has to click back. Here we go. Awesome. I'd like to edit it, and it's not... it doesn't want me to edit it. So, I have to read it. And he said: oh, it doesn't want me to edit. So, it was another difficult experience for him. He made a mistake and he knew to fix it. But we're wondering if not everyone will know that they made a mistake, and mistakes might not get fixed. Or it's a lot of rework. So, okay, slide again. Thank you. Another thought we had was that mobile-first people may not have access to as many learning experiences as they might need. If their first experience trying to contribute to Wikipedia is on the mobile web or mobile app, they're not going to find toolbars right now.
So it could be difficult for them to learn wikitext; they can copy and paste things, that's one way. Also, when you click edit, the infobox text looks grayed out. It is actionable, you can change it, but it looks like you can't. That causes confusion, and for new people it might make them hesitate. Also, many people aren't aware that the visual editor is available; the visual editor is just in its beginnings on mobile, and it's hard to find. Some people were looking for it, because they use it on the desktop version, and it's there in some languages and not others. It's complex because of beta, whether they have beta features turned on or not. And very new people might not even be aware of talk pages or history pages or diffs, and these are tools people obviously need to curate and edit.

Okay, next. Another thought: when we were at Wikimania, the internet went down. The internet goes down here and there, so people who have consistently intermittent internet may need some support. For instance, if we were able to save work intermittently without the save button being pushed. I know there are probably hurdles to that, but it would really help people who work in a context of intermittent internet. It would be less frustrating, potentially. So that's one thing we were thinking about.

And yes, there are a bunch of super Wikipedians out there who will clear all those hurdles and keep trying and keep trying to get the work done. Not everybody will do that. We talked to one guy who built a whole wiki on his Android phone, and that's one of the stories I just find awesome and amazing, but not everybody's like that. I would love to better support both the people who are like that and the people who are still learning; we could support them on mobile a little better. So it looks really bad, but this is good for us to learn now.
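Purely as an illustration of the intermittent-save idea above, here is a minimal client-side sketch: edits are periodically mirrored into a local store so a dropped connection doesn't lose work. Every name here is hypothetical; none of this is a MediaWiki API.

```javascript
// Hypothetical draft-autosave sketch (illustration only, not MediaWiki code):
// snapshots of the editor's text are written to a local store on a timer,
// so work survives an intermittent connection or a page reload.
class DraftStore {
  // `storage` is anything with get(key)/set(key, value),
  // e.g. a thin wrapper around the browser's localStorage.
  constructor(storage, intervalMs = 5000) {
    this.storage = storage;
    this.intervalMs = intervalMs;
    this.timer = null;
  }

  // Persist one snapshot immediately.
  saveNow(key, text) {
    this.storage.set(key, text);
  }

  // Persist whatever getText() returns, every intervalMs milliseconds,
  // without the user ever pressing a save button.
  watch(key, getText) {
    this.timer = setInterval(() => this.saveNow(key, getText()), this.intervalMs);
  }

  stop() {
    clearInterval(this.timer);
  }

  // On reconnect or reload, recover the last snapshot.
  restore(key) {
    return this.storage.get(key);
  }
}
```

In a real editor the store would be keyed per page and the draft offered back to the user on their next visit; the sketch only shows the save-without-a-save-button loop.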
It's good for us to appreciate these experiences and learn how we might better support mobile contribution. This is a great point in time for us to do this kind of exploratory research, and we're going to dig in and do more. We're already starting usability testing on the visual editor on mobile, and we did a heuristic review.

We also observed some things that work great on mobile, like playing the Wikidata game. A lot of people said they do really small edits on mobile because it's something they can do on the bus. Adding articles to collections was mentioned. Also, people need multiple tabs when they're browsing, not only on desktop but on mobile too; they use them for translation, for browsing, for finding references, so that's really successful. The random link was mentioned, and also the nearby feature, though in some places the nearby feature surfaces too many things, like roads; every road is a nearby entry. So that said, this is all good for us to learn about; let's make a better experience for mobile.

There are a few possible next steps for design research to participate in this. Like I said, we're going to do some usability testing on VE on iOS and Android. We're going to dig into the data from the strategy survey, since a lot of it was around mobile, and there may be some useful feedback or requests for features in there. We want to consider ways to make the cross-platform experience a little less bumpy, a little more supportive of the mental models that already exist, if that would work. And we want to consider the drawbacks and benefits of a responsive desktop, which could potentially improve the experience. I know there's been some work done on that before, and maybe that's something we can consider. And that's it. So, next. Okay, is it ready?
Hi, I'm Moriel from the Collaboration team, and I'm going to present Echo, or the new Echo. I'm hoping everybody knows Echo: Echo is the notifications extension that we have, and you can see it here. We did a couple of things with the new Echo. The first thing we did was move it to OOUI, so now it looks better and we can support it better. And a lot of what we're doing is a kind of prerequisite for other big things that are coming up.

So this is the demo. As you can see, you're not looking at a split notification yet, and that is because new users that don't have any messages yet only see alerts. In this case, I have one alert; this is a new user. If I open it up, it says, welcome to MediaWiki. This is my alert. And as you can see now, I have no alerts, and I can keep going throughout MediaWiki. I will ask my trusty helper to send me a notification now. Could you? Of course, live demos. So we'll wait a little bit until another notification is sent. Everything was ready, we really prepared for this, and then... it's a live demo.

All right. So there we go. I have another notification, I just got it. And I believe I'm not supposed to open it yet. So now I'll ask Roan to send me a message, or thank me. Yes. All right, so we get another notification, and I can open it up. You can see I have two things now, it's great: my previous notification and two new notifications. So alerts are automatically marked as read when you read them; they are considered something very immediate, so just as you read them, they're read and you can continue on your way. So now I'll ask for a message. Got it. So when I refresh the page... No. Goodness. No, you were... All right. So what I'm going to do now, that's right, I'm going to Special... Sorry. All right, I'm going to follow a board. Yes, I follow the board. I'm going to leave that board. And now, sorry about that.
And now we're going to add a message to the board. Up until now, I only had alerts. I'm going to refresh this. I have a message, yay! I can take a look at that message and see that there's another test from Roan, great. And if I click it, I go, there we go, to where the message originated.

Now I'm going to ask for a talk page message. What we did with talk pages: we kept the orange bar, as you will see. Should I refresh? Yes. All right. So we kept the orange bar: I have a new message, and I also have the alert. Before I open the popup, I will ask for another message, not on my talk page, just another topic on the same board that I'm watching. So what happens now is I'm going to have two messages. Two messages. Come on. All right. So, two messages and my orange bar, which means that somewhere in the messages I got, there is also a talk page message. So let's see; I'm going to open the messages. Yep, I can see that I have two new ones, both of them unread, one of them on my talk page. What I can also do is say, you know what, this one I don't care about. Sorry, Roan. I'm going to mark it as read and keep doing what I'm supposed to be doing. And I can close it. I can still see here that it's no longer immediate: I've seen my messages, but I still have a message left. Then at any point I can click and say, mark all as read, and since I also marked the talk page message as read, I got rid of the orange bar.

And that is it. I'd be happy to answer any questions or comments about these kinds of things. Not only did we split the notifications, we also did it to support cross-wiki notifications and a lot of big stuff that's coming up. So thank you. That was the product demo. All right, wonderful. Thank you very much to all of our presenters, and I'd like to ask you to come up so that people can ask questions.
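The split-notification behavior demoed above can be summed up in a small model: alerts are auto-read when viewed, messages are read explicitly, and the orange talk-page bar persists until every talk-page message is read. This is a hypothetical illustration of that behavior, not the actual Echo implementation; all names are made up.

```javascript
// Hypothetical model of the split alerts/messages behavior from the demo
// (illustration only, not the Echo extension's real code).
class NotificationCenter {
  constructor() {
    this.alerts = [];    // immediate; cleared as soon as they are viewed
    this.messages = [];  // { text, onTalkPage, read }
  }

  addAlert(text) { this.alerts.push(text); }

  addMessage(text, onTalkPage = false) {
    this.messages.push({ text, onTalkPage, read: false });
  }

  // Opening the alerts popup marks every alert as read automatically.
  viewAlerts() {
    const seen = this.alerts;
    this.alerts = [];
    return seen;
  }

  unreadMessageCount() {
    return this.messages.filter((m) => !m.read).length;
  }

  // The orange bar shows while any talk-page message is still unread.
  orangeBarVisible() {
    return this.messages.some((m) => m.onTalkPage && !m.read);
  }

  markAllRead() {
    this.messages.forEach((m) => { m.read = true; });
  }
}
```

The key asymmetry is that reading is a side effect of viewing for alerts but an explicit action for messages, which is why the orange bar only clears after "mark all as read" covers the talk-page message.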
For the questions, we're going to start with IRC. Over to you, James. Hi there. This is a question for Abby from Matt Flaschen. He asks: when you were testing on mobile, did you test Flow for comparison with wikitext talk pages? No, actually, we didn't. This was very ad hoc testing; we didn't do a recruit or ask for specific use cases or try to test specific products. We just asked in general, do you contribute to Wiki projects? But we could do something like that. Awesome, thank you. Yes, go with the microphone up there.

This is also a question for Abby. One of the things I saw is that all the people using it were experienced editors, so my question relates to the inherent nature of editing on mobile. First, do you think long-form editing is applicable on mobile? And secondly, if not, are we ready to research things that are short-form editing? Because one of the other things I saw was that the Wikidata game was very popular on mobile, and the reason behind that must be that it's easy to do and it's a very tactile interface, as opposed to long-form editing. Do you think that is something that is valuable and that we should be spending our resources on?

I think we should think very carefully about it and do some more research, and learn more from the collective intelligence here and in the community, and of course collaborate with the community to discover that, to see what we should do, and make a strategy about it. Thank you. Thanks. I mean, we know that our version of the Wikidata game, WikiGrok, worked on the front end. We know users were happy to do small tasks on mobile. The fact that we couldn't figure out how to get the data into Wikidata was really why that project was shelved.
And I think the struggle the mobile web team had there was figuring out what the right use of that kind of mechanic was. Like I said, I think we found it with polls, but we know people will do stuff like that on their mobiles. The question is, what does that actually mean for editing in general? There are lots of examples of people doing long-form editing on mobile, but it's really hard right now, and we could potentially make it easier. So it's definitely something to think carefully about as we move forward. James, would you perhaps like to say something? No. The voice of IRC is neutral on all things.

Yes, a question from IRC, please. For you, Frances, actually. Yes. Pine asks: they thought there was going to be a new community tech wishlist survey happening in September. Did it happen? Has it been postponed? Did they misunderstand? It's on the roadmap. It is on the roadmap. Okay, thank you very much.

And let's see. Yeah, Dario. Hey, Dario here. A question and comment about microsurveys. As you know, Toby, I'm super excited about this feature; I think it has a really high ceiling. Two questions: first, can you say something about what can be used to trigger these microsurveys? It might help people get a sense of how we can use them. And second, do you expect the community will be able to use them, or is it primarily a foundation-focused function? Yeah, so I'll be honest: I don't actually know everything we can use to trigger these surveys. Right now it looks like page load; I'll have to talk to the product folks. As far as the community goes, we've talked about this, and that would be awesome. I think it's just going to be a conversation we have down the road. My concern as a group leader is that it's hard to ask the reading team to administer and manage surveys for the foundation and the community.
And so I think we'll need to partner with other folks to actually make that happen. But yeah, I would love to do that; like you say, the possibilities are endless. The ceiling is really high. All right, we are getting right towards the end of the meeting, so there's time for one or two more questions. Yes, go ahead, Lila.

I have a comment, probably for Siko. I was maybe reading too much between the lines, but I felt there was a lot of pressure, maybe on your team or on the community, to prove worldwide impact for grants. If that is the case, I just want to make the comment that this is common practice in academia. For example, a place like the NSF makes a lot of investments in research, thousands of investments per year, and one or two eventually end up being the fruitful ones. So I would feel less worried: we make investments by giving out grants, and maybe many of them don't result in something globally impactful, but if you have one or two investments in the year that end up being great, that already pays off for all the work that you do.

Yeah, I think that's a really great point, and we think similarly about that. IEG, Individual Engagement Grants, is a good example: that's a program specifically focused on experimentation, and with it we fully expect a number of those experiments not to demonstrate huge impact. It's very much a "let's try it and see how it goes." That's a bit different from a large grant to an organization that's growing programs year over year and increasing those investments based on impact, which is where we've been thinking about this most. But your point is really well placed: it's not that we expect every grant to demonstrate the same level of impact and everything to be successful.
We're really interested in learning, and in how we make different and smarter mistakes next time.

So my comment is more of an announcement, so it should be short. QuickSurveys is awesome; it seems like it's going to be a really useful tool. But we have a lot of other ways to use surveys and gather data. I'm actually the survey specialist on the Learning and Evaluation team, and my job is to help coordinate and to provide tools and a bit of service around surveys. This is kind of an early announcement; I'm going to send an email around to give you all a little more information. But just to say: surveys take a long time. They're not something you can pull together in a week or two, so just a heads-up that they require a lot of planning. And I guess that's it. If you want to do a survey and you have at least four weeks' lead time, please reach out to me; otherwise, I will be sending something around soon.

Great, thank you. We are out of time. Five words. All right. So yes, thank you all very much. Please join me in thanking the presenters again, and thank you all for your attention.