Hello, welcome to February's edition of Lightning Talks. I'm your host, Kevin Leduc, and we have a few great Lightning Talks lined up for you today. Lightning Talks are an opportunity for engineering teams, other teams at the WMF, or folks in the community to showcase their projects. If you have questions while this is streaming, you can post them on wikimedia-tech, the IRC channel; Megan is monitoring that channel and will relay your questions into the microphone. Talks this time are 15 minutes long, and there will be a little bit of time after each talk for questions and answers. Our first talk is by Pine, and we actually have a video that we'll show you; after the video, Pine will be live to answer your questions. So let's start with the video. Hello, this is Pine. I'm here to talk to you today about the instructional video series that we're tentatively calling LearnWiki. First, we'll talk about the opportunities that this project is intended to address. We'll talk about the project basics and measures of success. We'll talk about the project timeline, which is still subject to change as the project moves forward. We'll talk about use cases for this video series. We'll talk about who's involved in the production, and then we'll have some time for Q&A and see the image credits. So there's a long list of opportunities that this project is intended to address. Many of the help materials on English Wikipedia were designed prior to Visual Editor. And as I think most of us know, Visual Editor marks a major change in the user experience of Wikipedia for editors. The existing help videos that are actually shown on en.wikipedia.org are few, and they're often antiquated. There are some newer ones on YouTube, but those aren't necessarily appropriately licensed, and their quality is also a little bit variable. The existing on-wiki help pages that we have are often walls of text.
Even I, as a pretty experienced editor, can find them lengthy and frustrating and easy to get lost in, and I'm sure new editors find them all the more so. So, a little background on the Wikipedia Adventure. The Wikipedia Adventure is an interactive tutorial designed to introduce new editors to Wikipedia. It was funded through an individual engagement grant a few years ago, and I was, to some degree, involved in its creation. It was designed prior to Visual Editor, it takes about 30 to 45 minutes to complete, and it's generally completed in a set sequence. So the design methodology is a little bit different, and it was designed in a different era. With this series, we're taking the interactive tutorial, or I should say visual tutorial, into the Visual Editor era. We're going to try to design it such that the tutorials are modular, so that viewers can take just a couple of modules instead of watching the entire sequence if they're only interested in learning one particular skill. The Teahouse I list here as an opportunity because it's something I hope we can leverage through this project. The Teahouse is generally viewed as a successful project; it is a mentoring space for new editors. And I'm hoping that through this video series we can both encourage new editors to take advantage of the Teahouse and make more efficient use of the time of Teahouse hosts, who can refer new editors to this video series. A number of affiliates, as well as the Wikimedia Foundation, are interested in increasing the quantity of content, as well as the number of contributors, coming from GLAMs, educational institutions, STEM institutions, or other mission-aligned organizations, such as NIOSH. Let me expand on this a little for those unfamiliar with the acronyms. GLAMs are galleries, libraries, archives, and museums. STEM institutions are those involved in science, technology, engineering, or math.
And NIOSH is the National Institute for Occupational Safety and Health. So in other words, we are interested in increasing the amount of content on the Wikimedia projects from these mission-aligned organizations. And hopefully this video series will help to introduce the concepts and methods for participating in and contributing to the projects to these organizations, their staff members, and their volunteers. For editathons and workshops, it is not always possible to have an experienced Wikimedian available. For example, it may be that an event is happening too far away, or at a time when no experienced Wikimedian is available. So this video series will be available on demand. Another important point is that time spent learning Wikipedia and Wikimedia Commons is time that's not spent creating or uploading content. And this can be a big deal. If we have a two-hour event and new contributors spend their entire first hour learning how to use the projects, that's time taken away from contributing to the projects. Now of course we do want people trained with the necessary skills, but not everybody needs a full hour of training. So hopefully the modular nature of this video series will help to get people up to speed quickly with the skills that they need at that particular event. And finally, just to note that the practicality of delivering instruction over video, over the internet, has increased significantly as the bandwidth available to many end users has increased over time. There is a limited population of experienced Wikimedians who often help others, and our time is often spent repeating the same information. I know this because I personally am involved with helping new Wikimedians periodically. Mentors' time could be used more efficiently if well-designed instructional materials are available that cover common tasks and questions.
And preferably we'd like that information to be available in multiple languages; I'll talk more about the multilingual features of this video series in a little bit. The modular nature of this video series will allow instructors, mentors, and new Wikimedians to select video modules on an as-needed basis. We're looking at about 40 to 60 minutes of view time for all modules, and just to emphasize, not everybody needs to take all 60 minutes of video. Some people may need just five minutes because they want to learn a specific skill. We're also hoping to have a short, 20-minute module set available for time-limited circumstances. We're looking at about a nine-month project timeline and a budget of about $8,800, excluding WMF staff time. And here are some possible tracks of modules. What I mean by a track is a sequence of modules that have been selected for particular use cases. So there might be one track for general Wikipedia editing. There might be a track designed for GLAMs and mission-aligned organizations. There might be one for folks who are editing health content; as many people know, there are some special guidelines that apply on English Wikipedia for those editing health-related content. And there might be a special track for those working in translation courses and education programs. As some people may know, there is a fairly large amount of translation going on in the Arabic Wikipedia education program, so we want to address that opportunity. For our measures of success, we're looking at viewership targets for the first one, two, and three years post-publication. We're looking for positive feedback on feedback forms. We're looking for positive community sentiment from experienced Wikimedians. We're looking for positive sentiment from educators, students, and GLAM representatives.
And we're looking for success stories, at least one from a Wiki Ed Foundation class and one from a GLAM organization, about how the videos helped them to achieve their goals. Just to clarify, the Wiki Ed Foundation is different from the Wikimedia Foundation: the Wiki Ed Foundation supports education classes specifically in the United States and Canada. So here's our project timeline. I did a fair amount of research prior to going through the grant approval process. Grant approval happened in December. Community input about the video specifics, what to include, was gathered largely in January. We're now in the script outline and script drafting phases. The midpoint report for the grant is scheduled for 29 March. Production of the actual video content, the video recording, is scheduled for June, with public rollout in August, near the start of the new academic year. That's also around the proposed time for WikiConference USA 2016, which I'm hoping will happen in that timeframe. The project completion date is estimated in September. So I'd like to talk a little bit about potential use cases for these videos. Here we can see, for example, some students in a medical English class, folks involved in the College of Languages and Translation at King Saud University, folks in the Wikipedia Education Program at City University of Hong Kong, a Wiki Loves Libraries event at Louisiana State University, a Critical Feminist Editathon, a museum using the Wikipedia collections tools, and folks working in an open editathon at a museum in Mexico. These are all potential cases where the videos could help these folks with their workflows. So here are the people who are involved in this project. It's, as you can see, a fairly long list; it needed four slides to categorize everybody. The primary grantees on this project are Nickel, Flora, and I.
The Wikimedia Foundation staff project leads are Marty in Community Resources and Victor in Communications. We're also getting staff input or support from a variety of people around WMF. We have Ty and Anna in Education, Aaron in Research, Jonathan in Design Research, James Forrester in Editing Product, Sherry Snyder in Community Liaisons, Chris Schilling in Community Resources, and Chuck in Legal. There's a fairly long list of community members who have made comments or asked questions, and I greatly appreciate that. Then there are the volunteer translators. You may remember that I said earlier I would talk more about the multilingual aspect of this video series. One of our goals is that the video script in particular will be easily translatable into multiple languages. The people shown here have already volunteered to do translation, and they include a couple of WMF staff who are participating in a volunteer role. The list of languages we have so far includes Spanish, German, Greek, Czech, Arabic, Armenian, Odia, Russian, and Ukrainian. We also received some general translation help from folks who helped us translate messages to village pumps. Some folks have produced or will be producing related materials. Delacia in Wikimedia Mexico is planning to produce a related video segment. Keilana, through an individual engagement grant, is planning to produce some videos for her Women Scientists Workshop Development Kit. And Siân Evans has produced some videos for Art+Feminism, and I believe is planning to update those videos with Visual Editor information. And here are a few other folks I'd like to acknowledge: Siko and the Individual Engagement Grants Committee, and the variety of WMF teams that have gotten us to the point where we are today. The Teahouse project and the Wikipedia Adventure project are both important predecessors.
I especially want to thank Jonathan for his work on the Teahouse, and also Jake and Heather for their work on the Wikipedia Adventure. And I'd like to acknowledge my peers in Cascadia Wikimedians, the Wikimedia affiliate organizations, and folks on IRC, the Wiki-research-l mailing list, and other mailing lists who have contributed their thoughts over the years and informed my thinking about this project. Here are the image credits, and I've provided a link here to Meta, where you can leave any questions or comments. Thank you very much. All right, thank you, Pine. I have a bunch of questions, but let's give other people a chance first. If you have a question here in the room, there's a microphone. Any questions on wikimedia-tech? I think Pine already answered this on IRC, but there's a question about where we can do the translation. Yeah, so if folks are interested in doing the translations, I would appreciate it if they would indicate their interest on the grant's talk page, which I linked to on wikimedia-tech, and there's also a link to it at the end of my presentation, on the last slide. Thank you. So I have three questions. The first one is, when do we get to see the first video, or do we have to wait till they're all ready in August? The second question is, how much time are you spending on this? Like, how many volunteer hours are going into this project for you and the other volunteers? And then, what is the budget for? It's quite small, $8,000, so what is that money being spent on? OK, so let me see if I can take these one at a time, and I apologize if I forget the remaining questions as I go through. So the goal is to have all the videos pushed out at one time. I had a discussion about this with Victor, and the consensus was that it was better to publish everything at once, because if in the production process we find that we need to revise things, it's easier to be able to do that in-house rather than having to reproduce stuff.
So the idea is we're going to try to push everything out at once. I think Victor is perhaps on IRC, or maybe he's available in-house to talk about this more if he wants to share his thinking, but I'm deferring to Victor on this because he's done more of this than I have. Just to clarify, on the budget: I am funded through an IEG. This is not a volunteer project for me; I actually am getting funded for this. All of the earlier research and other work wasn't funded, though. So yes, the budget is relatively small, and the number of hours, I forget exactly what it is, but it's on the grants page; it's in the hundreds of hours, not in the thousands of hours. And that does not include WMF staff time. It's not a huge amount of WMF staff time, but there is a little bit of staff time going into this. It also does not include the time of the translators, who are volunteering their time. If we add in the time of the translators, then I'm sure we would probably get into the thousands-of-hours category. And I think you had a third question; I forgot what the third one was. Can you repeat that? No, you answered both of them together. Are there any other volunteers working actively, like every week, or is it just the translators? So, no, there are not volunteers working every week. I get periodic comments and questions from people, of course, but there's not any other active volunteer on the project. That will change once the scripts come out. Once the scripts are produced, the idea is that at that point the volunteer translators will get involved. So at the moment, the translators are mostly sitting on the sidelines, although of course I do ask them for feedback on the script outline as we're going through it.
Once the scripts are frozen, we'll ask the volunteers who are doing translation to get active, and at that point I expect quite a few more volunteers to be actively involved with the project on a regular basis. Thank you. I'd like to ask a question. And I'm asking all this just to highlight how much volunteer effort goes into this and how our community supports this. Pine, thanks for the talk. I have one question for you. Are you familiar with the work that Wikimedia France supported and contributed to for the WikiMOOC class that is going to go online on February 22? I am not familiar with that project. If you want to drop a link on the grant's talk page, I'd be happy to take a look at it. OK. I noticed that's one of the very few Wikimedia threads that I think you haven't participated in. Thanks. OK. Thank you. Right. Thanks, Pine. Let's give him a hand. All right. So our next lightning talk is by Madhu, and she is remote. Hi. I'm just going to go ahead and share my screen. Great. We hear you, and we see your screen. All good. Can everyone hear? Yeah, all good. OK. Hi. I'm Madhumitha, and I work as a software engineer on the analytics team at the Wikimedia Foundation. And the title of my talk today is, How Many Users Access the Wikimedia Projects? The first question that comes up is, why do we want to count the number of users who access the projects? It's a popular web analytics metric, used to quantitatively estimate how far our reach is, how popular the website is, and to estimate growth. Some of the projects may be growing; if it's a new project, we may want to see how many more users it's been getting, whether it's growing or not, that sort of thing. And the current metric we have that comes closest to estimating this sort of thing is page views.
And it's really hard to do this with page views, because you do not know whether you had 10,000 users reading your site or just one bot that generated 10,000 hits on a page, which is where this number-of-users kind of metric comes into play. And there's a popular web analytics name for it: Unique Visitors, or Unique Users. That's what's typically used to estimate this. But I'm going to try to convince you that this is a misnomer and a sham. I say that because of the popular ways of tracking or counting unique visitors, and I'm going to talk about some of them. The first one is IP addresses. This is super simple. We say, OK, fine: I have a web request log, I can see all the people who sent requests to my site, so let me just count the distinct IPs, and I have the unique count. This is super inaccurate, because I as a user could be using computers that are shared by other people, I hop between networks and my IP keeps changing, or I use a 3G or 4G network provider that tends to assign multiple people the same IP. So this can be very, very inaccurate. The next one is fingerprinting, where we try to reconstruct a user's identity by looking at a bunch of different things that come in through the web request logs, such as the IP, the user agent, and a lot more, and say: OK, this bunch of things belongs to this user, so from now on this user can be identified through this signature. This is super privacy-intrusive, and we do not want to fingerprint users. The third one is cookies. Traditionally, the way cookies are used to count unique visitors is by assigning a unique identifier, which is what most things like, say, Google Analytics do. They assign a unique identifier, and cookies have memory on your device.
So they're going to sit on your device, and every request you send gets tagged with the cookie. By seeing the same identifier, they can tell it's the same person and count uniques. The last one is having you log in. Companies like Facebook do this: in order to be a user, you log into the site, and there's no other way to access most of the stuff on the site. So they know exactly how many users they get, because you're all logged in. So we talked about counting unique users. But the thing is, with any of these measurements, the IP, the fingerprint, or cookies, apart from the logged-in-users way of doing this, there's no way to distinguish between devices. If I use two different devices, I have two different IPs, two different signatures, and two different cookies. So the real question is how many devices access the Wikimedia projects, because we're not going to force people to log in, and this is the best we can get through any other method. But how are we going to count this? We talked about cookies. We don't want to use IPs, that's super inaccurate, and we don't want to fingerprint. So cookies were the method left over. But the traditional way of using them to count uniques is by assigning these unique identifiers, and that gives us more power than we need to just estimate aggregate counts. At the moment, our goal is just to estimate aggregate counts; we don't want to track individual users. So our research team and analytics team worked on this together and came up with this idea: we'll count unique devices, and we'll just use a date. And if that makes you go, what? I hope to demystify it. So the idea is that it's possible to count daily and monthly aggregate unique device counts using a cookie that basically carries the date of last access, when you last accessed a specific website.
It could be English Wikipedia mobile or English Wikipedia desktop or Arabic Wikipedia or any of those, but per domain. So I'm going to show the flow of data into our system from the time you send a request, and then try to explain how we count uniques. So here we go, starting at the beginning. There's your tablet or some device, and your browser has a cookie store. You've never been to Wikipedia before, so currently your cookie jar is empty. You send a request on 1 January 2016 to English Wikipedia mobile, and it hits the Wikimedia servers. This is a very simplistic picture of the Wikimedia servers; they're more elaborate than that, but we're going to keep it at that. The servers pass the request through a pipeline, and the data finally gets into our Hadoop data store, where we store web request logs. It records, basically, that there was a request that came in on 1 January 2016 with no cookies, so the last-access cookie value is null. At the same time, the server sends me, the person using the device, an instruction that says: set the date of last access to 1 January 2016, which is when you last accessed the site. So now your cookie store has a last-access cookie set to 1 January 2016. Now you issue another request on the same day. The same thing happens, except now the Hadoop data store receives a last-access date of 1 January 2016. Because it's the same day, the server still tells me, the person using the tablet, that the last access is 1 January 2016. I move on and come back the next day and access the same site again. It's the 2nd now, but with the first request I send on the 2nd, I still send the previous cookie, which goes in and sits in the data store with a last-access date of 1 January.
At this point, the server tells me: OK, now your last-access date is 2 January. And finally, I send another request, and from then on, the last-access date recorded for any request from me on 2 January is going to be 2 January. I hope that made sense. And here is the full picture. Now, to count uniques, all we have is whatever is in the data store, and looking at this log, we want to identify uniques. You can see that on 1 January I came to the site multiple times, but I need to be counted once. On 2 January, too, I came to the site multiple times, but I need to be counted just once. And we can do that very easily: on any day, you can find the daily count by looking at all requests that came in with a last-access value that is null or a date before that day. On day one, that's the first request, with last access null; on day two, it's the request that came in with a date before 2 January, that is, with last access 1 January 2016. This can be confusing and take a while to absorb, but the slides will be up, and I hope it will make sense. OK, and there's a bunch of documentation explaining all of this and the thought process that went into the research, and that's linked from here. There are some drawbacks to this method. Because we're only using these tokens to get aggregate counts, there's no way to support use cases that segment users. If you had a feature and you wanted to count all the users who are using that feature, or something like that, there's no way to do that sort of segmenting.
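The last-access scheme described above can be condensed into a small simulation. This is only a sketch of the idea: the names here (handle_request, WMF-Last-Access, daily_uniques) are illustrative, not the actual WMF implementation, and a plain Python list stands in for the Hadoop request log.

```python
from datetime import date

LOG = []  # stands in for the Hadoop web request log

def handle_request(cookie_jar, today):
    """Log whatever last-access value the client sent (None on a first
    visit), then instruct the client to set it to today's date."""
    LOG.append({"date": today, "last_access": cookie_jar.get("WMF-Last-Access")})
    cookie_jar["WMF-Last-Access"] = today  # the Set-Cookie step in the response
    return cookie_jar

def daily_uniques(log, day):
    """A request counts toward day D's unique-device total only if its
    logged last-access value is null or earlier than D, i.e. this device
    has not yet been counted on day D."""
    return sum(1 for r in log
               if r["date"] == day
               and (r["last_access"] is None or r["last_access"] < day))

jar = {}  # empty cookie store: a first-ever visitor
for d in [date(2016, 1, 1), date(2016, 1, 1),   # two requests on 1 Jan
          date(2016, 1, 2), date(2016, 1, 2)]:  # two requests on 2 Jan
    jar = handle_request(jar, d)

print(daily_uniques(LOG, date(2016, 1, 1)))  # 1 - same-day repeats not recounted
print(daily_uniques(LOG, date(2016, 1, 2)))  # 1

# The bot drawback mentioned in the talk: a client that never stores
# cookies sends a null last-access on every request, so the rule above
# counts each of its hits as a brand-new device.
LOG.extend({"date": date(2016, 1, 3), "last_access": None} for _ in range(10_000))
print(daily_uniques(LOG, date(2016, 1, 3)))  # 10000 "uniques" from one bot
```

Monthly counts work the same way, with the cutoff being the first day of the month rather than the day itself; the 10,000-request tail shows why bot traffic inflates these numbers so badly.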
In the current implementation, we have this cookie deployed only per domain, so en.m.wikipedia gets a different cookie from en.wikipedia. No two wikis or projects can be combined, by country or anything of the sort, and these numbers are not deduplicated across domains; we only have per-domain numbers in the current implementation. Also, because, as I mentioned on the previous slide, we have to count the requests that come in with null, we get inflation of our numbers due to bot traffic, and there's a lot of added complexity in identifying bot traffic. So we did a lot of work. Initially the method was devised and we had the numbers up, and we thought, OK, great, we have these numbers. And from there, it took us over six months to find ways of effectively discounting bot traffic without investing too much time in statistically analyzing all our data to identify bots. There's a lot of research around this, which is linked from here. The reason bots are such a big problem is that, as I told you, we need to count a null last access as a first visit. And what happens is, if you're a bot and you don't accept cookies, every request you send comes in with a null last access, and we count all the nulls. So if you send 10,000 requests with last access null, you get counted as 10,000 uniques. A lot of work went into that, but because this is a lightning talk, I'm not going to go into it much; I'm happy to take questions, and all the documentation of the research is there. So I want to show you some preliminary numbers. These are from some of the work we did in November 2015, when we were still fine-tuning our numbers and improving our algorithm. The actual production numbers will start from mid-December or so, and I don't have those yet, so I'm just going to show these experimental numbers.
So in November 2015, we had 440 million unique devices overall on enwiki. We can fairly comfortably add enwiki mobile and desktop together, because we assume there would probably not be any deduplication necessary between them. And 280 million of those were from mobile, which is great. For Arabic Wikipedia, we had 13 million unique devices, and 9 million of those were mobile uniques. So that's all I had. I have some image credits at the very end, and I'm open to questions now. Thank you, Madhu. Quick, easy question: how much of this data will be public, and when will we be able to read about it? Right. So we just productionized the jobs for this; they have just started running. We will probably make an announcement by the end of this week, and we hope to make all of this data public. There are just some things to look into. There can be projects that have a very low number of uniques, and we basically want to cut off those kinds of entries, because they could be privacy-sensitive. So we're going to do some work on that, and hopefully we'll get all the data public. We also have a task filed to build a visualization on top of this data, using Dashiki or any of the dashboard tooling we have set up right now, so that we can at least have visualizations before we can fully publish the data. But we want to publish all the data. Right now, the numbers are available internally through Hive on the wmf tables, and we can send out that information by the end of this week, I hope. Right. Any questions on wikimedia-tech? Right, thank you. OK, thanks. Let's give her a hand. So Rosemary's our next presenter. Hi. Thank you, everyone. It's great to follow these two great presentations. Pine, I want to thank you; I know you're still on the line. If I can present an annual plan the way you presented the program plan for the video series, we'll be in good shape. Actually, this talk is about a work in progress, a little bit of a messy process.
The program capacity and learning roadmap: what is that? What does it mean? And where are we in the process? But most importantly, it's about the messy process of community consultation and collaboration. I was trying to think of a metaphor, and I can see my attendees in the room are going to laugh: I've got toothpaste. I was thinking about the metaphor of toothpaste. We have three small, mighty teams working together, just as we have small volunteer communities, trying to get more and more out of the tube. It's like, oh, I forgot to go to the store; let me see, can I get one more brushing out of there? And we continue to do this. Each year, we find that we're trying to do more and more with less and less. I think that's part of the environment of nonprofits. So back in November, we were given the mandate of: imagine you have Program Capacity and Learning, and you look across three teams. You've got Learning and Evaluation, who have been studying programs. You have the Wikipedia Library, which has been emerging. We have a little more history with Education. And we've just gone through a grants review where we've looked at: do programs matter? Are they making an impact on content and contributors? So we were given the mandate of maybe looking at the packaging a little differently. You're still going to have limited resources, but what could we do differently with volunteers, and how might we increase the capacity across those teams? What we have is a developing program capacity and learning roadmap, which will actually inform our annual plan, and I have to say it's going to give me a whole lot more confidence. Back in November, we started with qualitative interviews with program leaders and community leaders, and that's where we're at today. So let's talk a little bit about what's under construction.
When we looked at these three teams, we also looked at some of the pain points from the community. Some of those pain points concern underserved communities, such as with the GLAM-Wiki tools. We've been hearing about broken tools: if you could only do this, and give that support. We also had inquiries coming in from different content partners that needed to be matched with the community. In addition to that, there's AffCom. AffCom is our front line with regard to affiliates. Might there be a way, in addition to staff supporting AffCom, that we could be a bridge in the infrastructure? So we began community listening, and that listening began with looking at some of the blockers. What do we know? What are the greatest opportunities we might have, considering both the local community level and here at the foundation? We're going to have to answer hard questions about donor dollars: are we doing the very best we can to move the needle and accelerate our progress? So, first, the connection between programs and the importance of our affiliates and their work. In one year of six programs across our grantees, we found 733 implementations: editathons, Wiki Loves Monuments, GLAM. We saw the connection to additional content as well as engagement of contributors. In a newer program, which actually began as an engagement grant, the Wikipedia Library, we saw a connection with publisher partners: 5,000 accounts going to 3,000 top editors, and 18 global language branches. And most recently, Anna Koval had been assembling data on education: what has been the impact of our education program with regard to our global presence, engaging students, and the growth of Wikipedia articles? And then, in this black box, something which has been challenging for our team: we agreed, per the board notification, that we would do more of the heavy lifting with regard to processing our affiliates.
And we've seen growth in our affiliates, not only chapters, but also our thematic organizations and user groups. 46%, in fact. So what were the barriers? We identified a couple of themes; there were consistent themes in the interviews. And this is one; it might be familiar: getting lost in a museum. I could give you my own words, but I think Kippelboy put it best: as a community, we are awesome at organizing the world's knowledge, but not so good at organizing our own knowledge. So you're really excited about contributing to Wikipedia, and you know there's a resource out there somewhere, but where is it? We even found here at the Foundation that we were looking for the tools and resources. It's interesting what happened over the holidays, as we were thinking about annual plans. We found here at the Foundation, I think, four different teams planning knowledge hubs. And we finally connected the dots, which is a single entry hub, where you can get to where you need to go, so you go from searching to finding information. The Health and Safety team had started this concept, which we had talked about in a quarterly review. Communications wanted to offer tools. So a note to our internal stakeholders here at WMF: if you're planning a hub, there's a cross-functional team that wants to get it right. And so we found that getting lost in the museum was one of the pain points of the community. It's hard to get there and know when we've arrived. Then there's the car that's no longer working: one of the things we found on our Meta pages (and by the way, comments on not only the barriers but also our goals are open through the 19th, as we continue this iterative process) is that tools that had been built by gracious volunteers were no longer working, like the GLAM Wiki toolset. And also, how are we able to tell the story of what we're doing at a community level, as well as at a global level?
And what we found is that the pain of collecting global metrics was like: this was not why I signed up to be a volunteer and contribute to programs. And so infrastructure, tools, and resources that really value volunteer time became really important. Knowledge voids. Whether you're here at the WMF and you've experienced staff turnover, or you're in a community and you've seen active editors and community leaders move on to their next junction, you recognize that preserving institutional knowledge is absolutely critical. And so we found fragmented learnings, clearly, in a movement like ours. If we know the single points of failure, and we also know what works so we can replicate it, we're going to get to scale and support program leaders. So we found that, yes, we were drowning in information, but starved for knowledge in decision making. Navigating the seas. What we found is that oftentimes, if you're starting out as a program leader, you need to be able to connect with people doing that work. Our program teams right now have been making those connections, but we also found that new knowledge partners wanted connection with local communities. So is there a way to connect people to people within the movement, and people to content? You'll see this in some of the prototypes. We talked about the Teahouse's success in being that go-to place for new editors. What if we had a Teahouse concept for program leaders and affiliates? This, again, gets to the central knowledge hub. This was the hard part. We took three separate teams. Imagine you're three passionate teams, each very interested in your own thing. And you've been asked to come together, look at your current workflows, and take a very hard look at the common criteria for where we will place community investments and what we're going to recommend. We came up with criteria, and these are also on our Meta pages. We'd like your comments on those criteria.
The impact on content and contributors. A demonstrated community need: there were multiple points indicating this was a true barrier for communities. Sustainability, as well as scalability across multiple communities: we knew that we needed to go from one-to-one to one-to-many. We did not want the Foundation to be a single point of failure, and we asked how we develop community leadership. So you take this road back. What happens is you have two highways. Essentially, the work of this combined team is breaking down into these two highways. Program learning and infrastructure, which is that broken wheel on the car: getting that where it needs to be, so it's easier for community leaders to run their programs. And then community leadership development. That is, we really think that the answer to developing the movement is making sure that what we've done in pilots, and I'll talk about that, really goes to the next dimension. Program and learning infrastructure: the bridges to help community knowledge move. So, a couple of things. You're going to see in our prototypes this programs and events dashboard. Luckily, we found that the Wiki Education Foundation had already invested in the dashboard for education. And we found that with minimal lift, we could get that dashboard to work for multiple programs, not just education. You will see that there are use cases in development for that. We've talked about the Wikimedia Resource Center. If you go in different directions, you're going to get lost along the way; that prototype is in development. Community tools: we're having conversations with Community Tech, and what's on the Community Wishlist is aligned with what we've heard in our interviews. Community listening, which is a little bit of the work you've probably heard about with the survey support desk, and making sure we have information to show as we're making decisions. Don't be misled into thinking the guy is catching the woman in this photo.
But I think it does imply that we need experienced program leaders to help those who are just starting. And so engaging and supporting the leaders through continued learning has become really key. In this area, we're talking about real growth of peer mentoring, not only the hub but also investing in community leaders. AffCom support, the AffCom-supported model. We're actually combining these in many ways, because we've seen comments on the Meta pages about AffCom really stepping up into this peer leadership role. And then, of course, a continued focus on learning and knowledge exchange. We've also been following the WMF strategy, and what's interesting about this messy process is that a lot of the comments on the consultation are also showing up in the team's plan. So that's it. Questions? Join the conversation, not only online, but we really want you to help us. I've told the team that at Disney, when they make a new film, they say every film begins as an ugly baby. This is our ugly baby. Make it better. Identify the risks, and weigh in on what you think is most important. And I'll take questions. All right. So based on how the strategy process is going, and we're just past the halfway mark, do you see anything surprising, or anything impacting how you're deciding your annual plan? As I've participated in some of those conversations, Kevin, one of the things that's interesting is needing to know more about volunteer engagement, what really will encourage volunteers to increase their level of contribution, and the potential marriage with technology. So the best example I can give you is that I've been asking editors, what really motivates you to get to that 100,000 mark?
And the one thing they said is, oh, well, getting a featured article. But imagine if there was a way where we had more community happiness, where you have ten positive reinforcements sharing with you that, guess what, with your articles across various pages, you've had an impact on two million people around the world who are benefiting from that knowledge. So we've seen some of those cross-conversations where it's not only about tech; people and relationships really are also critical. I have a harder question. Is there anything that you're looking at stopping? Yes, that was the hard question. Each of my teams, and I have to credit their unbelievable commitment, considering the holidays and other things in the environment, went in and looked at their current workflows, mapped out hours, and took 30% out of existing workflows to apply to capacity building, because we know it's a tough year. One of the things that's helped is, for example, you're going to hear about, and your team actually supported, the magic button for global metrics. We've made reporting a lot easier. And rather than providing that individual hand-holding, we're also working on a video series, for example, for education, really focusing on those peer mentors, investing in them versus being the doers of the work. So each team is going through and saying what is going to be on that stop list, and I'm sure we're going to have a little bit more pain as we go forth in prioritizing these projects. Any questions on wikimedia-tech? None? Or one? So it seems like a lot of stuff about a lot of things. Are there any resources to get some more context? Yes. In this slide deck, you will see examples of how we got to the criteria, all of the prototypes, and the conversation that's really informing us. What are the strengths?
What are the weaknesses, so that we can prioritize the ideas and comments that are going to help us build the annual plan? Thank you. Let's give Rosemary a hand. Thanks. All right. We have a late-addition surprise for Lightning Talks. Yuri is standing by with a presentation. Hello. Can you hear me? Yes, we hear you, but we don't have the video yet. Stand by. This is weird. I'll refresh it just in case. Hello, and Yuri is rejoining. How about now? There, we see you and we hear you. Perfect. Hello, everyone. My name is Yuri, and I work for the Discovery team at the Wikimedia Foundation. And today, I will be talking about some of the work that I've been poking at. So let me share my screen, because otherwise you don't get to see anything interesting, only my face. This is much nicer. OK. So first of all, thanks to Ed Sanders, who's been helping us from the Visual Editor team; we are full steam ahead working on a little editor for the maps. This will allow you to insert a map, drag it around, say, oh, I actually want to highlight a certain area, draw a line or a polygon, select a few points, something like that, and then click OK. And then basically save it and be done with it. This automatically updates the GeoJSON generated for this page. This is what we hope will be available to everyone fairly soon, at least on Wikivoyage. And then, assuming we get enough funding for the map servers for Wikipedia, we'll have it available for everyone. Next, Stas Malyshev has been working on this amazing thing called the Wikidata Query Service. It allows a very complicated query to generate things like a list of disasters, with how many people were affected by them, their class, and their GPS point. Well, this is kind of nice information to have, but we have graphs for that. Graphs are currently on beta; we hope to launch them soon for all Wikipedias.
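As a rough sketch of what that saved map data looks like: GeoJSON is plain JSON describing features. Everything below, the coordinates, the property names, and the two example features, is illustrative only, not the exact schema the editor emits.

```python
import json

# A minimal, hypothetical sketch of the kind of GeoJSON the maps editor
# described above might produce: one marker point plus one highlighted
# polygon. The exact properties the real editor writes may differ.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-122.4, 37.8]},
            "properties": {"title": "A marker dragged into place"},
        },
        {
            "type": "Feature",
            "geometry": {
                "type": "Polygon",
                # A polygon ring must close: first and last points match.
                "coordinates": [[[-122.5, 37.7], [-122.3, 37.7],
                                 [-122.3, 37.9], [-122.5, 37.9],
                                 [-122.5, 37.7]]],
            },
            "properties": {"title": "A highlighted area"},
        },
    ],
}

# Saving the page would store serialized JSON like this alongside the map.
geojson_text = json.dumps(feature_collection, indent=2)
```

The point of storing plain GeoJSON is that any standard map library can render it back without knowing anything about the editor that produced it.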
They allow you to pull that data directly into Wikipedia. It shows you all the different disasters in different colors. It is a bit small; maybe I'll make it a bit bigger so that everyone can see it. And then you can move your mouse and see what each disaster was and how many people were affected, like different earthquakes, et cetera. And there are other useful maps directly on Wikipedia, like the largest cities in the world, and the number of museums per country, highlighted per capita. Apparently in Greenland there are five museums, but there are so few people that it's actually the highest ratio. And last, but not least, well, no, second to last: page views. Directly from the Pageviews API, we now have this nice little template that you can insert to graph page views. Without parameters, it just shows you the number of page views for the current article, or you can specify how many days and which article to show. You can see Albert Einstein, and you can see gravitational waves, which were making waves recently, and how they actually correlate with each other. Lastly, someone from the Hebrew Wikipedia asked me to create this pie chart, which I improved a little bit, to show the number of pages in subcategories of a given category. I translated it into English as well. So here, this is the category People. It shows you which are the largest subcategories of the category People, and how many pages they each have. And with that, I will stop broadcasting myself. But please turn the sharing on. Questions? Hi, Yuri. I have a question. So who are the early adopters of graphs? And what will it take to get the next wave of people to adopt this? We have a large number of automatically generated graphs, basically Lua-generated graphs that pull data from Wikidata, but without using the new query system. That was done a fairly long time ago.
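The Pageviews API mentioned above is a public REST endpoint on wikimedia.org. As a small sketch, here is how a request URL for daily per-article counts is put together; the helper function and its default dates are illustrative, not part of the template itself, and no network call is made here.

```python
from urllib.parse import quote

def pageviews_url(article, project="en.wikipedia",
                  start="20160101", end="20160131"):
    """Build a Wikimedia Pageviews REST API URL for daily per-article
    view counts. Article titles use underscores for spaces and are
    percent-encoded; dates are YYYYMMDD."""
    title = quote(article.replace(" ", "_"), safe="")
    return ("https://wikimedia.org/api/rest_v1/metrics/pageviews/"
            f"per-article/{project}/all-access/all-agents/"
            f"{title}/daily/{start}/{end}")

# For example, the request behind a graph of Albert Einstein's page views:
url = pageviews_url("Albert Einstein")
```

Fetching that URL returns JSON with one entry per day, which is exactly the shape the graph template plots.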
The biggest change of tide happened when someone from Germany created a very nice, very useful, very easy-to-use template called Graph:Chart, which allows very simple graphs to be inserted into a page without knowing any of the Vega syntax. You just insert the template, specify a couple of values for X and Y, and it draws it for you; you also specify the type of graph that you want. I believe the best thing for graphs would be if Visual Editor created a gallery-style system where the community could pick the nicest graph templates to insert. Visual Editor would just show these little icons for the graphs, and the user could say, OK, I just want that graph, type in some numbers for it, and insert it. That gallery would be created by the community, because the community is much more flexible in terms of creating new types of graphs. Well, thank you. Any questions on IRC? Yes. Awesome talk. Are there links to your samples? Sure, I'll post them on the Lightning Talks page. That's awesome. Thanks. Any other questions? It looks like we're all good. Thank you very much. Let's give Yuri a hand. Thank you. And that concludes our Lightning Talks. Thank you, everybody, and have a great day.