Hello, and welcome to the August 2016 Wikimedia Foundation Metrics Meeting. We have a number of exciting feature presentations today: metrics, the transparency report, an engagement survey update, and a research update. And then you can buttonhole all of the people who presented.

So we'll start with the welcomes. New people: Riccardo Coccioli, Melody Kramer, Olga Vasileva. Please hold your applause to the end; it is very exciting though. Chelsea Shea, Danny Kaufman, Sam Patton, Anna Maria Acosta, Tarun Krishnakumar, Namayli Huyjit, and Samir Elsharbaty. You're all too good; I thought I might get to use that Jeb Bush line. You can clap now.

And then the people who have been drinking the Kool-Aid forever: Brion Vibber, 10 years. James Alexander, 6 years. Jonathan Curiel, 5 years. Andrew Bogott, 4 years. Manpreet Brar, 3 years. Brendan Campbell at 1 year, and Ash Miner, Peter Hedenskog, Eliza Barrios, Chuck Roslof, and James Holder all also at their 1-year mark, and they still haven't found another job.

Now for the community update. Hi all, I'm Maria Cruz, communications and outreach coordinator for the Community Engagement department, and I'm going to be sharing a few stories from the communities. Okay. WikiConference India 2016 took place from 5 to 7 August in Chandigarh, India. This was the second conference in the region; the first one was in 2011. A lot of amazing things happened. It was organized by Punjabi Wikimedians, the newest affiliate in the region, and it's one of the largest regional conferences: it had about 250 participants from 20 language communities, and 25% of scholarships went to women. There was a hackathon as well, where seven different projects saw some progress. Some of the projects include optical character recognition for Indian languages such as Malayalam, a communications platform that hackathon participants used during the event, and WikiSpeak for the web and for Android. At this conference the first WikiWomen's lunch in the region took place as well, and one new Wikipedia went live: the Tulu Wikipedia. As Ravi said in the closing keynote, this was a conference that really brought the community together beyond language and political barriers, and there was a renewed sense of hope in the region: people are coming together to work on new programs. We also noticed that Wikipedians often work beyond the frameworks of formal organizations, on their own, and a lot of them were inspired to start new Wikimedia programs after this conference. So it was a really good experience overall.

Another awesome thing you may have heard about is a global campaign called HerStory, co-hosted by a UN Women initiative and Wikimedia. It took place in 12 different cities all over the world, and other volunteers joined online. It was a very innovative strategy, as UN Women facilitated partnerships with local organizations in different cities. That is one of the highlights the Programs team is focusing on, to try to produce a case study of how this actually worked. The campaign focused on adding content to Wikipedia with two main focuses: quality, meaning how to write from a gender perspective, and quantity. Sorry, I didn't make this clear before, but the objective was to write women back into history, as many gender-gap initiatives have as a goal, by creating biographies and articles related to women's topics.
So these are the two stories I have for this month. Now for Wikimedia project milestones and affiliates. As I said, the Tulu Wikipedia went live, with 680 articles. The Occitan Wikipedia reached 90,000 articles, and Punjabi Wikimedians reached 1,000 entries. Yay! There are two newly recognized affiliates: the Wiki2Learn user group, which is dedicated to making Wiki2Learn content accessible on Wikimedia platforms, and the Wikimedians of IOI user group, which is starting activities in the capital this month. The other news about affiliates is that the Affiliations Committee announced new criteria for chapter recognition. This applies to user groups that want to become chapters and to other groups that want to become chapters directly; you can read more about this on the Meta page. Finally, upcoming collaborations with communities: the Wikimedia Central and Eastern Europe meeting is happening this weekend in Armenia, and there will be a leadership conversation starting in September. If you have any collaborations with communities that reach 50 people or more, please don't forget to add them to the calendar on Meta. That's it for Community Engagement. Thank you. Passing on to metrics.

Thank you, Maria. I'm Tilman Bayer from the Reading team, and I'm going to present the usual walkthrough of our core metrics, this time for readership. Global page views are around 500 million every day. Desktop is a bit more than half of page views and decreasing, mobile web is a bit below 50%, and apps are at 1.4%. I should say that if you look at the graph here, which covers the last seven months, you see a pretty huge rise. That's an anomaly we saw, and it was a bit of a mystery. We found out that a couple of browsers were requesting the main pages of some Wikipedias (Russian, Dutch, English) very often. Those requests were isolated, but they totally blew up our total metrics, by about 11%. We could say that's great, of course, but after investigating, it doesn't really look like these are human page views, so we have been trying to correct for this. What you're seeing is actually after that correction, but keep it in mind; it's a useful reminder that these metrics can be kind of brittle sometimes. I should mention one other thing. As you may have heard, Google did a great thing: when you search for things on the mobile version of Google, they have this panel which excerpts Wikipedia content, and they improved attribution a lot recently. We had a lot of questions about it. It did not drive a lot more page views to our mobile site, at least not that we have been seeing here, but it's a good step nevertheless.

A quick look at our long-term traffic trends. This is basically the same chart every time. Over the last three years we're seeing a slight decrease, minus 3%, which is kind of within the margin of error; it's not falling a lot, but it's also not rising. Desktop, though, has been falling very markedly over these three years; that's the blue area here. Mobile has been rising, although a bit more slowly recently. It's kind of hard to distinguish which changes are regular seasonal ones and which are actual effects of things that we're doing, so I made this chart. What you're seeing here is four lines: page views this year, last year, two years ago, and three years ago, which is the time span for which we have page view data by the new page view definition.
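As an aside for anyone who wants to poke at these readership numbers themselves, here is a minimal sketch of how the desktop / mobile web / apps split can be pulled from the public Wikimedia Pageviews REST API. The endpoint shape, the "all-projects" and "user" parameter values, and the example month are assumptions based on the documented API, not code or figures from this talk.

import requests

BASE = "https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate"

def monthly_views(access, start="2016070100", end="2016080100"):
    # Total page views for one access method across all projects for one
    # month, counting only traffic classified as human ("user").
    url = f"{BASE}/all-projects/{access}/user/monthly/{start}/{end}"
    items = requests.get(url, timeout=30).json().get("items", [])
    return sum(item["views"] for item in items)

views = {a: monthly_views(a) for a in ("desktop", "mobile-web", "mobile-app")}
total = sum(views.values())
for access, count in views.items():
    print(f"{access:>10}: {count:,} views ({count / total:.1%} of total)")

Repeating the same calls for earlier months gives the kind of year-over-year comparison lines described above.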
It's kind of interesting that you see the same things happen every year: you have a huge drop at Christmas, or actually before Christmas, which comes back in January, and you have a drop in the Northern Hemisphere summer. I've talked before about how last year we had a pretty huge drop, bigger than these long-term trends, due to two events in May and June: the switch to HTTPS-only connections, which caused problems for users on slow connections, and a block of Wikipedia in China, which we cannot confirm directly. If you look at the red line, that's what happened last year: it was above the yellow line, which is the year before, and the blue line, which is now, and then in May and June it really dropped below the yellow line and even the blue line. So you can infer that there was probably an unusual, non-seasonal effect.

We're also tracking the ratio of mobile traffic. I mentioned this before: at Christmas we had a huge bump, probably because people were getting new phones, and this was sustained, but then it kind of stagnated. In the last few months it has actually started to rise again; we don't really know why. The drop you see here is the anomaly I mentioned: those artificial requests were desktop only, so they decreased this ratio. Keep that in mind; that's not corrected yet.

Then we look at unique devices. That's a metric we introduced at the beginning of this year. Just to give you an idea, here are the numbers for the five largest Wikipedias, going up to 159 million unique devices. What's also worth keeping in mind is that the majority of them are already mobile. What we're seeing is that on desktop, people generate more page views per device, while on mobile web it's less than half that. So there are still more page views on desktop, slightly more, but in terms of devices, the majority are mobile. Also keep in mind that devices are not users: one user might have several devices. This metric has only been available for a bit over half a year, so we can't track long-term trends yet, but something we're seeing is that page views per device have been slightly rising. We'll see whether that's a real trend and whether it continues, but it's something to keep in mind.

Lastly, we're looking at page views per region, Global South versus Global North. The Global North has been decreasing, as I mentioned at the beginning of this year, so you might have expected the Global South to gain, but it's been pretty much stagnating. That's something we're rethinking currently; for the new readers work we're probably looking into replacing this with a different country selection. Okay, that's pretty much what I wanted to say.

Now we have the legal team with the Transparency Report. Is this on? It is. Yay! Hi, everyone. I'm Aeryn Palmer, a legal counsel, and this is my colleague Jim Buatti. Today we're going to talk to you a little bit about our latest Transparency Report. We're going to give you an overview of what the Transparency Report is, what's included in this report, some comparisons with previous reports, and a few stories that we featured in this report. So what exactly is the Transparency Report? The first page of the report really says it best: every year we receive requests from governments, individuals, and organizations to disclose information about our users or to delete or alter content on our projects. Some are legitimate, some are not.
The purpose of this Transparency Report is to shed light on the requests we receive and how we respond to them. As you all know, transparency is one of our guiding principles here at the Wikimedia Foundation, and one way that we can show our users and the public that we're really committed to transparency is by reporting on the requests we receive to alter or delete content or to hand over information about our users. Putting together a Transparency Report like this is especially important for a couple of reasons. One, our community really values privacy and transparency. And two, transparency reporting has become something of an industry standard in the past couple of years; most tech companies and organizations put out a Transparency Report. In fact, last year Harvard's Berkman Center did a report on best practices in transparency reporting, and our report was cited positively in a couple of places, which was nice to see.

The report is comprised of three main categories. The first kind of request we report on is actually the most common: requests to alter or remove content from the projects. The classic example of this might be someone who emails the legal team because they're unhappy about a Wikipedia article about them or their business. The second category we report on is copyright-related takedowns. We report these separately because they're governed by separate legal rules. If someone sees content on the projects that they believe violates their copyright, they can send us what's called a Digital Millennium Copyright Act notice, or DMCA notice, asking us to take it down. We then evaluate the notices to make sure that they're legally valid and that no exceptions, like fair use, apply. The last category we report on is requests for user data. As many of you know, we actually don't store very much user data, mainly just IP addresses, user agent information, and an email address if one is provided to us, and we don't store it for very long. But we do occasionally get requests from individuals, governments, and organizations to disclose some of this information, and so we report on that. I'd say the classic example of this would be a question about who edited a specific Wikipedia article.

In addition to the sections I just mentioned, we also have a few other features in the report. We like to tell a few stories about requests that we received, and we'll be talking about a couple of those in a minute. We also have an FAQ that explains some of the legal concepts, provides cross-links throughout the report, and just helps make the report as comprehensive as possible.

So now that we've introduced the report itself, let's talk a little bit about the numbers we had this time around. In the last six months, we received 243 requests to alter or remove content on the projects, and we did not grant a single one. Thank you. Included in that 243 are five requests that were made citing the right to be forgotten specifically. The right to be forgotten is a legal concept that exists in certain countries, most notably throughout the European Union, that allows people to ask search engines to de-index certain content about them. Despite the fact that we are not a search engine, we do still get requests to alter or remove content that cite the right to be forgotten as the reason why something should be taken down. This is a comparison showing the numbers we had this time with our previous reports.
As you can see, we had about a 10% increase in the number of requests we received compared with the previous cycle. This is the highest number that we've ever gotten, but the distribution has been more or less consistent since late 2014. Another notable fact about this report is that we had requests from more countries than ever before: this time, we had requests from 42 separate countries, while usually the number is in the high 20s or low 30s. We have a pretty strong culture of pushing back against requests to remove or alter content, because the users should decide what belongs on the projects; that's the point of the projects. Of all the requests we've ever received, we've only granted one. That was an instance where someone's unredacted travel documents were posted on the projects without their consent, with all their information visible, and so we thought that was an instance where the content should probably be taken down.

We feature a couple of stories about interesting things that happened during the time the report was being compiled, and for one of the requests we received this time around, we decided to tell the story "Happy Hour," which is something the legal team knows very well these days. This was about a request from an alcoholic beverage association. They were frustrated that certain Wikipedia articles were referring to their product, which is region-specific, by what they said was a generic term, and they argued that this was contrary to both trademark law and an international treaty. So Jacob, who handles most of the requests that come into the legal@ inbox, where we get information and requests from all over the world, examined this request, as he examines all requests, evaluated their arguments, and got back to them to let them know that, A, neither trademark law nor the treaty prevents people from simply talking about products, and, B, if they wanted to express their concerns to the community, he was happy to put them in touch.

I mentioned before that a DMCA notice is a special kind of notice we can get under U.S. copyright law, asking us to take down allegedly infringing content. The numbers this time around were pretty straightforward: we got 22 and we granted six. One unusual thing about this cycle was that we received two DMCA counter-notices. For those of you who don't know, a counter-notice is a special notice that a third party can send us after content has been taken down, asking for it to be restored. In this case, we didn't grant either counter-notice. The first one was mooted when the underlying DMCA notice was withdrawn, and the second one was missing some vital information, so it wasn't legally valid and we had to reject it. Over time, you can see that we received more copyright notices than we've ever received before, but the increase was pretty small, just about 10%, and the distribution was consistent with previous cycles. The bigger story is probably that the grant rate dropped from 45% last time to just 27% this time. And as I mentioned, we received two counter-notices, which we'd never gotten before. One thing worth pointing out is that it's basically unheard of for projects of our size and scope to receive DMCA notices at such low levels; just for comparison, Google was receiving between 17 and 21 million of these notices per week. To receive them in the low 20s, not 20 million, really speaks to the diligence of our user community. We also have a copyright story for you.
Earlier this year, we received a notice from an Asian newspaper alleging that an article on English Wikipedia had infringed the copyright on one of their articles. But when we investigated it, it actually turned out to be the opposite: the content appeared on English Wikipedia before it appeared in the newspaper. They had copied Wikipedia and then sent us a notice asking that it be taken down. Naturally, we rejected the notice, and we provided them with links to the Creative Commons licenses and information on how to properly reuse Wikipedia content. Yeah, it's appropriate to laugh there.

The last category is requests for user data. This time around, we received a total of 13 requests for non-public information about our users. Seven of these were informal non-government requests, six were informal government requests, and none of them were granted. Unlike a formal legal process such as a warrant, an informal non-government request is when a private individual or organization just emails or calls us and asks us to turn over information about a user; an informal government request is when a government does the same thing. Comparing with previous reports, we had a pretty substantial drop in the number of requests this time around, almost 50%. The type of request was also less varied: as I mentioned, we only had informal requests this time around, whereas usually we do get one or two formal requests, a warrant or a court order or maybe a subpoena, something like that. Another thing worth noting is that, as with the content requests, we have a very strong culture of pushing back against these requests. We'll only disclose non-public information about a user in response to a valid and enforceable legal process, such as a warrant, and often we don't have any information to turn over in the first place, because we collect so little and retain it for such a short time. During many reporting cycles we haven't disclosed any information at all, and if we do, it's maybe one or two disclosures in a six-month period.

We thought we'd wrap up by talking about some of the changes we made to this most recent report. The first is that we added a "users notified" category. Just for context, we've always had a policy of notifying users when we plan to disclose their information in response to a request, so long as we're legally allowed to do so and we have the necessary contact information. But until now we weren't reporting on the number of people we notified pursuant to this process. One thing worth noting is that this time the number was zero, but that's because we didn't disclose any information at all, so there was no one to notify. The second change is that we added a bit more detail about the amount of information disclosed in response to a user data request. In previous reports, we just said yes or no: we either disclosed information or we didn't. Now, and going forward, we're saying when we disclose just some of the information that was requested and not all of it. An example of when this might happen is if someone asks for more information than we have and we disclose just what we do have, or when we receive a valid legal process for some of the information but not all of it. So that's the summer 2016 Transparency Report. We'd like to thank everyone who was involved in preparing it, including Siddharth, Michelle, James Alexander, Ori, our legal interns, the comms team, and of course Jacob, who actually responds to all these emails and phone calls.
And we'd also like to thank Pratik and Moaz, who helped design the very first report. So thanks again.

Okay, hopefully you can all hear me. This is Jodi with the Talent & Culture team. We wanted to give you a quick update on what's been happening with the employee engagement survey; this is the follow-up to the survey, and we also wanted to share it a bit more broadly. If you can go to the next slide. This is probably very familiar to most of you; it's what was shared in June. The biggest highlights here are the 11% increase, as well as a continued high level of participation from staff. The engagement scores for the questions related to leadership also showed an increase of 23 to 63%. Despite all these overall increases, there is still tons of work to be done.

So, on the next slide, what's been happening since. First, we wanted to share that we have put in place a more transparent process around the engagement survey to make sure it is consistent going forward. We've also restarted the engagement committee; the Talent & Culture team ran a workshop with them to help identify the projects that are going to have the most impact on the foundation. Initially, they focused on collaboration as a top priority and have been working on a project with staff to bubble up which ideas make the most sense to get the biggest impact. We've also done a C-level workshop in a similar fashion, and they've launched their first project, focused on showing staff how important they are to the success of the foundation. Talent & Culture has also been doing a lot of hard work on manager training, starting with foundational work this quarter and then moving into performance management. The team has also been doing a lot of work on values, inclusion, well-being, transparency, and operations. Next slide.

So what's next? As part of the next step, the C-levels are going to be talking with their teams, finding out a bit more about what helps the teams feel that they are important to the foundation. The engagement committee is also going to be taking feedback from staff to pinpoint what their projects are going to be and to roll those out. And of course the Talent & Culture team is going to continue working hard on all of our work. Then the engagement survey is going to run again in November to see how we're doing. That's it for the engagement survey.

Hello. Hi, folks. Thank you for coming today, people from both the foundation and the community. I'm Toby; I lead the Reading team, and I'm going to introduce the New Readers research that Abby is going to tell you about. I wanted to give you a quick background on the project. New Readers is a project designed to reach people in parts of the world that have not traditionally read or edited Wikipedia and its sister sites. It's really exciting for me because it gets to the core of why I'm at the foundation: working with great people and making an impact on the world. New Readers has been and is a collaboration. Community, Communications, Partnerships, Reading, and Design Research are all part of the project. We're united by a common belief in what we're doing, and we've worked together for many months to make it happen. We started the process with design research, because we realized that we don't really know the people we're trying to reach.
We have a little experience with Wikipedia Zero, but we clearly needed to know much more, and we're really hopeful that deep engagement with our users will lead us to better outcomes. Anyone who knows me will also know that I am interested in getting things done, so I'm going to tell you a little bit about the project itself before Abby tells you about the research. We envision the project as having three stages, and we've just finished the first, which is research: we figure out what we need to build, we build it, and then we launch it. Obviously this is very high level, and there are iterations within the project. Even though the research is super exciting, we're just at the beginning. Here's a little more detail about the timeline, because as a manager I like timelines: we did generative research last quarter, where we were finding concepts, and then we'll start prototyping and doing evaluative research with our users before implementing and building in the next calendar year. So I'll leave you with this: if you have a mission you believe in, collaborate with great people, and listen to your users, you can do great things. Abby's going to tell you about the research, about how we did the listening and what we heard.

Thanks, Toby. So yeah, I'm going to tell you about the research. We went to four different countries. Dominic Valley did a volunteer design ethnography for us in South Africa; he did 15 interviews in about a week. Then a small team of foundation staffers went to Mexico. There were six of us, we were there for two weeks, and we also did 15 interviews. In both of these research projects we were able to observe some patterns, and then we stepped it up and scaled the research by partnering with a firm called Reboot, an organization that does this kind of research all over the world, all the time. With them we went to Nigeria and India. In Nigeria we had a team of 12 people and did 70-plus interviews; in India we did 60-plus interviews with a team of six or seven. Both of those projects were two weeks as well. Because we did so many more interviews, we were able to see patterns just like we saw in South Africa and Mexico, but then we went back into the field and validated them, and we found a lot more patterns because we were able to talk to a lot more people. So there are differences between those research projects.

When we went to Nigeria, India, and Mexico, we spoke with community members in each of those countries; we were able to visit with some community members in Nigeria, some of whom you can see there. We did a whole series of phone surveys, led by the Global Reach team and Dan Foy, to gather quantitative data about people's use of technology. And we did the design research that I'm going to go into in a little more detail. Design research is a part of human-centered design, and it's a way that we can learn from the people we're building for: what they need, what their challenges are, and what goals and motivations they have, so we know what to build so people can participate in free knowledge, whether that's consuming it or contributing to it. Some of the methods we use are ethnographic interviews, where we go into people's homes or their workplaces and learn from them by interviewing them and talking with them.
We can also see their context: the technology they use, and the technical context as well as the physical context they're in. And we asked people to demo their technology if they felt comfortable with it. We also did some key informant interviews, or expert interviews, with people who are studying technology or communication or education in the places we went. The phone surveys were also really integral in these regions.

So I want to dive into some examples of what we learned. We came back with about 24 findings, and I'm just going to share a few examples right now. On using Wikipedia: a lot of the time people go to Google, of course, search, and then find what they're looking for. They might land on Wikipedia, but they might not even know they're using Wikipedia. Then, once people use it a little more and start to get to know it a little more, they can become suspicious of the content model. The fact that anyone can edit can arouse some suspicion, especially in places like Nigeria and Mexico, where the media is captured by political or commercial interests, so trust in the content may be compromised a little. Utility is super high, though, so people keep coming back to get the information they're looking for when they're searching.

Low-bandwidth browsers rule in both Nigeria and India. People are careful about data usage because it's expensive. In Nigeria people use Opera Mini: it saves on data and it ups the speed when you're browsing. In India, UC Browser is a highly used browser; even though cost sensitivity in India is a little lower than in Nigeria, speed was really articulated as the reason to use UC Browser there.

And then this is one of the most interesting findings, in my opinion, about language, and I'll go into it a little more deeply than just this one slide. English is widely accepted as the language of the internet and of technology. Even among people whose mother tongue, or language of comfort, isn't English, people don't necessarily expect that the internet and technology are going to be delivered in their local language or their language of comfort. It's also that keyboards and other input devices are usually designed and used in English. That's general for both places. In Nigeria, English is the general language. People said, yeah, English is the general language; we have lots of languages, and we speak English to be able to communicate with each other. School instruction in Nigeria is done in English, or potentially a combination of English and a local language, and when people write in their local languages, it's usually transliterated through the Roman alphabet. So English is the default language for online content, and local languages are used more for interpersonal communications. In India, people can choose their language of instruction, what medium they're going to go to school in, whether it's Hindi or Tamil or English. When a person chooses a language besides English, it sometimes necessitates workarounds: because English is the language of the internet and technology, people may need to find different input methods or learn some English, and it impacts their comfort with online learning and searching.

So mobile web dominates for getting online, and Android is the platform of choice. Feature phones and lower-grade Androids are the primary devices for getting on the internet.
Only wealthy people use the higher-grade Androids, iOS, or BlackBerry, and even then, most people prefer Android as a primary connecting device or a tethering device because of cost and also battery life.

On learning technology: people are learning through other people, through friends and family and people at work, and digital immigrants are learning from digital natives, so parents are learning from their children. This particularly happens when young adults go away to school or for a job and technology then comes in as a way for the family to stay connected: maybe someone gives someone else a phone and then helps them install WhatsApp or Facebook so they can keep in contact while they're away. In India particularly, people share and pass down devices, and this spurs digital learning within the household, with people teaching each other at home. Also, men heavily influence women's technology behaviors: women of all age groups are influenced by their male family members or colleagues in their access to, choice of, and use of technology. In Nigeria, app shops, the little shops where people go to get their phones set up and apps installed, are a source of applications. Some people will just say, hey, give me whatever I need. As we were interviewing people, we observed that they had many, many apps on their phone but only used two or three. So where did you get those? Oh, the guy put them on my phone. So that's one way people are learning about different apps and how to use their phones in Nigeria: shop workers are a big source of learning.

If you step back a little bit and look at all of these, not only these findings but the others in more depth, which you can go and find (I'll share the link in a minute): people using Wikipedia doesn't necessarily equal recognition of Wikipedia, because, as I mentioned earlier, people can go to Google, get the information they need, and not even know that they're using Wikipedia. We saw Wikipedia in people's browser history, but when we asked at the end of the interview, do you know what Wikipedia is, they would say, yeah, I know it, but I don't ever use it, even though we saw evidence of them using it. Familiarity does not necessarily equal trust: as people get more familiar with it, there's a sort of crisis of trust when they find out that anyone can edit, particularly in these contexts where content is often co-opted. And repetition of use doesn't mean that people understand Wikipedia, the movement, the values of the movement, and what it's all about.

So we're really using this research. It was a big investment, and a lot of people worked on the project. We now have a really valuable database of observations, patterns, and findings, and we're using this research to the biggest extent that we can. Here is where you can find it on Meta; there are pages where you can keep up with what we're doing. You can see the report that Reboot, and those of us who were in the field with them, wrote and built. Look it all over, and please add questions and talk with us on the talk pages. We're really interested in knowing, for these findings, what efforts already exist out there in the world; we want to partner with people to address these things. We're going to have some workshops in September, which will be more about discussing and communicating these findings and learning about efforts people are making. Those will be announced soon.
And then our next steps: like I said, we're learning and getting information about what's already going on around these findings, and we're also going to focus in on what we're going to take action on in this first round, because we're also exercising new processes and tools for product development, for developing partnerships, and for communications and the other things we're going to do with these findings. We're narrowing our focus. With that, I guess it's time for questions and discussion.

Questions from IRC. Hi there. First question for Abby; I've got multiple questions for you, but I'll start with one. Aaron said: your results suggest that English has already gained dominance, so why focus on non-English in these areas? Who or what are we missing by not focusing on other languages? Well, people need content in their local languages. This is in no way suggesting that we're only going to focus on English. It's just an observation that people are using English and expecting English, that because of the way the hardware is built and the internet has been built, English is a major language there. Yeah, I just want to echo what Abby said. I think this was a super interesting finding and something we definitely didn't expect, but I don't think it's going to drive our product development toward just focusing on English.

Hello, yes. This is a technical question for the legal team. I'm curious to know what a formal request is versus an informal request. Actually, I'm curious to know what an informal request is: if it happens in conversation, or it's an email but not a formal legal paper, what does that mean? So an informal request is basically just an email or a phone call asking for something, sometimes nicely, sometimes not, but usually they'll just ask for information. A formal request is when we get formal legal process like a warrant or a court order or a subpoena; there's some sort of legal oversight, with a set of legal rules that someone had to follow to send the request. Informal is just when they call or email us without any of that, just asking for something. Thank you. And also thank you so much for the presentation; it was really clear, and it made at least me understand better what the transparency work is about. Thank you so much.

Hey, another question for Abby, from Leila: how did we choose the specific countries to study versus all the other countries in the world, and what was the line of reasoning there? The short answer is that there was a lot of thinking and a lot of work behind that; I'll let Toby describe the process a little bit. I'll just echo Zach. We basically looked at a bunch of factors about countries: things like income, population, especially population relative to use of Wikipedia, mobile subscriptions, all of those things. We looked at the continent on which they were located, because we wanted to have roughly one country each in Asia, South America, and Africa, and then we voted. So it was a combination of qualitative and quantitative data, and Zach put up a link to the discussion of the assessment process, which you can find from the New Readers wiki page.

And a third question, also for Abby. Mike asked whether the interviews were typically with English speakers in English, or were they translated in some manner, and were they all city dwellers or countryside people; how did that work? So, about the interviews: we worked with a mixed team. There were a few staff members in both countries.
There were also researchers from the Reboot team, and then we hired local researchers. In Nigeria there were four local researchers who spoke Pidgin and the other languages we needed to speak with our participants; in India there were three local researchers who spoke the languages we anticipated we would need. Most of the conversations, actually all of the conversations, were done in the language of comfort of that individual. The local researchers would start the conversation and kind of assess: sometimes people would start in English, but then the researcher might detect that they were more comfortable in another language, and they'd switch. I was there to take notes, basically, and since I didn't understand the language, the researcher and I would go and do a quick debrief right after the interview; the interview itself would be conducted in the language of comfort of the person we were interviewing. What was the second part? Rural versus urban. Oh, most of the places were more urban, second-tier cities. We did go to one peri-urban place in Nigeria called Epe, which is less urban than the other two places.

So I also have a question. I'm just curious if you can maybe talk about the sample sizes, because as someone who comes from more of a hard-science, statistical background, we're used to really large data sets to draw conclusions from, and this seemed to me to be a relatively small one. And then the conclusions were of a different type than you would expect from a large data set. How do you handle the fact that you only speak to a few people? What do you expect from that, and how can we generalize it?

Yeah, thank you, great question. You'll notice that in Mexico and South Africa there were only about 15 interviews, and we were able to see a few patterns across those interviews; that's one level, where you build hypotheses from them. When we had 60 or 70 interviews, we would start to see patterns, and then we would go back into the field to validate or refute those patterns and iterate as we went; we did that in several iterations as we went through the 60 or 70 interviews. That said, these are all hypotheses, patterns and groupings of patterns that we observed, and we're triangulating them with quantitative data, so that we can learn from the qualitative things we've observed and heard from people alongside the quantitative data. Because if you just look at the data, you're not seeing the human part as much; you're seeing the actions of humans through the machines. So I think this kind of work is really valuable for understanding the human aspects, in combination with quantitative data. Thanks.

Yeah, I know Zach is also getting in line to address this question, but I think it's a great one, because we've relied on quantitative data here a lot. In general, quantitative data can tell you what, but qualitative data, which is what Abby's been doing, can tell you why. Also, you can't get quantitative data about users you don't have, right? If we want to know what our new readers and new editors want, we're going to have to go and talk to them. I just want to add that...
So, in an effort to be thorough about this, saying, okay, if we're going to be detailed and intimate with a certain set of research participants, we also want to know how these findings map to broader trends, Dan Foy and the Global Reach team have wonderfully conducted phone surveys with more than 1,000 participants per region to look at some of those high-level findings. And in India, remarkably, we did, I think, 6,000 phone surveys across 12 languages. So it's not that we don't have quantitative data: we have qualitative data and quantitative data, in an effort to have a really holistic view of the new reader situation. And to sum up, check out "thick data"; that's sort of the buzzword for this kind of blend of qualitative and quantitative work that gives you a holistic view of your users.

Okay, so thanks for this amazing research; we're looking forward to seeing how it's implemented, not least by the Reading team. Actually, a question slash suggestion for Tilman about the traffic report. I'm very happy and grateful that we're having constant discussions on traffic trends; I really appreciate that. One suggestion I'd like to make is that, as much as it's interesting to see where traffic comes from on a regional basis, I think we need to start monitoring and discussing where it comes from in terms of referral sources. I know this is something that is co-owned by Reading and Discovery, and the Discovery team has a focus on it. I also know that we have some limitations in the data we can collect, specifically after the HTTPS rollout. But going forward, I think it's something to monitor and discuss.

Yeah, I agree, of course. The Discovery team has been tracking this; they have a dashboard, which some of you might have seen, for the percentage of traffic we get from search engines, from internal referrals, from one of our links to another, et cetera. Of course, these graphs are just the surface of the ocean, and we don't know much about what's underneath. The same goes for the country changes: I sometimes try to add which countries gained or lost, but there's a lot of detail that drives these changes, and we could do more to understand it. Roughly, you can understand our referrals as one-third Google, one-third internal, meaning links from our own site, and one-third everything else.

So, a question about the New Readers research. One of the findings that I thought was particularly interesting had to do with people's expectations and the fact that they were accustomed to using English on the internet. But I do wonder to what extent that comes from the selection of countries, because South Africa, Nigeria, and India are, to me, countries where English is kind of the language of unity even within the country, and I can think of others where that's not necessarily true, like Brazil. Yeah, I'd say we're not generalizing this to the whole world; these are the findings we found in these particular countries. I agree with you, it's a good point. I do think we saw this in Mexico as well, but, again, that's right next to the US, so... It was a little different. In Mexico this was 15 interviews, but one pattern we did see was people wanting to search for content in Spanish, not finding what they're looking for, using Google Translate to translate it into English, searching in English, and then using translation tools, depending on their English capabilities, to understand the search results.
So it's not just about the language you speak and the language you're comfortable in and the language of the internet; it's also about the content that's available on the internet. That was a little bit different in Mexico, from those 15 interviews that we saw.

Hey, a follow-up from Aaron. Abby, do you feel like you can speak to what we'd be missing if we were only focusing on English? We said we're not only focusing on English, right? But where is the value in working on lots of different languages, and for those new readers, is that particularly valuable? I'm not sure I understand exactly. What's the value? You're asking me the value of the local languages? People will speak English but not get content in the language that they speak, and that's not a good thing. I mean, I feel like there's a huge amount of evidence. In South Africa, one of the things that Dominic Valley, who did that volunteer ethnography for us, brought back was that people want content about local things in their local languages. And I think, and now I'm speaking from my own perspective here, that wikis are not only about a language but about the culture that the language represents, so there's a huge amount of information that is missing if everything is in English. That's my perspective; I can't say much more than that. I think it's a huge value to have many, many languages represented, and I would personally love to see them grow.

Yeah, I think this is such an interesting question, because there are so many things wrapped up in it; it's just not simple, right? Going out into the world and proselytizing English would, I think, be a poor approach. I don't think it speaks to our mission; I don't think it speaks to what we're about as an organization and as a movement. But I think English does give us a valuable tool in how we reach readers in specific countries.

Another question, just a clarification about language. From the experience of interviewing people, is the issue more that they're so used to reading stuff in English that they're expecting it to be in English, or is it more that they don't even know content in their language exists? Do we have a wiki in their language that they don't even know about, or are they just not expecting to use it at all? I think it's the first one. For people who have used a wiki and recognized it as Wikipedia, especially if they're a student or someone like that, I think people do know about the other-language wikis; it's more an expectation about the content, the language of the internet, the language of technology. It's also the keyboards and input mechanisms that are in English and the Roman alphabet. So I think it's more that than a lack of knowing about the other-language wikis.

Looks like that's it. We'll wait another minute, and then everyone, just eat everything you see.