Hi, everyone. Thanks for coming. I think other people will be joining us, and we're recording today so we can share it out. Hi, Steve. And I'd like to say that we're doing this because it's about three teams collaborating. Oh, thank you for the reminder. OK, so welcome. There are a bunch of different teams represented here, and I just want to call out that there are people online. Shira is going to talk about some guidelines to keep the remote folks included. She's going to be the liaison for people joining remotely: she'll be online, watching the chat. OK, so I want to give a shout-out to multidisciplinary collaboration for innovation. We all have things we can contribute to create great products in the end. I always like to show this Venn diagram about desirability, feasibility, and viability. If user experience, technology, and business, which for us means our mission, all collaborate and work together, that middle section where they overlap is where innovation can really happen. That triangulation, that working together, means we can satisfy user needs, we can build things that are actually buildable, and we can accomplish our mission. It takes lots of people and lots of expertise. So now Shira is going to talk about some guidelines. OK, guidelines. I'm going to be facilitating remote participation. So if you're remote and you have a question, or you can't hear something, ask me. We're also going to have a parking lot. The idea of the parking lot is that once we start talking, we'll get a lot of questions, and we might run over time, so we'll capture those questions in the Etherpad, and at the end someone will pull in whatever's left there. And remember, speak into the mic so everyone online can hear you.
And if you don't mind, could you please close your laptops a little bit? Thank you, Shira. All right. So now I'm going to talk a little bit about the benefits of design research, and some of you may have seen some of this before. Jamel is going to keep me on time and tell me when to wrap up. So design research helps us know the people beyond the data. It helps ensure the quality and usefulness of products. And it helps us focus the solutions we build around the needs of users. We do this in many different ways: we have a lot of methodologies and we work with a lot of different teams. Recently, thanks to Shira, Mae, Yufi, and Jonathan, and the whole design research team who pitched in, we built a little WordPress site that describes a very high-level product process. For each part of the process, you can click on it and see what methods we use. I don't know if this is the right way to do this linking, but it seems to work. So here you can see a high-level process. This is not top-down, "hey, here's how we need to design products"; it's just the high-level stages of going through product development, to demonstrate the different methods we use in design research. Later, we're going to learn from everybody in the room about where and when you need to apply your expertise in this process. So, say you click on Understand: you're wearing your researcher hat, and you're discovering personas, needs, challenges, and opportunities. This is where we go out in the field and we're just learning, learning, learning. We'll probably get a lot of ideas in this period of time, and we have to put them in a folder and move on. So this is just a quick demo of the website. You can dive into it later, and we're going to keep adding to it.
It will keep growing over time, and we're going to share it out with everybody. So for example, we could do a diary study and learn about people over a period of a couple of weeks, where they're writing in a diary about how they contributed, what their experiences were like, what activities they did. We're going to do contextual inquiry in emerging regions, emerging communities, the Global South, with people who we need to learn more about and who are hard to reach over Google Hangouts. That's one of the methodologies in Understand, and we'll talk more about that process in detail in a little while. But I wanted to show you this website. You can go in and look: if we're in Maintain, what kinds of methodologies could design research contribute? Personas are useful all across the product process, but we can also do things like benchmarking and unmoderated usability tests to see how things are going in real life. So that's just a quick dive into that. For knowing people beyond the data, we're building personas, and Dan is going to talk about that in his lightning talk. Talking to people means learning everything we can about the various types of users we know about, and we're doing ongoing research: we're talking to readers and editors and all kinds of users all the time. Daisy and I did research on how people are learning to edit Wikipedia. We talked to nine people who had never, ever edited before and asked them to edit; they had described having some motivation to learn. So we were able to observe their very first time going in, hear from them, and watch them experience editing for the first time. We observed some bumps in the road, we heard about their feelings about editing, some people were hesitant, and we learned a lot just by watching and listening to them. We also did an editing task survey. And in the future, we're potentially going to Mexico or Chile to do a contextual inquiry.
So these are examples of how we're talking with people, being with people, and learning about them alongside what we learn from the data. A really important point for design research is that observing how people do things is sometimes more reliable than asking them how they do things, because people build workarounds and then just do them. Humans are determined: we all have a goal, we're going to figure out how to reach it, and if the technology or something else is getting in the way, we figure out a way around it. And after you've been doing something for a while, it becomes a worn path, and when you try to describe it to someone who's trying to understand your process, you don't give all the details; you don't see them anymore. But if you actually watch someone do something, you can discover a lot about what they're going through to accomplish their goal. How am I doing? I'm doing fine, Jamel? Excellent. So I want to play a video here to demonstrate this point about watching what people do. Uh-oh, okay, the audio. This is from VisualEditor usability testing that we did. Just tell me when I should push play, or if you want to push play. Oh, that's okay, I can skip over this and people can watch it later; I'll just talk through it. So this was from usability testing for VisualEditor, and we asked this participant to add a reference, and you can just watch him go. You guys online can see it, okay. So he's reading the task. We provided him with a reference, so he's copying the reference link to be able to insert it. Then we see how he goes about trying to figure out how to add it. He's looking for the spot where he can add his reference, and now he's scrolling up because he doesn't notice that the toolbar is there. He's looking for where to add the reference.
He thinks Insert might be a good place to find it, and he inserts it incorrectly, so he figures, okay, that's not right, and removes it. He's kind of narrating: oh, maybe it's in here, maybe it's in here. He pushes the reference button, but he doesn't recognize it, and then he adds a link instead of a reference. So he didn't add a reference, he added a link, but he thought he was successful. Being able to observe this person doing this task really taught us a lot about how to iterate the UI to make it intuitive. There's another one here; I'll just tell you the story. I was observing people using Upload Wizard, and I just started with: how would you get a picture into a Wikipedia article? I know it's a little bit of a trick question. I didn't intend it to be, but people are motivated, as they're editing, to just add an image, and they don't know about Commons. So this person, she's used to WordPress, so she was looking for where to add an image. She clicked an icon that looked like an "add image" icon (you can't see it very well right now), and she couldn't find the image anywhere. She tried to drag and drop it from her desktop, and she got really frustrated, and she said, "This interface is so complex. I'm used to WordPress." She was not able to do it there, but she ended up going to Google to learn how to add an image to Wikipedia, and then she found out about Commons and Upload Wizard. We also do a lot of ensuring quality; some of that is the usability testing I've been talking about. We worked on Media Viewer: after it was released, we helped iterate it down to a basic set of tasks for a specific user, for new users, and we did several rounds of usability testing to help iterate it toward easier usability. We're doing the same with VisualEditor, the link inspector, Flow, and Collections: we've done usability testing and heuristic evaluations.
Those are just some examples. I wanted to show you this. This is BART. If you live here or visit here, you probably recognize this interface, and personally I've had a lot of difficulty with it. You'll notice there are two or three different places you can put cards, a place to put bills, lots of little places to put things, lots of buttons. One time I was on my way to Hong Kong, and I was rushing, as people often are at BART, and I put my card in the wrong slot: my credit card just went in, and it wouldn't come out. I was panicking, trying to get to the airport. Anyway, the attendant helped me. So I noticed one time that a user, so it's not only me who has had trouble with this interface, put up a note giving directions about how to put your card in correctly. That was an indication to me that maybe they didn't usability test this before they released it. And I thought, wow, that's a costly installation, too: the manufacturing, the installation, a lot of labor to get these machines all over the place. They could have iterated a little before release. Maybe they didn't have time, but it could have saved them money and effort in the end. So this is just a little example of design iteration. This is VisualEditor. Before we did usability testing, like you saw, that participant totally missed the icon for references, because it kind of looked like a bookmark. So we changed the icon, and people got it right away: we put the word "Cite" on it, and we used a quotation-marks icon, which matches the mental model of citation. We also iterated the link inspector and the citation inspector tool. So, also, focusing solutions around needs. How am I doing? He's showing me the time, okay, 20 minutes left. Wow, I've got a lot of time. Awesome, okay. So, focusing solutions around needs: to focus solutions around needs, we need to know what the needs are.
Being able to go out and observe people, talk with people, and learn what they're trying to accomplish helps us define solutions that people will actually use, because they need them. For example, you probably saw the presentation we did for the monthly metrics meeting once, about mobile contributions. We interviewed 15 randomly selected people, asking them: do you contribute to Wikimedia projects on mobile? If they said yes, we said, hey, would you mind showing us and talking us through what you do? So they did, and we got really interesting observations about the kinds of tasks people wanted to do; they were mostly experienced contributors. We learned about people looking at contributions and evaluating them to see whether they're vandalism, and also thanking people for editing. There was a variety of tasks people did. We also observed some of the issues they were having with the UI, so that we can inform how to iterate it.
Observing is a way we can understand people's needs, and we'll do more research on this to understand the needs of, for example, new editors and readers. In the future, we're really excited that we'll be doing some work with the stewards to help them improve their workflows. When they came to the brown bag, they talked about how one of the things they need is easier workflows: they have to push a lot of buttons to do a task. So we're going to ask them to record their workflows, we'll have conversations with them, and we'll learn about what they need and how, with the help of the design, engineering, and product teams, we can make their workflows more efficient by improving the UI and understanding their needs. And we're going to go somewhere in February with the reading team to do some contextual inquiry: we're going to visit people where they live and where they work and understand the context of using mobile devices, the context of reading and learning on the internet and learning in general, what devices people are using, what their relationship with technology is, what the constraints and challenges are, and what's accomplishable, and we'll bring stories back home. So here's one video; maybe some of you have seen it, but it's from IDEO, and it's one of my favorites describing contextual inquiry and design ethnography. I want to play it because we don't always learn people's needs by asking them, or by thinking in our heads about what they need; we learn from people.
So let's see now, just a little bit about collaborating with us. There's an email address people can write to, and we're happy to have conversations and start projects. We have a page on MediaWiki that describes our projects and has our reports; we start a stub for each piece of research when we begin, then add to it as we go and share the results right there. We also have a Phabricator board, and you can add requests either from our MediaWiki page or by adding a ticket to our Phabricator board, and then we do our prioritization and try to keep up with everything. So there are three ways you can contact us, plus just coming up and talking to us. So now we're going to do some lightning talks. Pau, are you ready? I'm Pau Giner, a designer working on different Wikimedia projects. I conducted research on my own in the past, but since we've had a user research team, I have been working with them to do it, and today, based on that experience, I want to talk about how to better interact with the researchers to make the best use of research. Learning about our users is essential in all stages of design. In order to create a solution that meets our users' needs, we need to know what those needs are, which problems users have with the current solutions, and how well our new ideas are solving those problems, and research can be very useful in all those steps. But first, let's start with what we don't want research to be. We don't want research that produces a bunch of slides that no one reads. We don't want research that does not clarify our path forward because it's too generic, it's too specific, or it answers the wrong questions. For example, learning what kind of clothes people wear when using Wikipedia is of little use if we cannot do anything about it or don't plan to. We want research that helps us make decisions. We want research that helps us pick the path that has the best chance of fitting user needs.
For example, we might discover that translators are more interested in getting a high-quality initial version of an article than in translating the whole thing in one go. That informs our decision to support translation on a paragraph-by-paragraph basis when creating a translation tool for Wikipedia. The usefulness of research results depends heavily on how we frame and set up each research study, and I want to offer five quick tips that I have found really helpful. The first is to state a problem. Defining the problem to solve before jumping to a specific solution is good design practice in general, but it is also very useful when planning research. So please put in writing, in each Phabricator ticket, what the problem to solve is; that will be very helpful. Stating the problem to solve not only keeps us from being biased towards a specific solution, it also helps surface our understanding of the context: what are we sure about, what don't we know, and what are our assumptions? Having assumptions is fine; having them stated explicitly is even better, since we can revisit them when we get unexpected results. It's also important to be specific about what you want to learn. Let's imagine we have a list of articles and we propose to add a way to delete them. We want to know whether this works, and we may ask ourselves: can users delete items easily from a list? But that's not a very specific question, since the user is involved in a whole process. Are we interested in the need for deleting articles in a given context? In whether users can find the way to delete articles when they have such a need? In how well users understand what delete means and the consequences it has in this context? Or in whether users can operate the delete mechanism once they've found it and know they want to use it?
All these cases are very different, and they require different ways to learn about them, and probably different ways to solve the associated issues. So it's better to explicitly state what we want to know rather than just asking a broader question and expecting to magically get the answer we need. The second tip is about defining the audience and the goals. Identifying early the audience for your solution, and its goals, is also very important. Regarding the audience, you can be tempted to think that your solution should work for anyone, but at the end of the day you need to recruit some participants, and you don't want those to be random. It's totally fine to focus the design on one main group and then add additional participants with a different profile. For example, our recent research for Notifications focused on experienced editors, but we also wanted to include new users to make sure the new features are not adding unnecessary complexity for them. Regarding the goals, they are the compass that should direct the research. If the goal of your solution is to help people do tasks faster, the scenarios, prototypes, questions, et cetera, should be focused on that goal. Third, identify the big questions. Human curiosity is infinite, but resources are not, and while there are many things we could learn about our users, we need to focus on those that are key for the project. The best way to keep focus is to ask ourselves: what will we do with those answers once we have them? Think in terms of "I need to know this to make that decision." If no possible answer to a given question would affect your solution in any way, maybe it's worth not asking that question and picking a different one instead. You can go even further and anticipate the possible answers to a question, imagine how the solution would differ in each case, and even prototype and test some of those hypothetical variants. Fourth, get involved at the right time.
When you're presented with a research plan or a prototype, think of it as a simulation of the final experience. This is the right time to raise your questions and your doubts about whether the solution will work under certain conditions. If those are raised later, when the building process is about to start, it's very hard to go back in time to check whether a different solution would have worked better. I would also recommend that everyone watch the recordings of sessions, since it's a great opportunity to see your products or ideas live. They are fun to watch, and they help everyone be on the same page, which saves time in meetings and email threads. Also, there are many aspects of how users use your products that cannot be captured in a slide deck. Watching also lets you reach your own conclusions or dispute the existing ones. Share your thoughts, because the more perspectives, the wider the view we'll have. Fifth, verify and evaluate. We should not treat research as an oracle that will tell us whether our solutions work or don't work. Research is a continuous process of learning: it can inform and provide a lot of context about why and how things work or don't work. For this to work, we should make sure we have enough stability in priorities, coordination, and room for everyone to participate at the right time. And as with any iterative process, it's good to do a retrospective: once you have results for one round, check whether all the big questions were answered with the level of detail you needed. If not, adjust the process for the next round. And yeah, that's basically it from me. Thank you, and let me know if you have any questions. Yeah, so we have a couple of minutes for questions. If anyone has one, just go up to that mic over there. And is there anything online? Nobody has questions. Okay. Oh. Hi, Pau. So it's so weird to be on the other end of this. You and I have worked together on a couple of different projects, so I'm curious.
I'm Jonathan Morgan, Senior Design Researcher. I'm really bad at this. So it's true, the struggles that you go through. So I'm curious: was there anything, as you started to work with the design research team, whether you were working with me or with someone else, that surprised you about how that changed the process you went through when evaluating and testing designs? Because of course you had been doing design research yourself before there was a design research team. How did working with the design research team change your process, for good or ill? Yeah, it's been an interesting evolution, because, as you were saying, I have always considered research an essential part of the design process, so when there was no research team, I was doing the research myself. Being able to get help from the design research team has been really, really helpful, and not only because the team has been helping with parts of the process, so I have to invest less time in them, but also because it adds a different perspective. If you are designing something, you're proposing a solution that you think works, and that obviously creates some kind of bias. So I think that being able to show the testing plan, or the ideas you have, to someone else who has not been through that process increases your perspective and your understanding of the problem and of how it will be presented to users. So it's been really helpful in both dimensions: in saving me a lot of time in scheduling and defining the plans, and in being able to discuss how we're approaching the research and to discuss the results. That added a lot of perspective. Cool, thank you. Any other questions for Pau? No questions? Okay.
Now Grace is going to talk with us. She's an agile coach, and she's going to talk about teams collaborating to make a high-quality product. Yeah, can you hear me? Okay. Hi everyone, I'm Grace. I'm an agile coach on the Team Practices Group, and I've been working with design research for about six months, and Abby asked me to talk a little bit about my work. At a high level, what I do is help teams with how they work, and I look for ways where I think agile can help. So let's start with some of my values. Okay, I believe in recognizing and creating value. I think traditional project management methods are more focused on cost; I believe in value. I think that waste is the enemy of value, so I believe in eliminating waste to sharpen our focus on creating value. Ah, okay. This is the dining hall at Balliol College, Oxford, and you'll note the elaborate ceiling beams. There's a story about another college at Oxford where the ceiling beams needed to be replaced after five centuries. They were able to do their job for a really long time because they were not asked to do more than they could. I think running your teams at 100% is about as effective as running your CPU at 100%. I believe in working sustainably. This is a barrel full of monkeys; I value finding the fun at work. Next. Okay, individuals and interactions. Here are some behaviors I like to see. Here are two people talking to each other. I once worked on an engineering team where two engineers sat right next to each other and did not know what the other was working on, and when they found out, it was like, hey, I just wrote that same line of code. So I like talking to each other. Next. Ah, okay. This is the six blind men and the elephant. Each of them is touching a different part of the elephant's body, and each is coming to his own conclusion.
I believe in seeking multiple perspectives, because I think it helps us uncover blind spots and our own biases. This is Thomas Jefferson, the third president of the United States, and these are his ten rules to live by; this is my framed copy of them. Two hundred years later, they still hold up pretty well. Number 10 is the one I want to focus on: when angry, count ten before you speak; if very angry, count a hundred. Just putting that out there. Processes and tools. Okay, so here are some of the processes and tools I like to use. This is the big board in Dr. Strangelove; I believe in visualizing work on a board. This is the traditional triple constraint of a project: scope, schedule, and cost. Abby reminded me that Picasso once limited himself to working in one color for a whole year as a form of discipline, and like Picasso, I think constraints make us more creative. This is the intersection of Church and Market Streets in San Francisco, and you'll see that traffic lights are limiting the number of cars that can pass through to the next block. I think that work is a lot like traffic: if there are a lot of cars in the block, they're not going to move very quickly, but if there are just a few, they will flow much more quickly. So I think that the fewer things we work on at any one time, the more things we finish. So let's stop starting and start finishing. These are Post-it notes; I just really like to organize my own work with Post-it notes. Okay, so that's some of how I like to work now. Let's talk about how we worked in the past, and how some of those contexts were actually different. These are the prescribed labors of the months of July and August, from the Très Riches Heures du Duc de Berry, a 15th-century illuminated manuscript. You can see that in July they're harvesting all the wheat, so in August they can thresh it. Makes sense, and we do that every July and August; it's prescribed, okay? This is a general ledger.
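[Editorial aside, not part of the talk: the traffic analogy above has a formal counterpart in queueing theory, Little's law, which relates work in progress to cycle time. The sketch below is only an illustration under a steady-state assumption; the numbers are made up.]

```python
def average_cycle_time(wip: int, throughput_per_week: float) -> float:
    """Little's law: average cycle time = work in progress / throughput.

    With throughput held fixed, every extra item in flight makes each
    item take longer to finish -- the queueing version of
    'stop starting and start finishing'.
    """
    return wip / throughput_per_week

# A team that finishes 4 items a week:
print(average_cycle_time(4, 4.0))   # 4 items in flight -> 1 week each
print(average_cycle_time(12, 4.0))  # 12 items in flight -> 3 weeks each
```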
Traditional project management is more focused on the cost dimension, and in manufacturing we want to work in phases because we want to drive down marginal costs: if there's a cost to setting up a widget machine, we want to make all the widgets before we move to the next phase. This is a Gantt chart, and this is General William Crozier, an American general who used them in World War I. What we're trying to do here is control risk through planning. Time here is linear; in agile Scrum, it's cyclical. Here, if we make a mistake over on the left, in requirements gathering, things might not go so well in user acceptance testing at the very end. We want to find things out before the end. This is NASA in 1961, and you can see they have a big board. Apollo 11, the first lunar landing, was driven by the hard constraint of the end of the decade: President Kennedy said that we were going to put a human on the moon by the end of the decade. And they got there by communicating. They broke down silos; they spoke daily, constantly, instantaneously, before the internet. And then, for some reason, that culture evaporated, and afterwards they went back to their silos. The silos got competitive with each other, they engaged in phase-gated development, and by the 1980s one employee described it as "the post office and the IRS went to space." This is the Galerie des Machines at the 1900 Exposition Universelle in Paris, where Frederick Winslow Taylor demonstrated his improved process for cutting steel. He was a process nut: he ran around with a stopwatch and observed workers, because he wanted to make sure they were following his processes faithfully. He believed in managers' ability to make decisions, but not workers'. He wrote The Principles of Scientific Management, which had an influence on 20th-century management practices. Factory work is algorithmic; what we do is heuristic.
I prefer to work with trusted, self-organized teams of intrinsically motivated individuals. These are two Victorians in San Francisco. The one on the left is covered in something called Permastone, and the one on the right is painted a pretty pink. I saw an ad for Permastone from 1950 that said it eliminates the need to paint your house ever again. What I don't like about long-term planning is that I think it limits our ability to respond to change: it would be a lot easier to paint the pink house blue than to get all that Permastone off, paint that one blue, and celebrate its Victorian features. Toyota. Okay, let's go to Japan. All right, this is the Toyoda automatic loom. Some of the brilliance of this device went into the Toyota Production System, which is their culture and process of work; much of agile Scrum is based on Toyota. There are two ideas from Toyota that I particularly like for design research. First, there's the idea of the gemba, the shop floor. No matter how high you rise in Toyota's management structure, they want you to return to the shop floor to watch work being done. Okay, okay. So the gemba, the factory floor: they want people to go there and observe work being done, not in a Taylorist way, but in a more design-research, empathetic way. Sorry. There's another Toyota concept, "go and see," because they want people to get their knowledge and opinions firsthand, and that's in design research too: we like it when the engineers attend usability testing sessions. That idea has been absorbed into lean startup as "get out of the building." And the reason you might want to do that is that people in the building might tell you what you want to hear, and they might have biases, so you want to talk to actual users. This is a book called Lean Software Development, one of my favorite books. It was written by Mary and Tom Poppendieck; Mary worked at 3M, which makes these Post-it notes.
Outside of Toyota, a lot of this thinking is described as lean: just enough, just in time. The Poppendiecks adapted seven principles from lean manufacturing to software development, and one of them that works really well for design research is "build knowledge": we want to talk to actual users. I've heard this described elsewhere in the lean literature as "seek feedback" or "create knowledge." Sorry. Oh, yeah, actually you can. So there are seven principles, and the seventh one is "see the whole." Let's go back to the 15th century. You can see the artist looking through an optical device, a grid on the window, and translating it onto a grid on his paper. If he were to come and say, hey, I want to test this one square in the grid, design research would rather see the whole, and test entire workflows, not individual squares in the grid. Okay, so, yeah. This is a butterfly, and the small disturbance of it flapping its wings over here could cause a tempest over there. I think we live in an increasingly complex world, one where we can establish causal relationships in retrospect rather than through prediction. The planet is being stretched to support a bigger human population than ever, and those humans communicate constantly through complex, dense networks. General Stanley McChrystal had this wonderful comment that a phenomenon that used to take months to manifest can now happen in the time it takes to type 140 characters. So I like working with researchers who are working to understand. This is the inside of Shakespeare and Company, an English-language bookstore on the Left Bank in Paris. I like the advice written there: "Be not inhospitable to strangers, lest they be angels in disguise." That's what I like about working with design research: they welcome strangers, and they do it with empathy and without judgment. And they embody Stephen Covey's advice to seek first to understand, then to be understood.
So this is Galileo, and he said all truths are easy to understand once they are discovered; the point is to discover them. So let's talk to each other, let's amplify learning, let's create value, let's have fun, and let's discover the truth together. Anyone have any questions for Grace? Zach? There we go, it's on now. Hi, I'm Zach. I'm on the communications team and I focus on global audiences. I wanted to ask how many people you think is a good amount to be both agile, nimble, and lean, and also representative and inclusive. There's obviously a balance: if we sample or work with thousands of people, that's a cumbersome process, and if we're only working with five, we're not really getting something representative. So how do you find that balance? Do you mean for team sizes? Team sizes. For team sizes, I like smaller teams better. With fewer than 10, people can talk to each other and communicate; the number of paths of communication increases as you add more people, so I think the cohesive bonds of the team are just better with smaller numbers. Scrum suggests five to seven, and the two others would be the Scrum Master and Product Owner. And how do you decide about representation or responsibility? Is there a logic to knowing we have the right people? I think if the team is functioning, people are talking to each other and are less formal and more comfortable with each other. The right people is also a question of cross-functional capabilities: you want those represented. Sometimes people don't fit in with the team, and it's obvious. Does that answer your question? That does, yeah. It's a topic that I'm sure comes up all the time, but in terms of making things happen both quickly and thoroughly, it's an extraordinary thing to consider.
Like, how do we get the right cross-discipline representation, as you're saying? I'm interested both in what the methodology says about it, like five to seven, but also, how do you challenge the biases you might get in a small group? Because consensus can form quickly, which is both the advantage and potentially the disadvantage. So how do you keep stirring? Do you keep challenging? How do I keep stirring and challenging teams? Right, yeah. I guess what I'm trying to push at is: how is it not all about productivity? How is it about getting to the right place, so the team doesn't settle even though it continues to move forward? I agree that it's not all about productivity. I think that value is more than that. And like I said, teams form, and you can tell if somebody doesn't belong; the less formal people are with each other, the more comfortable they are with each other. So I can usually see quickly if there's something wrong, if somebody doesn't fit. I don't always have the means to remediate that, but you can usually tell. Thank you. Thank you very much, Grace. All right, so now Jonathan is gonna talk about triangulation between quantitative and qualitative data. I totally saw that, Abby. Yeah, yeah. So I'm Jonathan Morgan, design researcher, and today I'm gonna talk about triangulation: roughly, the process of mixing qualitative research methods with quantitative research methods to learn more about your users and your product than you would learn by using either of them independently. My background is in mixed methods research; I was trained to be both a quantitative and a qualitative researcher, and here on the design research team and the research and data team, there are a lot of people who have multiple skill sets. And so we've found that triangulation works effectively for us.
So when I say triangulation, or mixed methods in the context of research, it actually means more than just the distinction between quantitative and qualitative. It also means the way you sample. For instance, if you want to get a sense of the different kinds of users you have, you might want to sample for diversity when you're scheduling a study, just to get a sense of all the different types of people and their motivations. If you want to get a sense of the highest-priority personas to serve, you would probably want a representative sample, so you know who the people using your product most are, and you can focus on them. And then there are all these other dimensions along which you can mix different research studies or investigations to get different kinds of data. For instance, there's observational data, data you gather by watching, whether with a script or with analytics software. And then there's data where you're eliciting responses from people: you're asking them to communicate information to you explicitly, whether through an interview or a survey. That's just another dimension along which research methods differ. By combining observational methods and elicitation methods, you learn things that you couldn't learn otherwise. And even just by having multiple researchers looking at the same data and talking to one another, you're engaging in a kind of mixed methods research, because ultimately the research instrument that is the final arbiter of what gets decided, what gets observed, and what gets built is the brains of the people doing the observation and the analysis. But after complexifying this issue so much in the last slide, I'm just gonna break it down into quant versus qual, because ultimately it is a lightning talk after all. So here's one way in which you can effectively engage in mixed methods research.
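[Editor's note: the two sampling strategies mentioned above, a representative sample versus sampling for diversity, can be sketched in code. This is only an illustration; the user records, the `type` field, and both function names are hypothetical, and a real study would sample from scheduling or log data.]

```python
import random
from collections import defaultdict

def representative_sample(users, k, rng=random):
    """Uniform random sample: groups appear roughly in proportion to
    their share of the population, so heavy-use groups dominate."""
    return rng.sample(users, k)

def diversity_sample(users, k, key, rng=random):
    """Sample for variety: round-robin across the groups defined by
    `key`, so every user type shows up even if it is rare."""
    groups = defaultdict(list)
    for user in users:
        groups[key(user)].append(user)
    # Shuffle each group's pool, then draw one member per group in turn.
    pools = [rng.sample(members, len(members)) for members in groups.values()]
    picked = []
    while len(picked) < k and any(pools):
        for pool in pools:
            if pool and len(picked) < k:
                picked.append(pool.pop())
    return picked

# 90 casual readers, 10 translators: a diversity sample of four is
# guaranteed to include both types, while a representative sample
# will usually be mostly readers.
users = [{"id": i, "type": "translator" if i < 10 else "reader"}
         for i in range(100)]
varied = diversity_sample(users, 4, key=lambda u: u["type"])
```

The design choice mirrors the talk: the same pool of users yields different insights depending on whether you weight by prevalence or by variety.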
You can perform qualitative analysis first and then use the results of that to inform quantitative analysis. An example of this, something we're working on right now as a collaboration between design research and the research and data team, is article recommendations. The current use case is the content translation interface. We have a prototype of article recommendations that shows articles that could be translated from one wiki to another. When the prototype was built, the metric surfaced to the end user to signify the importance of translating an article was page views. And within just a couple of interviews, we found that most of the hardcore translators we talked to did not consider page views a particularly important metric; that's not what motivated them to translate articles from one language to another. Better metrics would have been the number of inbound links within the wiki to that article, and also the number of other wikis where that article existed. So right away, through some qualitative research, we found key design insights for how to rank these articles in the recommender and what information to present in the interface to signify importance. Another collaboration between research and data and design research is going on right now, this week: we put a little micro-survey out asking people, why are you reading this article today? We got thousands of responses, and we went through a subset of them and qualitatively coded them. It was an open coding, which basically means you get a bunch of people together, have them read through the data, and have them discuss the patterns and themes they're finding.
The next step for this research is to take some of the themes we identified through the open coding, relating to the motivating factors that cause people to come to a Wikipedia article, and release another micro-survey: rather than an open text box, we'll have a list of checkboxes or radio buttons. That allows us to discriminate intention and potentially correlate it with browsing behavior that we're gathering through the logs. Another way you can do mixed methods research is to do qualitative analysis after, or in parallel with, quantitative analysis. This can help you contextualize what you've found through the quantitative analysis and identify potential next steps. One example of this, drawing from the VisualEditor work done earlier this year: Aaron Halfaker did an A/B test analysis of VisualEditor versus wikitext for new editors and found a kind of puzzling conclusion, that VE editors seemed to take longer to save their edits even though they were no more or less productive, in terms of the content of their edits, than wikitext editors. At the same time, Abby and Daisy were doing usability testing with new and casual contributors on VE, and one of their key discoveries was that people editing on VE for the first time didn't know when they were in editing mode versus viewing mode, and didn't know that they needed to press save to get out of it. So this is one plausible explanation for some of the findings from the quantitative analysis, which you wouldn't be able to identify by quantitative means alone. Another thing you can do is try to make sense of a positive result you may have found. This is an example I'm probably going to be talking about a lot over the next several weeks. So Aaron and I again did an A/B test: if we invited new users to the Teahouse, did they stick around longer?
We found that they did, but that analysis in and of itself did not tell us anything about why, and there are a variety of possible reasons. It could be that just getting an invite is a nice thing. It could be that seeing pictures of real people on Wikipedia is a nice thing. Fortunately, we had survey data to back that up and let us dig into what people might have found most useful about the Teahouse experience. One of the things that came up in the free-text responses from our survey was that people deeply, deeply appreciated having a place where they could ask questions, get answers, and get the information they needed without feeling intimidated. So this gives us a strong sense of the most valuable features of this particular product. And that's really all I have to say at this point about mixed methods research. I'm happy to talk to any of you about it more at any time; it's what I geek out on. It's my process. I'll just leave you with the value of mixed methods research as I see it: there's always going to be some distance between what you believe, your assumptions or current interpretations, and what your data is actually communicating. By triangulating with multiple methods, you can close the distance between those two points. And that's it. That's a picture of a cat. I'm not supposed to attribute it; it's public domain, CC0, I think. Hi, Jonathan. Thank you for the talk. One of the questions I have about qualitative versus quantitative research is how to apply quantitative methods, such as being rigorous around sample size, to draw conclusions from qualitative research with smaller numbers of participants. Because you have expertise in both, could you speak to that a bit? Sure.
Ultimately, whether you're doing quantitative or qualitative research, the two meaningful factors that determine sample size are the sample size you're able to get and the sample size you need to answer the kind of question you want to answer. For instance, one of the canonical sample sizes for usability testing, according to Jakob Nielsen, this longtime guru of the field, is five plus or minus two participants. The reason he settled on that number is that, one, the focus of usability testing is often to identify the set of potential usability issues, rather than to get a sense of which ones are the biggest killers. And two, if you run enough usability tests with a variety of different participant numbers, you can actually quantify how many of these issues are found with a certain number of participants, and when you hit a point of diminishing returns. But ultimately, what matters is what kind of question you're asking. So if you want to determine, for instance, on which of two interfaces users will accomplish a task faster, you can get a sense of that by doing a usability study, kind of like an in-house A/B test, with say five or ten people in each condition. But if you really needed to determine with some certainty that one interface had a statistically significant impact on task completion, you would probably want to follow that up with a formal A/B test. The advantage of doing the in-house study first is that you can ask people questions, or observe why their task completion rate might be lower with one interface than another, which could save you a lot of time. If you only find out that there's a difference between the two but don't know why, you're arguably not really any better off. Does that answer some of it?
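[Editor's note: the diminishing-returns point can be made concrete with the standard back-of-the-envelope model behind Nielsen's "five users" heuristic. If each participant independently surfaces a given usability problem with probability p (Nielsen's classic estimate is around 0.31), the expected share of problems found with n participants is 1 − (1 − p)^n. The sketch below is illustrative, not drawn from the studies discussed in the talk.]

```python
def share_of_problems_found(n, p=0.31):
    """Expected fraction of usability problems surfaced by n test
    participants, assuming each problem is independently found by
    any one participant with probability p."""
    return 1 - (1 - p) ** n

# Under this model, five participants already surface roughly 84% of
# problems, and returns flatten quickly after that.
for n in (1, 3, 5, 7, 10):
    print(f"n={n:2d}: {share_of_problems_found(n):5.1%}")
```

This is why usability studies aimed at *finding* issues stay small, while questions about *magnitude* (such as task-completion differences) get handed off to a formal A/B test with a properly powered sample.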
All right, so just to make sure the microphone captures it: the director of reading just said that I gave the best answer he's ever heard. Sorry, I didn't catch that. Anything else, other than how great Grace is as a presenter? Cool, thank you. Now we're gonna take a break; we've got 15 minutes. We're gonna have some visuals; you'll see, it's a surprise. All right, we're back from our break. So now Danny Horn is gonna talk to us about personas. Hello, everybody. Are you ready for personas? Hi, I'm Danny Horn, I'm the product manager for the community tech team. And yeah, personas. I hope, by the way, you guys enjoy my hipster slide deck; some of us believe in the classics. Okay. Personas: basically, when we're doing product and design work, we naturally design for ourselves. It's something that happens to everybody at every stage. You think about the kind of user that you are, or possibly one other type of user that you know, where you've talked to some people. And it's a trick in product and design to back yourself out of that. So, for example, we have a reading team, and there are a lot of different kinds of readers. Basically, we have contributors, and then we have the 100% of the world that is a reader, and those are people with really different kinds of needs and different experiences. And again, your first impulse is to go back to yourself. That's because product and design is fundamentally a creative act. There's science to it; the kinds of research that we've been talking about are really useful, and they inform the kinds of decisions that we make. But there's an art as well as a science to it. So, for example, coming up with the solution to the problem that Abby was talking about with the cite button on VE: you could observe people having trouble with that.
But there are a lot of different ways you could actually solve that problem for somebody. Coming up with the quote marks and coming up with the word "cite" there, that's a creative idea that product and design people came up with. So, personas and what they're for. What personas are for is a tool to help with that creative act, to get you out of your own head and your own experience, and to work on thinking about other users, to have empathy for other users, not just for yourself. And also, for your team, personas can be a kind of shorthand, so that everybody on the team knows the kind of user you're talking about. So what I have to show you is six people. The process we used to come up with the people you're about to see is called doing pragmatic personas, which basically means we read a bunch of stuff and made them up. A lot of what we did was informed by research that's been done, but we have not actually run research specifically on these personas; there's more work that we have to do. What pragmatic personas are is saying: using the data and research that you have right now, how can we understand the user? So Daisy and I read a bunch of research that's been done on readers and on contributors, and then we came up with stories. Essentially, one of the big things we had to do was take some of these big ideas, like "this is a casual reader," "this is an active reader," "this is a new contributor," translate each into a person, and come up with enough detail, enough local color, that's gonna make people like them, be interested in them, and get them into their heads. So it went through a long process. We took the first ones we had to a workshop with, I think, all the people from product and design and community engagement, and learned a lot.
Most of what we learned was the terrible, terrible mistakes that we made. Right, you're not seeing that version. Because it's illustrative, here are a couple of the terrible mistakes. For one thing, we mixed up too many things. We took something basic, like "this is a new contributor," and tried to add onto it: also has limited access to the internet, also speaks three languages and is bouncing back and forth between them. And it was hard for people to understand: so what's the core thing we're looking at? You also start asking, is every new contributor that we have supposed to also have those other issues? We know that's not true, so it felt kind of false. And some of that stuff just brought up questions and ideas that were really distracting. Another mistake we made was that every single persona had some kind of problem with time management. We did not realize this until about three quarters of the way into the workshop, when we were describing the fifth person, and I'm like, oh, wait a second. As we talked about the challenges they have, all of them had challenges like: they're really busy and don't have enough time. And it was this obvious reflection of me and Abby and Daisy feeling deadline pressure and putting that onto these nice people. So we had to dial back on a lot of stuff. And now I'm gonna show you what we came up with. These are the six pragmatic personas that we have right now. I'll talk you through them, and then we'll talk about how we can use them and what the next steps are. So, first one: her name's Sandra. She's 46, she lives in Chicago, and she works as a bookkeeper at a tax preparation service. She's got a BA, she's single, and she lives alone. None of that comes specifically out of research, obviously; it's local color to get you to imagine a real person.
Now, the way Sandra interacts with Wikipedia: working as a bookkeeper, she's at a job where the computer in front of her is not allowed for personal use, so most of the time when she's at work she's using a phone to access the internet. Her job is a real desk job, so outside of work she likes to be social; she likes to interact with a bunch of people. The main websites she's interacting with are Facebook and Twitter, to keep up with friends. She uses Facebook a lot for keeping social contact going. And, what's the other thing? Oh, she goes to pub trivia with her friends, and she's also in a book club. In all three of those cases, there are times when she has to look up information. Talking to her friends on Facebook, they start having an argument and somebody needs to go look up a fact. For the book club, she wants to look at who the upcoming authors are, to help pick books that are interesting. And she cheats at pub trivia. That is actually based on research: all casual readers cheat at pub trivia, just saying. So every once in a while she's looking for a fact. Now, she doesn't really have a relationship with Wikipedia per se. She has a relationship with Google and the internet: when she's looking up something like that, she looks it up on Google. Sometimes the information she's looking for is right there on the search page, and sometimes it's not, and she has to go in and click a link. As we all know, sometimes that stuff is called a knowledge graph, and sometimes it's pulling information from Wikipedia. None of this is at all interesting to Sandra. She doesn't know anything about it, doesn't care. She just wants to know: how do I get the fact? And so some of the challenges she has are things like: she comes to a Wikipedia page.
She can't find the thing she's looking for. When she's looking up authors for the book club, if she's looking for a bibliography on a page about an author, sometimes that's on the page, sometimes it's a separate page, sometimes it's under different headings. As far as she's concerned, Wikipedia needs to get its act together. She has no interest in the fact that it's different people who don't necessarily have the same style; she's just saying this is a site that she ends up on a lot. So that's where a casual reader is. Our active reader, Michelle: she's a middle school teacher, she's 32, and she lives in France. Married with two kids, has a bachelor's in education. The thing that really makes her an active reader is that she thinks about Wikipedia a lot when she's teaching, and also when she's raising her own children. So she actually has a really intentional relationship with Wikipedia. The problems she has aren't just about finding information, but also about being able to share that information with other people: how to help her students look up information, how to help them understand what sources are, how to find the useful parts, especially for the age group she's working with, which is sometimes easy on a Wikipedia page and sometimes not. Our new editor is Henry. He's 53, he's in Seattle, he's a city planner with a master's in urban planning. He has a boyfriend, no kids, and he speaks English. He's our new contributor. The thing he really loves is mid-century modern design, a hobby he's had for a long time. He has a ton of books about it; he's really a nerd about it. And he knows that when he looks up stuff on Wikipedia, there really is not a lot of information about mid-century modern design. I'm pretty sure if you look it up today, it's a pathetic page. So he knows there's stuff that he wants to add.
And his niece, when they were talking at Thanksgiving (his niece is, I think, in college now; we dialed her back, she was in law school, I think now she's in grad school), recently became a Wikipedian, and she's a real evangelist about it. She encouraged him: if there's a problem on this page, if there's information that you have, like on the bookshelf right behind you, go ahead and add it. That's what this is for. So he's trying. But he's new in that process. There's a lot of stuff he doesn't know. There's some stuff he's afraid of, because he's heard that sometimes people get reverted and sometimes people get banned, and he doesn't want that to happen. He's not comfortable with this system yet. And he's not the kind of person that some of our users sometimes talk about, where a new editor is a stupid person we want to keep out. This guy represents the ton of smart people who have lots and lots of knowledge and just aren't able to use this particular set of features to share it. So we have to help Henry get those facts out of his bookshelf and onto Wikipedia. All right: Adriana, our active editor. She's 24, from Mexico City, a copy editor with a bachelor's in journalism, and she's single. She lives with roommates and she's bilingual. What she really loves is the music scene in Mexico City, especially the indie stuff. She goes to a lot of concerts; she's really involved with that. And she sees that a lot of that stuff is just missing on Spanish Wikipedia, and definitely on English Wikipedia. The motivation for her is really to document the local culture that she loves and cares about. So she's mostly contributing content edits. She doesn't do a lot with talk pages or RfCs or wiki-wide conversations; that's not super interesting to her. She just wants to get stuff onto the page and then move on.
So some of the problems for her are when things are slow or inefficient, or something she does gets reverted and she has to do it again. That's all very annoying. She just wants to keep adding stuff and move on, because there's tons of work to do. Wayne, 30, is a librarian, our power editor. Married, he's in Tennessee, with a master's in library science. He has actually moved out of content editing for the most part. He used to write a lot about planes and about transportation systems, but as he got more involved with Wikipedia, he started getting more involved in governance issues. He's a guy who takes the reputation of Wikipedia very seriously and identifies with it. When somebody says there's a lot of inaccurate information there, it's kind of a mess, anybody can do whatever they want, that raises his hackles. He really hates that. So it's really important to him to keep Wikipedia accurate, comprehensive, and complete, and to keep that stuff out. Oh, and also he's really busy; I think this was the one where we left "really busy" in as a factor in his life. And then Jack, our vandal. He's 14 years old, a student in London. He has parents that he hates and sisters who are totally annoying, and he goes to school, which is totally boring. He's a smart kid, but he's kind of hyper and is entirely bored with his life. His relationship with Wikipedia is that one day recently he was over at his friend's house, and they were looking up Satanism, as you do. And they thought it would be hilarious to blank the page and replace it with "Hail Satan" 100 times. They were right, that is hilarious. And the fact that it got reverted like two seconds later was even more hilarious, obviously. And then they got to see the nasty messages left for their IP address. This is obviously a fantastic game.
So they've actually come up with a game where the trick is to see if you can get false information onto Wikipedia that stays there the longest. He uses a bunch of different IP addresses and a bunch of different devices to try to make that happen. Now, the reason Jack is spending so much time on this vandalism career of his is that he's actually fascinated with Wikipedia, and he's learning how it works by messing it up. It was important to me, when we were working on a vandal persona, not to make him the most evil person ever, partly because I don't want whatever characteristics he has to read as "this is what the worst person is." So there is hope for Jack. In a few years, when he gets out of school and grows up, he's actually gonna become a really good contributor on Wikipedia. There's work in saving Jack, but not right now. Today he is a pain in the ass. All right, so those are the six pragmatic personas that we have. That's just a quick thumbnail for each of them, and we're passing out the handouts that we've been using, which have a lot more detail about them and their stories. I believe we have sent those to all the remote people. Basically, now that we have a set of personas that have been vetted by a lot of people and that we think we can use, we're gonna start working with product teams to think about how we can use these people to inform the creative decisions you're making about how the product is supposed to work. So that the team can say, for example, that this product is something Michelle would find really helpful. And then you can talk about: what about Sandra, does it get in her way? If we add something that's gonna help Michelle and the readers, is it gonna piss off Wayne, the really powerful contributor? That kind of stuff.
So this is a start, a pragmatic start, so that we can start using these and building that culture within our product teams. And then the next steps are actually getting real facts to back this stuff up. Abby and Daisy and other people, I think, are involved right now in doing research that we can then use to test some of the hypotheses we made about readers and contributors, and to revise these as we get more real facts and information. One thing that is super missing, and that's really important for us to have a good persona for, is somebody in the Global South or emerging regions. That was something we knew when we were putting these together that we really wanted to address. And so Abby and I spent a lot of time writing about this young girl named Anjali who wants to be a doctor; I forget what city in India we put her in. We came up with a ton of things, and it was this really beautiful story. We're both kind of sad to lose Anjali. But we decided, before we even showed it to anybody: you know what, let's step back from this a little. We know nothing about this. We don't actually know any facts right now about what her life is like, what the internet is like there, what the education system is like. We were writing a weird stereotype based on not even having much of a stereotype to work from. So we dialed that back; we did not actually make a persona like that yet. But some really good research is now coming in. I think people are coming back from doing surveys in Ghana, and, where's the other place? Oh yeah, they're doing some research in South Africa. So once we feel like we have research to give us a solid base, and to know that the kind of story we're telling is more or less authentic, then more personas like that will be added.
And in addition to that, we're also gonna start talking to potentially some other teams. I think we've gotten some interest from the education team and from, yeah, community teams who say: these personas don't work for us, but it's an interesting idea; what if we put together, like, fundraising personas? So we're talking to people like that as well. And that is the process with personas right now. Thank you very much. Anybody have questions? Some questions from remote people. Maria wants to know: I'm interested to know how qualitative research relates to the creation of personas. How do you come up with the profile? Abby, do you want to come over here? So right now, Daisy and I have been interviewing new editors. We've looked in the data to find the right people: people who have had an account for a short period of time and who have done a certain number of edits, so that we understand they have some motivation to learn how to edit. Then we invite them to talk with us. We ask them general things about their life: where do you live, what's your job, what do you do in a day? We ask them what kind of technology they have and how they use it, and we ask, how do you learn on the internet? What do you do on the internet? So we get to see how they get to Wikipedia. And then, more for the new editors, we ask: do you remember the first time you edited? We ask them to go back and walk us through it, and we talk with them about how they feel about it, what they think, what the challenges are. So we get to hear what they say about being an editor, as well as watching, having them show us their paths through Wikipedia and talk about their experiences.
We ask things like: when you needed help, how did you get it, and what was that like? What are things that you learned? How easy or difficult was it to learn? Are there any things that you wanted to do that you can't do now, and how might you find out how to do that? So we're doing about 10 interviews for each of these personas, and then we're gonna do analysis and see what kinds of patterns we see, and check them against these personas to see if we may need to split any of them. One thing we're seeing is that there are different motivations for people to become editors. Sometimes people will see text that's wrong — a spelling error to correct, or some missing content — so usually people are starting out with something little. But another motivation, one that I didn't expect but have been learning about from these interviews, is that they have a certain expertise, and a friend of theirs knows they have that expertise, or someone in their circle says, hey, why don't you go put this in Wikipedia? So the motivation is that their social group is asking them to go contribute. Sometimes it's even an editor of Wikipedia who says, hey, I know you know about this, come to this talk page and discuss it. We've seen that a few times too. So that's something that was a surprise from talking to people. Really, that was my mom. That was actually one of the reasons why I love Henry and have protected Henry through this process: he is my mom, who is super smart and calls me on the phone every once in a while to say, the article on William Butler Yeats is terrible. Like, okay, what's the problem? It is terrible. And I'm saying, well, you know, you could hit that edit button. No, this will not happen. Henry is kind of my mom, plus he's trying. Yeah, so we're still doing the analysis on the new editors. Well, that's true, yes. Okay, did I answer your question?
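As a rough illustration of the screening Abby describes — recently registered accounts that have already made a handful of edits, suggesting some motivation to keep editing — here is a minimal sketch. The records, field names, and thresholds are all invented for the example; the real selection was done against actual editing data.

```python
from datetime import date, timedelta

# Hypothetical editor records; names, fields, and numbers are illustrative only.
editors = [
    {"user": "A", "registered": date(2015, 9, 1), "edit_count": 12},
    {"user": "B", "registered": date(2010, 1, 15), "edit_count": 4000},
    {"user": "C", "registered": date(2015, 8, 20), "edit_count": 2},
]

def screen_new_editors(editors, today, max_account_age_days=90, min_edits=5):
    """Keep editors whose account is recent but who have already
    made enough edits to show some motivation to learn."""
    cutoff = today - timedelta(days=max_account_age_days)
    return [e["user"] for e in editors
            if e["registered"] >= cutoff and e["edit_count"] >= min_edits]

# B's account is too old; C hasn't made enough edits yet.
print(screen_new_editors(editors, today=date(2015, 10, 1)))
```

The two thresholds (account age, minimum edits) are exactly the kind of knob you would tune per persona before inviting people to interviews.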
She correctly points out that my mom isn't even trying. Rachel, you're next. Is it on now? Okay. Hi, I'm Rachel from the community department. I understand that these are meant to be broadly representative, but possibly not all-inclusive. I think there are going to be some situations where product teams and different groups may be building for people outside of these personas. And my question, you know — I go straight to the back here with Jack, who's kind of, you know, doing the "Hail Satan" stuff — but what if it's, like, a GamerGater, somebody who really needs to be blocked, somebody trying to get around the prevention systems, or somebody who's just kind of outside these particular personas? How would you act in such a situation? Would you do additional research with other users? Would you expand, or consider adding another persona? There are a lot of people who are very involved with the movement who may have a different viewpoint, even if they're just a reader — like a reader who is maybe very active in a chapter and does things outside of reading that don't count as contribution in these terms. How do we accommodate situations that are outside of these personas? Yes, really good question. We made six because that felt like a number we could handle and get our hands around, to have a first set that we could put out and start working with. But you're right: for all of these, there are multiple different kinds of stories that we could be telling with the persona.
For the kind of thing you're asking about, for the community team — thinking about whether having multiple levels of vandals, or multiple types of vandals, is important, and I can totally see why it is for the kind of work you're describing — we might actually build out another set. That's the thing we've been talking about: helping people in other departments identify what types of people they want to think about, then how we can build a story for that, and then how we can do research to find out more about it. So it's been evolving for us. Also, it has to do with products as well: if you're building a particular product and you're building for a particular volunteer group — yeah, that also answers the question. So for example, there are marketing personas and there are product personas, and those are very different types of personas for different purposes. We may need, for example, fundraising personas; that's not so much product development as it is outreach. So there are different types of personas, and we may need to build up sets. That said, you don't want to have 50 personas. So we would want a solid set of personas for product development, and then maybe some for community engagement, for efforts outside of the product, and maybe some for fundraising. With the work we're doing, we can iterate a process to evolve and produce some for the different disciplines. And just straight up: if having a set, however many that turns out to be, helps a team to talk about, well, no, there's this kind of person and there's also this and that — that's still a super useful tool for the team, even if they don't have a built-out persona for all of those people.
The idea is to help identify and empathize, and to be able to talk as a group about the kinds of people, the kinds of users, that we're serving. I have a question about the creation of personas. You said that you read a lot of existing research to come up with the pragmatic personas, and that for the last one, you felt you didn't have information, so you had just built a stereotype. So my question is: how do you know when you have enough to build a persona and not just a stereotype? And do you link the personas to the research that you used to create them, so that if someone is interested in one specific persona, they can go and dive into the research behind it? That's a really good idea. No, that isn't something we do yet. The second part of your question — that's an excellent idea, and yes, let's do that. Because we do have all the notes and all the material that we looked at; we can absolutely pick out, here are the main things where we found something that supports this persona. With Anjali, it was super simple to tell that we didn't have enough, because we literally didn't have any. At the time we were making these, several months ago, there really was not very much at the level of individual people. We had the Global South Survey, but that was about it. But now we've been talking to Dan and his team — thanks for the information — about the Global South, South America, and so on. Yeah, a thing that's super interesting about the Ghana research is that partly it's a survey, so partly it's quantitative, and there are things that you can pull out of that, but there are also stories. And I think to build a real persona, you have to have a real sense of, what's an actual story about this person?
So when we were talking to Dan Foy about the Ghana research, he abstracted a lot of the findings and said: whoever that is, they're probably a student, because we see that overwhelmingly; probably male, because we see that overwhelmingly — pulling out those kinds of insights. So now we're getting more research and more data on that stuff, and I think it'll help a lot. I just want to note that we can do a couple of questions quickly, but we're a little bit behind at this point. Yeah, just a comment — Daria from Research. I wanted to expand on the suggestion that you made. I think it's a brilliant idea, when we present our documentation on products or a research report, to tag them so that it's very visible to our audience who the target persona is. I think we have some challenges when it comes to discoverability of our documentation, and also with communicating effectively who these products are built for and who they're not. Looking back at some of the confusion around, you know, Article Feedback and VisualEditor, I'm thinking how much of that discussion could have been resolved if we had very prominently stated who the target users were — communicating that out and saying, here, this is for these kinds of people. So if you feel affected by this feature but you're not in the target audience, maybe there's a reason to talk about it. I think it's a very valuable suggestion to start experimenting with. Yeah, that's great. When we're defining features to build, we could attach the personas to the Phabricator tickets, so that's one way of doing it. And a second question from Maria — these are coming from the Etherpad, by the way: what is the Ghana research, and how can we access it? So Dan and his team did a phone survey in Ghana, and they're gonna share out a report.
He gave us a high-level briefing about it, and I didn't do the research, so I wouldn't be the right person to articulate those findings, but from what I hear, it'll be shared out with all of us soon. There was also a social scientist who was in Ghana for another project, and she went and interviewed a whole bunch of people and gave us a report about what she learned on the ground. The triangulation between that phone survey and those interviews on the ground was really useful. For example, about 80% of the survey respondents were men. We were kind of curious: well, why is that? And we asked the social scientist, in a meeting where she was sharing some of her preliminary findings, what do you think about that? And she said, from what she's observed, in most households it's the man who owns the phone; there's one phone in the household, and other people will borrow it or use it. So that's the high level of what I know about the Ghana research. Any public documentation or publicly available resources? It's in process; I think they're still doing the analysis and writing the reports, from what I know. Okay, so it seems like we're all back. Thanks, Brendan. Thanks for everybody's patience, too. All right, so now this part is where we're gonna do a little collaboration together. As we've been talking about, design research and building products is a collaborative process; it takes multidisciplinary teams, lots of people. So we've defined this very high-level process so that we can all talk about our expertise and where it fits as we work together to build products. I wanna just talk very high level about this — and thanks to Maya for making this visual design. So there are seven phases here. What? Oh, my fault. Here we go, okay. All right, collaborative process. So, it's not really showing, right? Okay, I'm gonna just show... Do you know how to...? There we go, okay.
So I'm just gonna very quickly go over these periods of time where we do different things to get to an end result. So in Understand, you wear your researcher hat. You're discovering personas and needs, challenges and opportunities. And this is something I'd really like us to do more of here. Then concept generation: this is a period of time where you develop concepts from the fodder that you got from going out into the field, from our expertise as technologists, from our expertise as designers, from the things that we know — you generate a lot of concepts. And then you evaluate those concepts against what you know: against what the problem is, the technical resources, and other constraints. Then in Develop, you build and you test and you iterate to refine that concept. And then Review is to make sure everything's gonna work for everybody: engineers say, yeah, this is good to go; design says it's good to go; the PM, whoever needs to say yes, says this is good to go out; community engagement has a check. That last one is something I would like to suggest — I've heard it talked about before, and we're working to do that in some teams. And then Maintain is where we're measuring the impact and understanding how the product is evolving, and whether it is meeting the needs that we've been hoping it would meet all this time. So that's the high-level product process. Now, what I wanna do: Jonathan just sent out an email to everyone with a link to a spreadsheet. It looks like this at the top, and then down at the bottom there are rows with teams all along the left-hand side.
And if you're on a team, maybe gather with your team members — but group by expertise, not by the reading team and the editing team: group by community engagement, by front-end engineers, by back-end engineers, by whatever your expertise is. And if yours isn't there, put it there. Legal, maybe, if people are in the room. So, yeah, group together or chat together about it, and then we're gonna spend maybe 10 or 15 minutes for the spreadsheet to start being filled in. And then everybody who's put stuff in — each of the groups — can maybe get up and talk about the things that they do at each stage. Maybe not everyone does something in each of these stages; maybe you do. Does that make sense to everyone? Does anyone have questions about what to do? Does everyone have the spreadsheet? Any questions online? Jonathan sent an email to everyone. Yes, exactly. It's also on the Etherpad. Yeah. It's also in the chat, under the heading Links. Everyone's asking, is this supposed to be read-only? Is it what? Is it supposed to be read-only? No. Okay, thank you. It's no longer read-only. Is everybody seeing it? Getting it? So the problem did start on YouTube again. Currently, OIT is opening a ticket with Google. There's plenty of space on the Hangout right now for the three folks that are watching on YouTube, and the Hangout is performing just fine. Okay, so if you're on YouTube, please jump over to the Google Hangouts on Air link and you'll be able to see much more clearly. So, is everybody understanding? I'm not getting any feedback. I didn't get it. Okay, so: write the activities that you do in each stage. Like in discovery, what would you as a designer do, Katie? No, it's not you as an individual, it's your discipline.
So you as a designer — maybe you and Nizar can get together, with the other designers in the room, and say, hey, when we're doing concept evaluation, these are the kinds of activities that we would do. Can you repeat the question? Oh, the question was, Katie asked: do I put my name in a row and write the things I do? And I said, no, it's you as a designer, and maybe the design team who are represented here could get together, yeah. The rows going down are for disciplines: designers, engineers, front-end engineers, and so on. Hello, everyone. So I checked out the spreadsheet; it looks like there's a lot of great information in there. And lunch is here, but I think we should get through this activity and then have lunch. So, each little group, could you nominate one person to come up and walk us through some of the things that you do in this process? So, PMs, who would you like to have come up? Come on up and walk us through some of the things that you do. You don't have to say every single thing. You can take a mic — you'll need a mic. I don't mean to put you on the spot, but I think it'll be helpful. It's gonna be hard, but... That's true. I have to learn about decisiveness in product. You all should do it. So, basically what we wanted to write for all of these is that the product manager does everything and is the law, but we thought we could be a little more specific and inclusive. So, which segment is this? Understand. So for Understand: we try to identify the problems and the user segments that we wanna understand — so, not just asking questions about things we know we don't know, but trying to identify areas where there are probably things we don't know, but we don't really know what they are.
And then prioritize those things — give direction to the people doing the research about what kind of research would be helpful. Then in concept generation: brainstorm ideas, connect them with stakeholders, communicate concepts and plans, bring people together to talk about this, kumbaya. Then for concept evaluation: do the meta work of deciding what tests to do and what counts as passing a test. Create that framework, and then other people will probably be doing the tests and deciding whether the criteria are met. Oh, okay. Ah, that's great, thank you. And then for Develop: define the spec, evaluate the trade-offs, again worry about focus and prioritization, and help other people get to the areas where their work can be most helpful. In Review: coordinate analysis, communicate results, kind of be that access point, the bridge. For Release: announce the feature, gather feedback, communicate learnings with the team for further development. I'm just reading it. And for Maintain: again, prioritize what's most important to be done once most of the work has moved on to other things. Okay, yay. Now, who wants to go next? How about community liaisons? Here comes Rachel. Be sure to follow me. Okay — the doc kept changing as people adjusted things; it kept messing with me. So for Understand: our first step is to get a clear understanding of the problem that the product intends to solve. You need to define the users that the product is being created for, which usually involves data and whatever requests and concerns have been brought forward. I would say we'd start with contacting a wide spread of communities — all areas of the communities — and say, this is what we intend to do, and here's how you can be involved.
We might be looking for specific kinds of feedback, but we might also be looking for everyone's feedback — being really clear about what the problem is, and being really clear with the audiences. Concept generation: ask the question again of what does this solve and who is this for? What do you need to be able to do as a user? Sorry, Mushir and I were both editing this, so it's a little sloppy. Help users generate their own user stories — you know, as a user, I want to be able to do this; as an admin, I want to be able to do this. Create working groups of users to discuss needs; conduct IRC office hours or other public meetings to gather and generate ideas. With concept evaluation: help product owners compare the new idea with similar existing ones, or previous trials that existed on the wikis; help define the MVP from the audience's perspective against what the product team might consider to be an MVP. Ask questions like: are there blockers? Are there power-user tools we need to consider? Is there some situation we might find ourselves in that causes a shift? Development: getting the community updated on planning, data, et cetera; helping to incorporate the community into product development as far as the process allows; if there's a volunteer developer, maybe supporting that. Coordinating users for testing and feedback loops, discussing data impacts and reviewing concerns. And then once it's launched: reviewing, gathering feedback and correlating across different areas of product teams — making sure that the things that we're hearing are the things that you're hearing in your research, and that those match up with the A/B testing. We do deal with some subsets of users who have a very specific viewpoint. Coordinating users for testing and feedback loops, discussing data impacts with users and reviewing their concerns. What's a go and no-go? Discussing the timing of release with the different communities when we're releasing.
Is there something going on in the community that we need to consider, so that we release on one day rather than another, or do they feel it's ready? What is the data that we have to prove that it's ready? Ensuring a smooth rollout by not surprising anybody, making sure that it doesn't disturb any set of users or block any workflow. And then with maintenance: continuing to work with communities on features to add, bugs to fix, performance to enhance — basically gathering the feedback, finding out again how it correlates to the data and the impacts we're already seeing in other areas, and trying to sync all these things. And then accommodating needs to replicate the product in different languages, or even different projects, if it was originally done on only one wiki or one platform. Thank you. Yay, Camille. Let's see who would like to be next. Next is Program Capacity and Learning. That's you, Edward. Okay. I think I'm just gonna add one thing. We kind of already do a lot of this process in our team, because, well, originally our team was called Program Evaluation and Design. There are a lot of programs in the movement — the Education Program, GLAM, the Wikipedia Library, all kinds of activities that volunteers do on a repetitive basis. So you can collect data on that, learn from it, and redesign or improve the model based on your evaluation. But what I wanted to bring up here is that there's an element of time. A lot of agile and similar processes are applied to creating technical products. But a lot of the time we need to create more social types of products, or involve people in doing things with people rather than machines, and so the iteration time can take a lot longer. If you think of a conference like Wikimania — we actually do a process of evaluation of Wikimania, but we only get to do that once a year.
You don't really get to do that every other week. But within Wikimania, you have all kinds of different things, so you can evaluate the process as well. So I guess that's all I wanted to bring up and add to this. I think for the most part, a lot of it is similar to other teams. Well, thank you, Edward. All right, woo! Community Resources — is there anyone in the room representing, or online? Okay, well, on to the next. Front-end engineers — who? Come on up. So, for Understand, front-end engineers understand what the feature is. I would kind of compare it to the UX engineer below — I don't know if someone wants to, for example, maybe Flickr. But yeah: for the UX engineer, it's more about understanding where the issue is, working with the designers and eventually with the PMs, and thinking about the user; while for the front-end engineer, when it comes to the implementation part, it's really: what is exactly the feature that you want, and how can I make sure that what I'm gonna code is the feature you were thinking of? For concept generation: at some point, as a front-end engineer, you need to look at what libraries and plugins already exist, so that we can use those existing tools and avoid wasting effort duplicating work to implement the feature. Once we've done that research, we look at the tools and select them, and also help others understand the technical constraints around the concepts, and maybe even extend ideas with small technical refinements. So I think that comes back to the Understand part: make sure we actually understood what the problem was, so kind of go back in time and verify that. And also maybe extend ideas with some technical refinements — because sometimes you might think of something and it might actually be too beautiful, and you have to come back to reality.
And this is something I also put in the UX engineers' part: work with design to make sure concepts are not too far from reality, without stopping ideas early — because we shouldn't be blocked by technical considerations, but at some point we need to take them into account. Right, so next. Next, for the front-end engineer, you have the Develop part. This is about writing the code, writing the tests for your code, and iterating on it. And then you have the Review: make sure you respect things like the coding conventions and style guides; make sure you actually implemented the feature that had to be implemented; and keep security and performance in mind — it depends on what you're working on, but you should really have them in mind. For the Release, ensure the documentation reflects the product; and for Maintain, it's just fixing bugs. Yeah, let me cover the UX engineer, maybe. So for the UX engineer, it's less about the tools or the technical details; it's more about understanding the initial work that's been done with the designers and the product manager: okay, what is it we are trying to do? So: understanding the issue, the need, and the challenge; looking at whether the issue has come up elsewhere before, so we don't reinvent the wheel, because maybe a solution already exists; and looking at all perspectives on it together. For concept generation, it's working with design to make sure that the concepts are not too far from reality without stopping ideas early — we cannot always change the world, but we can still do something. For the evaluation: addressing UX pitfalls early that might otherwise end up in the product later, and asking, how can we prototype this fast?
I think this is important for the UX engineer, because you're not in the actual implementation — you're not being a front-end engineer. You're ahead of that; you're still in the exploration process, so it has to be fast. It's not several months of work. And if it's not fast, you have to rethink it; you have to iterate again, to make sure you can come up with something quickly. And for Develop: building the prototypes, then doing some A/B testing and any other evaluation that will help you drive decisions later. For the Review: you've done your prototypes, you've done some evaluation, and now you need to retrospect — look at all this data and see what's next. Check in with stakeholders, users, and analytics, and determine what works and what doesn't in your feature, in your design, in your prototype. And based on that, you can keep iterating. For the Release part: announce features, results, and reports on the key UX channels — communication is very, very important here. Listen to feedback, because others might have learned something too; you're not the only one who knows about it. Step back, and then try to document it well. And for Maintain, it's just: iterate. Eventually, at some point, you're gonna hand your prototype over to the front-end engineers, or nowhere. The prototype is not a finished product; it's something you need to keep iterating, iterating, iterating. Thank you, yay! All right. Research and data — Aaron, okay. Hey, so let me start from the beginning. So, the Understand phase — it's sort of weird to have this as a phase, because that's sort of the point of everything that we do, but it still fits into the product development bit. We do a lot of reviewing the literature: find out, has anybody looked at this problem before? Has anybody made some progress in this space already? Is there something that we can build off of?
Even if it's not an actual solution to the problem, it might be a measurement strategy that'll help us get at it. We perform some exploratory analyses, and this is really about scale and frequency: how many people, how often does this sort of thing happen? Even without knowing exactly what it is, we can still sometimes get at those sorts of things. Ethnography is important: actually sitting in the space that our users do, trying to use the tools that they're using, or following the community processes that they're engaging in. Without going through this, we can't even develop the measurements to measure how often they're doing these sorts of things. So sitting down and doing it is important. And of course, documenting what we learned in this phase on Meta. For concept generation — it really depends on the strategy that we're pursuing. It might involve sketching, if we're gonna build an experimental interface or something like that. It might involve storytelling: telling stories so we can figure out the bits, the traces that people produce, that we can measure or find correlations with. We'll write proposals on what type of research project we might engage in. We'll perform a different type of exploratory analysis that looks at correlations — what sorts of things are associated with other things in the system. And of course, document this stuff on Meta. For concept evaluation, this is much more like proposal reviewing. In this stage, we'll review the available technologies — and I'm using the word technologies broadly: it could be measurement technologies, or actual digital technologies that people might be able to use in this space. We'll perform replications of past analyses: if some other researcher did an experiment like ours in a different space, we might apply their methodology in our space and see if we can replicate their results. So we'll design and review the methodology at this stage.
So this replication, this exploring of past technologies — it's really about developing the set of methods that we're going to pursue to try and get at the thing that we wanna know. We'll discuss these plans with other researchers, stakeholders, and specialists, because we'll generally get good feedback on how we're applying these methods: we might catch things that we missed; stakeholders will know how disruptive our methods might be within the work that they're doing; and of course, there are a lot of specialists in various measurement strategies, so we'll reach out to them — the surveys group, that sort of thing. And of course, document what we figure out on Meta. For the Develop stage — again, this one's really different depending on the methods that we're gonna use. We might be calibrating our measurement strategy. We'll probably be taking measurements; that's a lot of what we do. We might be running simulations — usually agent-based modeling or something like that — to figure out if our measurements are working, or to see if our intervention will catch as many people as we expect it to. We might develop a technological intervention and then design and run a field experiment, or we might be working with somebody else's technological intervention and helping them run a field experiment. We'll be recruiting participants and performing offline analysis — sometimes we're not running an experiment, we're just analyzing log data, and that might be the only thing we do here. And we'll often compare methodologies, too. Sometimes there's no clear right answer on how to measure the thing that you wanna measure, so you just measure it in multiple ways and see if those measurements produce contradictory results. If they don't, then you're good: just pick one. And of course, document what we did on Meta. For the Review — this is quite a bit about presenting the results to somebody and having them react to them and challenge them.
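Aaron's point about comparing methodologies — measure the same thing in multiple, structurally different ways and check that they agree — can be sketched roughly like this. The log data and the five-edit threshold are invented for illustration; the real measurements are far richer.

```python
# Two independent ways to compute "active editors" (>= 5 edits) from one log.
# The data and the threshold are made up for the example.
log = [("alice", 1), ("alice", 2), ("bob", 1), ("alice", 3),
       ("bob", 2), ("carol", 1), ("alice", 4), ("alice", 5)]

def active_by_counting(log, min_edits=5):
    """Route 1: tally edits per user in a dict, then filter."""
    counts = {}
    for user, _ in log:
        counts[user] = counts.get(user, 0) + 1
    return {u for u, n in counts.items() if n >= min_edits}

def active_by_sorting(log, min_edits=5):
    """Route 2: sort usernames and count run lengths instead."""
    users = sorted(u for u, _ in log)
    active, run, prev = set(), 0, None
    for u in users + [None]:          # None flushes the final run
        if u == prev:
            run += 1
        else:
            if prev is not None and run >= min_edits:
                active.add(prev)
            prev, run = u, 1
    return active

# If two methodologies agree, pick either; if they contradict, investigate.
assert active_by_counting(log) == active_by_sorting(log)
```

The point is not the toy metric but the discipline: when two independent routes to a measurement disagree, that disagreement is itself a finding about the methodology.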
So we'll provide the data and present the findings in a language that people can use to make decisions. We'll try to reach out as broadly as we can, because we'll very often have direct stakeholders who need to know the answer: does this intervention work? But generally people will benefit from knowing whether these types of interventions work in general, so we'll try to hit large venues for this sort of stuff. We'll iterate on how we're explaining the observations that we have. A lot of scientific practice is not just coming up with a hypothesis and then testing it; it's also figuring out, well, why is it that we saw the result that we did? Are there any explanations that are interesting, or any explanations that aren't very interesting and would therefore mean we have to fix our methodology? Let's see, we'll propose future work at this point. Scoping a research project is really important; you've got to draw the line somewhere, so we'll have to point and say somebody else is going to have to pick up this project where we left off. We'll often also perform follow-up analyses proposed by stakeholders. So Wikipedians might show up on our Meta documentation and say, you know, I don't understand this bit, can you look and see if this other thing might be happening? We of course get a lot of follow-up requests from product teams and that sort of stuff. This is a delicate balance with scope. And of course, we document those things on Meta.

With release, this is much more after we've iterated on the explanation of the observations and we really think we know what's going on here. If we think that we can actually explain what's happening, then we'll provide the data and present the findings in a language that people can use to make decisions about whether they want to release a product.
A lot of times, when we get a positive result on something that's a contentious issue in the community, it's really important that we make a clear case for why we came to the conclusions that we did from the study, so that we can minimize pushback, or maybe even justify the pushback. Let's see, we'll write up a manuscript for future reference. This is really important. We'll also very often publish in peer-reviewed journals, because we're not the only people looking at open knowledge production; there are a lot of other people who can benefit from the research that we do. We'll give talks to broad audiences. I encourage others to explore the future work for us, because we're not that big of a team, so it's great when other people can pick this up. And we'll also upload data sets to public repositories, and then document all that stuff on Meta.

Finally, maintenance. This is much more on the theory-building side. Everything up until now has really been testing hypotheses with a product, but in the long term we want to develop theories on how these sorts of products work, and that's really what we're getting into with maintenance. So longitudinal analysis fits in here; it fits the time scale. A lot of times we'll get short-term results and we think we know what's going on, and the longitudinal analysis will help us make sure that we were right, that we were seeing what we thought we were seeing. We'll often also work with data from long-running experiments. We'll very often do the analysis before we've stopped collecting data, just let the data collection continue, and come back and revisit things, making sure that our hypotheses are still supported. And if they are, then we try to summarize them into theory. It's not just that this intervention worked; there's a reason why it worked, and that should imply that other things should work, or maybe not work.
And so we want to turn those into theory so that they're much easier to reference. This is sort of like moving from the research paper to the textbook. And then supporting external researchers who want to replicate and extend our work, use our measurement strategies, use our data sets, and document these things on Meta. Thank you.

One more team, come on up. Zach from communications. Fabulous.

Okay. So communications will be brief. As we looked across this approach, we realized that not all of the sections are evenly weighted for how involved we would like to be. We saw Understand as a very critical, foundational space, and we'd have a lot we'd like to add there. One thing we'd love to help with is defining the audience segments and the contextual realities around the audience. So asking: what other media habits exist? What is trusted? What is read? What is watched? What is listened to? What are the other media touch points that surround the people we're looking at? Building context around the personas. Then we would also want to assess brand perception: what is Wikipedia understood to be within these regions, within these audiences? Are there trust considerations? Are there familiarity considerations? Again, these things are uneven, so we would want to help build out the understanding of them as we come into this.

In concept generation and evaluation, again, you can see we're leading with those verbs of "help," because we see ourselves in a supporting role here. We'd like to help with audience work alongside community liaisons. They actually mentioned this already, but also building the user challenge or user story: okay, we've got a problem, but how does that problem come to life? Is it believable? Is it succinct? What summarizes it?
And then as we move into evaluation, we've begun using the social media channels increasingly to quickly validate or get widespread feedback from our communities. On Facebook, we're able to target these messages, so we can basically do creative A/B testing and ask direct questions like: what does Wikipedia mean to you? What did you learn on Wikipedia today? Have you ever been on a talk page? Things like that can be assessed, and they can be localized down to specific countries or specific language users.

Develop, wow. You can see that the column really jumps up at Develop, because this is where we would come back in a big way. We'd want to continue A/B message testing, but we'd also want to start running some messaging and positioning workshops, gathering all of the people who are thinking about that, so that when we begin to communicate about the new features, the new products, the new tools, we've already thought about how they're phrased and how they're presented, to be really succinct and exciting. We would also continue ensuring local-language localization, making sure that these things work country to country, region to region. And again, utilizing our vast comms list there to make sure that the individual communications leads with the affiliates and the different chapters are super involved.

Review and release. Here again, we'd want to make sure that we know where we should place the story. Juliette was just with us and she was talking more about this: making sure that we would work with the different media outlets and say, you know what, within this market we'll have a really great way to storytell through these periodicals and these journalists, who are very attentive to this and can storytell well for us. Of course, at this phase we would also start doing that communication strategy.
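The A/B message testing described above usually ends with a significance check on the two variants. Here is a minimal sketch, assuming a two-proportion z-test on click-through counts; the numbers and the helper name `two_proportion_z` are illustrative, not actual campaign data.

```python
# Hypothetical sketch of analyzing a creative A/B test: two message variants,
# click counts out of impressions. Counts below are made-up for illustration.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two click-through rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 120 clicks / 2000 impressions; variant B: 90 clicks / 2000.
z = two_proportion_z(120, 2000, 90, 2000)
print(round(z, 2))  # |z| > 1.96 suggests a difference at the 5% level
```

In practice a library routine (e.g. statsmodels' `proportions_ztest`) would be used instead of hand-rolling the statistic, but the sketch shows what the test actually computes.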
I hope you're all familiar with that at some level; if not, we'd love to make you more familiar with it. That's when we'll sit down and say, all right, what's the perception you want the audience to take away? What are the key messages you want them to know? Let's prioritize those messages. Let's have a force-ranked list: if they only understand one thing, it's this; if they understand two, it's this; three, it's this. Kind of creating that sense of hierarchy.

And then on the release point, that would be a moment where we would spring into action. We would be working with everyone else to make sure we get a really succinct but omnichannel effect: we're telling the story on a number of platforms, so hopefully the blog and our social channels, the things that we own. But then also getting the community to tell the story of their involvement and what they think this solves, and working with press and influencers to make sure that the story is told there.

I think our last column is maintenance. Is that right? Basically, in maintenance it's pretty simple again. Depending on the importance of the product and the solution that's been developed, we'd want to make sure we're still asking the questions around how it's working for people, whether there are things that are breaking down, and whether there are things they want it to go further with. And we'd also like to make sure that it's established in our editorial calendar, something of a reminder, right? Because on our existing social channels we have a lot of turnover; we have new people starting to follow us on Twitter and Facebook and our other channels. So messaging once is not enough. You want to build a cadence where messaging every month or every two months continues to make it something people think about and try out. Thanks.

Thank you, Zach. You're welcome. Awesome. Okay, I'm getting hungry.
I think many of you might be getting hungry too. So that's it; we're done. Thank you, everyone, for collaborating on this activity. I'm really excited; I learned some things today. And we have this document, and we can keep filling it out with other teams that aren't here. And we have a parking lot. Is there anything in it? Okay, great, there is. What we'll do is look at the stuff in the parking lot and follow up with an email. If you want to have a meeting with us to discuss things, or if you'd like to chat over email or have a hangout, whatever works, we will make sure to check each of those things and follow up. And that's it. Thanks, everyone. Lunch is ready. Thanks, everyone online. It was really good to see you.