So, I'm guessing that people are going to continue to filter in, which is awesome, but we're going to get started. My name is Dani Norden, and I'm going to be talking today about collaborative usability testing for lean and agile teams. So before we get started, a little bit about my background, if this will work. I am the senior UX designer at Harvard Business Review, and my mission is essentially to take a bunch of marketers and technologists and designers and editors and turn them into user-centered designers, which is a really, really fun and rewarding task, although sometimes it can be difficult. I've also been in the Drupal community for several years. I wrote the book Drupal for Designers for O'Reilly, and I also do video trainings for them. And I'm the co-organizer of Design for Drupal Boston, which, by the way, I have two things for you about if you want to come up after the session's over. One is postcards for Design for Drupal, if you're interested in coming to the camp; the site is now live and accepting sessions. The other is a fantastic discount code that the folks at O'Reilly have given me for 50% off eBooks and I think 40% or 50% off video trainings. So if you're interested in either of those things, come and see me after the session's over. And this is my daughter with the aforementioned stuffed pig. So let's talk about usability testing. I have talked a lot about this subject, and user research in general, over the past few years. And one of the things I always hear is some variation on "well, we just don't have the time for that" or "our clients don't have the budget to do that kind of research." And I can understand the need to prioritize resources, especially when you're working on tight timelines. But the reason we test is because we, you and I sitting here making products, are not making products for us. We're making them for a set of people who have a very real set of needs and problems.
And the reason we test is because we want to make sure that those people out there in the world who are going to be using our products understand how to use the product, and also understand how this product solves their problem in a way that is better than the way they're currently solving that problem. It also does a great job of helping us as a development team prioritize what we're going to work on. This is one of the fundamental reasons why we test. If we test and we realize that a feature we were intending to build is not useful to any of our audience, then you know what? We've just saved ourselves a heck of a lot of development time. Same thing if we're testing a prototype of something we're about to build and we find out that there are major usability problems. We've just saved ourselves a huge amount of development time just by talking to people and watching them. The reason we want to do this is because assumptions can be extremely dangerous. Let's talk about Windows 8. The fun thing about Windows 8 is that they did usability testing while they were creating that software, and guess what they heard? "Where's my start menu? Where'd it go? I mean, what is this? Where's my start menu?" And they did not listen to those people. They said, you know what? They just don't get it. We're creating a new paradigm. We're innovating. We're disrupting. And guess what happened? Several weeks after Windows 8 was launched, the Internet exploded with "where the hell is the start menu?" Third-party vendors started coming up with applications you could download to get your start menu back. One of the best reviews of Windows 8 that I've seen actually compared the interactivity within the system to a goblin coming out and farting on your head while you're trying to check your email. And guess what happened?
Windows 8.1 came out a couple months later, and guess what came back? The start menu. Windows 10 has been announced, and guess what the first thing they're talking about is? The start menu. When we ignore the legacy of knowledge and built-up learning that our users have experienced with our product, and we refuse to listen to what they say, we get into trouble. We become laughingstocks. And I don't think anyone in this room will disagree that Windows 8 became a bit of a laughingstock for a while. So now that we understand why it's important to test, let's talk about what actually goes into testing. Now, there are many different methods that you can use to test. Some take a very short amount of time. Some take a much longer amount of time. But the anatomy of a research plan is pretty much the same. It starts with understanding your research goal. What is it that you actually want to learn? And what parts of the interface do you need to understand how people use? We also want to get a sense of who the people are that we want to talk to. What are the problems they're facing? What are the specific characteristics that they might have? Are there specific demographics? Are there specific levels of tech savviness? All of this shapes how we actually screen and recruit people. Then we want to get into some of the logistical details. For example, if we're testing an e-commerce platform, how are we going to get dummy credit card information so that we're not asking people to put their actual credit cards into our system? If there are features that are only available to a certain subset of our users, how are we going to make sure that we have test accounts created, so that everyone's getting the same basic environment and we're not taking up valuable time during the testing process walking people through getting set up in the system? So, a couple of things that can help with this.
The first is to involve all of your stakeholders as soon as humanly possible, and get them aligned first and foremost on the actual goals of the research. One of the things that you'll find when you're doing goal setting with your stakeholders is my personal favorite: "well, we just want to understand what the pain points are in the system." Does that sound like a research goal to you? Does that sound like a thing you want to learn? No, we need to get more specific than that. What specific pain points do we anticipate in the system? And how are we going to focus the things that we're studying to a narrow enough area of the interface that we, A, can focus our efforts in one particular direction, and B, know what questions to ask people and can keep them there for the amount of time allotted for the test and respect these people's time? The other thing that's really important is to have a bunch of different methods at your disposal. A lot of people think of usability testing as two extremes. One is: you have to sit with me for an hour in a usability lab, and I'm going to watch you do things and ask you questions. The other is: I'm going to run up to you in a Starbucks, shove my phone in your hand, and ask you to do some things. Neither of those is the only way that you test. We're going to discuss in a little bit some case studies of different ways that we've tested things at Harvard Business Review, but there are all sorts of ways that you can test based on what you want to learn. Card sorts and tree tests are great for testing information architecture and understanding how people group things. First-click tests are fantastic for discovering whether people can find the thing you want them to find. And they're quick and dirty, and they do not require a ton of investment in time.
Unmoderated testing through services like Validately, UserTesting.com, and UserZoom is really great for very small, discrete parts of your interface that can be tested in 10 to 15 minutes or less. It's not good for anything where someone's going to be putting in credit card information. It's not good for anything where the prototype is still so janky that someone could get hopelessly lost. And then of course there's moderated testing, which is the stuff that many of us think about when we think of usability testing. This could be happening in a lab, or this could be happening over screen-sharing software, which is how I usually do it. Those are typically reserved for much larger sections of the interface, or for really understanding people's mental model of how they solve a particular problem. So we do that a lot more for generative research than we do for basically just figuring out what's wrong with things. The next thing we want to understand is how we actually screen people. Now, screening people means asking them if they qualify for a study. I'm still trying to figure out a nicer way of saying that. But essentially it's saying: you are the type of person who is trying to solve the type of problem that this product solves. That's all we're doing. We're just making sure that you are the likely target audience for this particular product and that you have some relationship to the type of problem that it solves. Now, there are two ways that I screen participants. One is through UserTesting.com, which we actually use extensively at work. In that case, what we do is we have a couple of screening questions that essentially include or exclude certain people based on things like their job title or their level of income, so that they're more likely to be HBR readers. The way that I screen participants for moderated studies is through a survey that is delivered via email to registered users in our system via SurveyMonkey.
And there's logic set up within the survey that will politely disqualify you if you don't meet the right criteria. And if you do meet the right criteria, then you get sent to a service called Reservio, which allows you to book an appointment with one of the researchers, which is a fancy way of saying me. This piece of it, the screening and the scheduling, is the toughest part of running usability studies. It is the most painful, it is the most annoying. So the more you can automate, the better off you're going to be. But participants are still going to need reminders. So once they set up their appointment in Reservio, I then create a meeting in join.me, which reminds them and sends them details for the call automatically, so I don't have to think about it. And this is really helpful for making sure that people aren't dropping out randomly, which actually happens much more often with lab testing, but it happens with remote testing as well. So let's talk about a couple of different case studies. I'm going to present basically one of every type of research that we've done at Harvard Business Review. I worked at the Bentley User Experience Center before I worked at HBR, and this is very representative of the types of studies that we ran there as well. This first study happened when I first arrived at HBR. We had launched the new site about a month earlier, and we wanted to get a sense of how people found content, especially now that the architecture had completely changed, and what types of things they were looking for. So this was an unmoderated study. And the first thing we did was we got a bunch of stakeholders from different areas around the business together in a room and had them align on the actual things that we were trying to learn. And that can be a little bit challenging sometimes.
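As an aside, the include/exclude survey logic described a moment ago isn't code at all, it's configured branching in SurveyMonkey, but the rules it encodes look something like this minimal Python sketch. The question keys and qualifying criteria here are invented for illustration:

```python
# Hypothetical sketch of the branching logic a screener survey encodes:
# qualify only respondents who match the study's target audience.
# Keys and criteria are invented; the real rules live in survey settings.

QUALIFYING_TITLES = {"manager", "director", "vp", "consultant"}

def screen(answers: dict) -> str:
    """Return 'qualified' (send on to scheduling) or 'disqualified' (polite exit)."""
    if answers.get("job_title", "").lower() not in QUALIFYING_TITLES:
        return "disqualified"
    if not answers.get("reads_business_content", False):
        return "disqualified"
    return "qualified"

print(screen({"job_title": "Director", "reads_business_content": True}))  # qualified
```

The point is just that every rule is a yes/no gate: a respondent either routes on to the scheduling link or hits the polite exit page.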
Because commerce has a very different set of things that they want to learn than editorial does, for example. And someone in the room is going to say, "well, we just want to understand the pain points of the system," and you have to get a little more specific about what exactly we want to test. Then what we did was we had each person in the room start writing their questions on sticky notes, without talking. And then we put them on the board, and I was able to look through the list and realize where there were questions that were coming up multiple times. I'm like, okay, so this is something we definitely want to know. And from that, we were able to create a test flow, a series of tasks that our users would go through. From there, it was actually pretty easy to just put things into UserTesting.com and launch the study. Once we did that, we also made the strategic decision, because we are a responsive site, that every study of this type would test half mobile and half desktop. So we got 18 videos, approximately. We ran one pilot first on each platform to make sure that our tasks made sense and people could understand the flow, and then we ran another group. We got all of our results in about three hours. This was using their panel: we sent it out, and we got 18 videos in about three hours. Then I spent some time going through the videos, taking notes, making highlight reels, and making sense of all of the different things that we had learned and needed to work on, which I then wrote up as a memo. And what's really fun about the memo is that it was nine pages total, which included the test plan. Now, I don't know if any of you have ever done usability testing before or have worked with an agency to do usability testing, but when I worked at the User Experience Center, a test report was 60 pages of PowerPoint.
With screenshots and, you know, fun arrows and all of this other stuff. It took like a week to write and went through several rounds of revision. And I can guarantee you, not a damn person ever read it. This was presented as nine pages. It was sent around the building. Everyone completely went nuts for it and loved it. And it was then presented in a meeting with half the building, essentially. What was nice about this was, first of all, it was a heck of a lot easier for me to do, but it actually made change. My manager and I were able to take these results and these recommendations and put them into JIRA as tickets that could then be prioritized in the backlog. So it was really nice to be able to see the progression from giving the results to actually seeing changes happen on the website. The next study we did was to understand how people used their account features. One of the things that happens on hbr.org, if you're a registered user, is you get access to essentially save and share articles within your own personal library. So you have this whole section of the site that's devoted to you, where you can follow topics that you're interested in and get articles related to those topics, and you can save magazine articles and other articles into your library to access later. And what we wanted to understand was how people actually organize the content that they need to refer back to later, because all of us have this stuff, right? We have the blog post that we always turn to. We have the articles that we want to send to our colleagues because we have to prove a point to them. So we wanted to understand how our users did that. This one was a moderated study, and we started basically the same way. We aligned on goals. We involved sticky notes, because that's very important in any meeting I'm in. And this one was actually created as another memo.
So this test plan was a combination test plan and notes grid, which is basically a fancy way of saying the place where you take your notes. The screener was created in SurveyMonkey. People were recruited through the marketing team, via email, and then they were sent directly to Reservio to schedule their appointment. And then we went over web conferencing software, in this case WebEx; this was before I was using join.me. And essentially I started asking people a bunch of questions: tell me about a time when you had to learn something new for work. Where did you go? What did you do? How did you collect that information? What tools did you use? So it was a combination of usability test and almost a user interview. It was getting people to actually tell me about the process they used, completely independent of our site, to collect this information. And as I was doing this, I started collecting these quotes, these interesting tidbits, these things that I was uncovering as problems with the way that our site was constructed that conflicted with the way that people thought about this process. I also used it as an opportunity to do a little creative child rearing, because you gotta. But what I found was there were very distinct patterns in the way that people organized content and the way that people thought about the type of content that we provide. And so I started writing down these little patterns, almost as mini personas, and then I turned them into more fleshed-out personas. And we socialized these throughout the organization, because we do have profiles of our visitors, but what we didn't have was action-based personas that actually talked about how people use our product and our content.
And then we were able to do a journey mapping workshop with all of the stakeholders to get them to understand the different workflows and the different problems that people have faced using our website, using our content, trying to get done the things that they wanted to get done. And that workshop not only uncovered one really critical usability problem that was affecting subscribers, who are our most valuable audience, but it helped open the eyes of several members of the marketing team, who did not realize that the specific things they were doing to get the information they needed were actually stopping people from doing things like completing a purchase. My favorite moment in usability testing was when we were testing the digital download process, and we knew that people couldn't figure out where their downloads were, because they were sort of hidden within the interface. But I had identified, my third week there, that when you actually try to buy something, it sends you through this whole registration rigmarole. And I don't know if you know this, but there are at least four different places where someone could actually forget they're making a purchase. And so we did a round of studies, which we streamed in one of the big conference rooms, and had people come down during the day. And the head of marketing came down for a couple of minutes just as one of the participants was being reminded by the moderator that she was making a purchase. And I looked over to the head of marketing and I just said, "Fourth person today." Let me tell you, that changed some minds. So this, again, is why we test: because we have so many people that are involved and have a stake in all of our products, and they are going to have very different opinions. And these are good people. They want to do what's best for the business and what's best for the customer.
But often they have blind spots that they don't necessarily know about until they actually see people struggle with the products that we make. So this study was a little more expensive than a UserTesting.com study. This study took a total of four to six weeks to plan, implement, and analyze. It also required compensation, so we had to give $50 Amazon gift cards. But again, what we learned has completely changed how we're going to build these particular features. The final case study I'm going to share with you is an unmoderated study. This was a first-click test. The story behind this was basically that we have this red bar, this call to action that tells you to register or subscribe on the site. It basically represents our paywall, and it's very strategically important to the business. It's relatively unobtrusive, although a lot of people in our building are not fans of it. But what was happening was that the way this red bar was stuck to the menu, just at a code level, was forcing the menu to get extremely jumpy and jump around on mobile phones. So it was really problematic. And we have this great front-end developer who was coming in for the summer. And he said, well, you know what? If we actually fix this to the bottom, instead of sticking it along with the menu and having all three of these things tied together, we can probably fix the majority of the jumpiness. Now, mind you, again, this is very strategically important to the business. So if we're going to make a change like this, we need to make sure that we're not going to inadvertently lose people. So what we did was we created two screenshots. One is the control screenshot, which is basically the red bar in exactly the position it is when you look at it on a mobile phone. And then we created a variation where the red bar was actually pinned to the bottom.
And what we asked people was: please click on where you would go to buy a subscription to this magazine. Very simple, single question. Half of the people got the first screenshot, half of the people got the second screenshot. And what we discovered was that 15 people clicked the subscribe link in the control screenshot, which is where it is normally, but 20 people (19 plus one other person) clicked it when it was at the bottom. And what was even better was that those 20 people came from a smaller pool than the number of people who actually saw the control condition. I forget the exact numbers, but I think it went from something like 18% to 40% of people clicking in the right place. So guess what we got to do? We got to move the red bar. Now, this test cost nothing more than the platform, and the platform itself was relatively cheap; it's like a thousand bucks a year, maybe. And we spread this on social media. So this went out on our Twitter and Facebook, and we got a couple hundred people who came in and took this test. And it takes 60 seconds of people's time. You don't even necessarily have to give them compensation, because most people are happy to just go in and do what you need them to do. So if you're doing things like testing labels, or trying to understand if people can figure out where to go to get to a thing that they want, this is a really cheap and really effective way of getting the data you need to make decisions, especially when they have huge strategic impact on the business. So these are the three primary methods that we use at HBR to understand our users and figure out how to use data to drive our design decisions. And they've been really effective so far, and we're constantly evolving our process to include more of this type of activity in our workflow, and we're able to do it without actually disrupting the activities of the tech team. Which is a really nice balance.
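When a first-click A/B test like the red-bar one comes back, a quick sanity check is whether the difference between the two variants could just be noise. A standard way to do that is a two-proportion z-test; here's a self-contained Python sketch using illustrative counts (the talk's exact sample sizes aren't recoverable, so these numbers are made up):

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Compare first-click success rates of two variants with a pooled z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Illustrative counts only: 15/110 clicks on the control, 20/95 on the variant.
p_a, p_b, z, p = two_proportion_z(15, 110, 20, 95)
print(f"control {p_a:.0%} vs variant {p_b:.0%}, z = {z:.2f}, p = {p:.3f}")
```

With a couple hundred participants from social media, even a check this simple tells you whether "more people clicked the bottom bar" is a real effect or a small-sample fluke before you make a strategically important change.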
So to wrap up: make sure that you start with the research goal and you understand what it is you actually wanna know. Involve your stakeholders as early as possible, and find ways to make them be there. Make them watch the videos. Especially if you're just at the point where you're trying to get investment in this kind of activity within your organization, having them actually sit there and watch people struggle is remarkably effective. Especially developers. One of the things I'm experimenting with is actually making stakeholders note-takers, forcing them to watch every single session and take notes. 'Cause I'm fun. And then again, use a variety of methods. Don't depend on one single method. There is no one way to do user testing. And the more methods you can build into your research toolbox, the easier it's going to be to say, you know what, I think that's a great idea, let's run an A/B test on that, because it's going to take us like a couple of days and we'll know what we need to know. And then automate as much as you possibly can. If you have a marketing team or a social media team that is working on promoting things about your brand, involve them early. Get them to help you build a participant database. Get them to help you promote these tests on social media. It really makes a huge difference and takes a large load off your shoulders when you're trying to get people. And that is basically what I have to tell you today. Did I go short or long? I can't tell. I'm short? Oh, darn it. I can tell more stories. So let's talk about usability testing. What kind of stuff are you guys doing? No, you're not doing anything? Okay. Oh yeah, so that is one thing that people like to know about. They like to know how much it costs. So the costs really vary depending on the service. One of the things that I like is a tool called Optimal Workshop, and that has three different tools available.
One is Chalkmark, which does just first-click tests. Another is Treejack, which does tree tests, which are essentially usability tests for your information architecture. And then there's another one called OptimalSort, which does card sorting. And what's nice about those is that the entire platform is something like 3,000 a year. And those are really, really effective if you do a lot of information architecture work. Then there's the Notable platform, which is the one that we use for the first-click tests at HBR. That is a platform that is, I wanna say, like 100 bucks a year or something like that. It's not that expensive, and in addition to the first-click tests, which you can run as A/B tests with two different versions, you also can do prototypes, and you can do label tests, which are really good for testing icons. With a label test you literally ask people to label what they think this thing represents or what they think they're gonna find, and we've been doing a lot of that to test icons. Oh yeah, it's called Notable, and it's actually run by ZURB, the same guys who do Foundation. So it's really cool. And that one, again, is the cheapest of the tools that we use. UserTesting.com can be expensive, but it's really worth it for the platform. And then with the moderated studies, a lot of it is just coordinating with various departments around the building, so most of the expense is actually in participant compensation, which is usually a $50 Amazon gift card. Occasionally it's free product. Yes, no problem, thank you. Oh yeah, so the four to six weeks is really an on-and-off sort of thing, because recruiting people takes time, and that's really what it comes down to. In terms of the team members, typically the stakeholder workshop is about an hour, and that's really just getting a bunch of people in a room to agree on what we're gonna test.
Then putting together the test plan is like a couple of hours, and then you usually have a little bit of time for editing it. The biggest time cost in moderated studies is having people observe. A lot of times we set it up so that we're just streaming the study in a conference room and people just sort of float in and out. At one point we had this thing called tech office hours on Wednesdays, and so I basically camped out there and made people watch the study. Which was very fun, watching the developers go, oh yeah, that's a bug. Oh, no, we gotta fix that. And one of the developers is brilliant and hysterical, and he was just like, oh shit, okay. Go fix that now. That should be fine now. Oh, Steve Krug, yeah. Yeah, Rocket Surgery Made Easy is interesting. The Krug method is another method of testing that we're hopefully going to get to fairly soon. In this method of testing, you actually work with prototypes. So you don't work with the live site, you work with prototypes, and it's really good for the early stages of testing. And this is actually one of the most fun and most productive types of testing. Because basically what you do is you plan the test, you get the prototype together, and you run the tests one day, and every time you see a major usability issue, like something that's just stopping people in their tracks, you make changes to the prototype and you push it for the next test. And you just keep refining as you see more usability problems, and then you debrief, you spend a day adjusting the prototype, and then you go for another day of testing on day three. And by the time you're done with that, you should have refined quite a bit of stuff. You've talked to about 12 people, and you've gotten the prototype to a point where you actually know what most of the usability problems are and you can get it into production pretty soon. Yes. So, accessibility testing is sort of an interesting duck.
I mean, accessibility testing can happen in a bunch of different ways. One of the things that I'm trying to deal with now is that there are several ways in which our website is openly defying the WCAG guidelines. And so I'm working very hard with the design team to gently remind them that the web is not a magazine and not a printed page, and people look at it differently. Part of it has been discussions like, "well, do we know how many people are using screen readers?" And like, no, no, no, honey, this is every human over 35 that we're offending right now. This is a thing we have to deal with, because people have eyes, and those eyes see things differently on a screen than perhaps yours do when you're reading a magazine on paper. But it's been a really good lesson in getting people to sort of understand what it's like to exist and to work on the web. And part of our accessibility work right now is mostly in just getting things in line with the guidelines, rather than having people with screen readers or other assistive devices actually use our products; I don't believe that they do yet. There are definitely things that you can do that will get your site up to code. So WebAIM is one of the things that I use to make sure that contrast ratios are working and to figure out where we score on the different guidelines. But yeah, accessibility testing is something we're probably going to be getting into a while from now. Yeah, well, one of the things that's really nice about working at a company that does business publishing is we have a lot of people who do a lot of research on our users. So there was actually work that was done before I arrived that essentially created user segments. And this is actually out of the Lean UX book from Jeff Gothelf.
He has a great tool called proto-personas, which are basically initial character sketches of the sort of broad types of people that your product serves and the types of things that they're trying to do. So the more you can get behavior-based, as opposed to demographic-based, the better off you are with recruiting. For example, we're launching a study in the next couple of weeks for the Visual Library, which is essentially a curated collection of HBR's infographics that you can download as a subscriber. And the types of things that we use as screening criteria are things like: I often include graphics in my presentations. I consider myself a visual thinker. I like to collect infographics. These types of behaviors, as opposed to "I'm between 25 and 65." Because when it comes down to it, demographics like that aren't very useful anymore. Exactly. They're just not very useful anymore in actually understanding how people behave on the web. Yes. Hi. You talked about understanding the user base and rearranging architecture, and I noticed that you have more kind of action-based items. How do you suggest working with people with different kinds of content and moving things around? I know you suggested a couple of things about bringing that to stakeholders and getting them to understand the difference. Oh yeah. So the best thing to do, if you're really talking about, I think what you're talking about is there's a ton of different content and we need to know where to put it. And my guess is, because I have worked in universities before, that there are a lot of people with a lot of opinions as to where said content goes. Yes. And so there are two really important and really useful tools. One is card sorting. In card sorting you essentially create cards that have the labels of the different pages or the different pieces of content.
Each card also has a description of what the content is. And you have people go through and put the cards into buckets; those can be either closed buckets that you create, or people can create their own. And what you find, after about 50 people do this, and you can literally send this out as an email and promise them something in return, is that you start seeing very clear patterns in how people are grouping things. And you start getting a picture of what your information architecture should be. Once you have that, you can take that architecture, put it into an Excel spreadsheet, and upload it into a service called Treejack. UserZoom also has a service like this. It's called tree testing. Essentially, you create a couple of common tasks that your users have to perform. So an example for a university would be: you'd like to learn more about the requirements for getting into the school; you want to know where the campus is; you want to know where this office is. Show me where you'd go to get there. Participants then use a stripped-down version of your navigation to perform that task. And you get to see not only whether they actually find it in the location where you have it, but also how long it takes them to get there and how many times they had to try again, go back, and keep looking. They call it speed and directness, essentially. And so you can find, well yeah, everyone finds it, but they have to click around five different places to find it. Those two are really the best ways to test your information architecture and to make decisions based on data and not on endless meetings.

So for that, you'd probably be better off doing usability testing or interviews, but it doesn't change the problem of, like, our architecture is a mess and that's too bad for me. Yeah, analytics can be hugely useful, hugely useful. I think you, and then you.
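As a rough sketch of the "speed and directness" idea, here is how a tree-test result set might be scored (this is an illustration, not any vendor's implementation; the task, node names, and click paths below are hypothetical):

```python
# Hypothetical tree-test results: each entry is the sequence of navigation
# nodes one participant clicked; TARGET is where the content actually lives.
TARGET = "Admissions > Requirements"

paths = [
    ["Admissions", "Admissions > Requirements"],           # direct success
    ["About", "Admissions", "Admissions > Requirements"],  # backtracked, then found it
    ["Academics", "Academics > Programs"],                 # gave up in the wrong place
]

def score(paths, target):
    # Success: the participant's final click landed on the target node.
    successes = [p for p in paths if p[-1] == target]
    # Direct: every click stayed on the correct branch (no wandering off).
    direct = [p for p in successes
              if all(target.startswith(step) for step in p)]
    return {
        "success_rate": len(successes) / len(paths),
        "directness": len(direct) / len(paths),
    }

print(score(paths, TARGET))
```

With these three participants, two of three succeed but only one of three is direct, which is exactly the "everyone finds it, but they have to click around five places" pattern described above.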
Yeah, oh nice. Oh, unmoderated? Yeah, so the basic rule of thumb if you're doing an unmoderated study is that it has to be a situation where no one can mess up a database. That's really what it comes down to. Having people create fake accounts on your production site is not necessarily going to make your marketing department happy, is all I'm saying. The other thing to watch out for is credit card information, unless you give people a dummy credit card to use. So if you want someone to find something they'd like to buy and buy it, you want to make sure you're giving them fake information so they don't have to put in their own personally identifying information. Because the big thing with unmoderated testing is there will always be video. With moderated testing, you might have a certain piece of the test where you turn the video off so that that information isn't on the recording, but with unmoderated testing they're going to get video all the way through. And the last thing you want is someone fishing out their credit card and giving you that information.

Oh, actually, hold on. If you guys could line up behind the microphone, I think that might be useful.

Hi, I work for a digital agency out of Colorado, and right now we have probably five or six people in the office who are really passionate and interested in usability testing, but it's kind of happening in this organic way. You know, it's coming out of our post-production plans, or we're getting recommendations from our SEO or account managers where they're seeing problems in the booking funnel or just in the UI in general. So we really want to take that next step towards monetizing and restructuring this so that we can begin offering usability testing as a professional service offering.
And I'd love to hear any recommendations you have for taking that next step, and your recommended tools for that as well.

Sure. I've really loved usertesting.com, although Validately is another service that is becoming increasingly attractive to me. Both of them are pretty reasonably priced if you compare them to going with a usability lab, for example. I think the biggest thing you need to get started usability testing is the decision that you're just going to do it. That's kind of it. I used to get clients all the time who were like, well, you know, that's a little bit expensive, do we really need all this research? And I would just tell them: yes, we do. We need all this research because you want me to do it right. Obviously, sometimes that's a little harder to get away with. But to set up for moderated studies, really the only things you need are good screen-sharing software, a service like SurveyMonkey and Reservio, a good mailing list that you're starting to put together, and essentially an old laptop. That's it, that's all you need. And then it's really just getting into the process of actually creating the usability tests and running them, which, once you've done it a few times, you really get the hang of. The Handbook of Usability Testing is a really good resource for that, and Rocket Surgery Made Easy is another one, which goes over the Steve Krug method. Between those two, you'll be able to figure it out. But it doesn't have to be onerous, and it should happen at each stage of the development cycle. It should not just be a post-production thing; it should also happen while you're building. You're up.

I work for a small shop of developers, and we take on clients.
Two of the things I've noticed that smaller clients are kind of resistant to, probably because of the price factor: one is getting a designer up front, because they think that since their friend did the wireframes and knows something about this, that's going to work. And the other is usability testing at the end, because they think, well, my friend who did the wireframes knows all the steps, and I know I created this trail through the site that people will follow. And then they end up with all these changes at the end, because as they go through it they realize, oh, it's not very usable, even though during the process they've been saying, stay with my wireframe, do it this way. And those changes cost them a lot of money. But I'd like you to speak about the importance of getting a designer up front together with the developers when you start a project. And is there any easy way to present this to a client who doesn't understand why they need it?

Oh gosh. Sorry. That's, I mean, that's an easy one. So I think that part of it is just education, and part of it is taking a stand. One thing that can be useful is to have a service like Validately, for example. I know I'm name-dropping a whole bunch of tools, but whatever. Validately is really nice because it's 40 bucks a month. And if you create your own test panel, like with MailChimp, you can get some questions in there to segment users so you can create a screener. It takes a little time to build that up, but if you do, at most you're spending $10 a test, and you can find the budget for that. And so you can actually build that into your sales process: we include this amount of testing; we will test with five people, we will find the usability problems, and then we'll talk to you about it.
If you can find things like that, like Validately, and then Deserve Platform is the other one that's really not that expensive, they can get you a lot of quick feedback from targeted individuals about what might cause people to fall down within an interface. And if you start there, you can usually build up the understanding of why this is important. One of the turning points for me when I started at HBR six months ago was being able to actually create highlight reels and show people a guy taking a minute and a half to log in on his mobile phone because the site was so damn slow, being able to sit there and show people what was actually happening when people were trying to use our site. That video can be incredibly powerful. Incredibly, like unbelievably powerful. The minute I showed those videos, people were on board. And, I may tear up a little, one of the marketers said to me in a meeting, well, yes, but that's our internal term; I don't think a user understands that. It's amazing, once you just start the process, how quickly people understand the value of it.

As for the designer question, I think one of the challenges that I'm sure every designer in this room has faced is the increasing commoditization of design, the "can you work your design magic?" attitude. I still get emails like, can you bring us the creative? And I'm like, what the hell are you talking about? This is a wireframe, it's not creative. But there are people who just speak that language and think that design is an unnecessary expense, and I think that some of these design-for-a-buck services have really caused problems with that. But as you stand up for who you are and the value you bring to the table, that really transfers to how clients treat you.

Well, it varies by the goal and by the type of test.
So whether you're doing moderated or unmoderated testing: with unmoderated testing, what we shoot for is five participants per platform. Same thing with moderated; five to ten participants is usually fine. The big thing you have to watch out for, though, is segmentation. If you have people in various different audiences, and I used to see this with financial services firms all the time, where they would have consumer clients and small business clients, then when we're talking about five or six participants, we're talking about per segment. And those are, let me tell you, not fun tests to run. By day three, you're just punchy. You really are just like, oh, yep, okay. Logged himself out again, okay. Yep, we gotta fix that, all right.

And then compensation, for example, will vary by the type of person you're recruiting. For a consumer, you can usually get away with a $50 or a $25 gift card. There was a shoe company that would actually give you a $100 gift card to their website, and they're like, go buy yourself a pair of shoes. That was the entire test. And I was like, how can I be in that test? But if you're recruiting a highly specialized audience, it's different. We did one study that was all medical professionals, and we had to raise the compensation three times just to get people for the study. Because if you think about it, a doctor who's going to come out to Waltham for an hour? Yeah, no, you've got to give me 300 and some odd dollars for that. But usually, what we've found for the work that we do is 50 bucks is fine. So it's not that expensive if you think about it: I'm recruiting 10 people, no problem.

All right. So actually, we had a really fun conversation, which is what I was hoping for. If there are any other questions, come up and see me. And again, I do have postcards for Design for Drupal, and I have discount codes for O'Reilly. So if you want either of those, come up and see me.