We'll just let people trickle in as we can. Today we are going to be talking about getting better results with user testing: really the why, what, when, where, and how. What we're not going to be talking about, just to make sure you're in the right room: we're not going to go over how to be an actual usability expert. We're not going to cover all UX and evaluation tools; we're going to be covering a lot, but there's still a ton more that you could be using. We're not going to go through instructions on how to specifically run a test. And most importantly, we're not doing Designing for Touch. That session was canceled, so if you were looking for that in the print guide, it's no longer going on. But we're really excited, because we are going to be talking about why you should actually care about users, nine reasons to start testing and involving those users, expectations you should be setting with different stakeholders in your group, tools you should be using and when, lessons we've learned along the way as we've been doing this, as well as resources. At the end of this we're going to post a Google Doc; there's a link in our session notes already that covers books, articles, and a lot of different tools.

Hi, everyone. So you're going to have to stand like this when you do this. All right. So hello, everybody. I'm Lynn, director of projects at Gorton Studios, and this is Drew, the CEO and founder of Gorton Studios. We're a boutique web firm, seven people strong, in Minneapolis, and we build websites. We work mostly with nonprofits, and we do medium to large, full website builds, ranging from about $50,000 to maybe half a million dollars, if that gives you a sense of the types of clients we're working with. We also do maintenance with our clients as well. So we're going to start off by talking about why you should even care about those darn users.
And somebody just mentioned it was maybe a little bit hard to hear in the back. Are we okay for volume? Okay. Cool. Thanks. So why care about this? We all want to do something. I personally want to make the world a better place; that's my deal. And I've decided that I can do that better by working with other good people. So as an organization, we find good clients, people who we think are doing good things, and we use our powers to make them more effective. That's what I want to do. Together we impact people. And we're only successful, though, if those people actually do something. So they need to listen, they need to care, they need to act, they need to vote, they need to buy something, they need to donate, they need to change. They only do that if we can understand them and act on what we learn. So we all have to individually ask this question: what do you want to do? But odds are it's going to involve people. And to get there, you're going to need to listen to those people.

So that's kind of touchy-feely. You, or someone you work with, or a client, might need a little bit more justification to try some user testing. And when we're talking about user testing, we're going to be talking about a series of tools, and it's not just testing per se. It's listening to users; surveys are one of the things we'll be talking about, and how to use those most effectively. But there are some really concrete benefits to user testing. This is not new stuff; it's been around for a long time. The one benefit we all probably know about is fixing stuff, right? Things that are broken in websites. User testing can help you identify those things and come up with solutions.
In the world where many of us are producing websites for clients, there's actually a whole bunch of other benefits that are kind of hidden. As you start doing more user testing, you discover some other good reasons to do it. One of them is that you can help your clients or your organization outshine the competition, because you can test your own website and what it's doing, but you can also test the competitors'. And that information in the right hands can be super powerful. It also can be really helpful for breaking deadlocks around decisions. Sometimes in a web project, there's a decision like: we really, really need this one thing to be there before we can launch. Really? Maybe? Maybe not. Why don't you try it out with some people? It can be a useful way to break through an impasse at a decision point. Many of us work with clients. All of our clients are awesome; some other clients I've heard of occasionally have people who have a bad idea. Sometimes you can test to demonstrate there are difficulties. And honestly, we can come up with bad ideas too, and it's really good to remember that we're just human beings. Having someone try what you're doing can be an enormously powerful tool to remind you that we're all human, and that maybe wasn't the best recommendation. As people who make things, there's nothing more demoralizing than building something, or being asked to build something, that you think will never see the light of day, or if it does, probably won't be very useful. That really eats at you and becomes a self-fulfilling kind of problem, right? You're working on it, thinking, this isn't going to be any good, and then it becomes even worse because of that. And the last thing I'm going to talk about is trust. User testing feels right.
In an industry where we're working with clients and building something really ephemeral and hard to understand, but that clients know is costing a lot of money and taking a lot of time, trust is a really, really important commodity. It's an asset to build as you're working together, because there will eventually be decisions that are hard, and you'll have to really trust each other. And if you can start building that early, with some testing along the way, establishing a commonality, it's a pretty powerful thing.

So when should we do it? Really, when should we do it? Oh, sorry. So you can probably tell that Drew's the one that talks to the client right away. He's a sales guy. He's like, this is great. We'll do this. User testing. We've got to involve them. Here, Lynn, go figure out this problem. And I've said, that's great; we don't have any money or time. So I think we can all get on board that bringing users in is great, but let's be practical about it, right? When should we actually do this? There are nine different times you can really pinpoint and say, okay, that's a really good reason. The first one is if you have complex or high-risk functionality. Maybe you have a user onboarding situation where you need your users to go through and purchase an account or a membership, and that really floats the organization; they need that money in the door. That's really high risk. Or complex: you've built something that maybe you don't build all the time, or people aren't really doing out there. That's a tricky situation where it's good to get some feedback. The next one is if you're unsure of who your users are. Analytics are a great tool, but they don't tell you a lot of things. Testing can help you find out who your users are, why they're using your site, and what they're doing there. Then, accessibility requirements.
Sure, most of us do some base-level stuff, but if you have a really important website that has to be accessible to certain people, and you're not really familiar with that, some testing to make sure you're on the right track is good. Also, if you have a new website, you're a new company, you're a new product, and you've never been out there: testing your competitors and seeing what you should put out there is better than just throwing something up and seeing if it sticks. User buy-in: we have a lot of clients that have membership organizations, and their members really drive the decisions of the organization, not just on their website, but in what they do as an organization as a whole. So it's important to bring those people along the way, and user involvement and testing is a great way to do that. Conversion rate: if your website is based on how much product you sell, it's going to make a big difference to get it done right. Backing up your expertise: we're experts, but maybe sometimes the client doesn't want to listen to us and has some different ideas. We recently worked with a client and we just really struggled trying to figure out navigation. They're like, no, no, I want a super nav, I need a super nav. I'm like, that's not a super nav. And we just were talking past each other. So we threw it in a Treejack test, went through it, and we were on the same page quickly. A task-driven website: there are really specific tasks, like you need to purchase a ticket or you need to get a bus pass or something. Those are quick, easy tasks to test. And then finally, maneuvering through political environments. I know we have some clients here today that have a very interesting environment, in a government situation, where they need to make sure they're able to continue and do their work. And then all of a sudden, senior management comes down and says, what are you doing? What's going on over there?
And testing is a way to provide proof of the decisions they've been making along the way.

All right, so we're going to test. We've decided it's great. What are we doing? So now's the time to get everybody in the room. And if you're a consultant, make sure you're there. And if you're a client, don't do this without your consultant. And you really just need to start asking questions. This should happen right after you've got that job and you start working together, to make sure you're on the same page. There are some questions you can run through, and these aren't all the questions, but they can maybe give you some ideas to get going. The first one: what is the goal? Why are you even doing this? What is the value you want to bring to your project? What's the budget, right? Can you spend $100, $1,000, tens of thousands of dollars? That's probably going to decide when you test and what kind of testing you do. How much time can you add? I would say the average test that we do, and we do a lot of remote testing online, adds about two to three weeks to any project, and that's per test. You're going to make a product, post it up there, have people test it, iterate, post again, test, iterate. It takes a couple of weeks to do that. Are you able to add two, three, four, five, eight weeks to a timeline? Who will be working with the users? Are you, as a consultant, going to be contacting them, telling them when to go test, working with them face to face? Is your client going to be involved? How are you going to split that division up? Who are the users, and how are you going to find them? A lot of organizations have great mailing lists for newsletters, or they have a membership base they can reach out to. You might not have that. How are you going to go find those people? The other thing is, you might not want to be testing with those same people; they've been giving you the same feedback, and it's time for a fresh, new perspective. Where will the testing take place?
And this is really the conversation of remote versus face-to-face, observational testing. And finally, who will interpret the results? I will strongly recommend that you leave that up to the consultant expert, because they can bring all their UX experience to the conversation. We worked with a wonderful client, one we still work with today, who didn't have much money. They wanted to iterate on their site, and they wanted to do some site navigation and taxonomy testing. And so we were like, great, okay, here are some tools you can try. They went out and did it, and they came back with, okay, here's what we want to do. And we were like, that's kind of not what we would do. But they had already put 20, 30, 40 hours into this testing. It was a very difficult conversation. So as a consultant, even if you're not getting paid for a couple of hours, if you can lead that analysis, that's really, really helpful in the end for a better product.

So let's talk about tools you can use and when you should use them. Yeah. Actually, I noticed a couple of people taking pictures; as Lynn just mentioned, these slides are on the session notes. Note the link to the SlideShare for these. We're going to start going through specific tools now. We're going to show a couple of logos and such, sort of as they relate to different phases of a project: what you can test, and what each of these different kinds of tests can help you with. There's also this big document, a Google Doc, that's linked from the session notes. So take notes, because that's awesome and feels really empowering, but you don't have to; this is not the only source for this information. So there are different phases of projects, and depending on how you work with folks, something that looks like this might be two years or it might be two weeks.
But the idea is that in initial discovery, when you're scoping the project and formulating the project plan, certain tools for listening and reaching out to users can be useful. Navigation: at some point you've got to figure out how all of this stuff is going to be organized, and there are some very specific tools for finding those things out. Content: it's actually quite important, and there are some really great tools for testing content and getting insights into how well your content is written. Then layout and design, and a working prototype. And again, these things sort of lay out in a flow, but how many people have a website that's done? No, you don't. Because websites, well, all right, I sort of think maybe I do too, but most websites you just have to maintain and upkeep. And so you can continue testing long, long past launch, evaluating your content or evaluating new ideas and features.

So in initial discovery, there are a few tools we've found to be pretty useful. I'm going to start by talking about the usability test. This is sort of the classic of the industry. This is the stage where you're figuring out what's working and what's not with an existing website. You give users a task and you watch how they try to complete it. If it's somebody who's going to register for an event, that's what needs to happen. You say, look, you're someone who wants to register for an event; please try to register for an event on this website. And then they go wander off and go somewhere totally different. And you say, wow. You want to use this early on, and the earlier you can have this insight, the more impact it can have. It's a great way to pin down what your analytics might only hint at: people aren't registering for events, but they're getting to the page right before it. Usability testing can really help you understand why they're not registering. So there are some pros and cons with this, as with all things in life.
You don't need that many users to start generating good ideas about things that are broken or could be fixed or done better. It can be really powerful to have the client in the room observing. Steve Krug, the author of Don't Make Me Think, mentions that one of his guerrilla tactics is to pull in upper-level executive management of a client. Just say, hey, we're doing some testing down in the boardroom; come on by, it'd be a great morale boost for the team if they saw you there. And then they'll come in and see somebody actually using the website, and they think they're coming for five minutes, but they'll stay a long time and realize, wow, there's a lot going on here. And that's kind of a cool thing, and you now have a lot more traction with the project. And then remote options are possible. We'll show a couple of specific tools, and again, there are more in the Google Doc, where you can quickly get a whole bunch of people to test the site and you get to see the results. The downside: the testing analysis can be time-consuming. If you do this in a lab, and there are actual usability labs, a lot of universities have them, it can be very expensive. And it takes time. All projects you take on usually have a shorter timeline than you would like, and that's sort of a universal thing with all of this: the more of this you do, the less of something else you're doing. Nevertheless, pretty useful stuff. The way you do it: you create tasks, like registering for an event, making a donation, getting a permit; you recruit users; you get a place to do it; you test; you analyze. A usability lab can cost $15,000 to $25,000, which is jaw-dropping the first time you realize it. But if you need somebody to help with recruiting and screening, there are services that will do that.
But the online services are actually way more affordable. And you don't even need to do that; you can do hallway usability testing. Just grab somebody and have them try a task. There are a lot of tools; UserTesting seems to be a pretty good one, but there are these and many, many more, and all of them are in the Google Doc. The bottom line here is that it's just really, super valuable to actually see people trying to use either your website or the website you're hoping to replace. You really can gain a lot of insights from seeing this happen.

Another really useful tool, which isn't testing so much, but it's still feedback, still listening to users, is user surveys. We've all seen them online; sometimes we've even occasionally filled them out, if we care about the organization. Asking questions, getting some ideas. It's a good idea to do this when you're dealing with, as Lynn mentioned, organizations where the user base is a really important part of the organization, like a membership organization or any other kind of organization that has a grassroots feel to it, and when you would like some input and ideas on ways to make things better. You might have your own ideas, but it really makes sense to listen to the people who are using your site as well, and get their perspective. It's also great when you're like, I don't know who these people are. Apparently there are a lot of them, but what do they think? Surveys can be really valuable for figuring these things out. They're super fast to put together. And another benefit of doing this early on is that you may have created a list of people who would be willing to come back and help in later phases of testing. So you add a checkbox at the bottom of the survey: would you mind if we contact you with further questions later on?
And maybe some of the people who put what looked to be time, effort, thought, and care into their responses become people you could recruit for some of the later phases of testing too. The accuracy, however, can be a little bit suspect. What people say they do and what people do are often two different things, even with the best of intentions. How often do you come to this website? Oh, I really like this website. Every day. No, no, actually, honestly, I only come about once a week. You sort of skew your answers when you're asked; or maybe I'm just revealing my own personal behaviors. Another thing is that the only people who respond are the people who actually felt like giving you an opinion, and those people may or may not be representative of all of the opinions out there. Analysis on this can take quite a while, because when you have 100 or 500 people filling out responses, some of the answers are yes, no, yes, and then a paragraph. And now you have to read 100 paragraphs. And that's not the only one; there are maybe several questions that have longer, more thoughtful responses. It can take time to work through that and figure out, what's the pattern here, what are we seeing. And you're not going to see specific user behavior in this; you're just asking for opinions, basically. But it's super straightforward to do. Figure out your questions, promote the survey, then analyze your results. Most of the work is analyzing results, though it's going to take you a little time to figure out the questions too, and promoting is also important. And we all know at least some of these tools. SurveyMonkey: just toss it up there for free. Pretty straightforward stuff. Quick, easy, and affordable. But spend your time making sure that you're asking the right questions of the right users. That's where your time is.
Asking the questions, figuring out what those questions are, and then spending time understanding the answers. All right, I just want to touch on that as well. We've done a lot of user surveys, and you probably have too, and I think we've had a lot of really un-valuable ones. If you don't know what the goal is, and tailor the questions to it, you're just going to get general answers and you're not going to do much of anything.

So another way to reach out to your users is to do user interviews. And this is kind of a smushing together of contextual inquiry, which is going out in the field, being with your external users, and seeing them use your website day to day, and stakeholder interviews, because the internal people count too, and you want to see how they're using the site. So I've kind of smushed those into one as user interviews. You're coming up with questions, but you're having a dialogue, back and forth. This is a great fit for a task-driven website: sit at a bus stop, talk to people getting on the bus. Do they use the website? How did they use it? What is missing from it? It's also good when internal users are a high priority; maybe you're doing an intranet, or it's a political thing where internal people need to be talking about their needs. And what if you don't know much about a topic? We're working on a genomics website right now, and I have a degree in chemistry and I don't know anything about genomics. And that's okay. We're all going to work with clients whose content and topic we don't know, and it's okay to reach out and learn more about it from your stakeholders. Also, if there's an event you can piggyback on, even if you weren't planning to do this type of outreach to your users, do it anyway. Say your client has a booth at the State Fair, or they have a conference: go sit there for four hours and talk to the people there.
And it's basically a free way to reach all these users gathered in one place. The pros are really that you're stepping into someone else's shoes. You're stepping out of what you think is the right decision and choice and getting a sense of how they're actually using the tool. You can make a really big impact with a couple of conversations. And if you're in their space, where their computer is, you might see their workarounds and cheat sheets. Maybe they have notes, or maybe they're going through the functionality and you're like, wow, I didn't intend for them to do that; maybe we need to fix that functionality, and they don't even think it's a problem because they found that workaround. The cons are really the cost of traveling around to lots of folks. And on the opposite side, you can only do this with a few users, so you're going to have a limited view, and you need to take their feedback with a grain of salt. Internal stakeholders can also kind of take over the process. When you meet with internal stakeholders, talk to them about how their feedback is going to impact the project and how it's going to be weighed, because if you go and meet with a bunch of people in different departments, later they'll ask: why didn't that end up on the website? Why isn't that in the design? Well, we had to take everyone's opinions together. So it's really quick. You come up with some questions, tailored to different types of audiences, and then you go meet with people. It will probably lengthen things, because you're going to be waiting to start the discovery until this is done. But the costs are really minimal. I think mileage is 56 cents a mile now, so that's about as staggering as it gets. The hard costs are really low, and then you'll probably get hungry along the way, right? So the bottom line for this one is really: don't overthink it.
Just go into someone's space and talk to people. If you're building a college website, sit outside the student union, go to the bookstore, see what people need.

So let's move on to navigation. With navigation, there are two techniques you can use to get user feedback: card sorting and tree testing. With card sorting, you're literally taking pieces of paper, writing page names on them, and telling someone to organize and group them. There are two ways to do this: open and closed. With closed, you've set the top-level navigation; you've told people what the buckets are called, and they have to put the cards in those buckets. With open, they can pick their own buckets, so one person could come up with ten buckets and someone else could come up with two or five. A great time to use this is when you're completely overhauling where your content lives. Or if your labels are concerning: maybe you have a sitemap that has Tools, Resources, FAQ, and Help. Where are you going to go to find a given piece of information? Or maybe you have something more technical or medical, with more complicated labels. And also duplication. We did a usability test with a client that had content duplicated: the same sort of content about permits and regulations for hunting and fishing. People kept going about it in different ways, coming in and getting different information, with links to the other pages, and they were really confused about where to go. So paring that down, and seeing where the users think they should find things, is helpful. It's a super easy exercise for people to wrap their heads around. It's also a great thing to do with your clients in a kickoff meeting, to have them start doing it. It might not drive what your navigation is going to be, but you can get them thinking about it. It's super quick and affordable, and it's going to expose the weak points.
So when you have all the cards out, and people have five or ten cards with nowhere to put them, that's going to expose that you're missing something. The bummer is that it's not going to make a sitemap for you; it's only a guideline. Large data sets are really overwhelming: you can't give someone 100, 200, 300 cards and tell them to go. They're not going to do a very good job at it. And if you're doing it online and have to analyze it, everyone puts the groupings together differently, times 100 people, and you're going to be reviewing that for weeks. Specialized content is really hard too: specific words, maybe from the medical industry, that people don't really know. They don't know where to put those, so they just toss them to the side. As I mentioned, you just create cards and tell people to group them. It's a really quick task; maybe it even takes just one day, because it's with your clients in a kickoff meeting. If you don't have pen, paper, and scissors, you can buy cards for maybe 15 bucks, or use online services. We recently did a remote card sort online, with OptimalSort, and I'd actually advise against that. It's a great tool, and it did a great job. The problem is we don't know why anyone put anything anywhere; we never got to talk to them. So if you're going to do this, do it with someone in person. Ask them. Have them talk through where they put something, because you'll know, okay, deer hunting goes under deer, or goes under hunting; I get that. But why did they put all these weird things over there, and why did that person do it that way and this person do it this way? You need to ask them why, because you can figure out the easy stuff on your own.

Tree testing. Some people call this reverse card sorting, otherwise tree testing. You're taking your site skeleton, your sitemap structure, and you're going to have people complete a task to see if they can find things.
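A quick aside on the card-sort analysis burden just mentioned: a common way to tame a hundred differently grouped sorts is to aggregate them into a co-occurrence matrix, counting how often each pair of cards landed in the same group. Here's a minimal sketch in Python, with hypothetical card names; this is a generic aggregation idea, not a feature of any particular tool.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(results):
    """Count how often each pair of cards lands in the same group.

    `results` is a list of card sorts, one per participant; each sort
    is a list of groups, and each group is a list of card names.
    """
    pairs = Counter()
    for sort in results:
        for group in sort:
            # Sort the pair so ("a", "b") and ("b", "a") count together.
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Two hypothetical participants sorting four cards:
results = [
    [["deer hunting", "duck hunting"], ["boat permits", "fishing permits"]],
    [["deer hunting", "duck hunting", "fishing permits"], ["boat permits"]],
]
```

High counts suggest pairs users expect to live together. It still won't tell you why anyone grouped things the way they did, which is exactly the argument above for moderating the sort in person.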
So you're going to ask them, all right, can you purchase a ticket for Lady Gaga? Let's use that perfect example. Go buy that; where would you find it? And so they click through the structure and say, okay, I'd find it here. Now you can validate your structure. Maybe you're a little unsure of it, or, what the heck, test your site structure anyway and make sure you've got everything and people can find things. That's the time to do it. It also helps you continue to work through label concerns, and oftentimes those concerns come up because we and the client disagree about what the labels should be, or the client internally disagrees. And it helps visualize the flow, because it takes out the layout. If you make a sitemap document, you might have laid out how things look in the primary nav or the secondary or the tertiary, and then people start thinking layout. You don't want them to think layout; you just want them to think about the labels and the groupings. Tree testing strips all that away, so it helps the people we work with visualize. Again, like a lot of these online tools, it's quick, easy, and affordable. It allows you to iterate and fine-tune. What we do is run the first test and say, okay, what didn't test well? Let's do those five questions, not the rest of them, and keep working down the things we're unsure of. And again, it helps you find the holes, the things that don't have a home. The bummer is that large sites are really hard to test. You need to pick and choose what you want to ask people; typically a test runs anywhere from 10 to 15 questions, and those kind of exhaust people. So you're only going to be able to pick specific areas to look at. Also, when you're writing content, you cross-link to things, and you have a search tool. People can't use those in a tree test, so you're not going to know how they affect the results. And once you put the navigation on a page, you're going to make it pretty.
That changes the whole game anyway. And iterations definitely add to a timeline. What we do is get the sitemap ready, then test. We'll give users three days; they don't need any longer, because people mostly do it in the first day. Then we iterate and change, throw it up there again, run it for three more days, and keep going as long as we're having problems. That's going to add about two and a half to four weeks to your timeline, depending on the complication or your uncertainty. You can technically do this with paper and walk someone through each page; that seems really laborious, but it's an option. But the online services are great. We use Treejack. It's really affordable; you can buy one test, or a month for like $109. So for this one: try it. If you have users that are easy to get a hold of, just try it. You probably have some questions about some area of your navigation. Make sure you have time to iterate, though; it's not really worth it if you can't test your changes.

So let's move on to content. Content is kind of a funny one in the middle here, because content is going on the whole time. You're going to be working on a strategy and doing a content audit and blah, blah, blah. It goes on the whole time, and more importantly, you're going to continue to iterate once the site is live. But let's just focus on a couple of things you can do along the way as you're rebuilding your website. The first thing I want to point out is people's reading level. It's kind of crazy: 21% of adults read below a fifth-grade level. That's sobering to me, and really important as we write our content, to make sure that people can read it. We need to know who our audience is and what kind of education level they have, maybe from that survey we did, so we're writing for the right people. These tools are great, and they don't take much time. No matter what, you need to make sure that complex topics are easy.
Say you're WebMD and you have 100 diseases you need to be writing about. How do you make that simple for the basic user who's at home at two in the morning trying to help their sick kid? They need to be able to read through it and understand the basics. I also think it's a good confidence booster. Our clients mostly don't have the money to hire writers. They don't have web writers on staff, and it's a really specific skill; it's not the same thing as writing a paper. So if they can do some tests along the way to build up confidence and figure out what's working and not working, that's very helpful. So let's start with readability formulas. With a readability formula, basically there's a whole bunch of online tools: you chuck in a paragraph or two, click a button, and it spits out a grade. So now we're looking at that grade, right? Are you at a fifth grade level? We have a client right now, launching in a couple of weeks, that is a foundation website, and people go there for grants and fellowships. Those are a lot of PhD people. So we know that when it spits out a grade level, if we're at grade 10, we're probably okay. So it's quick and easy, so do it no matter what. It'll assess that level. The problem is this is not going to help you with context at all. It doesn't tell you if your sentence or paragraph makes any sense, and it doesn't tell you if the layout makes sense. So maybe the grade level is great, but you've got three paragraphs smushed together, and someone's not going to skim through that. What about the titles and the bullets and things like that? These tools are awesome (awesome logos, and the websites are even cooler looking), but they're free and they work. So check them out. We use readability-score.com ourselves. And so with content, it's both readability and comprehension. Readability is just about whether the words and the sentences are at the right level of understanding, and comprehension is really about: what did I just read?
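As a rough illustration of what those grade-level tools are doing under the hood, here's a minimal sketch of the Flesch-Kincaid grade formula in Python. The formula itself is the standard one; the syllable counter is a crude vowel-group heuristic I'm assuming for the example (real tools use word dictionaries and do better):

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups; drop one for a trailing silent 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Standard Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

A score around 5 reads at roughly a fifth grade level; for that foundation site with the PhD audience, a 10 would be fine.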
Does it make sense? You could literally put in a sentence that says "I love Drupalcon Austin" and then "Drupalcon Austin I love" and you'll get the same grade level. So that's not helpful. So you need to be able to test comprehension as well. A cloze test is an option. You basically have a paragraph where you remove every fifth word. It's kind of like Mad Libs, so it's kind of cool. And you ask people to fill in the blanks. You're looking for roughly a 60% success rate. Another easy test, and you're testing comprehension. The problem with this is you're not testing whether they understood what they should be doing. You're not testing whether they understood the goal of the page, or whether they can get to the next page. So that's really where usability testing is instrumental. You need to sit down with someone, have them read a page, and have them tell you what they think they should be doing. There are a couple of tools for this as well. But figure out the role of content on your website and how important it is. I think content's really important, but everything is, right? We should be doing all of this. So figure out what the goals are for your project and figure out the place for content. And if it has a higher place, maybe pull in some usability testing as needed. Layout and design. All right. So at some point, you think you know what the problems are. You think you know what your content is. You think you know what the structure is. And now you've got some layout and design ideas. And it's awfully helpful to test those, and pretty straightforward to do. The most common kind of test you'll be doing with layout is something called click testing, or mouse tracking basically. It tracks the first place somebody would click to complete a task. So just like with the tree testing, where you're saying where would you go to register for an event, or where would you go to donate, or where would you go to do thing X. You have them look at a page.
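To go back to the cloze test for a second, it's simple enough to script yourself. This is a hedged sketch (the function names are mine, and real cloze scoring often accepts sensible synonyms, which this strict version doesn't):

```python
def make_cloze(text, n=5):
    """Blank out every nth word; return the gapped text plus an answer key."""
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):
        answers[i] = words[i]
        words[i] = "_____"
    return " ".join(words), answers

def score_cloze(answers, responses):
    """Fraction of blanks filled with the original word (case-insensitive).
    A score around 0.6 or better suggests the passage is comprehensible."""
    if not answers:
        return 0.0
    correct = sum(1 for i, word in answers.items()
                  if responses.get(i, "").strip().lower() == word.lower())
    return correct / len(answers)
```

You'd hand the gapped text to a participant, collect their fill-ins keyed by blank position, and compare against that roughly 60% threshold.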
You say, where would you click to do this? And they go find the big button, or they don't, or they miss it. And then again, iterate and make improvements. It turns out that first click is really important. If people start succeeding, they'll continue to succeed. Conversely, if they start down the wrong path, the percentage of people who successfully complete the task, who get to the thing they wanted to do, drops pretty dramatically. So the kinds of things you want to test are, again, really specific tasks. Lou mentioned the example of working with, say, a transit organization, where people need to figure out the route schedule or whatever it is; very specific kinds of tasks. Those are excellent for being super testable here. Or if you're something like an e-commerce store or operation, where you need people adding to cart and actually getting through a checkout, making sure all of that's going to work in the new layout and design you're proposing. These are pretty quick and easy to put together. These next two pros are really about running this on a live site, where it's more mouse tracking. You get very large sample sizes, because you can run it on a live site and see how people are actually interacting. And with that, you get no observer effect, right? When you're doing a survey, or you're setting someone up in a testing environment and saying, we're going to give you some tasks, please complete these tasks, their brain mode changes. They go into "I'm test-taking right now" mode. On a live site, they're just moving around, just doing things. The con of this is you don't get feedback. There's no way for you to know why they went over there and clicked on that thing. This is done via an online tool typically, and they just went over there and clicked on the thing that said donate, and you asked them to register for an event. Did they not understand the question?
There's a lack of understanding there, and you just don't have a chance to reach back out and figure out why. Also, with click testing, where your mouse goes is not the same thing as where your eye goes. Research has shown a pretty high correlation (in the upper 80 percent range) between mouse movements and eye movements and where you're focusing your attention, but it is not the exact same thing. Still, it's pretty easy to do, and again, then iterate. So create your layout and design, put up the tasks, recruit users, test, analyze, revise, and repeat. It's going to take a little bit of time, but again, there are lots of great online services. We seem to be going towards Verify more often recently. It works really well and gets you better results, and that's kind of the bottom line. So at some point in time, you'll have something that's actually working. This applies to either code that's come a long way or an actual live website. And the testing that's most often used specifically in this environment is A/B testing. Again, usability testing and some of these other kinds of tests can still be done at this stage, but A/B testing becomes a new tool in the toolbox. That's where you have users comparing two versions of a page. So they've got version A, which has, you know, the donate now button over here, and version B, which has it down here. And then you just see how many more people donate with one versus the other. Or instead of the word donate, you use the word support. These tend to be very focused, small changes, small differences between the two. And then you just test it and see which one works better. So this is used to verify your layouts, and it's a place where, again, there's a conversion rate involved, oftentimes something like donate or add to cart, that's really important.
If you can improve the number of people who are actually able to successfully do that thing by, say, 5% or 10%, or 50%, that's pretty cool. And it's really quick to iterate. You get to see the small changes you're proposing. There are lots of case studies online of changing a button from "view details" to "go" and seeing, wow, that just doubled the number of people who actually go do the thing, because they were confused by "view details": I don't want to read the fine print, I want to do the thing, and I get frustrated when I can't do the thing. The weakness of this, though, is you really never know the why. Famously, Google once tested 41 shades of blue to blue-green to try and figure out which one would be more appealing for people to check their inbox or do something; I don't remember exactly what it was. They literally went through and changed the shade of the button, changed the shade of the button, over and over. The winner is shade number 37. You don't know why. You don't know if it has something to do with the time of year (it was springtime and that one was slightly more pastel and just made you feel better); you really don't know. But in the case of something like changing "donate now" to "support" and seeing a big change, that can be pretty compelling. The other downside here is you have to build both A and B. Again, there are tools that can really help with that, that work as an overlay on a website, so you sort of just get a clickable interface, but it is something else to build. And this is, again, a place where you really need to iterate. So create your options, with some small differences, and get the users together. Or in the case of a live website, just put it up there, A and B both running, and then iterate. Look at your results, try to improve them. Try again.
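One thing to watch as you iterate on A/B results: make sure a difference isn't just noise before declaring a winner. A two-proportion z-test is a common quick check. This is a sketch with made-up numbers, not the output of any particular A/B tool:

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of version A and B.
    |z| above about 1.96 is roughly significant at the 5% level (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: "Donate now" converted 120 of 2000 visitors,
# "Support" converted 165 of 2000.
z = ab_z_score(120, 2000, 165, 2000)
```

Most of the hosted services run a calculation like this for you; the point is just that a small lift on a small sample may not mean anything yet.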
Optimizely is probably the most famous in the bunch, but again, there's a whole bunch of other ones, including one from Google, the Website Optimizer, or some combination of those words; I may not have the name exactly right. And it's really awesome for focused goals, like add to cart, or donate, or other things like that. So, some lessons learned. We're still trying things out and learning from them, and we will continue to do that, but I wanted to highlight a couple things we've really learned along the way so that, hopefully, we won't make the mistakes again. First: there's no substitute for talking to people in person. We've done a lot of remote testing, more than in person, and a lot of that has to do with clients being remote and their members being in different states. But if there's money and time available, meeting people in person and having a conversation is much more valuable than some sort of remote test. The more time you put into it, the more value you get out of it. If you're just doing a survey to do a survey, and you don't really have a focus and a goal, you might as well just skip it. You need to be invested, believe in it, and have goals and a focus on how it's going to impact the project to get any real value out of it. Leave the analysis up to the experts. That goes back to my example of the whole sitemap debacle: really bring in that consultant to analyze what's going on. Analytics: we haven't talked about that at all, but it comes into play the whole way through. You're going to use it to find out more about your users, to figure out what the questions are, to figure out what points you should be testing. Analytics is going to be your friend throughout. Testing will not make your decisions. A lot of clients think it will. It doesn't. It just helps guide you.
That's something to clarify with them right from the start, because they kind of think, okay, we'll put that up and then it'll decide. No, it doesn't work like that. You need to pick your own priorities as an organization. You definitely can't test everything. Pick a focus based on the project you're doing at the moment. You'll be doing iterations and evolving and changing; what is most important right now? Balance the results with expertise. If you are the consultant in the room, don't forget that you're the expert. Make sure the client knows that this is a balance: just because the results say something doesn't mean that's the best practice and should be done. Maybe it's a skewed result. So don't forget that you're the expert, and don't let the test lead the process. And finally, do not make the test more complicated than it has to be. There are so many ways to get feedback that don't require some sort of structured test or anything like that. Just talk to people. It's kind of as easy as that. So I really appreciate everyone staying with us. Here's the resources guide. It's also up on our session link at the bottom of the description. It has all the tools we've talked about, plus a ton more with pricing and URLs and things like that, and also some books and articles we recommend looking at. And then, since everyone's been sitting here quietly, we're going to actually do our own user test. Okay. So I need participation. We're going to do a mood test. I want everyone to tell us: how did this session make you feel? You can vote happy, frustrated, or angry. Go. I'm seeing a lot of angry. So feel free to evaluate the session. We'd love to get some feedback, see what was useful and what wasn't, so we can improve over time. And then we'd like to take questions. I know we don't have a ton of time, but we're here for the rest. If you do have a question, they want us to use the microphone so they can record it.
And if you don't like doing that, please come talk to us afterwards. But does anybody have any questions? All right. Okay. I'm from a very small university. Our entire web budget is the salary, the very small salary, we pay our web guy. So we don't really have money to hire an expert to come in and help us analyze things. So do you have any tips for people who aren't necessarily data and stats people to really be able to analyze that and make it useful? Yeah. Well, your web guy and you are also experts, about your content and your users. And there are a lot of resources. First figure out what you're trying to test, then figure out what the right test is, and then, whether it's A/B testing or whatever it is, start researching and learn as much as you can about best practices. I think that's the best way to go. Just because you're not an expert doesn't mean you can't do it. And I would say also that you're an expert in your organization. That's one of the things we're always at a disadvantage on. When we come into an organization, we have to try and learn everything we can about, say, the university. So you might not know a bunch of these names of tests and other jargon and whatnot, but you know all about your organization, and that's really critical to judging the success of things, writing the right questions, and finding the right people. So you bring an expertise that's pretty valuable just in being you. Okay, thank you guys. Yeah, my question is about testing more complex multi-step workflows. If you do user testing with prototypes that are built for a browser, or to be actually used, you've actually got to build all that out. I read a little about paper mockups and paper clicking. I'm not sure if you're familiar with that method.
If you've tried it, and what you think the pros and cons of it might be, where you don't have to build it all out first if you have it on paper. Right. We do something like that. Well, all right, I'll start the answer. So we do really see a lot of value in paper wireframes and walking people through things. I don't know that we've ever actually formally set that up as a test. Have we? Okay. So we have done paper wireframe testing, where we've put stuff up and asked specific questions. We haven't done it where it's as complicated as, say, where would you find this and that. But I think with that scenario, as it's more complex and there might be some background someone needs, definitely do it in person so you can guide them along. You don't want to crowd the test so that you're influencing how they experience it, but at least you're there. You're kind of like the helpline: if they get stuck, they can ask you a question. I think it's a great thing to do, and it's cheaper to do it on paper than to build it out. Yeah, paper is a super awesome tool. That's like an official quote: paper is an awesome tool. I just wonder, when you're doing user testing, is it usually just you, and are you trying to take notes at the same time, or do you usually have somebody there taking notes with you? Yep. We've mostly done remote testing, like I mentioned, so all the notes are the feedback and the analysis the tool gives. But when we have done it face to face, we've had multiple people there, especially with usability testing. Everyone comes to the table thinking of different things, so with more people you're going to catch different things. And for usability testing, we actually had time in the schedule that day to take a break and spend an hour talking about each participant.
So we'd meet with someone, they'd run through the tasks in the lab, and then we would meet for half an hour and talk through what we all saw, make kind of a task list, and then go back to it later on so that we were sure to capture everything. There were three or four of us in the room, and we all noticed different things. I'm wondering, especially when you're talking about building a college website, about just going and talking to people outside the student union. Have you done much of that? I think you're calling it hallway usability. No, we've mostly done that with ourselves or our clients, but not like that. I want to reach out and start doing that more often. But I think at the very least, if you don't have multiple people, go do it with one person, and if you can have two, that would be the best. I wonder if you have experience working with audiences that are really hard to put together. I work on a project that sells software products to executives in corporations, and that target audience is really hard to find and put together. I can't gather them as frequently as I would like to. I'm just wondering if you have any experience working with people that are really hard to find, or if there's any way to go around that. Because I tried going to Feedback Army to get some feedback, but the feedback wasn't as valuable, because it's just not the target group. Yeah, I think that's the tricky one, because that's going to be your limit. It's not going to be your budget or timeline; it's going to be getting the right people. So really focus on what the most useful test is first, and then reach out to those people and explain how their feedback could make something better. You probably have some people you're selling to that have been buying things from the organization for a long time, so reach out to those specific people and see if you can grab a handful of them and say, here's the deal.
This is what we're trying to do, and this is how you can impact it. You're going to find somebody that cares. And the more you can be really well organized and explain when you need them and how you need them, the easier it's going to be for them to work with you, because they're probably really busy doing their own thing. Or see if there's any sort of incentive. We didn't really talk about that, but executives probably aren't going to work for a $50 gift card or something; still, if there's some incentive that can pull them along, maybe you give them a discount on their next order. And, not specifically in the case of executives, but when we've had an audience that for whatever reason we thought might be hard to bring together, you can actually hire a recruitment service. That was one of the costs mentioned in the usability test. There are people who specialize in going and finding, all right, I need people who are between this age and this age, they make this much money, they do this, and this is what they do in their free time. And then somehow they come back with a whole bunch of them. That cost is about just under two grand, to give you an idea. All right, thanks. Anyone else? All right, thanks everybody. Thank you. Have a great day, and come ask us more questions if you have them.