So I'll just say welcome again now that we're recording, and hand it over to Angela Tripp. Good afternoon, everyone. Thank you so much for being here. In case you're in the wrong room, this is a remote usability testing webinar. It's brought to you by a pretty awesome team from LSNTAP, the Michigan Advocacy Program, and the Graphic Advocacy Project. It's kind of a culmination of the usability training TIG that the Michigan Advocacy Program has been working on over the last few years; hopefully some of you have participated in other aspects of that. Instead of our Testapalooza at ITC, we're offering this webinar, and we're really lucky to be joined by several panelists. We have Deirdre Johnson, the manager of LSNTAP. Can you go to the next slide? And Ashley Treny and Victoria Sewarderman, both from the Graphic Advocacy Project, who are going to do the bulk of the presentation. We're also very fortunate to have some additional panelists to talk about their experiences with remote usability testing: Carla from Illinois Legal Aid Online, Lauren from Lone Star Legal Aid, and Kim from Michigan Legal Help. They'll do their own introductions when it's their turn. So now I'm going to pass it off to Ashley. Wonderful, thank you so much. It's great to be here today, and we're very excited about this presentation. Just for a little orientation: the presentation today is going to focus on four sections. We'll talk a bit about the broader user research landscape and the opportunities there are to conduct user research. The majority of the presentation will focus on usability testing, why we do it, and preparing for and conducting that usability testing. And then we have some exciting resources to share at the end. So, first and foremost: what is user research?
User research is an important tool that allows us to understand and assess how our resources are serving our clients and users: to understand user behaviors, their pain points, their needs, and their motivations for seeking out information or engaging with a service. It allows us to understand and empathize, from the perspective of our clients, with what they need. There's a variety of ways to conduct research through different methods depending on your goals, and we'll talk more about that today. It helps expose the root causes of problems and opportunities for improvement. At the end of the day, putting people at the center of the design process is our best chance for getting adoption, making an impact, and achieving success. So let me turn it over to Lydia to talk about why this is so important in our space. Thank you, Ashley. I think it was really important that we put this slide at the outset of the presentation, because there is a lot of talk about usability testing, who's doing it, and how we're doing it, and all of that is important, but we want to talk about not only why usability testing in particular is important, but overall why it's important to involve our users. Regardless of what kind of project you're doing, ultimately you're designing it for your users, so the user should really be at the center, and your project should reflect their needs. Often we are all very smart and have great ideas about how we can better assist our users and what they need, but sometimes, unfortunately, we're wrong, completely wrong. It's important to understand that the way our brains, especially legally trained brains, think about problems is often very different from our users, so it's really important to hear their voices and what they actually need.
The best way to understand the needs of users is simply to talk to them and get them involved at the beginning. Usability testing doesn't have to wait until the end; you can do it as soon as you have a viable product, but it's also important to get users involved in the research portion at the outset, to understand what they actually need. Creating tools that are centered around your users not only fulfills those basic needs but also builds trust and retention: your users will trust the tools you're making because they were involved, they'll continue to use new iterations and new tools, and they'll recommend them to other people. As Angela usually says in her presentations, she has a little graphic that has always resonated with me: we don't want to build rocket ships when our users just need a bike ramp. Thank you so much. So, like Lydia mentioned, involving users at every stage of the process is critical to build that trust and to make sure you're getting the user perspective, those insights about what they really need, at various stages of the process. To talk about this process, we often think of it in the context of the design process, and there's a great framework in this visualization called the double diamond. It's a common framework for talking about the design process that does a really good job of showing the iterative nature of research and design. At a high level, you'll see that this diagram has a lot of arrows pointing back to each other: there's a lot of opportunity for research to form insights that directly inform our solutions, and that process never stops. We can continuously be learning and continuously improving our resources so that they best support your end users.
And, like Lydia mentioned, there are different opportunities throughout this process, at the very onset of a new project as well as when iterating on existing resources, and we'll spend a lot of time focusing on that today. To level set, let's zoom out a little and take a look at the landscape: there are a lot of different types of research you can conduct. What that maps back to is understanding your goals and where you are in the design process: whether you want more exploratory research to really explore a problem space, or to confirm or validate a hypothesis you might have. Maybe you're in the process of thinking about solutions and want to use research to generate more hypotheses, create focus areas, or ideate specific solutions to a problem you've identified. Or perhaps you want to leverage research to evaluate existing ideas, to validate those solutions and iterate on your current understanding of a particular problem. So there are a lot of different types of research that serve different phases of the broader process we just shared, and it's really important going into any project to have clarity: are we doing exploratory research, generative research, or evaluative research? Within those types of research, there are also a lot of different methods. A matrix like this can be really helpful for understanding how different methods can have different outcomes and reveal different learnings about the things you're trying to understand. This is a great matrix that highlights two main dimensions, the first being behavioral versus attitudinal.
Behavioral means understanding your users' current process or the actions they take, versus attitudinal, which is more about understanding their mental models, thought processes, or reactions to an existing space. The other dimension is maybe more familiar to us all: the difference between qualitative insights and quantitative insights. Qualitative is more about feedback and sentiment in the opportunity space, where quantitative is more focused on data, things that can be measured. Once we understand the goals of the research, we can focus in on choosing the methods that best allow us to achieve those learnings. Today we're talking specifically about remote usability testing, but we also wanted to highlight that in general there are a lot of research methods that lend themselves well to remote research, and a couple are highlighted here. So today we're going to hone in on one specific method, usability testing: how and where it fits into this broader design process, and what we can learn and leverage by bringing it into our work. To dive into that, I'll turn it over to Victoria. Thanks, Ashley. As Ashley mentioned, the focus of our session today is usability testing, specifically remote usability testing, but first an overview of what usability testing is. I like to use this infographic because it really distinguishes between broader, conversational research methods like interviewing and usability testing, which is a method that observes how someone interacts with a thing. I usually call it a thing because it can be anything: an experience, a paper, a website. Today we'll end up calling it a prototype, but we'll get to that.
And just to highlight here that the main goal of usability testing is to identify usability issues. Next slide, please. So what is usability testing? As I mentioned on the previous slide, it's a research method used to evaluate a thing; it usually falls in that evaluative bucket Ashley went over. It's important to note that it is an observation of a thing: you have something tangible that a research participant is going to interact with, evaluate, and review with you. The questions and prompts are often task based. This is to prompt your participant to interact with the thing, because you want to see how they go through your design or your website or whatever your thing is. Because you're watching them interact, it also falls on the behavioral end of the spectrum in the matrix Ashley went over: observing how they interact with the thing, not just asking their thoughts or asking broader questions. It is important to note, and I have it kind of buried in that behavioral bullet, that usability testing is a simulated environment. We're observing them interact with your thing, and hopefully that gives us some clues about how they would interact with it in real life, but it is a simulated environment, an artificial scenario. There are some factors you just can't recreate, like how someone might be filling out a form in real life, where they are in that space, and the emotion they're feeling; those might not be things we can see and observe, but at the very least we can watch their actions as they walk through and prompt them along the way. This method helps us see where there are issues with the thing, but it can also uncover how users think about what they're doing, which is one really interesting thing about usability testing.
You might have a specific design set up and want to see how they interact with it, but testing can reveal that they actually think about it completely differently: a different sequence of actions, different ways of using your thing. That, again, can be uncovered in usability testing. One thing we've noted is that it shouldn't be the only method you're using. It's a really accessible method, and we of course want you to do it, but going back to what Lydia said, it's about how you incorporate your users throughout your process and get them involved as much as you can, so hopefully this method is just another way to continue building your knowledge and understanding of your users. So where does this fall in the double diamond Ashley presented a bit ago? It can actually come in at any stage of the process. It could be done early on to identify existing issues or problems, which is early in the double diamond. It could be done once a solution has been proposed, when you want to validate, based on your research, whether it actually meets what users need. It could be done after a solution has been validated, at which point you might just be iterating, hoping to find issues to fix to make it the best solution there is. So we put it right there in the middle, because it's part of the iterative process and can come in at any of the stages. Diving even deeper into the goals of usability testing: as I mentioned, it's primarily used to find usability issues, but there are a couple of different flavors. When it's used in a more formative manner, you're observing how participants interact, thinking about specific ways to fix problems, and soliciting ideas for how to fix those issues, so you might have some specific design questions.
For example: does this go here or over there? And you're asking questions about how users might use a thing or have used something in the past. When it's used in a more summative, benchmarking manner (I believe it's actually called usability benchmarking), you might have a more polished solution or an existing resource, and you're interested in measuring performance: how many people are actually accomplishing what they need to do with your thing right now, so that when you propose a new solution you can test it and see whether it actually performs better. So there are ways to use usability testing to measure performance, especially when you have multiple concepts, whether that's a redesign or multiple designs you're testing out. You can do usability testing for that as well; it's not limited to just one version. One important thing to note when testing multiple concepts is order bias: whatever someone does or sees first may bias how they interact with everything that follows, so make sure you alternate the order of those concepts. Alright, so lastly: I know we're focused on remote usability testing today, but everything I just covered is true of usability testing in general, both remote and in person. It's an observation method, you're looking at behavior, and it can meet different goals for you. The rest of this presentation will focus specifically on remote usability testing, but I do want to highlight a couple of the differences between remote and in person. The first is that it's actually not so different. One of the things Ashley showed and highlighted in the matrix is that many of the methods commonly done in person can be done remotely, especially today with all the technology we have available.
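The point about order bias can be handled mechanically when you schedule sessions: assign each participant a presentation order so no single order dominates. A minimal sketch in Python, with the caveat that the concept labels "A" and "B" and the participant count are illustrative assumptions, not something from the webinar:

```python
from itertools import permutations

def counterbalanced_orders(concepts, num_participants):
    """Cycle through every permutation of the concepts so the
    presentation order is balanced across participants."""
    orders = list(permutations(concepts))
    return [orders[i % len(orders)] for i in range(num_participants)]

# With two concepts and six participants, half see A first and half see B first.
schedule = counterbalanced_orders(["A", "B"], 6)
# schedule alternates ('A', 'B'), ('B', 'A'), ('A', 'B'), ...
```

With three or more concepts the permutation list grows quickly; a Latin square design is the usual alternative, but simple cycling keeps the idea visible.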
There are different considerations, though, when you're doing things remotely. It requires a bit more prep and setup, and you have to be more creative with the tools you use to capture observations: in person you're sitting right next to someone and can just see things, but remotely you have to be more intentional about how you're capturing those observations. Remote usability testing also has its own benefits. One, you might be able to reach users you couldn't normally reach, a wider audience than if you could only test in your own geographic area. You can also collect larger amounts of data, since with technology you can reach more people. And there's some interviewer bias when you're doing in-person work with folks; in a remote setting you might remove some of that pressure, and people might be more honest in their feedback on your thing. So that's a little overview of usability testing. Next, we're going to talk about the logistics of doing a usability test, starting with preparing. Preparing for a usability test is really similar to any other research effort: you're scoping your project and thinking about what you want to learn, creating a research plan and a guide of questions, preparing the thing you want to test (again, we're calling it a prototype) along with whatever materials and tools you need for that, and then, of course, actually recruiting your participants. We'll start with project scoping, and I'll pass it back to Ashley. So yes, project scoping is critical at the start of any project or research phase.
It's important to have a plan of action for you and your team to hone in on the specific goals and learning objectives you intend to pursue; it also creates alignment across your team around what those goals and objectives are. It's very valuable to have something to point back to as you move through the process, to keep those goals front of mind. Creating a project brief or project plan is a great place to start. Obviously, you can't test everything, so this also helps you think meaningfully about what can be tested, what feels like too much, and what feels like not enough. It's really important to start thinking about the scope of the project and defining it from the onset. Creating that project scope or plan also lets you think through aspects that will unfold as you continue planning: as you hone in on the learning objectives, the things you intend to learn, that will help define the methods you're using, reiterating why usability testing is the best research method for this particular project. It will also start to give insights into different ways you might prototype or set up the usability test, based on your learning objectives and the audiences you intend to connect with. The project brief has a few core elements, again to create alignment across the team and to really think through the project plan. It often consists of your problem statement, the piece you want to focus on and explore, and the target audience you want to connect with through this opportunity.
It also covers identified constraints: right off the bat, understanding that remote research will be conducted and naming that as a constraint will help with your planning moving forward. This is also a great place to list out any hypotheses you may have, to get those down, identify them as assumptions, and be able to return to them later, after you've conducted research. And it's the place to start thinking through activities and timeline. All of these things help scope and shape the plan and make sure you and your team are aligned. Victoria, I'll pass back to you. Yes, thank you. Once you have scoped your project, determined who you're interested in talking to and what your activities are, and determined that usability testing is one of those activities, it's helpful to create a guide. A guide works for really any method you're choosing, but for usability testing it's really helpful to get down all the questions you want to ask and then sequence and organize them, especially considering that a participant is going to be walking through and interacting with something, so it's important to set up the scenarios and have the questions in the right order. The primary elements of a testing guide: first, an introduction, where you explain your goals, set ground expectations, and walk the participant into what they're going to do in the session. Then begin with some background questions, and then ask questions in reference to the thing you're reviewing, the prototype; this again means setting up a scenario for context and getting them in the right mindset as they start walking through it. Then close out with some wrap-up questions, always allowing space for additional comments.
These next few slides are examples, with sample language you can use in these different sections of a usability testing guide. First, the introduction. It's important to orient the participant to your goals, just broadly, so they know what you're interested in, what you're working on, and what the session is all about. It's also important not to tell them too much, because you might bias how they answer your questions based on what they think you're looking for. So hit the right level: share your goals without leading them. Always give a disclaimer about what they'll be doing and reviewing, especially if you have an unpolished product or prototype they'll be walking through. Let them know that's the case, so their reaction isn't "this sucks, there's no color": well, that's because it's a very early-stage prototype and idea, and they shouldn't expect a fully developed product. Set the expectation, generally, that when they're answering questions in a research setting, they're really in control: they can stop whenever they'd like and skip questions when they need to. There are obviously specific questions we're hoping they'll answer, but it's up to them how they participate. The second slide is around background questions, again just sample language, but these help ease participants into answering questions and can give you important context around their current experience, what they've done in the past, and how that colors the way they answer questions and interact with your prototype moving forward. The bulk of usability testing is around more task-based questions, and these can differ depending on what you're interested in learning.
If you're interested in learning about navigation, you might ask "what are you going to do next?" and watch where they click and go. If you're interested in functionality, ask some probing questions around how they might use something. And it's important to ask about comprehension and mental model: is this what you thought it would do, is this what you were looking for? So it varies. I would also say to ask other questions in between these, because these are very logistical; you're asking them to walk through something. But in between, ask them why: "talk me through how you would use this information; why is that the way you would use it?" Always find opportunities both to get the observation, seeing how they complete things using your thing, and to understand their thinking as well. Closer to the more summative, benchmarking types of usability testing, there are performance-based questions you can ask that are very measurable. Asking questions around task time, success or failure, and number of errors is some sample language; you can probe and watch folks work, but with these types of questions observation is really critical, because in a more measurement-oriented usability test participants might not talk as much as when you were probing about issues. If you're just having them do something and watching, observation is critical because they might not talk out loud about why they've done something, so note down what you've noticed. Preference questions, again, go with the flavor of usability testing where you have multiple versions. When you're showing multiple versions, don't forget to ask about preference: you do one and then the other, then ask about their preferences and why.
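The summative measures just mentioned (task success or failure, time on task, error counts) are easy to tally once each session is logged. A minimal sketch in Python, where the record structure and numbers are hypothetical, made up for illustration rather than taken from the webinar:

```python
def summarize_tasks(sessions):
    """Aggregate basic usability benchmarks from logged sessions.

    Each session is a dict like {"completed": True, "seconds": 90, "errors": 0}.
    """
    n = len(sessions)
    return {
        "success_rate": sum(s["completed"] for s in sessions) / n,
        "avg_seconds": sum(s["seconds"] for s in sessions) / n,
        "avg_errors": sum(s["errors"] for s in sessions) / n,
    }

sessions = [
    {"completed": True,  "seconds": 90,  "errors": 0},
    {"completed": True,  "seconds": 120, "errors": 2},
    {"completed": False, "seconds": 200, "errors": 3},
]
summary = summarize_tasks(sessions)
# summary["success_rate"] is 2/3: two of three participants completed the task.
```

Running the same tally on a baseline resource and again on a proposed redesign gives the before/after performance comparison described above.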
It's also nice to allow participants to react to different elements of each version: if you're showing two versions, maybe they like some parts of one and some parts of the other. And lastly, that fourth element: wrap-up questions. It's nice to have participants reflect on what they did and reviewed. Some rating questions are a helpful way to capture a more quantitative measure; especially if you're able to talk to a bunch of participants, you can average the ratings out and see how people are finding usability and usefulness. And always allow space for any additional suggestions, ideas, or overall thoughts on what they reviewed that day. Next up is prototypes, so I'll hand off to Ashley to talk more about those. Awesome, thank you. So, continuing on with planning and project scoping: after you've created the usability testing guide and thought through the questions and the script you'll be guiding your research participant through, this is the opportunity to really think about what the thing they'll be interacting with is, what you're testing with. That also maps back to the research goals and learning objectives you're focusing on for this usability test. You may be focusing on different things: you might be really interested in getting feedback on content; maybe navigation and information architecture, the hierarchy of certain information; the layout of a page, how folks find and discover certain information; or potentially the functionality of the site as a whole. These different things can help inform how you prototype the solution you'll then offer to the participant to test with.
When determining what kind of prototype you'll make, think through who you intend to test with. Depending on the audience you're trying to connect with and do research with, there might be considerations that point you to a certain prototyping tool or framework over another. It's always important to think about assistive technology, non-native English speakers, and potentially testing in a mobile-first environment as opposed to desktop. Thinking through all of these considerations up front is a great way to get to the right type of prototype, one that supports your learning objectives and the usability script you've crafted, and to ensure you're gathering the information you set out to collect. When planning prototypes for remote usability testing, I'm going to highlight a few ways to approach creating these resources, a few different flavors of prototypes if you will. First, as Victoria mentioned, for the more summative, usability-benchmarking flavor, you can leverage existing resources. If you're interested in gathering information about a resource that folks are already using, you can test with that resource directly, again with that focus and project scope in mind. It can be really beneficial for identifying usability issues that may be at play with your current resource, or uncovering other pain points or challenges your users have when navigating it. So this is a really great way to gather that initial information.
Physical or paper prototypes are another great option, not only for testing but for preparing for and creating a prototype. Especially if you're planning to test a new concept or idea, paper prototypes are a great way to think through that activity. Obviously, conducting a usability test with something like a paper prototype would be easier in person, although there are really interesting tools that can be leveraged for remote testing, like the one I've put on the slide here: Marvel has an app called POP (Prototyping on Paper), where you can hand sketch your wireframes on paper, take a picture, and create a prototype from your hand-drawn sketches. So that's another option to consider; for a new idea you're thinking through, putting together some rough solutions in those early stages can be a great way to explore them. To support the planning of the usability script Victoria just went through, it can also be really helpful to create these physical prototypes and actually feel what it's like to go through them, to go through that activity and test it out, even before you begin usability testing with your participants. That gives you a way to reflect on and iterate on the usability test itself and ensure you're focusing on the specific task-based research goals you have for the test. Digital prototypes are another way to gather this information, and of course one that lends itself very well to remote testing. There are a lot of different tools for creating wireframes and interactive prototypes to sketch out your solution.
These are really good to work with while you're fleshing out your designs. Wireframes, if you look at the top image, are those black-and-white mockups. A wireframe deliberately doesn't flesh out the full design; it's focused on positioning the pieces of content and information as they're laid out on the page. That ensures your focus and energy goes into thinking about the flow of information and the content from the perspective of your user: where might they think to go to find a particular piece of information, or navigate to something that's part of the task? It puts the emphasis on making sure the information is designed intuitively, rather than investing that time in a perfectly polished design. For usability testing, where we're trying to gather information about usability issues, it can be really valuable to show some of these earlier prototypes to get that information early and often, so you can iterate on the design before finalizing the final product. Wireframing programs like Balsamiq are really great for creating those digital wireframes. And then there are a lot of interesting programs like InVision, where you can take those screens and create a simulation of a live site. It's not a live coded site, but through what are called hotspots you can link a series of screens to give the feeling that the user is interacting with something dynamic that responds to the tasks they're engaging with. We'll show some other tools like this later on, and I think we'll hear from some of the folks on our community panel about other free tools they've leveraged to simulate digital prototypes. And finally, last but not least, functional prototypes.
So this is, if the opportunity exists, actually creating something that's live. For example, if you use WordPress or another CMS and you're able to create a page to test with, or if you have engineering capacity to stand up a simple page, something where users can interact directly with the experience. This is especially good for testing accessibility; if you're interested in testing with folks who might need assistive technology, this is a great way for them to usability test with your prototypes. Without further ado, I'm going to turn it over to some folks from our community to share a little bit about some of the prototypes they've created. I'll hand it over to Kim to take us off. So, one tool that we used a lot in our most recent usability testing is PowerPoint, and I just wanted to highlight that as a resource most of you probably already have access to and have decent familiarity with. We don't use the actual slides from PowerPoint to show people, but PowerPoint is just a really easy place to dump a screenshot or an image. That's reminding me to introduce myself: my name is Kim Kramer and I'm a staff attorney with the Michigan Legal Help program. We maintain the website MichiganLegalHelp.org, which serves self-represented litigants in Michigan, and along with one other staff attorney I head up our usability testing at Michigan Legal Help. So with that out of the way: we use PowerPoint because it's an easy way to just add colored shapes. If you have a screenshot from your website, for example, you can even use the little eyedropper tool so that everything has the same colors as your existing website. We found that to be a really easy way to test out, you know, what would it look like if I added a button here.
Do people know what this button would do? You can just mock it up and throw it in your sandbox for people to take a look at, and it doesn't take our developers' time to actually put it on the dev site; I can just do it in PowerPoint. I think Canva would be similar, so screenshots plus shapes in PowerPoint or Canva can go a long way. Thank you, Kim. Carla? Hi, everyone. I'm Carla Baldwin, and I'm the product support manager at Illinois Legal Aid Online. Some of my jobs there are conducting usability studies and performing quality assurance testing on many of our products. As far as prototyping: we have also used PowerPoint for simple, short projects where we don't have a lot of screens. But another thing we've used is Axure, which is an interactive prototyping tool. It's developer-approved, our developers use it, but it's also easy enough for me to create prototypes with; you don't have to be a programmer to use it. One of the reasons we like it is that it can really closely mimic our website. Sorry, that's Axure: A-X-U-R-E. We can get a prototype to closely mimic the look of our website, and it's also really good when you're testing a long series of steps in a process, a little better than PowerPoint would be, just because it is so interactive. So yeah, we really like Axure, and in my opinion it's a lot easier to use than Figma, which is another program we looked into. Thank you. Okay, so the final component of preparing for usability studies is of course recruitment; you can't do a usability study with no participants. So, a couple of different approaches to recruitment. I know this is an especially hot topic, particularly in a remote setting and in the world we live in today.
First, existing contacts, especially for existing resources: if you're evaluating something you currently have, or thinking about redesigning or building on what you already have in play, consider reaching out to those who have engaged with your resource before. If you don't have a contact list already, I would highly recommend building one. This can be as simple as tacking a question onto any resource that clients or other folks are interacting with, asking if they're interested in providing feedback, just so you can start building those relationships and building up a database of folks you can reach out to when you want to do some testing. There are other outreach methods you can use to reach existing users: email blasts, forums, maybe cold calling if you're feeling it and you have their numbers, and then other places to post announcements, like your website or the office. Think about other ways you can gather contact information. The second approach is outsourcing your recruiting to someone else. There are recruiting firms and platforms out there that will actually do the recruiting for you. Obviously there's a cost, because you're not doing it yourself. It also means you need to be really specific with your criteria, so think back to your project brief and the questions you're asking with your prototype. Those things determine exactly who you're trying to speak with and test with, so be very specific with your criteria if you're planning to outsource. Some of these firms and platforms are really nice because they will handle scheduling and incentives for you. That might sound like a minor thing, but it's actually a really big deal when you're a busy human and you want to stay focused and have the attention to run the testing well.
But everyone has things going on, so scheduling matters, and we'll talk about incentives in just a minute. There is a risk, though, especially with the platforms, not so much the recruiting firms. The platforms I have listed as examples here, Respondent and User Interviews, are really great, with tons of users on them, but a risk is that you might get more tech-savvy folks. They might be power testers: they're on these platforms doing a ton of usability tests and other research methods, so that's just a consideration for who you might get. Another approach to recruiting is similar to the one I just talked about, but there are also some full-blown platforms for doing user research in general. I have a couple listed here: UserTesting.com and PingPong. In addition to having user research tools built in, questions and capabilities, they often also have a recruiting service, very similar to the recruiting platforms, so you'll need to be really specific with your criteria, because you are outsourcing it. They are usually quite expensive, since you're getting the full-service platform, but they come with some really nice built-in features and questions that can help facilitate your remote research. And they carry the same risk around who you're actually reaching: it's nice having folks who are really familiar with these tools, but keep that bias in mind. Those are just a couple of methods, but regardless of the approach you take to recruitment, we highly recommend doing some screening of your participants. This helps ensure that people meet your criteria, so you're not wasting anyone's time, and it also ensures quality research: that participants aren't just making things up, that they actually know what they're talking about and can give you valuable feedback.
A couple of tips for writing screener questions: keep the questions really short. This is not the interview and it's not the test; it's just making sure you have the right person to talk to in your usability test. Depending on where it comes in your process, if you're able to talk with them and validate that they meet all the criteria, try to get them signed up and scheduled right away. The little example I have on the left here shows ways to actually write or ask the questions. You want to be specific but not leading, because if you ask something like "Have you ever done this?", someone who's really interested in talking with you will often say yes, even if the answer is no. And I mentioned before that scheduling, while it seems minor, is a big deal. If you use one of those recruiting firms, platforms, or user research tools, it will handle all of the scheduling for you, but if you're doing it on your own, it's helpful to use a scheduling tool. Something like Calendly, which we often use, helps organize time slots so interested participants can pick their own; you set up what works best for you, so there's no back and forth on scheduling. It can also help ensure participation: some of these tools have built-in reminders to make sure folks show up. A couple of tips on scheduling generally, depending on your target audience: remember, people work just as you do, so there may be times you need to run usability testing outside of normal working hours. And, especially in a remote setting, add really clear instructions: you'll be joining a Zoom, you'll be talking to me, and if you can't make it, call or email me at this number. Really clear instructions. And then lastly, incentives. This is really helpful and a nice way to thank participants for their time.
Incentives also help with participation. A huge issue is just getting someone interested in spending a little bit of time with you to do usability testing, so incentives can really encourage participation if people know they'll get a little something for it. A couple of tips here. I've worked with state agencies, and sometimes it's not appropriate to provide monetary gifts, so double-check your policies to make sure you're okay to offer incentives at all. If you're doing something really short, say you just need 10 minutes on a survey, or you're running a couple of usability tests back to back, consider doing a raffle for folks' time. And if you are providing something like a gift card, make sure it's flexible so that participants can use it in the way they need to. A last note here on sample size; we'll talk more about this in just a second, but even within remote usability testing there are different ways it can be done, unmoderated or moderated. If you're going the unmoderated route, you won't have to be there, so hopefully you're able to get even more participants. If you can get a lot of participants, it might be nice to run some numbers and get some quantitative data; the recommendation there is often to get really high numbers, 40-plus, so you can run statistical significance analysis. For moderated, a facilitator is there monitoring the session, and it would be really hard to talk to 40-plus people, so the goal is often to talk to about five, which is the industry standard. But I always say that number should increase with the number of participant criteria you have, as well as how often you've talked to your user base: if you have really limited base knowledge and want to learn more, the more people you can talk to the better.
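The "about five participants" rule of thumb mentioned above comes from a well-known estimate that each tester surfaces roughly 31% of a product's usability problems (Nielsen and Landauer's figure; the exact rate varies by product and task). A minimal sketch of that math, assuming the 31% figure:

```python
# Sketch: why ~5 participants is a common moderated-test target.
# Assumes the rule-of-thumb probability L = 0.31 that any one
# participant hits a given usability problem; the expected share of
# problems found by n participants is then 1 - (1 - L)**n.

def problems_found(n, L=0.31):
    """Expected fraction of usability problems uncovered by n testers."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

Under that assumption, five participants surface the large majority of problems, and each additional tester adds less, which is why running more small rounds of testing usually beats one big round.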
Alright, so back to our community panel. I would love to hear some recruitment stories; I know there are tons. So Lauren, how about you kick us off? Sure. My name is Lauren Figaro. I work for Lone Star Legal Aid in Houston, Texas, and I'm a staff attorney and content developer. Our recruitment story has to do with the self-help tool we built for a website called Texas Disaster Legal Help. It's a guided-interview-type tool that allows people to find information tailored to their unique disaster-related legal need. We were tasked with testing the site, and basically we had a really hard time for a couple of months getting people to join, and we learned a lot of lessons from that. The first thing we learned is that if you're doing online testing, it's a really good idea to do online recruitment. We tried to work with our partners in the community to gather people, but during COVID they weren't really meeting people face to face either, so it was hard to go that way. The method we settled on, which actually worked out really well for us in the end, was using Facebook. It's very cheap to get an ad on Facebook, and the ads are very effective. It's really good at recruiting people who aren't necessarily that tech savvy, because everyone knows how to use Facebook, so you're not only getting the people who are on specialized user-testing sites. It was pretty decent at letting us target our demographics: for instance, we really wanted to test only people in Texas, and it allowed us to place ads only in that geographic area and based on a certain level of income. It was very good at that. Having the Facebook ad in place helped a lot with getting people to sign up, and once we screened them with screener questions, we got a pretty good list.
Another challenge we faced, which you discussed, was getting our incentives right. We found that even though we had a huge list of people recruited, a lot of them weren't showing up for the tests. The reason is that the incentive we offered just wasn't enough; they didn't consider it worth their time in the end, and so they would cancel at the last minute. Our first incentive was about $10 for an hour, and that was just not enough for people, so we raised it. It was a grocery gift card they could use, and we raised it to $25 for an hour, and that increased the number of people who actually showed up for the interview. So it's really important to get that incentive right. The only other thing about Facebook is that you can't really get the demographic targeting completely right, so you have to make sure you're screening people on the back end. We had people fill out a form that fed into a Smartsheet, where we could look and see who was actually eligible for the test. That's great, thank you for sharing. Carla, I know you had some stories too. Yes. Actually, we found that recruiting people was easier when we were doing virtual, remote user testing. In the past, when we had in-person user testing, we could offer an incentive of gift cards, but people found it hard to park in downtown Chicago; this was before the pandemic, of course. Now that users are doing it remotely over Zoom, they're more likely to respond, especially for gift cards, because they don't have to worry about traveling. That's a big plus of remote user testing. We recruit people mainly from our website. We ask people to create an account, and when they create an account, we ask if they would like to be involved in user testing.
If they say yes, we run a report of everyone who said yes, and we have their names and email addresses. That way we know these people are our target users: they've come to our website, and as long as they're pro bono attorneys or work for legal aid organizations, we can use them for testing. Another thing we do, when we send out the initial email, is be very specific about what we want to test. If we're testing something on our mobile website, we ask them to make sure they have a working phone, whether iPhone or Samsung. And if we're testing on desktop, we make sure we know what browser people are using. We found that we had trouble with differences in how our website renders on tablets, so that's something you always want to find out. Put as much information as you can into the initial email; you don't want to bombard them, but make sure they know the testing will be done over Zoom and not GoToMeeting or some other video conferencing tool, make sure they know what the time commitment is, and give a general overview of the testing procedure. We also like to give them deadlines for their responses and clear dates for when the testing is going to happen. Once, we had a link up for testing, and some people went and tested but didn't let us know they had. When we closed the link, we didn't get their information, but we still had to give them the gift card, because they did do the testing for us. I would also just reiterate that if you send out an email to 50 people, you still may only get 10 responses, if that. The response rate is always going to be lower than you hope, so keep that in mind and don't let it discourage you.
Another thing, like Lauren said: be intentional about the demographics when you're recruiting. People on these platforms may be internet savvy, but not everyone that uses our site is internet or computer savvy, so we also want to make sure to include people who may not use the computer five or ten hours a day like some of us do. That's awesome; that's super helpful, Carla. I think hearing about those different devices and systems that you find during testing is really interesting. Thanks for sharing. Kim? I'm going to share some recruitment lessons we learned from the very first time we tried to do remote usability testing. We were testing a texting program we were building to let our website users get follow-up information from our website. So say someone visited our website about a divorce; the texts would follow up to ask, did you fill out the forms, did you ever file them in court, et cetera. One thing we did initially that I would encourage others to try, especially if you don't have a high-traffic public-facing website and you're wondering how you can find people: we initially just tested with our own family and friends. That obviously has some limitations, because when you're testing with people you know, they may be worried about hurting your feelings or saying bad things about something you made. Although I will say that some of our family and friends really did not hold back, and gave us really truthful feedback about the feelings and thoughts they had on this program. But it's definitely better than doing nothing. Everyone on our team recruited about five people, and we were able to test with that group. A big pro of this is that you can bother them and kind of prod them along if they forget to answer your survey at the very end.
So it's a very reliable way to get feedback from people who haven't been looking at the product day after day and have a good outside view. At least one person on our team actually had a friend who had used Michigan Legal Help resources in the past to file a case, and if you can think of people you know who might be in that situation, those are great people to recruit. Later, we wanted to test our texting with actual website visitors, so we put a banner at the top of the Michigan Legal Help website that said, you know, click here to help us test this new program. And there we learned some really good lessons about drop-off when you need lots of touches with someone to complete the process. Our process was that someone clicked the banner, they signed up using their email address, then we contacted them with information about how to participate, they had to sign up for the text flow, and after the text flow was done they had to complete, I think it was a Google Form, to give us other feedback. Just as an idea of how that filtered out: about 40 people initially signed up; we sent 15 of those people to another program doing usability testing, so we had 25 people; out of those 25, 11 followed through to start the testing, and we finally got feedback from about eight. We've learned from people who work on stuff like this that that sounds about right. So you have to cast a wide net, recruit a lot of people, and just get what you can get. And we definitely learned that each step someone has to take after they click is a chance for them to forget about us or decide they don't have time. I'll talk later about how we improved this with our other methods of remote usability testing as we moved along.
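The drop-off Kim describes can be made concrete with a tiny funnel calculation. This is a rough sketch; the stage names are paraphrased and the counts are her approximate figures (25 sign-ups kept for this study, 11 started, 8 completed feedback):

```python
# Sketch: per-stage and overall conversion for a recruitment funnel,
# using approximate numbers from the Michigan Legal Help texting test.

def stage_rates(funnel):
    """Conversion rates for an ordered list of (stage, count) pairs."""
    start = funnel[0][1]
    return [
        (stage, n / prev, n / start)
        for (stage, n), (_, prev) in zip(funnel[1:], funnel)
    ]

funnel = [
    ("clicked banner and signed up", 25),
    ("started the text-flow test", 11),
    ("completed the feedback form", 8),
]

for stage, step_rate, overall in stage_rates(funnel):
    print(f"{stage}: {step_rate:.0%} of previous step, {overall:.0%} overall")
```

Seeing that only about a third of sign-ups made it all the way through is exactly why recruiting several times more people than you need, and cutting out touches wherever possible, pays off.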
Thanks for sharing, Kim. Yeah, cast a wide net, and, I just saw the chat about this, allow a lot of time, because it takes time. Thank you for sharing. All right, so next we're going to talk pretty quickly about conducting the remote usability test. We just spent all that time on prep, getting all your materials and guides together and recruiting, and now we'll talk about actually conducting. First, setting up. I touched on this just a moment ago, but there are even more splits in how you can do remote usability testing. One is moderated: someone is present with the participant, observing and prompting with questions. This can be done over conferencing tools or even over the phone; you really just need a mode for the facilitator to share the thing that needs to be reviewed and to have that conversation. Unmoderated is a little bit trickier, because you as the moderator won't actually be there with the participant, so it requires more thought about how to share the thing and the instructions with your participant so that you can capture their feedback. We'll talk about that in just a moment. We'll start with moderated, since it's maybe a little more familiar, an easier translation from in person to remote. In this scenario, you have a moderator using a testing guide to ask questions and observe; participants are talking out loud, sharing their thoughts, and you're watching as the participant walks through. A bonus here is that because a moderator is present, the prototype can be a little less polished: if something breaks or the participant hits a dead end, you can redirect and reorient them. And it's not limited to digital prototypes.
It takes a little more time, but you could, say, mail a paper prototype to someone and then run the session remotely, so it's not just limited to digital prototypes. With digital prototypes, though, you can obviously share in a much easier way. You can screen share, and what I like to do often is screen share from Zoom; Zoom has a feature where you can hand over remote control, so if you share your screen and have something you want them to interact with, you can give them remote control. Or you could just email a link over to them. There are lots of ways to share, and it really just requires a way for you to share and have the conversation. Here, especially since you're moderating the session, you want to use the introduction to set the scene and explain what you want from them, and having them think aloud is often something you're interested in. There's also sample language here for getting participants comfortable with thinking out loud. While it seems normal, it can be a little unnatural and uncomfortable when you're actually interacting with something, so letting them know up front that we want them to tell us what they're looking at and what they think about it can be helpful, and you can remind and prompt them as they go through a moderated session. In a moderated session, a huge benefit is that you can invite other team members, and we really recommend considering a dedicated note taker: have a partner join, listen in, and take notes for you, or even record your sessions. Having a standard format for capturing notes is helpful, because you'll have multiple sessions, and by the end you want to be able to look across them and analyze everything, so a standard way of capturing notes helps. Moving over to unmoderated.
Again, this is a little trickier and requires a little more planning, but it can be just as effective as moderated. In this approach, participants interact with your prototype on their own, and their responses need to be captured in some way. The screenshot on the right there is from a user research tool called UserZoom Go; some of these research tools have built-in unmoderated capabilities. The screenshot's tiny, but it will actually lay out tasks for participants to do, and they can start and stop, so it's a way to capture things like task time. It also includes recording, so the participant can verbalize as they walk through and interact, and the tool will capture that. On the prototype side, since you won't be there, it's important to have a way for participants to recover from mistakes or dead ends, so make sure everything's connected, or that they know where to go if they get lost in your prototype. I think I mentioned this before, but unmoderated testing can eliminate any nerves from having a facilitator present, and it allows a lot more flexibility and autonomy for the participant: they can do it when they have time, on their own, with no one there prodding them along. Obviously, the drawback is that you're not there to probe if, as they're giving feedback, there's something you want to dive into. That's one of the trade-offs of unmoderated. And again, you'll need a way to share and collect feedback. I'm mentioning these more high-tech user research tools, but you could even just provide really detailed instructions for participants to record themselves: you send them a prototype and ask them to make a video recording of themselves doing your usability test. That's another way of capturing feedback.
I won't talk too much on this, but here's a chart of some industry tools that facilitate unmoderated testing. I mentioned UserZoom, so it's in there, but there are quite a few; we'll link them in our resources. I wanted to show that there are lots of options for unmoderated usability testing, so check them out to see if they meet your needs. One bonus of using a tool to facilitate unmoderated testing is the built-in analysis. Platforms like UserZoom Go can aggregate task time; if you had a bunch of participants, they can show you how many succeeded and failed at each task, and they might do transcription for you. So there are lots of useful features in these tools, though obviously you get what you pay for. But even if you're just having someone record themselves in Zoom or some other tool, getting a transcription can be really helpful, because you're able to actually search and review it. The screenshot I have is, I want to say, a plugin for Zoom that does transcription. The last note is that built-in analysis tools are great, but they don't replace your own synthesis; we'll talk about that in just a moment. First, some considerations for remote usability testing generally, moderated or unmoderated. One is the tech: as much as you can, prepare and pilot with your teammates ahead of time to make sure all the tech works, and share really specific instructions with your participant, especially for how they can contact you if they get stuck. The second is the prototype: make sure it works. There are different considerations for moderated versus unmoderated, but think through how you're going to share, and have backups, because if you have a live link...
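To make the "aggregate task time and success" point concrete, here is a hedged sketch of the kind of per-task rollup a platform like UserZoom Go automates, done by hand from your own session notes. The session records below are invented for illustration:

```python
# Sketch: summarizing task success rate and average time-on-task
# from hand-kept session notes (the data here is made up).
from statistics import mean

sessions = [
    {"participant": "P1", "task": "find eviction info", "success": True, "seconds": 95},
    {"participant": "P2", "task": "find eviction info", "success": False, "seconds": 210},
    {"participant": "P3", "task": "find eviction info", "success": True, "seconds": 120},
]

def summarize(sessions, task):
    """Success rate and average time-on-task for one task."""
    rows = [s for s in sessions if s["task"] == task]
    return {
        "n": len(rows),
        "success_rate": mean(s["success"] for s in rows),  # True/False average
        "avg_seconds": mean(s["seconds"] for s in rows),
    }

print(summarize(sessions, "find eviction info"))
```

Even with a spreadsheet instead of code, keeping notes in a consistent row-per-task format like this is what makes the later aggregation (and the synthesis discussed below) quick.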
Who knows; maybe the day they're testing, that link goes down. So figure out some backup methods and make sure all of that is prepared ahead of your test. Another consideration, especially in the unmoderated world: you might end up sharing links. That's great, because the participant can open the prototype on their own computer and walk through it, but if there are things you don't want shared, you can't guarantee they won't be. So think about confidentiality and what you are and aren't willing to have out in the world. Okay, I'd love to hear from the community panel again. Carla, how about you kick us off? I'd just like to reiterate that it's really great to record the sessions while you're doing them, or to have another person taking notes. That way you can really focus on what the user is doing. If they're at a certain place in the testing and they make a face, you know, a grimace or something, you can ask them what they're thinking about right then, or how they're feeling about what they're doing, and find out why they're making that face. That's something you might miss if you're not really paying attention. Another thing is, you can always share your screen or have them share theirs. But really, recording has helped us out a lot, so we do that. Awesome, good advice. Lauren, how about you? So we used Zoom, and we had participants share their screen, which was always a struggle, because a lot of people know how to use Zoom, but maybe one in ten knows how to share their screen on Zoom. So make sure they know how to do that before you start the test. We would have them share their screen, and we would always ask their permission to record, and then we would record it.
I also think it's very important to have another person on the test. I would always have a silent partner taking notes, because you really are extremely focused on whether they're going down the path you'd expect them to and, if not, figuring out why. It's extremely important to listen for verbal cues and to watch facial expressions if they're willing to share their video with you, though sometimes that's not feasible, because it's hard to look at a shared screen and a video at the same time. If you notice any confusion, make sure you give them a couple of seconds to figure it out, which is really hard when you want to jump in with a question, but give them a couple of seconds and then ask, is there something confusing here? It's very important to follow up on that kind of thing. So it's kind of a mix, because you want your set questions that you always ask everybody, but you also want to be flexible enough to ask open-ended questions when you notice something different, and to follow them. And every test is different. Sometimes people will give you conflicting advice, and you're going to have to figure it out. Sometimes a person will say, I feel like this would be better over here, and by the next test you'll have moved it, and that person will say, well, I think it was really better where it was. That's why it's important not to base your changes on one test; you really have to get multiple people because of that. Something that's an issue for one person may really not be an issue overall, and it's important to pick up patterns. Thank you, Lauren.
Yeah, so building on that, a lot of what Lauren and Carla just shared really takes us into the last part here, which is: now that you've conducted this usability testing and gathered all of this great research, how do you incorporate those insights to make improvements in your resource? Synthesis is critically important because it pulls directly from the user research, in that spirit of letting the learnings guide the design decisions and improvements rather than basing them on our own opinions and intuition. We're really letting the research be the driver in defining what the pain points are and revealing opportunities for improvement. As part of synthesis, you want to go back through and analyze the notes you've taken for each participant, or your Zoom recordings if you have those; it's really great to go through them again. Start to identify the things that pull out patterns and themes. That's why it's so helpful to have that set script of questions, so you know you're gathering the same information from everyone, even though, like Lauren mentioned, you may have probes that are nuanced for each participant. And as Victoria shared in the script, you might also have some quantitative insights, say, around how people felt about a certain thing they interacted with. For any information you gathered, this is really the opportunity to dive in and find those patterns and themes, especially with an eye toward highlighting pain points and moments of frustration or confusion, and really letting the data reveal them.
Was this a one-time thing, or was this something we saw consistently across multiple users? Let that guide and shape how you prioritize and define recommendations and opportunities to improve your solution. So again, going through the notes, make sure you're pulling from the raw data: you can pull out quotes, you can pull out insights, and the quantitative information again. It's really important that your synthesis is unbiased, and it really helps if you had someone taking notes with you, because then it's something you can talk through and think out together while you're doing the synthesis. I won't play this video, but when we share the slides there's a great video here that highlights a technique called affinity mapping, or insights mapping, which allows you to tease out those insights and, through the act of pulling them out, define those patterns and themes. This is really important because it helps to surface the recurring pain points and recurring challenges, the things that are consistent across your body of research, and synthesize them into high-level findings. Affinity mapping matters because it lets the research reveal the patterns and insights, as opposed to going in with predetermined challenges and fitting insights into assumptions we might already have; it's really letting the research reveal that. So now we have identified those challenges, those pain points, those places and spaces that we know can be improved.
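The pattern-finding step described above can be sketched in a few lines. This is a hedged illustration, not any particular tool's method: it assumes you have already coded each participant's notes into short theme labels (the labels and data below are invented), and it simply counts how many distinct participants mentioned each theme, so recurring pain points stand apart from one-off comments.

```python
from collections import Counter

def theme_frequency(sessions):
    """Count, per theme, how many distinct participants mentioned it.

    `sessions` maps a participant id to the list of theme labels
    coded from that participant's notes (labels are hypothetical).
    """
    counts = Counter()
    for themes in sessions.values():
        for theme in set(themes):  # count each participant at most once per theme
            counts[theme] += 1
    return counts.most_common()  # themes sorted by how widespread they are

notes = {
    "p1": ["confusing nav", "liked search", "confusing nav"],
    "p2": ["confusing nav", "small text"],
    "p3": ["small text"],
}
for theme, n in theme_frequency(notes):
    print(f"{theme}: {n} participant(s)")
```

Counting distinct participants rather than raw mentions is a deliberate choice: it mirrors the advice above about not over-weighting one vocal tester.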
That's where we can go back to the design and bring those insights in to think about new solutions: how can we communicate this better, how can we organize this information more intuitively based on the insights we've been given? Iteration is a really key part of this process. We might also identify that there's more we want to learn, and if you recall the double diamond, it's okay to iterate. Testing might expose an area where we want to do additional research; it might reveal opportunities or challenges that weren't on your radar before but that you want to continue to explore. So this is really where your designs get updated to incorporate that feedback, and where you can return to that moment to pause, reflect, and consider whether the testing has really produced positive improvements for your resource. I know we're coming up on time, so I'm going to lightning-round through a bunch of resources. At the very beginning we acknowledged that there are a lot of different types of research, so I want to highlight a few additional considerations now that you've heard about usability testing. Maybe usability testing isn't the only thing you want to try, so here are some different tools and resources that can also support you in learning more about your users. First is a resource called Optimal Workshop, which is good for card sorting and tree testing. This is about understanding your users' expectations or understanding of a topic, or where they might think to go to find something. It's really good at diving into the navigation of a site: through a card sort or a tree test, where might a user intuitively think to go to find a certain piece of information?
And we will hear from Kim one more time on how she and her team have leveraged tools like this. Another resource is called Hotjar, which can be incorporated into existing websites and has two primary functionalities. One is that it creates a heat map of where users are clicking and interacting with your website; that can be really valuable information for understanding where users are exploring and finding information on your site. It also has some light built-in surveying tools, so you can get quantitative input and data on what users are doing and get their opinions as well. We also love to highlight surveying tools. Maze is a really great one that incorporates both prototype usability testing and surveying in one. Maze, Google Forms, and other tools like these are also really great for prototyping: if you're designing a form or a flow, you can mock up what that flow of information might be, to simulate a user encountering a specific flow of information on your site. It's also a really great way to capture both qualitative and quantitative insights in one place, so definitely think about leveraging these tools. Unbounce is a development-free resource for creating landing pages. It's primarily used in a marketing context, but it can be really great for a few other things as well, like exploring content. It allows you to very easily set up A/B testing: if you have two versions of a page you're exploring, you can easily stand up both versions and see how users respond to each. It's also a really great way to measure performance.
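As a side note on A/B testing in general (this is not a feature of Unbounce or any tool named here): once you have click and visit counts for two page variants, a rough two-proportion z-test can suggest whether the difference is likely real or just noise. A sketch with made-up numbers, not a substitute for a proper statistics package:

```python
import math

def two_proportion_z(clicks_a, visits_a, clicks_b, visits_b):
    """Back-of-the-envelope z-statistic comparing two conversion rates.

    All inputs are raw counts; the example numbers below are invented.
    """
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (p_a - p_b) / se

# Variant A: 90 clicks in 1000 visits; variant B: 60 in 1000.
z = two_proportion_z(90, 1000, 60, 1000)
print(f"z = {z:.2f}")  # roughly, |z| > 1.96 suggests significance at the 5% level
```

The small-sample caveat applies doubly to usability work: with a handful of testers, treat numbers like this as a hint, not a verdict.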
And potentially, as we've heard a few groups do, you can recruit through your website by standing up a simple page for recruiting users, something you can easily add and connect to any website. Then there's Google Analytics and Data Studio: if you have an existing online resource, hopefully you have some sort of analytics for that site, and it's always good to check in and see what you can learn about user engagement and how your users are interacting with your site. That might be seeing where they navigate, how they come to the site, whether from a Google search, a referral, or elsewhere, or maybe the time they're spending on different pages or on the site as a whole. All of that can give some insight into usability. And finally, we always like to make a plug for customer service data. If you have a help desk, a customer service team, or a chat bot, leverage the user insights that come in that way: what are the pain points and challenges, what are the things users are reaching out about through those platforms? That can also help you identify areas for further research. So I'm going to hand it over to you, Kim, to do a quick overview of some of the card sorting that you've done, and then we'll wrap up. Sure. So I think Ashley mentioned earlier that we jumped in and started using Optimal Workshop to host our most recent round of usability tests, and we used unmoderated usability testing. I wasn't on Zoom with anyone; we just created the tests and put links on the internet, and people could do them when they wanted. We looked at a few other tools, and there are other tools that do similar things. We broke our usability testing up into multiple shorter tests. And I mentioned earlier that we had a problem with drop-off when recruiting people to do this texting program.
So here's what we did: we had a landing page on our website that listed our usability testing options. There's a banner on our homepage that says, "Click here to help us improve our website." People then go to a landing page that explains what we're doing and what the incentives are; we did a gift-card drawing for participants. Then there was a section listing the available tests. They clicked a link, went right to the test, and could do it. We had a pretty easy time recruiting people there because, number one, they were five- or ten-minute tasks, so they were easy to do. And the moment someone decided they wanted to participate, great, they could click in and do it; they didn't have to wait for us to respond and guide them through the rest of the process. I looked up our numbers right before this: we had our landing page up for about one month, and we were able to get 171 completed tests in that time, across six tasks, some of which were linked. So the benefits of doing the testing this way were less drop-off and easier recruiting. One of our tests was also a card sort. In the past, when we've done that in person, the card sort data can be hard to present. If you've never done one: you give people a list of categories and a list of cards, they decide what goes where, and they place the cards. But what do you do with that information? How do you synthesize it? Optimal Workshop has a built-in, automatically generated matrix that shows how often different cards appear in the various categories, and it's just there for you to use. Related to the landing page: that was really helpful, because it did take some time up front for our developer to create it, but now it exists and it's out there. The link isn't front and center on our web page anymore, but from now on, when we want to do some usability testing, we can just throw the links on there and turn the banner back on.
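If you ever run a card sort by hand rather than through Optimal Workshop, the matrix Kim mentions is straightforward to compute yourself. A sketch, assuming each participant's sort is recorded as a card-to-category mapping (all card and category names below are invented, not from any real study):

```python
def placement_matrix(sorts):
    """Tally how often each card was placed in each category.

    `sorts` is a list with one dict per participant, mapping
    card name -> the category that participant filed it under.
    """
    matrix = {}  # card -> {category: count}
    for sort in sorts:
        for card, category in sort.items():
            row = matrix.setdefault(card, {})
            row[category] = row.get(category, 0) + 1
    return matrix

sorts = [
    {"Eviction notice": "Housing", "Custody form": "Family"},
    {"Eviction notice": "Housing", "Custody form": "Housing"},
    {"Eviction notice": "Courts",  "Custody form": "Family"},
]
for card, counts in placement_matrix(sorts).items():
    print(card, counts)
```

A card whose placements split across many categories is a hint that its label, or your category scheme, needs rethinking; one that lands in the same category for nearly everyone confirms your navigation.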
And it's good to go. As for other tools: if you don't have a developer to build a landing page like this, you can use something like Wix or Squarespace to make a usability testing landing page. Yes, and do you have a ready link to that landing page that you can pop in? It's not live on our homepage anymore, so I can't drop the link; the link exists, it's just not advertised on our homepage. And yes, you can use Wix or Squarespace; that's a great tip from Adam Stasse from Briefly Studios, if anyone here knows him. We used a raffle payment structure for the shorter tests: we did one drawing for ten users and offered a $20 gift card, and that seemed to be a good incentive for people to join in. I will mention, as Victoria said, it is a bit annoying to coordinate the drawings when a person doesn't respond, so we were really clear on our landing page: here's what you need to do to be considered, and if we don't hear from you within this time, we'll move on to the next person. Be really clear about expectations. Optimal Workshop also has a built-in option to buy testers, so for some tests in the future, where we're not specifically looking for our own website's users, we might consider that just to get rid of the headache of doing the drawings and sending out the gift cards. Thank you, Kim. I know we are very short on time, and we want to leave at least a few minutes for Q&A, so I'm not actually going to go through the resources. The one I do want to point out is on this slide: as part of the same TIG that Angela mentioned, we also created a deeper dive into the whole design process for legal design. Yes, beautiful, thank you. That is linked there, and it also has a bunch of links to templates that might help get you started, as well as this presentation.
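The gift-card drawing Kim describes can be scripted so it's reproducible and fair. A minimal sketch, assuming you have a list of entrant email addresses (the addresses below are placeholders); it also draws a few alternates up front to cover the situation she mentions, where a winner never responds within the stated window:

```python
import random

def draw_winners(entrants, n_winners, n_alternates=5):
    """Randomly draw raffle winners plus an ordered backup list.

    If a winner doesn't respond in time, you move to the next
    alternate instead of re-running the whole drawing.
    """
    pool = list(dict.fromkeys(entrants))  # drop duplicate entries, keep order
    picked = random.sample(pool, min(len(pool), n_winners + n_alternates))
    return picked[:n_winners], picked[n_winners:]

entrants = [f"user{i}@example.org" for i in range(40)]  # placeholder emails
winners, alternates = draw_winners(entrants, n_winners=10)
print(len(winners), "winners,", len(alternates), "alternates")
```

Deduplicating before sampling keeps anyone from improving their odds by submitting twice, and seeding `random` (e.g. `random.seed(...)`) would make a specific drawing auditable after the fact.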
And then the other resources in the deck are all the things we've covered, with links in one place, so feel free to check those out: remote user testing, unmoderated testing, and other remote user research tools. That's all we have. I'm going to pass it back to Angela to facilitate a few minutes of Q&A, but thank you all for joining us today. All right, well, thank you all. It's always wonderful to hear from you, Ashley and Victoria, and I loved all the questions. So go ahead, Carolyn. Thanks. I'm going to have to call my son back in five minutes, so that's why I raised my hand to ask my question really quickly. One of the things I've been working on, on my own, builds on a screening tool called REALM, the Rapid Estimate of Adult Literacy in Medicine. I've been trying to develop my own version, a rapid estimate of adult literacy in the law, because I think it would be really useful when we're screening, when we're recruiting people, if we can screen for people at the fourth-to-sixth-grade reading level. It's a tool with about 60 words, all at different grade levels. I can share it with people, or I could put it on LSNTAP or something, if people would be willing to help me put in legal terms. Essentially, it asks: can you read this word out loud? If you can, then you may fall into one of these different grade-level reading groups, and that helps you very quickly screen for who you want as your test users. I would rather use legal words than medical words. I still need some sort of reading specialist to help us put together the score, to create the score at the end.
But this is my pitch: I'd love people's help with this. I've been pitching it everywhere I can, except I forgot to do it at the ITC conference, so I just wanted to raise it here and see if people are interested. Thanks, Carolyn. You could definitely follow up with Deirdre, and maybe put something on the LSNTAP listserv. I think Carolyn also highlights another factor for screening when you're trying to recruit a certain type of user, so reading level could be one more thing you screen for; I appreciate you flagging that. That's a very cool idea and project, Carolyn, thank you for doing it and for bringing it to everyone's attention. Are there other questions in our last few minutes? I'm sure people would stick around for an extra minute if you had questions. If you don't have questions and you're getting ready to leave, you'll get a survey as soon as this is over; please take a few minutes to fill it out. If you've ever had a TIG, you know that feedback and evaluation are very important, so please help us out, and if you don't know or care about TIGs, please take our survey anyway. We really appreciate everyone coming. Sadly, we could not do our Test-a-Palooza in person, but this was a great resource instead. If there are no other questions: you can always reach out to Deirdre with questions about usability testing, and I'm sure any of our panelists would be happy to talk with you more about their experiences and what they have learned. We will sign off.