Welcome everybody to Let the User Be Your Guide. We're really happy to have a great panel today to talk about a lot of different models for user testing. And I will primarily be serving as a moderator. My name is Mike Brunewald. I'm a program coordinator at Pro Bono Net, and this is a topic that's particularly of interest to me right now because I'm in the process of redesigning just about every aspect of the Pro Bono Net platform. So I'm looking forward to learning a lot from our panelists, who have a lot more experience in this area than I do. But what I hope you walk away with today is a set of practices that you can apply fairly easily, and some recognition that user testing is something that's important for our projects at any stage. So we'll talk a little bit about that as we get into things. The presenters today are Tony Lu, who's a product manager at the Immigration Advocates Network, and Candace Barham, who's an attorney and web content specialist at Kansas Legal Services. She is joined by Melissa Nolte, who's a website coordinator in research and program development, also at Kansas Legal Services. And then we also have Dina Nicotidis, who's the user experience manager at Illinois Legal Aid Online. And finally, Claudia Johnson, who's the LawHelp Interactive Program Manager at Pro Bono Net. So I'd like to thank all of our panelists for joining us today. And I just want to take a couple minutes here at the front end to talk a little bit about user testing in general. So as a dictionary definition might go, user testing and usability testing refer to evaluating a product or service by testing it with representative users. So typically during a test you would have participants try to complete a typical task while observers watch, listen, and take notes. So the goal is to identify any problems, collect qualitative and quantitative data, and determine the participants' satisfaction with the product.
So you're thinking in three different ways as you're doing testing. Does it work? Are the users happy with it? And what kind of reactions do they have to the design and to the functionality of the product you're testing? So in order to run an effective user test, you need to develop a test plan, recruit participants, and then analyze and report your findings. So of course this isn't something you just jump into. It's something you have to do some planning for, and then identify the relevant user group so you can make sure that you get feedback that's not only actionable but actually helps the actual user group who will be using your product. So the challenge for user testing is determining which method to use to get the data that's most helpful. You can do all kinds of different things with user testing, but some methods are more appropriate in some settings than others. So we want to make sure that we're getting data that's helpful in making optimal user experience decisions. It's important to remember, too, that user experience isn't limited to the technology but the whole experience of interacting with the product. So what is it that leads up to accessing your product? What do users walk away with in terms of information, in terms of experience during the use of your product, and then what further actions can they take? So it's good to know what's going on internally with your users and where they might go from there, especially in our space, where we really do want our users to walk away with useful information that they can actually act on. So I think I may have answered this question to some extent, but why do user testing at all? Why should we care? We're all experts, right? We're either experts in the law or experts in technology or both. So why do we need to get feedback from users?
You've probably heard this in some format or another before, but you need to think of your projects as investments: you're investing your expertise, your time, your resources into creating a piece of software or some other product for your client base to use. So user testing can actually help increase your return on investment, and it can do so by improving user satisfaction. It'll help you uncover problems early, and it can also help you learn random things that you would never expect. I remember when I was testing HotDocs interviews when I was still in DC, one of the most valuable experiences I had was actually going to the court, to the self-help center, and sitting down with an actual user and watching them go through my interview that I had worked for months and months on. The attorneys in my office had vetted it and we had identified all the bugs, but what we didn't realize was that certain words we chose weren't making sense to actual users. So even though we tried really hard to avoid jargon and we tried to make the logic really clear, you just don't know until you actually sit down with the user and you see how they interact and they tell you what their experience is and what they need. So it's really important at a lot of different stages to do this, and I think when we think about the methods, and as our panelists talk about the methods they've used, there's a few things that are going to emerge. One of which is to test early and often. This is often lost in project plans. I know I've been guilty of this myself: I have this grand project plan, and then I realize I have not scheduled time for user testing. So it's really important for you to think about that at the outset of a project, particularly if it's a grant project.
What ends up happening a lot with grant projects is that you do exactly what the grant requires, and if user testing isn't a specific requirement, it can get lost; it may end up being something that gets triaged out of the process. So do it early, do it often. If you already have a product in place and you're not sure if it's working well, or you have one that's in place and you're pretty confident, either way you don't really know what's going on until you talk to your users and find out what their experience is. So think about your audiences. Think about the information that would be helpful for you to get from the audience, so you develop instruments that are helpful. And then remember that user testing doesn't have to be expensive and it doesn't have to be time-intensive. There are a lot of different ways to do user testing, some of which are expensive and time-consuming, but many of which, as we're going to see, are less of a footprint on your time and your resources and can still be very valuable to the development of your product. So I think with that, and sorry, I don't mean to jump past my presenters, I want to hand things off to Tony Lu, who's going to talk about his experience with user testing at the Immigration Advocates Network. Great, thanks. Is everyone able to see my screen? We sure? Yeah. Great. So I'm probably going to be a bit of a black sheep in this panel, in that I'm going to be talking about a product where we haven't actually had a chance to do testing of the product and service itself, because we just did our soft launch, our beta launch, last week to a closed group of users. But I want to talk a little bit about how we used focus groups in the design process. So in some sense, it was sort of user testing of the concepts that we were developing to build this new service that we call IMI.
And I think one of the major takeaways from this project was that we were constantly being challenged to rethink our assumptions about our users, how they interact with legal information, and the workflows that we were hoping to build. So, a little bit about the platform: IMI was a platform that we wanted to create that was targeted at low-income and low-computer-literacy users who were interested in learning about immigration benefits as they applied to themselves, and what the requirements for immigration relief might be. And then one of the big challenges for us was to take this huge, complex body of law and make it accessible and welcoming to users as well as lay advocates. So we were already starting to define multiple different user personas or profiles. There would be the immigrants themselves; lay advocates such as social workers, librarians, people who routinely provide services to the immigrant population but aren't themselves immigration experts; and we were also designing this platform to provide tools to immigration experts as well. And so right off the bat, we were starting to run into this challenge of needing to define multiple different user types. And so we fell back to a very typical approach in software design, which was to sit down and think about what the personas are that we're designing the software for. This comes from a UX website called theUXReview.co. It's a British UX site. And the definition of persona as it's used in software development is basically a particular audience segment, defined based on various types of qualitative and quantitative research, that tries to identify why exactly the person is there on your site or your service, what they are trying to accomplish, and what the things are that either motivate them or frustrate them as they try to solve a problem. And for us, they're really helpful as a starting point to figure out how to design something. They're kind of like a compass.
So it helps with any decision that you're making, like where should we place these buttons? How many calls to action should there be? For all of these things, it's really helpful to define these personas so you can just go back to: well, what would the typical non-computer-literate immigrant user of the site expect to see when they're trying to navigate this site? Once you've got some user personas defined, it's also really important to define user journeys. And typically that's a series of steps, usually about four to 12 steps, that represents a scenario for a user as they're trying to interact with your product. And it's helpful to think about user journeys not just within the context of navigating a website or a flow, but really to think about a more abstract application of that concept to just solving a problem. And so here's an example of the concept of a user journey applied to getting legal services generally, right? And I actually took this from a really good website, the Open Law Lab (the URL is down at the bottom), which takes a lot of these design concepts and applies them to legal service delivery. So I encourage everyone to take a look at that as they think about trying to figure out how to build products, build interviews, or even just build service delivery. There are a lot of really excellent articles at that site. But as you can see, the user journey doesn't have to be anything you hire a professional designer to do. It can be done with stick figures. The basic concept is just that you need to figure out how to represent a series of steps that somebody needs to take, and the general flow that leads a person from having a problem to the resolution of that problem. Engaging in this exercise took us down this path of understanding that these personas were actually causing us to rethink how to organize legal knowledge.
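The persona and user journey concepts Tony describes can be sketched as simple data structures. This is purely illustrative: the `Persona` and `UserJourney` classes, and the example persona "Maria," are invented here and are not part of the IMI platform.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A lightweight user persona, as used in software design."""
    name: str
    description: str
    goals: list          # what the person is trying to accomplish
    frustrations: list   # what blocks or discourages them

@dataclass
class UserJourney:
    """A user journey: a short ordered series of steps (typically 4 to 12)."""
    persona: Persona
    steps: list = field(default_factory=list)

    def add_step(self, step: str) -> None:
        self.steps.append(step)

# Hypothetical persona for a low-computer-literacy immigrant user.
maria = Persona(
    name="Maria",
    description="Recent immigrant, limited computer experience",
    goals=["Find out what immigration relief she may qualify for"],
    frustrations=["Legal jargon", "Navigation-heavy pages"],
)

journey = UserJourney(persona=maria)
for step in [
    "Hears about a benefit from a relative",
    "Searches online for more information",
    "Lands on the site homepage",
    "Recognizes her situation in a narrative persona",
    "Reads eligibility information",
    "Finds a local legal services provider",
]:
    journey.add_step(step)

print(len(journey.steps))  # six steps, within the typical 4-to-12 range
```

The point is only that a journey is an ordered list of steps tied to a persona; stick figures on paper serve the same purpose.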
And what we did was we started organizing focus groups: focus groups with potential users of the site, so immigrants, as well as with lay advocates. So we had focus groups that included librarians, social workers, community organizers, and we also had legal experts. And what we learned in these focus groups was that the way that lawyers think about legal knowledge is typically represented in this slide by those verticals. So each of those vertical columns is a form of immigration relief or a benefit: Deferred Action for Childhood Arrivals, or asylum, or status under the Violence Against Women Act. And we would think about the various legal factors that need to be met for each of those forms of relief. But what we found in these focus groups is that the immigrants themselves, as well as the lay advocates, didn't think of immigration in these verticals. What they thought of was a person's holistic story. And that's represented by the red horizontal box and the bullet points on the left side that kind of tell that person's story. So typically what we would hear when we asked, well, how did you figure out that you were eligible for something, we would hear somebody say, well, I found out because my cousin got this form of relief, and then my teacher told me I should look into it because my situation sounded like other students who had been able to apply for something. And it became clear to us that what might make sense for experts doesn't necessarily make sense for your typical user. I'll add, actually, just as sort of an aside: one of the things about focus groups is that this was our first time conducting them, and we learned through our consultant that it's not always easy to just organize these and have people show up. We had to think about different incentives for people to participate.
And luckily for us at the Immigration Advocates Network, we've built up a network of partners and done a lot of outreach to service providers. And so we were able to offer as incentives to the lawyers and lay advocates the possibility that in the future, once this is built, they could be there at the ground level during launch to actually have feedback into the services and the development of the products as we went along. And then in terms of target users, what we learned was that typically one of the best incentives is to offer gift cards of some kind, and that's actually what we ended up doing. I've started to dive a little bit into what the takeaways from the focus groups were, but here's a little bit more information on that. So in addition to thinking about how we needed to reorganize the way we thought about legal information, to group things more around stories as opposed to specific forms of relief, we were finding out that a lot of people needed information to be curated and organized, and that just providing a search bar is not going to be sufficient. So we've undertaken to figure out a way to help people navigate very complex information in ways that are a little bit more intuitive. The second point that we really picked up was that a lot of people don't even know what they don't know. So putting a lot of navigation-heavy elements right there, front and center on the homepage, is probably going to be overwhelming, and so there had to be a different kind of entry point into getting information. It wasn't about just, here are all the categories, which one applies to you. People weren't even really sure which one they were looking for. And then the final point, which is the point I made earlier: people learn, particularly in the legal field, by relating to others. Do they know somebody who was in a similar situation? Now, ultimately what we ended up doing was we took this design tool, the user personas, and actually made it a feature on the site.
So front and center on our homepage we have these personas. They're actually fictional, but they describe typical scenarios where people who may be immigrants in this country might have certain forms of relief available to them or certain benefits available to them. And so we just tried to create these archetypal narratives that people could connect with a little more easily than seeing special immigrant juvenile status as a link and knowing to click through on that. And so this is something that we've actually made a critical element of our site, and we're hoping now to find out how successful that is. So, as I said, we just did our soft launch, and we are hoping to get a lot of good data after our launch to figure out how people are using the site, how people are navigating the site. And I suspect that other panelists will talk about some of the specific mechanics of how that's done. But I'll share very quickly: we're using a social sharing tool called AddThis on every page. So we'll be collecting data about when people share through social media, either Twitter or Facebook, or if they email certain pages as links to relatives, friends, anyone else who they think, oh, this is interesting, this kind of sounds like you. We'll hopefully see that our use of these personas is successful in getting people to internalize and understand the legal information, and we'll also be heavily relying on Google Analytics to understand how people are navigating through the site. And then just a few points that I wanted to close with. Mike had already mentioned this: you want to test early and often, which means building prototypes and testing those. And that was the whole concept behind our beta launch. It's not quite ready for primetime, but we wanted to get it out there and get feedback from our community before we do a public launch and invite the public to take a look at our site.
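As a rough sketch of what analyzing that sharing data might look like once it comes in, the snippet below tallies share events by page and by channel. The event records, page paths, and channel names are all invented for illustration; they are not actual AddThis output.

```python
from collections import Counter

# Hypothetical export of share events: (page, channel) pairs.
share_events = [
    ("/story/maria-daca", "facebook"),
    ("/story/maria-daca", "email"),
    ("/story/jose-asylum", "twitter"),
    ("/story/maria-daca", "facebook"),
]

# Which channels people share through, and which pages resonate most.
by_channel = Counter(channel for _, channel in share_events)
by_page = Counter(page for page, _ in share_events)

print("most shared page:", by_page.most_common(1)[0])
print("shares by channel:", dict(by_channel))
```

A persona page that gets shared heavily by email to relatives would be one signal that the archetypal narratives are connecting with users.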
In addition, we rely heavily on customer feedback and support requests as testing, as sort of live testing as opposed to organizing specific testing sessions. And oftentimes our support coordinators are the voice of our users, and in some sense it's sort of like real-world testing. We've empowered them to become the advocates for our users, and that guides a lot of our product decisions. So in any given week, we may realign our development priorities because support requests are coming in and we're finding that something doesn't quite work well for our users, and we'll need to focus our energy on fixing that before we develop some new feature that will have less of a direct impact on the users. So I think with that, I will pass it back to Mike, or I think we're moving on to Candace and Melissa next, unless there are questions. Yeah, Tony, I have a quick question for you. This is Jillian. Can you tell us a little bit about your plans next? You mentioned the data that you'll be collecting on your site with the share feature, but are there any other strategies that you'll be undertaking going forward, or do you find it's a moving target, that based on what you learn, you adapt your strategy? It is a bit of a moving target. So this is the AddThis functionality I was mentioning, and we're hoping that we'll get some good data from people sharing information, sharing these pages. That was something that we built heavily into our design: the idea that if somebody connected with this narrative and thought, oh, that sounds like my friend or that sounds like my cousin or my relative, they would share this information, so we tried to make it as easy as possible. We are also conducting user surveys with our partners now during this soft launch period.
And then as we move forward with our public launch, we're going to be doing some roll-out of services with legal services partners who might use this tool at, say, a library, and so there will be some on-site, kind of observational, user testing, which may include user surveys. But a lot of our user testing is heavily driven by our service delivery coordinators, who are actually on-site observing users and asking direct questions, much in the same way Mike was describing his experience with testing. And a lot of the more data-driven stuff is more passive, in the form of Google Analytics and this AddThis social sharing functionality. Great. So I'm not seeing any more questions, so thank you, Tony. And I'll go ahead and pass the presenter role to Melissa and Candace. All right, do you guys see our screen? We do. Okay, this is Melissa, and I'm going to start for us. We have been evaluating users' experiences and usability on our websites for a while now, but not that long. So we're learning as we go. We've been using all of these methods in Kansas. We've got a TIG, a Technology Initiative Grant from LSC, that has several projects in it for website enhancements, and we're building a pro bono website with Pro Bono Net, so we've been doing a lot of user evaluation with that project. We use surveys; we're going to go into each one of these areas a little bit further. Focus groups. We have our law interns do projects for us. On each one of our web pages, we've got a feedback mechanism, and we use Google Analytics quite a bit. And then Candace will talk about a usability testing model. When I first started doing evaluation years ago, I always thought of it in terms of pre- and post-testing, and that works okay if you've got a short-term project, or if you're just doing a program or something where you need to find out if people learned anything or if their attitudes changed over a short period of time.
But for this kind of testing, when you're trying to evaluate website usage and the user experience, it's a continual process that can go on forever. And that's because you've got a live, organic thing you're trying to evaluate, and it's changing all the time, and you can change it too. So you should just look at it as a pretty much lifelong process, at least the website's lifelong process. So, like I said, we have a lot of options. Surveys. We use SurveyMonkey quite a bit because it's simple and easy and people can do it at their convenience. We put URLs on the website and in emails, and I've been surprised at what good responses we get to SurveyMonkey surveys. We make them very short and sweet, one page, either on paper or on SurveyMonkey, with a Likert scale, and then we always have room for people to write their comments. And as long as you do that, I think you'll be getting some good information, because they'll say one thing on a Likert scale and then maybe, you know, go ahead and tell you more about it in the comments. And you all know what the Likert scale is. A five-point Likert scale: don't do seven, don't do three. Five is really the best number to have in the scale. So you can measure all these things: attitude, knowledge, satisfaction. We measure how useful the website is to people at regular intervals, after each enhancement and throughout the project. Tony talked about focus groups, and I think each one of us today is going to be talking about focus groups, so I won't go into too much detail. We've done in-person focus groups, and we did one online to talk to our pro bono attorneys about what they wanted in a website. And I think it's a lot harder to do a focus group online, because so much of what you get from a focus group is body language and facial expressions and the way people interact with each other.
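A short sketch of how those one-page, five-point Likert results plus free-text comments might be tallied once collected. The question, ratings, and comments below are made up for illustration, not actual KLS survey data.

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to "How useful is this website?" on a 1-5 scale,
# paired with an optional free-text comment.
responses = [
    (5, "Found the forms I needed right away"),
    (4, ""),
    (4, "Search could be better"),
    (2, "Couldn't find info on evictions"),
    (5, ""),
]

scores = [score for score, _ in responses]
print(f"mean rating: {mean(scores):.1f}")
print("distribution:", dict(Counter(scores)))

# The comments often explain the numbers, so keep them next to the scale:
# a low score with a comment points straight at the problem.
for score, comment in responses:
    if comment and score <= 3:
        print(f"low score ({score}): {comment}")
```

Pairing the numeric distribution with the comments is exactly the "say one thing on the scale, then tell you more" pattern Melissa describes.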
That's a very important part of a focus group, because they will encourage each other to remember something that maybe they hadn't thought of. That's what's so great about a focus group: the discussion that comes out of it, the people stimulating each other. But if you don't have the ability to have people come in for a focus group, in a pinch, it does work. So for us, it did work out, because our focus group audience was a group of attorneys. They all had very busy schedules. It wouldn't have worked out for us to try and get everyone in a room together at a certain time. So in a pinch, it does work. It's not ideal, though. And you want to try and get a manageable number of people. Six to ten is what I recommend. And I've done lots of focus groups, and it's always a good idea to start with a real broad, big-picture question and then get more into the focus, because people will start their thought process going from the wide angle to the close-up. And Tony was talking about incentives. I've always found that food is a great incentive for just about anybody. You can get them to a focus group if you're going to have food or desserts or something. That's been my biggest incentive for people to get into focus groups. Also, lots of appreciation. You just fawn all over people. I think that's a really good rule of thumb. And keep it short so that they don't get tired and want to go. And it's always good to get them invested in the process by telling them you'll share with them what you find out at a later time. Candace and Melissa, there's a question. Can you give examples of questions that you might ask in a survey versus those you might ask in a focus group, or are they similar questions? Well, in a survey, you can be more focused right from the beginning, because people have time to be thinking about the questions as they're answering them. In a focus group, it's an organic process where you want to stimulate discussion that can't be pointedly put into a Likert scale.
So you're getting impressions from people. It's like a qualitative picture. It's a snapshot of what that person's thinking and feeling that day. And surveys are, you know, you're going to be measuring with numbers what people are saying. So if you put those two together, they're very different, but they very much complement each other in terms of evaluating users' experiences. Does that make any sense? So the questions used in focus groups are going to be like think-back questions or how do you feel about this? Not yes-no questions. You don't say in a focus group, how would you rate blah-blah on a one-to-ten scale? You're going to be doing things to stimulate discussion so that they stimulate each other in their opinion. I would say survey questions are questions where you want a quick answer, whereas focus group questions are questions where you want people to talk to each other. Since there's more than one person answering in a focus group, the ideal thing is you want the focus group participants to start talking to each other, you know, about the topic. And you just want to observe. You're just directing it. So you want to sit there and have them start talking to each other about the topic, and you can just direct what they're talking about. And so the survey is more of an isolated they're alone and they're thinking all by themselves. And that's valuable information, too, that the focus groups make it much richer, in my opinion, to add to what you're measuring. Does that make sense? Yep. Okay. And I'll talk about this real quickly. We've also been enhancing the search feature on our website. So with legal interns, we had them evaluating what searching the website was like. We give them a list of search terms and they take screenshots of what comes up first. So that's a real simple little evaluation tool. If you've got some interns that you can have put to work on this. 
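The interns' exercise, running a fixed list of search terms and recording what comes up first, could also be scripted. The sketch below substitutes a toy keyword matcher for the real site search, and the page titles and terms are invented; it only illustrates the shape of the check.

```python
# Stand-in page titles for a hypothetical legal aid site.
PAGES = [
    "Wills and Estate Planning in Kansas",
    "Living Wills and Durable Power of Attorney",
    "Tenant Rights: Repairs and Habitability",
    "Eviction Process Overview",
]

def site_search(term: str) -> list:
    """Naive stand-in for a site search: titles containing the term."""
    term_lower = term.lower()
    return [p for p in PAGES if term_lower in p.lower()]

SEARCH_TERMS = ["wills", "living wills", "eviction", "repairs"]

# For each term, record the first result (what the interns screenshotted).
report = {}
for term in SEARCH_TERMS:
    results = site_search(term)
    report[term] = results[0] if results else "(no results)"

for term, first_hit in report.items():
    print(f"{term!r} -> {first_hit}")
```

Rerunning the same term list after each search enhancement gives a cheap regression check alongside the interns' qualitative comparisons.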
And we asked them to try searches on other legal aid websites and discuss the differences between the websites, what they liked and what they didn't like. On each page of our website (we get to choose which pages we do this on, but we like to have it on as many pages as possible) we ask: how helpful do you find the information on this page? It's at the bottom of a lot of pages, and once they answer that first part, they can leave comments after that. And we can look at that every day if we want and find out if there's any problem they're alerting us to, like a broken link or just whatever on the website. So we monitor that about twice a week, and that gives us a good idea of how we're doing and what needs to change. So that's a good way of, again, not pre- and post- but continual and organic monitoring. You can note changes over time in how people rate the helpfulness, so that if you find something in the feedback and you change it, then you can see whether it helped or not by what people tell you after the changes. Do you have something else to say? Google Analytics. I really like to see what pages are most popular. And as you know, there are so many variables that come into play when a page is popular. What's in the news? It might be something that's just trending at the time. But it's still a great tool to monitor the changing issues and to see how the technology changes. Since I've been in charge of our website, I've seen that 80% of our users were on desktop computers, and now it's like 60% mobile. So most of our website users are using their phones, and that was definitely not true when I started. So that's all been interesting. All right, so our usability testing model. Now, the reason we did this model was because people don't always do what they say they're going to do, but they always do what they do. So I use that quote to talk about the importance of observational research.
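That twice-a-week review of the per-page feedback could be partly automated with a small script that flags low-rated pages and surfaces their comments. The page paths, ratings, and comments below are hypothetical, and the rating cutoff is an arbitrary choice for the example.

```python
from collections import defaultdict

# Hypothetical export of the per-page "How helpful...?" widget:
# (page, rating on a 1-5 scale, optional comment).
feedback = [
    ("/family/divorce", 5, ""),
    ("/family/divorce", 4, ""),
    ("/housing/eviction", 1, "Link to the court form is broken"),
    ("/housing/eviction", 2, "Page doesn't load on my phone"),
    ("/wills", 5, ""),
]

by_page = defaultdict(list)
for page, rating, comment in feedback:
    by_page[page].append((rating, comment))

# Flag pages whose average rating falls below 3, and keep their comments,
# which often name the specific problem (a broken link, a mobile bug).
flagged = {}
for page, entries in by_page.items():
    avg = sum(r for r, _ in entries) / len(entries)
    if avg < 3:
        flagged[page] = [c for _, c in entries if c]

print(flagged)
```

Run before and after a fix, the same report shows whether the change actually moved the ratings, which is the continual monitoring Melissa describes.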
Our usability testing model is a way for us to have people perform tasks in our presence and have us kind of look over their shoulder and watch them perform these tasks while asking questions. So we prepared a bunch of pre-testing questions, testing questions for while they're performing the tasks, and then post-testing questions. And for our purposes, which was this project with our search function, we had people performing tasks with our search while we asked follow-up questions. And the value of it is for us to really see how people interact with our site, not just in theory, but in actuality. And you learn a lot about where people actually move around. Where do they click, you know? It's very beneficial to do this kind of testing. Even the expressions on their faces if they're getting frustrated. Yeah, what they're feeling while they're going through it. And that's why it's very important, again, that one of the first steps is reminders and an introduction. So the introduction is telling people what kind of project it is, why you're doing this, and really giving them an idea of why they're even there. But the reminders are very important, to tell people to think out loud, because that's the most beneficial part of your observational research: you want to get into their mindset. You know, if they're looking around the website scrolling, you want to hear what they're thinking while they're scrolling. That's the benefit of this type of testing as opposed to any other type of testing we've been doing, you know? So: before you click on a link or submit a form, let us know what you expect to find on the next page. After you click, tell us if the result is what you expected. We want to know what you, the user, are thinking when you're dealing with the content, okay? So pre-testing questions might be something like: do you commonly use websites? Where do you usually go to find this type of information? You know, those kinds of questions.
Post-testing questions may be something like: would you recommend this in the future? How easy was it to do? Now, for the testing-type questions, there are two types of testing questions you might use in a usability testing model: tasks and scenarios. Now, an example of a task may be something very simple, like: open the KLS homepage, find the search box, conduct a search for the topic of wills, examine the results, narrow the results to living wills, et cetera, et cetera. So something where you're just giving them steps and seeing them go through the steps, and seeing how easy it is for them to go through those steps. Now, a scenario may be something where you put someone in a mindset and see how they would react if they were in that mindset. For example: you are a Kansas tenant living in an apartment building in Topeka. Your landlord won't fix a problem you're having with a water pipe. You've sent him many requests, but he keeps saying it's not his problem and you have to fix it. You want to know whose responsibility it is to fix the problem. And then you just let the user interact with your website and see where they would go from there. Now, this kind of testing really lets you see, hands-free, just what the user would do. And I would say observational research is really invaluable because it gives you the kind of feedback you really wouldn't get any other way and lets you be a fly on the wall. So it's really been very useful. And again, Likert scales are very beneficial in this setting, but I would also say questions like what did you like, what did you dislike, how do you feel, those kinds of questions are also very, very useful. Thank you for listening. Are there any questions? I'm not seeing any questions right now, but, oh wait, sorry. I do have a question from Michelle, who asked: how viable do you think it would be to do observational testing via video?
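The structure Candace walks through (introduction, reminders, pre-test questions, tasks, scenarios, post-test questions) amounts to a simple test-plan outline. Here it is sketched as data, with wording paraphrased from the talk rather than taken from the actual KLS script:

```python
# A hypothetical usability test plan following the structure described above.
test_plan = {
    "introduction": "Explain what the project is and why we're doing this.",
    "reminders": [
        "Think out loud as you go.",
        "Before you click, tell us what you expect on the next page.",
        "After you click, tell us if the result matched your expectation.",
    ],
    "pre_test": [
        "Do you commonly use websites?",
        "Where do you usually go to find this type of information?",
    ],
    "tasks": [  # step-by-step prompts
        "Open the KLS homepage and find the search box.",
        "Search for the topic of wills and examine the results.",
        "Narrow the results to living wills.",
    ],
    "scenarios": [  # open-ended, mindset-based prompts
        "You are a Kansas tenant in Topeka. Your landlord won't fix a broken "
        "water pipe and says it's your problem. Find out whose responsibility "
        "it is to fix it.",
    ],
    "post_test": [
        "Would you recommend this site? (1-5)",
        "How easy was it to do?",
        "What did you like? What did you dislike?",
    ],
}

# A session script simply walks the plan in order.
order = ("introduction", "reminders", "pre_test", "tasks", "scenarios", "post_test")
for section in order:
    assert section in test_plan
print("sections:", len(test_plan))
```

Keeping the plan as data makes it easy to reuse the same session structure across participants so their results are comparable.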
Well, it's pretty important to have that give and take, to be there with them so that you can give them pointers and remind them if you need to what the tasks are, what the scenario is. My question would be, you would see them through the video, but how would you also see their screen? If you had a way of doing that, if you had the video set up so that it was also behind them, I guess, then it would be possible. I mean, you would always be behind them in the usability testing model, so if you could set it up to be that way, it would be viable, I suppose. It just kind of defeats the whole fly-on-the-wall approach. That's why it's very valuable to be in the same room behind the person, observing them doing it. This is Brian Rowe from Northwest Justice. I've done testing via this model for other sites, other things outside of legal services. When you're looking at them through their webcam and they're streaming their desktop with where their mouse is going, and you're recording that as part of the process, you're actually less intrusive, because the webcam is the only thing they see there. You're not looming over their shoulder. I like that particular method. It is different, but it is very viable. No, I could see that. Now that you bring up streaming the screen and then watching them through a webcam, I could see that being viable. If you just keep reminding them to think out loud and keep those reminders going and keep asking those testing questions as they're going, it could definitely work. I've just never done it myself to give you that kind of feedback. I have another question. I'll take one last question, just in the interest of time, from Pat. Do you and colleagues sometimes come to different conclusions or observations of the same user interaction with the tool? Well, one of the best parts of this kind of evaluation is that you talk about it immediately afterwards and compare your impressions.
It's good to have more than one person in this regard because one person will catch something that somebody else didn't. You can usually make a report out of it because you generally agree on the tone. There might just be subtleties that you don't agree on. I mean, things can be interpreted in different ways, I'm sure. For example, like our Google Analytics results, if you have good enough backing for your interpretation, you can make an argument for why you think something is indicative of a certain result. Sure, people can come up with different reasons for why something comes out the way it does. As long as you have good enough reasoning and backing, I'm sure people could come to different conclusions. I read a good example of the difference between people writing something down and people saying something in a group. Just take the simple statement, that was good. You can say that in so many different ways. If it's written, it just says that. But in a focus group or a one-on-one observation study, the person can say, that was good. That was good. That was good. So there are some real distinct advantages in interpreting feedback and the user experience in that regard. The inflection carries so many of the details that you don't get on paper or in a survey. That's why I think it's very important, whatever conclusions you do come to, to have good arguments for why you came to those conclusions and be able to back them up. Great. That's very helpful information. Thank you so much, Candace and Melissa. We're going to go ahead now and turn it over to Dina. So Dina, take it away. Great. Hi, everybody. I'm Dina Nicotidis and I'm the user experience manager at Illinois Legal Aid Online. I've been in this position for over a year now, not quite a year and a half.
And I just wanted to kind of tell you how it came to be that we as an organization decided to become more focused on user experience design, and then go into a few of the methods that we've used. For the past couple of years we have been working on a redesign project, and we got started on that redesign project because we had five websites that we thought were meeting the needs of five different audiences. They had been around for a while. We knew visually we wanted some redesign. We knew the back end needed some work. And so we went into a business process analysis, and through that we realized that even though we had all these different audiences, and websites we thought were meeting their needs, actually a lot of people just used our one main website. And so that was kind of interesting to us, and we wanted to learn more about our users, their differences and their similarities, their different needs, and how to meet those needs. And so we engaged in this new project. Part of what was negative about our old websites was that we had a lot of information. We had so much information that it was hard for our end users to see everything or find everything or know what was exactly right for them. It was also hard for our staff to maintain. We also had lots of good ideas. We would ask people, especially maybe our advocate audience, what features or tools would be helpful to you, and we implemented those, and then they were never used. In terms of our main websites, which you can see here towards the bottom, we were very search-heavy. We had no way for our users to easily browse, to kind of see the breadth of the information and services that we offered on our website. And our websites were built so long ago that they were terrible on mobile, and we know that more than half of our users are coming from mobile. So we set out to create something that was extremely easy to use.
This is actually a snapshot of our website as it currently is. We just launched in August. We focused on condensing our content. We wanted fewer individual pieces. We had lots of duplication. But we still wanted to cover the same areas and the same depth of information that we had. And so we needed to figure out how to do that, how to approach it and make sure that it was best for our users. We wanted all the information and services that we now have to be very findable via search and browse. We found out that half the people like to search and half the people like to browse. And that's just generally how people approach all different kinds of websites, whether they're searching for medical care or different items like that. And we also wanted to find a balance between what was enough information for our users and what was too much, so that we hit that sweet spot of giving them enough information to get going without completely overwhelming them and making them feel like they can't do this on their own. Because we do have a large audience of users who are self-represented litigants. So, how we started becoming user-focused. We set out to study how people really work and how they really do things, versus how they say that they work or do things. Some of the other presenters have talked about this. We wanted to not only ask but observe. We really wanted to see how people were interacting with different pieces of our website, or how they might use some of our different features. And we also wanted to make sure we were reaching all of our different users. So to get started with reaching all of the different user types, we created user personas. We based our user personas on Google Analytics and on surveys. We also did a large focus-group brainstorm with different people who participated in our website and knew our different audiences. And we came up with our five main personas.
We use these personas for everything that we do now. We use them to make sure that all of their needs are being met. Some of our features might only be geared towards a few of our personas, and then we can kind of specialize in those areas. But generally this really helps us maintain focus on meeting the needs of our different users. The other thing that we did was revamp the way that we were doing development. We used to take what they call a waterfall approach, where we would get a grant, maybe do a little bit of design and research up front, but we just got developing right away. And we always said we wanted to do user testing at the end, but sometimes there wasn't time or there wasn't money, or when we did get to do it, there was no time or money to implement any changes, because we had waited until the end and we were basically launching. So we changed our entire approach to how we develop. We now focus heavily on the users at every single step, and we've broken it down into smaller steps. We do lots of research at the beginning, and that includes interviews and observation and surveys and existing data, to find out, if we're developing a new feature or a new area of information, what is it that users actually need and want? From that we might go on to create personas. We've done user journeys for some specific projects, and all of that is before you get into design. Then we do design, and during the design phase there's also user testing, and all of that, again, is before we have even had any of our tech development start. Then we start our tech development, and during that process we're doing QA testing but also some user testing, before we get to a final finished product, and then we do user testing again once we are at our beta launch phase. And then we just keep cycling through until we feel good enough to launch.
So this has been a huge change and really helped us be able to focus more on our users and not run out of time, not run out of money. It's really easy to say, you know what, we just don't have the time to user test right now, we just need to get it out the door, especially when you work based on grants and you have those set milestones to meet. So this has really, really helped us, and this is kind of how my position was created. I worked at this organization, and as we were changing our process and our approach and how we do things, we decided it was so important that we wanted to dedicate staff to it. So they always hear me going, nope, you can't cut out the user testing, whenever we're doing our project timelines now. So this has been really helpful to keep our focus. As for our testing, we have done everything that everyone has talked about here, so I don't want to go into too much detail. Some of the things mentioned on here are tree tests. We've done observation testing. We are doing unmoderated observation, which, as Brian mentioned, uses webcams while doing a screen capture at the same time. We've done some comprehension evaluation, and we usually do that through SurveyMonkey surveys and focus groups, and we do each of these things at different points in time. We look, at the beginning of the project, at who our users are, and then we evaluate what kind of testing would be best for this particular situation. What do we want to find out from our users, and what is the best way to get that information? For each of these different types of testing we have testing protocols set out. So we know, before we launch a test, about how many users we're going to need before we get enough data to start to make decisions, and for each of the different types of tests it's different. For the different types of audiences it's also different.
So I want to go into a little bit more detail so you can learn about maybe some new tests that you haven't done. I'm going to talk a little bit more about card sorts and click tests and how those might be helpful to you. A card sort is pretty much what it sounds like, and this picture here is a visual of what you would do on paper, though this isn't one that we've done. I was digging around, because we have done these on paper. We actually created a really large board with envelopes, and we gave the person a stack of cards and asked them to sort them, and that's really what you do. So whether it's on paper or virtual, which I'll show you next, you're giving them a set of cards and asking them to organize them into a set of categories that you've given them. We often have an "I don't know where this should go" category to catch any outlier that we haven't accommodated. And again, like I said, you can do this in person on paper, or you can do it online, unmoderated. We address each situation as it comes and decide what we want to do, and this really provides a great understanding of how users organize information. We learned so much from this when we were doing the rebuild about how to organize our main navigation and how to organize legal content. This is really helpful at the beginning of projects, but I think it's always good to check back in with audiences. So if maybe your website's been around for a while, or you've categorized things for so long in one way that you just assume it's the best way, it's always good to check in. This is an example of one of our online card sorts. For the past year, year and a half, we've used software called Optimal Workshop, and it allows us to do all kinds of different tests. It allows us to do card sorts and tree tests and click tests, lots of different types of testing that we would need for lots of different aspects of the website.
What's great about it is we were able to launch these on our website and just have users take them as they wanted to, and then we could see the results in real time as they were coming in, and there's also tons of great information the software puts together for you once you end your test. So this is what a user would see if they clicked the little thing on our website saying they wanted to participate. They would get a welcome screen. We make sure to tell them how long it's going to take. What's great about the software as well is it shows you how many people started, how long they were in it, and where they dropped off. So we like to tell people right up front what their commitment to taking this little test for us will be, and why we're doing it, instead of just leaving them in the dark; we want to make sure they understand that they're helping us create a better website to meet their needs. So then they see some general instructions, and then they get started on the test. And this test, again, is a card sort, and so the items on the left are the different items we ask them to organize into the categories on the right. And it's great, they literally click and drag. They can move them around. They can drag them back. They can drag them to different categories. And when they're done, they click finish. For the results we get once we end the test, you can see the different tabs in the software. There's an overview, analysis, downloads, and sharing. But within the analysis alone, you get all kinds of information. The participants tab will give you demographic information about your users. The cards and the categories are what I created: I created the cards for them to sort, and I created the categories for them to sort them into. And then there's the results matrix. This is really where I can see the data that's going to help us make decisions.
And this was an actual test that we ran when we were trying to figure out how to categorize our legal information. As lawyers, we all assume, you know, we were taught how different areas of law break down. But once we started asking our users how they thought about it, we realized that we had completely missed the mark, and they probably weren't going to find very much if we organized things the way we thought as lawyers. So we did this over and over and over again. We did this in person. We did this multiple times online, and we would just change our category titles. So our categories are across the top, and the cards we asked them to sort are down the left side. And then you can see the brightest blue, or the darkest blue, is the greatest number of people for each card that was organized. And so we can see where people were predominantly putting the cards. Most of the time, and this was a relief to us, I think we were hitting the mark. But then you can see somewhere you have four or five people putting a card in another category. Take "my kid is in trouble at school." Twelve people put it in School and Education, but five people put it in Family and Safety. So now we've learned that people don't think about school issues the same way we would, or even the same way as each other. This not only showed us where people would predominantly put something; it taught us that maybe we were limiting our resources by putting something in only one category. So we decided that we would list things in multiple categories if that's how users thought about them. And so that content you can now find under both School and Education and Family and Safety. So card sorts are great for understanding how people interpret the information that's on your website, how they might categorize it, and how you can organize your entire website or even just a small piece of it.
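A results matrix like the one described can be tabulated with a few lines of code. This is just an illustrative sketch, not Optimal Workshop's actual output; the card and category names below are made up to mirror the example in the talk.

```python
from collections import Counter

# Illustrative card-sort responses: each participant assigns each card to a
# category. These are made-up answers mirroring the example above, not real
# test data.
responses = [
    {"My kid is in trouble at school": "School and Education"},
    {"My kid is in trouble at school": "School and Education"},
    {"My kid is in trouble at school": "Family and Safety"},
]

def results_matrix(responses):
    """For each card, count how many participants placed it in each category."""
    matrix = {}
    for response in responses:
        for card, category in response.items():
            matrix.setdefault(card, Counter())[category] += 1
    return matrix

matrix = results_matrix(responses)
for card, counts in matrix.items():
    # A card whose votes split across categories is a candidate for
    # cross-listing under more than one category, as ILAO did.
    print(card, dict(counts), "split vote" if len(counts) > 1 else "")
```

In a real test you would feed in dozens of participants and many cards; the cells with the highest counts correspond to the darkest blue in the matrix the software draws.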
So the other type of test I'm going to talk about is a click test. And this visual here is actually a paper version. So even though it's called a click test, you can do it both online and in person. And it's meant to get that first impression. Users are shown a prototype. This is a pretty basic prototype that you're seeing here. These are all real examples. This was very early on, when we were trying to figure out, again, our main navigation. And so you can see three different versions. The one furthest back was pretty basic: Legal Information, Get Legal Help, and For the Legal Community. In the next one down, we wanted to show a little bit more information, so those three main categories had a sub-navigation. And then we thought, well, maybe people just really want to get at their legal information, and so we broke down our navigation by just the legal content areas and moved all of those other elements that would be under Get Legal Help or For the Legal Community to different places on the website. And so this was a test that we did in person. We asked advocates, where would you go if you were looking for a discussion group to talk to your colleagues about a particular legal issue? And we found out, within three to five people, right away, that this very first layout you see here wasn't going to work. We had put all the information for the legal community at the bottom of the page, and nobody found it. So right away we knew we could toss this out as an option; they needed their information in the main navigation, up top somewhere. And like I said, it didn't take very long. It didn't take very many users. You can do it on paper. It was quick and easy. It didn't have to cost a lot of money or take a lot of time. We actually did this in our lunch room. We share space with another legal aid organization, and so as people came in, we just asked if they would give us a few minutes of their time and help us out.
So that's what I wanted to emphasize with both of these tests: it doesn't have to be time and labor and cost intensive. And this also gives you great insight into people's tendencies across the web. How people approach a website is going to be roughly the same for every website, so you start to learn about that when you ask them where they first look for something. And this is a great test to do at any time. It was helpful to us in the beginning, but we're still doing them now, because we weren't able to test every single feature of our website. We launched with some things that weren't tested quite as thoroughly, and we're going back and saying, OK, wait, is this in the right place? And if it's not, then we're going to figure out how to make it better for our users. This is a version of one of our online click tests. Again, this was in Optimal Workshop. We launched it on our website so that people could just take it whenever they wanted, so it was unmoderated. They get a welcome screen; we let them know how long it's going to take, give them their quick instructions, and then they're off. And we try to keep all of our tests, whether in person or online, pretty quick and easy. They know right away, because it says up top here that this is Task 1 of 4, that they're going to have four of these, so they can understand how much time it's going to take them once they get started. And in this click test, we give them a scenario. We don't know who the users are when they're online or what their legal issue is, so we created a scenario for them. And this was a scenario about someone who, I believe, had a roommate, and I think something happened and they got in a bad financial situation. And they needed help. They really needed a lawyer at this point.
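Under the hood, the results of a click test like this boil down to a set of (x, y) click coordinates that get bucketed into regions of the page and counted, which is what produces a heat map. A minimal sketch, with made-up coordinates and an assumed page size:

```python
from collections import Counter

def heat_map(clicks, page_width, page_height, cols=4, rows=4):
    """Bucket (x, y) click coordinates into a cols-by-rows grid and count them."""
    grid = Counter()
    for x, y in clicks:
        col = min(int(x / page_width * cols), cols - 1)
        row = min(int(y / page_height * rows), rows - 1)
        grid[(col, row)] += 1
    return grid

# Made-up clicks on an 800x640 prototype: three users click near the top-left
# navigation, one clicks near the bottom-right.
clicks = [(100, 40), (110, 50), (105, 45), (700, 600)]
grid = heat_map(clicks, page_width=800, page_height=640)
hottest_cell, count = grid.most_common(1)[0]
print(hottest_cell, count)  # (0, 0) 3
```

A concentrated grid means users agree on where to look for something; a sporadic one means they don't, which is exactly the signal used here to choose between navigation designs.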
And we wanted to know, for ourselves, where should we put access? We do referrals on our website. Where should we put that access point to referrals? And so this was the click test that we used. We gave them, again, the different versions of our navigation that we were looking at, after we had narrowed it down from three to two. And then what we got were these results. This is what they call a heat map. It shows you where every single user clicked, and then it gives you the highest percentage of where people were looking for information. And so you can see, in the one on the left, people were much more sporadic in where they clicked for legal information, and the one on the right was much more concentrated. And so that, again, helped us narrow down our design options in navigation, and that was our specific task in this. But we've actually used click tests for lots of different things. So again, really fast results that can have real impact, and a great way to find out how to organize a page visually, your navigation. Maybe you have too many elements, or not enough elements, or they're not in the right place. This kind of test will help you find that out pretty quickly. So I'm going to share some information that we had shared before, and then talk about some very data-focused work we're doing to make the LHI online forms go. And by that I mean to get them used, to get people to complete them, to get people to assemble them or email them. The work that I'm sharing is mostly coming from a very data-intensive project we're doing with Minnesota, and it's funded by a TIG. But the thing about this, Jillian, if we go to the next screen, is that you already have people on your tools. And piggybacking on the concept that Mike mentioned, that online tools are an investment: you already have people using these tools. So how do you leverage that use to improve what you're doing with those tools?
So when we hone in on online forms, if your goal is to go really far and get something really used, the initial conditions really matter. Actually, let's go back to the little Boston Terrier trying to cover a lot of ground. For online forms, we know that plain language, form design, clear instructions, process maps, getting people ready to do that interview and get the document that they need, all of that really matters in terms of getting that form used. And the other thing, if we look at the initial conditions, and that's the next slide, Jillian, is that initial conditions are really everything in life. You've probably heard, in real estate, buy low, right? If your goal is investing in real estate and you want to make money with real estate, you buy low. Same thing with online forms. You need to know who is using your forms. And what we know is that for LHI, the majority of our users are people coming directly from other referral websites, generally the court-approved and statewide, LSC-approved websites. They're coming in, they want to do the form, they want to get through it, and a lot of them don't want to create accounts. So these are the initial conditions, and when we're designing forms and helping people design forms, we need to be aware of those, because if you're assuming a lot more than that, you're not going to have as much use on your forms. The other thing is that at LHI, we do have a portal that's used by frequent flyers, and so we are now looking at designing different swim lanes for people with different expertise, because our system is also used by pro bono lawyers and by advocates. So if we go to the next slide.
So we have been thinking a lot, and this is not a new thought, about, so you have your precious object that you want to take really far, like the doggy and the car analogy. And once you have the perfect dog, and for me that's a Boston Terrier, we've owned those in my family for generations, so we have a perfect dog, a perfect little form, I want to take it really far. How do we do that? What is it that it needs? What's the vehicle for that? And next slide. So we have been working on this for a long time, and we have had a lot of good ideas, and the best part of working with all of you in the community is that you are very creative, because we are resource-constrained. And I really believe that necessity is the mother of all invention. We don't have huge development budgets, and we don't have huge travel budgets, and we don't have huge budgets for a lot of things, and technology is generally not the core thing that we are focusing on; we really want to do the legal work. Because of the resource constraint, we have a lot of ideas. And so, in 2011, those of you who were working with us on online forms, we did a whole survey where we came up with six criteria, based on what we found most of the staging pages, the pages where the online form resides on the statewide websites and on court pages, had in common, and we pulled examples of the ones that looked the best based on rate of assembly on LHI. So that was in 2011. Then Minnesota, around the same time, came up with a form finder idea. And the form finder basically aggregated forms onto one really nice page, and it had all of the forms, not only the LHI forms but also PDF forms and other forms that other organizations were making, like welfare departments and so on, and put them all on one page on the statewide website.
The other thing that came up, which is still very successfully used in some states, is the idea of creating mini-portals. And this happened when we got hit with foreclosures galore, evictions galore. It was, how do you put all of the resources about one issue on one page? So rather than organizing by state, we organized by issue. And the mini-guide idea was very successful in driving traffic to the forms, and that continues to be used. If we go to the next slide, in the meantime, we've continued to come up with new ideas and develop new tools. In 2013 and 2014, we worked with Georgia in creating an LHI widget. And then in 2015, we worked with Oklahoma on a tab approach for forms. And so we wanted to see, okay, if we use these two innovations, how does that lead to use of the forms, to clicking on the page and then getting the person to complete the form? In the meantime, if we go to the next slide, Jillian, we realized, okay, we need to look at a lot of different data, because you don't measure volume and mass for liquids the same way you do for apples. So if the apple is the statewide website page that refers to LHI, that's going to have its own tools and metrics, versus the liquid, which is what gives you the nourishment, which is the form, and that's going to be a completely different metric. You may be measuring the same thing, but the formula and the tools that you're going to use are going to be different. And so we've been thinking about these things and working on these things. How do we bring these two ways of measuring things together? So next slide. In the meantime, our friends and partners, not just in Minnesota, but all of you, are probably having these discussions, right? An online form is an investment. They take a lot of time and they're very valuable.
Once you get a form adopted and it starts being used, it is getting great help to the person that needs it. It is helping your program be more effective, and it levels the playing field, depending on the firm, the context, or whoever it is that you're going against. It also helps you recruit volunteers. It helps you keep your lawyers. One lawyer can now support 700 unbundled cases, for example; in some states we have those kinds of metrics. And it helps the courts. It helps the clerks. It improves the outcomes. So it's a huge investment, and you can get a lot of return on investment. But if people are not finding the form, or they're finding it and not clicking on the button to get to it, all of that investment in the form and all the potential return is just lost. You never capitalize on that opportunity. So Minnesota was struggling with the same questions, looking at it from a statewide website point of view, and we had been thinking about it a little bit too. Like, how do we measure these two things? So, let's go to the next slide. Some of the questions they were asking themselves were, okay, we can try a lot of things. We're creative. We know how to do it. We are a community. There are a lot of great ideas. But really, how do we know which one to try? So, let's skip two slides, Jillian, to the one with the lightning bolt. In talking with them on other projects and really working with them, it was kind of like positive and negative met. We had this great synergy and energy, and we put together a playground where we had this aha moment: okay, what if we look at the user behavior on the web, and we do a click analysis like Dina talked about, and we look at Google Analytics for the web and for LHI, and then, remember, our users are giving us data.
Could we ask them some basic demographic data, including income, which we've never collected on LHI as a platform, and most forms don't collect unless they're doing a calculation, and where they're doing the form, because we don't track IP, so we don't have locations, among other things? And then we look at our own internal server data, right? Account use, assembly rate, and length of interview. If we can pull all of this data together and look at it, can we improve the utilization of the forms? And so that's what the project was all about. The project started in 2015, and we're going to end it at the end of the year, I think, with really looking at all of this together. And so, if we go to the prior screen, when we wrote the grant, I got this great suggestion: if you really want to measure clicks and how the design is going to change what people click on the form, you should do A/B testing. And the idea was, set up a website that looks one way, set up the same website so it looks another way, and this tool will randomly assign people, and you can create goals and see which buttons work the best. So let's skip past that one. Our focus was getting people from the statewide website to LHI, to actual assembly and account creation and all of that. So let's go down next. And so this is the A/B testing, right? Which button? And you can get pretty granular. The first thing was to see, can we do this, and what will it take to really do this A/B testing? In Minnesota, Mary and Jenny started with the button. And so if we go next: does the design of the button matter? If we had this button here on the forms page, would it get more action if it's green or if it's blue? And they ran the test, and what they found, let's go to the next slide, is that the blue button and its design actually drove clicks up by over 740%, which was like, wow.
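Mechanically, an A/B test like that button experiment needs only two pieces: stable random assignment of each visitor to a variant, and a per-variant tally of views and clicks (the "goal"). A hosted tool such as Optimizely handles both for you; this is just an illustrative sketch, and the variant names and tallies below are made up, not Minnesota's real data.

```python
import hashlib

def assign_variant(user_id, variants=("green-button", "blue-button")):
    """Deterministically bucket a visitor so they always see the same variant."""
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return variants[digest % len(variants)]

def click_through_rate(views, clicks):
    """Fraction of page views that resulted in a button click."""
    return clicks / views if views else 0.0

# Made-up tallies for illustration only.
tallies = {"green-button": {"views": 500, "clicks": 10},
           "blue-button": {"views": 500, "clicks": 84}}
for variant, t in tallies.items():
    print(variant, f"{click_through_rate(t['views'], t['clicks']):.1%}")
```

Hashing the user ID, rather than calling a random generator on every page view, is what keeps a returning visitor in the same bucket for the whole experiment.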
Once we knew that, then we said, okay, so what else can we change about the staging page? And they started experimenting with not just the buttons but the placement: where do you put the button, and does that get you more clicks? And then the content inside each of the staging pages, you know? And so we've been doing this in great detail, I should say they have been doing it and sharing it, and we've been trying to track the metrics on the other statistics I was mentioning to see if it's yielding an increase in use of those forms. So let's move to the next slide. And so these are some of the things that we together have experimented with: they experimented with the width, and they looked at the tab approach also. And so now in the Minnesota pages, and I can show you quickly because this is probably too abstract, let me just move this over: they have now arrived at the best placement and design for the button, because it got such a huge amount of traction. And then they have a form helper that is more of an accordion style, and this was also tested. And when they serve all of their forms now, they're using this tab approach that guides the person very carefully. Some of them are longer or shorter; it depends on the form and how nuanced the form is. And then at the end, the user gets the link to LHI, and what they're doing is putting that link in a widget, meaning there's not a jump to a different website; the user does it from inside the page. That minimizes some of the confusion that less technologically literate people have. And then they work on the form. And so inside of the forms, they're asking five to seven questions for these pilot forms.
We're testing five forms in five different areas of law that ask these questions, and then we ask the person's permission to pull those answers, for that survey only, out, so that we can analyze them and, if necessary, consult with an academician they have on call who can look at that demographic data and the problem area and see if we can make any inferences about users of, for example, criminal expungement. We know now, from all of this work with the extraction of the answers, that the majority of users for that form are in the metropolitan area. We know that powers and wills are mostly in the non-metropolitan area. We were able to pull income data, and Jenny figured out how to replicate the very complicated federal poverty level calculation. And so now we know that people who are over 300% of the FPL really don't like sharing any information about that, and we know that the people over 300% of the FPL who are using these five forms in Minnesota are mostly in the metropolitan area. So we're learning a lot, by problem area, by urban and rural, and by those five metrics, about who is really using the forms, which will enable us to customize better language for those very specific users. Somebody doing a will for their mom, you know, that's a family-and-friends user, and we have a lot of those in some problem areas of law; they are going to need a different approach on the website and in the instructions than somebody who is trying to do an eviction expungement somewhere else. So that's the kind of thing that we are doing on this project. And so, Jillian, let's just go back to you; let's just scroll down. So, on Optimizely: the tool is very easy to use for those of you who have used Google Analytics. You basically set goals and then measure those goals. It has a standard, free online type of account, and there's an enterprise account if you want to do a lot more.
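The federal poverty level calculation mentioned here reduces to a lookup by household size. A minimal sketch, using the 2015 HHS poverty guideline figures for the 48 contiguous states; the exact guideline year and figures the project used are an assumption on my part, and real guidelines also differ for Alaska and Hawaii and change annually:

```python
# 2015 HHS poverty guidelines, 48 contiguous states (illustrative).
FPL_BASE = 11770        # one-person household
FPL_PER_PERSON = 4160   # each additional household member

def percent_of_fpl(annual_income, household_size):
    """Return household income as a percentage of the poverty guideline."""
    guideline = FPL_BASE + FPL_PER_PERSON * (household_size - 1)
    return 100 * annual_income / guideline

# A family of three earning $30,000/year (guideline: $20,090):
pct = percent_of_fpl(30000, 3)
print(f"{pct:.0f}% of FPL; over 300%? {pct > 300}")
```

Bucketing users at thresholds like 200% or 300% of FPL, as described above, is then a simple comparison on the returned percentage.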
A lot of the work has been done on the standard free account, and if we scroll down there's a slide with the cost, Jillian. They found it very easy to use. Okay, I'm lost as to who has presenter rights. That should be me, Claudia. Folks should be seeing the cost screen. Oh yeah, okay. So, you know, it wasn't overly onerous to do this A/B testing with your users. And from now on, I think, in Minnesota, whenever they come up with a new idea for a page or text, they have found that it's really helpful to get the metrics and then make decisions based on those metrics. So if we go to the next screen, in terms of looking at the LHI server metrics, what we have found, and I just pulled the example for one of them because I didn't want to give you five forms times the seven demographics that we're pulling from the users, is that, looking globally at Minnesota forms in LHI, they have increased significantly. Minnesota is now about the third largest in terms of rate of growth, and by that I mean the incline of the slope, the percentage change, which is the vector.
So the direction of growth has completely turned; it's going up now, which is exactly what we wanted. In terms of numbers, what is really interesting is that the rate of assembly, which is how many times somebody actually produces a document, has also increased for most forms, which is really good. The other thing is that for some of the forms there may not necessarily be a volume increase, by which I mean an increase in raw numbers, but the rate of assembly has increased. So now we're going to start really looking at the nitty-gritty to see if, because people got better instructions before they clicked through to the LHI form, a lot of them decided, hey, this form is not for me, so they didn't bother to click and we didn't serve the interview. It could be that we're not getting people to some forms because they understood the form was not for them. So what we're going to do next, and this is kind of like putting together a very delicious soup, is share the recipe in a report that we will share with all the partners that use LHI as a platform, where we will describe: this is what we did, this is what we were changing, and these are the other metrics we were seeing. We're going to go ingredient by ingredient, so you can understand the Optimizely data, the LHI analytics, the LHI answer data, and all of that. Then we'll share why Minnesota decided on the tab approach and all of the language there, and how each of the forms has been optimized for maximum use. We'll also share our experiments, because sometimes things don't work: some things that, though we were able to do them, we found were not scalable, and things that we want to do but right now don't have the technical capacity for. So I just wanted to share this to let you know that the report will come, and I'm hoping that some of you will take a look at some of these things, because I
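The two server metrics discussed here, rate of assembly and the slope of growth, are simple ratios once you have monthly counts. A sketch with made-up monthly figures (the numbers are illustrative, not real LHI data):

```python
# Assembly rate: documents actually produced per interview served.
# Growth slope: average month-over-month percentage change in volume.
months = ["Jan", "Feb", "Mar", "Apr"]
interviews = [200, 240, 300, 390]   # interviews served per month (made up)
assemblies = [90, 120, 165, 230]    # documents assembled per month (made up)

def assembly_rate(assembled, served):
    return assembled / served

def avg_pct_change(series):
    """Average month-over-month percentage change: the 'slope' of growth."""
    changes = [(b - a) / a * 100 for a, b in zip(series, series[1:])]
    return sum(changes) / len(changes)

for m, i, a in zip(months, interviews, assemblies):
    print(f"{m}: assembly rate {assembly_rate(a, i):.0%}")
print(f"avg monthly growth in interviews: {avg_pct_change(interviews):.1f}%")
```

This also shows the pattern Claudia describes: assembly rate can rise even when raw volume does not, if better staging-page instructions filter out people the form was never right for.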
believe that a lot of you have the capacity, just from seeing this work. I also encourage you to reach out, and to remember that as you use technology you're creating data. Using that data to improve what you're doing may feel really hard, but it can be done, and we want to support you, particularly when it comes to online forms; they're my favorite thing in the whole world. So if any of you want to hear about this in more detail, or want to think about, hey, what can I take from this and put on my form pages, what are good ideas, I'm happy to share that with you, and I'm sure Minnesota would be happy to help too. And just a shout-out that a lot of what Illinois Legal Aid Online did is on a kind of parallel track. So I'm going to stop so people can ask questions, but just be on the lookout, because it's really fun; we're still very energized about what we're learning and I hope all of you are too. Great, thank you, Claudia. I just wanted to see, Mike and Brian, if there are any questions. I think we can stay on for just a couple of minutes more; we've run over today. I wanted to thank you all for attending, and just let you know that, as mentioned at the beginning, more information on additional webinars can be found at lsntop.org. I don't see any additional questions, but I just wanted to chime in and say thank you to all of our presenters; I look forward to putting into practice the methods that you've shared with us today. Yeah, thank you so much for organizing this; I greatly appreciate it. It is a really enlightening process to go through, and I strongly recommend that organizations give it a try. I just put links in the chat to our upcoming training on Excel tables and to our YouTube channel. We've got a YouTube channel with over 120 videos on it, all of our presentations from the last four years, and we've started to do some other short-form, 20-minute videos on particular topics. We did a series with Idealware last year that is all up there. We're going
to be doing a survey for topics for next year here in the next month or so, which will be published on our blog and on the NTAP Discuss email list. Please look for that; if there are topics that you would like to see, we would love to try to incorporate those into our webinars for next year.