Hello, thank you all for coming out. This is Brian Rowe with LSNTAP. We're here today to talk about usability. We've got Jackie Holmes here, who is an intern for us from the University of Washington's Information School, working on her master's in Information Management. Before we jump into it, I've got a few quick announcements with regards to LSNTAP. On our training calendar, we've confirmed the date for our next large webinar by Pro Bono Net. It's going to be on online intake, and it's going to be September 19th. We should have a link up for registration within the next day or two. We're also going to run a small focus group on the 26th, where we're going to sit down with members of the community and ask what areas you would like to see improved on the LSNTAP website. So if you've got any ideas, things that you would like to see done differently, please let us know. Can you hear us, Sue? Give me one second here; somebody just got a hold of me about an audio issue. We've got 10-12 of us on. Okay, I've got somebody else working on troubleshooting for individuals who can't seem to get on. The other quick announcement I've got is that we have a new VISTA volunteer helping us out at LSNTAP. This is Liz Learman. She is attending the webinar and is going to be active in providing content for the LSNTAP blog. You'll also probably see her CC'd on some of the emails back and forth with me. She's going to slowly but surely start to take over some of the projects over the next year while she's helping us out. I also wanted to let people know that the LSNTAP YouTube channel is up and running at this point. We'll be featuring it on the website soon, but all of the webinars from 2012 and 2011, and even some of the older ones from 2009, have been put up there. We're looking at about 31 videos overall, including some short tech tips.
Also, I wanted to point out that a lot of the things Jackie's going to be talking about today have been covered in a series of blog posts on the LSNTAP website, and her slides for today's presentation are up over at our SlideShare account. That's about it for the introduction. I'm going to turn things over to Jackie, and I will also be monitoring the chat and the question window in case people want to send things in. I can also unmute people's mics if they'd rather ask questions over audio instead of chat. Thanks. Thank you, Brian. Thank you, everybody, for attending. This is my webinar on usability testing today. My name is Jackie Holmes, and after today I hope that you come away seeing the value of usability testing and its purpose. Today will really be an introduction to what usability testing is, and I'll cover some of the basics of this evaluation technique. We'll be talking about what it is and why it's important, how to prepare for and conduct your own tests, as well as how to analyze your test results and use those results to influence changes in your product, probably a website. Since I've conducted several usability tests here at the Northwest Justice Project, throughout this presentation I'll be sharing examples of what I've done to show you how we've used usability testing and what we really learned from it. Before we get started, I just want to give you a little bit of information about me. Brian covered some of that. This is a picture of me on an unusually sunny day here in Seattle. I've been working here at the Northwest Justice Project this summer as an information architecture and user experience intern, and I like to describe my role as a bridge between users and developers.
I'm working on a project with NJP to create a new information management system using Microsoft SharePoint. It's meant to share resources and information in one central location, and the goal is to increase the effectiveness and efficiency of information sharing and client services. A lot of my time is spent interviewing and shadowing users to discover their goals and tasks. With this knowledge I'm able to create task flow diagrams and prototypes to influence how the site can be used by the users, and then with these materials I'm able to talk to the developers about how to design a SharePoint site that will satisfy the users' needs. So, getting back to my internship title: I architect how information will be used on the SharePoint system, and I involve the users throughout the development process to understand how to provide them an enjoyable and useful user experience, which is often abbreviated UX. I'm currently a graduate student at the University of Washington, pursuing my master's in Information Management from the UW's Information School. I also have a BS in psychology from the UW, which helps in user experience and research design because it ties closely with the behavioral sciences and cognitive psychology. Okay, so what is usability? This is a picture of Dr. Jakob Nielsen, who is widely regarded as the father of modern website usability, and he defines usability like this: usability is a quality attribute that assesses how easy user interfaces are to use. The word usability also refers to methods for improving ease of use during the design process. So a big part of usability is trying to design systems that are easy for users to interact with. He also says there are five components that help measure usability. Learnability: how easy is it for a user to accomplish basic tasks the first time they encounter the design? Efficiency: once users have learned the design, how quickly can they perform tasks?
Memorability: when users return to the design after a period of not using it, how easily can they reestablish proficiency? Errors: how many errors do they make, how severe are they, and how easily can they recover from them? And satisfaction: how pleasant is it to use the design? So what is usability testing? Well, it's a research technique for evaluating the usability of a product, commonly a website, and it involves using real users, so actual people that would be using the product, to do real tasks, so things they would actually use the product for. And why is it important? Well, to me there are three big reasons. One, it allows us to get user feedback and determine satisfaction. Two, it helps us catch problems before launch and fix them. And three, it promotes user-friendliness, because we hear from the users and want to design around them. So how do you get started? If you're familiar with my blog posts, then you've probably seen this process cycle graphic that I made showing the different steps of usability testing. First you prepare for your tests. Then you administer the test, and you'll get a bunch of results from that, and you'll analyze them. And oftentimes those results are going to influence changes in your design. I've made this a cycle because I want to emphasize that this is really an iterative process. Usability testing hardly ever ends with only one round, because you're likely to make changes based on the results, and then you want to go through the cycle again to see if your usability has improved. So for the rest of this webinar I'll be talking about the different steps of this cycle. You start with preparation, and there are some major things you need before you begin your testing. One is definitely a prototype of your product. It should function well enough that the user can complete the tasks you give them during the test.
This here is actually a screenshot of our SharePoint site, which is called IKE, standing for Information, Knowledge, etc. You also need to identify your users and your testers, because you need them for your tests. Based on his own research, Dr. Nielsen actually suggests five users is sufficient for finding nearly all usability problems. You'll set up a time to meet with your users, and it really shouldn't take much longer than an hour to perform your test. Another very important component of your usability test is your tasks. What exactly are you testing? These should be realistic tasks that the user would need to use the product for. And the last component you'll really need is recording devices. A pen and notepad are just fine for taking notes and observations during the test. You might also want to consider an audio recorder and/or a video camera to capture information you can go back and listen to in case you missed something. I personally wouldn't recommend a laptop for recording your notes, even though it's easier than pen and paper; the sound of the typing might be distracting for the user. Also, you want to know what you want to record and remember from the test. You'll consider metrics like the number of minutes it takes to complete tasks, and this helps you organize your notes during the test. So, to give you an example of what we've done at the Northwest Justice Project: as I showed before, we have our Ike prototype, and this is a picture of it again. When we used it in the test, we felt it functioned well enough to help users achieve their goal of sharing information resources quickly, efficiently, and effectively. So the tasks that we gave them could be completed with the prototype we had made. We were able to get eight people at NJP to be testers, and it really helped to have a liaison within the organization to connect me with people that could possibly be users.
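As an aside, Dr. Nielsen's five-user guideline comes from a simple discovery model: each additional tester finds roughly the same fixed fraction of the remaining problems. Here is a minimal Python sketch of that model; the 0.31 discovery rate is Nielsen's published estimate, and the function name is my own.

```python
# Sketch of the model behind Nielsen's "five users is enough" guideline.
# Each tester uncovers a fraction L of the usability problems, so n
# independent testers find 1 - (1 - L)^n of them. L ~= 0.31 is the
# average discovery rate Nielsen reported across his studies.

def problems_found(n_users, discovery_rate=0.31):
    """Expected fraction of usability problems found by n_users testers."""
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 8):
    print(f"{n} users -> {problems_found(n):.0%} of problems found")
```

With these numbers, five testers are expected to surface about 84% of the problems, which is why additional testers past five tend to give diminishing returns.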
So if you can have somebody within the organization who can connect you with possible testers, that's really helpful. And these testers were really representative of the people who would actually use the system and use it the most. From interviews that we did before the test, it sounded like legal assistants would be some of the main users of the Ike website, since they upload or retrieve documents on a daily basis, so we tested with a lot of them. Our tasks focused mainly on the usability of one particular area of Ike, the library. We used that because it's a central place for sharing documents and other resources, and one of the major goals of Ike was to have a place where you could quickly and easily retrieve and upload documents. So we were primarily looking at three pieces of the library: the ease of uploading documents; the ease and learnability of tagging, tagging being adding descriptive keywords to documents; and the ease of and satisfaction with the various browsing and search functions within Ike and the library. The tasks that we gave our testers were these two. One: upload a document you would like to share with your colleagues to the Ike library. This was getting at uploading documents and also the tagging feature, since during the upload users are supposed to put in their tags; they can go back and do it later, but that's the first prompt they'll get for tagging. And our second task was: think about a case you've recently worked on that required you to look for a document in the network drives, and show us how you would look for such a document on Ike. This particular task was getting at which search and browse functions our users went to and how useful those were to them. So, coming back to those usability factors I mentioned before, some of them were more relevant to this first round of usability testing than others.
We were primarily looking at learnability, errors, and satisfaction for this first round. The other two are more relevant once a user has already worked or interacted with the product, so those will come up in our later usability tests; we'll measure them then, since people will be more familiar with the site. Okay, so the next step is administering the test. It's a good idea to run the tests in the environment the user will actually be using the product in, so you make the experience as realistic as possible. So I went to the offices of our different users and sat down with them at their actual computers. However, it's fine to conduct a test in a lab, maybe an empty conference room, or even remotely. I did have a couple of users at different offices within the state, and since we couldn't see each other in person, we used GoToMeeting to conduct our tests. You want software that allows you to talk to the user as well as see what they're doing with your website or product and how they're completing your tasks. You'll want to introduce the test to the user to get them familiar and comfortable with it. You want to emphasize that you're testing the product, not the user. Also, encourage the tester to think aloud while they do the test, because this helps you take your notes and find out what they're actually doing or thinking as they go through the task and try to complete it. You want to read the task to the user, and I found that it's a good idea to ask them to tell you when they think they're actually done with the task, because a few times in the first few tests I was doing, I thought the tester was finished and interrupted them in the process, and they actually weren't. So this lets them take control, and then you know, and you won't be interrupting them.
Your major role is to observe and take notes throughout the test. It's nice to tell the tester, you can ask me questions during this test, but I won't be telling you where you need to go or whether you're doing anything wrong or unexpected. Furthermore, you'll want to stand or sit next to the tester, just to keep the environment comfortable. And if you can have two moderators, one who sits more to the side and takes more notes, and another who takes fewer notes and answers questions the user might have, it allows you to get more data and more notes from your tests. And lastly, after each test you want to ask your tester about their impressions. Ask them questions such as: What did you like or dislike about the test? Did anything make it easy or difficult? Is there anything you would change? This can help you gauge satisfaction and get more out of the user. So now you've done all these tests and you have a bunch of data that you need to sort through. Some of it might be quantitative, so numbers, averages, number of clicks, and a lot of it is probably going to be qualitative, such as user comments or questions. It can be really daunting to know what to do with all of this data, especially the qualitative. So from here on I will explain one way that you can organize and analyze your qualitative data, and this is the way we actually did it here. One way you can analyze your qualitative data is with an affinity diagram. This exercise helps you figure out what your data is saying and helps you discover themes within it. You start by writing all of your qualitative data on individual sticky notes. You'll probably need a lot of them; this is just a portion of what we had from our tests. Then you stick the notes to a blank wall and start arranging them into categories, and you'll start seeing themes jump out of all the data.
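If your sticky notes end up in a spreadsheet rather than on a wall, tallying the affinity clusters takes only a few lines. This is a toy sketch with made-up comments and theme names, not NJP's actual data:

```python
# Toy sketch of tallying affinity-diagram clusters once each comment
# has been assigned a theme. The notes below are invented examples.
from collections import Counter

# Each sticky note becomes a (comment, theme) pair after clustering.
notes = [
    ("What does 'metadata' mean?",            "naming"),
    ("'Library Tools' label was confusing",   "naming"),
    ("Couldn't find the upload button",       "visibility"),
    ("Search box hidden behind the open tab", "visibility"),
    ("Wanted a walkthrough for tagging",      "training"),
]

theme_counts = Counter(theme for _, theme in notes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} note(s)")
```

Sorting the tally with `most_common` gives you the same at-a-glance view of which themes dominate that the physical wall of notes provides.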
It allows you to easily see all your data together and move it around into clusters based on common themes. So for us, this is what our final affinity diagram looked like. And to zoom in, this was one particular cluster: a lot of our comments were about how people wanted more instructions or more training on a particular part of Ike. After our affinity diagram was complete, we realized a lot of the comments had to do with problems people had and suggestions they had for improvements. This really wasn't that surprising, because we hadn't run any tests before and this was the first time a lot of people were getting to see and use Ike. A lot of the issues affected how easy it was for users to learn some of the basic tasks Ike would be used for. So, going back to the factors of usability, fixing these issues could improve learnability, reduce errors, and, with an easier system, improve satisfaction: overall, improve usability. Here you can see what our different groups were and what the main issues were. Naming issues were the most frequent type of issue commented on, such as people not knowing what a certain term meant, such as metadata, or feeling that certain labels could be named differently. Second were visibility problems, such as not being able to find a button on Ike. Now, if you do have a lot of issues arise out of your results, it's a good idea to rate the severity of those issues. For us, we rated severity by how frequently the issue was encountered and also how much it could affect priority tasks in Ike. Those priority tasks for us were the tasks we had people do during the tests: uploading a document, tagging, and finding a document. This is a display of all of the different visibility issues that actually came out of our results.
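The frequency-plus-priority-task severity rating described here can be turned into a simple score so issues sort themselves. The weighting (one point per affected tester, two per blocked priority task) and the issue list are my own illustrative assumptions, not NJP's actual ratings:

```python
# Toy severity score: issues count once per tester who hit them, plus
# a penalty for each priority task they block. Weights are illustrative.

PRIORITY_TASKS = {"upload", "tag", "find"}

def severity(frequency, affected_tasks):
    """frequency: testers who hit the issue; +2 per priority task blocked."""
    return frequency + 2 * len(PRIORITY_TASKS & set(affected_tasks))

issues = {
    "search box hidden by Library Tools tab": severity(7, {"find"}),
    "Library Tools tab hard to find":         severity(3, {"upload"}),
    "unclear 'metadata' label":               severity(4, set()),
}

# Highest-severity issues first.
for name, score in sorted(issues.items(), key=lambda kv: -kv[1]):
    print(f"[{score:>2}] {name}")
```

Even a crude score like this makes the triage order explicit and easy to defend when you present results to the team.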
The first one, which you can see on the left, was that people couldn't see the search box, because when you opened a certain tab called Library Tools, it actually covered the box. We rated that as pretty severe, because it eliminated a major search function on Ike and negatively affected one of the major goals, which was to efficiently and effectively find information. Also, our usability results showed us that 7 out of 8 of the users tested went straight to the search box when they were given the task of finding a document. One of our other issues was that people could not actually find the Library Tools tab. This was rated as moderate to severe, because the upload button was in this tab, but there were additional ways to upload a document that were always present in the library, and many of the other features listed under this tab will probably not be used by users. This is another view of some of the issues we had; it shows you where problems were coming up, and you can kind of infer why they were issues for people. And this is actually a screenshot of the Ike library. So the results of a usability test can really have a big impact on the design of your product and can alert you to making changes; you'll often discover where and what changes should be made. Coming back to what I was saying about being able to find the search box when the Library Tools tab was open: we decided to change this by putting the search box underneath the Library Tools tab, so it's always present. As you can see in the before picture, the tab that's open at the top blocks the search box. In the after picture, outlined in green below, the search box is always present, even when that tab is open. This helped us improve both search visibility and navigation. So in closing, I just want to bring us back to the process cycle again to reiterate that usability testing usually happens more than once.
So after we go through all these results from our usability tests and make changes to try to improve usability, we'll go back and do another round of tests to see if, and if so how, usability has improved on Ike. So thank you very much. I've provided here a few resources and references that give further instruction on why usability testing is important and how it can help you. Thank you so much, Jackie. Let's go back one slide to that last diagram. I've got a few additional things to add there from my experience in usability testing. I also wanted to remind people that there's both a question box here, and I can unmute people if they have comments or questions related to their own experience doing usability testing. One thing to remember is that this testing doesn't just happen at the launch of a product. Your users' needs may drastically change over the life of a particular system. We're definitely learning that with LSNTAP. Five, six years ago there was very little video content, and now video content has become much easier to produce, so we need to find better ways to highlight that. Our tech library and our front page currently don't allow embedded videos to show up easily in the search results or in the highlighted blog posts, so we'll be going through and redesigning that so that the style sheets allow those videos to show up immediately. That's something that has come about from talking to users over time and trying to find out what their current needs are as the system has changed. Another big issue that has come up for us is that the first time we are doing usability testing with regards to Ike is also the first time that many of these users have ever seen the system. This brings up the challenge that we're both doing usability testing and doing some degree of marketing: showing off the new system and trying to get buy-in from the individuals who are going to be using it.
There's a heavy emphasis on the point that Jackie made that we are not testing the users, we are testing the system. If something is difficult to find or can't be done, it means either that we need to look at more training on the system or that we need to redesign how the system works. Our hope is that by bringing in users as early as possible during the design process, they'll be able to feel a bit more ownership of it, and as we change features to reflect their needs, they'll feel more like it is something for them instead of something just given to them that they are told to use. We've got a comment here. Anything else that you would like to add in regards to those or other points, Jackie? Sorry, does the comment appear on the screen? Oh no. Did you have any additional comments? I've got a question here that I'm reading through; I'll repeat it in a second. No, I think it's great to emphasize that it's about the users feeling like they have ownership over the system rather than it just being handed to them. I think that's a great point, because it really is us trying to design a system for them and making the research and design of it very user-centric. Sharon brought up an interesting point here, which is how difficult it can be if you end up getting user feedback about something that you can't necessarily change directly in the system, such as if we weren't able to move the search box to make it more visible. That definitely can be a very difficult problem to deal with. Some of the systems that we use have limits to them, or outside coders who have more control over them. One thing that we've tried to do on projects I've worked on is keep a long-term wish list of items that we are aware of, and as new updates for the system come out, we revisit whether or not those can be changed easily. The other thing that we've done is try to reach out to and get help from the greater community if it's something that we can't handle ourselves.
We've brought in a SharePoint consultant, but we also have a partnership with Microsoft, and we've asked for some suggestions on how they've implemented their system with regards to their legal team. They've been able to help us find some solutions that we wouldn't necessarily be able to find ourselves. There is a balancing act in setting users' expectations, because not everything is mutable. Has anybody else here in the audience done usability testing with regards to a website? Have they run into any surprising features or feedback? I mean, even finding out that most of the users we're going to be dealing with on the information management system are going to try to search to find things was pretty important to us, because our previous system was a series of folders that you had to dig through, and the search didn't work. We weren't sure whether it was more important to spend time replicating that file structure or whether users would gravitate towards search, and seeing that most users are actually using search means it's more important for us to get the metadata into the individual documents so they can be found via search. I know I've also found in website design that if you give people a task of finding three or four different documents on your site, you can quickly figure out which tabs are well named or poorly named. Tabs that say things like Resources or Links are often interpreted very widely or differently, because many, many things can be a resource or a link, and different users will have different ideas of what that means, whereas a narrower naming convention can make it easier for users to find things. Looks like Sharon's program has just rolled out SharePoint, and they're in the phase of getting user feedback now. Any other comments, questions, or experiences with usability testing?
My last tip here is that the earlier in the process you can start the usability testing, the easier it is to make design changes, because you may end up choosing to abandon or change the focus of entire features if they don't make sense to users. I've also heard a lot about, if you're on the usability team, not getting too attached to your first design, or any design really, because your users may have a different opinion and you may have to change a lot. One of the things that we did very early on in this information management system project, with one of our earlier interns, was create personas of what the average users would look like and what their tasks would be inside the system, and having those individual personas in mind when designing a feature or putting a process in place has definitely also helped. I don't think they're as useful as actual user testing, but they're a way to try to keep users in the forefront of your mind while you're designing features and putting together a process. One of the projects I worked on a few years ago actually had little cardboard cutouts, about 6 to 12 inches tall, with a picture of each persona and a little bit of demographic information about that particular type of user. Something that humanizes the user as part of the process makes it more likely you're willing to consider their needs. Thank you all for showing up. All of the slides for this presentation are up online, and we will also cut this into a small video to put up on the YouTube channel. Please look for the upcoming online intake webinar by Pro Bono Net and also the focus group for improving the LSNTAP website. If there's anything you'd like to see there, we'd love to hear your feedback. Thank you so much. And thank you, Jackie, for putting this together and presenting on your research and work this summer. Thank you very much.
I'm glad I was able to present and thank you everyone for attending.