I am Renata Gegaj. I am part of FLOSSK (Free Libre Open Source Software Kosova), which is the only open source community in Kosovo. I'm also involved in Open Source Design, and I work for Ura Design as a usability researcher. I started getting interested in UX and usability during my Outreachy internship, where I did usability testing for GNOME.

So what do we think about when we mention usability? There are a lot of formal definitions of usability, all related to efficiency, user satisfaction, memorability, and so on, but I have a simpler way to think about it. People are always busy, and they just want to get things done. So a program with good usability is basically a program where real people can do real tasks in a realistic amount of time.

Because developers and designers spend so much time creating an application, they often neglect the endpoint, which is the user, so the program ends up not being very user-friendly. That's why usability testing is really important: it makes the connection between designers and developers and the users.

There is a misconception about how to conduct usability tests. People think that you need fancy labs, a lot of resources, or a lot of people to run usability testing sessions, but that's not really the case. I will show you a way that you can conduct usability testing yourself, in only three steps, with minimal resources.

The first step, the first thing you should think about, is who your users are, because that's how you choose your testers. Who is using your software? If you have a more general application to test, like a calendar, you want to include a diverse range of people: people with different professions, some more technical and some not very technical. But if you are testing a more specific application, like GIMP, then you would want to target designers.
You would pick the testers from a designers' group.

How many testers you choose depends on the complexity of the testing. If you are just testing one feature, you need fewer testers and fewer tasks; if you are testing the whole application, you need more testers. Another thing that matters is whether you are testing in iterations. If you test in iterations, you only need five testers per round, because that's enough to uncover most usability issues. Usually you do just five testing sessions; after that the problems start repeating, so you won't get much new information.

The second step is creating the scenario tasks. A scenario task is basically an imaginary use case that you give users to accomplish in a certain amount of time. For example, you say: "You forgot your glasses at home and you cannot see clearly, so you want to make the text bigger." You should give context to your tasks; you shouldn't just say "please make the text bigger" with nothing around it. You should also avoid giving hints to the user, because you don't want to show them the answer. Don't say "please increase the font size"; instead, say "please make the text bigger," because if you mention the font size, they will start looking for that keyword on the screen.

The third step is the actual process of testing. You need very few tools: a computer, a laptop, a phone, or whatever device you want to test on; paper and a pen for yourself, so you can take notes; and a camera, a screen recorder, or a voice recorder, depending on what the tester is more comfortable with. Before you start the testing process, you need to prepare the environment for the testing.
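As an aside, the five-tester figure mentioned above comes from a well-known problem-discovery model. A minimal sketch, assuming the commonly cited average probability of 0.31 that any single tester runs into a given problem (your project's real value may differ):

```python
# Problem-discovery model behind the "five testers" rule of thumb:
# the expected share of usability problems found by n testers is
# 1 - (1 - L)^n, where L is the probability that one tester hits a
# given problem. L = 0.31 is a commonly cited average, not a constant
# of nature; treat it as an assumption.

def problems_found(n_testers: int, l: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n testers."""
    return 1.0 - (1.0 - l) ** n_testers

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:2d} testers -> {problems_found(n):.0%} of problems found")
```

With these numbers, five testers already uncover roughly 84% of the problems, and each additional tester past that mostly re-finds known issues, which matches the "problems start repeating" observation above.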
For example, do not let the user install the application or the extensions needed; do that beforehand, so they don't waste their time and can just focus on the tasks you give them.

After the preparations, you have the actual testing. You should sit next to the user and just observe them; you should not interact with them. Before the testing starts, make sure to explain what usability testing is, and explain that it's not about testing them but about testing the software, so that they do not feel uncomfortable if they cannot solve a task. During the testing you do not interfere and you do not ask questions; you wait until after the testing, and then you can ask specific questions. For example: What were you thinking during this task? Why did you decide to choose this submenu? Where would it be more intuitive for you to place this menu? Don't forget to ask the tester to think out loud, because that's very helpful. While they are trying to accomplish a task, they should just tell you things like, "Okay, I'm clicking this button because I think it will download the program."

After we have done all the preparations and the testing, gathered the information, studied it, and have the results, we know what the pain points are. Now we want to show other people what those points are and how to improve them. You can do that in many forms. For example, you can write a formal paper, or a less formal report where you include each task, explain how it performed, and maybe include screenshots; or you can give a live presentation in front of others and explain what you have found out; or you can use bug trackers: file an issue, explain what happened, and get your feedback there.
One thing you should always include in your results is visualizations, because they help readers see where the problem was; text alone can be very complicated, and the reader may not understand what the problem was. If you make screenshots or charts, explain what happened, and highlight the problem in red or something similar, they will know where the problem was.

A heat map is another great way to visualize your tests. Each row is a task and each column is a tester, so each box corresponds to one task and one person. The green boxes represent tasks that went very well: the participant didn't have any trouble with them. The yellow boxes represent tasks that people hesitated over while accomplishing them. The red boxes are tasks the user spent a lot of time accomplishing, and the black boxes are tasks the user just gave up on and couldn't accomplish.

You can use heat maps in different ways. For example, here we use them to compare two designs. This is taken from a real usability test that we did on the Thunderbird preferences UI. You can see on the first heat map how the old design performed: there are a lot of red and yellow boxes, which means it didn't go very well. After we did the redesign, which you will see on the next slide, we tested again, and you can see how it improved: there are a lot more green boxes dominating the second heat map.

This is the design I was talking about. You can see there are a lot of menus in the sidebar, which were very confusing for the users, because they didn't know where to expect to find whatever they were looking for, so they just ended up searching through all of them.
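A heat map like the one described above doesn't need any special tooling. Here is a minimal text-only sketch; the task names, tester labels, and letter codes are invented for illustration, and a real report would use green/yellow/red/black cells instead of letters:

```python
# Text rendering of a usability heat map: rows are tasks, columns are
# testers, and each cell is one of four outcomes. The letter codes
# stand in for the colours described above.

OUTCOMES = {
    "ok": "G",         # green: completed without trouble
    "hesitated": "Y",  # yellow: completed, but with hesitation
    "struggled": "R",  # red: completed, but took a long time
    "gave_up": "B",    # black: abandoned the task
}

def render_heatmap(tasks, testers, results):
    """results maps (task, tester) to an outcome key; returns grid text."""
    width = max(len(task) for task in tasks)
    lines = [" " * width + "  " + " ".join(testers)]
    for task in tasks:
        cells = " ".join(OUTCOMES[results[(task, tester)]] for tester in testers)
        lines.append(f"{task:<{width}}  {cells}")
    return "\n".join(lines)

if __name__ == "__main__":
    tasks = ["make text bigger", "change theme"]
    testers = ["1", "2", "3"]
    results = {
        ("make text bigger", "1"): "ok",
        ("make text bigger", "2"): "hesitated",
        ("make text bigger", "3"): "ok",
        ("change theme", "1"): "struggled",
        ("change theme", "2"): "gave_up",
        ("change theme", "3"): "ok",
    }
    print(render_heatmap(tasks, testers, results))
```

Printing a grid like this before and after a redesign makes the improvement visible at a glance, the same way the two Thunderbird heat maps do.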
Also, they did not understand that they could click on the horizontal tabs, so they just ignored them, and as a result they couldn't accomplish those tasks. In the redesign, we restructured the sidebar: there are just five menus now, and everything is clearer. We also got rid of the tabs, so now the user just needs to scroll down to get all the information they need. That's why it performed better.

This is another example of before and after usability testing. This is Briar. Briar is implementing a new feature which allows users to add contacts remotely. When we first did the prototype, it didn't go so well. You can see the prototype here; it didn't go well because there was just too much information on the screen, and users complained that they got confused when they saw it. So we decided to split it into two parts, and we added a progress bar to show users where they are in the process. First they just need to exchange the links, and then on the second screen they choose a nickname for their contact. This performed better, but there is still a lot of room for improvement, so we will do another testing round and see what happens.

So we've learned, hopefully, what usability is, why it's important, how to conduct a usability test, and how to present the results. Maybe now you can take these tips and steps, apply them to your own project, and do usability testing yourself. On the useful links slide, if you want to read more, you can find the usability testing I did for GNOME, and the documents about the usability testing for I2P, Thunderbird, and Briar are all linked there. So thank you. If you have any questions, please ask.

Every time I do usability testing, I always have a hard time making the changes happen.
You said you had a couple of different ways to present the results. Which one did you find more effective?

You would think that a formal paper would be the most effective, but that was not the case. The most impactful was for Briar, where we just filed issues on GitLab. It was impactful because it was done in iterations: we would create a prototype, test it right away, file an issue explaining what went wrong, and they would immediately make the changes. That's why it was effective. I think it was because of the iterations, not because of the way I presented the results to them.

As I mentioned before, it's five. Five is a good number to uncover most usability issues. If you do fewer than that, you risk missing some critical usability issues; if you do more, that's okay, but for effective testing, five is enough. Another question?

Maybe it's only me, but I feel like the general usability of applications is actually decreasing, with all those new material designs and all those large preference panes. It looks like software is less usable than it was a decade ago. What do you think is the reason for this? Why is software becoming worse than it used to be?

I don't know if I see this from the same point of view. I mean, I get it. I think that people get used to a certain design: Material Design, Bootstrap, and so on all look almost the same, so people get used to them and memorize them, and that's why they get easier to use. If you use something different, that would be harder, because people would need to get used to it. So I think it's the opposite, but I don't know; that's just my opinion. Another question?
One more question? Yes? Okay. One last question.

In person. They were all traditional usability tests; we didn't do paper prototyping or anything like that. They were all in-person tests where I sit next to the user, observe them, and just take my notes. So that's it. Okay, thank you so much for coming.