Hello, my name is Sarah. I'm currently a UX researcher at GitLab, and I've been conducting usability testing in one form or another for about five years now. Two common assumptions that I've heard in that time are, one, that usability testing is expensive, and two, that you need to be a researcher in order to understand and speak to your users. I'm here to tell you today that neither of those assumptions is true. This talk will show you how you can do your own usability testing, and how you can do it on a shoestring budget as well.

So what is usability testing? For those of you who don't know, it's where a user completes a series of tasks whilst being observed, and the observer takes note of what that person says and does. It's an effective way of understanding which parts of your interface work.

So why should you conduct it? As your open source project grows in popularity, you're going to be faced with a lot of opinions and assumptions about what is right for users. And when you make changes to your interface, you might be met with some negativity or opposition. Usability testing provides you with evidence as to why you should or shouldn't be making changes to your interface. And it also assures you that if you do need to make changes, they are the right changes for the majority of your users.

Your time is precious. Don't spend it building features that nobody wants. If you conduct usability testing early and frequently throughout the development process, you can understand whether users have a desire for your feature. Is it something they will use? Is it something they will like? And if it is, usability testing can tell you the point at which users can use your feature with ease. So you might have held back from releasing that feature, and you might be thinking, I'm going to work on it for another couple of weeks.
But actually, if you can release that feature because users like it and users can use it, that's great, because the more users who start using that feature, the more feedback you're going to get. And I think we can all agree that the earlier you spot problems in the development process, the cheaper they are to fix. And finally, users are not going to spend time figuring out how to use your interface. There are plenty of competitors out there; they'll just go elsewhere.

So what are the advantages of conducting this type of research remotely? Well, I can speak to users all around the world, and that offers me greater diversity. It's also more cost effective; it means I don't have to fly around the world to meet them. And users are also in their natural environment, using the hardware and software that they use on a day-to-day basis. Sometimes that can make users feel more at ease.

So in order to find the right kind of users to speak to, you need to create a screener. A screener is essentially a short questionnaire that you give to users to test their eligibility to take part in your study.

Your study overview should contain details about the study. When is it taking place? How long is it likely to take? Any technical requirements that you need users to abide by. And ideally, you need to offer users an incentive. Now, all the tools that I'm going to mention today in my presentation are completely free; they all have free versions, and I'm using those free versions. This is the only part of my presentation where I'm going to insist that you spend a tiny bit of money. And it doesn't have to be loads of money. I typically use Amazon gift cards, but even a token gesture such as a sticker can encourage more people to take part in your study.

You need to collect availability; I typically do this using a multiple choice question. You also need to ask users for two permissions. The first is whether I can record the screen and the conversation that I have with the user.
And the second is whether I can share that recording publicly. Now, I ask users whether I can record them because, when I conduct a usability testing session, I'm normally the only researcher, which means I'm also the main observer and the main note taker. And it's very easy to slip up and become absorbed in your notes, and you might miss something. If you record the user, you can go back and review anything that you feel you might have missed. Secondly, my colleagues can watch those videos as well, so they can witness first hand the problems that users are facing.

And I ask whether I can share the videos publicly because at GitLab we work out in the open. We have an open issue tracker, which means that anybody can go onto it at any time, see what GitLab is working on, and understand why we're making changes to our product. That includes all our contributors as well.

You need to collect contact information. Typically, that's just an email address for me; just a way to reach out to the user. And finally, filtering questions. These are a way to screen out ineligible users.

So how do you write filtering questions? Well, first of all, you need to think about the kind of people that you want to speak to. If you're testing something like installation or onboarding, you'd want to speak to new users. Whereas if you're testing an existing feature, you might want to speak to users who are currently using the feature or are potentially going to use it.

The other thing is that you don't want users to try and cheat your screener, especially if you're offering an incentive. So, for example, if I put in my study overview that I worked at GitLab, or even just put my name on there and someone googled me, they could work out that I worked for GitLab.
They might presume then that I want to speak to people who contribute to open source projects. So they might actually cheat the screener and simply put, yeah, I've contributed to an open source project in the last month. And that introduces a risk: the risk of speaking to a user who isn't representative of my wider audience. But if I'm a bit smarter with how I word that question, by offering up more answer options, it becomes less obvious to the user what the qualifying answer is. And that reduces the risk of people cheating the screener and me speaking to the wrong kind of person.

So once you've written your screener, get it into Google Forms and share it via the appropriate means. At GitLab, to speed up the recruitment process, we actually created a research panel. The research panel consists of 2,000 users who have opted in to receive research studies from GitLab. What we typically do is email the screener out to them, and we tend to get a good response very quickly. So if you're thinking about conducting regular usability testing, then I would recommend a research panel.

You only need to test with 5 users. There's lots of research and evidence as to why this is enough: it reveals around 85% of usability problems. You can find out a bit more via this link.

And finally, you just need to confirm your users' participation. Reach out to them by email, confirm the details, and remind them exactly what they're doing. There's nothing wrong with scheduling all 5 users on the same day, but take it from me, it can be really exhausting. So make sure you leave yourself a comfort break between each user. Give yourself time to refresh and prepare yourself for the next user.

So whilst you're waiting for your screener to be completed, you can think about writing a script. This is what a script consists of. You need to begin your script with an introduction, and the aim of that is to build rapport with users.
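As a quick aside on the five-user figure mentioned a moment ago: it comes from Nielsen and Landauer's problem-discovery model, and you can sanity-check it yourself. Here's a minimal sketch in Python, assuming their reported average discovery rate of about 0.31 (the chance that any one user hits a given problem):

```python
# Nielsen & Landauer problem-discovery curve:
#   found(n) = 1 - (1 - L)**n
# where L is the probability a single user encounters a given
# usability problem (their studies averaged roughly L = 0.31).

def problems_found(n_users, discovery_rate=0.31):
    """Expected share of usability problems uncovered by n users."""
    return 1 - (1 - discovery_rate) ** n_users

for n in range(1, 9):
    print(f"{n} users: {problems_found(n):.0%}")
# With L = 0.31, five users uncover roughly 84-85% of problems,
# and each additional user adds less and less.
```

The diminishing returns are the point: five users per round, iterating, tends to beat fifteen users in a single round.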
So when users start doing usability testing, they can be very nervous, so you need to make them feel at ease. Introduce yourself, tell them what they're doing today, and reassure them that you're testing a feature, not them; there's nothing they can say or do that's wrong. And as they move through the tasks, you'd like them to try and think out loud: say what they're looking at, what they're doing, and what they're thinking. Assure them that, at the end of the day, you're trying to improve the usability of a feature, so you really need to hear their honest feedback.

Then move on to your warm-up questions. Never jump straight into your product questions. You want to get users used to talking to you, and the idea is that by the time they move on to the tasks later in the study, you've already built up a conversation with them. They'll feel a lot more at ease, so they'll be more honest with their initial reactions. Also, finding out a little bit more about who they are as a person can sometimes give you more context about their reactions, and explain why they respond in a certain way.

So, tasks. These are where we get users to do something; we give them an action to perform. Let's think first about the features or user goals that we want to test. Let's imagine just for a second that we work at Uber and we're testing the feature for contacting a driver about a lost item. Now, if you've correctly screened your users, you might already know that they've taken a taxi with Uber within the last week. So you can pull that into your scenario; it's relatable, and users will get it, they'll understand it. Notice how we've avoided using words found in Uber's interface. This is because I want to understand how users would naturally look for that information: what kind of words they would think of and where they would go.

The next is research prompts.
Now, I'm going to be honest, not every researcher writes these down, but I do. The reason is that when you're talking to users, it's very easy, when you're making questions up on the spot, to accidentally lead them. So I try and think in advance what I might like to ask them if they've become quite quiet. And users typically become quiet when they've got stuck on something; their natural reaction is to go inwards and start clicking around, but you want them to speak their thoughts out loud. Another reason I do this is that if two users get stuck on the same task, I want to ensure consistency between the users; I want to be asking them the same sort of questions.

And finally, the wrap-up questions. By this point in your testing, your users are probably going to have a very strong opinion about what they've just seen, so speak to them and let them open up to you. Also give them the opportunity to ask you any questions, because the likelihood is you've not been able to answer their questions during the study. I also like to leave a couple of minutes just to thank users for their time and to confirm their email address if I'm sending them an Amazon gift card.

So the big day has arrived. These are some tips that I've picked up along the way, so I hope they can help you out.

Make sure you minimise distractions. Get rid of the 50 tabs that you have open, chuck your phone in another room, turn off your desktop notifications. You don't want any of that.

Hide anything you don't want to share on your computer. During the study, you might want to point out something to the user, something that they've not come across. Rather than having an awkward exchange of "move your cursor over there, up a bit, no, you've just scrolled past it", what you actually want to do is share your screen. So make sure you hide anything that you don't want users to see, like maybe your desktop icons or your bookmarks.
Do a test run. I typically use Zoom to run my usability testing sessions. I test my mic and my headphones, and check that the session's being recorded. I prefer to write out my notes because it's quieter than typing, and I find that users are less put off by this. It also means that if I do have to share my screen at any point during the testing, I don't have to hide my notes as well. And if you are speaking to all five users on the same day, they should be doing the majority of the talking, but it's still very thirsty work, so make sure you have a drink of water.

When you speak with users, double check permissions. These are the ones I mentioned earlier: the permissions to record them and to share that recording. I only hit the record button at the point that the user agrees.

It's very easy to become defensive when someone's criticising your work. Don't let your body language or facial expressions give that away; don't let a user know whether you approve or disapprove of their comments.

Try not to lead users. If a user asks you a question and you're not sure how to respond, ask them another question. That's probably the best piece of advice I've ever been given; that way, you don't lead users. A good question is, "Well, what do you think about that?"

Stay mostly quiet. Your user should be doing the majority of the talking. When there's an awkward silence, it's very easy to want to fill it; that's a human's natural reaction. It's also your user's natural reaction, so hold back and it'll prompt them to speak into that gap.

Watch the time; be mindful of a user's time. And keep smiling. It makes users feel at ease.

At this point, you're going to be glad that you asked users whether you could record and share the study. You're going to edit those videos. I make sure that I blur out, or dip the audio of, anything that gives away who that user is.
The reason that I do this is that I'm about to publicly share those videos, and I don't want a user to feel like they're being judged when they are being publicly discussed. I upload the videos to a location where they can be watched, which is typically Google Drive. Then I create a meta issue within GitLab. Within that meta issue, I link to all the videos and I also store all the key insights that I've witnessed during the study. These are anything good or bad that the user has done. If they have experienced a problem and I've thought of an idea of how that problem might be resolved, then I'll also stick that in there as well. I then ask my colleagues to go and do the same, and I give them a deadline to do this by.

Once that deadline has passed and I've got everyone's comments back, I consolidate them, and I do this simply by only keeping the comments that two or more of my colleagues have made. I then go through that list a second time. This time I'm looking for problems that two or more users have experienced. If only one user has experienced a problem, then it'll be removed from the list. This is because that user could be an edge case, and we want to design with the majority of our users in mind. Everyone's different.

This has three advantages. The first is that my colleagues and I have a shared understanding of the problems that users are facing, and we have an idea of how we're going to tackle them. We also have an issue that we can refer to at any point in time to understand why we made the changes that we did. And thirdly, and this is probably the best thing about being a researcher, you can see how far you've come; you can see the progress that you've made. So when you first started testing a feature it might have been plagued with usability issues, but you've tested and reiterated and gone through that cycle time and time again.
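Going back to that consolidation step for a moment: the two filters described (keep only observations noted by two or more colleagues, then only problems hit by two or more users) are mechanical enough to sketch in code. Here's a minimal illustration in Python; the data, names, and thresholds are all hypothetical:

```python
from collections import defaultdict

def consolidate(observations, min_colleagues=2, min_users=2):
    """observations: iterable of (colleague, user, problem) sightings.
    Keep a problem only if two or more colleagues noted it AND two or
    more users hit it (a single-user problem may be an edge case)."""
    colleagues_seen = defaultdict(set)
    users_hit = defaultdict(set)
    for colleague, user, problem in observations:
        colleagues_seen[problem].add(colleague)
        users_hit[problem].add(user)
    return sorted(
        p for p in colleagues_seen
        if len(colleagues_seen[p]) >= min_colleagues
        and len(users_hit[p]) >= min_users
    )

notes = [
    ("amy", "user1", "missed the save button"),
    ("ben", "user1", "missed the save button"),
    ("amy", "user3", "missed the save button"),
    ("amy", "user2", "confused by tab labels"),  # only one colleague noted this
    ("ben", "user1", "slow page load"),
    ("amy", "user1", "slow page load"),          # only one user hit this
]
print(consolidate(notes))  # -> ['missed the save button']
```

Only the problem seen by two colleagues across two users survives both passes, mirroring the "design for the majority" rule.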
And in two or three months' time you might have a feature that's working perfectly with users, and it's just a great feeling knowing that you've improved the user experience. So thank you very much for listening to me today. These are the tools that I've used. I'll be happy to take any questions.

So the question was which countries I test users in. All around the world; if they can speak English, then I'll test with them.

This is a question about whether I see any differences between the different cultures that I test with. Not in terms of stumbling on tasks. What I sometimes find is that the language used within the interface is the main one. Anyone here who's a GitHub user will know that there are subtle differences between the language used in GitLab compared to GitHub. So sometimes I witness that. For example, we call them merge requests in GitLab, whereas in GitHub they call them pull requests. Those are the kinds of situations I've faced: users assuming the terminology is the same, that sort of thing.

Yeah, so the question was about how I motivate my colleagues to be involved with UX research. I'm going to be very honest with you: in the last couple of jobs that I've had, I've been the only UX researcher, and I've typically been the first UX researcher at that company. So it's sometimes been a very hard slog to encourage people to get involved. And I think the best thing, as I've said with the analysis, is to make sure that your team gets involved. One of my team members is actually sat here in the audience, and he will tell you that he watches those videos and witnesses first hand, and it gives him a better insight into designing and coming up with solutions for those problems. So definitely try and bring them along. The reason why I let my colleagues watch videos in their own time, and give them a week to do that, is that GitLab is fully remote.
Because of time differences and things like that, my colleagues can't always be a part of the live study; it might be in the middle of the night for them. So by being given the opportunity to watch the videos afterwards, they still have a chance to be involved.

So the question is how long a usability testing session should last before the user starts getting bored. I think you need to experiment with this. Most people say between 30 and 60 minutes; I actually find that it's about 45 minutes. I typically test with very technical people, so developers and system administrators, and I'll be honest, they do get bored, like you say. You can tell that they're getting weary towards the end of the session, so my usability testing sessions are typically around 40 to 45 minutes.

So, how many features do I test in a 45 minute session? Typically the lowest number of features I would test would be about 3 or 4, and the highest about 5 or 6.

So the question: with different categories of users, say power users and regular users, how do you decide whose feedback gets priority at the end, given that you maybe have a single user interface, and how do you avoid removing a whole feature or making it too complicated for regular users? Okay, so this is about how you implement changes for different types of user groups. I think it goes back to needing to know who's using the feature, who your typical audience is. First and foremost, at GitLab that does tend to be developers. However, our motto is "everyone can contribute", so we want to make it easy for non-technical people as well. So we'll typically test with technical people first and implement that into the tool, and then we might try with some non-technical people, and we'll typically find that the thing they need the most is more guidance.
So a project that we're working on at the moment is onboarding. We don't have many visual prompts, and the great thing about doing that user journey is that more technical users can turn off those prompts, so they're not cluttering the interface for them, but at the same time we're still helping non-technical users on their journey. It's something that we're actually working on right now, so watch this space.

So: do you test on mobile, and do you have tools for that? We don't typically test on mobile at the moment; we're concentrating mainly on desktop. We do check our changes to make sure that they're okay on mobile, but first and foremost we're designing for desktop.

So this is whether we test existing features or whether we prototype new features. We do prototype new features; we typically use InVision for that. It's normally based on the requirements of what a user needs. So, for example, if we find out during testing that a user needs a feature that we haven't got, then we will move to a prototype first and make sure it works before we even attempt to build it within the wider interface.

So the question: with remote testing over video conferencing, there are frequently issues with the microphone and the video. You can test your own setup, but the other side might have a problem. How do you deal with that? So this is about how I deal with technical problems on the user's side.
Well, Zoom has a great feature, a meeting room feature, where users can join prior to the call with me and check that everything is working okay, so that sometimes helps. And sometimes you do have to just leave yourself a couple of minutes at the beginning of the call to make sure users are okay. Like I say, stress technical requirements: if they need a particular browser or they need to be on a particular OS, make sure you're stressing that right from the screener through to scheduling the interview with them.