Hello, thank you, everyone, for joining this session. My topic is how to measure user satisfaction in usability testing. Before talking about usability testing, I'd like to ask a commonplace question: what is user experience? This is the definition from the Nielsen Norman Group: user experience encompasses all aspects of the end-user's interaction with the company, its services, and its products. Recently we often hear discussion about experience versus service and product, and from this definition we can get an answer: experience includes both the service and the product. Experience is also personal and subjective, so it is affected by our past experience, our personal preferences, our mood, and other external factors, such as how a person began their day. So user experience is a subjective feeling, and it is complicated. That is why we need to measure user experience. Here is a quote from H. James Harrington about measurement: measurement is the first step that leads to control and eventually to improvement. If you can't measure something, you can't improve it. There are a lot of reasons to measure the user experience, but I think the most important one is this: we measure so we can find the usability issues and then fix them. Another reason is that measurement lets you identify, quantify, and communicate the user experience to your stakeholders. It also gives you a clear position on what you are doing and how well you are doing it, and it can become a competitive advantage. So what do we do to measure the user experience? We do usability testing.
The challenge we face as designers and product managers is not the market or new technologies, but how humans work. What users say and what users do are sometimes totally different, so the only way to verify is to do the testing. Usability testing is not just a checklist for the product requirements; it is also convincing support for your design decisions. I'd like to know how many people here have done usability testing before. OK, got two. Thank you. So I'd like to quickly walk through how to do usability testing. This is a template designed by my team; we use it to write the usability testing plan. In the template, you first need to find some participants: who is the user, who will take part? Second, you need to prepare some things: hardware, software or applications, and a meeting room. You also need to set up a conference call, because you want other people to watch the usability testing. For example, you interview the user, but you also want stakeholders to observe the session, because the way users actually perform a task is often not what was designed. So you set up the call and let the observers watch online. From the usability testing we can see whether people can finish a task. If a user can finish the task, the task is usable; the user can get something done. But if the user can finish a task, how do you know the user likes it? How do you know the user likes the design, the task workflow? We want to know the deeper things, the emotion; we want users to love it. Among the usability metrics there are objective measures: you can collect data such as completion rate, errors, time on task, and so on. But how do you know the satisfaction? Satisfaction is also one purpose of usability testing.
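The objective metrics just mentioned (completion rate, errors, time on task) can be computed directly from logged session data. Here is a minimal sketch in Python; the records and field names are made up for illustration, not from the talk:

```python
from statistics import mean

# Hypothetical per-participant results for one task: whether the
# participant completed it, how many errors they made, and how
# long the attempt took in seconds.
sessions = [
    {"completed": True,  "errors": 0, "seconds": 42.0},
    {"completed": True,  "errors": 2, "seconds": 75.5},
    {"completed": False, "errors": 4, "seconds": 120.0},
    {"completed": True,  "errors": 1, "seconds": 58.2},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
mean_errors = mean(s["errors"] for s in sessions)
# Time on task is usually summarized over successful attempts only.
mean_time = mean(s["seconds"] for s in sessions if s["completed"])

print(f"completion rate: {completion_rate:.0%}")  # 75%
print(f"mean errors:     {mean_errors:.2f}")      # 1.75
print(f"mean time (s):   {mean_time:.1f}")        # 58.6
```

These three numbers describe what users did; the questionnaires discussed next add how users felt.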
So we have to measure the satisfaction. There are two categories. The first is task-level satisfaction measurement: when the user finishes a task, you give them a post-task questionnaire or rating right there in the test. This is a very effective way to collect satisfaction, because it gives us diagnostic information about the usability issues as well as the user's immediate satisfaction. The second is test-level satisfaction measurement, covering the whole session; I will talk about it later. At the task level, all of these questionnaires attempt to capture and quantify how easy or how difficult a task is. I list some questionnaires here because they are all effective and popular, and they can help you measure user satisfaction. The first one is the ASQ, the After-Scenario Questionnaire. Let me go through its questions; the picture is quite small, so I will read them. The first question is: I am satisfied with the ease of completing this task. So the first question focuses on ease of use. The second question focuses on time: I am satisfied with the amount of time it took me to finish this task. And the third question is: I am satisfied with the support information. So there are three questions in this questionnaire, using a Likert question type: the user responds on a scale from strongly disagree to strongly agree. The ASQ is commonly used, and many researchers support this questionnaire as acceptable because of its sensitivity and reliability. In our usability testing we also use it as our satisfaction measurement. The next is the NASA Task Load Index (NASA-TLX). This questionnaire is also a widely used, subjective, multidimensional assessment tool for rating perceived workload; it can be used to rate a task, a system, team effectiveness, or other aspects of performance.
This questionnaire is divided into two parts. The first part has six questions, focused on mental demand, physical demand, temporal demand, performance, effort, and frustration. The user rates each of the six questions on its own scale; for example, the first one runs from very low to very high. That is the first part of the questionnaire. In the second part, the user weights the six factors: which one is most important? For example, in this picture the user weights performance at five. The larger the number, the higher the priority, so this user considers performance the most important factor. Based on the weights, we can compute a weighted rating and get a single number for the task. The next questionnaire is UME, Usability Magnitude Estimation. This one is a little complex. First, you need to explain the task-difficulty definition to the participant. You let the user perform a simple baseline task, for example simply clicking the search icon, and you set this baseline task at 10. Every task the user does next is compared with the baseline. For example, if the user finishes a task and thinks it was twice as difficult as the baseline, he or she needs to type 20 in the input box. This is UME. It is sometimes difficult for users, because they need to judge what "twice as difficult" or "half as difficult" means. The next questionnaire is the Subjective Mental Effort Questionnaire (SMEQ). This questionnaire has nine labels and is much easier than UME, and its scale is open-ended. The nine labels run from "not at all hard to do" to "tremendously hard to do," and the user picks a point on the scale. For example, you ask, "How hard was this task?", and the user picks a label.
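The weighted NASA-TLX rating described above can be sketched as follows. The ratings and weights here are made up for illustration; in the standard procedure, each weight is the number of pairwise comparisons (out of 15) that the factor wins, so the weights sum to 15:

```python
# Hypothetical 0-100 ratings for the six NASA-TLX dimensions.
ratings = {
    "mental":      60,
    "physical":    20,
    "temporal":    55,
    "performance": 70,
    "effort":      50,
    "frustration": 30,
}
# Hypothetical weights from the pairwise comparisons; performance
# gets 5, matching the example where it is most important.
weights = {
    "mental":      3,
    "physical":    1,
    "temporal":    2,
    "performance": 5,
    "effort":      3,
    "frustration": 1,
}

# Weighted workload: weight each rating, then divide by the
# total weight (15) to get one 0-100 number for the task.
weighted_tlx = sum(ratings[k] * weights[k] for k in ratings) / sum(weights.values())
print(f"weighted TLX score: {weighted_tlx:.1f}")  # 56.0
```

The single weighted number makes tasks comparable even when different participants emphasize different workload factors.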
So this questionnaire is much easier than the last one. The next is the SEQ, the Single Ease Question. It asks just one question, only one: "Overall, this task was…", rated from very difficult to very easy. It also uses a Likert question type with a seven-point scale, and the user picks one point. We also suggest using this one, because it is short: it is easy for the user to finish and easy to respond to, and it is also easier to administer and easier to score. So it is easy for the user and easy for the administrator, and we can easily get the result. When we finish the questionnaires, we get a lot of numbers. For example, in this table, from task one to task twelve, we get twelve data points. What can we get from these numbers? What do we do with them? Task five seems to have some problems; the score immediately flags a difficult task. Compared with the other tasks, task five's score is very low, only 63%. So from the numbers we can identify a very difficult task. The goal of usability testing is to find problems and then fix them, so the first thing is finding the issues: we can see task five has some problems, so we need to go back and review. When you do the user testing you interview the users, so you need to figure out what went wrong in task five. After the task-level questionnaires, we need the other category of questionnaire, which is based on all the tasks. For example, the user finishes all the tasks, and at the end of the session we want to know their overall impression of the product's usability and experience. This is the test-level satisfaction measurement. There are also some questionnaires for this. The first one is the SUS, the System Usability Scale, which has 10 questions.
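The kind of per-task flagging described above, where one low score stands out from the table, can be sketched like this. The scores are made up, with one task deliberately low, and the two-standard-deviation threshold is just one reasonable choice of cutoff:

```python
from statistics import mean, stdev

# Hypothetical per-task satisfaction scores (percentages), with
# task 5 deliberately low, as in the table from the talk.
task_scores = {
    "task 1": 91, "task 2": 88, "task 3": 85, "task 4": 90,
    "task 5": 63, "task 6": 87, "task 7": 92, "task 8": 84,
    "task 9": 89, "task 10": 86, "task 11": 90, "task 12": 88,
}

avg = mean(task_scores.values())
sd = stdev(task_scores.values())

# Flag any task that falls well below the others, here more than
# two standard deviations under the mean.
flagged = [t for t, s in task_scores.items() if s < avg - 2 * sd]
print(flagged)  # ['task 5']
```

Flagging like this only tells you where to look; the interview and session recording tell you why the task failed.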
One interesting thing is that the odd items and the even items have totally opposite tones. For example, the first one is "I think that I would like to use this system frequently," while the second is "I found the system unnecessarily complex." Because the tone alternates, when we do the results analysis we need to reverse the score for half of the items; each question is answered from strongly disagree to strongly agree. The next questionnaire is the SUPR-Q. It has 12 questions, but they are all worded in the same direction, covering usability, credibility, and visual appearance. It also uses a Likert question type from strongly disagree to strongly agree. And in its last question, you can see something that looks like the NPS. So what is the NPS? The NPS is the Net Promoter Score. It does not necessarily belong in this category, because it is not only for test-level satisfaction measurement, but it is popular and effective for measuring user experience and satisfaction. I will show some screenshots, for example from Slack and JIRA; I think some of you have received these survey questions, on websites or on mobile phones. OK, here is another example. The question is: "How likely are you to recommend this product to a friend or co-worker?" The user rates from 0 to 10; some products use 1 to 10. From the NPS we can break the responses into three chunks. First, if users rate the score 9 or 10, they are promoters. These are your happiest and most loyal customers; they are very likely to promote your product to others, so we can feature promoters' testimonials on our website. The second group is the passives. If users rate 7 or 8, they are happy, but probably unlikely to recommend the product to others; maybe they just think, "not bad." And the last group is the detractors.
The detractors are customers who are not happy, and they can sometimes be dangerous for your brand, because they will spread negative reviews and messages. It looks like that person: "Sorry, I don't like it. Bye-bye." So we need to figure out the problems and fix them. When we do usability testing, the first and biggest thing is to find the problems, and then we need to fix them. And finally, what do we do with the numbers? For example, we got the NPS numbers; what do we do with them? I think it's important to note that when we get this number, it is not necessary to compare with your competitors. We need to compare with ourselves, because this number is just tracking how well we are doing on our own product. For example, in this chart, in the first usability test we got a four; the next time maybe we get a five, and that is much better. There is not only one way to measure usability. There are a few different metrics: satisfaction is just one of them, and we also need to use the others, for example the success rate, errors, or time on task. I have introduced many questionnaires; I think you don't need all of them. Just pick one from each category and start measuring satisfaction for your product. OK? That's all from me. Thank you. [Audience question, partly inaudible: do these questionnaires capture the users' emotions?] Yeah. The questionnaires are used while you are doing the interview, so you ask users questions like these, and as you ask, you can get their feelings about how they experience your product. For the task level, the question looks like the SEQ: how did you find this task? And for the product, it looks like the NPS: how likely are you to recommend this website?
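The promoter/passive/detractor breakdown, and the single score you track over time, can be sketched as follows. The responses are made up; the standard formula is the percentage of promoters minus the percentage of detractors, so the score ranges from -100 to +100:

```python
# Hypothetical 0-10 responses to "How likely are you to recommend
# this product to a friend or co-worker?"
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 8, 3]

promoters  = sum(r >= 9 for r in ratings)       # rated 9 or 10
passives   = sum(7 <= r <= 8 for r in ratings)  # rated 7 or 8
detractors = sum(r <= 6 for r in ratings)       # rated 0 to 6

# NPS = % promoters minus % detractors.
nps = 100 * (promoters - detractors) / len(ratings)
print(f"promoters={promoters} passives={passives} "
      f"detractors={detractors} NPS={nps:.0f}")
```

As the talk notes, the absolute value matters less than the trend: recompute it after each round of testing and compare against your own previous score.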
Yeah, this question is focused on measuring the user's experience and satisfaction. Does that answer your question? [Audience question about whether to measure feature by feature or task by task, and whether it is useful for each individual task to be easier to do.] I see your question. I think that is a different kind of testing, about desirability: do users even want to use these things? I think that testing is done before product development, during product planning. It is about desirability. OK, thank you. [Audience:] Have you tried using facial expressions to measure satisfaction, or emotion? I think usability testing applies to all kinds of products and technologies, and what you are describing is emotion in usability testing, right? For example, if some users are not happy, you can see that they are not happy with the product. [Audience:] Some people doing the testing just say things out loud, like, "Oh, where is this? Is it going?" So do you capture facial expressions, or maybe voice? Yeah, sure. Emotion matters a lot. There is a technique called micro-expressions: users' faces show small changes when they don't like something or when they are happy. I think we can watch for these things and let them think aloud, saying what they are thinking, so we can capture that information. [Audience:] So you actually use it? Is there any open-source tool? Yeah, in usability testing we should watch the users' emotions, but we don't use any technology; we just observe the users' emotions ourselves.
[Audience:] And do you take notes during the session? How do you write down what's going on? I think when we watch these emotions we don't write down much; we just ask the users. We also record the session for review later. [Audience:] I'm wondering, you can look at a person and see that they hate it, or they are frustrated, or they love it. But you are also collecting a lot of numbers, which makes sense if you want to see progress. How do you turn those observations of emotion into numbers, or do you at all? Currently, we haven't connected the emotions to these numbers. Maybe we can just use the words; you don't have to have a number. [Audience:] And using machine learning to analyze facial expressions is maybe too expensive, and maybe you don't get that much out of it. I was just asking what the state of the art is right now. It's still qualitative research, right? So there will be a written report, and the testing works well enough for us to see for ourselves how satisfied the users are, and you don't need more than that. Running the tests is itself a lot of work. So it's not only about numbers, although numbers are great, because we started with the question "how do we measure?", and this was a great example of how to measure. On the other side, it's not only about numbers; it's also qualitative research. Thank you very much.