Hey, hi, everyone. How is everyone doing? So we'll start our talk on how AI will revolutionize the way we do usability testing. Most of you here are UX professionals, so you'll know what usability testing is. But for those who are just starting out: it's a very popular research method, widely used in the tech industry, that helps us evaluate our product's usability and get to know our users better, including their pain points, their preferences, and their expectations. There are already many tools and platforms available for usability testing. But have you ever wondered whether this process could be simpler, better, or less time-consuming? Because when I was the user on those usability testing platforms, I kept wondering how the process could be smoother for me. And that was always followed by one question: how can AI help me with this? That's what we'll cover. We'll go through these four steps of usability testing and see how AI can help us at each one.

The first step is test plan creation. We start by defining our objectives and the usability issues we want the test to cover, and then, taking those as our base, we come up with a script for the usability test. Now, creating a script manually, especially from scratch, can be pretty time-consuming, and it takes significant effort as well. So what can be done? This is actually a screenshot from a tool called qq.ai, and I'm sorry if I butchered the pronunciation. It gives us the feature of defining our personas, our context, and our topic, and when we click on Generate, it can generate a basic script for us. This can serve as a starting point or a foundation, and later we can build on it and customize it to our unique requirements. The second example here is ChatGPT. What I have done is enter a prompt like "generate a usability moderator guide for xyz topic."
And it has given me the introduction, the participant introduction, pre-test questions, the test tasks to be performed, post-task questions, and a wrap-up speech. Which, I'd say, is not bad, right?

The second step is recruitment of participants. This is a really important step, since we want the best target users for our test, right? But even though this step is already supported on current platforms and tools, there have been instances where I've encountered participants who simply exaggerated their prior experience in the domain I was trying to test in. That made me wonder: how much control do we really have over this step? There are two possible ways AI can help here. The first is coming up with a better screener questionnaire. A screener questionnaire is nothing but a set of questions we use to filter our audience, so coming up with a custom screener questionnaire that matches our unique requirements is really essential. Here, again, I have taken the help of ChatGPT to come up with a custom screener questionnaire for my requirement. The second example is the kind of AI solution companies already use for recruiting candidates. In a similar way, we could have a solution that analyzes candidate participants for usability testing based on their relevance, their skills, their expertise, and so on. That would give us more control over which participants we go ahead with.

Moving on to the third step, which is actually conducting the test. At this point we are done with recruiting participants, and we move on to giving our participants tasks to perform with our product. Now, especially during unmoderated sessions, I have seen that participants often forget to read the task fully, and then they end up not completing it, or not understanding it at all. Then there are language barriers, which prevent us from tapping into a wider participant pool.
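As a small aside, the prompt-driven generation described above (a moderator guide from a topic, a screener from a requirement) can be templated rather than typed fresh each time. Here is a minimal sketch; the function names, section headings, and prompt wording are my own illustration, not the API of ChatGPT or any specific tool, and the resulting strings would simply be pasted into (or sent to) whatever LLM you use:

```python
def build_moderator_guide_prompt(topic, persona, context):
    """Assemble a prompt asking an LLM for a usability test moderator guide.

    The requested sections mirror the structure described in the talk:
    introduction, pre-test questions, test tasks, post-task questions,
    and a wrap-up.
    """
    sections = [
        "moderator introduction",
        "participant introduction",
        "pre-test questions",
        "test tasks",
        "post-task questions",
        "wrap-up speech",
    ]
    return (
        f"Generate a usability test moderator guide for {topic}. "
        f"The participant persona is: {persona}. Context: {context}. "
        "Structure it with these sections: " + ", ".join(sections) + "."
    )


def build_screener_prompt(domain, must_have_experience):
    """Assemble a prompt asking an LLM for a custom screener questionnaire."""
    return (
        f"Write a 6-question screener questionnaire to recruit usability "
        f"test participants for a {domain} product. Include questions that "
        f"verify hands-on experience with {must_have_experience}, and mark "
        "which answers should disqualify a participant."
    )
```

The point is only that the persona, context, and topic become reusable parameters, so every researcher on the team starts from the same prompt foundation and customizes from there.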
Participants also often forget to think out loud during the test, which is really important, since we want to know what their expectations are, where they get stuck, things like that. So what can be done? If we take text, combine it with voiceovers, and pair those voiceovers with avatars, this whole pipeline gives us AI moderators. AI moderators can read the task out loud for participants, so they don't miss anything. They can help us localize the test, which lets us tap into more language options as well as accent options. And third, they can remind participants to think out loud if they sense that a participant has been silent for a really long time.

Moving on to the last and final step: insight gathering and data analysis. Again, this is an important step, because we need those insights to come up with the actionable items that will further enhance the usability of the product. And this step takes significant time; it's a tedious task to sit through the test recordings and analyze them. On top of that, note takers often miss important details during a session, maybe because the participant is speaking too fast, or because of their accent. So what can be done? One example of an already available solution is timed transcripts. Transcripts let note takers go back and check anything they missed, and with timestamps we can highlight the negative and the positive sentiments: basically, what users liked about our product and what they didn't can be highlighted. A second solution can be tracking eye movements, which can lead to the generation of a heat map. This will point out where the user's attention was going throughout our testing.
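To make the sentiment-highlighting idea concrete, here is a minimal sketch of tagging timestamped transcript segments as positive or negative. The keyword lexicon is a deliberately toy assumption of mine; real platforms would use a trained sentiment model rather than word matching, but the input and output shapes are the same:

```python
# Toy lexicons for illustration only; a production tool would use a
# trained sentiment model instead of keyword matching.
POSITIVE = {"love", "easy", "great", "intuitive", "nice"}
NEGATIVE = {"confusing", "stuck", "hate", "frustrating", "lost"}


def tag_segments(segments):
    """Label each timestamped transcript segment positive/negative/neutral.

    `segments` is a list of (timestamp_seconds, text) pairs, the shape a
    timed transcript would provide. Returns (timestamp, label, text)
    triples, so negative moments can be jumped to directly by timestamp.
    """
    tagged = []
    for ts, text in segments:
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & NEGATIVE:
            label = "negative"
        elif words & POSITIVE:
            label = "positive"
        else:
            label = "neutral"
        tagged.append((ts, label, text))
    return tagged
```

Running it over a session transcript gives a list a note taker can scan for the highlighted moments instead of rewatching the whole recording.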
And there's also emotional analysis. I think that's a bit difficult to do right now without the help of AI, but if we bring AI technology into this, emotional analysis can be done easily. We can get to know whether the participant was confused at any point, whether they were frustrated, whether they were happy about anything, and all of that can be mapped out.

In the end, I want to leave you with three important takeaways. First, never say "we have always done it this way"; embrace innovation and change to thrive. This was actually shared with me by a senior colleague of mine. She mentioned it in the context of usability testing, but I think it applies both to usability testing and to adopting AI tools and technology in our lives. Second, it will always be up to UX researchers and designers to drive the task and be the design decision makers of the process. And ultimately, AI is most empowering when it works with the user, not for the user. Thank you.