Hello, everyone. My name is Taimi Mojuribuna, I'm a product manager at Redfin, and today I'll be talking to you about using qualitative data in your product decisions. To quickly go over what I'll cover today: first, what qualitative data is and the types of qualitative data. Then I'd like to dive a little into how to collect qualitative data, specifically using user interviews and usability tests, and we'll go through some examples. Then we'll talk a bit more about the value of qualitative data, how and when to use it, and mistakes to avoid when using it.

First things first: what is qualitative data? Qualitative data refers to data that is narrative and descriptive, and that gives you personal yet subjective insight into your customers. The best way to think about it is that qualitative data helps you understand the why behind your users' behaviors. With it you can identify their unmet needs, their wants, their pain points, or their loves, and generally get a deeper understanding of your customer.

There are different ways you can acquire qualitative data. There are UX research methods you can use, such as user interviews, open-ended survey questions, or usability studies, or you might get that data through support tickets, in-app feedback, or customer reviews. This data generally falls into one of three buckets.

The first is generative research. This is information that helps you identify new opportunities. Say you're an established business, the market feels saturated, and you want to bring a new idea to the market. Generative research is really helpful for this, so you might conduct something like user interviews.

Then there's evaluative research. This kind of data collection helps you assess or evaluate an existing feature or an existing problem. Maybe you've already identified the problem you're trying to solve for, you have some prototypes or designs, or you've already shipped and you want feedback on whether it's actually solving users' problems. In those cases you want to do evaluative research, and a tool you might use is usability testing.

Lastly, there's a bucket I call customer-driven data. This is mostly unsolicited customer feedback, though not always: things like support tickets, app reviews or customer reviews, and idea portals. Most of the time it's customer initiated. You may send out a survey or an email asking for feedback on your app, but generally the customer is initiating the feedback loop.

Of these different kinds of data, customer-driven data is usually already set up if your company isn't just starting out. As a product manager, you'll more likely find yourself acquiring data through the generative and evaluative methods. If you're lucky, you have a UX researcher you can partner with who is an expert at these kinds of things, who can help you get the data you need, and who you can work with to identify problems and solutions. However, that's not always the case. Maybe your organization is small, or for whatever reason you don't have that support. As a product manager, it falls on you to fill in those gaps. So I want to talk a little more about two of the most common research methods that you can use as a product manager.
First is user interviews, and second is user tests. Both are helpful for both generative and evaluative research. So how do you conduct interviews and user tests? What are some of the characteristics of these methods?

First, you need a research goal. You don't want to go in unclear about what you're trying to identify, so you always want clear research goals and clear research questions with either of these methods.

Second, you need a script. You're probably seeing a trend here: preparation is really key with these methods, and honestly with any data collection method. You want a clear script of what you want to ask users, along with potential follow-up questions, and that script should tie into your research goal. At the end of your test or your interview, you should feel that you got information that will help you answer your research question.

Now, these two methods can be either moderated or unmoderated. Moderated really just means live: you're talking to the user live, and it's nice because you can ask follow-up questions as needed. For unmoderated, you might use an online tool like UserTesting.com or UserZoom, pre-prepare your data collection method, and then have it run on its own. One thing to note is that your interviews and user tests should always be recorded, regardless of whether they're moderated, because it's a lot of cognitive overload to listen to a user, take notes, and ask follow-up questions all at once. You want to be able to reference what was said, but always ask for consent before you start recording.

Then you want to pre-screen your participants. You're not just trying to talk to any person; you may have specific users you want insight from. Maybe you're only interested in first-time home buyers, or only in people who are currently looking to get a mortgage. Do you want people who have familiarity with your company, or people who have never used your apps or services? Pre-screening is key so that you get clean data.

Then you want to run a pilot: always pre-test your experience with one or two users. With interviews, you can test with a coworker; you just want to make sure your script has a good flow and your questions are understandable. With user tests, you want to test with a customer or someone who isn't familiar with the prototype or the screens you're showing them, again to make sure your questions make sense and are understandable.

A few other things specifically about user interviews: they generally take 20 minutes to an hour, so they are fairly lengthy. It's hard to have a five-minute user interview; in that case, you may want to use something like a survey instead. But because you're able to ask those follow-up questions and because it's open-ended, interviews can take a long time and can be a time-consuming method.

User tests, on the other hand, are generally task-based. You have users go through certain tasks, hopefully successfully, and by watching them work through those tasks you identify potential pain points and improvements you can make. They generally run from five to 30 minutes. You want to make sure your user tests don't run long because, going back to that concept of cognitive overload, if customers are performing a bunch of tasks, after a while they will get fatigued. So you want to keep the test short, and you want to keep the tasks simple.
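To make that pre-screening step a bit more concrete, here is a minimal sketch in Python of filtering screener responses against the criteria of a study like the mortgage example above. The field names, criteria, and data are all hypothetical; in practice the screener usually lives in a survey or recruiting tool rather than in a script.

```python
# Minimal sketch: filtering screener responses down to participants who match
# this study's target profile. Field names, criteria, and data are hypothetical.

screener_responses = [
    {"name": "A", "first_time_buyer": True,  "seeking_mortgage": True,  "used_our_app": False},
    {"name": "B", "first_time_buyer": False, "seeking_mortgage": True,  "used_our_app": True},
    {"name": "C", "first_time_buyer": True,  "seeking_mortgage": False, "used_our_app": False},
]

def qualifies(response):
    """Keep first-time home buyers who are currently seeking a mortgage and
    have never used our app (the profile this hypothetical study recruits)."""
    return (
        response["first_time_buyer"]
        and response["seeking_mortgage"]
        and not response["used_our_app"]
    )

participants = [r for r in screener_responses if qualifies(r)]
print([p["name"] for p in participants])  # prints ['A']
```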
Coming back to user tests for a moment: you also obviously want some sort of prototype or visual aid through which users complete the tasks, and the fidelity of that prototype or visual aid can vary depending on where you are in your development lifecycle.

So let's quickly practice some questions we might write for both a user interview and a usability study. For the interview, let's say you were a product manager at Slack shortly after it was first created, and say that Slack launched only on desktop. Now, as it expands, it wants to create a mobile app version. But before that, Slack wants to prioritize: what are the key capabilities and key features we should bring to the app for V1? How do you go about determining that? You might have some quantitative data that gives you some insight, but how do you dig into the why? How do you prioritize what customers really need? This is where a user interview will be useful to generate those ideas.

So your research question might be: how do employees work on the go, and what are the key capabilities necessary to keep them productive? With this, hopefully, through your script you're able to get more insight into how users are working with their phones, and also what the key things are that they use their phones for to stay productive as they move through the world, as they're mobile.

Here are some example questions you might have, breaking your script into an intro, a main section, and a conclusion. For your intro, you might ask, "What does a day of work look like for you?" Here you're just warming them up, getting insight into their day, and also identifying potential follow-up questions you can ask down the road based on what they say. For the main script, you might ask, "What kinds of tasks do you complete on the go?" and "How successful are you at completing those tasks?" This is really getting into the meat of answering your research question, digging into what the user is doing and trying to achieve on their phone as it pertains to work. Then to wrap up, you might add something around idea generation: "If you could only do one more task on your phone, what would it be?" This is a preference question, an idea question. I think it's always good for the conclusion because you start to generate some ideas, and it also clues you in to the priorities of your customer. But it's preferential, so you want to focus more on the content of the main section.
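As a small illustration, here's one way the Slack interview plan above could be captured as plain data, so the research question, script sections, and questions stay tied together. This is a minimal sketch; the structure and field names are just one possible convention, not a required format.

```python
# Minimal sketch: the Slack example interview script captured as plain data.
# The field names and structure are one hypothetical convention.

interview_script = {
    "research_question": (
        "How do employees work on the go, and what key capabilities are "
        "necessary to keep them productive?"
    ),
    "intro": ["What does a day of work look like for you?"],
    "main": [
        "What kinds of tasks do you complete on the go?",
        "How successful are you at completing those tasks?",
    ],
    "conclusion": ["If you could only do one more task on your phone, what would it be?"],
}

# A quick check before the pilot: every section has at least one question.
assert all(interview_script[section] for section in ("intro", "main", "conclusion"))
```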
What if you were conducting a usability test instead? What are some questions you can ask? Let's say you are the product manager for a new kids' coloring game with friendly animals, and you have a low-fidelity mockup. It's on paper, and you're doing a moderated usability study. As you can see, you have Joe the Worm, a button for coloring pages, a button for friendly games, and a button that may start a video.

The questions you ask in usability testing really depend, again, on your goal and what you're trying to achieve with each question. If you're trying to get initial impressions or feelings about the design, you might ask something like, "What do you think when you look at this page? What are your initial impressions of this page?" Obviously, your language should match your audience, so this phrasing might be too advanced for a little kid, but you generally want to get at what they feel.

If you're trying to get a sense of discoverability, maybe whether a button is easy to find, you might ask something like, "What do you think you can do on this page?" This gives the user the opportunity to take a look at the page and start speaking aloud about what they see and what they think they can do, and you can then take note of what they actually discover versus what may be hidden to them.

And if you wanted to test copy, you could ask, "What do you think would happen if you click on this button?" Is the language you're using understandable? For example, we have a button that says coloring pages. You can ask, "What do you think would happen if you click on this button that says coloring pages?" The user might say, "Oh, I expect I'm going to get to a page with a bunch of drawings I can color," and that may match what actually happens. Great, that's good insight. Or they may say something completely different from what's supposed to happen, and that's also good insight, because now you know that maybe your copy is not clear, maybe the design around the button is not clear, but potentially something needs to change.

So you've written your script, hopefully you've run your test, and now you have a bunch of information. How do you analyze your results? There are many ways to go about analyzing and summarizing the results of your research. For user interviews, you ideally want to transcribe. This goes back to the point I made earlier about recording your sessions, because transcription gives you a full picture of everything that was said, and it's just so hard to type notes fast enough to capture everything. Then you want to start grouping the insights and answers into themes. For our previous Slack example, maybe a bunch of people are saying things around quick responses, pre-determined responses they can send quickly to coworkers. Those coalesce into a theme. You keep grouping until you have key takeaways, and those takeaways can be gaps, pain points, or simply insights. Sometimes it's just, "Oh, people use XYZ very often," and that in itself can help you determine what to do next.

With usability testing, you want to know what the user successes and failures were across the different tasks you asked them to go through. You also want to note down unnoticed elements. We talked a little earlier about discoverability: what were the things the user did not notice? Maybe that's a signal to you that there need to be some changes in your design. And where were there word and action mismatches? For example, maybe the user said, "I expect that when I click on this button, it's going to take me to this page, and that page is going to have XYZ." Then they click on the button and it doesn't have XYZ, but they say, "Oh yeah, this is what I expected." Obviously, there was a mismatch: before they clicked on the button they had certain expectations, and now they've almost confirmed to you that what they saw was accurate. Sometimes there's that element of users wanting to please the tester. That's why it's important to note when the words and the actions don't align.
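To make that grouping and tallying a little more concrete, here is a minimal sketch in Python, assuming you've already transcribed the sessions, hand-tagged each quote with a theme, and logged each usability task outcome. All of the data, themes, and task names below are hypothetical, and the counts are only there to help you spot recurring takeaways, not to report statistics (more on that caveat in a moment).

```python
from collections import Counter

# Minimal sketch: summarizing hand-tagged research notes. Assumes quotes were
# transcribed and tagged with a theme by hand, and usability task outcomes were
# logged per participant. All data below is hypothetical.

tagged_quotes = [
    {"participant": "P1", "theme": "quick responses"},
    {"participant": "P2", "theme": "quick responses"},
    {"participant": "P3", "theme": "notifications"},
    {"participant": "P1", "theme": "notifications"},
]

task_outcomes = [
    {"participant": "P1", "task": "find coloring pages", "outcome": "success"},
    {"participant": "P2", "task": "find coloring pages", "outcome": "failure"},
    {"participant": "P3", "task": "start a video", "outcome": "success"},
]

# How often each theme came up -- useful for spotting recurring takeaways,
# not for reporting percentages.
theme_counts = Counter(q["theme"] for q in tagged_quotes)

# Successes and failures per usability task.
outcome_counts = Counter((t["task"], t["outcome"]) for t in task_outcomes)

print(theme_counts.most_common())  # e.g. [('quick responses', 2), ('notifications', 2)]
print(dict(outcome_counts))
```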
You also want to keep track of what the common mistakes or common problems were, as well as the outliers. You don't want to focus only on the generalizations and say, "Oh, this was the thing most people or some people had an issue with." You also want to take note of, and spend time on, the things that maybe only one or two people had an issue with, those outliers, because they might also be an indicator of a big problem.

So knowing all this, what qualitative data is, the types, and how you can go about collecting it, what really is the value of qualitative data? Its strength is in helping you figure out what to do, especially when you use it in conjunction with quantitative data. It takes a lot of resources to build something new; in fact, it takes a lot of resources even to build on something that already exists. So spending the time to understand the space and the customer, know who you're targeting, and understand their behavior patterns really empowers you to make informed decisions and gives you the opportunity to provide what people want.

Qualitative data is great for providing additional context for quantitative findings. Maybe you ran an A/B test, maybe you've done some high-level data analytics, and you have some initial insights; qualitative data provides the additional context, tying back to the why. It's great for identifying new product areas or directions, which is the generative research. It's great for validating or invalidating feature ideas: maybe you already know the problem, but now you're trying to see whether your solution is down the right path, which is the evaluative research. Or maybe you've shipped a solution and you want to understand how, or whether, that existing feature addresses the user's goal or pain point, again evaluative research. So qualitative data is super important for a product manager, even a data-driven product manager, to understand the why of your customers and to get the additional context on what to do. You need it to make fully thought-out product decisions.

However, there are some common mistakes that happen when people use qualitative data. The first is that you don't want to try to quantify qualitative data. You don't want to start saying things like "80% of users noticed the button and were successful at the task." It's an easy mistake to make, but when you have just 10 users, it's not statistically significant. So you want to use terminology like "most," "some," and "all," and steer away from getting into the statistics of your results.

Secondly, you want to make product-directed insights, not human-directed insights. This is especially important if you're doing user studies that may include people from underrepresented backgrounds, as they get most affected by this. For example, if you run a study with one African American participant, and maybe they're an outlier on certain tasks, you don't want to summarize and say, "African Americans have issues finding this button."
Instead, you want to focus on what the product insights from the study were, not the human-directed insights. That said, if a certain underrepresented group differs from your regular pool, that might be a signal that you need to collect more data, to do more research, maybe specifically with that background. But again, you don't want to give human-directed insights.

Then, remember that qualitative data is subjective, so it is not great for preference testing. You don't want to do user interviews or usability testing and conclude, "People prefer blue to green," or "People are willing to pay four dollars instead of six dollars," because it's totally subjective, and you won't really get a sense of users' preferences until you do something like an A/B test or put that feature out in the wild. So you don't want to rely on qualitative data for preference questions like these.

Lastly, try to get a diverse pool of users. Remember that because your sample size is so small, your findings can't be generalized to the population, but you do want to make sure the pool is diverse, diverse as it pertains to what your customers or user groups look like. For example, it might be diversity of technical capability, or diversity of salary, depending on what your tool is, what your user groups are, and which personas you're trying to focus on. You do want to make sure you're getting a diverse pool of users.

So to quickly summarize what we went through today, our key takeaways are these. One, qualitative data really gives you insight into the why of customer behavior, and it falls into one of three buckets: generative, evaluative, or customer-driven. Two, there are many ways you can acquire qualitative data, but user interviews and usability testing are two key research methods for data collection, and for both, preparation is key to data accuracy. Three, qualitative data goes hand in hand with quantitative data, and that is one of the many values it brings, among others. And finally, make sure you don't try to quantify your results, and always focus on product insights, not human insights.

Thank you so much for listening to me today. If you have any questions about what I said today, or if you just want to reach out, feel free to reach out to me on LinkedIn. Thank you.