Hello, everyone. My name is Devhavi Fadnes. I'm a Product Manager on Microsoft Excel, and I'm here today to talk with you about how to deal with uncertainty as Product Managers. Minimizing uncertainty and figuring out the right next step for your product is something that Product Managers spend most of their careers on. I thought I'd share some of the learnings from my career so far that have helped me deal with uncertainty effectively. I'm really excited to share them with you, and I hope they add some value. So let's get started. Before we talk about how to deal with uncertainty, I think it's important to spend some time on why we want to minimize uncertainty and which areas we must focus on. As a Product Manager, what do you think your most important task is? I think it is to build a product that is successful. By a successful product, I mean a product that satisfies our customers and is profitable for our business. Product success really depends on three things: product desirability, feasibility, and viability. Product desirability means that there are enough customers in the market with a specific need or pain point that this product or idea solves for. Hopefully, that pain point is recurrent, meaning that customers will keep coming back to your product and using it for the foreseeable future. Product feasibility means that you have the means (resources, time, budget, partnerships, expertise, and dependencies) to efficiently build a product that solves the pain point you have identified. Product viability means that at some point in the future, you'll be able to generate revenue streams, and hopefully profit, for your business using the product that you've built.
So to build a successful product, it's really important that you are working on an idea or a pain point that customers actually need solved, something that you can build, and something that you can monetize in the future. That's why it becomes really important for product managers to minimize the uncertainty and risks associated with product desirability, feasibility, and viability, and find the sweet spot at the intersection of the three that actually increases your chances of building a successful product. So, in short, how do you minimize this uncertainty? You gather evidence that points you in the right direction. When you're working with a new idea and your uncertainty is really high because you know very little about anything at all, your only goal should be to gather as much evidence as you can that points you in the right direction. So what are some of the sources that you can use to gather evidence and minimize uncertainty? There are three major sources of evidence: customers, competitors, and internal teams. Each of these sources will give you a varying degree of information about desirability, feasibility, and viability. So depending on which uncertainty you're trying to minimize, you might want to pick the best source or sources to gather more evidence from. For example, customers are a major source of evidence related to desirability and viability. But will they be able to tell you much about whether you as a team can actually build an effective product for them? Maybe not, right? Your internal teams, on the other hand, will have the information you need to determine whether or not building a particular product is feasible for you. So if you're gathering evidence for feasibility, your internal teams, like design, research, engineering, data science, and other teams that may be working on related features, are the ones you might want to talk to.
Competitors are a moderate source of evidence related to desirability and viability. I say moderate because whether or not customers have a particular pain point is something that only customers can tell you for sure. Competitors will give you a good idea about where your competitors are headed, but don't just follow something your competitors are doing if you don't have the evidence for it from your customers. In terms of viability, you can look at competitors and talk to internal teams again, such as legal, sales, and marketing, to determine what price point you want to set for your services, what your expected costs and revenues are, and other things related to viability. All right, so when you're gathering evidence from customers, competitors, or internal teams, remember certain rules of thumb. One is to give higher importance to what your customers are saying than to what competitors are doing, for the reasons I mentioned earlier. Also, focus on volume: even when you're gathering information from customers, focus on how many customers are pointing you in the same direction, rather than using a single data point or a single anecdote as evidence to proceed in a certain direction. And when it comes to picking which of the three areas to focus on or prioritize, I would advise you to prioritize desirability first and try to minimize the risks associated with it, and then proceed to feasibility and viability, because at the core of it, you want to build a product that your customers want. Even if you build a very efficient product that someone is willing to pay for, if it is not something that a lot of people actually need, you're not going to be very successful with it. So focus on desirability above feasibility and viability, and move on to those later. OK, so now you know what sources are there to gather more evidence.
Now I'd like to walk you through a framework for how to gather evidence from these different sources and minimize the uncertainties we spoke about. The first step is to gather weak evidence. Weak evidence consists mostly of people's opinions, their beliefs, and their insights from past experiences or past learnings. Some of the modes of gathering weak evidence are trend or competitor analysis, customer interviews, customer surveys, and talking to internal teams or organizations to learn their insights from past products, past experiments, and past experiences. With weak evidence, there are only so many customers you can interview, only so many team members you can talk to, and only so much information available on trends and competitors. This is why weak evidence does not give you a significant volume of data points; you have fewer data points. Because of that, I would encourage you not to spend too much time on gathering weak evidence. Now, given all this, you might be tempted to say, hey, I'm going to skip this step and just gather strong evidence for my assumptions or hypotheses. I would really encourage you not to skip gathering weak evidence, for two reasons. One, gathering strong evidence for your assumptions is really expensive, so you will not be able to gather strong evidence for all of them; you need to narrow them down first, and you can do that in this step of gathering weak evidence. Second, although you have fewer data points with weak evidence, they can give you some profound insights and point you in the right direction. Based on the evidence you receive in this step, you can discard assumptions that will absolutely not work for your product and narrow down the hypotheses or assumptions that you want to gather strong evidence for.
So this is really a critical step that can give you profound insights about the overall high-level direction you want to proceed in, and in my opinion you should not skip it. The next step is to remove biases when you're deriving key insights from the evidence that you've gathered. This is super important because, as product managers, you have this really cool idea that you want to work on, you have spent time gathering weak evidence for it, and at this point you are really married to the idea and you want it to work. That can make you analyze the data points of the evidence you have gathered in a biased way. So it is super important that you analyze the information and derive insights in a very unbiased, rational way. Some tips I have for removing your biases: consciously try to disprove your beliefs rather than prove them. If you're going into customer interviews or talking to other teams to gather evidence, have more people join you in those conversations. That way, if multiple people have different interpretations of the information shared in those interviews or meetings, it'll help you carefully consider the different opinions and viewpoints and come to a rational conclusion. Don't share summaries of your understanding of the evidence with people; instead, I would encourage you to share the raw data or raw information you have gathered and ask multiple people to review it and share their interpretation of the takeaways. So really make sure, consciously, that you are analyzing evidence in an unbiased way and deriving insights rationally. Okay. So at this point, you have some assumptions that you think might work for your product. What you want to do next is convert those assumptions into key hypotheses. Okay.
So hypotheses related to desirability might include things like pain points, value proposition, something that will get you engagement. The ones related to feasibility will talk about resourcing, key activities, and partnerships, and the ones related to viability will talk about costs, pricing, profit, etc. Now, when you are creating key hypotheses from your assumptions, make sure that your hypotheses have these three characteristics. One is that your hypothesis is testable, meaning that you can measure it and its outcome can be a true or a false, so you can confidently say whether your hypothesis failed or not. For example, a hypothesis like "redesigning the website will increase user engagement" is very poorly formed and not testable, right? That's because you don't know how you're going to redesign the website, or how you're actually measuring an increase in user engagement. If you modified this hypothesis a little and said something like "redesigning the website with design A will increase total time spent on the website per user", now you have a measurable, testable hypothesis that can either fail or succeed based on whether the total time spent on the website per user actually increased or not. The second characteristic of a good hypothesis is that it has a clear goal and success criteria. A hypothesis like "redesigning the website with design A will increase average time spent on the website per user per week by 10% for users between the ages of 20 and 45" is a very good hypothesis with a clear goal and success criteria. You know which customer segment you're working with, you know what design you'll use for your experiment, you know which metric you're going to focus on, and you know how much you're looking to move that metric to be successful. The third criterion is that your hypothesis must be atomic.
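The website-redesign example above can be sketched as a small data structure. This is a minimal illustrative sketch, not a real tool; the `Hypothesis` class, its field names, and the numbers are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """An atomic, testable hypothesis: one cause, one measurable effect."""
    cause: str          # the single change being made, e.g. "redesign with design A"
    metric: str         # the single metric being measured
    segment: str        # the customer segment the test targets
    target_lift: float  # relative improvement required to call the test a success

    def evaluate(self, baseline: float, observed: float) -> bool:
        """True if the observed metric met the success criteria, else False."""
        return observed >= baseline * (1 + self.target_lift)

# The talk's example, expressed as a testable hypothesis with a clear goal:
h = Hypothesis(
    cause="redesign website with design A",
    metric="avg time on site per user per week (minutes)",
    segment="users aged 20-45",
    target_lift=0.10,  # success = a 10% increase over baseline
)
print(h.evaluate(baseline=30.0, observed=34.5))  # 34.5 >= 33.0 -> True
print(h.evaluate(baseline=30.0, observed=31.0))  # 31.0 <  33.0 -> False
```

Because the goal is encoded up front, the experiment's outcome is a plain true or false, which is exactly what makes the hypothesis testable.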
When you're testing a hypothesis, what you're really trying to figure out is whether the cause has the effect you were predicting, right? So you should really have one cause and one effect in a single hypothesis. Don't try to mix multiple variables in a single hypothesis; that's not going to give you clear results. Now, once you have your key hypotheses mapped out, you need to prioritize them. You cannot proceed to gather more evidence for all the hypotheses you have, and that's because of constraints: in the real world, you're always going to be constrained on time, cost, resources, and various other things. So you really want to pick a few hypotheses to gather more evidence for, and that's where prioritizing them becomes super important. One easy way to prioritize your hypotheses is to ask two questions. One: which of these hypotheses need to be absolutely true for my product to succeed? So you're ordering them based on their importance for success. And the second question is: which of these hypotheses do I lack concrete evidence for? So you're really ordering them first by importance, second by evidence. You want to pick the hypotheses that are really critical for your product's success and that you do not have evidence for yet, focus on those, and then move on to gathering strong evidence for the hypotheses you picked. Now, strong evidence is usually significant in volume. It consists of facts, real-world settings, what people actually do instead of their beliefs and opinions. One of the main ways of gathering strong evidence is actually building experiments to test your hypotheses in the real world. Remember that building experiments is costly, so you really want to be careful when testing your hypotheses using experiments. If you can, I'd encourage you to start with building scrappy experiments.
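The two-question prioritization above is just a sort: first by importance for success, then by how little evidence you already have. Here is a hedged sketch of that ordering; the hypothesis names and the 1-to-5 scores are made up for illustration.

```python
# Each entry: (hypothesis, importance for success 1-5, existing evidence 1-5).
# Scores are illustrative, not from any real product.
hypotheses = [
    ("customers have pain point X",          5, 1),
    ("design A lifts time-on-site by 10%",   4, 2),
    ("we can build the backend this quarter", 3, 4),
    ("users will pay $5/month",              5, 3),
]

# Order first by importance (descending), then by evidence (ascending),
# so the most critical, least-proven hypotheses come out on top.
ranked = sorted(hypotheses, key=lambda h: (-h[1], h[2]))
for name, importance, evidence in ranked:
    print(f"importance={importance} evidence={evidence}  {name}")
```

The top of the list is where to spend your strong-evidence budget: hypotheses that must be true and that you know the least about.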
Now, I know this is not always possible, depending on which product you're working on or which organization you work in. But if you can, try to begin with scrappy experiments that give you the information you need to proceed, and then take the next step. Some examples of scrappy experiments are Wizard of Oz experiments and fake landing pages. Let's say you are looking to test the desirability of a particular product and you don't know whether it's going to work or not. What you can do, instead of building the entire automated backend for that system, is build just the UI, and as customers interact with that UI, have actual humans manually take users through the next steps. If that works, if you find a lot of customers engaging with that experience in the way you were hoping, then as a next step you can actually invest in building the entire automated backend. Or let's say you want to understand whether, if you embed a URL in a webpage, people will actually click on it and then consume the content behind that URL. To test that in a scrappy way, you can create a fake landing page: embed the URL in the webpage you want, and once a user clicks on it, just show a message that says, hey, coming soon, or we'll be live on such-and-such date. Then measure how many clicks you get, and if you think those are significant, you can actually invest in building the experience you want to build. So these are some of the ways you can gather strong evidence, and because an experiment gathers significant data points, its result is usually a very good indicator of what you should do next. If you do not have a lot of users to give you data quickly, you can choose to run experiments for a longer time and gather more data points that way.
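The fake-landing-page test above boils down to one number: the click-through rate, compared against a threshold you commit to before the test. A minimal sketch, with entirely made-up traffic numbers and an illustrative 3% investment threshold:

```python
# Hypothetical results of a fake-landing-page test: visitors who saw the
# embedded URL vs. visitors who clicked through to the "coming soon" page.
impressions = 12_000
clicks = 540
threshold = 0.03  # decided before the test: invest if CTR >= 3%

ctr = clicks / impressions
print(f"CTR = {ctr:.1%}")  # CTR = 4.5%
print("invest in building the real experience" if ctr >= threshold
      else "keep gathering evidence")
```

Fixing the threshold up front keeps the decision mechanical, which ties back to the earlier point about removing bias when you analyze the results.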
Okay, so now you have gathered strong evidence that you can use. Again, you have to make sure that you remove your biases. I cannot stress this enough: it is super important to remove your biases when you're analyzing your evidence. With experiments, this is where having a hypothesis with a clear goal and success criteria is really helpful, because if you have a clearly defined success metric, analyzing the results of the experiment becomes very clear: you either met the goal or you didn't. But let's say the results of your experiment are inconclusive. You might be tempted to say, hey, let's just roll it out to production, because it's not worsening anything even if it's not making anything better. This is where removing biases matters: don't do that. Make sure you have multiple people take a look at the raw data and get their insights into it as well, remove biases, and do the right thing for the product. Okay, so now you went from gathering weak evidence to strong evidence for your key hypotheses. What you can do next is run multiple experiments for the same hypothesis to gather even stronger evidence that points you in the right direction. Or you can choose to repeat the entire process to gather more information, to add features, or to remove existing features that no longer work. Remember that product building is not a linear process. There will never be a time when you have figured it all out. You have to keep gathering more evidence and minimizing uncertainty in an effort to build a successful product. So you have to keep repeating the framework we spoke about and keep iterating on your product. Okay, so in summary: in order to build a successful product, PMs have to minimize the uncertainties associated with product desirability, feasibility, and viability.
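One way to take the bias out of an "inconclusive" call is to use a standard statistical check instead of a gut feel. This is a hedged sketch of a two-proportion z-test, a common technique for A/B experiment results (the talk doesn't prescribe a specific test, and the conversion numbers here are invented):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative experiment: control converted 480/10,000, variant 520/10,000.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=520, n_b=10_000)
print(f"z = {z:.2f}")  # z = 1.30; |z| < 1.96, not significant at the 5% level
```

A |z| below 1.96 says the lift could plausibly be noise, which is exactly the situation where "just ship it anyway" is a biased read of the data, and running the experiment longer (as suggested above) is the honest next step.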
When you're uncertain, your only goal should be to gather more evidence that points you in the right direction. Some of the sources you can use to gather evidence are customers, competitors, and internal teams. Different modes of inquiry will give you different degrees of evidence, so you can begin with gathering weak evidence and then move towards strong evidence. And you can repeat experiments, or run multiple experiments for the same hypothesis, to gather even stronger evidence. One thing you have to keep in mind when analyzing the evidence you've gathered is to consciously remove your biases, in order to really do the right thing for the product and for the customer. And lastly, product development is not a linear process, so you have to keep repeating these steps and keep gathering more evidence to reduce uncertainty and add to your product's success. All right, that's it for me today. Thank you so much for joining this webinar. I hope you got some value out of it. If you have any questions or feedback on this presentation, please feel free to reach out to me on LinkedIn. Have a great rest of your day. Bye.