Hi everyone, my name is Phil. I'm currently the PM leading the iOS growth team on Google Photos. Before I took that role, I was a PM responsible for launching YouTube TV's advertising platform, and I also worked on Google Messages on Android, where I was responsible for the conversation experience and growing RCS adoption. I also love spending my free time meeting founders and seeing how I can help with anything related to growth, product, marketing, and go-to-market. Today I want to talk about something that is very much a core part of the product management experience, and something I'm particularly trying to work on myself: navigating ambiguity and uncertainty, especially within the context of strategic decision making as a product leader. As product managers, our ability to drive impact for the businesses and users we care about is ultimately determined by how well we enable our teams to consistently identify opportunities and execute. While I'll be drawing a lot on personal experience, lessons, and mistakes I've learned from as a growth PM, the principles we'll talk about today apply across a variety of decision-making scenarios and different flavors of product management. Before we begin, though, a quick disclaimer: everything I share today is my personal opinion, and Google is not responsible for any content or opinions expressed by me today. None of the content I'm using or sharing is confidential. All right, with that out of the way, let's jump in. There's a really beautiful quote from French author André Gide that I always try to keep in mind, especially on a really tough day: "One does not discover new lands without consenting to lose sight of the shore." As PMs, as much as we'd love a crystal ball with everything mapped out in front of us, uncertainty is where most of us are actually navigating a lot of the time.
It's uncomfortable, but I also think that's where a lot of the interesting challenges and innovation come from. If everyone knew exactly how to grow a user base, win market share, or delight a specific segment of users, I don't think the job would be as challenging or fulfilling or fun. So my goal today is to share some lessons I've learned helping teams navigate ambiguity. I'm essentially trying to break decision making in those scenarios down into its critical components, especially three things I've noticed from high-functioning teams and from watching high-functioning leaders masterfully navigate ambiguity. My hope is that by sharing those lessons, by the end of this talk you'll also have a toolkit to do the same. Specifically: how do you define the right goals? How do you effectively communicate in a way that instills confidence? And how do you maintain your and your team's energy, morale, and resilience when things get tough? I don't want to just talk about why these things are important; hopefully I can also highlight some practical, specific steps you can take to embody them. So I want you to imagine you've just been hired as a new product manager. Especially when you're in a new product or growth leadership role, teams are really excited, right? There's a tremendous amount of expectation, especially around that metric or KPI you're supposed to impact, and you're excited. You took a Reforge course, you read Lenny Rachitsky's newsletter, and you have all these frameworks you're eager to apply, from Jobs to Be Done to Hamilton Helmer's 7 Powers. And you're bustling with energy, because you're a superstar and you're confident you can deliver impact.
But naturally, as you get to know the problem space, you quickly realize this thing is really, really hard, probably harder than you thought it was. You realize there's no foolproof recipe for how teams can reliably acquire new users, improve retention, navigate a shift in strategy, or redesign their business model. A ten-step framework isn't going to save you from the hard work of actually thinking through some really hard, ambiguous problems. Ultimately, what your team wants besides impact is strategic clarity. They need to know: what are they doing? Why are they doing it this way? And what role does each of them play in determining the outcome? It's our job to help our team, as well as our leadership, answer these questions. What I've experienced is that when we don't do a good job of that, some worrying symptoms start to crop up. The first is that you have a really hard time prioritizing. It's ambiguous which activity will actually drive the most impact, and you don't have systems in place to address those trade-offs in a principled way. Second is misalignment within teams. Without proper context, people rely on their individual opinions rather than aligning toward a shared purpose, and what you end up with is different teams prioritizing different outcomes that make sense locally but aren't necessarily optimal for the entire business. You can also start seeing your UX feel muddied and unopinionated: when the strategy isn't clear, the team starts making inconsistent and compromised choices. This is where the term "shipping your org structure" surfaces as well. Teams also start to over-focus on short-term results because they crave certainty. Without strategic clarity, teams over-index on short-term, easy optimizations rather than thinking about long-term value creation.
Over time, as teams over-optimize, they reach a local maximum and hit diminishing returns on their efforts. When that happens, you start seeing it negatively impact the morale of your team. The work feels burdensome. People no longer feel a sense of pride in the things they ship. And if they don't feel like they're getting the context and resources they need from your strategy, even the highest performers will churn, because they don't believe in, don't understand, or don't feel like they can properly execute the strategy. As you see these things crop up, it's usually a sign that you have a deficiency in one of three categories: you're not aligned on the right goals, you're not communicating your goals or your approach effectively, or you're not managing your and your team's relationship with uncertainty. I've definitely been there, and it's not fun. But I've also seen great teams pull themselves out of it, and great leaders succeed by resolving these issues. Today I want to explore the principles, practices, and systems you can put in place to do all three of these things effectively, so your team can succeed despite facing a difficult and ambiguous challenge. So, the first: to define the right goals, you need to focus on outcomes, not outputs. Now, what do I mean by that? What are outcomes and what are outputs? The best way to illustrate the concept is through some hopefully relatable scenarios. I think most teams have a mental model like this: they think of their product as a system. Customers come in. We measure some interaction they have with our product via product metrics like click-through rate, order volume, or individual feature engagement. Then there's some change in behavior, and we measure that in the form of output metrics like daily active users, retention, revenue, and lifetime value.
Oftentimes the output metrics are the North Star metrics for your business, and individual teams and product leads own a product- or feature-level metric that ladders up to those output metrics. In a way, this model was our best attempt to measure what sort of value we're providing to our users and to our business. These are imperfect ways of assessing how people interact with our product and how it changes their behavior. So there isn't anything fundamentally wrong with this model, but my argument is that it becomes problematic when it becomes a tool to prioritize and plan. Let me give you a hypothetical example to illustrate. You're the PM responsible for a shovel product. Users rent a shovel from you each day with the goal of digging a big trench. Let's say it takes the average user about 10 days to dig a trench, and you charge them a dollar for each day they need to rent your shovel. As a business, you care about revenue, right? So how do you increase revenue? Well, you can't really convince people to dig more trenches than they need. What you really want is people coming back and renting more shovels. Your North Star metric, just like the North Star metric of a lot of product teams, is retention, and that's how your team is going to prioritize and evaluate its work. A few of you probably know where I'm going with this example. As part of one of your sprints, your team launches an experiment where they actually reduce the size of the shovel. The trench that once took 10 days to dig now takes 15 days. And the experiment is a big success, because the cohort you gave the smaller shovels to now has to come back for 15 days rather than 10. Now you make $15 rather than $10, and you get promoted for improving retention, even though you've technically made your product much worse.
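To make the arithmetic in the shovel story explicit, here's a minimal sketch using the numbers from the example (the dollar-a-day rate and the 10- versus 15-day dig times come from the story; everything else is illustrative):

```python
# Shovel example: users rent a shovel for $1/day until their trench is dug.
DAILY_RATE = 1.00

def revenue_per_trench(days_to_dig: int) -> float:
    """Revenue collected from one user digging one trench."""
    return days_to_dig * DAILY_RATE

normal_shovel = revenue_per_trench(10)    # normal shovel: 10 days -> $10
smaller_shovel = revenue_per_trench(15)   # smaller shovel: 15 days -> $15

# The "smaller shovel" cohort looks better on retention-days and revenue,
# even though the user outcome (trench dug fast and cheap) got strictly worse.
print(normal_shovel, smaller_shovel)  # 10.0 15.0
```

The point, of course, is that the metric improving tells you nothing about the outcome improving.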
This is an exaggerated example, but it serves to highlight how messed up this model can be if we're not careful and don't do some deep thinking. I would argue it's actually really easy for product teams to get themselves into situations like this, only the consequences aren't as obvious. What I'm trying to illustrate is that we run into problems when we start with an output metric like retention and work backwards, trying to figure out how to optimize it. When we start with an output metric, we're defining success based on the specifics of our solution rather than the underlying outcomes people actually care about. When you think about it, that's not really how the world works, right? A person evaluates a medicine by how effectively it solves a health issue, not by how often they need to take it. If you're training your dog, you care a lot more about how well behaved your dog is than how many hours of training it received or how it was trained. What we need to do instead is think of the outcome first. By that I mean understanding that people ultimately want tasks completed faster, cheaper, and easier. The nice thing is that by starting with outcomes, you're forced to think about the context and situation your solution operates under. And it's very context dependent: an SUV's utility is very different when driven on a wide road versus down a super narrow alleyway. With this context, you're forced to see that efficiency is ultimately the fundamental thing that drives value. Again, users want things faster, cheaper, and easier, and if your product isn't more efficient than an alternative, the user just isn't going to be motivated to try it. Sure, you might win some users over with marketing tactics and growth hacks, but you're not going to keep them long term.
You need to overcome users' inherent inertia, laziness, and fears. If you can consistently show that your solution is more efficient than the predominant way of solving a problem or completing a particular task, they will choose your product over and over again. Sometimes doing this also forces you to see a fundamental conflict between the value you're trying to provide to customers and the value you're accruing as a business. I would argue that surfacing this conflict is a good thing, because ultimately the viability of your business is determined by how well you can align your success with your customers' outcomes. Think back to our shovel example: at the end of the day, people just want their trenches dug, and ideally you help them do that faster, easier, and cheaper. And you see some conflict with your business model. Maybe you have to evaluate dropping the rental model and just charging for the entire shovel. Or maybe you decide not to charge users beyond a certain number of days if digging the trench takes longer than expected. I'm not saying these conversations are easy. What I'm trying to illustrate is that if you think of outcomes first, you're forced to lift your head up and resolve this inherent conflict, so you can better align business outcomes with user outcomes. There's another problem too: we should just accept that products and businesses are complex. I don't think any of us will end up working on a product where the top-line metric is as straightforward as the shovel example. Outputs are concrete and quantifiable as metrics, while user outcomes tend to be fuzzy and subjective. When we act like there's a perfect metric that could perfectly measure a user outcome, it's like fitting a square peg into a round hole.
We need to understand that all metrics are proxies for the outcome, and we need to stop treating messy, subjective problems like they're well-structured ones. That's not to say we throw out metrics altogether. You still need goals that are quantifiable and concrete. But focusing on outcomes is a reminder not to shortcut the deep qualitative thinking necessary when defining goals. You need to ask yourself: What is the problem I'm solving? What is the natural frequency of that problem? What is the expected behavior within the context users operate under? The takeaway here is that if you can make a habit of thinking about user outcomes, you'll be much closer to aligning the success of your product with what your users value. So we've taken that first critical step of crystallizing where we want to go by defining goals appropriately. The next challenge is figuring out how we get there, and how we communicate how we get there. The way we traditionally do that is with artifacts like roadmaps and OKRs, and processes like annual planning. And just like my gripe with how a lot of teams do metrics, I also have a bone to pick with how we traditionally build roadmaps. Much like how outputs imperfectly measure outcomes, the traditional roadmap is an illusion of certainty applied to an ambiguous situation. It's as if, because we worked so hard constructing this concrete artifact, we can trick ourselves into believing the mystery and ambiguity are no longer there. The problem is that roadmaps quickly become prescriptive, dictating the actions of our teams.
I think we've all been there, where your team feels like a feature factory: you're focused on launching a feature by a certain date at all costs, and you're incentivized to stick with a plan even when circumstances change around you. There has been some innovation here. The OKR process was an attempt to give teams more flexibility by telling them, hey, here's the outcome we're trying to achieve, here's the metric we're trying to hit, and having the teams work through how they actually get there. But from what I've seen, achieving that level of hands-off approach is often easier said than done. We all start off well. We define the objectives and key results for our teams. But then we continue to dictate the actions and launches and timelines. The reason we do this is that as leaders, even though we intellectually understand we need to give our teams latitude, emotionally we crave the certainty of knowing exactly how our teams plan to reach those outcomes. It's human nature, right? We want to know how we're progressing, and especially when a problem space is ambiguous, we gravitate towards something concrete and certain. My suggestion is to think of artifacts like roadmaps as tools of communication rather than contracts. Our job as product leaders is to find better ways to communicate progress that also accept the realities of an open-ended, uncertain path. One tool I've really learned to love is the opportunity solution tree, popularized by Teresa Torres. It's a really elegant visual representation of how you plan to reach a desired outcome. The OST has three main components, for those who aren't familiar. The first is the desired outcome: what is the outcome your project aims to achieve? Just like we talked about before, give it your best shot at measuring it in a concrete way. The second is opportunities.
These are the pain points we can solve to help our users achieve the desired outcome. What's important about an opportunity is that there needs to be some sort of evidence behind it, either through customer interviews or product data. Opportunities should not be based purely on your own intuition or assumptions. Solutions, on the other hand, are our ideas for solving those pain points, along with the experiments, assumptions, and hypotheses you want to validate or disprove. What I love about this framework is that the tree structure showcases a reality where there's more than one path to achieving the desired outcome. You're also forced to acknowledge what you know now versus what you need to find out, which is a really important consideration when you're solving ambiguous problems. To illustrate, I want to go through a quick, high-level example of how you would work through this with your team, step by step. Let's say you're the PM of a ride-sharing app, and your goal is to increase the number of successful rides where a user is happy with the whole experience end to end. This is sounding a lot like a PM interview. Okay, happiness is a hard thing to measure, but you take your best shot, and let's say you believe the metric to increase is the number of five-star rides per week. Not a perfect metric, but directional enough for us to get started. So where are your opportunities to move that metric? Well, you have some research that points to some pain points. First, you know that canceled rides make users unhappy; reduce canceled rides and you probably get more happy users at the end of a successful ride. People could be unhappy because the ride took longer than expected. Maybe the general comfort of the ride is also a factor you've seen in research. What you do now is drill down into each opportunity.
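For teams that like to keep this structure somewhere concrete, here's one purely illustrative way to represent an opportunity solution tree in code. The class names and the ride-sharing nodes are my own stand-ins for the example above, not part of any official OST tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    idea: str
    hypothesis: str          # what we believe and want to validate or disprove

@dataclass
class Opportunity:
    pain_point: str
    evidence: str            # interviews or product data, never pure intuition
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    desired_outcome: str     # measured as concretely as you can
    opportunities: list[Opportunity] = field(default_factory=list)

# The ride-sharing example, with hypothetical evidence sources:
ost = OpportunitySolutionTree(
    desired_outcome="Increase five-star rides per week",
    opportunities=[
        Opportunity("Canceled rides", "cancel-rate data + support tickets"),
        Opportunity("Rides take longer than expected", "post-ride surveys"),
        Opportunity("Ride comfort", "user research"),
    ],
)
```

The value isn't the code itself; it's that every node forces you to write down the evidence and hypotheses that justify it.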
Okay, so let's look at rides taking too long. Well, how do we define "too long"? What's causing it? Do we have data showing that our drivers are actually driving slower? Maybe you don't; there's no evidence of that. But maybe you do have evidence that users have different expectations depending on how long the trip is supposed to take. For example, users taking a short trip appear to really value quick pickups and drop-offs. Users taking slightly longer trips may value a smooth, uninterrupted ride experience. And maybe you also see that on long journeys, users place a really high value on comfort, given that they're in the car for so long. Now, the important part here is that before jumping into solutions, you work with your team to answer: which opportunity, if we solve it, is most likely to drive us toward the desired outcome? And what evidence do we have to support that? Maybe evidence that one happens more frequently or leads to more pain. Maybe one of these opportunities just seems more actionable today than the others. Sometimes the opportunity is obvious; sometimes it's not. The important part is that you have a common language and approach, and you're all on the same page about which criteria you apply to decide which opportunity to pursue. With this tree structure, what's really nice is that people can follow your thinking if you're documenting it properly along the way. Let's say you arrive at the conclusion that the biggest opportunity is in short trips and decreasing pickup times for those users. That's where you start formulating your hypothesis. Maybe your hypothesis is that time spent waiting to get picked up is a much larger share of the overall trip for short trips. If the whole ride only takes 10 minutes and you spend 10 minutes being picked up, it's a pretty unpleasant experience.
So your thinking is: if you can lower pickup times for these short trips, the trip will feel faster and these users will feel more satisfied. Now you ask yourself, what is the minimum viable test we could run to validate that hypothesis and prove we're actually solving this problem and increasing happiness? So you decide: for this cohort of users, for this amount of time, if they call trips of a certain length, we make sure drivers prioritize picking them up, which will shorten pickup times. We'll say our hypothesis is valid if we see a 10% increase in five-star ratings for users on short trips who got those prioritized rides. Then, as your experiment concludes, you surface the results of each experiment on your OST as well. Be inquisitive and ask the "now what, so what" sort of questions: Is the opportunity larger or smaller, or more or less actionable, than we believed? Has the problem become easier or harder to solve? Is this still the right opportunity to pursue, or do we need to update or cross off entire parts of our opportunity solution tree? The point of this exercise isn't to say that this magical framework will solve every challenge in your business. Really, my intent is to showcase how good it is to embrace a strategy based on user outcomes and hypotheses. What I like about this framework is that it allows us to accept the unknown as a natural part of the discovery process, and that uncertainty isn't something we need to eliminate at all costs. This is how you build a roadmap of what you need to do as a team, driven by the hypotheses you have. With this sort of framework, you now have a shared language and a set of practices to flexibly adapt to reality. All right, so we've talked about defining the right goals, and we've highlighted some ways to communicate and execute on this uncertain path you're undertaking.
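Before moving on, the decision rule from the ride-sharing experiment ("valid if we see a 10% increase in five-star ratings") can be sketched as a simple cohort comparison. The cohort data below is entirely made up, and a real readout would add sample-size and statistical-significance checks on top of this:

```python
def five_star_rate(ratings: list[int]) -> float:
    """Share of rides rated 5 stars in a cohort."""
    return sum(1 for r in ratings if r == 5) / len(ratings)

def hypothesis_valid(control: list[int], treatment: list[int],
                     min_lift: float = 0.10) -> bool:
    """True if the treatment cohort's five-star rate is at least
    `min_lift` (relative) above the control cohort's rate."""
    base = five_star_rate(control)
    return five_star_rate(treatment) >= base * (1 + min_lift)

# Hypothetical cohorts: short-trip riders without vs. with prioritized pickup.
control = [5, 4, 5, 3, 5, 4, 4, 5, 3, 4]      # 40% five-star
treatment = [5, 5, 4, 5, 5, 3, 5, 4, 5, 5]    # 70% five-star
print(hypothesis_valid(control, treatment))   # True: 0.7 >= 0.4 * 1.1
```

Writing the rule down before running the experiment is what keeps you from rationalizing the result afterwards.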
Those two things are the bird's-eye view, but now I want to zoom in and talk about what you actually do day to day when you're in the thick fog of uncertainty. In this section, I want to riff on some really helpful lessons I've learned over the years and emulated from excellent leaders I've seen. Sometimes I actually write these things down and stick them to my laptop to remind me to practice them. I'm obviously not perfect, but I can confidently say that on the days I do these things effectively, the team really benefits. The first is to operate transparently and predictably. To level set, here's a pretty sobering statistic, which I believe Microsoft published and which I've experienced firsthand: only about 10 to 20% of experiments lead to the results you expect, essentially positive results. If that's the baseline, that's a lot of failures, and even the most optimistic and resilient among us would have a really hard time getting punched in the gut that often. Nevertheless, your job as a leader is to ensure you and your team can cope with that baseline and still operate effectively. What I've seen is a tendency for product and growth leaders to brush quote-unquote "failures" under the rug. Think of all the dead ends you had, all the experiments that didn't deliver the results you hypothesized. It's important to stop thinking of those as failures. In my mind, the real failures are when experiments deliver inconclusive results. Failures are when you catch yourself saying things like, "oh, we didn't move this metric, but maybe it's because we targeted the wrong cohort," or "oh, we didn't achieve this outcome, but maybe it's because we didn't set up the experiment in a way that actually tests the right hypothesis."
The way you avoid that is by shifting your mindset from "how do I maximize metrics with this experiment" to "how do I maximize learnings." It's also important not to learn in private. Learn in public; socialize your learnings far and wide, even when things don't go your way, because you'll be surprised how often the learnings get leveraged in places and ways you haven't anticipated. What's nice is that if people see that you're transparent, that you can detail the rationale behind your decisions, and that you can articulate what happened, they trust that you're able to think deeply about what to do next. That's really powerful in countering people's anxiety and impatience. Being predictably, consistently transparent is really powerful, especially when the outcomes themselves aren't predictable. Sometimes that means saying, "I don't know." Sometimes that even means saying, "hey, my bad, I was wrong about that one." People need to feel like they can always rely on knowing what's happening, what you're doing next, and why. If you can do that, people feel invested and gain confidence not only in your process, but in you as a leader as well. Another tip is to focus on what you can control. You can't control the outcomes, and you can't know what's going to happen next. What you can control is how effectively your team is executing. There are so many things you can do to ensure your team is learning as quickly and effectively as possible. Are you being thoughtful about how you scope experiments, hopefully using some of the frameworks we talked about? Are you thoughtful about communicating your thought process and next steps effectively? Are you showing up every day engaged and excited, setting a good example for your team?
These are the sorts of things you can control, and I think they should very much be part of our focus. The other thing we talk about a lot is shots on goal. Definitely maximize shots on goal. Even if you miss, if you shoot a lot, eventually some are going to go in. This isn't to say you should blindly kick the ball wherever; it's a reminder that you should try to learn as much as you can, as quickly as possible. You might not have all the answers now, but the more you learn, the closer you get to uncovering the path toward your goal. An underrated competitive advantage of product teams, or businesses in general, is how well they understand their users, how well they understand their market, and how well they can get signal on where the opportunities are and where they can iterate to capture them. That comes from maximizing your shots on goal and trying to learn as quickly as you can. Another tool I've found really useful is having a cadence of what we call learning reviews. I've seen some high-functioning teams do this really well. Instead of going through a summary of last month's metrics, launches, and milestones, shift your focus to this month's learnings. Don't just ask, "what did we do?" Chart your progress by also asking, "what did we learn?" There are five questions that tend to spark really good discussions during these reviews: What did we try? What did we think was going to happen? What actually happened? What did we learn? And what are we going to do next based on those learnings? What's important is that if you take a learning-focused lens in these reviews, it's really effective at reminding you and your team that their work is valuable, even though the problem space is inherently difficult.
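As a quick numeric aside, the shots-on-goal intuition compounds faster than people expect. Assuming something like a 15% per-experiment success rate (a midpoint I'm picking from the 10-to-20% statistic earlier, and treating experiments as independent, which is a simplification), the odds of at least one win grow quickly with attempts:

```python
def p_at_least_one_win(p_success: float, n_experiments: int) -> float:
    """Probability of at least one success across n independent experiments."""
    return 1 - (1 - p_success) ** n_experiments

# Assumed 15% per-experiment success rate (midpoint of the 10-20% stat).
for n in (5, 10, 20):
    print(n, round(p_at_least_one_win(0.15, n), 2))
# roughly 0.56 after 5 experiments, 0.80 after 10, 0.96 after 20
```

So even at a discouraging per-experiment hit rate, a team that keeps shipping experiments is very likely to land wins over a quarter or two.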
The learnings should be the star of the show, and the launches, impact, et cetera should be there for context, rather than the other way around. At the end of it, remind your team how they're continuing to contribute to this expanding corpus of knowledge about their users, their market, and their behavior, and remind them of that competitive advantage: hey, we are building our moat with the insights we're gaining here. So as we wrap up, let me quickly recap the key takeaways. First: to define the right goals, focus on outcomes, not outputs. Don't treat concrete metrics as a shortcut around thinking deeply about users and the context they operate under. Second: treat the roadmap as a communication tool, and communicate through hypotheses rather than features. Roadmaps aren't hard-set instructions; they're ways to communicate your thinking and your learnings. Third: create a culture that celebrates learning. Navigating ambiguity requires embracing the unknown, and it requires resilience. Remember that what you can control is how you approach each day, how effectively and quickly your team is learning, and how well you're celebrating those learnings. To close, I know these principles sound simple. I don't think I'm winning any Nobel Prizes for delivering them, but I really do believe they're the cornerstone of your success as a decision maker. I also know they're much easier said than done. But I'm convinced that if you work on them, you'll be equipped with the skill set to tackle complex challenges with confidence, to lead your teams effectively, and to always keep an eye toward the ultimate goal of delivering real value to the customers you care about. You're here in this webinar because you're an eager learner with a growth mindset, trying to get better each day.
And I think tackling these sorts of product challenges requires the same ethos and approach. Thank you, everyone, for sticking around this long. If you found value in the insights shared today, please feel free to connect with me on LinkedIn. And if you feel like you could benefit from any guidance, discussion, or consulting related to product management, growth, or anything really, please don't hesitate to reach out. Like I mentioned earlier, I love working with product leaders, meeting other product managers, and meeting founders. So if there's anything I can help with, especially related to product, growth, marketing, and go-to-market, my inbox is open. Thank you for your time and attention today, and I wish you all the best in your product journey. Thanks so much.