Thanks, everyone, for being here. Carlos is a great person to learn product management from. I was just telling Tina, who I was sitting with, the simple reason why: this is a classic case of PMing, where stuff is broken, you need your use case satisfied, and you come up with an innovative solution really quickly. A paperclip doing the job of a broken clip, which is really great. But anyway, thank you, Carlos, for having me. I'm super excited to be here. As Carlos mentioned, I'm Ketan, a product manager at Dropbox, and today I'm really excited to talk about influencing without authority. Can anyone tell me what this really means? Anyone in the audience? Influencing without authority. Someone says: you're not the manager of the person who needs to do it. Exactly. That's exactly right. You're not someone's boss, and you need to influence them in some way to get something done for you. And if you noticed what I just did: I didn't have any authority over this audience, but I influenced her to answer that question. Influencing her might have been easy, but influencing your team as a PM might not be. So why did I decide to talk about influencing without authority? It's interesting, because it's talked about a lot in business education. Go to a B-school class or a leadership class and people always ask: how do you influence without authority? It comes up in lateral leadership and cross-functional management. And if you think about it, product management is all about cross-functional management. You're working with different stakeholders, with different people.
And without really having any authority over them, you're trying to figure out how to get your product built and shipped, and to have the impact you all want to have. Coming back to the influencing-without-authority piece: when you read the literature, it always focuses on the soft-skills aspect. But as I've been doing this as a PM, I've realized there's a data component to it as well. So what I wanted to do today is not only talk about influencing without authority, but about how data can help you here, add my own spin, and share some stories about situations where I've found data to be an effective complement to the more traditional strategies. A little more about myself. I'm Ketan, PM at Dropbox, third time saying this. I've been at Dropbox for a little over a year, working on the analytics and growth teams. Prior to Dropbox, I was a PM at a company called Applied Predictive Technologies, APT, out in DC, which was acquired by Mastercard. That was my first PM gig out of school. Before that, I did a whole bunch of random things, from quant trading to manufacturing in a machine shop. So if you're interested in my roundabout, crazy way into PM, find me afterwards and I'll be happy to share my story. Before that, I went to MIT, and I did my undergrad back in India. A bit of personal background: I'm a huge fan of national parks. My favorite right now is Zion, but the picture of me smiling there isn't Zion; it's actually Bryce Canyon. Cool. When you're a PM looking to get some presentation training, people say there's the ABCD of presentations: A, get the crowd's attention, which hopefully Carlos did for me. B, talk about the benefits. C, establish credibility. And D, the direction, which is what I'm going to cover now. So, a few things I want to cover today: why influencing without authority? Why does this matter for PMs?
Approaching it traditionally: how do you approach this from the standpoint of what the literature says? I'll share a little of my own personal story and how I've managed this in a PM context. Then we'll add some data into the mix, and I'll finish by talking a little about setting yourself up for success when it comes to influencing without authority. But before we get into all of that: many of you are here because you want to be a PM. When I do interviews for PM roles, a lot of people ask me, what does a product manager really do? And this is a really hard question to answer, because depending on who you ask, the answer really varies. There's what my mom thinks I do, which is that I'm Steve Jobs building flashy products and will one day, hopefully, unveil the next iPhone. We might think we're Superman, saving the day for our engineers and designers. But really, sometimes PMing is: you just don't know, and you figure it out as you go along. So what did I decide to do? If I were a really naive person, what would I do? I'd Google it, right? Of course. So I Googled "what do product managers do?" And Google tells me a product manager is the person responsible for defining the why, the what, and the when of the product that the engineering team will build, and so on. But most importantly, it says they are the CEO of their product, which means they lead cross-functional teams from a product's conception through to its launch. Now, I hear this all the time from candidates I interview. I ask people why they're interested in product management, and they say one of two things. One: PMing is the intersection of technology, business, and design, and I want to be in that intersection, the three-circle Venn diagram many of you have seen. Some people are nodding their heads.
But I also hear this other thing: oh, I would love to be the CEO of the product. How many of you think PMs are the CEOs of the product? I thought so, and no shame in raising your hands. Turns out, PMs are not really the CEOs of the product. The reason, as she said earlier, is that as a PM you're not anyone's manager. I cannot go to my engineer and say, you have to build this. The engineer's going to say, get lost, I'm not going to build this, right? So you don't have authority. The CEO, however, does have the authority, because he's the boss. You, on the other hand, don't. So I think of it this way: PMs really don't have direct authority to make their product successful. I was on a hike with a friend of mine who's now a PM, and he was telling me: I'm in this PM position working with two really senior engineers, and I tell them to do all these things for me, and they don't listen to me at all; they just build whatever they want. And the diagnosis is classic, right? An inability to influence these senior engineers toward what you want them to do. And if you try to impose authority on your stakeholders, it's just a disaster. Things will not work out, and you may have to find a different profession. The reason this matters in PMing is that you're not the CEO, but you're responsible for the success. It's actually the worst part of being "the CEO of the product": you have all the responsibility for making it successful, but you don't have the power to make it successful. Well, you kind of have the power, but not in a direct way. And then you have to deal with all these people: senior management, design, marketing, business development, engineering, sales. They all want different things, and they all have their own perspectives.
Design says this has to be pixel-perfect. Senior management says, where is this product? It should have gone out yesterday. And you are the one person who has to deal with all of this: set one vision, align everyone, and move forward. So when we talk about PMing, I personally agree with Sachin Rekhi. I don't know if you know him, but he called this the most underrated PM skill, because writing requirements you can learn, and working with engineers, technical skills, design are all way easier to learn than navigating this maze of figuring out how to convince people. And it's really hard, especially if you're a new PM starting on a team where you don't know what's going on, and the team expects you to be the savior who helps them figure out what's going on. It's hard to have that authority from the get-go, especially if you're just becoming a PM. And I just like this Dilbert comic, where you listen, misinterpret, and say something. But I like to think of skills as things you can learn. Influencing without authority may not be taught through a regular channel, but you can learn it, and hopefully we'll cover a little of it today. Before I talk about data and how I've used it to influence people I'm not the boss of, let's talk about how this is viewed traditionally. By traditionally, I mean: if I go back to Google and change my query from "what do product managers do" to "how do I influence my engineer," what do I get? Turns out, a ton of results. Influencing without authority is this fancy thing. There are articles ranging from "three ways to influence people without authority" to "11 ways to influence without authority" and everything in between.
And these are valuable in their own right. Personally, I like HBR articles, so if you're generally facing issues with managerial concepts in the workplace, my pro tip is to read HBR; it's really valuable at a high level. But a lot of these articles don't go into the tactical aspect. They'll say you should look to do something, but how does that work in practice? The classic example is the first thing all these articles say: build relationships. Get to know your team as people, not just as coworkers. This is really hard, especially for someone like me who's an introvert, but it matters a lot in reality. For instance, how many of you have tried a product because a friend suggested it to you? Probably a lot, right? Your friend doesn't have authority over you. You do it because you care about them, and that relationship makes you trust them enough to try the product. For products, this is great, because you get wider adoption from people telling other people to use them. And at work, if you know someone, getting something from them becomes dramatically simpler. So say you're a new PM entering this world, and all of a sudden there are all these people, all these stakeholders. How do you build relationships? It's a really uncomfortable place to be, and it's really, really important. My solution is food. People eat three times a day, unless, I don't know, some people skip breakfast; I don't get why. But people eat all the time, and there's a book I haven't read but have heard good things about, Keith Ferrazzi's Never Eat Alone. It talks about how bonding over food is a really, really good way to build relationships.
And this is something I've done, because I don't like building relationships in a very networky way. I just say, hey, let's grab lunch. As a PM, I get lunch with my team every single day. I schedule one-on-one lunches with my engineers. I even organize events: I invite all the engineers to my home and say, hey, let me try to cook something for you. Maybe they just pretend it's nice and say it's good. But the bottom line is that I'm trying to get to know them as people, and for me personally, food is a really, really good way to establish that bond, where people know things about you: I once baked cookies and they turned out terribly. Now you've built this really nice bond with someone, and the next time the engineer wants something from you, or you want something from the engineer, the whole relationship is streamlined. It doesn't feel like work; it feels like, hey, we're all working together. It greases the wheels a little. The next thing these articles say is: align on goals and vision. One of the books I have read, however, is The 48 Laws of Power, and one of the 48 laws, which is a lot of laws, is: if you're looking to get something from someone, appeal to their self-interest, never to their mercy or gratitude. If I go to my designer and say, please, can you make this mock for me? I would be so happy if you did. The designer's going to say, okay, fine, what will you give me? It's very hard to persuade people like that.
However, it's far easier to convince someone if you're working toward the same goal. The way I've tried to do this is to find common ground and shared interests, and get people on the same page. A quick story here. When I first joined Dropbox, on the analytics team as I said, I was super excited to work with Dropbox design, because design at Dropbox is considered really, really strong. It turned out my team actually didn't have a designer, so I had to convince someone to join my team. Well, I had to convince two people: the design manager, to let someone work with my team, and a designer, that there was something in it for them. And honestly, I wanted a designer because I might not be the strongest in design myself. So how did I go about this? Well, I showed the senior manager some carrots: if you send a designer to my team, well, we're working on experimentation, experimentation is revenue-generating for the company, we can make better decisions, and that's a company goal. So the senior manager saw that if this improved, we could run many more experiments, and said, okay, let's try to make this work. The harder part was convincing the designer. How do you convince a designer that this is actually worth his time? I ended up talking to the one designer whose work was most closely aligned with my team; he was doing work right next to experimentation.
So I went to this designer and said, hey, you're trying to build this improved workflow for your users, a streamlined workflow to help them do growth analysis much better. But a key part of that is experimentation. If you consult with me for even an hour a week, together we can build this really nice world for our users that streamlines the end-to-end workflow. So in some sense, what I did was play to his sense of building a complete, beautiful world for the users, one that encompassed not only the area he was focusing on but the area I was focused on myself. Aligning on vision and goals, and finding common ground and interests, is really important if you want to convince someone. It's really not easy, but it's something you develop over time. The next thing these articles say is: have empathy for other people's problems. What is someone trying to do? Try to help them. In my experience, and before Dropbox I was an enterprise PM, sales is really, really good at this, because on a sales call someone has to convince you to buy, or at least try, this massive product. Sales is really good at understanding people's problems, surfacing them, and showing how they can help. So how did I do this? The answer was relatively simple, and if you're looking to be a PM, I'd encourage you all to do the same: when you arrive as a new PM on a team, your foremost responsibility is to add value. And the way you add value is not through your mocks, not through your designs, none of that. You simply go there and ask: how can I help? I still remember my first team, and this was advice given to me by one of my mentors: just go and ask how you can help. On my first team, I just did support for a month.
I sat there and looked at all the tickets coming in. The engineers were tired of looking at the tickets. Doing support was really valuable for me, because I learned about the product. But the engineers also found it really valuable: I don't have to deal with these pesky customer tickets anymore, this guy's looking at them, he's helping me out. So you build that relationship. But the real point, and the reason there's a door on the slide, is that at some point you have to leave your ego at the door. Even when I joined Dropbox, with three or four years of experience and in a relatively senior role, the team was struggling with documentation, so I just spent two weeks writing documentation by myself. The idea is: look at the problems people are facing and try to solve them. As a PM, you're picking up the rocks that are falling. It's not about your role being well defined; it's about keeping the ship sailing, and whatever holes there are, a massive hole or a small one, in the top deck or the bottom, you plug them. That doesn't mean you always end up doing the work nobody else wants; it's a fine balance, and you have to manage it accordingly. Okay, so that's approaching this in the traditional sense. But what happens when you add data to the mix? The reason I wanted to talk about this is that it's become fashionable to talk about data-driven product development, data-driven decision-making, data-driven whatever, but we don't talk enough about data-driven processes, especially as PMs: how do you incorporate data into your own processes, and how do you influence people with it?
As a PM, I really believe data is one of your biggest weapons for making decisions. I just wanted to share some of my stories from helping to build and ship products, some learnings about how I've used data when interacting with the stakeholders we talked about earlier. Any questions at this point? Okay. Group number one: engineers. The most important group you will want to influence is engineers. If I ever had a prayer in PMing, my prayer would be: I wish I had control over my engineers. It's interesting, because a bad partnership between an engineer and a PM is totally disastrous, whereas a great relationship, one where the engineers trust the PM and the PM trusts the engineers, can go a long, long way toward making a product really successful. These team dynamics play a really important role in how your teams function. Really, engineers are people who care about impact and making a difference. They want to work on technically challenging problems; they want to make an impact. What do you need from them? You want them to build the products and features that you want to ship and that you care about. And those don't always align. So I want to share some stories here. We said engineers love to work on impactful projects. Nobody wants to ship something that will get trashed in a couple of days; nobody wants to write throwaway code. So what the engineer is looking for from you as a PM is some sense of: is this PM doing his job? Am I building something valuable? Is this going to be important? And the way this gets surfaced is that, as a PM, you set the priority for your projects through your roadmap.
If you look at any interview question or interview guide for PMing, it always says: for prioritization, have a cost-benefit matrix, where you define benefits in terms of revenue or number of users, and costs in terms of engineering effort, and so on. But in reality, I've found that very rarely do people do this rigorously and in terms of data. One of the first teams I saw do it well is our search quality team at Dropbox. The PM on that team had done a really thorough analysis of one metric called MCTR, which stands for missed click-through rate. Say you're a PM on search. What do you want? Search should fetch the most important results for you, and if your search is ineffective, you don't click on any search results; you fall back to a navigational UI. In Dropbox's case in particular, you have two ways to access your folders: you can navigate, or you can use search. One of the really interesting things the search quality PM had done was a roadmap that called out the prioritization and the specific percentage points by which each item could move MCTR. It's almost a stack-ranked solution. Now the engineers on that team can look and say: oh wow, if I build this, MCTR improves by 10 percentage points, versus this other one, where it only improves by two. So not just building a data-driven roadmap, which a lot of people do, but surfacing it to the engineers, here's the rationale, here's why this data makes sense and why we're building this, is really important for earning your engineers' confidence that, okay, this PM knows what's going on.
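A stack-ranked, metric-driven roadmap like the one described above can be sketched in a few lines. This is a hedged illustration only: the feature names, lift estimates, and engineering costs below are invented for the example, not the search team's actual numbers.

```python
# Hypothetical sketch of a metric-driven roadmap. Each candidate feature
# carries an estimated lift (percentage-point reduction in missed
# click-through rate) and a rough engineering cost. All values invented.

features = [
    {"name": "better ranking model", "mctr_lift_pp": 10, "eng_weeks": 8},
    {"name": "filename fuzzy match", "mctr_lift_pp": 2, "eng_weeks": 3},
    {"name": "recent-files boost", "mctr_lift_pp": 5, "eng_weeks": 2},
]

# Stack-rank by expected lift per week of engineering effort, so the
# rationale behind the ordering is visible to the whole team.
roadmap = sorted(
    features,
    key=lambda f: f["mctr_lift_pp"] / f["eng_weeks"],
    reverse=True,
)

for rank, f in enumerate(roadmap, start=1):
    print(f'{rank}. {f["name"]}: '
          f'-{f["mctr_lift_pp"]}pp MCTR for {f["eng_weeks"]} eng-weeks')
```

The point isn't the arithmetic, which is trivial; it's that the ordering and the numbers behind it are written down where engineers can inspect and challenge them.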
Here's how I adopted it. I was building an experimentation platform, and one of the metrics we hinged on was this: we didn't want people to manually analyze experiments using SQL; we wanted them to use the platform. So the number one thing we looked at was: how do you prevent manual querying? That's the metric I hinged the roadmap on: if we build this, we get 10 users who no longer query manually; if we build that, maybe three users stop querying manually. The justification can vary in different ways, but having a clear, distinct roadmap where you can point out, from a data perspective, why one feature is prioritized ahead of another is really, really important. But then again, roadmaps are kind of crazy. You come up with a beautiful roadmap, and then someone says, but I want this tomorrow. Or the CEO, the real CEO, not you, comes and says, I want this product tomorrow. So a lot of one-off things come into the picture. For experimentation, this actually happened. I put together this beautiful roadmap: if we build A, B, C, D, and E, manual querying will be reduced by such and such. But then we came across this interesting feature called multi-armed bandits, which, without going into the details, is like ML for experiments. Turns out it doesn't help manual querying at all. The one metric we'd hinged on, it doesn't help whatsoever. But it has huge potential for driving revenue. And now you can no longer fit this one-off project into the beautiful roadmap you've created, because its value on that metric is going to be a big zero. It's not going to change manual querying at all.
So there, what you need to do is align the engineers a little more toward company goals. Why do we care that we're building something that doesn't help manual querying? The answer: driving revenue is a really important company goal, and we validated that this feature can have this much impact from a revenue perspective. What I wanted to call out is that the first part is very tactical, but for the second part you need to be aware that sometimes you have to pivot a little: this is the framework we adopted, but sometimes it flips. The second thing engineers care about, after am I going to work on something important, is: once they build and ship something, was it really valuable? Do you close the loop? A lot of PMs, including me when I started out, build these beautiful dashboards and only present them to senior PMs or senior managers: we've driven this much impact, here are the metrics for this feature. But it turns out engineers love looking at their features and seeing adoption: oh my God, I built this, and 200 users are using it. This is great, right? So one way I try to encourage this on my teams is so-called Friday feature reviews. Every Friday, for every feature we released, major and minor, I'd build a dashboard, even if it didn't align with the metric we'd used previously, and show those dashboards to the engineers in a systematic way, on a weekly basis. The engineers get two things out of it. One, they realize: this guy cares about the features we built.
That's the perception piece. The second thing is they watch their features' numbers go up: oh wow, this is great. And it works particularly well for young engineers. I had a case where an engineer named Lee joined my team. To start off, you give new engineers a really tiny feature, and in this case we asked Lee to build an outlier performance feature that was really, really small; in the grand scheme of things, we didn't even have a metric for it. What I ended up doing was building two specific boxes: how many users, and how many clients, are using Lee's performance feature? And at every Friday feature review, I'd say, hey Lee, look at this, there are 100 users using it. What this does for young engineers is: oh wow, I built something that matters. You've come from school, where you worked on projects nobody used apart from you, and maybe your professor barely looked at them. Now you're actually making some impact; that's valuable. It does two things. One, it reinforces the young engineer's trust in you; if you're a young PM, that's great. And two, it enables these young engineers to really take ownership. The next time you say, hey Lee, can we make these improvements to the outlier feature, they'll actually be really excited about it. Yeah? Question: that's a very optimistic view, right? What about when the impact is negative? Yeah, yeah. So, failures are expected everywhere. There are two things it comes down to. One: is your prioritization poor, and is that why the expected impact didn't materialize?
Or is there some other problem? And then it's on you to do a retrospective and say: we expected this metric to go up; here's why it went down. Do follow-up research. So if you prioritized something you thought would really drive things forward and it didn't, you dig in. A/B tests are a different issue, because in an A/B test you genuinely don't know whether a metric will go up or down, and either way you're learning something. But if you build something with confidence that it will get adopted and it isn't adopted, then the responsibility is on you to find out why, and where the problem is. Question: once you figured that out, did you actually reassess? It depends on the timelines of these things. For really large features, you'd never just release and let something keep running while things go south really quickly; that's where you might do the A/B test. For really small features, if you see a huge negative spike, you might pause the feature, unless it's still growing, and get the engineer working on something else. And yes, on the negative signal: regardless of whether it's a young engineer or anyone else, if you expect a metric to really shoot up and it goes down, you have to pause and seriously evaluate what's going on. Any other questions? Sorry. Cool, I covered everything there. So, the next group on this list. Yes, sorry. Yeah, that's a bit of a tricky process. Oh, sorry. For those listening live, the question was: when I had fixed the metric to be manual querying, how did I estimate the expected reduction in manual queries from building certain features?
That was a kind of long and arduous process, where I actually went and looked. We had different user groups running manual queries of different complexity, and the essential point of building the platform was replacing pieces of those manual queries. So, starting from the lowest-complexity query up to the highest: what do we need to build to start satisfying those use cases? Say the number of people running the simplest query is 10. If we build a platform that does everything the simplest query can do, I'm pretty confident I can get those 10 people on board. But I still haven't gotten the five people running the more complex query onto the platform. So, in a nutshell, the way I did it was: look at the distinct ways query complexity increases, translate those into features, and then look at the number of people writing each type of query. Does that make sense? Cool. So, the next group is designers. Designers are an interesting bunch. Yeah, sorry. Sure. I'm going to cover that a little later, but really what you need to do is standardize the set of metrics that you as a team and a group care about. Typically, my team or my area would have five or six metrics we care about in general, and those would sit at the top of my dashboard: team activation rate, number of people using something, manual querying in this case, and so on. Individual features would have their own adoption boxes at the bottom. So generally we'd look at the trend, because our goals for the quarter would be driving some of the team's core metrics up, but at the same time, with specific engineers, we'd go in and ask whether their feature is doing well, and so on.
So it's a combination of a high-level set of metrics plus a bunch of smaller usage numbers. Oh, I'm already on the next slide. So I was saying designers are an interesting bunch because they really want to understand users and deliver the best experience. Left to themselves, designers are inclined to hyper-optimize: they want everything to be perfect, everything to be beautiful, very Steve Jobs inspired. You, on the other hand, are operating in a constrained optimization world. You want the product to be good and to work, but at the same time you want it shipped fast and you want to make the right decisions. So that's where a bit of the clash occurs: the designer thinks, this guy is shipping something not so great, whereas as a PM you think, maybe this doesn't matter as much. So I'll talk a little about how this manifested itself tactically. When I work with my designer, the way I try to work is with a big table in a document. As a PM, you always think about use cases, so you have your set of use cases for the product, and typically I also include an example of how each use case manifests itself. These use cases are stacked in priority order, from the highest-priority use case down to the lowest. Then with design, we work through three or four different variants and how they perform relative to the use cases; we run some tests and figure this out. When I did this originally, I would label priority as high, medium, or low. But what we would run into all the time is a clash between two high-priority use cases. Which one's more important? Is this high, or is that also high now?
And then one day, the designer asked me: what percentage of usage does this use case actually represent? Because I was always thinking "this is more important," and especially when you do qualitative research with user interviews, you never think about what percentage of users are saying something. So what we ended up doing was adding a column for percentage usage, not classifying things as high, medium, low, but really saying: users want to do this 70% of the time versus that 5% of the time. And a 70% or 90% feature is completely different from a 3% or 5% feature, because the way you surface it in the design of the product is totally different. If it's a 50-50 split between use cases, I might use a tab design or whatever, whereas if it's a 95%-5% split, the 5% use case is going to be much more hidden, because you want the design to stay simple. So we started doing this, and one interesting thing happened. At that time, I was revamping an existing tool. As a PM, you might want to work on a really, really new tool, but more often than not you are revamping some existing tool that's really crappy and has lived its days, basically. So we were revamping this one tool, and we started talking to users about it. There were what we thought were two big use cases for this tool. I went and talked to the users and started noting down who had which use case, building a tally of how many people said they'd use each one, and what percentage of the time. For use case one, people in their interviews said they'd use it 60% of the time, and for use case two, 40% of the time. And then I started looking at this, and it was really challenging for us to design, because the design would end up being really, really complicated.
And then at that time, the EM on my team (EM stands for Engineering Manager, kind of a team lead in some sense) said: we have some data from the legacy tool, maybe we should look at that. So I started digging into the data to give the designer an additional data point. And it turns out use case one was in reality used 93% of the time, and use case two only 7% of the time. So in my experience, when you talk to people, they dramatically overestimate their tail end. They think the tail is way thicker than it actually is. They understate the common use case and overstate the rarer one. And that's a huge problem for you, because a 93% or 95% use case is way easier to design for, since you can just hide the minor use case one level deeper. As a side note, my designer's philosophy, which I have now adopted, is to think of designing a product like organizing the desk at your work. You want the most-used items on top, in front of you, but if something isn't accessed as regularly, you put it in a drawer that takes a little more effort to pull out. With a 95%-5% split, I'm comfortable putting the 5% use case in the drawer. But with a 60%-40% split, I'm tempted to keep both features on top, making my desk look way more cluttered than it needs to. So in this case, learning that our 60%-40% split was really a 93%-7% split was really valuable, because it made the whole design process simpler. And now the designer knew we were making the right decision: the PM has the data to back this up, and we're relying on that data rather than on people overstating or understating their use cases. And the value of this is that it's not just about showing data and getting the designer aligned; you're making a better decision here.
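As a concrete sketch of that analysis, tallying actual usage shares from a legacy tool's event log takes only a few lines. The event names and counts here are invented for illustration:

```python
from collections import Counter

# Hypothetical event log from the legacy tool: one record per user action,
# already mapped to the use case it belongs to.
events = (
    ["use_case_1"] * 930   # 930 logged actions for use case one
    + ["use_case_2"] * 70  # 70 logged actions for use case two
)

counts = Counter(events)
total = sum(counts.values())

# Share of real usage per use case, replacing high/medium/low labels.
shares = {uc: round(100 * n / total, 1) for uc, n in counts.items()}
print(shares)  # {'use_case_1': 93.0, 'use_case_2': 7.0}
```

With shares like these in the spec table, the high-versus-high debate becomes a 93-versus-7 comparison.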
You're making a better product, because a smaller use case actually gets smaller screen space. The next big group you need to convince is senior management. And in some ways senior management, senior PMs, senior engineers, control your destiny, because senior managers are concerned with allocating the limited resources they have to hit the goals they care about. They're maybe one step ahead in the paperclip school of PMing: they have five engineers, and they have to do everything in the world. You, on the other hand, are a motivated young PM. You want to demonstrate your work, have a large impact, and you want senior management to prioritize your area. How do you go about doing that? And by the way, I don't claim to have solved this; I'm going through it myself. But I'll share something I've learned over the course of doing it. I'm actually putting together the 2018 strategy for next year right now, and I was talking to my management coach: hey, I have this big review coming up, the entire staff team is going to be there, how do I convince them? He said two things. Number one, he said, build a stakeholder diagram. This is moving a little bit away from data, but build a stakeholder diagram. Anyone coming here from sales knows exactly what I'm talking about. If you're doing enterprise sales, you don't sell in a haphazard manner. You know: if I call company X, who's the decision maker? Who has the dollars in their pocket? Who will influence this? You make a fancy grid of what motivates which people. If you're not in sales, you should talk to salespeople about this. But the other thing he asked was: are you thinking about this from an inductive perspective or a deductive perspective? I'll explain what I mean by that. Let's start with deductive.
Deductive, essentially, is a bottom-up way of thinking. Typically, as an individual contributor or an individual PM, you're operating in your own world. In my case, I'm operating in this analytics world, and I'm saying people should care about analytics, they should give me more engineers, or whatever. And then somehow I try to link analytics upwards, to a goal that my manager's boss's boss is thinking about. That's the very deductive, bottom-up approach. But for convincing people, what typically works way better is an inductive approach, where you don't try to bring the execs down to your level; rather, you move up and start operating at their level, the top-down level. You start operating at the company vision and mission level, and then, wherever necessary or wherever there are questions, you bring them down and give more detail as needed. So how does this inductive approach work, and how does it manifest itself? For senior managers, the few things you care about are that they know you're a good PM, that you're doing a really good job of building impactful features. So the first thing they care about is your performance as a person. At Dropbox, we have this OKR system. For those who don't know, OKR stands for Objectives and Key Results; it's essentially a different way of phrasing goals for your team. One thing I've noticed as a PM is that not all teams set goals on a regular cadence. In fact, at my previous company before Dropbox, we never set goals. We kind of knew what was going on, and it was just a continuous sprint of building things. But we never sat down and said: here are the three most important things, here's the quantified progress we will make over this period, and let's evaluate and do a retrospective.
But that is really, really powerful, and not just to focus the team. If you set OKRs in the presence of a senior exec and say, here are the four metrics our area is aligned to, do you approve? And they say okay, yes, this makes sense. Then you go back for the quarter and work towards them. At Dropbox, we rate OKRs with a wizard, which means you achieved extraordinary results; a green heart, which means you've done a good job; a neutral face, which is kind of, I don't know what happened; and a red X, which means you clearly failed. So you go back, work on your OKRs, and now, all of a sudden, you have an aligned, well-defined, quantified framework to show senior execs that you've made progress as a PM in furthering the product in ways that were agreed upon beforehand. And it's not just about aligning with the execs up front. Once you've done this, you have to do a little bit of drum beating: you have to get in front of them and say, look, we as a team have accomplished this. Never say "I have accomplished this"; always take your team to the senior execs and show, in a quantified sense: here were the metrics we agreed on, and here's why they make sense. So that's the senior execs evaluating your performance. What happens if you need more people? That's where this inductive-deductive approach comes in, where you have to somehow draw a thick dotted line from what you're doing to what your manager, or in a smaller company's case maybe your CEO, cares about. In my case, I was looking at experimentation, and the objective was: we need more people and we need to prioritize this. And what the company cared about was accomplishing these large company goals.
Nobody says "we need our experimentation to be 5% faster," especially at a company like Dropbox. So the biggest challenge was drawing that thick line: how much money and time can we save if we invest in this area, and how does that align to the company goals? It goes back to that manual querying metric, where now I can quantify how much time I can save, how many people are freed up, how many more experiments they can run. Now you have hard, quantifiable data: if we allocate this much resource, we will get this ROI. And if you frame that ROI in terms of the company goal, the senior exec has no real way to say no; it becomes, okay, yeah, we'll try to allocate this in some way. So really think about how you can present your performance in a quantitative way and get approval and recognition for it in a quantifiable way. And then, second, think about how you can relate your area to the company goal in a very direct and quantifiable way. Both of those will give you a better approach to senior management. And by the way, I'm not claiming these are holy grails of approaches, right? These are things I've seen work, and I'm just sharing some ideas with you. QA is another important piece of the puzzle. QA cares about shipping bug-free products; they don't want products to break. And QA folks squirm when a product breaks, because it feels like they haven't done their job. But the biggest struggle between QA and PMs is the time aspect, because no product you release will be 100% tested when you release it. Most products are so complex that they have a crazy set of edge cases, and as you build these products more and more, the complexity and the possible cases increase exponentially.
There's no way for any testing framework to cover everything, and you, on the other hand, have limited time. So how do you handle this? The main thing for managing the QA dynamic is to do an 80-20: make sure you've covered the major ground, while accepting that weird edge cases can always come up. In terms of getting QA on board, a lot of what I already said applies. The same philosophy as with engineers, showing usage, showing what happened to a product they tested, all of this applies, because typically, at least in my case, my team has always been my set of engineers plus a QA or two sitting together. But when it comes to enabling QA in a data-driven 80-20 manner, here are two examples. One: QA is, in some sense, also looking for when the product breaks, right? The classic case is stretching a product to its limits, adding tons of things you might not really expect, where QA says, okay, we've tested adding 10,000 widgets to this page to see whether it breaks or not. This was one example where I was working on a results dashboard, and the question was: how many widgets can people add before it finally breaks? We were replacing an older version of this product, and this was a really, really manual process. The QA had to go in and add individual widgets to the dashboard, the ceiling is sky high, and from an engineering standpoint you don't know when it's going to break. So the way to quantify or capture this is to look at older data and classify percentiles.
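That percentile classification can be sketched roughly like this. All the numbers are invented, and nearest-rank is just one of several common percentile definitions:

```python
# Hypothetical historical data: widget count per existing dashboard,
# pulled from the legacy tool's usage logs.
widget_counts = [3, 5, 8, 12, 12, 15, 20, 24, 30, 45, 60, 120]

def percentile(data, p):
    """p-th percentile by the nearest-rank method (0 < p <= 100)."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# These become the QA test bounds: test up to the 99.9th percentile
# and explicitly accept the residual risk beyond it.
for p in (50, 95, 99.9):
    print(f"p{p}: {percentile(widget_counts, p)} widgets")
```

The point is that "we tested up to the 99.9th percentile of real usage" is a defensible stopping rule, where "we tested a thousand widgets" is not.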
So what I ended up doing is going back to historical data and saying: the 50th percentile is this many widgets, the 95th percentile is this many, the 99.9th percentile is this many. And now the QA knows exactly where to stop testing. And if my Fitbit says I just hit 10,000 steps even though I've not done anything, maybe Fitbit needs to improve its algorithm. But coming back to this point about percentiles: previously, QA would go and say, we tested a thousand widgets and the product didn't break, but that doesn't really mean anything, especially to a senior manager looking for confidence that the product has been well tested. Versus saying: we've tested confidently that the 99th-percentile use case is met, and the last percentile we've excluded for convenience, which is totally fine. Your feature will get shipped and you won't get questions asked. So that's one way you can use data to drive an 80-20 in the product. Checking on time. The other example where data showed up for me is Selenium. Does anyone know what Selenium is? Some people know, okay. Selenium is a browser automation framework. Really, what it does is: where you as a user would click things in the browser, you can configure Selenium to click them for you, and you set up front-end tests to make sure things aren't broken. Selenium is interesting because it runs a lot of these automated tests for you, and especially in complicated products with a lot of crazy workflows where things can break, a QA generally looks to set up a bunch of them. You might have to set up 50 different tests, and setting these up takes time.
So figuring out which tests to set up, and which ones are of the highest value, is really hard without data on the use cases. When we were looking to build Selenium coverage for our cases, we started looking at workflows. It's almost like what's called a Sankey diagram: people do this event, then this event, then this event, and you try to see what the percentage workflows are for individual use cases. We actually found that three major tests would cover about 90% of our usage. Again, without a quantified framework, a QA would have to go and say, maybe we should add these five tests, and they would take 15 hours. Whereas now, with the data, you can go in and say: here's the usage data, and if we add three tests, we've covered our bases sufficiently well. So that's one way in which we looked at Selenium specifically: using data and usage patterns, the QA is confident that, as a PM, I'm representing the right set of use cases and we've covered our bases. Basically, in all of these things, you're just trying to make sure these people have confidence that they're doing their jobs well, right? A QA wants to make sure the product doesn't break, or breaks far more infrequently than it otherwise would, so they don't get into trouble. It's all appealing, somewhere on Maslow's hierarchy, to people's inner sense of "I'm doing the right thing." The final piece, which is a really short one, is: how do you set yourself up for success here? I might have covered a few of these already, and this is really a random agglomeration of things that came to mind. The first thing: if you're a PM, the way people think about data is always data at delivery, right?
I built a feature, I've delivered it or launched it, and now what does the data look like? But what I'm trying to encourage everyone here to do is think about how data can be leveraged at every single step. As a PM, you go through the cycle of generating ideas, building a roadmap, writing specifications, launching the product, getting feedback from users, and then going back to ideas. Some of the ways I talked about: in the specification step, figure out what logging you need; in the roadmap step, have metrics and figure out what they mean. Really important. The next important thing, let me see if I can click this, yeah: how do you actually do this? You have to log all this data. I told you, usage for this, usage for that. How do you log all the data you need? It's kind of hard, and my mantra for this has been: work backwards from the questions. Every time I wrote a product spec, I encouraged my engineers to write a logging spec that always starts from the questions. What questions do we expect to answer for this feature? Obviously usage is one, metrics are another. But think of this not just from a usage perspective; also think about what you would show the lead engineer, what you would show the designer. Think a little bit deeper, because the cost of adding logging is really, really low. If you don't add the logging, you won't have the data. If you add it and end up not using it, that's okay. So people tend to add logging first and then try to answer questions with it, but try to work backwards instead. And this guy on the slide seems pretty happy walking backwards.
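A logging spec written backwards from the questions might look like this sketch. Every question, event name, and field here is invented for illustration:

```python
# Hypothetical logging spec: start from the questions you'll be asked,
# then derive the events and fields engineering has to instrument.
LOGGING_SPEC = {
    "How many people adopt the feature weekly?": {
        "event": "report_widget_added",
        "fields": ["user_id", "team_id", "timestamp"],
    },
    "Which widget types do users actually add?": {
        "event": "report_widget_added",
        "fields": ["widget_type"],
    },
    "Do widget users stop running manual queries?": {
        "event": "manual_query_run",
        "fields": ["user_id", "timestamp"],
    },
}

def required_fields(spec):
    """Union of fields per event: what actually needs to be logged."""
    out = {}
    for answer in spec.values():
        out.setdefault(answer["event"], set()).update(answer["fields"])
    return out

print(required_fields(LOGGING_SPEC))
```

The question-first structure also doubles as documentation: six months later, anyone can see why each field exists.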
And then one more point I wanted to make here: it's not all about just adding logging. In many cases, you have to go the extra step of setting up the data pipeline to make sure the metric you want is captured in the right way. The final thing is: standardize. Someone was asking me, there's so much data, what do you track on your dashboard? Typically when you log, and especially in my case, you tend to over-log things, and looking at all these numbers you sometimes lose your sanity: what number should I look at? What's going on? My rule of thumb has been to define a set of metrics that we care about for an area, and always make sure those are approved by senior management, so they know exactly how we're operating, but also make them very visible to my team, so we know exactly what's happening. So: consistency across multiple stakeholders. It's sometimes hard to enforce, but with enough wiggle room you can at least align on a few metrics. The search quality team, for example, was pretty easily able to align on click-through rate for their product. So the final piece is a simple thing, and this is kind of my conclusion. If you look at influencing people, most of the literature circles around soft skills, right? You have to build relationships with people. You have to make sure they trust you. You have to align on the vision and the goal. You have to make sure people say, we're working towards the same thing, let's go together, kind of like an Uber Pool where you ride with your teammate to the same destination. And you have to build empathy for other people's problems. But a lot of that is a very soft-skills way of doing things.
And I think to really influence people in an effective manner, it's a combination of soft skills plus hard data. A couple of rules of thumb that I talked about. First, look to quantify most things, because data is hard to deny; when it's obvious and in front of your face, opinion matters less. Second, when you're looking at data, it can be a wild west; you don't know what's going on. Look to work backwards: what question is the senior manager going to ask me? How can I reassure this engineer who's a little antsy that we're not building the right thing? Work back from those questions and figure out what matters and what doesn't. And then, just to save yourself the trouble: once people know you're data-driven, they start asking you all kinds of questions, and you'll spend maybe 60% of your time writing SQL queries instead of being the PM you wanted to be or dreamed of being. So look to standardize your data, because that'll save you a bunch of time. But yeah, that's it. As a PM, I try to become a better influencer of the people around me, and for most people this is a constant learning experience, right? You don't do PM for two months and become amazing at this, and these things are very situation-based. One thing I've personally found helpful, because I think in terms of numbers, is that adding hard data to the soft-skills piece generally helps me influence things better. But that's pretty much what I wanted to cover here today. Any questions? Yes? Yeah, that's an interesting question. I would say it varies along two dimensions. One is whether you're in enterprise products versus consumer products, and the other axis of that two-by-two grid is the scope of the product itself.
Whether it's a large product that's going to affect a lot of people, or a really niche product that's going to affect a small number of people. Typically in enterprise products, because there is a finite number of customers, these large companies you're selling products to on really expensive contracts, you tend to listen to people's feedback. You talk to people; you're less reliant on data and more reliant on people's feeling about the product, especially if you're selling million-dollar products. Whereas if you're driving a consumer product, you have millions of users and you can't go and talk to individual users; you just look at the data and ask, what is the click-through rate? It's all numbers at the margin, right? So for instance, at Uber, when they're making pricing decisions, I'm sure it's completely data-driven. Whereas even Dropbox, at our enterprise scale, it's always about building a relationship and keeping that relationship going. The other dimension I mentioned was more people using something versus fewer. This also relates to top-down decision-making: if you're making broad, priority-type strokes, you'll try to be more data-driven, because you want quantitative estimates of how things will look. But if you've identified a small niche, you'll tend to do more qualitative research, because at that point you want to ensure something works well. So I'll give you an example from my own product experience.
When we were building this experimentation platform, the decision to prioritize the platform itself was very quantitative: how much money we'd save, how much time we'd save, and so on. But once we had focused on the group of users we needed to serve, it's very hard to look at metrics when you have 10 people, 20 people, even 100. At that point, you start going and talking to people. So it's a combination of those two. What do you do qualitatively, and how do you do it? Yeah, so we have a user research team, and they do qualitative research and also a little bit of quantitative research, but there's a significant portion of quantitative research that you'd pull from past data, from third-party data sources, and so on. So the qualitative piece comes purely from the interviews user research gets us, but the quantitative piece can come from multiple sources. Any other questions? Yes? Sure. What happens when you build a product and your competition launches it one day before you, and suddenly you have to pivot? Or you have a roadmap and then marketing decides they want to go into a completely different sector that you have no data on; so far it represents 30% of your market and they want to make it 50%? Yeah. So I guess there are a few questions here. The broader picture you're getting at is: what happens when sudden things happen and you really don't know what's going on? I've never been in a situation where competition has launched something; I fear the day competition launches something and I'm asked to pivot everything.
But if it matters enough, at least based on my understanding of how things work in those scenarios, it's a very top-down decision, right? We had this plan, competition has launched something that caught us off guard, and now your senior managers pull you in and say: halt everything, do this. An urgent competitive response is never going to be a bottom-up proposal; it's always going to be top-down. And generally this is true: you do all this data work, write all these queries, create beautiful roadmaps, but the fact of the matter is, sometimes the CEO wants something and you've got to do it, right? I mean, they're paying your salary. It's hard and unavoidable. Some of these hard decisions or hard pivots have never happened to me, so I can't speak with the expertise of having gone through them. But I would imagine it's never you as an IC making the decision to completely pivot the product; it comes from the top, like the head of marketing talks to the head of product and says we need to pivot, and at that point it's a little bit out of your hands. Yeah. So at that point, if you're talking about a scenario where, and again, I've not been in this situation, so I'm only speculating about how I might act: if senior management wants to pivot and thinks it's going to be a really expensive operation, but a junior engineer has a really simple solution, that would be a dream scenario. I would carry my small troop into the CEO's office and say: hey, this is giving you sleepless nights, and we have a really good proposal.
So at that point, it would be about presenting the solution, talking about why it's simpler and how much time it would save them. The point I would anchor on is the cost aspect, because the opportunity aspect at that point is settled: you know what the pivot will get you. The open question is how much time it will take. The CEO is probably thinking, oh my god, this is going to take us another three months and set us back, because competition will get further ahead. But now you're coming with this magic wand and saying: we can build it within a week, because we have hired genius engineers. Cool, yes? It sounds like you've developed your roadmap based on your own need to incorporate data, and maybe you had some previous examples you worked on. Do you have any good references for a roadmap that really does a good job of incorporating data? Yeah, so the problem with that, well, I have good examples; the problem is I cannot share them with you, because all the roadmaps are confidential material. No company's roadmap will be publicly available, because every company has competition. Here's my advice, though. I don't think there is one right way of doing this. And as I said, I see a lot of roadmaps being done where people say it's quantifiable, but it's all done based on a feeling: this kind of feels important, that kind of feels less important; there's a gray area you haze around in. In my opinion, taking a quantified approach, or even looking to take one, is more than 50% of the work. So I might not have a solid example I can point to on the internet and say, this is a great example of a roadmap.
The way I would start addressing this question is to figure out which companies are really, really data-driven. The companies that tend to be most data-driven are, as I mentioned earlier, consumer-focused companies, and companies where ads or pricing are a key point of strategy. Ads are all at the margin; you have to price your ads perfectly, so it's all data-driven, and all the decisions are very, very quantifiable. So if anyone wants to know more about how some of these things work, my recommendation would be to go talk to some of those PMs, grab a coffee with them, and ask: hey, you seem to work in a really quant-heavy, data-driven area, how do you go about doing this? And glean more insights that way. This is just one example of how I do it. Yeah.