Hey everyone, how's it going? Let me see if I got this right, left, right, okay. I'll just quickly introduce myself, but before that, I'm hoping to keep this talk as conversational as possible. It's about user engagement. It's about getting user feedback. So if anybody has any questions along the way, feel free to just raise your hand and ask the question. It's just better that way, rather than forgetting the question towards the end. Just quickly, my background is I've been in product for a little over six years now. I started out at a company in DC called MicroStrategy. It's a business intelligence company, a B2B company. I then moved to Google, where I worked for a little over three years, mostly on Android and a couple of 20% projects here and there. For the last year and a half, I've been working as a product lead at a company called Premise. It's a small company, 1,000th the size of Google. Its goal is to create a platform so that we can figure out ground truth data from parts of the world where data is really hard to come by. So think about: if the Zika virus is spreading in Colombia, how can we help the government of Colombia figure out where the Zika virus is actually spreading, and how they can improve their outcomes and mobilize their operations teams, stuff like that. So I've been working there for the last year and a half. So as you can see, I've worked at a midsize B2B company, then at a very consumer-oriented part of Google, and now at a company which is kind of in between: B2B, but at the same time we have consumers as well, because it's a marketplace model. I'm more than happy to talk about that in detail after the talk if you want me to. So that's what got me really interested in this topic.
It's something which I'm really passionate about, which is just: how do you get to know your users? Because as a product manager, whenever you're working on a product, however small or big it is, it's really important to know whom you're building for, right? A good government, for instance, always knows who its citizens are. They get constant census data. They send out surveys all the time. In a similar way, at a much smaller level, it's really important for your product to know who exactly you're building for, to be completely in that person's shoes, to know what their pain points are. It's a really broad topic to talk about, so what I wanted to do was divide it into three sections. The first part being: why should you get to know your users? The second part being: when in the product development cycle is it really crucial and important for you to keep a pulse on who the users are and what their pain points are? And the third part, which is very significant, is: how do you do that? What are the tools that you can use in order to really get to know who your users are? Another reason why I wanted to bring up the why part of this is because it's fairly obvious to us. I think we've all heard many times that you need to know who your users are, but it's surprising how often we forget about that in the day-to-day of our jobs. And it's mostly because, while we're creating technology products, it's very common to not be interacting with the user and not be seeing the user on a day-to-day basis. So you just get caught up with an idea that you have and work towards that idea, while forgetting about whether that idea is actually going to be useful to the person you're building it for. So it's very important to keep that in mind.
So to think about why we need to do this, I thought it might be useful to talk about a couple of examples of companies that have done this well, and at the same time companies that have maybe lost the plot a little bit, and what happened as a result. So five quick examples without boring you too much. And if it gets boring, raise your hand. The first example is Virgin Atlantic. Just raise your hands: how many of us have noticed how funky and cool Virgin Atlantic has become over the last couple of years, how they've revamped things, right? It didn't happen by accident. There's a really fascinating case study about this. There's a TinyURL link at the bottom of the slide, if you're interested in seeing the case study. But essentially, a couple of years ago, at a time when revenues for all major airlines were declining, Virgin Atlantic started this really interesting initiative where they tried to understand: who are the people flying our airline? How can we not be a one-size-fits-all airline experience, but instead cater to a very specific audience? So they ran a bunch of surveys, they got data about all of the different people flying their airline, and they figured out that the majority of people flying Virgin Atlantic were small business owners, entrepreneurs, and people who worked in small businesses. So then once they had this focus, once they knew who the user segments were, they dove into that and understood who these people are, what their pain points are, what their needs are, and created an airline experience that was focused on just those users. That led to greater adoption by these users, greater engagement, and overall better revenue at a time when other airlines were not doing so well. So none of that happened by accident. It was a very deliberate effort to understand who their users were, and it worked out really well for them.
Switching gears a little bit, going to a much smaller company, I was fascinated by reading up about the origin story of DoorDash. Does anyone here work at DoorDash? No? Okay. Sorry. I just assumed everyone knew about DoorDash. It's a company that provides operations and logistics for merchants, which are mostly restaurants, as well as customers, who are mostly people who order from these restaurants. And what is really interesting about their origin story is that they had an idea about how they could improve merchant operations for restaurants within the restaurant itself. They were just a bunch of developers, and they quickly created a prototype application, showed it to some of these restaurant owners, and asked them if they would use it. And the reaction that they got was: yeah, maybe, but not really. It's not super useful for us. So they spoke to a couple more restaurant owners and realized that the need they really had wasn't so much within the restaurant itself, but in managing the logistics of taking the product from point A to point B, that is, to the customer. And so they pivoted and changed what their value proposition was. Accordingly, they spoke to over 250 customers in the course of a month, and they figured out what their strategy was going to be, who their customer segment was supposed to be, and how they could really focus on those pain points. And that is a really interesting example to me, because that's a great place to change your company strategy: just when you're starting out, rather than starting out based on an idea that you have, going the distance, building something, and then having to pivot later on. So it's failing quickly and failing early. That's something which I thought was really interesting. The third example is of a company that was already successful in the U.S. and already successful in Europe, and they were entering the Indian market.
And I remember reading about this, maybe about a year ago. What they did was really interesting. From a payments perspective, Uber thrives on the fact that everything is automated, right? Every single time you complete a ride, Uber gets a cut of that ride, and everything happens automatically. However, in a country like India and a couple of other emerging markets, most people do not have credit cards or debit cards. They live in a very cash-based economy. And going on a taxi ride and at the end of it having your credit card be available to a big tech company is not something that a lot of people are comfortable with. So they had to compromise on automation and increase operations by allowing cash-based payments. And they did this. They made this a part of their product. And this was really interesting to me, because nowhere else in the world were they doing this. But just to optimize for Uber's mission, which is to provide reliable transportation to everyone, they decided to compromise on automation, which is, again, very important to them, because the customer's needs come first. From the non-tech space, Febreze was an interesting case study. This case study is from about 15 to 20 years ago, when Procter & Gamble did some really great research from a chemical technology standpoint, and they came up with this concoction where you could just spray it and it neutralizes the smell around you. So if your home really stinks for some reason, you can just spray this around and it neutralizes the smell for you. They thought this was revolutionary technology. They launched it in the market. They thought sales were going to go through the roof, but they did not. It didn't do well at all. And that's when they started talking to their customers, and they realized that most people don't really think that their home stinks. They're just used to that smell.
That's just the smell of their house. And so if you provide a solution to people which they don't really see the need for, it's not going to do very well. However, they saw a lot of people using the Febreze spray right after they finished cleaning, or right after somebody smoked in the area, just because they felt like that's the time, that's the instant, when they needed to use this spray. So from the product side, what they did was add a scent to the product, just to give users a sense of a reward as soon as they used the spray. Because of that, their marketing team then started branding this as something which you don't have to use all the time in the house, but only in instances when you've just finished cleaning your floor, or you feel like the air in your house needs to be freshened up a little bit. And that's when Febreze's sales started increasing even more. The last example is Google Buzz. Does anybody remember this product? Did any of you use this product? All right. It was a short-lived product, so I'd be surprised if people really used it a lot. So the history of Google Buzz is that it was, I think, Google's first foray into the social space, and it was interesting because they were working on it for a really long time and it was heavily dogfooded within the company. So a lot of internal employees were using this for a while, and nobody came up with any objections to it, just tweaking it a little bit, improving the product here and there. And then they launched the product, and it did really, really badly. So badly that the engineers were called back over the weekend to improve it and fix certain things that were wrong with it. What was wrong with it was that the entire social graph was based on the Google contacts that people had.
So as soon as this product was launched, for anybody who started using it and signed into Google Buzz, every single one of their contacts would get a message asking them to join and become their friend, or whatever metaphor they had. And this is really bad, because you don't want people from your past to all get these messages. But they didn't realize this, and for some reason their sample of internal dogfooders never brought this up. This was a huge privacy concern for a lot of people. So this was a good example, in my mind, of how you should be getting user feedback, talking to your users, and doing that all through the product development process, but it's very important to know what your sample size is and what type of people you're talking to as well, and to make sure that they are a good representation of who your users actually are. Again, that's why you need to know what your customer segment is before you get into any of this. The next part is about when. Just before I get into this, do you have any questions so far about what I've talked about? Great. So at what point in the product development process should you gather knowledge about the customer or user? Does anyone have any guesses about this? As soon as you can. Go ahead. All right, after launch. Okay. Okay. Yes. Before you build it. Yeah, that's right. Yeah, exactly. So you're all right. You should always be doing this. It's not a point-in-time activity. It's not something which you do at a certain point in the product development process and then forget about. It's something which lasts across the product development process, and it's a never-ending task. And I'll talk about why that is the case. So before I talk about why that's the case, I wanted to quickly talk about what the product development process looks like. It's an entirely different discussion in and of itself, so I'm not going to spend too much time on it.
And there are lots of good resources out there which talk about that. This is one from IDEO, where they've divided the product development process into these three segments. There's another one from the Stanford d.school which I think is really good. There's another one, this double diamond diagram from the British Design Council, that also talks about a similar thing. So there are lots of different frameworks out there about the product development process. And the reason I'm talking about this is because, as we think about when we need to solve for this, we need to know how we can break down the product development process first. This is a really interesting diagram which is quite complex looking, but I would urge you to check it out. It's a Medium article by a person named Dan Nestler. In my mind, it takes all of those different frameworks and brings them together into one big framework, and it's a really interesting read for all of you. But I wanted to simplify this for the sake of this talk into a simple diagram that I created, which is: you can think of the entire product development process as having four different stages, where you discover what the pain points are and who your customer is going to be, and you define your product accordingly. Then you start the development process. Then you launch your product, and then you learn about what went wrong and what went right, and you iterate upon it. So it's a never-ending process of constantly building and iterating, right? At any point in time, once your product has launched, a lot of you who've been through the product iteration process would know this: you're never in just one place at one point of time. You're never just launching. You're never just developing. You might have feature A that's in the discovery and definition phase, feature B which is being developed, and feature C that is being launched.
For each one of them, the kind of feedback that you need will be very different, depending on the kind of questions that you need to ask for that phase. So the first question is: what are those questions that you need to ask, and how do those questions differ based upon which phase of the product development process you are in? So let's start with the discover and define phase. This phase is purely exploratory. You're starting out. You've got this idea in your mind and you're thinking: wait, who are our core customer segments? Who are these people that I'm trying to build for? Should I be focused on a specific group of people? What are their pain points? What am I trying to solve for? It's not just an idea in my head which I want to bring to reality. It's a product I'm building for someone, for these people, and what are their pain points? What features would alleviate those pain points? And this is when you can start thinking about maybe 10 or 20 different ideas to solve that problem. And then this is the phase where you can go ahead and cheaply test whether this is even of any interest to those customer segments. Again, going back to the DoorDash example: build 10 paper prototypes, show them to your end customers, and see if they're of any interest to them, before you involve engineering and other resources for the development phase, right? This is where you can fail cheaply and fail quickly. So it's a very crucial part of the development process. In the next phase, once things have been developed, you'd better have an MVP by this stage, a minimum viable product, which consists of maybe 10% of all of the different features that you thought about before, but you're pretty sure that these are the features that your customer wants. At this point, you kind of know who your customer segments are and what their pain points are.
But this is the stage where you need to think about what product refinements can be made before you actually go out there and launch. Is there a way for me to get, if it's a website, a mock of that website, or if it's an Android application, an APK file of it, which I can just take to any of my potential customers and see if they're interested in it? Because there's a huge difference between having an idea and showing it on paper to users versus asking them to use it for an extended period of time. Longitudinal studies are extremely useful. Asking people to use products for a couple of days and seeing what the issues are is extremely revealing. So this is the phase when you're supposed to figure out what those small improvements are, which could be huge problems once you launch, and find them out early. Once you launch, the next questions become: all right, we've launched now. People are using this. Did we meet our goals? Were our hypotheses correct? We thought this was going to solve problem A. Is it actually doing that, or is it not? And where did we fail? And if we failed, then why did we fail? Those are the kinds of questions that you ask yourself during the learning phase. And once you learn from that, you go back to the discover and define phase and you start thinking all over again. So it's a never-ending process. There are questions for each one of these phases, and they're very different questions, depending on the phase that feature is in at that point of time. You might have noticed a theme going on amongst these questions. They go from being very open-ended in the beginning to being very specific. You can almost think of it as the kind of data that you get back as answers to these questions going from very qualitative to very quantitative as you go further along in this process.
It makes sense, because you're going from having this hazy, nebulous idea in the beginning to having this very concrete product at the end. So the kind of feedback that you get needs to echo that exact same thing. So what I wanted to talk about next is: sure, we have these different phases, we have these different questions that we need to ask for all these different phases, but how do you get answers to these questions? What are the different tools that you can use in order to get answers to these questions? This is by no means exhaustive. It's just that, based on past experience and on talking to other product managers, it felt like these are some tools that have been effective in the past. But it absolutely depends upon the context of your product or your feature. Is your audience nearby? Is your audience in a different country? The kind of tool that you choose completely depends upon that, but I'll quickly talk about the different kinds of tools that you can use. So during the discover and define phase, for instance, you can use tools like Google Consumer Surveys, any tool that allows you to get open-ended answers. Consumer Surveys has been super useful for me in the past when you want just quick answers for things. You have, let's say, a couple of icons and you want to see which one works best. Just throw it out there. You can ask three questions, and you get feedback within the next day or two about which one works best. Maybe even use Mechanical Turk. Just ask people from around the world what their preferences are, or whether a certain idea that you have is actually going to be useful for them or not. You can even do guerrilla marketing. I couldn't find a better icon for that. But essentially that's one of my favorites, because at this stage you want to be going out and talking to people as much as you can.
You want to be able to go to them and just ask them questions and get their instant feedback, and see the expressions on their faces when you tell them about this amazing idea that you had, and see them grimace and not react the way you wanted them to react. And that's very powerful. Internally, it's good to talk to the sales team at all points of time, especially in B2B companies. Your salespeople are the ones who are out there. They're talking to customers all the time. They really get what the product is all about and what their customers actually want from it. So they can, in a way, see the future. Sometimes sales teams literally talk to customers about the future without talking to the product team first. So it's good to be on sales calls. It's good to talk to them and understand what the customers really want. For B2C companies, growth teams or operations teams, depending on which company you're in, are very useful to talk to. At the company that I work at right now, Premise, we have a growth team that's really great. They literally have their ears on the ground, and they're a source of truth about who our users are, what their pain points are, and why they're reacting the way they are. Just for context, almost 95% of our users are non-U.S. based. They're all based in Southeast Asia. So we can't just take a flight and keep talking to them. So we have multiple different channels that we use in order to get feedback from them. It's a challenging experience, but our growth team has been a huge champion of that effort. The next phase is the development and launch phase. So what do you use over here? There are, again, different tools here. You can use tools like UserTesting. At this phase, remember, you already have something. You have some kind of a concept. You have a beta version of your website or your application or whatever it is. You can use tools like UserTesting.
You can get participants to actually see screens, click through, and give their feedback immediately. Again, guerrilla marketing: very useful. Taking a prototype on your phone and going to people and seeing how they react to it is extremely valuable. I feel like this is the most old-school method of getting instant feedback. At the same time, it's as personal as you can get, and it's extremely, extremely useful. I added Lookback in here because that's an app which you can check out if you've not heard about it. Essentially it allows you to do user testing on the phone, where your user could be in any part of the world, but they use Lookback and you can literally see where they've been clicking, and you can even see the expressions on their faces. That's something which we've used at Premise for users in Vietnam. We did this recently, where we asked some of our expert users to go to a cafe and use this app with our prototype on their phones. And you could see the expressions on their faces as they were using the app, and you could even see how they were using it. It's the closest that we could get to literally being there with our users. It's called Lookback. And of course, there's internal dogfooding, which is an interesting resource, but again, if Google Buzz was anything to learn from, that shouldn't be your only resource, unless your customers literally are the employees of your company. Sorry, okay. So a certain type of customer profile is using your system, and you're not sure you're looking at the full breadth of who your customers are; the people you're talking to just happen to be a very small slice. How do you know, or how do you compensate for that? Interesting. Okay. So the question was, was it specifically about the conversations with the sales team, or more broadly? Yeah. Yeah. Right. Right.
So yeah, if I understood the question correctly, you're talking about how you know what your customer segment exactly is, and not being biased by the questions that you're asking them. Right. It could just be that there's a huge pool of people, and it's just where your product is right now, and it happens to attract a very specific segment. Yeah. And then you're like, oh, I've got a large sample size, and you start asking questions based on that. Right. You've only got a very limited frame for that. Yeah. That's a great question. That's a great question. So just to rephrase: the question is, how do you know whether you've actually reached the customer segment that you should be focusing on, and how do you know that you're not ignoring a huge chunk of other people that could be potential customers but you just haven't met them yet? And the answer to that is: it's highly contextual. It's based upon how much time you have. So for instance, the company that I work at right now is a startup. We don't have all the time in the world, so we need to double down on something really quickly. Velocity is the answer there. At a bigger company like Google, for instance, Gmail is a classic example, where they kept the product in beta for about five to six years, because they had the luxury of time. So I guess the answer to that question is more about just talking to as many customers as you possibly can, getting as many signals as you possibly can. It's important to understand whether you're fixing pain points, but it's also important to see if there's actual business and revenue that you can get from those customers. So you need to balance all of those things out. But again, it's a never-ending process. So sure, you can start with customer base A, but then you can move to customer base B. And that's always the role of the sales team. So it's good to talk to the sales team, because they kind of have a good pulse on where the money is.
Most of the money is going to come from here and not here. So start building for those people instead. So yeah. Yeah. So the question is: when you're talking to users, some users might say that they really like a feature, but that's a small minority of them, a small fraction of those users, while a vast majority of users might not actually want it. How do you make sure you're getting to the vast majority? Yeah, it's a good question. So it's important to have a large sample size when you're conducting these initial exploratory surveys. And the reason why I like Google Consumer Surveys, for instance, is because it won't give you an answer until it's sure about the confidence that it has in its responses. So you've asked three simple questions, let's say, to a certain user base. It won't give you an answer until it has at least about 1,500 data points and a good amount of confidence in its responses. So when you're trying to figure out whether a certain feature is something that maybe just 10% of users like, and to make sure you're not biasing towards that, you should just look at the data ultimately. You should see what the data is saying, what the majority is saying, rather than building for the small minority that really, really likes a certain feature. Any other questions about this phase? Yeah, that's a great question. So the question was about guerrilla testing, and the fact that it's not a huge sample size: how do you make data-driven decisions based on guerrilla testing? I think the most useful thing for me from guerrilla testing has been to just build empathy for the end customer.
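As an aside on that 1,500-data-point figure: it lines up with the textbook margin-of-error calculation for a survey proportion at 95% confidence. A minimal sketch of that generic statistics (this is not Google Consumer Surveys' actual internal threshold, just the standard formula):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion p with n responses.

    p = 0.5 is the worst case (widest interval); z = 1.96 is the
    two-sided 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(moe: float, p: float = 0.5, z: float = 1.96) -> int:
    """Smallest n whose margin of error is at or below moe."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

# With ~1,500 responses, even a 50/50 split is pinned down to ~±2.5 points.
print(round(margin_of_error(1500) * 100, 1))  # 2.5
print(required_sample(0.025))                 # 1537
```

So a survey that waits for roughly 1,500 responses is, in effect, waiting until a worst-case answer is known to within about ±2.5 percentage points.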
When you're talking to them, you just really understand what their pain points are. And you'll notice how often you go off-script: you go to a customer to ask them 10 questions, but then there'll be an 11th and a 12th which come up, because they're going to talk about all their other pain points. The DoorDash example is a good one, because they had a different idea initially. They talked to 250 different people during their guerrilla testing phase, and they realized that the pain point they thought was an issue really wasn't an issue. Instead, they pivoted and moved in this other direction, and then they spoke with more customers and got a feel for it. So guerrilla testing on its own is not a good answer for getting to a concrete idea of who your customers are and how you can build for them, but it is extremely valuable for building empathy for who your customer is. And when you're building this product, this business, it's really useful to know that within this cohort of small business owners, or these restaurants run by small business owners, there are maybe 10 different sub-segments, and the best way to know about that is literally to go out there and talk to them. That's when you can realize in your mind: I know exactly what those segments are, I know exactly which one to focus on now. And then you can build on that with your surveys and other data-driven mechanisms. Any other questions about this phase? It's a good question. So the question is: empathy is good, especially with guerrilla testing, but how do you make sure that it's also meeting business requirements? I guess the broader question would be: how do you make sure that you're getting user feedback, but at the same time also make sure that your business goals are being met? And that's constantly the trade-off that you have to be making, right?
Because while this talk focuses on getting user feedback, that's just one side of the coin. The other side of the coin is literally knowing what your business goals are, what those objectives are, what you want to optimize for, and you have to constantly be aware of that. Are you building toothpaste for the elderly? If that's the case, then you have to keep that in mind, because you know that that's where the money is. And then when you're speaking with these users and building empathy, you're figuring out, while keeping that business goal in mind, what the sub-segments are amongst the elderly. So that's what I was getting at. Is just building empathy with customers going to give you money? It has to be a mix of both. You have to know who the users are and what their pain points are, but at the same time know exactly what the business model is and what it is, from a business perspective, that you want to optimize for, and constantly keep that in mind. That's a great point, actually. I'm going to talk about that a little bit more in a bit. Any other questions? Great. So we talked about this. The last bit is: now you've launched. You've launched this product. It's out there. How do you get more tactical? How do you get more data about what you've launched and whether it's actually successful, whether you met your goals, whether you failed? How do you know about that? This is again a subset of tools that I've used in the past and that have been extremely useful. There's Intercom, if you've not heard of it. It's basically that little chat bubble that you see on almost every website these days, something that shows up and says, hey, so-and-so is ready to talk to you. It's a great way of getting instant feedback from people about what they like and what they dislike about your product.
Zendesk is another example of how you can build support infrastructure within your application, how you can build out a great FAQ page out of the box without involving engineers too much; they just have to do an SDK integration. Talkdesk allows you to build out a call center within five minutes, which is again a great source of feedback. Amplitude is really great if you have good analytics instrumentation within your product. It gives you information about who's using your app and when they're using your product. It completely depends on how you've instrumented your analytics infrastructure, but it's a great way of getting feedback, and it's also very simple to use; you don't have to be super tech savvy. Then there's of course your data warehouse itself, BigQuery or Amazon Redshift, depending on what you're using. It's really great if you know SQL. They make it dead easy: they have a SQL editor, you can write some SQL queries, and you can literally query the entire database for any kind of information that you want. What's important to know about all of this is that you can't think about what tools you want to use after launch. You should preemptively think about this and build in this instrumentation while development is taking place, because if you do it after launch, you've already lost the plot. At Premise, for instance, when we were building this out, we integrated with Zendesk, we had Google BigQuery, and we used Amplitude extensively, but we built that analytics infrastructure in advance. We made a very conscious choice to make sure that the output we get would provide us with the kind of data we need to know whether we actually met our goals or not. And yeah, go ahead.
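To make the warehouse idea concrete, here's a minimal sketch of the kind of question you can answer with a few lines of SQL. The table, columns, and rows are all invented for illustration, and sqlite3 stands in for BigQuery or Redshift so the snippet is self-contained:

```python
import sqlite3

# Toy stand-in for a warehouse: load a few fake event rows, then answer
# a product question with plain SQL. Table and column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("u1", "signup", "ID"), ("u2", "signup", "PH"),
     ("u1", "task_completed", "ID"), ("u3", "signup", "ID")],
)

# "How many distinct users signed up, per country?"
rows = conn.execute(
    "SELECT country, COUNT(DISTINCT user_id) AS signups "
    "FROM events WHERE event = 'signup' "
    "GROUP BY country ORDER BY signups DESC"
).fetchall()
print(rows)  # [('ID', 2), ('PH', 1)]
```

The same query shape works almost unchanged in BigQuery's or Redshift's SQL editor once you point it at your real event tables.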
At a large scale, how do you address the fact that customers can be saying something, sometimes the same thing, in different ways, so that you address the most important issues? Yeah, that's a good point. So the question is, you're probably going to be getting a lot of feedback from many different customers, different kinds of feedback in different ways from different channels. How do you address that and make sure that you're solving the right pain point? The answer to that question: it's not easy. It's very overwhelming to get feedback from all of these different channels at the same time, because you have different features in different phases. So it's possible that you're getting information from Zendesk on one end, information from the sales team on the other side, and some data from Lookback from your growth team or your sales team. How do you keep everything in mind? I think the answer is that you have to learn how to sift the signal from the noise, and a good way to do that is to completely immerse yourself in the kind of feedback that you're getting. Make sure that you have a constant channel of feedback, keep a keen eye on what the business objectives were and make that your anchoring point, and just keep listening; some patterns are going to emerge, and based on those patterns you have to see what makes more sense for the business. I'll talk about exactly that in just a sec. But any other questions before I proceed about the tools you can use at this phase? As you can see, these tools get progressively more quantitative. Yeah, that's right. So, what kind of questions are you using to make sure you're staying unbiased? Yeah, that's a great point. So the question was probably for the first or the second phase, right?
So the question was, how do you make sure you're not biasing the data and you're staying unbiased? I think that needs to come from a little bit of training. You have to vet every single question that you're asking in a survey to make sure that it's not leading in any way. What I try to do is pass it by a couple of different people within the company. If you have UX researchers in the company, they are literally the best people to do this, because they have training in that field. Just check whether, in any way, the question is such that you're expecting a certain answer and trying to lead people toward a certain option that you have. There's a lot of great reading about how to remain unbiased and about survey creation. There are people who get their PhDs in creating great surveys. So it's never-ending, but at the same time, it's something to definitely keep in mind: constantly remain unbiased while asking your questions. And it's not only for surveys, right? It's also for sales calls, or when you're talking to the growth operations team. If you're talking to customers and asking them questions about a certain feature, those questions can't be biased either. You can't say stuff like, hey, here's a demo, isn't it amazing? It has to be more neutral than that: on a scale of one to 10, how would you rate this, or something like that. So that's how you would do it. So, was there another question? Okay. But basically, what you talked about is something I wanted to get to, which is that if you look at the number of tools you'll be using at all points of time, it gets extremely overwhelming, right? At this point in time, there are probably five or six different features in different phases in the product that I'm building out.
And the kind of feedback you get varies a lot, because you're getting quantitative feedback on one side, from Amplitude and BigQuery results, and qualitative data on the other side. And you also have internal feedback, right? You get people from leadership saying, hey, I just used the app and it crashed, help. And you have people from the growth team saying, hey, I wanted this feature yesterday, and users are now complaining about it. So there's a huge amount of internal feedback, and it's something to account for. Something which I really enjoy doing, one of the first things I try to do at any company and any team that I join, is to create a direct feedback channel. It should be something people are generally used to using. At Google, it could be a sheet; at my current company, it's just a Slack channel. What I try to do is have people from all levels, everybody in the company, on that feedback channel. And the idea is: forget about the amount of noise it's creating. If you have an issue with the product, any thoughts about the product, any concerns about the product, talk about it on this channel. Do not use any other channel. Don't just come and talk to me; use this channel for that. Add images, add screenshots, add videos, do whatever you can, but use this one channel. What that does is, it not only gives everyone else an idea of what the issues are with the product and how to collectively work toward fixing those, it also gives a really interesting perspective on things, because while the growth operations people are going to talk about how we can improve operations on the ground, you'll have executives talking about how a certain customer, whom we've not even signed a contract with, really, really wants X, Y, and Z feature.
But there's no way we can get that until we build features for it. So you get very different feedback depending on the kind of person giving it in this channel. And it also lends a lot of transparency to the entire process. Could you repeat that? Yeah, it's a great question. So the question is, what if you create this internal channel but you just hear crickets? There's nobody saying anything. And that happens all the time. It's very hard to get started, very hard to get a few people to speak up, right? You don't want to be that person who's creating noise. I think what's really helped, and I did this recently at Premise, was to literally recruit my own internal group of dogfooders: five or six people, maybe one person from engineering, one from growth, one from some other team, maybe the sales team, and just ask them to deliberately give feedback to start populating that channel. Then keep messaging the channel and the group every week, telling them, hey, give some more feedback, and here are the different ways in which you can provide it. Basically, having your own core team of feedback providers to artificially create some supply there is pretty useful. The other way is to make everyone cognizant of how important it is to provide feedback. You can do that during AMA sessions, or during brown bag sessions that you might have in the company. It takes two minutes to announce to the company that, hey, this is extremely important, there are no good or bad questions, so go ahead and provide any feedback or thoughts that you have, and we will take action on it. It's really important to also respond to every single piece of feedback so that people don't feel like they're just speaking to a wall.
If you want to read more about this, I'm not sure how you would access the slides, but there's a link over there with more reading about how you can effectively create this kind of internal feedback channel. It's extremely useful to do this. Yeah, I think so. Sam probably knows more about that, but I think the slides are shared with everyone, yes. Also, there will be a lot of feedback coming your way. We just talked about this. The goal is to surround yourself with the feedback, to feel like you're in the user's shoes. What tends to happen is, once you have the right instrumentation in place, you hear so much feedback that you literally know, at any given point in time, exactly how users are reacting to a feature that you built. You literally feel the elation that a user has when they use a certain feature, and you feel the issues that a user is having. As an example, there's a Slack channel which gives me constant feedback about every single Play Store rating that we get for our app. So whenever somebody leaves a Play Store rating, I get a notification on my phone, and those notifications are constantly on. It's not for the faint-hearted, I think, but if you launch a feature and see a stream of one-stars immediately because something crashed, you immediately know that something's wrong, and you can react to that instantly. So it's good to have those feedback channels while trying to maintain your sanity along the way. It's a hard balance to keep. The other thing is to develop a knack for filtering the signal from the noise. This is quite crucial, and I think it comes to a large extent from experience. You get so much noise from different people that if you start reacting to every single piece of it, you're going to have 100 features you want to build within two days, and there's no way you can do that.
You have to keep a tab on the different kinds of feedback you're getting and figure out: this is the signal that I have, these are the business objectives. I know what the business objectives are, I know what the feedback is from the users, and therefore I can prioritize accordingly. So it's extremely important to completely immerse yourself while keeping the business goals in mind. Any questions about any of this so far? Yeah. Is there a spreadsheet, or are you trading off against business goals quite a bit? Yeah, that's a good question. So the question is, how do you do that process of quantitatively figuring out what's the signal in all the noise that you're getting, right? There are different ways to do that depending on what you're most comfortable with. It definitely shouldn't all be in your head; I don't think that's really good, because you might forget things very quickly. What I've found to be the most useful is a Google Sheet, most of the time. I collaborate with, in our case, the growth team, because they're closest to the users. Over the course of a day or two after a feature is launched, you'll see some points emerging that are greater pain points for users than the rest. As soon as you see those emerging, you start noting them down and talking about how bad the pain is on a scale of one to five. Then you also think about what the business objectives are and what direction the company is trying to go in, and based on that you add another column in that sheet: this is one out of five from a business perspective, but it's a four out of five from a customer perspective, and then you make a trade-off based on that. Having a sheet like that definitely helps. Having other people who are close to the customer also have access to that sheet definitely helps.
The way I do it is, every single row is a different pain point that I'm noticing. I'll also talk about my columns, and this is getting quite tactical; I think everyone has a different way of doing this, but my columns are: what's the broader initiative that this feature is a part of, how much is the pain on a scale of one to five, and what's the business value for us on a scale of one to five, and then I make a call based on that. Ultimately, you just have to balance the trade-offs between the two. Yeah. Okay. Yeah, that's slightly different from the user feedback piece, but the question is about how you negotiate for engineering resources with a different PM. The negotiation mostly happens with eng managers and PMs, but it's mostly about showing the value: hey, this is the urgency of the need, this is the business value that we have, and this is specifically why I need these engineers, and then it's just a negotiation from there for shared resources. That's something which happens in companies which have more of a tribal culture; Google is one example of that. At the company I work at right now, that's not an issue, because it's a startup and we have very clearly defined teams, but that might not be the case if we scale. Could you repeat that? About soft skills, with regards to the negotiation, yeah. So the question was, when it comes to negotiating for resources internally, how much does that depend on your soft skills versus your hard skills? It's a bit of both, because when you're trying to show value, a large part of value is actually showing data. So in that entire list of different tools that you have, I really like the last bit, because it's pure data, and you can literally say something like, hey, we've met 60% of our goals because of feature X and we're on our way to meeting 90%; all we need is this one engineer to make that happen.
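As a sketch of that sheet logic: the rows below are hypothetical pain points, and the equal weighting of user pain against business value is an assumption for illustration, not the speaker's exact formula.

```python
# Each row mirrors a line in the prioritization sheet: the broader
# initiative, user pain (1-5), and business value (1-5). All invented.
pain_points = [
    {"issue": "onboarding too long", "initiative": "activation", "pain": 4, "business": 5},
    {"issue": "missing export button", "initiative": "retention", "pain": 2, "business": 1},
    {"issue": "app crash on upload", "initiative": "reliability", "pain": 5, "business": 4},
]

def priority(row):
    # The weighting is a judgment call; here both sides count equally.
    return row["pain"] + row["business"]

# Rank the pain points so the trade-off discussion starts from the top.
ranked = sorted(pain_points, key=priority, reverse=True)
for row in ranked:
    print(row["issue"], priority(row))
```

In practice the "score" is just a starting point for the conversation; ties at the top (like the first two rows here) are exactly where the business-versus-customer trade-off gets made by hand.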
That's a much more powerful thing to say than just, I have an inkling based on all of my conversations that this is what I want to do. So it's a bit of both. Hard data definitely helps, and being able to get that hard data definitely helps. But at the same time, soft skills help as well, because you want to come to any negotiation in a constructive way and be able to understand the other side. Everyone is coming from a good place; you're all on the same broader team. So it's a bit of both. Yeah. Sure. So the question is, using a tool like Amplitude, how do you figure out what the metrics should be and how do you instrument them in order to measure your goals? That's a good question. At the company I work at right now, Premise, about three or four months before we even launched our product, we knew what the features were going to be within the product. What we decided to do was track every single button press. It sounds a little creepy, but you'd be surprised how many companies do that. And that's because you don't know exactly what it is you'll be most interested in when a feature launches. But what you do have is the ability to see the funnel of how and when the user is dropping off. An example of that: we had a fairly long onboarding process, which was very necessary for the product. At the same time, one of the trade-offs we thought would happen was that we would lose most of our users because of the drop-off. So the Amplitude instrumentation we had there was literally: has the user seen this page, has the user tapped on terms and conditions, and so on. So we had a very detailed funnel metric available to us as a result. But you have to be able to plan for things like that.
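That onboarding funnel analysis can be sketched like this. The step names and per-user event logs are invented for illustration; a tool like Amplitude computes this aggregation for you from the instrumented events.

```python
# The onboarding steps, in the order a user should reach them.
steps = ["opened_app", "saw_terms", "accepted_terms", "finished_onboarding"]

# Fake per-user event logs, as if every screen view were instrumented.
users = {
    "u1": {"opened_app", "saw_terms", "accepted_terms", "finished_onboarding"},
    "u2": {"opened_app", "saw_terms"},
    "u3": {"opened_app", "saw_terms", "accepted_terms"},
    "u4": {"opened_app"},
}

# Count how many users reached each step, in order, to see where drop-off happens.
funnel = []
for step in steps:
    reached = sum(1 for events in users.values() if step in events)
    funnel.append((step, reached))
print(funnel)
# [('opened_app', 4), ('saw_terms', 3), ('accepted_terms', 2), ('finished_onboarding', 1)]
```

Reading the output left to right shows exactly which step is bleeding users, which is the question the detailed instrumentation was built to answer.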
If you have an MVP established, and you have a good CSM team and a good VP of CSM as well, where CSM is customer success management, and that team is really handling the product, do you really need a product manager? I see. Okay. All right. Yeah. So the question is, why do you need me if you have CSM and you have your MVP already defined? That's a good question. The broader question there comes down to how you define what a product manager really is and what their role is. And honestly, the answer is so nebulous, because if you ask 10 people, you'll get 10 different answers. The role of a product manager doesn't end with figuring out what the MVP is and what you need for customer success. It's also execution as much as planning. Coming up with an MVP requires a lot of planning, but executing on an MVP requires working cross-functionally at the same time. So, depending on the company, if that is something which is missing as a role, a product manager fits it really well. But if the company feels that that's not important to them, then a PM would not be that useful. So it definitely depends on the kind of company that you're looking at, but a PM's role is not just restricted to those two functions. Any other questions so far? There are no right or wrong questions; that's a good way of framing survey questions, too. So, the last slide that I had was just some parting thoughts. Number one: if you ever hear the statement, but I am the user (for a consumer app, for instance, you're building Facebook, so I am the user), that's a red flag. That should never be the case. You're not building a product just for yourself. You're building a product for a really large sample size, for many different segments, and it's your job as a product manager to figure out what those different segments are.
If you hear the statement, but the users don't know what they want; I mean, I asked them and they said, sure, everything works. That's not good either, because you're not asking the right questions then. You're not getting the right data in the right way. You'd be surprised how often you hear both of those statements, and whenever you do, it's important to recognize the red flag and tackle it diplomatically. Be cautious of creating a sampling bias, or of asking biased or leading questions. Somebody asked about this, and it's a good one, because it's super important: at all points of time, you have to make sure you're not asking questions designed to get the answer you want to hear because you have this amazing idea and want everybody to agree with it. Instead, ask questions where you get an unbiased response from people. It's very important to know how to ask those questions, and if you have a survey, get it vetted by more than five or six people; get people's feedback on it. So the question is, how do you know when it's a good time to launch? It's highly context-based again, depending on what the feature is and what the product is that you're trying to build. But essentially, if you have an MVP, what that means is you have a certain list of features you want to go ahead with. You've probably talked to the engineering team about that; you'd better have talked to the engineering team by that point. You've sized how much time it's going to take to build each and every one of those features, and you have some kind of timeline and deadline in front of you for the launch. You know that that's the MVP you want to launch with, because it does both things: it alleviates the pain points for your customer segment, and at the same time it addresses the business goals that you have.
And those two are non-negotiable. What is negotiable is the other kinds of features you want to build out for the MVP. As you come closer to the launch date, it's quite normal, especially at a startup, to trim down the MVP a little more so that you can launch on time, while still making sure that you're addressing the pain points and optimizing for the business objectives that you have. So again, it depends on the company, because at a startup you just have to respect that launch date, but at a bigger company you have the luxury of time. I just want to ask, in your own experience, where do user demographics play a role? You probably already know how heavily people use your product, but what about their age group, what industry they're in, and any other demographic information that might be useful? Maybe even in assessing the signal. Yeah. No, that makes sense. That's a good question. So the question was, does the demographic of the person bias the kind of response that he or she is giving, and how do you make sure that you're cognizant of that? Okay. So that's a great question, and it goes back to how you create your surveys, how you ask your questions, and whom you ask those questions to. For instance, let's say you're trying to gather some qualitative data with a survey. You must have noticed that a lot of these surveys have questions like: what's the age range that you fall under, what's your occupation type, what's your gender? The reason for that is that you want to be able to make sure that if a certain demographic is giving you a certain kind of data, you're not biasing for that. You're getting an equal sample size: 50% men and 50% women.
You're getting people across the age range spectrum to make sure that you're controlling for that entire environment while you're getting feedback. An example of this: I can't get into the specifics, but there was a product at Google in its early stages, and we wanted some feedback from users about it. We decided to go and talk to these users, and we spoke with about 20 of them, but we chose them very, very carefully, because we wanted to make sure it was 50% men, 50% women, people across the different age groups, just so that we weren't biasing based on the kind of demographic we were talking to. Does that answer the question? Yes. Okay. Yeah. Right. How do you get users before you launch, and how do you get users to test your product? So the question is, at a much smaller company where you don't have the luxury of tons of money, how do you bring your product in front of users and get them to actually use it? That's a great question. That's definitely something I had to adapt to personally when I started working at this company. A really good way to do it, again, depends completely on the kind of product that you're trying to build and where your users are. If your users are around you in San Francisco, and you're building things where you expect people in San Francisco to be using them, great: you can go out there, talk to them, show them the early prototypes that you have, and build from there. Or if you know which city you want to launch in, you can fly there and talk to them, if it's close by. The issue we had at our current company was that we were launching in countries that were nowhere near where we are right now. They were very far away: Indonesia, India, the Philippines. And the way we solved for that was we built out our growth framework first.
What that means is we put community managers on the ground who were going to create the demand for us from the ground up, since we're building a marketplace. We asked these community managers to go ahead and recruit users for us in the field, and we trained them to get us users who, again, didn't bias the demographics. And that's how we got these users. We gave them maybe $5 for about 10 minutes of their time and quickly got some feedback from them. So it's remote guerrilla testing, if you will.