I want to talk about how to use feedback specifically to inform product strategy. There are a lot of types of feedback, usability feedback, et cetera, et cetera, but I'm going to talk very broadly about feedback on your product or service and how to use it to make smart, informed product decisions. What I really want to highlight, if we're talking about making smart product decisions, is that it's tough. Part of the art, part of what I think draws people to the role of product management, is that it's a low-data environment. We often have to make gut calls and gut decisions. We never have all the data. Even when you're a very large company with lots of behavioral analytics and lots of feedback, you still never really have the complete picture. So we're always operating from a deficit. So what I really think about is not how do we make smart decisions, but how do we make the cost of decisions smaller? How do we reduce the cost of being wrong? And so instead of calling this making smart decisions, I wanted to call this de-risking product decisions, because that's the way I like to think about it. So we're going to talk about three ways that we, as a company, but also customers we work with, try to de-risk product decisions, at a very, very high level. A lot of this is stuff we do with our product, almost everything here is being done with our product, but I'm just going to show you generic examples that you can implement at any company you go to, with your own custom tools or whatnot. Anyone have a guess as to probably the most popular way to de-risk product decisions in the last 10, 15 years? From an engineering perspective? "Testing." Close. Agile development. First of all, with agile we've already de-risked product decisions: we've made the cost of getting them wrong small, because we launch lots of little things as opposed to, gosh, we're going to spend nine months on this.
How many people work at a company that's an agile shop, for the most part? Again, real purists would argue that no one who says they're agile is actually agile. But we all attempt to do it, as opposed to year-long development cycles. Most companies are doing this today, so let's move past it, but it's worth noting that it's something almost all of us do. The second one is using a prioritization framework to try to figure out what we should build next: adding some structure around the decision-making process. How many people are at companies where there is some sort of prioritization framework in place, some sort of impact over effort? I'm going to walk through that real quick and talk a little bit about it. If you're not familiar with prioritization frameworks, they come in different flavors, but at the end of the day you're basically trying to assess what the impact of this thing will be versus the effort of it, plus some sense of confidence. So let's imagine we've got four fictional features we're thinking about building. We've got two generic goals. It could be reduce churn, it could be increase ASP, it could be please the CEO. It doesn't really matter what the goal is; you don't always get to pick it. But we're going to rank all of our features against our estimate of how we think they'll impact that goal. In this case, I've stolen a thing from a guy named Bruce McCarthy, who wrote a book about roadmapping, where he scores everything 0, 1, or 2. If you read anything about ratings and reviews, you find that people cluster: give five-star ratings and everyone's a 1 or a 5; ten-star ratings and everyone's a 1 or a 10. So he likes to use 0, 1, 2, and I kind of like that too. 0, it has no impact. 1, it'll have some impact. 2, it'll have a lot of impact. Makes it pretty easy not to nitpick about whether something's a 4 or a 5. So that's the first step.
Estimate impact against goals. Second step: estimate engineering effort. Again, people always quibble: product people can't estimate engineering, engineers always need to be involved. Yes, at some point they do. But generally, if you do this job for long enough, you can what we call t-shirt size it, right? That's small, medium, large, in terms of the effort it takes to build this thing. And you notice my scale here is different. Again, it doesn't really matter what scale you use; what matters is that you have a scale and you're consistent with it. I kind of like a scale where everything gets squared: a large isn't just a step bigger than a medium, it's the square of a medium, right? So one, two, four. Some people go one, two, eight. It's up to you. At the end of the day, it's not the numbers you pick, it's being consistent with them. Thirdly, let's assign some confidence to this. How confident are we, either in that engineering estimate or, more importantly, in that impact estimate? And then, as you might imagine, we put it all together. We come up with some score and we stack rank them. Great. Now we have at least some justification for why we think we should go build this thing next. Now, why does this help de-risk things? Well, because a lot of what happens in most companies when making product decisions is what I call the highest paid person's opinion, where we all get in a room and argue it out, and either the highest paid person or the best orator gets to decide what we end up building. By putting some structure around it, you start arguing about whether it's going to impact goal B in the way we think it will, as opposed to how much we just like goal B and how much we've heard about it. So for the folks who said they had prioritization frameworks, this is probably somewhat familiar. Okay, what's the challenge with this? Where the hell did you pull those numbers from?
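Mechanically, the framework described here boils down to a few lines of arithmetic. Here's a minimal sketch in Python; the feature names, scores, and the exact scoring formula (impact times confidence over effort) are illustrative assumptions, not the talk's actual slides:

```python
# Impact per goal is scored 0/1/2, effort is t-shirt sized 1/2/4,
# confidence is 0..1. All numbers below are made up for illustration.
features = [
    # name,       impact on goal A, impact on goal B, effort, confidence
    ("Feature A", 2, 1, 4, 0.5),
    ("Feature B", 1, 2, 2, 0.9),
    ("Feature C", 0, 2, 1, 0.9),
    ("Feature D", 1, 0, 2, 0.3),
]

def score(goal_a, goal_b, effort, confidence):
    # Total estimated impact across goals, discounted by confidence,
    # divided by estimated engineering effort.
    return (goal_a + goal_b) * confidence / effort

# Stack rank: highest score first.
ranked = sorted(features, key=lambda f: score(*f[1:]), reverse=True)
for name, *rest in ranked:
    print(f"{name}: {score(*rest):.2f}")
```

The point isn't the formula itself; it's that once the scoring rule is written down, the argument moves from "I like this feature" to "is this impact estimate right?"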
Typically, you know, you pulled them out of your ass. Maybe they're informed by lots of customer interviews. Maybe they're informed by your latest customer survey. But that's the difficult part, right? Mainly on the impact side: how do we make it objective? And if you get to a point, I'm not sure how big your teams are, but if you get to a point where you have multiple PMs on your team and you're all individually scoring your own features, a funny thing happens: everyone seems to score their own features very highly. So how do we create some uniformity, or try to remove some of that subjectivity? At the end of the day, though, this is better than not doing it. Subjectivity or not, I have seen that doing this seems to correlate highly with high-performing companies, and there's a whole bunch of analysts and people who spend more time looking into that who can back it up. If you're thoughtful about this, it's better than nothing. But what I want to talk about is how we can use feedback to validate and de-risk some of that subjectivity. Before I do that, I really need to talk about how we collect feedback. And not just how we collect feedback, but how we collect significant feedback. Because the problem with feedback, and I say that as someone who's been selling tools for this for almost 10 years, is that feedback is a dirty word at most of our companies. Feedback usually means some noisy person, some vocal minority, some sales guy who won't shut the fuck up. It means a lot of things like that. Am I allowed to curse here? Sorry, I didn't check the rider on this. But at a lot of companies, feedback isn't a first-class citizen. We don't make an effort to get a lot of it. So what we do have, we don't know if we trust, right? And so to me, if you want to use feedback strategically to help de-risk your product decisions, you need to collect significant amounts of it.
And so what I want to talk about is methods and tactics for how to do that, and I want to break it into two different channels: one, feedback from customer teams, and two, feedback from users directly. So I asked you earlier, B2B versus B2C. If you're in B2B, the first one is going to matter a lot. If you're in B2C, direct from users is going to matter a lot. And in some environments, ours included, both will matter. But I like starting with feedback from customer teams. Again, show of hands, who's in a B2B company? Just so in a second I can stare you in the eye and pick on you. So what does feedback look like from customer teams? Customer teams being sales, support, success. What does it look like at most companies? Want me to pick on you? "I tend to work with small companies where all people are equal." That's awesome. Okay, so that is the best place to be. And as you grow, that will get worse. Because as soon as the sales team is down the hall from the support team, which is down the hall from the product team, that stuff gets worse. So let me tell you what it looks like as customer teams grow up. It usually starts with the sales guy forwarding emails that say FYI, or grabbing you in the hallway and saying, oh, by the way, I just heard about this, I really need this, I just lost a deal. It ends up being very ad hoc. And usually in reaction, almost every product team I know sets up some process that looks like a monthly, bi-weekly, every-other-month kind of meeting with the sales team, or the success team, or the support team. And what we do in this meeting is tell those teams to come with their list of asks. Tell us what feedback you're hearing, show up to that meeting, and tell us what we need to improve in the product. And usually what they show up with is a list of asks: here's our top 10 list, we want this thing and that thing and this integration and that integration, yada yada yada. And that's great, we're excited about this.
Okay, we get some feedback from them. Then we meet with the success team and they give us a different list of asks. And then we meet with the support team, and they give us a different list of asks. So what's challenging about this? Does anyone have this experience with their teams? What's challenging about it? "Different teams have different priorities that compete." Right, different teams have different priorities, and we have no idea how to stitch these together. I have no idea if the number one thing sales wants is more or less important than the number one thing success wants. I just have their gut call on it. Secondarily, it's honestly hard to trust. Why? Because I don't really know why. I don't know who said it. I don't know what they actually said. We did a survey a couple of years ago where we asked product teams, what are the most reliable sources of feedback internally? And it was like: support, users, success, fifty feet of shit, sales, right? And I like picking on sales because they're easy to pick on, but every team has this challenge, in that this list of asks doesn't tell me a lot. And so I tend to discount it. And the fact that I can't aggregate it and I can't compare across things means I'm back in that place of getting all the leaders of the different customer-facing teams in a room and letting them argue it out about whose thing is most important, because I can't adjudicate it. And then maybe I grudgingly say, gosh, I don't really know how to value this; I'll give you 15% of the roadmap, you guys fight it out and see what you want to do with it. That's where a lot of these teams end up, and it usually ends with a lot of infighting and politicking because, again, limited resources. So let me tell you the better way I've seen teams do this. I've been talking about getting feedback from those teams. I actually think about getting feedback through those teams.
Because at the end of the day, I actually don't care what Salesman Steve had to say about the product. What I care about is the prospect he was on the phone with. What did she say, and why does he think it maps to this thing? Why does he think it's important? So instead of this game of telephone, where we ask them, hey, what are you hearing, and they try to paraphrase for us: I don't want you to paraphrase for me, right? And this can look like a number of things. It could be a Google Form. We have a browser plugin that allows people to highlight feedback. I've seen custom objects in Salesforce. But at the end of the day, I generally want a protocol where, to submit feedback to the product team, you have to do three things. One, you've got to tell me who said it. Two, you've got to tell me what they actually said. If possible, highlight literally what they said. Is it an email? Highlight what they said. Support ticket? Highlight what they said. If it's a phone call, okay, there's going to be some paraphrasing, but you've got your call notes; highlight what was in the call notes. And three, we would love it if you try to categorize it for us by associating it to some larger concept, right? So, you know, maybe our top ask was "integrate with Salesforce," a common one. What I don't want is just a top list with "integrate with Salesforce" as the number one thing. What I want is 20 prospects that all said X, and you think they all need to integrate with Salesforce. That's what I want as a product manager. That allows me to aggregate this across different sources. That allows me to follow back up with those folks and see whether the salesperson was full of shit. And that allows me to read the actual raw feedback and start figuring out what the actual problem was. Do they actually need an integration with Salesforce?
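That three-part protocol (who said it, what they literally said, and the larger concept it maps to) is easy to enforce as a record shape. Here's a hypothetical sketch; the field names are my own illustration, not any particular tool's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackRecord:
    customer: str                   # who said it: the prospect/customer, not the rep
    verbatim: str                   # what they actually said, highlighted from the source
    source: str                     # e.g. "email", "support ticket", "call notes"
    submitted_by: str               # the rep who captured it
    concept: Optional[str] = None   # larger idea it maps to, e.g. "Salesforce integration"

def validate(record: FeedbackRecord) -> None:
    # Enforce the protocol: no who and what, no submission.
    if not record.customer or not record.verbatim:
        raise ValueError("Feedback must say who said it and what they said.")
```

Because every record keeps the raw quote attached, the product team can aggregate twenty of these under one concept and still go back and read what each prospect actually said.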
What does that even mean? And so on and so forth. This has probably been one of the biggest breakthroughs we've seen with teams: moving from the model of "we poll our team to see what they think" to "no, if you want to influence product, you have to be diligent in collecting the feedback." And the value for you, Mr. or Ms. Success Person or Sales Person or Support Person, is that by doing this, you will have a better, more representative seat at the product roadmap table. So that's getting feedback through customer teams. This is all at a very high level, so we'll do Q&A at the end if you have a bunch of nitty-gritty questions. Now let's talk about feedback from users. The biggest problem when we get feedback from users, and this is what I meant when I said feedback is a dirty word, is that we don't get a lot of it. We get low response rates, which means we only hear from the most impassioned people, who are usually, you know, the advocates for things you may or may not want to do, and who probably don't represent your average customer. And the reason we get low response rates comes down to three things. One, we don't really create awareness that we want feedback. Every single website on the internet has a link that says "feedback." Do you know where it usually is? In the footer. You know where it goes? Fucking nowhere. And so we've been trained for the last 20-odd years by a kind of unspoken social contract: we're going to give you an outlet for your feedback, but it's not for you, it's for the crazy people. We're not actually going to listen to it. It's not a first-class citizen. So if we actually want to get feedback from customers, because we now live in the 21st century and we actually find qualitative feedback from customers useful, we need to break that mental model and tell them about it. Two, we can't make it too much work to give feedback.
And three, there needs to be some value for the person giving the feedback. So let me give a couple of common examples. Have you ever seen the surveys that fly across the screen? Love those things. They certainly don't lack awareness; I know you want my feedback, it's flying across the screen as I'm trying to read some article. How many of you have finished one of these surveys? I'm in this business, so I click every single time, and I have never finished one in 10 years of doing this. Why? Because it violates my second principle: it's too much work. It assumes I've got 10, 20 minutes to answer all your questions. And two, what the hell do I get out of it? At most, I'm in a raffle for a Starbucks gift card or something. Don't care. Secondarily, who's familiar with Net Promoter Score? How many people use Net Promoter Score at their companies, by the way? Okay, it's kind of the de facto standard; we've all, whatever, agreed that this is the best way to benchmark how people feel about our products and services. It doesn't lack awareness; usually you're sending an email or doing a toast notification in-app. It doesn't suffer from being too much work, at least on the number side of things. But the completion rates on what I think is the interesting part, the follow-up where you ask "why did you give me that score?", are pretty low. Why? Because I don't know what's in it for me. What does this do for me as a user? I get very little out of it. So if we want to get feedback from end users, we've got to do, as I said, three things, the first being: promote that you actually want their feedback. How many people are familiar with Stack Overflow? I don't know how to describe it at this point; it's like the Quora for programming questions.
We worked with them really early on, and here's the thing: people sometimes integrate feedback widgets and do all sorts of different things to get feedback in their app. Stack Overflow put red text at the top of the site: "We want your feedback on what to build next. Click here." It's not complicated, right? But putting red text at the top of the website, as opposed to some generic "feedback" link at the bottom, was massive. The amount of feedback they got from their community that they were able to close the loop on was massive. So the biggest thing we have to coach people on is that someone has to step up and convince users, let them know that you actually do want their feedback. Secondarily, make it really easy for them to give feedback. I often say single-question surveys are best. This is the original UserVoice way we got feedback, and we still use it a lot today: these idea boards. You can get idea boards from a lot of different vendors. The problem is, if you bury the idea board in the back of your community somewhere, you've violated principle number one: do not pass Go, do not collect any feedback. So you've got to promote it, number one. But number two, ask a simple question, "how can we improve this product?", and make it very easy to see that, gosh, I can just type in one thing, click a button, and send my feedback along. There aren't 20 other questions I have to fill out. And third, explain what I get in return. In this scenario, what people get in return is influence, and that's actually the best thing. Intrinsic motivation is far more powerful than me saying, by giving us feedback you get a chance at a Starbucks gift card. What I really want, especially if I'm using this product day in and day out, is some influence over how to make it better for me.
And what you see in this scenario, or any other, is that if you set the expectation of "we are going to use this feedback, here's how, and you will have influence," you get better outcomes. We actually had a Stanford PhD do a bunch of anonymized analysis of UserVoice forums and found that if there was a response from the company above the fold on the page, I want to say the response rate and the engagement rate were two to three X higher. Because we're not idiots, right? If there's a bunch of feedback that no one's listening to, then again, there's nothing in it for me, so I'm not going to bother. Another good example: Atlassian. Everyone familiar with Atlassian? They make Jira. They did a pretty cool thing with their NPS surveys. The problem with NPS surveys is that even if you do use that feedback, and I know a lot of companies spend a lot of time analyzing the "why did you score us the way you did?" responses, the challenge is that no one knows it. Me as a user, if I fill that out, I don't know that this is critical to the company and that they're really paying attention to it. So Atlassian set up an automatic follow-up. The first time you fill out their NPS survey, they follow up and send you this big long thing, which I won't make you read, but paraphrasing it down, it basically says: we really care about this, we review all of it, we classify it into one of three buckets, it goes to our product team, and it influences our roadmap. And just by doing that, they massively increased how many people actually fill out that part of the NPS survey, which is good because it gives them more grist for the feedback mill. So those are the two big things you do on the feedback channels: direct, or what I call indirect, through internal teams.
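An automatic follow-up in that style can be very simple: the first time a user responds to the survey, send one templated explainer. Here's a toy sketch; the trigger logic and the message copy are my own assumptions, not Atlassian's actual system:

```python
# Illustrative follow-up copy, paraphrasing the idea described above.
FOLLOW_UP = (
    "Thanks for your feedback. We really do read all of it: every response "
    "is classified into one of three buckets, goes to our product team, "
    "and influences our roadmap."
)

seen_respondents = set()

def on_nps_response(user_email, score, comment, send=print):
    # Only the *first* response triggers the explainer, so repeat
    # respondents aren't spammed with the same message.
    first_time = user_email not in seen_respondents
    seen_respondents.add(user_email)
    if first_time:
        send(f"To {user_email}: {FOLLOW_UP}")
    return first_time
```

The whole trick is just closing the loop once, early, so the respondent learns that the "why" box actually goes somewhere.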
Now, for us, what's interesting is we're a SaaS platform, we're B2B, and we have a couple thousand customers with a reasonably large range in terms of price point and spend. Through these two channels, we get 53% of our feedback directly from users and 47% through internal teams. The reason I bring this up is that we're UserVoice. We invented the thing on the left, or at least we were the first people to do it. So the fact that we now get half of our feedback through internal teams means that if you have internal teams, you will have to tap into them. Because what happens is, if I'm on the phone with a salesperson or I'm in a support ticket, I'm not also going to jump over to another website and give you feedback. What's also interesting: this is a breakdown by teams, which doesn't really matter, but product is in here, because when product does customer interviews, they're expected to log their feedback into the system just like sales, support, or success would. The overlap between these two channels is interesting too: it's very small. The percentage of people that have given feedback both directly and through customer teams is actually pretty small, and when you think about it, that makes sense, right? If I'm paying a large amount of money such that I've got an assigned account manager or success manager, I'm primarily going to talk through them. If I don't, I'm probably going to go directly through an NPS survey or an idea board. And so that's why it's important to have both channels, especially in a B2B environment. Again, in a B2C environment, you may not be able to afford to do this. You've got a support team answering break-fix tickets; maybe you don't have the bandwidth, or the cost or value of individual feedback isn't high enough, so you need something more scalable. What percentage, I'm going to test this up front: how many people get feedback from customers?
What percentage of your customers do you think you hear from? 1%? 5%? 10, 20, 25? We hear from 60% of our customers. And to me, that sucks. Our goal is 100%. Our goal is that I want to know what every single customer thinks could be better about our product or service. But most times when I do this, I see very few hands up above 5% or 10%. And again, that's because feedback is not an intentional process at most of our companies. It's the squeaky wheel: we have a thing in the footer, people occasionally fill it out, and we've got to do something with it. So those strategies, internal teams plus direct from users, allow us to get a lot of feedback. And before we go back to how we use all this feedback: the data structure here matters. We always want a user and exactly what they said. And this is now across all my inputs, right? Whether the feedback comes through a salesperson, or directly from an NPS survey, or from an idea board, I still have: here's the user, here's the raw feedback. And then I've got someone turning that into what we call ideas. So if it's the idea board, it's users saying, oh yeah, this is the thing I want, I want the Jira integration, yada yada yada. Or if it's internal, we've got the success team saying, oh yeah, they said this, and I think they need the Salesforce integration. We get frustrated about this as PMs, because how many people in product need more ideas? Right, we don't. What these actually are is problems, stated as ideas. Why do we have them stated as ideas? Because it's way more engaging for people to proffer up solutions than to tell you what's wrong. And you like that from an optics perspective, too: a website full of people saying "this sucks, this sucks" is not as engaging, right?
Everyone wants to be their own PM, right? Whether it's a salesperson or a success person or the end user, everyone is more engaged in feedback when they get to say, aha, I have this problem and I think the solution is making the box red. Now it's our job as product people to go in there and say, gosh, there are 200 people that want us to make the box red. We don't go make the box red. What we do is read all of this stuff, and if it's not conclusive, we follow up with these people and ask, what is your problem? And the answer is not making the box red; the answer is improving the usability of workflow X. And that is what we call a feature, or what I often call internally a solution. And finally, I map this into CRM data. So this is the data structure we employ, not only to stitch together all that feedback across the different teams and different sources, but to tie the things we're going to consider prioritizing all the way down to the users we think could be impacted by them. We live up here at the top of that structure; we let users and our internal teams do some of the heavy lifting below, so we don't have to sort through 200 pieces of raw feedback. And that's the structure; that structure is important. So coming all the way back to the beginning: we're doing agile, we've got some prioritization framework. All right, how are we going to de-risk this prioritization framework even further? When I sit with our product team and we go through these exercises, here's how I look at it. Let's first add in this customer feedback. Now that I've got this database, here are all the customers that have expressed interest in each of these features, and the percentage of feedback they represent. So I take a first glance at this: does this database of customer asks validate our estimates?
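Mechanically, that validation check is just a join across the data structure described above: raw feedback rolls up into ideas, ideas map to candidate features, and each account carries revenue from the CRM. Here's a hypothetical sketch; every name and number is invented for illustration:

```python
# account -> annual revenue, from the CRM (made-up data)
revenue = {"acme": 100_000, "globex": 10_000, "initech": 10_000}

# raw feedback, already classified: (account, idea it was filed under)
feedback = [
    ("acme", "salesforce-integration"),
    ("globex", "salesforce-integration"),
    ("initech", "dark-mode"),
]

# idea -> candidate feature on the prioritization board
idea_to_feature = {"salesforce-integration": "Feature C", "dark-mode": "Feature A"}

def demand_by_feature(feedback, idea_to_feature, revenue):
    # For each feature, count distinct asking accounts and sum their revenue,
    # so a framework score can be sanity-checked against real demand.
    asks = {}
    for account, idea in feedback:
        asks.setdefault(idea_to_feature[idea], set()).add(account)
    return {feat: (len(accts), sum(revenue[a] for a in accts))
            for feat, accts in asks.items()}
```

From a table like this you can eyeball whether a feature's high-confidence estimate is backed by actual customer demand, and by whose revenue.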
The first thing I notice is, oh, thank God, we've got a lot of people that have expressed interest in this thing. Which is good, because my product manager said high confidence in our estimates, and it's also the most expensive thing to build on the board. If I'm going to go build something expensive, I certainly want to see some existing customer demand we can point to. But what does worry me is this other thing, which also has high confidence, which we also think will have a big impact, which is not as expensive engineering-wise, and yet we have very few customer asks for it. Now, that doesn't mean it's wrong, but it's something I want to talk about: why are we so confident in this, given how few customers we can tie back to it? So we take one more step. I mentioned on the previous slide pulling in our CRM data, so I'm going to pull in revenue data for these customers, because I'm B2B and not all of my customers are created equal. And aha, I now understand why product was so adamant that this was going to have a big impact: these three people are our enterprise customers. If you do quick math in your head, they're averaging $100,000 a year per customer, versus these other guys who are averaging $10,000. Okay, so maybe that estimate isn't crazy. Maybe it's reasonable that this will have a big impact on goal B even with a small number of customers, because they're our biggest ones. But it always brings me to my favorite question, the one I think any B2B product person is always scared of: are we just overly biased toward our large enterprise customers because that's who's in our ear all the time? We're keenly aware of those folks; they're at the top of the customer list all the time. And my follow-up question to my product team in this situation would be: great.
I now have more confidence that, yes, this will have an impact, because these are our enterprise customers. But are we sure we're not over-emphasizing the impact of this, or under-emphasizing the impact of these other features, when it looks like they map to more total revenue than that thing does? And this is a very common thing we hear, right? Gosh, I feel like I'm always listening to our enterprise customers, and I'm really worried that we'd actually be better served serving the 200 long-tail customers that pay us a lot less but in aggregate are a bigger portion of our revenue. So this is an important check for us here too. That's the next way to de-risk it. Let's keep going further: how do we keep de-risking things? Well, let's say I bring in more CRM data, now on customer status and NPS. Say my goal is actually reducing churn. Then I don't want to just see how many customers asked for something; I want to see how much interest there was among people that are NPS detractors or churned accounts. And here again, I would say, gosh, are we overestimating the impact feature C would have on churn? Because there are very few churned or detractor accounts related to this feedback. And likewise, are we underestimating the impact of A? Again, none of these are "oh my gosh, this number implies that one's wrong," and I can't build an equation that necessarily calculates the answer. It's still our job as PMs to translate problems into solutions and to read the tea leaves. But at least in this case, it helps me figure out what questions to ask. So finally, we've made our decision. It's feature C; that's the highest score; we're going to go build it. The last, and probably the most common and most important, way we de-risk this is: okay, great, we're going to go build this thing. What's the first thing I'm going to do?
Well, shit, I need to find some people that want feature C. Oh, look, I have 60 people that I know want feature C, or supposedly want feature C. Again, I could have gotten this wildly wrong. Maybe those 60 people said "make the box red," and I'm diligently making the box red and that's not the right solution. But I'll find out very quickly, because now I can go back and reach out to them. How many of you, when you've gone to build something, whether you used this decision framework or the CEO just shows up at your door every month and says build these three things, have then had to run around and ask, does anyone know anyone who wants this? Because that happens at almost every company I talk to. Every month it's a rinse-repeat cycle of banging on a bunch of doors: do you know anyone that could use this? Have you heard anything about this? And so on. We hate doing that, right? It's not a good use of our time. So lastly, to validate our solution: can we go talk to them? Can we show them mockups? Can we invite them to betas? Can we run additional surveys? You name it, that's what we want to do last. So, takeaways from this. Build a system of record for product feedback in your organization. The key thing here is that if you don't build a system, you will end up with just a bunch of vocal-minority noise that comes in through the feedback link in the footer of your website. Make sure the data structure allows you to combine that feedback. It breaks my heart how many times I see people with their 15 spreadsheets of different asks from different teams, and they have to print them out and argue about them. And invest early. This is one of those things that's kind of like an annuity: the earlier you start putting in a process for getting this stuff in, the more value it'll accrue over time. We're in product; our time scales aren't days or weeks, they're months.
And so a lot of times we'll be getting feedback that we won't action or follow up on until six, nine, 12 months later. So invest early. Like I said, if you're not already using a goal-based scoring framework, if you're not familiar with these, I can send you some links afterwards. Actually, there might be some at the end of this. The RICE framework, or some of the Bruce McCarthy stuff, there's a lot of interesting things on that. And again, I like to think of this as de-risking more than just making smart decisions. We all make smart decisions, right? How do we just make the cost of being wrong cheaper? And lastly, I want to note a couple of benefits to this that we don't always think about. I've talked a lot about the first one, but the next three are pretty important too. Increasing internal confidence in the roadmap is a big one. One of the challenges we have in product is we sometimes do a bad job of what I call showing our work. And it creates a lot of consternation amongst these other teams, like, how did they decide to build this and not that? Honestly, the smartest teams I know make this whole process very transparent. Here's the goals by which we're judging things we're building. Here's the thing we're building. The really good teams also say, here's the things that missed the cut. And so when people are arguing about, well, why isn't this in here? Well, you can see why. We have a consistent way of adjudicating this. That helps a lot, because there's nothing worse for company culture and just overall productivity than when the frontline teams don't really believe you're doing the right things and think you're kind of making shit up. Focusing our time: we found in our survey that 20 to 25%, depending upon the maturity of the company you're in, of your product management time will be spent on what I call building the spreadsheet. That could be meeting with those teams. That could literally be reviewing NPS survey results.
It could be doing all those things. That's not the highest-value use of your time. Build systems so people do that for you, or something automated does that for you, so you can focus on customer interviews or analyzing that feedback or whatnot. And finally, and this is my favorite one: I firmly believe that this feedback-in-the-footer that no one pays attention to is going away. Now, I've been wrong about time scales before, but let's say five to 10 years from now, every single company will have some way to take every piece of feedback you give it, track it end to end, and follow up with you even if it's two years later. That will be table stakes, in my opinion, in terms of customer experience. However, today, only 5% of companies do that. 5% of companies actually take feedback and diligently follow up on it. Which means, if you can do that today, that is a strategic benefit to your company, to actually close the loop with people, because it still blows people's fucking minds. I go sit with companies, big companies, brands you've heard of, and when they do something where, oh, we built this feature, and we clicked the button to send a mass message out to the 10,000 people that said they want this feature, it blows people's frickin' minds that big company X listened to a thing that they want. And they tweet about it and they Facebook about it and tell their friends, and that alone is actually a big driver of net promoter score increases. In five or 10 years, I'm pretty sure you'll be forced to do all of this or else it'll just be considered a bad experience, but for now, it's a real positive thing if you do it. That's all I wanted to talk about. Thanks for being a good audience, and happy to do some Q&A. Yeah, we will probably share out the slides too so you can get directly to them, but all these things are on our website as well, and obviously more. Other questions?
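The goal-based scoring and segment weighting walked through in the talk could be sketched roughly like this. Everything here, the features, goals, feedback records, field names, and numbers, is invented for illustration, not taken from any particular tool:

```python
# Rough sketch of 0/1/2 goal-based scoring combined with CRM-weighted
# feedback counts. All features, goals, numbers, and field names here
# are invented for illustration.

# Each feature gets a 0/1/2 estimate per goal (0 = no impact,
# 1 = some impact, 2 = a lot of impact) plus an effort estimate.
features = {
    "A": {"reduce_churn": 1, "increase_asp": 2, "effort": 2},
    "B": {"reduce_churn": 0, "increase_asp": 1, "effort": 1},
    "C": {"reduce_churn": 2, "increase_asp": 2, "effort": 2},
    "D": {"reduce_churn": 1, "increase_asp": 0, "effort": 1},
}

# Feedback records tagged with the feature they map to and the
# account's CRM/NPS status, so interest can be segmented.
feedback = [
    {"feature": "C", "status": "churned"},
    {"feature": "C", "status": "detractor"},
    {"feature": "C", "status": "active"},
    {"feature": "A", "status": "active"},
]

def at_risk_interest(feature: str) -> int:
    """How much interest came from churned accounts or NPS detractors."""
    return sum(
        1 for f in feedback
        if f["feature"] == feature and f["status"] in ("churned", "detractor")
    )

def score(feature: str) -> float:
    """Simple impact-over-effort score summed across both goals."""
    est = features[feature]
    return (est["reduce_churn"] + est["increase_asp"]) / est["effort"]

# Rank the candidates by score; the segmented counts then inform
# which questions to ask rather than mechanically overriding scores.
ranked = sorted(features, key=score, reverse=True)
```

As in the talk, the segmented counts don't feed into an equation; they just flag where an impact estimate might be over- or understated and deserve a follow-up question.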
I've been a big fan of your company forever, and at the beginning we could all talk about United Airlines, and I think that's generally a well-known story, except for a couple of others that have also been done. Besides airlines and cell phone companies, who's the new airline or cell phone company that you see out there that's really destroying customer value or customer appreciation? You're on the front line. Yeah, so the question is, who's doing a bad job of this? I'm just curious. I mean, this is also really interesting, but just from your business, you see all this feedback. Who are the companies, who are the industries, that are being really bad to users that we don't know about? As product managers, you're sort of at the front line. I love the examples. We all know the popular examples of airlines and cell phone companies. Who else is out there that we should avoid being? Something a little bit more subtle. Yeah, I mean, we mostly focus on software technology products. Why? Because a lot of that is very retention driven. That is the first industry that is really paying attention to this type of stuff, because we're all subscription models. Anywhere there's not a subscription model, where there is high lock-in and switching costs are high, I can guarantee you there is terrible customer service. Telcos: terrible customer service. Local monopolies for cable companies. So we don't have a lot of visibility into those folks, thankfully. Though it's an interesting story in that when we started this company, we pitched a lot of what I'm pitching to you now. We pitched it mainly to mid-market and enterprise companies. And two things we heard consistently were: we don't really want to get more feedback, that sounds like a cost for our call center, and we sure as shit don't want to put it in public. And now I hear: we've got all this feedback and it's in public.
Can you help us find some way to leverage it? So I think in some ways, consumers have forced behavior change on companies more than the other way around. Other questions? Oh, awesome. When it comes to, like, how do you segment feedback, if I go back to this, right? Segmentation really varies by company. So for example, for us, we now focus a lot on mid-market and enterprise companies. So when we look at our feedback, we're often segmenting down to, and honestly, it's usually US-based technology mid-market and enterprise companies: how much feedback do we have in that segment for this feature? But it varies by company quite a bit, right? Like, some people are going international, and so they're looking at, okay, we want to see what the feedback is from our Western European customers. So there really isn't any one-size-fits-all. I mean, obviously company size, like I said, SMB versus mid-market versus enterprise, is a very common one. Geo is a very common one. But outside of that, we find there's a lot of sometimes very company-specific ones, right? You can see here, like detractor, churned, company status, right? New customers, old customers. It really depends on what your goals are over here, right? If my goal wasn't reduce churn, but maybe it was, you know, increase win rate amongst enterprise, that would drive what I'm going to look at over here. So on that slide, we have, you know, revenue or dollar size in those factors. When you're trying to do something new, people haven't been paying for it. Do you have any techniques to listen, not just when somebody says, yeah, if you build that, I'll pay for it, but to actually get to the guts of whether they'll actually pay money?
And the reason why I mention this is, as I'm in B2B, as we're balancing out the vocal people who pay a lot of money, the enterprises, against the long tail, and also mid-market, it's very easy to come down to just listening to the largest dollar sign you can see in your CRM system, because that's the highest confidence. So going after the potential or the new money that has yet to be proven: do you have any techniques there? Yeah, there's four ways, right? So the question was, how do we better assess the dollar upside of building things, right? Some part of this is company-stage dependent. At a very early stage, you do customer development, pre-product type stuff, right? You've got to go in there and talk to them, and you've got to test willingness to buy yourself. I say that just to kind of put that aside. Assuming you're post-revenue, and we're talking really about how to optimize how we get more upsells, there's a couple of different ways, right? So one is just this overall score, which is correlated to existing spend, right? And if we're smarter, we may try to correlate to, like, at-risk spend, right? And things like that, however we may come up with that. The second way is we may actually empower the customer-facing teams to assess some value, right? This one's a little dicey sometimes, but we usually have a large degree of confidence when a customer success team says, not having this functionality is a deal breaker for renewal on this account, right? We have less confidence in the sales team saying, oh gosh, if I had this feature, it would have closed this deal. But again, no data is bad data, it just gives us something to follow up on, right? So there's another way to do it, which I didn't show on here, which is basically, on a per-feedback basis, tracking how much that is worth to you.
I think it'll help me close this deal, it'll help me win this upsell, it'll help me save this renewal, and valuing the individual piece of feedback, not just valuing what the giver of the feedback is worth to us as a company. And the last way is, if you're truly trying to do not small incremental things, but actually a new product or product line, there are companies, a good one is called Price Intelligently, which do a lot of surveys on third-party panel sites to test willingness to buy. So you go in and say, hey, we want to test whether social media marketers in the US would be willing to buy; which of these features would they pay the most for, and which would they not? That's useful when you're in more of a "we're trying to enter a new space" sort of category. It's a different methodology than this, to a certain degree. But those are the four different ways, I would say. Other questions? Yes, in the back. The people willing to give feedback are more engaged with the product, so is it representative? So this goes back to my earlier point, which is: the more feedback we get, the more likely it's representative, right? So if we have a 1% or 0.1% response rate, which is standard amongst those drive-by surveys, it's highly likely that's not representative, right? Which is why there's a lot of market research companies which do a lot of work to force people into a panel and make it representative even at a small scale. So one way is, if we just do all the tactics that get us to, it doesn't even have to be 60%, but 10, 20, whatever percent, it's unlikely that it's wildly unrepresentative. So that's obviously one way to do it. It also depends on, if you're going through internal teams, you know what those internal teams' biases are, in terms of, like, our success team only works with these types of companies, so obviously that's who they're gonna get feedback from.
When you have a high enough average revenue per account, it's likely it's gonna be pretty representative, because you're gonna have a success team or account management team on it. If you've got what I call a low ARPA, and you've got support teams looking at some of this, that's where it's more challenging. I've seen some people do some basic, like, cool, let's take our feedback, export it, do a quick sanity check on: does this look like the rest of our customer base? And I think that's only relevant when you've got a reasonably big company and you have the manpower, the cycles, to do that analysis; then you can do it. My general guidance has just been: very rarely have I seen it be wildly unrepresentative once you're above 10, 20%. It's only when it's less than 1% that it's like, okay, these are squeaky wheels, these are not the people we're trying to sell to. Other questions? Yes? I mean, call to action versus the other. Yeah, I think this is, and again, a lot of this stuff is very heavily weighted towards B2B, right? In a B2C context, like I say, the standard operating procedure for a lot of B2C companies is what I call guess-and-bucket-test, right? You know, we make small features and we just roll them out to 5% of people, and assuming we can measure lift, we measure lift and then we go from there. And the role of feedback in that case is very much more of a supporting role, right? Which is: oh, we're gonna go build something. But before we even go, maybe we can't do a painted-door test, maybe we do have to go build it. And before we go build it and then bucket test it, gosh, do we have some database where I can find five or 10 users that have expressed this thing and follow up with them and do very anecdotal kind of user research? And that's what it looks like. It looks like user research, right?
It looks less like strategic input and more like user research. In terms of, you know, does it come in through support, or does it come in as feedback on the product: I tend to tell people, I think there's a desire to have a singular thing, like, here's the one hole you talk into to reach us, whether it's support or whatever. We have to have one community site, we have to have one forum. And I've just generally seen that not to be true. I've generally seen: if you tell people, this is where you go to get support when you need help, and this is where we want your feedback on how this thing can be better, there is some overlap between those two. Certainly there are things that come in through support where we're like, oh my gosh, 80% of our support tickets are this, we should go fix that experience. But there's also things they'll give you in that "we want your view on how this could be better" context that you wouldn't get through support. And we've had a lot of good success with B2C companies in saying, you should do both, and do them separately. And one does not replace the other. If you promote it correctly, people figure it out. A good example of this is Adobe, who has some of their creative suite stuff that's kind of B2C-ish, there's a lot of people using it. And they'll have a community site where there's, like, support forums and contact help, and they have a big banner on the top which says, we want your product feedback, please put it over here. And so they kind of train people that there's different channels for different things, and it's kind of like leaving out the composting bin, the recycling bin, and the trash bin. People won't figure it out 100% of the time, but they'll figure it out more often than not, right? And so that's generally been effective, right? But it also depends on what your goals are, right?
If your goal is, gosh, we're just trying to improve usability and reduce support costs, which is a very common goal in a B2C environment, then you just mine the support feedback. Do I know of good tools to mine customer support? Yeah, most people mine customer support by getting support people to categorize tickets with tags. Which we all love: just go through each one and tell us what it is. They're already in there viewing all of them anyway, so what's one extra click? I mean, honestly, that's how most people do it, and it's not a terrible way of doing it. It's kind of manual. The bigger companies basically train some version of ML to look for certain keywords. I'll tell you, you should look up the RUF framework, R-U-F. I mentioned Atlassian. So at Atlassian, this guy Sean Kramer, who runs the Voice of the Customer program, pioneered this categorization, RUF, which is reliability, usability, and functionality. And they basically took all of the NPS survey results, all of the support tickets, and made, I think it was originally just regexes, regular expressions, to find certain keywords that highlighted: oh, this is reliability, reliability meaning the thing doesn't work; usability, "I can't figure it out, I'm confused"; and functionality is the rest. And their kind of thing was, there's a primacy to those three, right? If the thing doesn't work, you don't get to do usability things. And if the thing isn't usable, you don't get to build more functionality. And so I've seen people take support tickets and do that same thing. So that's the RUF framework. Also, David Cancel from Drift, formerly from HubSpot, does something called the Spotlight Framework. And what these all are, basically, are verbs you can look for in the feedback, right?
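To make that concrete, here's a toy sketch of that kind of keyword classification. The regex keyword lists are my own invented examples, not Atlassian's actual rules:

```python
import re

# Toy RUF-style classifier: tag each piece of feedback as reliability,
# usability, or functionality by keyword. The patterns here are invented
# for illustration; a real program would tune them against its own data.
RUF_PATTERNS = {
    "reliability": re.compile(r"\b(crash\w*|error\w*|down|broken|fail\w*)\b", re.I),
    "usability": re.compile(r"\b(confus\w*|can'?t (find|figure)|unclear|hard to use)\b", re.I),
}

def classify(text: str) -> str:
    # Reliability outranks usability, which outranks functionality:
    # if the thing doesn't work, you don't get to do usability work.
    for category in ("reliability", "usability"):
        if RUF_PATTERNS[category].search(text):
            return category
    return "functionality"  # everything else is treated as a feature ask
```

Run over a pile of support tickets or NPS verbatims, even a crude matcher like this gives you the macro-level split between "it's broken," "it's confusing," and "I want more."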
That implies, at a very macro level, some sense of what this thing is about. It won't necessarily tell you this is about feature C or you should build D, but it'll tell you in general: you shouldn't be worrying about functionality, you should be worried about usability, or you should be worried about reliability, and dig into that. Their claim to fame with the RUF thing was that it was how they convinced Atlassian to fix all their bugs, which, if anyone's ever used Jira, is actually a pretty big accomplishment. Mm-hmm. Mm-hmm. Yeah, so you could basically look at this as, feature C could be improve usability, right? It could be improve reliability. There's ways to do that, right? And a lot of times what you think about is, you go back to that database: this is a really high-level classification, reliability, usability, functionality, and then you can still tie those things together in the same way. Yeah. Most people do it based upon the pulling-it-out-of-my-ass methodology. Again, not a terrible methodology, because it's not completely uninformed. You actually, you know. So what if you're really bad at pulling it out of your ass? In this case, I didn't change the table. It really just gave me the questions to ask. In this example, I was basically saying, this just gives me the questions I would ask that might inform how I would change the table. Yes. One side is, this exercise is kind of creating a shared narrative, so you kind of explain how a PM buckets the chaos into patterns. Sure. My question is really, I could foresee creating a taxonomy for problems, so the words I use are more problem-oriented. But on the flip side, features are what people are already familiar with in the product. That's another way where people can try to relate their feedback to a structure.
So if you had to choose one, let's say if I were to try to train my stakeholders in categorizing feedback, a problem-type taxonomy versus a product-feature taxonomy, which one would you think is the better place to start? I don't know. So I'm not sure I understand the dichotomy between the two. I mean, I've seen people that have gone in and tried to train all the support people and success people how to be junior PMs, like, do the ask-the-five-whys of the customer to write me the right problem statement, right? I'll give an example, I think you'll get this one or not. There was, like, try to bucket, there was a previous slide. I'll just keep going back, you tell me when I get there. Right there. No? Oh yeah. For salespeople, I could say, try to classify, or you can explain the problem statement: the customer tried to do X, Y, and Z. Or the fact that they had a problem with feature area ABC. We want to make it super easy for somebody to categorize. The language I use to form that structure is really different. Using features versus pain points is different. Yep. So, how do we get people to use the language of product managers? Yeah, I've tried to make it consistent, right? And people think in different ways; it's only natural for people to just point at features, because they see that every day. They can say, this tab or this area that I'm familiar with, people have feedback on. It's much harder to go to the flip side, where it was hard to use. Yep, yeah, so again, I've seen people that have gone in and had reasonable success teaching their teams how to ask five whys, how to get to what is the actual problem here, et cetera, et cetera. I don't believe in that, because if you want, especially at scale, to try to teach a bunch of salespeople and success people how to give me problem statements, the problem statement's gonna be: well, they don't have a Jira integration.
So I value them being good at this less; that's our job. What I value from them is if you just diligently give me what the person actually said. I'm then happy to give you your outlet, because at the end of the day, what they want to do is associate this with ideas, or what they think of as features, right? That's fine. I don't want to fight against what is natural human behavior here, right? What I want is for them to give me the tools: who is it, what exactly did they say, so that I can do the work later to take that back apart, right? And the thing I'm hoping is, if I can work backwards, if I can basically reverse engineer why all my salespeople said they needed the Salesforce integration, I'm assuming that the 200 people they associated with that all want it for roughly the same reasons, or at least I can read it and break it up into, oh, there's three different constituencies here. So that was my key thing here, which is: I think it's our job to basically look through here and reverse engineer these back into, what is the problem behind this thing that they're trying to solve? And again, you're right, you can't always do that. If all you got was this, you can't possibly, like you said, the top ask is "we need a Salesforce integration." Well, now I have no way of reverse engineering that to what the actual problem is and who needs it. And so that's why we've learned to lean more on: if you can just give me a primary source, the person, contact information, and what they said, I can do the reverse engineering later. But if you're on a small team, you could probably get your folks to write and think in that way. It's not a terrible way to get them to understand the role of PM, right? I'm not saying it's terrible, it's a great idea. It's just, at scale, it's hard to execute and manage and create consistency around. Yes? Yeah.
Yeah, again, I think whether it's an end user or it's a person on the sales team, we all think we're little product managers and we want to come with, I think it's well-intentioned, we want to come with solutions, right? And so people are more engaged around solutions than being asked what is the problem you have, right? You could do that, you could set it up to ask them to do that, it's not the end of the world. But people tend to, for whatever reason, we tend to want to offer solutions, and I think it's just easier to work with that and then reverse engineer it later. But I've seen both. Any questions? Over there, yeah? Yeah. So the question is, how do we get feedback from folks that have come once and never come back again? What's your, I'm guessing, what's your business model? Is it, like, e-commerce? Yeah, e-commerce, yeah. Okay. This is a good question. This is way outside my problem area, because we tend to work in environments where people want to give you feedback. We get this question a lot, which is, oh my gosh, we want to get feedback on something. We've got e-commerce, and the problem with e-commerce is, when you don't do a good job, people just hit back and go to the next search result, right? So I'd be lying to you if I said I had a good solution for it. This is why we tend to cheat and we work in environments where people, if given the opportunity, already want to give feedback. The only thing I've seen people do around that is just doing retargeted ads, right? Because that inventory is really cheap, and I mean, that's why every e-commerce person on the planet retargets with that inventory: come back and buy this thing. But I've also heard of people doing, like, we want your feedback on your experience. It's really hard. In fact, I'd say that's the reason why the traditional solution was built for e-commerce, right? And brands, right?
And usually it comes with, like I said, we're gonna give you an Amazon gift card or something; there's some sort of incentive to get them to give feedback. I find that most people in e-commerce either are really early stage, and the way they're getting feedback is they're literally sitting next to people and talking to them, or they're popping up live chat in the app, or they're at scale and everything is guess-and-bucket-test or retargeting ads. But it's not my area of expertise, I apologize. You know what I'm saying? Okay, yes? So we have, we're out here? Yeah. Is this a marketplace for buying and selling cars? Yeah, okay. Do they go away because they've successfully completed their transaction? Or something else? Yeah, I mean, to me it would come back to, you're gonna treat this more like user research, right? Where you're gonna try to get some percentage of the people in the middle of the buying process to opt into 15 minutes with you, or whatever; the methodology there is gonna be a little bit different. Here, I'm basically trying to take qualitative stuff and package it in a way so it can be a quantitative signal for me. For you, it's, no, no, this is qualitative signal on qualitative signal, right? You're literally just trying to talk to some non-statistically-significant group of people, hear what they have to say about what their challenges are, and then see if you can instrument something, behavioral-analytics-wise, that backs up your hypothesis around what you heard in the interviews.