Hello, my name is Chris Rader and I'm a product director at CenterCode. I specialize in beta testing, but I have a background in product analytics and user research, and I've been in product management for around five to six years now. Prior to CenterCode I worked at a company called Western Digital, where I handled user research for most of the consumer products and some of the business products. That gave me a fair amount of experience working with product managers, product development engineers, QA, and user experience teams. If you have any follow-up questions, feel free to connect with me on LinkedIn or shoot me an email, and I'll try to get back to you as soon as I can.

I'm excited to be doing this webinar for Product School. The focus of this webinar is how to take your product from something that's good or average to something that's really great, something that users enjoy, something that really stands out in your market. I work with a lot of companies that are going through, say, a beta phase while building their products, so I get to work with product managers, see where their products are at in the early stages, and look at how we can make them better.

Really quick, I just wanted to give you some background on CenterCode. This is the company I've worked with for almost six years. They focus on continuous customer-driven product improvement: the idea of improving your products using what we call alpha, beta, and delta testing. This allows you to bring your market into your product development process, to help you shape those products, build them, and tune them to your customers' preferences. At CenterCode we provide a SaaS platform that teams use to run these alpha, beta, and delta tests; it's also what our services team uses to manage those projects. We have services where a fully dedicated person manages an entire project or program for you. We have a global network of testers, over 250,000 people across the world, that we recruit for testing. And of course we have a framework, a way of doing the testing itself, and we provide a certification to teach teams how to run these programs.

You'll see a nice list here of companies we work with; some of the biggest names in tech are our customers. Here's a brief story about how we work with Autodesk: more than 90 product managers at Autodesk use the CenterCode software to validate their designs and put their product in front of customers before it launches, before it reaches the rest of their customers. Roku has been leveraging our services for years, since some of their very first products. We typically partner with high-growth technology companies and modern enterprises.

For this webinar, I want to go through a sequence of topics. These lead up to the challenge that a lot of product managers are facing, and then the methods and things that we as product managers can do to move our product from an average product to something that's great. We're going to cover capturing product KPIs, prioritizing the things we need to improve in our product, and learning about the things people like, the things that are delighting our users. How will my product, or say an update to a product, be received by my market? I know this is a lot of what keeps us up at night.
We want to know whether or not we're going to be successful, whether this new release has the right features, whether this new product is going to, say, beat out the competition. Will people be happy or satisfied with my product? How is acceptance measured? How are we judging whether or not our product meets the expectations of our users?

A lot of times we'll look at things like usage or product analytics. This can tell us who's downloaded the product, give us our monthly active users, show whether or not that new feature we're implementing is being used, how long people are using each of these features, how many errors I'm running into, or how stable my builds are. Customer satisfaction is definitely one of those things we use to measure acceptance. This is the perception or attitude that customers have about the product. Typically we'll see things like NPS, or Net Promoter Score. For consumer-facing products that sell on, say, Amazon, you'll see things like star ratings. We'll also have ratings on, say, G2 Crowd or anything else that's used to rate our software. Of course we have our support volume. This gives us a good idea of whether or not our products are broken or something is wrong. We want to look at things like call drivers, but we also want to look at our support volume to see whether or not we're hitting our mark, or whether people are calling in just to get more information. And we keep our eye on product churn or returns. We want to make sure people are satisfied with the product and are keeping it. For the software folks out there, it's whether or not customers continue to use the product or stop using it, whether they want to refund the product or whether they resubscribe. And then we have, obviously, the mother of all acceptance criteria: whether or not I'm hitting my revenue or sales goals.

Now, these are all metrics, or categories of metrics, that we look at to understand whether or not our product is accepted by our market. These are common ones; they're not necessarily the full list of things we use to judge acceptance, just some of the most common.

Now, in the world of product development, I've worked in the past with a lot of product managers, some of whom aren't necessarily willing to listen to the data and are more willing to take the leap themselves, with this mindset of let's wait and see what happens after release rather than going with the data we have. This is the predicament that a lot of product managers, or the people providing data for product managers, are sitting in: how much of this data can I trust? How much is accurate? But as product managers, we should have a way to predict how successful we're going to be. So can we predict product success? Can we look at our product while it's still in development and see whether or not we're going to be successful before we reach our market?

So we're going to hop into our second section, which is about capturing product KPIs. We talked a little bit about the ways we measure acceptance; these are the things we can capture during development to give us an idea of how well our product is performing. So here we have essentially just quarters sitting along the bottom.
So this looks like a two-year span, and a modern product delivery model: the idea of building a product over a period of time. We take these different phases. We're in development; obviously most of us are working in some kind of agile framework, so we're continuously developing. We're getting QA involved. We're testing. We maybe have some user research going on. But just before we launch the product, we're likely going through two different phases: the alpha and beta phases. That's when the product is almost ready to release and we have all these moving pieces ready to go, and we want to test. We want to put the product into real users' environments and get a sense of how it's working. Then after we launch, we have all these iterative releases where we're adding new features, getting more improvements, more stabilization and software improvements in there. So ideally we have these two phases, the alpha and beta phases, where we have a chance to see how well the product is working as all the components come together, as it's essentially a release candidate for our product.

During that time, from what we've seen at CenterCode across our industry and what I've experienced myself, here are the three most common ways of getting a sense of how well your product is performing during those tests. We have our Net Promoter Score and star ratings, which are things we typically capture at the end of a project, say a beta test or an alpha test, where we're looking at how our users are receiving this product and what their sentiment is toward it. For Net Promoter Score, we're taking that likeliness to recommend. We have a scale of basically negative 100 to 100, and anywhere in that range we could get a score, for example a 30, and that tells us what our Net Promoter Score is. The calculation is based on promoters and detractors: it takes the difference between those two groups and gives you a score. The star rating is obviously closely related to the Amazon star rating, but it's essentially a mock review of the product. It's a scale from one to five: basically, how satisfied or dissatisfied are you with the product? You typically see this as an average, and most of the time it will have a decimal associated with it, like in this case a 4.2 on a five-point rating scale. The last one isn't necessarily a score; it's the issues that were identified on your project. It gives you a count of the issues that came in during testing, broken down by severity, so you know how important these things are and what you need to pay attention to.

Now, obviously I want to use those three metrics we just looked at to get an idea of how well our product is performing, but there's always something influencing the accuracy of the data. I have a lot of product managers who come up and ask, you know, I want to predict what my NPS score or my star rating is going to be. A lot of times NPS scores and star ratings are the most common metrics product managers use to judge their success, outside of revenue and sales, obviously.
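Just to make those two calculations concrete, here's a minimal Python sketch of the promoters-minus-detractors math and the star-rating average I just described. The function names and the sample numbers are purely illustrative; this isn't CenterCode's implementation, only the standard definitions.

```python
# A minimal sketch of the standard NPS and star-rating math, assuming a
# plain list of survey responses. Respondents answer "How likely are you
# to recommend this product?" on a 0-10 scale; 9-10 are promoters, 0-6 are
# detractors, and NPS is the percentage of promoters minus the percentage
# of detractors, so it lands between -100 and 100.

def net_promoter_score(ratings):
    """Return NPS for a list of 0-10 likelihood-to-recommend ratings."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def star_rating(ratings):
    """Return the average of 1-5 star ratings, e.g. 4.2."""
    return round(sum(ratings) / len(ratings), 1)

# 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 4]))  # 30.0
print(star_rating([5, 4, 4, 5, 4, 4, 4, 4, 4, 4]))           # 4.2
```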
But there's this idea that NPS is more holistic than what you typically capture in a beta test. When your product is launched and you're capturing something like a Net Promoter Score or a star rating, it's including a few things that we're not necessarily testing in an alpha or beta test. A lot of times you have things like a marketing experience, a sales experience, or a support experience. A lot of times a customer will have to go through purchasing the product or getting support on the product; that's usually not tested, so those things aren't influencing our beta scores. Another thing influencing the accuracy is whether you're providing the product or the subscription for free, or whether they're paying for it. A lot of times this ends up biasing the users, usually positively, as they rate the product. There's also your product state: a lot of times there's some cushion between your beta test and what's actually going to be released. Obviously we want to address anything that's going to impact our scores, but then that puts into question how accurate information like NPS or star rating is if I'm going to fix some of the things I ran into in one of these early tests. So we typically recommend, when you're running these tests, run multiple projects so you find the issues early and fix them, and eventually you get to the point where your beta build is essentially your release candidate, which lets you get a more accurate number. And then of course there's your target market, the people who should be purchasing the product. When we talk about recruiting for a beta test, most of the time it's not your actual customers. You're making estimates of who should be buying this product based on customer needs. In a beta test you're recruiting based on those characteristics, but they're not always the people who end up buying the product; we're trying to personify what our target market looks like, and we recruit for those people. So these are all things that can potentially impact the accuracy of those Net Promoter Scores or star ratings in a beta test. And remember, our goal is to identify metrics with which we can predict the success of our product during development rather than after development.

So I hope that advice helps: the idea of capturing those product KPIs during your beta tests, doing your best to get that information as accurate as possible by recruiting the right target market and by fixing issues before you get to the test you want to treat as the KPI version of your beta, where you're trying to see those ratings beforehand. Of course, you can capture things like usage analytics during those beta tests too, so that should also help you answer the question of how well this product will be received.

Next, I want to talk a little about prioritizing fixes and improvements during this pre-launch phase. Ideally we'll be able to address some things beforehand, and as early as possible, because we can't make many changes right before we launch. We don't want to push back those dates unless we absolutely have to, so we don't like to delay product launches. Sometimes that first-to-market strategy is extremely good for us, but we also don't want to launch with something that's going to negatively affect our brand image. So let's get into prioritizing these fixes and improvements early on.
So I get asked this question pretty often among product managers: can I actually make a difference in what's essentially the last few minutes of development? To do this, we need to take a look at what we have to work with. We have that Net Promoter Score and that star rating, which are the product KPIs we're capturing, and we have a list of issues and ideas, things people have been submitting to us along the way in those alpha and beta tests. You can see that the list of issues and ideas is obviously impacting that Net Promoter Score and that star rating. It's essentially the qualitative data behind them: the Net Promoter Score and star rating are numbers telling me how many people feel a certain way about the product, so I have the average ratings, and then the list of issues and ideas is what's driving those scores. I have issues that are typically detracting from my scores, and then I have ideas doing the same, basically things that were missed or features that are missing from the product.

What I want to do with that list is see what's actually driving my score. I want to take a list of what's important to me as a product manager and what's important to my users, and then I want to see how popular those things are. For example, which issues are receiving a lot of attention? Which things are going to cause support calls to go up, tickets to come through, or returns? We take this idea of what's important and what's popular to identify what's going to be impactful in my product. This is essentially a way of prioritizing that list of issues and ideas along those two components. At CenterCode, when we run our beta projects and evaluate these products, we turn those into numbers, which helps us prioritize based on a figure.

So we have at CenterCode this idea of maximizing your test results. You can see in the top left corner, the top left box, we have recommendations. These are the top things we need to fix. That's not all the issues; it's only 18% of them. That 12 is only 18% of the total number of issues I have, but it actually equates to 76% of the impact. Those 12 issues are the most impactful, meaning they're the most important and most popular. That way I'm maximizing the attention and effort I put into development by hitting the things that are actually meaningful, or as we say, impactful. As you get closer to launch, you want to make sure you're focusing on the things that are impactful, not necessarily the huge list. If you have 100 issues you're looking at, you don't have to fix all of them. The truth is you don't have time to fix them all unless you're going to stop your release, and even then you're probably still going to be chasing issues the whole time throughout development. So it's this concept of really identifying the things that are going to drive your score up and making sure you reduce the things that are driving your score down.

I know it wasn't a very long section, but the concept is prioritizing based on those KPIs: you want to look at what's causing those scores to go up and down. The idea is that you can get a gauge of how successful you're going to be by knowing what's driving those scores up and down, and it gives you something actionable.
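If it helps to see the mechanics, here's a small Python sketch of that importance-times-popularity idea. The issue list, the severity and report-count fields, the multiplication, and the 75% cutoff are all illustrative assumptions on my part, not CenterCode's actual scoring model; the point is just that a short, ranked shortlist can cover most of the impact.

```python
# A hedged sketch of prioritizing issues by importance x popularity.
# The data, scoring formula, and cutoff are assumptions for illustration.

issues = [
    # (title, importance 1-5, popularity = number of testers reporting it)
    ("Sync fails on first login", 5, 42),
    ("Crash when exporting report", 5, 18),
    ("Settings page loads slowly", 3, 25),
    ("Typo in onboarding email", 1, 30),
    ("Dark mode contrast too low", 2, 12),
]

# Score each issue and rank from most to least impactful.
scored = [(title, importance * popularity) for title, importance, popularity in issues]
scored.sort(key=lambda item: item[1], reverse=True)

# Keep adding issues to the shortlist until ~75% of total impact is covered.
total_impact = sum(score for _, score in scored)
running, shortlist = 0, []
for title, score in scored:
    shortlist.append((title, score))
    running += score
    if running / total_impact >= 0.75:
        break

for title, score in shortlist:
    print(f"{score:4d}  {title}")
```

In this toy data, the top three of five issues account for roughly 87% of the total impact, which is the same shape as the 18%-of-issues, 76%-of-impact example on the slide.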
You can take that list of what I need to fix or improve in my product, and you can change those things before release if you want your score to go up. And again, we're talking about this concept of taking something average to something good or great.

So next we go into the section that's really about learning what's working well; we call these delighters. We want to identify the things that are working well in my product. Did the users, the customers, like what I built? I've heard this saying a lot, but the thinking in the past was that if they didn't say anything about the product, didn't submit an issue, had no ideas on it, that was good news: the users liked the feature, or the onboarding was simple, just because they didn't say anything about it. That thinking has been challenged recently. At CenterCode we obviously see a lot of products go through our processes, and the point is that we can collect whether or not things are working. We can collect that good news. We don't have to take silence as a sign that something's working well. So I'm pushing this out there for everyone as they put their products through those alpha and beta phases of testing: try to collect what we call praise. What do people like, what's working with the product, what do they enjoy? This gives you a sense of how well you met those expectations or how well those features are working. Or say you designed a new onboarding experience: how is it being received? Is it simple? Is it intuitive? We don't have to wait for no news; let's ask what's working well. And this gives you a list of positives, things you can take to engineering to say, hey, you did a great job, and things you can take to marketing or sales to give them an idea of what's working well.

Being in the space where we help product managers build products, I want to share where we typically see praise being used. A lot of times, people use praise in questions and answers. For example, you'll see questions and answers on Amazon about what features people liked or how they're using the product specifically. So you can get ahead of some of those questions and answers with the things people were enjoying, rather than with a list of things that aren't working. We see teams use user-generated content, or testimonials, about what features people like or how they're using the product. It's a great way to get content out there, and because it comes early, you can get a quicker start on your marketing strategy. You can also use praise as confirmation of your current marketing strategy: you can see what users are liking and what they're saying, and use it to judge whether you're on base or off base with your current strategy. One thing I love to see is customer stories. When you use praise to tell the story of how a user was using the product and what they specifically liked, it's more relatable; you let the user basically speak for the product and say what they liked about it specifically. Those customer stories can be extremely helpful in sales calls or sales meetings when you're talking to prospects, and they really help you understand what's beneficial for the customer.

So I wanted to recap everything we covered here. Using that beta test to capture those product KPIs before launch gives us some confidence in what we're actually going to be launching with.
It gives us basically a temperature read on where we're at. Using impactful issues and ideas lets us judge what we can fix or address before we go to launch. And we can leverage praise, what people like about the product experience, to get more customers, tell better stories, and reach our revenue goals.

One thing I kind of want to end with, in this concept of going from average to above average, is some data we've collected across a lot of the projects we ran in 2020 for product development. This is probably going to look complicated to a lot of people, but we have two charts here. The left one is NPS, or Net Promoter Score, and the right one is star rating. What we're looking at is the percentile rank of Net Promoter Scores and star ratings. What I really want you to focus on here are the gray areas, the gray bars, in these two charts. You can see the text indicating that this gray area is where 50% of projects land. So if you're getting an NPS score between negative three and 50, you're sitting at basically average; this is where most people are at. And on the star rating side, between 3.8 and 4.5, your product is average. Looking at this gives us a good idea of how we can leverage those improvements we've identified, the things we're prioritizing, to say, okay, if I were to adjust these things, implement these things, or improve these things, could I make my product great? Could I beat out some of these other products? Could I be better than my competitor? We don't typically show a lot of this information, but these are great metrics to look at. You can grab a screenshot and use it for your next beta test, or whatever test you're running next, to get an idea of where you stack up against the rest of the industry.

I want to say thank you for the opportunity, for listening to me and letting me go through my presentation. Of course, you can connect with me on LinkedIn; I'm happy to answer any questions or just chat about my experience with product management. I look forward to doing this again in the future. I appreciate all of your time. Thank you.