Hi, everyone. My name is Anshul Rampal. I'm a Senior Product Manager at Microsoft. I've been with Microsoft for about four and a half years, and during this stint I've been in the data space: I started off in databases, moved to analytics and BI, and all throughout I've been focused on delivering features. Along the way I started looking at retention as one of my side projects, and it became one of the core projects that I drove for the analytics service I was part of. In our presentation today, I'm going to talk about retention and why studying retention becomes a key driver for product growth. In fact, if you'd asked me maybe a year back, I of course, as a product manager, knew about retention and churn, but I never looked at retention as a key driver for growth, until I was thrown in at the deep end and had to figure out a way to understand the topic and eventually drive impact on product growth through retention. In the presentation today, I'm going to cover some of the basics of retention and why retention matters. We'll look at some of the definitions that are thrown around and what they really mean in the context of retention and churn. Then I'll switch gears and talk about the customer lifecycle and give you a glimpse of short-term, mid-term, and longer-term retention. Then I have a quick example to share from one of the projects that I drove, and lastly, we're going to talk about some of the actions that are a direct outcome of retention analysis work, which you can use for your projects as well. With that, let's get started. Now, the first question that arises is: why should I study retention? For that, let me give you a quick example. Imagine you're a PM for a music streaming service. It's a niche music streaming service that provides content catering to classical and spiritual music.
You've done your research, you know exactly the user base that you have, you know your competitive landscape, and you've already tested the MVP with your users, so now you're ready to ship the service. Once you do that, you're pretty happy that users are already subscribing to it and sharing feedback with you, which is great. But you also realize that you have another set of customers who tuned into your service and have another series of feedback and requirements that they would like to see you ship as part of future releases. Now imagine that situation and multiply it by 100 or even 1,000 customers, and you have all these conflicting requirements, where your end goal of course is to make the customers happy, but you also need to understand: do I want to prioritize features that are coming from my new users, or should I prioritize features that allow me to keep my existing customers with the service? That's the trade-off that every product manager has to face, and that's where retention analysis becomes really critical. Now, it won't tell you whether the music streaming service is a great idea or whether there's a market for it, but what it will tell you is how to encourage people to use your service so that they keep coming back to it, and how to keep your existing customers happy using your service. So that's where retention analysis really provides that instrument for product managers to ship features that address those two factors. Now the question that comes up is: sure, it sounds great, I get it, we should do retention analysis, but I see that my active user count is growing. So why should I focus on retention so much when I am continuously seeing a spike in the core KPI, which is growing my active user count? That's a fair point.
However, if you're already looking at your monthly active customers as a metric, you might see that, for example, about 100,000 net new customers join you in a given month, but then you double-click on it and you also see that about 90,000 customers churn out and about 110,000 customers are returning customers. So net-net, you have about 100,000 monthly active users with your service in a given month. While that number sounds impressive, because you're also seeing such high numbers of customers churning out, you soon realize that you've pretty much exhausted the pool of customers that you can tap into for growing your service. Every single customer that you go and engage has probably already used your service and left, and they don't want to come back. So you hit a plateau really, really soon. That's why retention analysis becomes really key for driving product growth. Even if you make small changes, small paper-cut changes here and there, once you aggregate those changes, they have a significant impact on overall product growth. Sure, so that makes sense: I know if I have high churn in my customers, I need to understand why they are churning. But the point is, I'm already talking to my customers, and I'm already doing all of these qualitative surveys and customer interviews. So why do I have to go off on another tangent and get into another work stream of doing additional customer interviews when I'm already doing this? Now, it's funny, right? As PMs, our goal is to continuously talk to customers, gather their feedback, and make sure we are addressing their needs as part of product enhancements, bug fixes, all of that. But you realize that the number of conversations you can have with these customers is finite. The customer base is finite.
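To make that example concrete, here's a minimal sketch of one way to slice those monthly numbers (the function and field names are my own illustration, not a standard metric definition):

```python
# Decompose one month's active-user numbers into new, returning, and churned
# cohorts, using the illustrative figures from the example above.

def monthly_health(new, returning, churned):
    """Summarize one month of activity; definitions are illustrative."""
    prior_base = returning + churned          # last month's actives: stayed or left
    return {
        "active_this_month": new + returning, # everyone who showed up this month
        "net_change": new - churned,          # growth after accounting for churn
        "monthly_churn_rate": churned / prior_base,
    }

stats = monthly_health(new=100_000, returning=110_000, churned=90_000)
# A 45% monthly churn rate shows headline growth masking heavy leakage.
print(stats["monthly_churn_rate"])
```

The point of a breakdown like this is exactly the one in the talk: a healthy-looking top-line number can hide a churn rate high enough to exhaust the addressable market.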
It's more than likely that you're talking to happy customers, because they are the customers who are keen on sharing feedback, who are excited to see new features shipped, and so they're very engaged. So what happens is that you focus on just that set of customers, and you ignore this dark matter, this black hole of customers who barely touched the service and left, and you have no way of capturing their feedback. So all of the feedback that you get is biased towards one customer segment, and you kind of ignore the rest. In a way, your feedback is limited, and it's not really addressing the needs of the customers who churn from the service. So as a product manager, you need to retune and rethink: there is of course a group of customers that you'll always talk to, but how do you either engage or track the other customers through product telemetry so that you know enough about the customers who churned? Now, before I jump into the next section of the presentation, I wanted to talk about certain terms that you will come across when it comes to retention study. The first and foremost is, of course: who is a customer? Now, it seems intuitive. Yes, I should know who my customer is; it's a person who's using the service. But when it comes to retention analysis, a customer could be many different things. It could be a subscription ID. It could be an account ID. It could be an email ID. It could be a tenant ID. It could be a workspace ID. It could be a browser ID. It could be a MAC address. There are so many ways these customers show up when they engage with your service. And unless and until you clearly define who that customer is, the chances are that you might end up looking at the wrong set of data, which can distort your retention and churn metrics. So it's super important to define who that customer is.
In the case of a streaming service, a customer could be the browser through which the customer logged in, so you can track the time they spent with your service, or it could be the subscription ID that a customer signed up with, which you can then track. So based on your service, the customer's definition changes. What is an engagement? Every time a customer logs into a product or a service, we tend to put them in the engaged bucket. Now, that engagement will differ. If you call them engaged in the very early stages, for example in week one or week two, you might have lopsided data, because you're calling that customer engaged in a very early time window where the likelihood of that customer dropping off is high, and so your engagement metrics could be out of order. So it's important to define the time period over which you call a customer engaged; that's the whole engagement cycle. Similar to engagement is this concept called activation. Now, this could be internal to Microsoft, but we use this word a lot. A customer might be engaged, and engaged could simply mean that a customer has opened the app on their mobile phone. But activation really means the moment when the customer realizes value from the service. In the case of a streaming service, it could be me hearing music of a particular genre that I'm really excited about, that I'm really enjoying, where I'm really getting value from the service. So that's activation: it's that moment in the customer's journey when they realize the value of the service. And it's generally true that once a customer is activated, they don't churn from the service until, of course, it's the end of the lifecycle and they retire or exit from that lifecycle.
Once a customer is activated, they tend to stay activated throughout the duration of their service lifecycle. Churn, again, is a customer who joined your service or subscribed to the product and then left. It could be that they deleted the subscription ID and never showed up again: no engagement, no signal we tracked that showed any kind of activity in the service. A customer who never returns is a churned customer. Now, time is a very important concept in retention, because it determines whether a customer has the likelihood of coming back, which makes them a returning customer, or is not going to come back, which makes them a churned customer. How you measure time of course depends on your service, but it's an important constituent of an overall retention study. The next concept is hypothesis. Any PM who's working on a retention study always comes across this term, and it's where you get started with any kind of retention work. You have to hypothesize why your customer is churning. It could be that the onboarding experience is tough, or that you have a high price bar for the customer, so price is a friction. It could be that the service is not available on Mac, for example. So you come up with a set of hypotheses, and then you validate those hypotheses with a set of data points or a set of interviews. And then you either continue developing a hypothesis, or you debunk it, saying, no, that's incorrect and my hypothesis was wrong. So to study any kind of churn or retention pattern, you have to come up with a set of hypotheses that theoretically give you an understanding of why a customer is churning, and then you of course go and validate them.
And lastly, retention, which in this context means you're able to retain a customer who had a likelihood of churning: a customer you've kept through the actual course of the lifecycle of the service, or a customer who was on the brink of churning and has been retained. That gives you your overall retention metric. Let's switch gears. I want to talk about active versus activated, because that's an important concept that gets applied in a retention study. And for that, I wanted to start off with this heat map. That's one way of presenting your retention data to your management. There are other ways as well; this is by no means the only way, but it's a very effective way to see how your customers are trending, or how your service is trending in terms of retaining customers. On the left is, of course, the month, and month zero is when you have your customers coming into your service. What happens between month zero and month one is really interesting. What you'll see is that for this particular service, we lost almost 55% of the customers in month one. And that number stayed steady: only 45% stayed with the service, the rest dropped off, and that number only slightly improved in the subsequent months. Month two is even worse: of the customers that stayed in month one, you've already lost half by month two. And by the end of month 12, the numbers paint a very bleak picture for this particular service. Tracking this month over month, and identifying the new customer cohorts that join you each month, depicts your service's health and your customers' engagement with the service. It gives you signals, and from those you come up with a set of hypotheses on why such a sharp decline, such a sharp drop, is happening for your customers.
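As a rough sketch of how a heat map like this gets built underneath (the event data, function, and numbers here are all made up for illustration, not the service's real data):

```python
from collections import defaultdict

# Build a cohort retention table like the heat map described above: rows are
# signup-month cohorts, columns are months since signup, values are the
# percentage of the cohort still active that month.

# (user_id, signup_month, active_month) — illustrative activity events
events = [
    ("u1", 0, 0), ("u1", 0, 1), ("u1", 0, 2),
    ("u2", 0, 0), ("u2", 0, 1),
    ("u3", 0, 0),               # u3 churned after month 0
    ("u4", 1, 1), ("u4", 1, 2),
    ("u5", 1, 1),               # u5 churned after month 1
]

def retention_table(events):
    cohorts = defaultdict(set)   # signup month -> users in that cohort
    active = defaultdict(set)    # (signup month, months since signup) -> users
    for user, signup, month in events:
        cohorts[signup].add(user)
        active[(signup, month - signup)].add(user)
    table = {}
    for cohort, users in cohorts.items():
        max_off = max(off for (c, off) in list(active) if c == cohort)
        table[cohort] = {
            off: round(100 * len(active.get((cohort, off), set())) / len(users))
            for off in range(max_off + 1)
        }
    return table

print(retention_table(events))
```

Each row starts at 100% in month zero and decays from there, which is exactly the shape the heat map on the slide visualizes.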
And then you as a PM take measures to improve that metric, to improve customer retention month over month. Now, this is another way of looking at retention, where you have your potential customers, aka the market that is targeted for the service, and then you have the state where you've just acquired the customer. It could be week zero or week one, where the customer is just active: they've probably set up the subscription or logged in once, and they aren't really engaged at this point in time. It's a very early cycle, but at least you have a way of knowing that there is high potential for converting these customers into activated customers. You can probably call this your onboarding phase. The goal of the onboarding phase is a successful funnel conversion, thereby minimizing the time to value for the customer and ensuring the customer has the confidence to stay with the product. Most customers are actually lost during this phase, as you can see in the heat map here, and for seemingly small issues that we often don't realize until we build these heat maps and see how many customers we are losing at this early stage. It could be anything from unfamiliar terminology, to crappy docs, to poor defaults that you've set up for your service that generally don't work for a large majority of customers. This onboarding phase can have a profound impact on churn rates in the first few days. Discovering these issues can be a challenge, particularly when the onboarding phase spans documentation, sign-up flows, and in-product experiences. So, as I said, this is where we see the most drop-off happening and we lose the customer. However, customers who've gone through this stage and moved from acquired to active to activated fall under mid-term retention. That's where you're really encouraging habit forming with these customers.
And finding the right time period to use is critical and meaningful for retention tracking. The time window needs to be sufficiently long to capture the natural usage frequency of the product. However, an unrealistically long time window can lead to artificially high retention rates and can also mask underlying product issues. On the other hand, if the time window is too short relative to the natural usage frequency, inactive users may be wrongly classified as churned; call that false churn. False-churn and dormant users will commonly benefit from nurturing activities, whereas your promotional credits and other incentives are best suited for truly churned cohorts. So if your metrics use too short a window, or too long a window, to classify a customer as churned, what happens is that your marketing has all of these promotional activities targeted at these kinds of customers, and if you're not defining that customer cohort well, those activities might be targeted at the wrong customer cohort, thereby having less impact. So it's very important to define the metrics for when a customer is active, when they're activated, and when they've churned. Here's a simple example. For a mindfulness app, just opening the app and browsing it, sure, that's the acquired stage; it could even be an engaged stage, but it's not an activated stage. For that mindfulness app, the critical event for a customer to be activated is completing a meditation session. If it's a lifestyle app, it's actually booking a class, not just browsing through the classes. Or if it's a mobile game publisher, then of course something like playing a game would be a good critical event for your service to trigger on, to determine whether that customer is activated or not. Now let me take a step back. I hope that so far you have an understanding of retention and churn and how important they are for product growth.
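To make the window and critical-event ideas concrete, here's a minimal sketch of both checks. The 7- and 30-day thresholds, the event names, and the 14-day activation window are all my assumptions for illustration, not real service definitions; they should be tuned to your product's natural usage frequency.

```python
from datetime import date, timedelta

# 1) Active / dormant ("false churn" risk) / churned, by days since last use.
def classify_user(days_inactive, active_window=7, churn_window=30):
    if days_inactive <= active_window:
        return "active"    # within the product's natural usage cadence
    if days_inactive <= churn_window:
        return "dormant"   # nurture: tips, re-engagement nudges
    return "churned"       # win-back: promotional credits, incentives

# 2) Activation via a per-product "critical event", not mere app opens.
CRITICAL_EVENTS = {
    "mindfulness_app": "meditation_completed",  # not just browsing the app
    "lifestyle_app": "class_booked",            # not just browsing classes
    "mobile_game": "game_played",
}

def is_activated(product, events, signup, window_days=14):
    """True if the product's critical event occurred within the window."""
    target = CRITICAL_EVENTS[product]
    cutoff = signup + timedelta(days=window_days)
    return any(name == target and signup <= day <= cutoff
               for name, day in events)

print(classify_user(15))  # "dormant": too soon to call this user churned
print(is_activated("mindfulness_app",
                   [("app_opened", date(2024, 1, 2)),
                    ("meditation_completed", date(2024, 1, 5))],
                   signup=date(2024, 1, 1)))  # True
```

Notice that opening the app alone would leave `is_activated` false; only the intentional action counts, which mirrors the mindfulness-app example above.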
Like I said at the very beginning, as I was introducing myself, a year back I knew about these terms, I knew about product retention, but I really didn't know enough to appreciate the value it brings to product growth. That was when I was thrown in at the deep end. Without getting into too much detail about the scenario, I was asked to study a set of customers who were migrating from a legacy app to a cloud service. What we had found in that journey was a high drop-off, a very low conversion rate of customers actually moving into production. We had this long, very fat funnel of customers who wanted to move away from their legacy systems and get onto our service, but as we moved down the funnel, we saw that the actual conversion was very low. So that was the problem I was trying to solve. And the funny thing is that everybody had their own hypothesis of why that was happening. It could have stemmed from a past experience they had when they engaged with a customer, or it could have been anecdotal, something they'd just heard from their peers. But what I realized was that there was no clear source for me to really know what the customer's journey was, how they were engaging with the service, and where the big bottlenecks were. I had to look at many disparate data sources, including our CRM systems and a bunch of Excel spreadsheets. That didn't give me clear pointers on why this was happening, but it at least gave me the ability to hypothesize about the potential reasons customers were not staying with our service, even though they had this high business need to move away from the legacy service. Once I had set up hypotheses, I had to very quickly do qualitative and quantitative analysis. For me, the qualitative analysis seemed easy.
I actually wanted to talk to the customers who had churned from the service, who didn't move forward fully into production. It was super hard. Imagine a customer has already churned; trying to get hold of them and capture their feedback on what their experience was is one of the hardest things to pull off, but I was able to find a few customers I could talk to. In addition, I also spoke to partners, our sellers, engineers, and a bunch of other folks. These weren't open-ended conversations; I was always validating the hypotheses I already had, or creating new hypotheses about what could have led to the churn and the sharp decline. I also did some quantitative research, and that included understanding who the customer was. I looked at a lot of subscription IDs. One customer had five subscription IDs; two of those were fine, they weren't really churning, but on the other three subscription IDs we saw the churn happening. So I had to filter down to the subscription IDs I really needed to study. I also looked at product telemetry: how those customers engaged, what kind of data they had, what kind of connectors they had, a lot of that research. And again, the whole point was to validate those hypotheses so that we could come up with a set of recommendations on what we needed to do at a product level, at a marketing level, and at an incentive level to help guide our customers at each step of the way as they migrated from a legacy source to our cloud-based service. Finally, I presented all of that analysis back to the product teams in charge of managing that service, with clearly actionable insights and recommendations on what needed to happen, be it in the space of automation or partner enablement. There were all of these different recommendations that came out of that study.
And then once those recommendations were implemented, my job was to track, measure, and report. And I should say that that whole exercise really operationalized the retention process, and it brought forward very data-driven problem statements, hypotheses, and recommendations that enabled the business to act quickly, versus anecdotal feedback that wasn't backed by real data. So that was my entry into retention, and this exercise gave me a deep appreciation for not just talking to happy customers but also trying to get into the psyche of the customers we've lost, having the hard conversations, and then bubbling those conversations up so they can translate into actual product recommendations. Those recommendations would not just benefit the churned customers but would also benefit the existing customers who are in the process of migrating from their legacy sources to our cloud service. So in the last two slides, I really want to sum it up and talk about this process of finding the red. At Microsoft, we have this process of continuously identifying and addressing the issues that cause our customers to churn, and we call it finding the red. Now, to find the red, there are certain principles that I think are applicable not just at Microsoft but broadly in the industry. Number one: to do retention and churn analysis, it's important to maintain the integrity and privacy of personal information. The data we collect, both qualitatively and quantitatively, should be used in aggregate to identify behavioral patterns, customer cohorts, and product issues, but never to focus on one particular customer and go to that customer because you have information about all of their usage.
That's never the intent, and it's important to maintain the integrity and privacy of the personal information that we store on behalf of the customer. The second principle is defining your retention metrics based on meaningful usage and intentional customer actions, like we saw in the last two slides. For a mindfulness app, that's actually doing a meditation session, or for a music app, it's actually listening to streaming music. It's important to define those critical events and the time window in which those events happen, because if your time window is too large or too short, you might look at the wrong customer cohorts and then direct changes at a customer who is probably not in the right group to receive them. And it's not a one-time process: you have to carefully select and reevaluate the time period over which retention should be measured for your product. This period should reflect common usage patterns. Products with frequent use patterns will start with daily or weekly windows, while occasional-use products will need longer time periods. You also need to define retention based on the customer cohorts for your product; there is no one size that fits all. You could have one customer cohort in your SMC and SMB customers, or your medium-sized customers, and then your enterprise customers are a different cohort, and the engagement they have with the service is also different. So when you're thinking about building those hypotheses, it's always beneficial to create them against customer cohorts. And then lastly, the other important thing is using these qualitative and quantitative signals to understand the why, which is the most important thing.
It's not a set of anecdotes, a set of "I think that's why customers are leaving," but really digging deeper into your data and product telemetry to understand why the customer is churning. A combination of both qualitative and quantitative is the proven approach to get to that answer. And use end-to-end scenarios to understand your service health. It's fine to look at a problem in a silo and do all of your analysis there, but you also need to understand the end outcome that the customer is looking for when using the service. It's important to map the end-to-end scenario; that gives you a better appreciation of how to increase your customers' longevity with the service. Now, I think this is the last slide, and I just wanted to highlight some of the possible outcomes of a retention study. Of course, when we spend time doing a retention study, our goal is to increase retention and drive lower churn for our customers. But there are several other outcomes that come as a result of it. The first, of course, is improving the product. Once you establish why your customers are churning, you definitely want to use all of that knowledge to go back and convert it into new features, existing feature enhancements, or product bug fixes; it could be several of those. A retention study's main outcome is to improve the product: you basically up-level the quality of your service so that your customer stays. Another possible outcome of a retention study could be messaging. You realize that there's not much you can do to your service; your service is perfect. But what you communicate about the usage of the service is incorrect. Customers perceive your service to do X, but it does Y.
A good retention study will also give you very targeted feedback on how to redirect or re-articulate your messaging so that it speaks to customers in terms of their needs versus something else. So a retention study will give you a very good understanding of whether your messaging works or not. In some cases, it also allows you to have one-to-one engagements, for example with enterprise customers. It's less relevant for SMCs and SMBs, but with your enterprise and high-value customers, your data will tell you if a customer is likely to churn. And so you can take preemptive action: engage with the field, understand what kind of blockers or challenges the customer is facing, and then try to mitigate that by hand-holding the customer and helping them get over the hump, so that you don't lose that customer and instead retain them for the longer run. The last two outcomes are around targeted acquisition and pricing. If your churn rate is high, it may be the case that you haven't really achieved product-market fit. Maybe you're talking to the wrong segment; maybe the right segment exists somewhere else, but you're out there talking to somebody who's not really reflective of what the service offers. And so that becomes a case where you have to redirect your acquisition strategy so that you are targeting the right set of customers. A retention study will tell you that, too. And lastly, pricing. Oftentimes we see that for customers, especially on the lower end, not so much enterprise customers, poor price performance is a big reason for churn. A retention study will give you enough indicators to say that you might have to create another pricing tier, or you have to redo your pricing.
And that would really open the floodgates in terms of your entry funnel, where you're onboarding your customers: you're able to bring in a lot more customers because you can reduce the barrier to entry through your pricing changes. So in a nutshell, these are really the key outcomes that a retention study can drive. Beyond making sure that you are able to retain your customers longer, it also becomes an enabler for product growth by addressing all of these outcomes that are important for any product journey. That's all I had for this presentation. Hopefully you're now able to appreciate retention as a key tenet for any product manager to lean in on to help drive growth in the product, and you have a better appreciation of your customers' journey and their pain points. With that, I hope you enjoyed the presentation. Feel free to ping me on LinkedIn, and if you have any additional questions, I'm happy to engage with you, talk about them, and see if we can solve some of the problems you may be experiencing in your world. With that, thank you, bye-bye.