Hi, welcome to leading a product team through fast-paced product launches. I'm Maya Kaczorowski, the head of product at Tailscale. Tailscale is a WireGuard-based mesh VPN solution. Before Tailscale, I was at GitHub working on software supply chain security, and before that at Google Cloud working on Kubernetes and container security, encryption, and encryption key management. Startups are different from big companies, and so today I'll be talking about how to lead a product team through fast-paced product launches at a startup like Tailscale, where I've had to build and scale up a team, and where our pace of launches is hectic at best compared to, say, GitHub and Google, where you might launch things over several months. Throughout this talk, you'll hear my bias towards things that are B2B or developer-centric, as well as security-focused, since these are the kinds of products that have really strict requirements around what users want to know about upcoming changes. So what will we cover in today's talk? First up, we'll dive into what a launch is at the minimum. That is, what is the MVP of a launch? If you do nothing else, this is what you need to do. Next, we'll uncover the different kinds of launches that you'll encounter, including mainly the difference between front-end and back-end launches. But launches aren't one size fits all, so we'll also discuss how different kinds of products, whether B2B or B2C, require tailored approaches. And lastly, we'll discuss internal process changes that you can make to make fast launches more sustainable. So let's jump in. First, you're probably here because you're wondering how to get your team moving more quickly. I don't have a good answer for you on that right now; that's going to be a different talk. But I'm here to talk about what happens when your engineering team is running ahead of product, when you're shipping new functionality and improvements faster than you can handle them.
Some of you are probably thinking, this can't be real. It is real. It's often what happens at startups. Like I said, in my prior work at large tech companies, I might be working with two or three eng teams, and we might have a launch every few months, or smaller changes every few weeks at best. Now that I'm at a startup, sometimes we launch a feature the same day we find out a competitor is working on the same functionality. I think the fastest that happened was exactly that: we figured out a competitor was working on similar functionality, and myself and two of our engineers picked up the work around 9 a.m. that day and had it fully working, with UI, docs, and a blog post, by about 2 p.m. That's fast, but I want to clarify that's not the typical speed to launch, which is probably more like two to three weeks for something smaller, or one to three months for something larger. What you're solving for is being able to react to an opportunity when it comes along and get something out the door quickly. So with that in mind, you don't want to slow down product launches or engineering; you want to keep moving. In this scenario, my everyday reality, we want to get launches done as quickly as possible and make them as repeatable as possible. The last thing that I want is for product to be the blocker for engineering. That also means that your launches might not be as good, right? You might not have a competitive battle card or a landing page, and that's okay. What's important is knowing when a launch is important enough that these things actually matter, and then focusing on them. So as I said, we can't always do everything for a launch. Let's talk about the MVP for a launch. It sounds obvious to start here, but what do we actually mean by a feature launch? Well, a feature launch is when your product introduces new features or functionality, or when you make changes to your existing product or service.
A launch could involve adding new features, improving existing ones, or even redesigning the user interface. It's typically when these updates are made available to the public, though you might also launch functionality privately first to a smaller set of users. So, looking at what you actually need out of a launch: I could write a PRD for launches. This isn't that, but let's talk about this like a product. What's the MVP that we actually need? What's the minimum we need to get done to launch something? I think there are three key requirements that need to be addressed. First, the functionality of the product must work as expected. How do we solve this? Tests. This involves writing and running tests to validate that the functionality is working as intended. By conducting thorough testing, any bugs or issues can be identified and resolved before the launch, so that you have a smoother user experience. Secondly, it's crucial to make sure that the user is actually aware of the launch and the new features or changes being introduced. Otherwise, why even bother with a launch? This could be achieved through something like a changelog, which is a document or notification that outlines any modifications or improvements made in the product. The changelog provides transparency to users and helps set their expectations by clearly communicating what they can expect from the updated version. And lastly, it's essential to ensure that the user understands how the new feature is supported. This requires you as a PM to define clear launch phases, for example, how a feature goes from a limited release to a broader audience. Clearly defining launch phases helps manage user expectations of the feature. Let's talk about each of these a little bit more. First off, testing. Testing is a crucial aspect of ensuring the quality and reliability of a product or feature before its release.
In my mind, it involves two main components: testing that you do before release, like quality assurance or QA, and testing that's continuous in the code and tries to catch new issues that might be introduced later, like unit tests. QA is a comprehensive process that involves testing the product as a whole to identify any issues, bugs, or unexpected behaviors with a new feature release. This could include functional testing (does the feature work?), usability testing (is the feature usable?), and also security or performance testing. This lets you answer the question: does the feature work as intended, especially for edge cases? Now, you might have a QA team, and they might be able to conduct these tests, but chances are, if you're moving quickly with your launches, you might not have such a team, you might be much smaller, or they might not be blocking your launches, at least not for the first couple of releases you make. In that case, it's your job as a PM to effectively QA the product yourself and test it in different scenarios and environments. On the other hand, continuous testing focuses on testing the product automatically when changes are made, to ensure it keeps functioning properly. For example, does this form refuse to submit if the user's email address is invalid? This is usually testing each component or unit of code, so it's unit testing, but it might also be testing multiple units together, which is integration testing. These tests are written by developers in code, and they're meant to validate the behavior of specific functions or methods. They also catch bugs, including security issues, and ensure the correctness of your code base. When you hear developers talk about test coverage, they're talking about these. More importantly, this lets you answer the question: is the new functionality that I'm introducing breaking any current functionality? There's obviously more than just testing, though. I'm saying that testing is the MVP.
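To make that continuous-testing idea concrete, here's a minimal sketch of a unit test for the email-validation example. The `validate_email` function and its rules are hypothetical, not from any real product's codebase:

```python
import re
import unittest

def validate_email(address: str) -> bool:
    """Hypothetical validator: require one '@' and a dotted domain, no whitespace."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

class TestEmailValidation(unittest.TestCase):
    def test_valid_address_accepted(self):
        self.assertTrue(validate_email("user@example.com"))

    def test_invalid_addresses_rejected(self):
        # The form should refuse to submit any of these.
        for bad in ["", "no-at-sign", "user@missing-dot", "a b@example.com"]:
            with self.subTest(bad=bad):
                self.assertFalse(validate_email(bad))
```

Tests like these would run automatically in CI (for example, with `python -m unittest`) on every change, which is exactly what catches a later change that accidentally breaks the form.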
So beyond testing, monitoring and metrics play a significant role in ensuring your product works as expected. Monitoring lets you respond to changes in how your product functions by alerting your team if there's strange behavior in the product or potential issues with performance or availability, so you can identify and respond to problems proactively. And metrics help you assess feature uptake, so you can determine the success of a launch, but they also give you insights for future improvements. For example, you might use Google Analytics or another tracking tool to understand where users click or what defaults they change, in order to improve the user experience. All right, so that's testing. Next up, telling the user that a feature is launched. If a feature launches in the forest and no users are there to hear it, did it really launch? Your users need a way to know what's new in the product. A changelog is the equivalent of a pop-up notification, or like a "you've got mail" ding. Informing users of a launch can't rely solely on marketing or something like an in-product notification. A user might read a blog post, but you can't expect every user to read every blog post. And an in-product notification could actually worsen the user experience; plus, it's something the user can dismiss. A changelog needs to be a pull rather than a push model: the user should be able to access it at any time and see what's progressed in the product. When I joined Tailscale as the first PM, almost two years ago now, a changelog was actually one of the very first things I added. It was, you know, missing, and we needed a way to tell users what was changing. So when do you actually need a changelog entry? Internally, I've written up a doc called "Do I need a changelog?", and there are some extracts from it here on the slide. We add content to the changelog first if the user experience changes from what it previously was in any way.
This isn't necessarily making, you know, a green button blue, or moving it from the right-hand side of the page to the left-hand side, but if it significantly changes location or the user experience in the UI, or if functionality is renamed, even if it's not new functionality, it needs a changelog. Another example here is if a default changes. If the default set of flags passed to the CLI changes, so that a user who runs the same command tomorrow gets different results, then you need a changelog. And of course, you need a changelog if there's net new functionality. The other extract that I've included here is a warning that's in the doc. Basically, if you're changing anything user-visible, and if somebody sent you this doc, then the answer is probably yes, you need a changelog. So what does a good changelog entry look like? A good changelog entry is specific. It has the specific date the change was made, or the version number where the change is available. It's not a summary of all the changes that happened this month; that's a blog post. A good changelog entry will name the specific user problems that are addressed in that release, rather than just saying "bug fixes." There's a bit of judgment here. You don't necessarily want to list every single bug fix, and if you're not building a product for developers, your users might not be familiar with specific GitHub issues. The exception I would make for a vague statement on bug fixes is around security: you might not be able to disclose exactly what the fixes are when a release is cut, so being vague in that case is sometimes okay. And lastly, a good changelog entry has links for the user to learn more, like to documentation. In the docs, the user can see how to enable, configure, use, or disable the feature. Again, there's a bit of judgment here; maybe your product is not heavy on documentation overall. But nonetheless, you don't want the changelog itself to be serving as documentation. And a great changelog has a consistent format.
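To illustrate, here's what an entry following those guidelines might look like. The feature, date, version number, and link are all made up for the example:

```markdown
## June 8 (v1.24)

**Scheduled data exports are now in beta.** You can now export usage
reports on a daily or weekly schedule, instead of downloading them
manually each time. Scheduled exports are off by default. See the
[scheduled exports documentation](https://example.com/docs/exports)
to enable and configure them.
```

Note the parts: a specific date and version, the user problem it solves (manual downloads), the release phase, the default behavior, and a link to docs rather than inline documentation.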
A user might even be able to subscribe to it to get product updates. A common alternative to subscribing to updates is something that marketing might be able to help with, like a monthly product newsletter. The last thing you really need in your launch MVP is to make it clear to the user how the feature is supported. This means defining launch phases clearly. It's a couple of things. Terminology: do you use alpha, beta, early access, invite only, or something else? Availability: at which one of those phases does the feature go public? And once you decide, you need to stick to this. I feel like I struggle with this a lot. We have private alphas and public betas, but we might occasionally have internal conversations about public alphas. My pushback is: what's the reason we feel compelled to have a public alpha rather than a standard private alpha or public beta? If there's not a good reason, then deviating from the standard is just more confusing to users. Support expectations: when is the feature officially supported? This isn't just literal customer support, although it also includes that. It's about verifying that the feature works in the most common scenarios, but also that it works in a bunch of edge cases, like different platforms or languages or integrations, whether the feature has documentation, and whether there's, yeah, customer support. For infrastructure products, which is, again, what I work on, this is often just, you know, can this be used in production? That's what the customer is really asking. And terms of service: this sounds silly, but it's a common oversight. You need to make sure that your terms of service cover features that are not generally available, and any concerns that you have about sharing those features pre-release with users, like NDAs. At a past job, requiring customers to sign a separate set of terms blocked me from getting sufficient users to give feedback on the product.
So once you have these launch phases clearly defined, you should publish them on your website so that users know exactly what to expect at each stage of a launch. As an aside, I often see PMs mixing up launch phases with defining the minimum viable product. I mean the actual minimum viable product here, not the MVP for a launch. You might hear PMs say that, you know, this is the beta scope or this is the MVP scope, but then it's not clear if there are more requirements after beta, or why these requirements are tied to the beta launch phase. The standard requirements for beta or GA should be about how the product works and whether it's supported across your environment, not about scope. So for example, you might always need audit logs for a feature at beta. Then, if you're comfortable doing so, define generally when the MVP is expected to be complete. Say it's beta: you're saying you're not adding any other functionality from beta to GA. Or say it's GA. The other party that's going to have an opinion here is marketing. They'll want to know which phase to spend marketing efforts on. Again, I don't think it really matters, as long as we're clear and consistent. So how do you make sure that you consistently launch products at each phase with the same requirements? Launch checklists. These include the things that we just talked about, like tests and monitoring in engineering, or integrations with other functionality of the product, or whatever coverage you've committed to at each launch stage, like writing audit logs, or adding new API methods or CLI commands. Giving a signal to the user that there is a launch, in the product or in documentation, including what release phase the feature is in. Documentation for the feature on how to use it, a changelog entry, and any other changes that you might need to your docs or website, like adding the feature to a pricing page or a features page. These might not all apply at every launch phase and for every size of launch.
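As a sketch, a per-phase launch checklist might look something like the following. The items just mirror the ones above; the phase name and specifics are examples, not a prescribed list:

```markdown
## Launch checklist: beta

- [ ] Unit and integration tests passing; QA done on common platforms
- [ ] Monitoring and alerts in place
- [ ] Audit logs written for new actions
- [ ] New API methods / CLI commands added
- [ ] Release phase ("beta") labeled in the product and in docs
- [ ] Documentation published (enable, configure, use, disable)
- [ ] Changelog entry drafted
- [ ] Website updates (pricing page, features page) if needed
```

The GA version of the checklist would add whatever extra coverage you've committed to at that phase, like support readiness.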
Typically, you'll also define launch tiers with marketing so that they know where to focus their efforts, which is on the launches that matter. And to my earlier point, agree with them if you plan to spend more effort on certain launch phases, like beta versus GA launches: which one is the actual big announcement that you're making? So for launch phases, there's a checklist you need with engineering of what's needed at each stage. And then, depending on the importance of a launch, you'll have a separate checklist of launch artifacts that you need with marketing at each stage. And I'm saying keep the minimum for these very low, especially at the early phases we discussed. You don't want a long list of things blocking you from getting the product into customers' hands and getting feedback. To make these launch checklists easy to get through, in particular for the MVP requirements that you have for every launch, you need templates. Internally at Tailscale, I'm still working on some of these. Some of the templates that we have are just to remember specific items. For example, our blog post template reminds us to get social graphics done and pick a permanent link. This isn't always important, but it becomes important when coordinating a launch with an external partner, so just doing this by default is best practice. I'd like to get some of these templates to a point where we have, you know, a bare-bones Mad Libs version of a blog post, so that worst case scenario, we can still use that, but we're not there yet. Right now, people are still writing every blog post, which is great. And even if you don't have templates, if you have someone who is responsible for or can help with certain parts of the product, then create intake forms for common things like changelogs or feature documentation. Our changelog intake form asks, you know: what is the change? When is it launching? And what component actually changes?
All right, so we've talked about a launch MVP. Let's talk about when you might deviate from the MVP or need to define different processes. The first reason this happens is different kinds of launches. By different kinds of launches, I mean what's changing in the product. This is really about the surface area of your product. In the simplest case, your product might be a static web app, and the only thing that can change is the app itself. You roll out a change simultaneously to all users, or maybe you use feature flags to test it with a few users or gradually roll out the change. But in a more complicated scenario, you might have a web UI, desktop or mobile applications, and backend changes to the API, your database, whatever. These all have changes, some of which might be user-visible, and all have slightly different rollouts. So my suggestion to you here as a PM is to first understand how these rollouts differ. Do you always use feature flags? How frequently do you release your desktop or mobile apps? How does versioning work? This matters because you might not have the ability to change the timing of some of these, like when the app store approves your app, and you need people to be ready to launch and respond to changes when they occur. A change in the web UI might take you minutes, whereas a change to an app might take weeks. Then identify the standard way to roll out changes for each component, and when you want to tell a user about a change. So, for example, at Tailscale, we roll out web changes behind a feature flag, and we drop the feature flag at launch. For apps, the launch is when the change lands in the stable app release. And lastly, I'd say ensure the engineering team understands what kinds of changes to the backend you consider user-facing, and so need to be treated like a launch. For us, things like new API endpoints should get the launch treatment.
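As a concrete sketch of that feature-flag pattern, here's roughly how a percentage-based rollout can work. The flag names, percentages, and registry are illustrative assumptions, not any real product's implementation:

```python
import hashlib

# Hypothetical flag registry: feature name -> rollout percentage (0-100).
# "Dropping the flag" at launch just means setting the percentage to 100.
ROLLOUT = {
    "new-admin-console": 10,   # gradual rollout to roughly 10% of users
    "dns-settings-v2": 100,    # fully launched
}

def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically bucket a user so their experience stays stable."""
    pct = ROLLOUT.get(feature, 0)  # unknown flags default to off
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < pct
```

Because the bucket is derived from a hash of the feature and user, a given user sees the same behavior on every request until you change the percentage, which is what makes a gradual rollout feel stable rather than random.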
The second main input to how you'll define your launch MVP is what kind of product you're dealing with. Specifically, I mean B2B versus B2C products. These have different users who have different expectations about how the product they're using changes. Consumers might be happy or even excited about a new feature, even hunting for Easter eggs, trying to see what the new UI looks like in an app. Businesses, on the other hand, just want to be told what to expect. But, I don't know, that's not always true. Even consumers want to know about changes. Has your favorite pair of jeans ever changed its cut, and you were surprised or annoyed to find that it was different? I bought a new jacket a few weeks ago and was really annoyed to find that it fit differently than the last one, the exact same jacket from the same company. I went online and looked at the reviews, and they all had the same complaints about the new cut and sizing. Another special case to call out for enterprise is regulated industries. They'll have specific expectations and may even need changes to go through review or certification processes before they can use the product. But let's talk a bit more about enterprise in general, since that's what I've focused on. What do enterprises need as part of a launch, or as part of your product, to make launches easier? You already know the first thing I'm going to say, which is the changelog, but there are other things that you might want to consider working on as well. First is a change policy. This lets the user know when you would change defaults or deprecate features, and how you'll notify them of these changes. So, for example, you might tell your customers that you'll support a feature for at least another year, or that you'll support the last few versions of clients, including providing security updates. Next: automated updates.
You know how you get out of supporting those old versions, or needing to provide security updates to those old versions, which is not necessarily a great use of your engineers' time? It's automated updates. If you make it easy for users to keep up to date with the latest versions, many of them will. For apps on the iOS or Android app stores, this is done for you automatically. I saw the other day that Docker Desktop now requires you to update, or to pay to upgrade if you don't want to update. That means they've made not updating an enterprise feature of the product. Next: clearly defined security guarantees. If you're explicit about what security guarantees you provide, and that you won't make changes that weaken that security, it'll help your customers decide what their risks are. When it comes to security in general, you build trust through transparency. And lastly, things like trusted testers. It's hard to get businesses to test new features as part of their work, because they don't want to encounter any issues in their live production environments. So work with your users to develop test environments, or, when you find users who are willing to test lots of features, involve them more deeply in something like a trusted tester program, where they get early access to features, can provide feedback directly to your team, or have other perks. With all of these restrictions on how I make changes, can I even do any experimentation? This is another area where B2B and B2C really differ. Since B2C users are more amenable to changes, they're also more amenable to experimentation. Your business users might not be. You might have other restrictions, too. For example, you can't really test different versions of your apps if they're distributed through an app store. You can use something like TestFlight to give users early access to builds, but that's not the same as running different versions in parallel, which is what you'd get from A/B testing on the web.
Instead, you'll need to build new testing and feedback mechanisms. In particular, think about how to get earlier versions of the product, especially when they're still in ideation, into folks' hands for testing. So, for example, work with user research on things like mockups. And lastly, let's talk about improving your internal processes so that you can actually ship fast. First and foremost is communication. You need coordination and communication between PM, eng, design, marketing, docs, DevRel, and others to figure out when a launch is going out, the status of the launch, and any requirements needed for that launch. This isn't easy. This is actually the hardest thing we've been talking about today, and there's no silver bullet here. But I would ensure a couple of things. You want to make it easy for someone to know the status of a launch: there should be one place where anyone who's involved can go and see that status. And you want to communicate out regularly on the status of various launches, especially when something major changes. Did a contract fall through the day before the launch? Everyone needs to know as quickly as possible. This is where the launch phases and the checklists we talked about earlier really matter: being clear and consistent on status and on what's missing. Changing the launch phase, the MVP scope, the requirements, the name of the feature, the launch date, et cetera, will all confuse folks. Some of these things will change, but those should be material changes; they shouldn't just be you changing the definition of the requirements. And just in general, over-communicate. If you think you're communicating too much, you're probably not, so do it some more. Just over-communicate what's going on with your launches. Second is making it easy for anyone internally to edit and contribute to things. I've worked in a few different tech companies now as a PM, and in one of them, I couldn't edit documentation directly.
And it drove me absolutely crazy. I had to file a ticket, and it would always end up being lower priority than something else. It thankfully never delayed any launches that we had, but it really frustrated me. As a PM, it's your job to unblock things and to help get things done. I've written docs, I've written blog posts, written code, given conference talks, written other people's conference talks and made slides for them, everything. Your documentation and blog posts should be editable so that if someone needs to jump in and get it done, and that includes you, they can. That doesn't mean no review process. It means not blocking on the process to get a first draft done. It also means that if you're not comfortable editing Markdown, learn how to edit Markdown and get in there and start editing things directly. Or move to a system that you do feel comfortable editing, and that more folks feel comfortable editing, including, for example, your support team. Next is building a really strong engineering foundation so that you can make rapid changes when you need to. That means building the monitoring and testing that we talked about earlier to proactively identify and fix issues before they become problems. I know, I'm repeating myself, it's boring. But it also means refactoring old code to be more readable so folks can more easily jump in and contribute. If you're trying to make a very hectic last-minute change and your engineers are spending a lot of time figuring out what's going on in the code base to begin with, it's harder for them to make that change. If you've spent a lot of time continuously refactoring your code and keeping it up to date, it's much easier to make incremental changes. You can't always move fast. Sometimes you have to go slow now to be able to move fast later, when it really matters. And lastly, pointing people at the right problems to work on.
As I mentioned earlier, I'm often running behind engineering right now at Tailscale, playing catch-up to new functionality we're working on and new launches that we have coming up. We don't always write a PRD for every feature. What makes this easier, though, is pointing engineers at the top problems or issues that you're dealing with: sharing customer conversations or notes with them, and maintaining and sharing a backlog on a regular basis, so that if somebody has extra time, they know what they could pick up to help you. You might also do regular issue triage to continuously address the top issues that are being reported. Go through support tickets, see what's going on. Tell folks what matters, why it matters, and help them prioritize how to spend any extra time that they have, because people really do want to make your product better. They just need to know what to go work on. All right, that's it. So let's recap what we covered today. As your development team is moving quickly, you want your launches to keep up. As a PM, you need to figure out how to get the minimum set of things done for a launch to be viable: what you need to do to unblock the launch and get it out the door in such a way that folks can just keep moving. A launch is any user-facing change to your product. It could be a change to existing functionality, or it could be net new functionality, and you have to define that line internally for yourself, exactly what that means. At minimum, a launch should have tests to verify the functionality works as expected, should have a changelog to notify the user of changes, and should be clear around support expectations, with defined launch phases that clarify what you'll have working and integrated at every single launch, when documentation will be available, when the support team will support a launch, that kind of thing. Launches might be different for a couple of different reasons.
They might look different or happen at different speeds depending on what product surface you're dealing with, like front-end versus back-end changes. And different users will have different expectations around launches, like B2B versus B2C users. B2B users are much more demanding about what a launch means and about changes in their environment, and they'll want notification and tools to help them keep up to date with changes. And lastly, to keep people moving quickly, you need to over-communicate about upcoming launches. Tell people what's coming up and where they can get status on what's going on. Make it easy for folks to contribute to your launches, including documentation, and have a solid engineering base to work on so people can make those fast changes. Thanks for joining this talk, and I hope you learned something new about moving quickly through product launches.