A team had a charter to build a payments network, and our executive was like, you've got to do it this way. We were happy to do it, but we were skeptical, so we wanted to find a way to be skeptical in a proactive way. We started piecing together little bits of ways of doing things and eventually came up with this methodology based on lean principles. I'm sure most people have heard of The Lean Startup; that was compulsory reading for us. We pulled some knowledge from there and interviewed a couple of people to figure out what the best fit would be for that company.

At the time, the company had no formal process for prioritizing work. Ideas were coming from all over the show, and seemingly people would just grab whatever was most important to the execs at that moment and go do it: drop projects, pick up projects. Sometimes that works great, and other times it doesn't. In big companies it can get a little chaotic. Has anyone experienced that kind of thing at the companies you work at? We've got a couple, right?

What EDD does is help ensure that we can still go build that crazy tractor if we want, but that we have a good discussion around it first. The process we came up with was a way to have meaningful discussions around all the ideas we want to work on, and to understand how those projects, if you have a stack of them, weigh up relative to one another. Any questions? Nope. Okay.

The vision behind the process was: use the data we've got and make data-driven decisions. If we don't have data, make it up and then validate it later. Run experiments (it's experiment-based) to prove out the assumptions we made. And then measure the return on investment, the ROI, for what we're building. That was basically the idea.

I group the process into three main buckets. First, an idea lands on your plate: you've got an idea yourself, your users are asking for something, maybe you've looked at the data, et cetera. How do you know whether that thing is worth building or not? We go through this process.

The first part we'll get into in detail is the product description. You've probably seen templated sheets for this; if you've done Product School, there are a couple of different ways of doing it. We came up with our own sheet, which I'll go through in a minute.

We then come up with a score for that product or feature. We'll go into that in a minute as well, but the idea is to really think about what the key metrics are around this thing, throw them into a spreadsheet, and output one number that gives that feature or product a score. Then compare that number to the others. If one is a $1,000 idea and another is a $2 idea, it's pretty clear the $1,000 idea is probably better, so we might want to build that. It's a way of adding data and estimates to figure out what to build. Do you have a question? Okay, cool.

And then finally, a roadmap. The result of those two things is a set of experiments that we want to run to prove that this thing is worth doing. That might be developing a prototype or an actual feature; it might be design prototypes, full-on design, design research, data analysis, et cetera.
So a roadmap effectively ends up being a list of things you would have to do for a project anyway to get it built and delivered, but it's a systematic way of doing it that helps the whole team using this approach speak the same lingo.

So, the product description sheet. It seems kind of simple. Actually, it might not seem simple from your perspective; it seems simple to me because I've filled them out a million times. As new ideas come in, we fill these out. I can send this to you later if you need it, and I'll go through each field in a minute. The idea is that if someone says, hey, I've got a great idea, it's going to make the company $10 million, then awesome, let's spend as little time as possible figuring out whether it's actually a million-dollar idea, a $10 idea, or a $10 billion idea. This is a systematic way to do that. It shouldn't take more than an hour; you don't want to spend too much time on it. You can sit with the exec who has the idea, or take it away with your peers, run through this, and come out with a sheet. That sheet will spell out everything you need to know, or most of the things you want to know or guesstimate, about what the product is.

For example, in the top left: who is this feature for? Someone comes and tells you, look, we really need to build feature X. Sounds cool. It's like a Swiss Army knife with the little corkscrew. But no one drinks wine here, so should we have the corkscrew? Who are we building it for? Who is it creating value for?

What is the problem the customer is facing? This is kind of like the why. Why will customers care about this? What value does it bring them?

The unique value prop: how is this unique as a solution? Can they get to the solution via some other means? Why is it worth us spending money to build? There are normally a number of ways to solve a particular problem; let's articulate them here, and later we can figure out which one we put more time into.

Your KPIs: I'm sure most product managers here know what key success metrics are. How are we going to measure this? Later we want to come back and see whether what we did measures up to what we thought it would.

Sometimes it's useful to think about who the early adopter will be. Using Xero as an example, we might be deploying a new feature that we think accountants, or accounting firms, will like. Let's find out who an early adopter is and see if we can beta it on them or test the idea on them.

Existing alternatives: what is out there that they could use already? Are those alternatives in your industry or outside it? You can also think about what your competitors are doing in the space. Do all your competitors have a corkscrew on their Swiss Army knife?

Channels: how are we going to get to the customers? Email? Do we have a sales team? If you're in enterprise, you typically have a sales team that goes out and speaks to customers. How are you going to drive awareness for the feature? This doesn't have to be a full go-to-market strategy.

Then, who are the stakeholders? As a PM, you really want to identify the stakeholders, because you can get them involved in supporting the product and get their feedback on it.
If they think it's crap, they're not going to support it anyway, and they'll tell you, and that's great. For example, customer support is typically very important in an enterprise situation, because they're the ones taking on support for the feature you ship. What do they think about it? Does it solve a customer need they're hearing a lot about?

And then key resources: who's going to build it, and what do we need in order to get it done? Do we need more data infrastructure? A web designer? A front-end developer? What are the various things we need to deliver the solution?

This one is pretty key: the business value. It doesn't necessarily have to be money. Money is great, everyone loves money, it keeps the business going, but there are lots of other ways this could be valuable. It could increase acquisition. It could increase the number of referrals people give. It could increase revenue, or maybe the feature is about retaining returning users. There are a number of angles you can run through here. AARRR, by the way: look it up, AARRR like a pirate. Dave McClure has a cool video on YouTube about the AARRR metrics and how you go about measuring these different things. I suggest checking that out.

And then cost structure. We talked a bit about the key resources; how much is it going to cost us to build this? Again, you can thumb-suck this. You know your devs cost maybe 10 or 20 grand a month if they're US-based and you work here or in Silicon Valley. So how many person-months is it going to take? We'll get into this in a minute, but figure out what your cost structure is. This also pertains to how long you think it's going to take.

Question on the early adopters: are you looking for something that specific, basically names and addresses, or just a general sense of who your target users are? You can segment it however you like. It would be cool if it was specific, but you could say our early adopters are going to be on iOS, because all of our hardcore users are on iOS. Or maybe you want to segment it another way. There are different ways to slice that cake. Oh, he asked how you identify the early adopters; I forgot I have to repeat the question for the video. Okay.

So basically this is your product description. Once this is done, and it shouldn't take long (if it's taking too long, make it take less long), you can respond quickly. I'll give you an example. At pretty much every company I've worked at, the exec, the CEO or the product unit manager, is very passionate and very product-oriented. They come to me and say, Josh, we need to build this, go make it happen. To them it's urgent; they've got this idea: we should put our product on the blockchain, isn't that going to be awesome, it'll solve all of our needs. What you want is to be able to get back to them pretty quickly. Using the product description, moving fast, you can frame the idea. Then on top of that, you look at what the EDD score is, and you can do the scoring however you want; I'll show you how we did it. Between the product description and the score, you can get a relative ranking.
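To make that quick write-up concrete, here is a minimal sketch of a product description record in code. The field names paraphrase the sheet described above rather than reproduce the team's actual template, and the example values are invented:

```python
from dataclasses import dataclass

@dataclass
class ProductDescription:
    """One-page description of a product or feature idea (fields paraphrased from the talk)."""
    name: str
    target_customer: str          # who is this feature for?
    problem: str                  # what problem is the customer facing? (the "why")
    unique_value_prop: str        # how is this unique as a solution?
    key_metrics: list[str]        # KPIs we'll measure it by later
    early_adopters: str           # who could we beta or test the idea on?
    existing_alternatives: str    # what's out there already, in or outside the industry
    channels: list[str]           # how we reach customers: email, sales team, etc.
    stakeholders: list[str]       # e.g. customer support, who will carry the feature
    key_resources: list[str]      # what we need to build it: data infra, designer, front end
    business_value: str           # AARRR-style value: acquisition, retention, referral, revenue
    cost_structure: str           # rough build cost and how long it will take

# An hour's worth of guesstimates for a hypothetical document-scanning feature.
doc_scanning = ProductDescription(
    name="Document scanning",
    target_customer="Small-business customers already uploading receipts",
    problem="Keying in paper documents by hand is slow and error-prone",
    unique_value_prop="One-tap scan inside the product they already use",
    key_metrics=["attach rate", "revenue per user", "support tickets"],
    early_adopters="iOS power users",
    existing_alternatives="Standalone scanner apps",
    channels=["in-product prompt", "email"],
    stakeholders=["customer support", "sales"],
    key_resources=["mobile dev", "designer", "OCR service"],
    business_value="Revenue: roughly $1 per scanned document",
    cost_structure="About two dev-months, rough guess",
)
```

The point is only that every idea gets the same fields filled in, so any two ideas can be discussed and compared on the same terms.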
You go back to the CEO, or whoever is pushing this thing, and say: look, we looked at it, this is what we think it's going to take, this is what we think the score is. You still want to do it? It's your decision; I'm not the CEO.

So let's look at the EDD score. This is going to give you a relative return on investment. It's a dollar amount. We made it a dollar amount; you can make it rupees or whatever you want, it doesn't matter. The point is that it gives you a relative cash value against other features, and that cash value is just, call it an ether coin or whatever, because it doesn't really exist. It's just a way of comparing apples to apples.

Remember also that we're doing this quickly, so we're going to rely on assumptions. Sometimes you've got the data, and the more you run processes like this, the more data you'll have; I'll show you that in a minute. For example, your cost to serve on a SaaS platform is how much it costs you every month to support a customer. Let's say you know your cost to serve is about two bucks for your platform. You may not know that in the beginning; you might put down 10 because you just don't know. Once you do know it, that same number can be used over and over again for different products. Maybe your product is just an incremental feature that won't drive the cost to serve up very much, but if it's a whole new product with a whole new customer base, maybe it will. And then, as we said, once you've got this you can prioritize things against one another.

So how do we score things? There are a couple of different ways. These are the metrics we chose because we were working on a SaaS platform; you'd score differently for different types of products. If you're working on a hardware device, your cost to serve will be different than on a SaaS platform, and you might use some different metrics.

Impact per user: how much revenue per user do you expect to generate in a month? Cost to acquire a user: how much will it cost to get that user into the funnel so you can start offering them this product? Just to get them to the top of the funnel, not to keep them. Churn: how many customers are going to leave the platform every month? Cost to serve: we talked about that earlier; how much will it cost to support these people on an ongoing basis? Cost to build: a one-off cost, typically your biggest cost item. And then how long it will take to release, because that impacts opportunity cost.

When we first started, we didn't start small with this process. We come from engineering teams, so we overcomplicate things wherever we can. But I'm telling you that you can start small now, because we've figured it out. There's a very simple version of the exercise that can take five minutes, and I'll tell you why we did it in a second, but it gives you enough signal to see whether this thing is worth continuing, and you can go deeper and deeper from there. The reason we had to go very simple is that, to get other teams within our organization to use this, the full version is far too complicated. So we said, okay, we want everyone to use this so we can compare apples to apples across the organization.
Let's do it this way. Very simple; it shouldn't take more than five minutes. How happy is this thing going to make your customers? How hard is it going to be to build? And how many customers is it going to serve? Behind the scenes, we've got numbers that effectively map this to our complicated spreadsheet. This is a great one; people actually use this one.

The next level gets a little more difficult, because now you have to really grok the metrics we talked about earlier. As a product manager you've got to go and actually learn what impact per user is, and some people just don't want to do that. We look at our seven critical numbers, and where there's no data we just estimate. How many customers will there be in year three? Who knows, but we can take a guess. And you end up with this score over here.

Then finally you get to the moderate version, which is actually the detailed one. Here you start estimating how much you think you're going to make year on year and how many new customers it will impact, because the spreadsheet we built effectively uses all of these to calculate that ROI. It's a little complicated on the back end, but you can create that spreadsheet, and I might be able to share it; I'd have to think about whether I can. It basically takes the seven key metrics into account, and it's kind of like a financial projection over the next 10 to 12 years. That's really far out (good luck if your business is still around in 10 years these days), but it effectively takes a bunch of this into account, adding interest, maintenance, et cetera. I can't give you more specifics, because that would mean digging out the algorithms we spent weeks writing and remembering them again, and I'd rather not.

So, comparing the scores. Once you've done this, you end up with a spreadsheet with your scores (I don't think I have an example of this) and links to all the product description documents you wrote next to them. Anyone in the organization or on your team can look at this thing and say: I've got an idea for something; they've already looked at this; what's in their sheet? Is it the same as my idea? Cool, it is. Or if it isn't, just create a new one. And then you can see the relative score. Your execs can then say: it looks like we've been really pushing product D, but the return doesn't look as good as product A's; maybe we should divert resources to that.

You wouldn't necessarily act purely off this. You might have a customer saying, hey, I've got 10,000 users, I'm using your stuff, and it's worth a million bucks to your business. If your business cares about a million bucks, maybe you just go do that. So this is just a way to get a relative score and a relative ranking, knowing that there are environmental factors at play all the time in organizations.

As I was saying, the most valuable part of this that we found was just having a structured approach to talk through what an idea is about, fill out that sheet, and give it a costing. It really helps you think through all the levels of detail about that idea, besides obviously building a prototype and pushing something into market, which comes next. Any questions?
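The actual spreadsheet and its formulas weren't shared, so what follows is only a rough sketch of the kind of calculation described: the five-minute three-question score, plus a naive multi-year ROI built from the per-user metrics above. The weighting, the discount rate, and the horizon are assumptions made for illustration:

```python
# Rough sketch of the scoring idea. The real spreadsheet was not shared; the
# formulas, weights, and horizon below are illustrative assumptions only.

def simple_score(customer_happiness: int, build_difficulty: int, customers_served: int) -> float:
    """The five-minute version: three gut-feel ratings (1-10) collapsed into one number."""
    return customer_happiness * customers_served / build_difficulty

def edd_score(
    revenue_per_user_month: float,         # "impact per user"
    cost_to_acquire: float,                # cost to get a user to the top of the funnel
    monthly_churn: float,                  # fraction of customers leaving each month
    cost_to_serve_month: float,            # ongoing monthly support cost per customer
    cost_to_build: float,                  # one-off build cost
    months_to_release: int,                # delay before returns start (opportunity cost)
    new_customers_per_year: list[float],   # year-by-year guesstimates
    discount_rate: float = 0.10,           # assumed; the talk only mentions "adding interest"
) -> float:
    """Naive relative ROI: discounted per-customer margin over the projection, minus costs."""
    customers = 0.0
    roi = -cost_to_build
    for year, new_customers in enumerate(new_customers_per_year, start=1):
        customers = customers * (1 - monthly_churn) ** 12 + new_customers
        margin = customers * 12 * (revenue_per_user_month - cost_to_serve_month)
        acquisition = new_customers * cost_to_acquire
        # Discount each year's contribution, pushed out by the release delay.
        roi += (margin - acquisition) / (1 + discount_rate) ** (year + months_to_release / 12)
    return roi

# Example: a document-scanning feature guesstimate; every number is made up.
print(edd_score(
    revenue_per_user_month=15.0, cost_to_acquire=50.0, monthly_churn=0.02,
    cost_to_serve_month=2.0, cost_to_build=80_000, months_to_release=3,
    new_customers_per_year=[1_000, 5_000, 15_000, 30_000, 50_000],
))
```

The exact formula matters much less than applying the same one to every idea, so the resulting numbers are comparable to each other rather than accurate in absolute terms.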
You showed six example products. Do you generally start with a list of products and do a comparative analysis, or does the list grow organically over time as the desire for some sort of comparison comes up?

There are typically a lot of potential products in any given team, right? This started as a blank list; we filled it with all the ideas we had, and as new ideas come in, we add them. Other teams started picking this up too. I looked at our sheet a year after we'd started evangelizing this and it was really long, and that was basically the PM team filling it out, plus the management team, not really other people in the organization. So it's organic: as ideas come up, as your CEO says we've got to do this and we've got to do it now, you add it here and then you send them a link and say, hey, this is what we thought about; we actually think it's a great idea, or we're skeptical about it, happy to do it still, but here's the information you may not have had when you thought about it in the elevator on the way to the meeting. Does that answer your question? Yeah. Okay.

So, the roadmap. This is where... how are we doing for time? Are we doing fine? If you take anything away from this whole thing, the easiest part to think about is those first two. This part gets into a lot more lean product management and development, and you'll see why in a minute. We found that the adoption rate of the first two within our organization was very high, but people started to fall off over here, because it takes a lot of discipline to keep this stuff up to date, and not everyone's disciplined.

The point of this next part is this: we pulled some numbers out of our butts, we've got these cool scores, and we think, okay, this is what we want to do. Well, are those numbers correct or not? Let's go and figure them out. So what we came up with was this thing called the experiment roadmap. The idea is to look at each of the key metrics we defined earlier (impact per user, et cetera) and do at least one experiment to prove or disprove that number and make it better. The better you get those numbers, the more accurate that score becomes, and the more belief we can have that doing this thing might actually help, or hurt, our organization.

So we brainstorm. This is actually really fun; I love doing this. You brainstorm: how are we going to figure out what this will actually cost us to serve or build, or whether users really want it and how much they'll pay for it? These become the experiments. We also assign a cost to each experiment, which is also a formula, and I'll show you that in a minute. And we give it a rating: how much certainty does doing this experiment give us that this product is going to be a success, or that this feature is worth doing? Along the way you learn a bunch of stuff and figure out what the right thing to build for the customer is.

Before I get there, an example might be: we believe we can charge two dollars per use for this feature. We're going to do scanning of documents and charge two bucks a document. We have this great idea for how to do that.
So what we're going to do to prove that is reach out to 10 target customers, interview them, and see whether they validate that they would pay two dollars or not. We believe they will, and so we think it will be worth monetizing this at two bucks a sheet. That may not be the case, but the hypothesis is fine. It's the scientific method: it's just a hypothesis, and you're looking to prove or disprove it. Either way you're going to learn something new.

So you come up with your hypothesis and your target metric. You create and run an experiment. You document your findings, you confirm or change your assumptions based on your conclusion, and then you publish it. The last company I worked at was a very distributed organization, with offices all over the world. It's important for any organization not to lose the knowledge its employees have. You're paying people to think, you're paying for their brains, and when they leave the company, do they leave any of that intellectual capital behind? Sometimes, sometimes not. By publishing the results you hopefully leave some assets behind, and you also get exposure for what you're building. We'll talk about that in a minute.

Okay, let's go back. Here's an example: users will value and pay for a bookkeeper on demand. Does anyone know what bookkeeping is? I would hope everyone knows about it at this stage in our lives; I only learned about it a couple of years ago. Really useful service: at a high level, someone figures out how to categorize all your expenses and income. So you say, okay, we think we want to start a service like that; let's figure out whether we can charge people for it. We're going to ask 50 customers, bookkeepers, and partners whether they would be willing to pay for the service. Our result was that 27 of the 50 said they would be willing to pay online for a 60-minute session. Okay, that's pretty good validation. If more than half of people say they'd do it, it's probably not a bad idea. Now that we know we'll probably have a realistic attach rate, we can better calculate the ROI.

So, experiments. I'll give you an example of these in a minute. They all fit a standard framework: whether it's an engineering effort, a design effort, or a PM research effort, you can run experiments this way. First of all, what is the critical question? Is it our cost to build, our revenue per user, et cetera? Then you describe what the experiment is; I'll show you that in a minute. Then you run the experiment and observe whether your hypothesis holds. And then you add your belief. This part is cool. It's like putting a stake in the ground: I believe we can charge two bucks for one hour of bookkeeper time; I'm drawing a line in the sand. People may come back and say, no way, only if it's free. Or they may say they'd pay 10 bucks. That's a happy outcome; it makes me more confident that I can actually make revenue off this and increase my revenue-per-user critical number. Okay, so here's how we do experiments.
It seems complicated, but I found that once our team started framing everything we do in this mechanism, it helped us think through why we're doing something. You don't just go, I'm going to do a user research project because it's that time of the cycle and I think we need it. You say: we're doing it because we're trying to figure out whether the core actions in the thing we're developing will be easy enough for users, so that they end up onboarding with us.

So the first part of the experiment is what we'll do to conduct it: for example, we will set up a prototype and run it through usertesting.com. I'll give you an actual example after this. Then, what data are we going to gather and analyze? Then, what do we expect the data to show? And if it does show that, how will we interpret it? If customers say they're willing to pay 10 bucks for this feature, it means we're going to make the revenue we thought we'd make, and we're more confident in shipping the feature.

Here's an example. We'll research competitors' pricing for this kind of feature. Then we'll see whether our planned pricing is less expensive. We believe our planned pricing is lower than theirs, and this means customers will pay the $1 per transaction, so we can be more confident in our monthly revenue-per-user critical number. You might say: we've already got customers processing these documents (this is not a Xero example, by the way, just a random one), and they're processing documents digitally right now. We figure that if they're doing 30 digital a month, they'll do another 15 manual ones at a dollar apiece. That's going to add $15 extra a month to our revenue.

This is the way we price and cost the experiments, and it's a useful mechanism for figuring out which experiments to do next. It's a fun exercise: you brainstorm a long list of stuff, but you only have so many people on your team who can run these experiments, so which ones do you try first? If I run this experiment and it proves I can get $15 revenue a month, I'm going to be, on a scale of 1 to 10, maybe 9 out of 10 certain that this is a good thing for us to build. How many hours is it going to take? Actual hours, and this gets better and better over time. Product managers don't often have to estimate how long a task will take. I never used to do this; I used to say, here's my list, I'm doing this, this feels like a big-ish item, but I never actually measured the hours. By doing this, you start figuring out that when you ask the business analysts for data they have to mine, it typically takes two weeks. I thought it would take a day because it seems like an easy SQL query, but it takes two weeks because they're backlogged, or they're busy, or it's a complicated query. So if it takes them two weeks and it takes me eight hours of actual work, the lag in terms of days is going to be at least two weeks, and that adds to the cost. So your total cost is your number of hours; we just multiplied it by 100.
Thumb-suck it: an average engineer, product manager, or designer costs about 100 bucks an hour, maybe. Then add the number of days of lag. And then your cost of certainty tallies all of those together.

Here's an example. When we're brainstorming, we set up a sheet with an ID, the critical number we're trying to validate, the experiment we're going to run, and then a cost of certainty over here. You don't see them here, but there are a couple of numbers behind that, and those numbers get better over time as you get better at estimating how long things take, like the survey question I talked about.

Your experiment process: come up with the idea, design it. Once it's designed, put it into the working queue, wait on your data, document your findings, and debate the findings with your team before you send them out to the whole company. Once you've had the debate and updated the doc, publish it.

Our team came up with something like this; we use Trello. You've got your ideas lane, your experiments lane, and the things you're working on; sometimes you're waiting on data, so you just pop a card in there. This gives everyone on the team a good idea of where things are in the cycle, so your manager and their managers can look at the board and go, oh, okay, cool, they're still waiting on that. We had these cute little icons. Homer indicates you're waiting on someone and it's taking forever. Disappointed Dad is, dude, why haven't you started this yet; this is typically your manager. Darth Vader is our developers: it takes the Force to get this thing built. The stormtroopers are the product managers, so we can see which are the dev tasks and which are the PM tasks. The Death Star represents a huge fricking task, so it's probably going to be sitting in the working lane for a while. Little things like that to help spruce it up.

What software is this? This is Trello. Trello is free, though they obviously charge for enterprises, because that's probably how they make money. Trello is great from a product management perspective. I don't cover it in this lecture, but our previous product team kept everything in Trello: what am I working on now, what have I finished. That way your manager can look at it and see where you're at with anything. Each of these is a lane, and on each card you can put a bunch of data and keep updating the status of that particular work item.

So: document, debate, share. Once you've got your results, we have a document template: this is what we learned, this is the hypothesis we had, this is the data we got, and this is the conclusion we draw. In a way it's kind of like a spec for the experiment, and sometimes they do end up being specs; for a user research project you're going to have a lot of artifacts. Some of the things covered are how much it actually cost to do the experiment, which is always useful for us as PMs and designers, and for the devs it's a way of holding them accountable to how many hours they said something would take. And: did it increase our confidence? That matters most at the beginning of a project.
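To pin down the experiment costing just described, here is a small sketch. The rough $100-an-hour rate and the idea of combining effort hours, lag days, and a certainty rating come from the talk; the exact way they are rolled into a single cost-of-certainty number wasn't given, so the formula below is an assumption for illustration:

```python
from dataclasses import dataclass

HOURLY_RATE = 100.0   # rough blended rate for an engineer/PM/designer, per the talk
LAG_DAY_COST = 100.0  # assumed: what a day of waiting "costs" in opportunity terms

@dataclass
class Experiment:
    critical_number: str  # which key metric this experiment validates
    hypothesis: str       # the line in the sand, e.g. "customers will pay $2 per document"
    certainty_gain: int   # 1-10: how much more certain a result would make us
    effort_hours: float   # actual hands-on hours
    lag_days: float       # elapsed waiting time (backlogged analysts, etc.)

    def cost(self) -> float:
        """Rough cost of running the experiment: effort plus a penalty for lag."""
        return self.effort_hours * HOURLY_RATE + self.lag_days * LAG_DAY_COST

    def cost_of_certainty(self) -> float:
        """Assumed combination: cost per point of certainty gained (lower is better)."""
        return self.cost() / self.certainty_gain

# Example backlog: run the cheapest certainty first.
backlog = [
    Experiment("revenue per user", "target customers will pay for a 60-minute session",
               certainty_gain=9, effort_hours=8, lag_days=14),
    Experiment("cost to build", "a prototype scanning flow takes under two dev-weeks",
               certainty_gain=6, effort_hours=40, lag_days=5),
]
for exp in sorted(backlog, key=Experiment.cost_of_certainty):
    print(f"{exp.critical_number}: cost ${exp.cost():,.0f}, "
          f"cost per point of certainty ${exp.cost_of_certainty():,.0f}")
```

The team's actual sheet used a couple of extra numbers behind the scenes, so treat this as the shape of the calculation rather than the formula itself.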
As you get into the later parts of the project, when you're committed and building it, that confidence question becomes less important, but the structure of the experiments is still useful.

And then, as I said earlier, once the results are done, we played for a little while with just publishing the results doc. No one likes to read documents; they just don't. Some people do, and that's great, but mostly they don't. So what we did instead was use QuickTime on a computer, record a video walking through the results doc and the experiment, and publish the video to the company intranet so people can just press play and watch it. You can also post the five-second blurb, "this is a terrible idea, exec, and this is why," to get his or her attention. Then you lather, rinse, and repeat, and keep going with all your experiments.

Yeah. So, how do you decide which ones to spend time on, and how long do they take? Effectively, yes. Okay, cool. And do we pull resource and time from developers to do experiments? No. Okay, let me answer all of these in sequence. Each experiment has a different cost associated with it in terms of time. Dev items are experiments too: you can frame them as experiments and the devs go build them, where there's a prototype or an API they're building, et cetera. There's a whole dev side of this process that I haven't talked about today, but it fits into this and it's based on these same questions. It doesn't have to be, but we made it fit that way. Some experiments for PM and design go really quickly: I need to change the color on this button because I think it will do something, so I change the color and run a test on UserTesting, and I can turn that around in three days. That's why we have the Death Star: some things take a long time and we know it, and some are really quick. It also depends on you as an individual. A competitive analysis that takes me a day might take you an hour; we have different competencies.

Now, how much time overall do we spend on this process? For my team, our entire day-to-day revolves around this. We don't work on anything that isn't an experiment. If someone says, hey, we're going to do something, we say, cool, let's write it up as an experiment, we put it in, and we have a card tracking that experiment. So all of our what-am-I-working-on-today and what-did-I-finish items relate back to experiments. When I say that, it sounds kind of heavy, but it really isn't. Once you learn to articulate things as experiments, they help you quickly figure out the what, the why, and the how, and that's what you need to go do anything, typically. Sure, there are tasks that don't require experiments, like phoning a customer about something or other. But for anything that might end up being a feature or product we build, we try to frame it as an experiment. Anything else on that question? Okay, cool. That's it. That's EDD in a very quick nutshell. 7:30.
That's pretty good timing, I think.

So, you have experience working for large corporations and also for your own startup. For your own startup, did you use a similar methodology, and at which point do you use it?

I didn't. In the first startup I did, I didn't have this methodology. We did do product requirements specs and design specs, and we'd say, obviously we need to test this, let's go test it and see if it's good, but we didn't put it into this framework, so it wasn't really structured; this helps provide a really nice process for tracking things. I'm advising a startup right now and suggested they try this process out; they've put all of their stuff into it and they're working that way. For the thing I've just started doing myself, I haven't got to that point yet, but I think even as a team of one this would help me structure why I'm doing stuff. Starting anything is chaotic; there are a million things you need to think about, so how do you track them? You can do it this way. And there are ways to short-circuit it so it's not as verbose while still capturing the same information, and the information helps you draw conclusions.

Yeah, next question. You mentioned product features. If you're in an organization where someone other than a product manager or designer, someone who's never really worked on software, comes up with a product idea and you have to evaluate it for them, how granular do you get? Do you say, listen, I see what you're saying, but there are actually five use cases here, so that's five product features, let's run each one through the method? Or do you say, okay, let's put them all together? Because, as you've emphasized, it shouldn't take super long. What's the method of analysis?

I would say start small, start quick. Sit with that person, or get the idea from them as best you can, and try to put it into this. I think this provides a nice structure for the conversation. If this person isn't technical, and this happens all the time (there are lots of business people out there with money and ideas who need someone to convert them into a product), you can use this to help them turn it into a product. I consult for various people, and this is typically how they want someone to help them turn the thing they're thinking about into something that's actionable for a developer to work on or a designer to design. It's a good starting point. As you go deeper, you'll get more and more granular, but you wouldn't start with every particular use case, just the top stuff. What are the top three to five things? Even a simple product (I'm working on a simple idea right now that I think solves a good use case) grows: the more I brainstorm it, the more features I add to the list of things that sound awesome. But I'm not going to ship all of them; if there are 40 of them, I'll ship three. What are the core use cases? When you're thinking about this, what is the key problem your product is solving?
And what is the key user narrative, the user action they have to take in order to solve that problem? Everything else is actually irrelevant when you're trying to find product-market fit. You need to ship that thing first and see whether people actually want it. Say we've identified a problem. I'll choose a silly example: we really think a product to help sell chocolates on the subway is going to be a huge hit in New York. You could expand that outwards: people who buy chocolates will probably also want a milkshake, and they'll want a very clean environment, so we'll throw in aprons, and now you've developed this huge product package for selling chocolates without validating that people would want to buy a chocolate in the first place. That's a way of spending a shit ton of money on an idea that's bound to fail. So try to find product-market fit, especially in a startup, because you just don't have money. You're lucky if you do have money, but even then you have limited resources and time to prove the thing in market before you run out.

Can we go forward to the moderate spreadsheet? I think I'm okay with calculating those seven points. The next slide, though: I'm not as skilled at estimating the customers for years one through ten. I know it varies depending on the product, but can you take us through your thought process for how you even hypothesize those numbers? I don't really know where to start.

Yeah. So you start small, because you're only going to have a couple of early adopters at first. And with these numbers, without being specific about where this particular product's numbers came from: if you don't have any product in the market yet, these are all going to be thumb-sucks. Say you're Instagram and you haven't shipped the beta yet; you're going to be like, oh God, we could have a billion users if we're really good. That might be shooting for the stars. Also, this example shows a fairly slow trajectory; you can see that if you're growing this slowly, maybe it's not worth doing the feature, though it depends on the nature of your app. If you've got a social app and you're not hitting 100,000 users within six months to a year, that's probably a sign this thing isn't going to find product-market fit. So there are benchmarks you should go and research, just to get an idea of what's acceptable in that industry space and what you should be aiming towards, and then you can work backwards from there; that's a top-down approach, and it's one way to go. Another way, if you do already have customers: let's say you're shipping Microsoft Office and you've got 100 million users. You're going to have a good idea, based on other metrics you already have, of how many users might end up using a new feature. Maybe you have experience with Office because you've worked there for 10 years, and you know that 3% of customer support queries are about needing an eraser tool.
That might give me a good idea offhand that, of the 100 million users, we'll start with 1,000, and by year five at least half of everyone will be using it, so that's 50 million users, and you can work back from there. Without that data you can thumb-suck it and say, I think it's going to be half, 50 million users over here, but then you go and run some experiments to really figure it out. This is supposed to be quick: do a rough-justice estimate and then go ask the data analysis team, hey, how many people are using the delete function on our documents, or whatever makes sense, to try to pin it down. That's one of the reasons you want to write the experiments.

That was my follow-up question then: how would you test it? So you're saying you test year one, but the other years are just extrapolated from that, and year one is going to be your best data?

Yeah, totally. This whole thing is not real numbers; it could be much more, it could be much less. It's a guesstimate. As long as you're guesstimating with roughly the same methodology across all your products, it will be a relative number, which is what you're looking for: relative comparisons between these different features and products. And those numbers get better over time. If your product has been on the market for four or five years, you can get a curve: okay, I'm Evernote, I put out a feature every three or six months, and every time I do, the adoption curve looks like this. You can use that as a data point to figure out how many people are going to use this thing. It's more difficult in the startup situation. Any other questions? Yes.

What's your recommendation on the best approach for getting buy-in for this kind of product evaluation process?

To use it. No, really, that's what I'm saying: the best way to get people to use something like this is to lead by example. We did a lot of work trying to evangelize this to the other teams, and some teams picked it up and some didn't, but we just started using it because it was useful for us. Then we started publishing these results docs, and people said, oh, that sounds pretty cool, maybe we should frame some of our things as experiments. So it picks up over time. Large organizations are very difficult to influence with regard to process: every team has a different process, a different technology stack they like to use, et cetera. Consultants come in saying, we want to change your whole organization to use this new thing, it's going to make everything great; in all my experience, I've never seen one of those actually work. Teams typically pick up this and that. That's why I said earlier, out of all of this stuff (I think there's a slide on this at the end), if you took away one thing, maybe it's the sheet, or maybe it's, hey, we're going to frame some of our things as experiments. That's a win, right?
Like, if you have some new ideas about how to prove out whether a feature is worth doing or not based on this conversation, that's great. So lead by example and pick stuff out of here. Product School teaches a bunch of cool stuff about how to frame things and how to select which features to work on next, et cetera. There are lots of frameworks out there; this is what we found worked for us. It may not work for the next company. I like the way it works, but the next company I go to may have their own way of doing things, and maybe I can pull one thing out of this. Sometimes you'll find companies that don't have jack: they're running around like headless freaking chickens. I was the first product manager in a company that was exactly that, a hundred-person development team, maybe one PM, and not a lot of discipline, and coming in and bringing some tools made people really happy. They were really resistant to process, but once they had it, they were like, shit, this is cool, dude; I'm not going to develop a feature unless you give me that PRD or the burn-down list for what I need to build. Whereas before they would just go build whatever they wanted. So take what you can, influence as many people as you can, and if you're lucky, they'll listen.

In your experience across your career, does new idea generation generally come from PMs for new features, or is it generally someone higher up, the CEO, saying, hey, I want you to do this, and then you figure out the game plan?

As a PM, you've got ideas coming from everywhere. Sometimes they're your own: if you're going to build your own company, the seed has probably come from you, or you've observed your girlfriend struggling with something, you've seen a human problem that needs a solution and you think there's a business behind it. In a startup, when you go in as a PM, you typically have a CEO who's product-oriented and wants to build something; they'll have a bunch of ideas, they'll give them all to you, and you'll have to figure out which ones are worth building and which are not. If you already have customers, those customers will be asking for stuff too, so there are a lot of different inputs coming in. Idea generation can also come from the data. You've collected data, no one's looked at it, and when you as the PM look at it a little closer you find that no one's using search. We're spending all this time on search; should we kill it or fix it? If we should fix it, what's the problem with it? You've got to look at each of your features, maybe with the data, and see: maybe we need to fix this, maybe not. You can decide what to fix by figuring out what your company's success metric is. If your company just released this product, they probably really want to focus on acquisition, right?
Because they probably don't have any customers yet, or they've spent a million bucks on advertising and they're not getting customers in. Why? You can go figure that out and help them fix it; that's part of the value you bring to the table. Another success metric, if you've got enough customers coming in, you've got this great funnel and millions of customers are pouring in, might be retention: how do you keep them? We're finding that this huge funnel we built, which we spent three million bucks on because we ran a Super Bowl ad, drove all these customers in and no one's coming back. It turns out they're logging in because you've got a quick Facebook OAuth, so they can do that, but how many pages are they visiting, how many clicks are they making per page, how much time are they spending on each page? Go look at that in Google Analytics or whatever, and if you haven't instrumented it, go instrument it. Then you can start figuring out that if we want customers to be happy when they come, and right now it doesn't seem like they are, then we want to focus on how to make them happy: a great, smooth onboarding experience.