OK. My name is Jez Humble. I'm the same Jez Humble that was here yesterday. The background to this particular presentation is that I was a product manager from about 2008 to 2010, maybe 2011, for a product called GoCD, which is a CI server. When I became a product manager I was working at ThoughtWorks, and ThoughtWorks basically has a philosophy that we'll hire people who seem reasonably capable and then just say, go off and do this, you're the product manager now, off you go. And I'm like, OK, what does that mean? So I looked at Scrum, and Scrum basically tells you the product owner's job is to prioritise things, go to showcases, and decide the value of what you should build. That's extremely unhelpful advice as a product owner. I did that. I came up with a huge list of stories and I prioritised those things. I went to showcases, and that was great. And no one bought the product for a really long time. Because it turns out everyone can have opinions, but actually working out what's valuable to your customers, this idea of what is value, is really, really hard. Agile comes from a situation where we know more or less what to build, where there's one customer, and it comes from internal IT. The very first celebrated Agile project was the C3 project, the Chrysler Comprehensive Compensation project, where Kent Beck and Ward Cunningham and all these people worked. What's not very well known about that project is that the product owner actually developed extreme stress, to the point where she had nervous tics and had to take a really long time off, because it was so stressful being the product owner for that team of people. The product owner role is basically treated as a black box in Scrum and in many Agile methodologies, and that's terrible and it doesn't work. 
So after finishing being a product manager, I managed to get myself out of it by writing this book, Continuous Delivery. That was the only way I could say, I don't want to do this anymore, please can I leave? Okay, I guess you've written a book, you can go and do something else. So that's a pretty extreme way to find a way out of your job. I reflected for a really long time on how we could actually do product management in an effective way. I now teach a course on lean product management at UC Berkeley based on a lot of that reflection and research, which led to my second book, Lean Enterprise; most of what I think about how we should do product management in an enterprise context is in that book. So I'm going to give you a very short version of that in the next 43 minutes or so, and I'll try and leave some time at the end for questions. The problem that we have is that most of what we think product management should be comes from project management. In fact at UC Berkeley, where I teach this stuff, there was a course on project management, and the person who taught that course quit. The Dean of the School of Information said, who would like to teach the project management course? And there were crickets for a month. Nobody wanted to teach that course. So after a month I chimed in and said, well, I'll teach it, but I'm going to teach product management instead. And either because she thought it was a really good idea or because she was really desperate, she said yes. I don't know which. But project management is fundamentally unsuited to building products. Projects have these characteristics: once we've built the thing, it doesn't change much, and in the course of building it we don't discover significant information. 
When you're building things like bridges you do a lot of planning and design, and if you do a good job you don't have to redesign it in the course of building it. And you have to complete the thing before you can start using it. None of these things is true of digital products. If they are any good they will change a lot over the course of their life cycle, unlike, say, a bridge. In the course of building the thing we will discover significant information that should cause us to rethink large amounts of our design and our assumptions. And we can start using it before it's completely built. So none of these things, which are fundamental assumptions of project management, is true about building digital products. That's a really big problem. I went to Japan a few times over the last five years to speak at Agile Tokyo. And Agile Tokyo is a really fascinating experience, because what happens is I go there and I'm like, so you know this lean stuff that works in manufacturing, that you all taught us from Japan? You can use that for software. And everyone's like, no, it'll never work. My hosts took me to see big companies in Japan where they would show me their project plans, and they looked a bit like this. This is disguised to protect the guilty. Basically this is for a consumer device, and they've got a classic V-model: requirements gathering, analysis, design, implementation, integration, acceptance, release. But guess what? Suppose you release your product to your consumers and they don't like it. You can't fix that problem in the second release, because the analysis has already happened for the second release by that point. No, you have to wait until the third release before you can fix any problems you find after the first release of the thing. That could be, like, a year. This is how people do things, still, like three years ago, in a very big company that you'd all recognise the name of. 
So I feel like we've known for a long time that this is a terrible way to build products. And you know, the agile software development movement effectively won the methodology wars. I mean, how many of you go into a company which is very proud of the fact that they work in a waterfall way? Like, today. Have you seen those companies? OK, but you say banks, and I would agree that traditionally that's true. But I've spent a lot of time being called into banks over the last few years by senior execs who are like, this doesn't work anymore, we need to do something else. So yeah, I think the tide's turning in all these places. I actually am now allowed to talk about the fact that I also work for the US government, where we're adopting agile methods for delivering projects. Part of the reason I took that job is so I could say, well, we're doing agile in the US federal government, what's your excuse? That's a big thing for me, to be able to say to all these people who say it won't work here, basically, that's not true. Let's talk about that over some beers. So despite the fact that agile won the software methodology wars, if you go to a lot of companies that claim they're doing agile, what you'll find is that they're actually doing a slightly different methodology that I like to call Water-Scrum-Fall, which is the most popular agile methodology today. Basically we're doing the Scrum stuff in the middle: we have these nice iterations where the software hopefully is actually releasable at the end of every iteration. Not actually being released, though. You still have to go through integration and central QA, where it gets shipped off somewhere else to be QA'd by a bunch of people who don't understand the code base. Then hopefully you have some time to fix the bugs, and then the whole thing gets tossed over the wall to IT operations to run. 
And this is the problem that continuous delivery is supposed to solve, and I'm not going to talk about that in this session except for a little bit at the end. But we still have this thing up at the top, what Don Reinertsen calls the fuzzy front end, where it can take months to go through a process of putting together a business plan, going through a budgeting process, getting approval, then going and doing analysis and requirements gathering, writing up these huge documents that get stuck on the table of some poor project manager to try and deliver. That's fundamentally a really miserable process. And then whether the thing is judged to be a success at the end is still based on the iron triangle: did we deliver the expected scope with the resources available, in the time we had available? The thinking is that this is a zero-sum game: if you compromise on any of these things, the quality of the final solution will be compromised. Sorry, I'm not allowed to walk in front of the screen. This is not true. This is what Lean tells us: it's not a zero-sum game. We can deliver higher quality in shorter times with less resources. And also the scope doesn't actually matter that much; I'm going to talk about that in a minute. But this is fundamentally unsuitable for building products, because the success of a product isn't dependent on many of these factors. The success of a product depends on whether we make money. Don Reinertsen has a great saying: the way the world tells you whether what you're doing is valuable is they send you money. This stuff actually isn't important if no one sends you money. So what you want to find out is, is someone going to send me money if I do this thing? And what you really don't want to do is wait a year or two to find out if that's the case. So what should we do about this problem? 
Number one, we should be focusing on the outcomes, not the costs of what we're doing or the output that we produce. Secondly, we want to optimize for the case where we are wrong. In this project management process, Water-Scrum-Fall, we're basically optimizing for the case where all our assumptions are right. That is almost never the case building digital products. We're going to use something called optionality to manage that risk. Product management fundamentally is risk management; that's all it is, same as project management, but the risks are different from the kind of risks that project management typically cares about. We want to build feedback loops to test our assumptions about what our users and customers will find valuable. In order to make those feedback loops economic, we need to make it economic to work in small batches: to build products in small increments, ship them, and get feedback so we can test our assumptions. And we want to take an experimental approach to product development. So I'm going to talk about this bit. I like to run surveys and get data. Surveys are great; they're a really rich source of confirmation bias. We did a survey at ThoughtWorks with Forrester Consulting where we asked a bunch of execs some probing questions about their processes and the effectiveness of those processes. One of the questions we asked is this: please select the statement that most closely aligns with how your company decides which products are built. 47% said a committee decides from potential options: decision by committee. 24% said they use some kind of economic model to make their product investment decisions. As a joke, we put in "the opinion of the person with the highest salary wins out". We actually misspelled it; that's my fault. 13% of people said that that was the process they used to decide what to invest in. I actually think it's probably more common than that. 
7% said they use the product portfolio approach, which is some bullshit language for the same thing as "committee decides from potential options". And "committee decides from potential options" is actually "the opinion of the person with the highest salary wins out". So this and this and this are all the same thing, really. And then 7% said no systematic approach, and, you know, I respect those people. So what this means is that 24% of people use an economic model to make their investment decisions, and 76% of people don't. That is the sad conclusion of this survey. So this is part one of Water-Scrum-Fall: making the investment decision. Once we've made the investment decision, we do a whole lot of requirements gathering and estimation. What's the point of doing the estimation? Why do we do estimates? What's the variable we're trying to account for when we're doing the estimation? Cost. So we spend a lot of time, a lot of time, trying to work out the cost of what we're going to build. It turns out that cost is extremely unimportant in terms of making investment decisions. One of my favourite books of the last ten years is a book by Doug Hubbard called How to Measure Anything. Doug Hubbard has been examining business cases in large companies for many, many years, and he uses a technique called Monte Carlo simulation. You've got a business case in a spreadsheet. What you do in a Monte Carlo simulation is you randomize the input variables and see the impact on the output variable you care about. So you can randomly vary the cost input and see the effect on the overall thing you care about, which is typically product life cycle profit, or return on investment, or whatever. And what he found from many years of studying business cases and doing mathematical analysis on them is this. 
"Even in projects with very uncertain development costs, we haven't found that those costs have a significant information value for the investment decision. The single most important unknown is whether the project will be cancelled. The next most important variable is utilization of the system, including how quickly the system rolls out and whether some people will use it at all." Will people send me money? How much it costs doesn't matter if no one sends you money. And if people send you money, it shouldn't matter what it costs, unless you're comfortable with, you know, 3% to 5% return on investment, in which case you would typically be better off taking your money and investing it in index funds. I'm very serious about that. There are many companies who would do better by not developing software at all and putting that money in index funds, and they should probably do that. The reason you develop products is to get an enormous return on your investment, tens or hundreds of percent, and if you succeed in that, the development cost is trivial compared to the money you make. So if you fail, the development cost doesn't matter; if you succeed, the development cost doesn't matter. It's unimportant. Instead of spending all that time estimating to an nth degree of precision that we don't actually care about, because the plan will change anyway, what a lot of big companies are starting to do now is a very lightweight planning process. I really like this book, A Practical Approach to Large-Scale Agile Development, by Gary Gruver and his colleagues. It talks about HP LaserJet firmware, which was a large programme of work. 
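The Monte Carlo sensitivity idea Hubbard describes can be sketched in a few lines. This is a toy model, not Hubbard's actual spreadsheet: the revenue model, cost model, and all the numbers are invented for illustration. The point it demonstrates is the one in the quote: randomising the adoption input swings the output far more than randomising the cost input.

```python
import random

def lifecycle_profit(adoption_rate, dev_cost_months):
    """Toy business case: 3 years of revenue minus development cost.
    All coefficients are hypothetical."""
    monthly_revenue = 100_000 * adoption_rate  # revenue scales with adoption
    dev_cost = 50_000 * dev_cost_months        # cost scales with effort
    return monthly_revenue * 36 - dev_cost

def sensitivity(vary, trials=10_000):
    """Monte Carlo: randomize one input, hold the other at its midpoint,
    and report the spread of the output variable we care about."""
    outcomes = []
    for _ in range(trials):
        adoption = random.uniform(0.0, 1.0) if vary == "adoption" else 0.5
        months = random.uniform(6, 24) if vary == "cost" else 15
        outcomes.append(lifecycle_profit(adoption, months))
    return max(outcomes) - min(outcomes)  # profit range driven by this input

# Uncertainty in adoption moves profit far more than uncertainty in cost,
# which is why cost estimates carry so little information value here.
print("spread from adoption:", sensitivity("adoption"))
print("spread from dev cost:", sensitivity("cost"))
```

In this toy case the adoption spread is roughly four times the cost spread, so even a very rough utilization estimate is worth more than a precise cost estimate.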
It was 400 people split across three countries, India, North America and Brazil, a large distributed team, and they did their entire planning process for the next year on a single piece of paper, where they estimated, in engineering-months for each of the components, how much capacity would be required to develop the list of initiatives. The first thing they did, which was very, very sensible, was to rank those initiatives in order of priority. Who has tried to get a bunch of business owners to agree on the priority of a set of initiatives? How did that go for you? There you go. You get a different list every week. Everything ends up coming back number one priority, right? So that ranking is already the most important thing you can do. It fits on this very small bit of paper, but it has very high information value. And then they just take a very rough, low-precision set of estimates, basically firmware engineering-months, and this is their planning process, for a 400-person programme which is critical to HP's, well, not HP Enterprise, the other HP these days. Anyway, it has a big impact on their top line. Oops, didn't want to do that. Yes. The blue items are the things they're not going to get to, because they don't have any capacity left. Basically, you get down a certain way and then you're like, oh, this initiative is going to require 27 engineering-months, and we don't have 27 engineering-months, so this thing is not going to get done. So this is capacity planning, right? We have X amount of capacity; this exceeds our capacity; these things aren't going to get done. It's a very quick feedback mechanism: what's likely to get done, and what's not, given the high-level, low-precision estimates and an ordered list. This is all the information you actually need. You do not need an enormous document full of stuff. You just need this information. 
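The whole one-page planning mechanism is simple enough to sketch. The initiative names, estimates, and capacity below are invented, not HP's real figures; the logic is just the walk-down-the-ranked-list-until-capacity-runs-out process described above.

```python
# Hypothetical version of a one-page plan: initiatives already ranked
# by priority, each with a low-precision estimate in engineering-months.
initiatives = [
    ("print quality", 40),
    ("security update", 25),
    ("new UI shell", 60),
    ("cloud printing", 27),
    ("localisation", 30),
]

capacity = 130  # engineering-months available this year

def plan(initiatives, capacity):
    """Walk the ranked list; fund each initiative that still fits,
    and mark the rest as 'blue' (won't get done)."""
    funded, cut = [], []
    used = 0
    for name, months in initiatives:
        if used + months <= capacity:
            funded.append(name)
            used += months
        else:
            cut.append(name)
    return funded, cut

funded, cut = plan(initiatives, capacity)
print("funded:", funded)          # the top of the ranked list
print("won't get done:", cut)     # e.g. "needs 27 months, we don't have 27"
```

With these numbers the first three initiatives consume 125 of the 130 engineering-months, so "cloud printing" and "localisation" fall below the line, which is exactly the quick feedback the one-page plan gives you.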
So this is the business, the various different stakeholders, like product marketing, and then other business units who are involved in it. One of the most important things to do when you become a product owner is find the people who can say no to you, and find the people who, when they say yes, mean you can actually do something. Finding out who actually has the power in your organisation is really important. The first thing you should do when you join a new company is find out who you need to get a yes from in order to get something done, and then don't bother talking to anyone else. The engineers are there as well; the engineering leads would collaborate with those people to produce these high-level estimates, but it's a very lightweight process. I'm actually not going to talk about cost of delay here because I don't have enough time, but that would be a very effective way of thinking about this. So this is talking about cost, not value, but in order to prioritise these things, that's the big problem: how do you decide what's valuable? Don Reinertsen wrote this book called The Principles of Product Development Flow, which is the best book you can possibly get on product management today, but it's very, very hard to read, because he's very dry and the text is extremely condensed. So you can't read the whole thing in one go unless you're much smarter than me, which you may be, so good luck. But it's very, very dense. One of the things he talks about is the fact that the value of delivering a feature is time-dependent. If I deliver the feature now, it may make me a lot more money, and the typical way we think of prioritisation doesn't take into account the time dependence of the thing we're building, and that's very important when you're prioritising things. 
So I'm not going to talk about that today because it's a big topic, but the TL;DR is that thinking about time dependence is important. However, even this doesn't take into account a very important fact, which is that every innovation, every idea, has a life cycle. So when you want to develop a new product, you have to first consider the life cycle of innovations. This S-curve came out of the mind of an academic in Iowa who was studying technology adoption by farmers. What he was studying is: when a new technology comes out, how long does it take for farmers to start using that technology? And he basically came up with this S-curve, the idea that new innovations that are good will provide a competitive advantage for the people who adopt them. Then other people will see them, and they will buy that thing, they will adopt that idea, and it will give them a competitive advantage too. If the idea is good and it actually works, more and more people will start using it until eventually it becomes a commodity, because everyone starts building versions of it. So, like operating systems: everything starts off providing a competitive advantage, then it's commoditised, and then it provides a building block for the next higher-level innovation. Transistors were a competitive advantage, then they became a building block for ICs, which were a competitive advantage, and they became a building block for computers, which became a building block for the cloud, which is then going to be a building block for artificial intelligence, and so forth. Another guy called Geoffrey Moore adapted this idea. Everett Rogers was the guy who came up with this diagram. Everett Rogers also came up with this way of modelling the diffusion of innovations throughout a group. 
And what he said is, you can actually divide your cohort of people into groups with very different kinds of behaviour. You've got innovators, who always want to try the latest things; think engineers, right? Engineers always want to try the latest framework, the latest technology. Then there's a whole bunch of people who aren't going to get the latest thing, but if they see the innovators succeed with that idea, then they'll start using it. Then you've got the pragmatists, who will watch this group of people, and if they see it work, they'll adopt that idea, but they're not going to adopt anything unless they can tell that everyone else is going to adopt it too. Then you've got the late majority, people who will not use any idea until at least 50% of their social group is using it, and then there are the people who just won't use anything until they're absolutely forced to. And this is true not just in society as a whole; it's also true in our companies. Organisations that try to do big methodology rollouts across the whole organisation often fail because they ignore this fact. These people are not going to adopt a new methodology until they've seen it work; in fact, this whole group of people sits after the chasm. The chasm is Geoffrey Moore's invention: he put a big white gap about a third of the way into this diagram, and he wrote lots of books about this white bit here. So you've got to prove that your new ideas are successful with this group before the rest will adopt them, which is why it's important to do pilot projects and prove success before you can get the methodology rolled out to a wider audience. This is true of products as well. Products, methodologies, culture changes are all subject to this dynamic. So you need to find out, first of all, if these people are going to actually buy your product and send you money, and target those people. 
If you're successful with those people, then you target the next group, and if you're successful with them, the group after that. But you can't target your product at the whole audience. You've got to find, first of all, the people who are going to try your thing even though it only solves a very small subset of their problems. Then they're going to say, this is really great, and then the next group of people will adopt it. Any company which expects to live a really long time is going to have to have products in various different stages of this adoption life cycle. A group of McKinsey consultants created a model called the Three Horizons model, which is an elaboration of this fact. What they say, basically, is that any successful company is going to have to have products in three different horizons. Big companies have got big because they built a product which made them a ton of money. That product is in horizon one: it's making money right now, generating today's cash flow. However, this is not going to last forever, because guess what? All products eventually become commoditized. Everyone uses them; they're no longer a competitive advantage. You can still make money out of those things; IBM, for example, is still a very profitable company, but only rarely is a company able to do that with a product which has basically become commoditized. In order to continue to grow, you're also going to have to have products in horizon two, which provide some revenue today and will provide the cash flow of tomorrow. And you're also going to have to have a bunch of options in horizon three, ideas which are not going to be widely adopted for three to six years, but which we need to test in the market right now. So every large, successful company is going to have products in all three of these horizons. 
And the crucial thing is you have to manage those different horizons very, very differently. Intuit, for example, is well known for adopting these practices. They have 60% of their cash invested in their existing businesses, like TurboTax and Mint, and they measure: are they growing in their category? What's their market share? What's the Net Promoter Score? How much revenue are they making? Then they have a bunch of adolescent businesses which aren't providing all their cash but are providing revenue, like the QuickBooks Online accounting software, and they measure different things there, which is very important. They're not measuring Net Promoter Score and revenue; they're measuring growth and increasing efficiency, which will lead to profitability at some point, but isn't necessarily profitable right now. And then they invest 10% of their operating expenses into new ideas, like SnapTax. Those are measured, again, by a different set of metrics, which in this case they call love metrics: basically, do their customers love their products? One of my favorite books on product development from the last couple of years is by Kathy Sierra. My brain has melted, I can't remember. Does anyone remember what Kathy Sierra's new book is called? I'm going to look it up using my pocket supercomputer. It's a really good book. Badass: Making Users Awesome, there you go. She basically says the point of product development is to make your users really awesome at something, and that's basically what love metrics are. Do your products make your users so awesome that they can't stop talking about how great your product is? That's the most important metric for a new product. I used to think that when I asked my customers if they liked the product and they said yes, that was a really good sign. 
That is not a really good sign. A really good sign is when you don't say anything and your customers are tweeting all the time about how great your product is, without you asking them. People often think that customers liking your product is good enough, but at this stage it's not. Your customers need to love your product so much they can't stop talking about it. So the important thing here is how they're allocating their money across these different horizons, and that they're measuring them in very different ways. You can't apply the same set of metrics you use for large growth businesses to very new ideas that you're testing in the market. You have to measure their success very differently. Fundamentally, there are two activities we're involved in when we're doing product development. One is exploring new ideas, which is horizon three; horizons two and one are about exploiting ideas that we already know are good ones. We need to think about these two activities very, very differently. They require different strategies, different organisational structures, different organisational cultures. We're managing different risks in these two activities, we have different goals, and we're measuring progress in different ways. For the last ten years, big companies in North America have been going on sprees acquiring companies, and what happens is those companies get acquired, the founders stay for three years until they can sell their stock options, and then they leave because they're so miserable, and the product is usually either cancelled or just left to rot. The reason that happens is that the big company tries to apply the set of ideas on structure, strategy and risk management that works for exploitation to the explore activity, and it doesn't work. 
That's why acquisitions often fail: the big companies don't know how to do this stuff, and they try to use those management techniques to manage products that are still in the explore phase, and it doesn't work. So you've got to be very careful about how you apply strategy, culture, and risk management to new ideas versus ideas that have already gone up this curve. The other interesting thing about this curve: anyone who knows about startups has heard about the hockey stick. The hockey stick is basically the left-hand part of the curve, when suddenly your growth takes off. What most people don't tell you is that the hockey stick is actually just the left-hand side of the S-curve. This is the whole diagram, and the hockey stick is just the left-hand side of it. So you've got to account for that: when this happens, you'd better have something else that it's about to happen to, otherwise you're not going to succeed long-term. Now, the problem with project management in this context is that project management assumes that your assumptions about what is a good idea are correct. You only find out whether your assumptions were correct at the end, after you've built it. This is problematic because most ideas fail. Most ideas do not go up this curve, because they are bad ideas, and it's impossible to know in advance which are the bad ideas. This is a really big problem, but it's a problem we've known about for thousands of years, and we've had a solution for thousands of years. The solution is called optionality. When we can't predict the future, it's pointless to try. What you should do instead is have a bunch of options. Who here has a retirement fund that they put money into for when they retire? Good, very sensible. Everyone should have a retirement fund, very important. That's my investment advice. I have no fiduciary duty to you, so I can say whatever I want. 
I'm going to pitch you something for your investment fund. I've got four companies that I'm going to build, and I'm going to invest an enormous amount of money in these companies, and I have no idea if they're going to be successful. Would you like to invest in my fund? There's a very low chance that these products will succeed, but if they do succeed, they'll make an enormous amount of money. Would you like to invest your retirement in that? No, of course you wouldn't, because you're not dumb. Yet this is what many companies do with their IT budget every year. It's a stupid idea. In most cases they'd be better off putting their money into index funds. What investors actually do instead is the opposite: they don't invest in a very small number of risky companies; they invest in a very large number of them, but they limit the amount of investment in each one, with the expectation that most of them will fail, and it doesn't matter, because we haven't invested that much money in them, and a very small percentage will succeed wildly. That's called the positive black swan. There's a guy called Nassim Taleb, who's incredibly clever and the most obnoxious writer ever. His books are good, but every time you read them you're like, oh, this is really awful to read, because he's such a bad writer. But brilliant. He wrote this book called The Black Swan, which basically predicted the financial collapse, and he knows that he was right, and he's very quick to tell you. Essentially he says we've known about this strategy of managing risk for thousands of years: when the return is very uncertain, you manage that uncertainty through optionality. This is how we deal with the case where we expect to be wrong, and when we're developing new products we should always expect to be wrong. 
So instead of spending a large amount of money on a small number of projects, what we should do in Horizon 3, in the case of new product development, is have a whole bunch of teams exploring ideas, but limit the amount of investment in each idea. If you don't have a lot of people, this is what the Lean Startup is saying: if you've invested more than a month, or a few months, in a particular idea without validating it, you should pivot. In the Lean Startup the limit on investment is time-based, and once you run out of time you pivot to the next idea. What you can do in a bigger company is allocate three months' worth of budget to a whole bunch of teams to explore an idea, expect that most of those teams will fail and throw away what they've built, and then say, okay, go and explore the next idea, in the expectation that a very small proportion of those ideas will actually be very successful. In that situation, what you want to do is learn as fast as you can: validate your assumptions about whether your idea will be successful as fast as possible, which is the entire principle of the Lean Startup. It's about maximising optionality, and the key metric we care about from a productivity perspective is how fast we can learn from our customers about whether our ideas were right. That's the only productivity metric we care about: how fast can we learn. So this is problem one with project management. Project management works on the assumption that we're right, the feedback loops are very slow, and we only find out if our idea was good right at the end, after we've already built it. By then we're all subject to the sunk cost fallacy, no one wants to cancel the project, and eventually the VP is fired, because that's the only way to cancel the project. We end up batching up work.
You typically get a budget for a year's worth of work, and you have to plan what that is and get the budget for it in advance, and then you're committed to delivering all that stuff, and you want to deliver it because otherwise you won't get the budget next year. So the one-year budget cycle drives us to batch up enormous amounts of work, because in order to get the money you have to first say what you're going to spend it on. That's a terrible idea when we have large amounts of uncertainty about what we're building, because if we're going to be wrong about whether the things we're building are the right things to build, what we absolutely shouldn't do is stick to our plan. But that's how our success is measured, from the point of view of the budgeting process. So the annual budgeting process is in many ways a terrible idea, and one of the things it drives is batching up enormous amounts of work: you plan it all up front, you deliver it all, and you release at the end, or maybe every six months if you're feeling lucky. Another really good resource is a paper called Black Swan Farming Using Cost of Delay, by Joshua Arnold and Özlem Yüce. You can go to this link and download it. They went to Maersk Line.
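The core calculation behind that paper's approach can be sketched in a few lines. The backlog entries and dollar figures below are entirely invented, but they show the shape the authors describe: a couple of features worth millions per week and a long tail worth roughly nothing. Ranking by cost of delay divided by duration (the metric the cost-of-delay community calls CD3) puts the expensive-to-delay work first.

```python
# Hypothetical backlog: (feature, cost of delay in $/week, weeks of work).
# Figures are made up, estimated only to the nearest million, which is
# all the precision a cost-of-delay pass needs.
backlog = [
    ("report formatting",           0, 2),
    ("EDI integration",     5_000_000, 5),
    ("admin UI polish",             0, 4),
    ("new settlement flow", 3_000_000, 6),
    ("extra audit fields",          0, 3),
]

# CD3 = cost of delay / duration: schedule the work that burns the most
# money per week of waiting, per week of effort, first.
ranked = sorted(backlog, key=lambda f: f[1] / f[2], reverse=True)

for name, cod, weeks in ranked:
    print(f"{name}: ${cod:,}/week, {weeks} weeks, CD3 = {cod / weeks:,.0f}")
```

Everything after the first two rows is zero to the nearest million, which is the cue to delete it from the backlog rather than estimate it more precisely.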
So Maersk Line is, I think, either the world's biggest or second-biggest shipping line, a very conservative company that manages risk in a very long-term way, but they were having this big problem: their IT budget was enormous and they weren't delivering the expected value. Özlem Yüce actually worked at Maersk, and Joshua Arnold was a consultant who was brought in. One of the things they did was look at the big list of requirements that was going to be built; there were thousands of requirements in the queue to be delivered by IT. What they did was look at the cost of delay for those requirements. The cost of delay is: every week that we don't deliver this feature, how much is it costing us? Or, the other way of thinking about it: if we did deliver it, how much money would we make? They went through the entire backlog at a very low level of precision and asked what the cost of delay was for each of these features, and what they found was a very small number of features with a very high cost of delay. If we delivered these right now, we would be able to make millions of dollars per week from having these features delivered. But there was a very long tail of features with a very low cost of delay; delivering those features wouldn't actually make a lot of money. What they found from examining more projects is that this kind of power-law curve is very common. If you have a big batch of features, typically there's a very small number of features with a very high cost of delay and a long tail of features with a very low cost of delay. When you see this, it's very clear what you should do: deliver the high-cost-of-delay features as fast as you possibly can, and don't bother doing any of the other things. Any time you see a big list of requirements, what you should be thinking is: which of these is actually going to deliver us
money, and how can we get them out as quickly as possible? And the way we do that is by not bothering with any of the other things and just deleting them from our backlog. Yes? Well, the most important thing to bear in mind is that you do not need high precision. People waste a lot of time getting very high precision, down to the last dollar. We don't care about the last dollar; we care to the nearest million dollars. When you only want a very low level of precision, information theory tells us you don't need a very large amount of information. The more precision you want, the more information you have to gather, but if you want a very low level of precision, you don't need much information. So the important thing is: don't be very precise. Just say, to the nearest million dollars, how many dollars per week is this going to make us? The answer for 97% of those things is zero to the nearest million dollars, and that's good, because you want to find the three things that are more than zero to the nearest million dollars, and for that you don't need a lot of information. Honestly, I think they found that in pretty much everything they looked at there was this power-law curve; if you don't see it, then you've got a different problem. It's a smart idea to do this exercise and look at the shape of the curve. I'm afraid I'm not going to take any more questions, because I have five minutes left and I've got a lot more to get through; I'll do my best. So what do we do about this problem? Well, again, instead of thinking about outputs and delivering the requirements... I have a problem with this word, requirements. These things are called requirements, but whose requirements are they? Are they the users' requirements? Users don't know what they want; users know what they don't want. So I think we have to be honest and say, look, these are not requirements. What they are instead is hypotheses. We believe that this idea will deliver value, but we don't know until we've
validated it. So instead of thinking about requirements and delivering requirements, instead of thinking about outputs, we want to think about outcomes. We want to maximise the outcome and minimise the output required to deliver that outcome, and the traditional analysis process is very, very poor at helping us do this. There's a really great tool I like, by a guy called Gojko Adzic, called impact mapping. Impact mapping is this: you have a goal, an organisational goal. This is an impact map from a bank; they wanted to reduce their transaction costs by 10%. That's their objective, the outcome they're trying to achieve. Then they look at the various stakeholders who have an impact on that outcome: the settlement team, traders, and IT operations. Then they look at how those people could either help or hinder achieving that outcome, and then, at the end, here are all the things you might actually do to help or hinder those outcomes. When you get a list of requirements, what you get is a list of one of these things at the end. Someone upstream has thought about this problem and said, here's the thing we're actually going to do, and then they send that downstream as a requirement. Think about the enormous information loss in that process. Someone has mentally gone through this exercise and decided this is what we need to build, and then the engineers, the developers, basically get a list of one node from a bunch of impact maps. Usually the thing that those people choose is not the right thing to do, and you can't know which of these things is the right one unless you validate your assumptions. So actually getting the business people and the engineers together, getting them to work out what the measurable outcome they want to achieve is, and then working back to the solutions, is really important. And again, it's difficult to do. People always think about the solutions; they don't like to think about the problem that they're trying to solve. And
getting them to think, in measurable terms first, about how we would know, again, that's very hard to do, but very important, because who cares if we deliver all these things and don't achieve the outcome? It doesn't matter. And then what we have is basically a bunch of hypotheses. Some of these things, maybe all of them, will have some impact, but we don't know which ones will have the most impact. The other tool that I really like from the last few years: who's familiar with agile stories, the "as a... I want... so that..." format? Everyone's heard of stories. They're fine, I do like them, but I think they miss an important bit of information, which is: how will we know if we've achieved the outcome we wanted to achieve? So I really like this format from Jeff Gothelf, who wrote a book called Lean UX. He says: we believe that building this feature, for these people, will achieve this outcome, and we'll know it's successful when we see this signal from the market. What are we going to measure to find out if this thing will actually achieve this outcome, without building the whole thing? What experiments are we going to design and run to validate whether building this thing will actually achieve this outcome? That's a much better way to think about it, because then the people who are actually doing the engineering have all the information they need to test their assumptions, and that's so important. This is the biggest problem I have: it's still all about somebody upstream taking these big things, cutting them up into lots of little bits, handing the little bits out to all the engineers, who have no understanding of the context in which they're operating, and then, after many months, all those little bits come together and you get something, and then you have to spend some time actually making it work, and then you
deliver it, and then you find out whether that thing was a good idea to have built in the first place. I still like to rant about this, but it's a big problem. Thinking about this, and actually empowering the engineers to be running experiments rather than feeding them little bits, where their measure of success is "did I deliver the little bit?", matters. Who cares if you deliver the little bit? What matters is: did we achieve the business outcome we wanted to achieve? We'd better change our process for product management to think about that instead. I'm now out of time, so what I will say is that the hard part of this is designing experiments to test assumptions. It's still a big empty void: there are a few people doing a really good job and lots of people who don't know how to do it. The place to look for information on this is the Lean UX community, who are spending a lot of time thinking about how to build experiments to test ideas without having to build the whole thing. The gold standard here is A/B testing. Etsy are great at putting presentations out about how they do A/B testing, so go and look at all the Etsy stuff; they put it out there so they can recruit you, basically, and it's great because I can then steal their slides and put them in my presentations. When they're going to design a new feature, they don't build the feature; they design a small experiment, and the experiment is basically the 20% of the functionality that will deliver 80% of the value. They don't worry about scaling it, because they only expose it to 1% of users. They don't bother with corner cases, and they don't bother with cross-browser support, because they only let in one kind of browser. Then they do an A/B test: some large proportion of the users of the site see the site without the experiment turned on, and some very small proportion see the site with the experiment turned on, and then they actually look at the outcomes: how many people visited the cart with
this experiment turned on versus off, how many people bounced off the site, how many pages did they visit, how many things did they add to their cart. You can actually see statistically significant data on whether that idea was a good thing to build. That's amazing. For me as a product manager, that's like crack; it's the best thing in the world: real data on whether that idea is actually a good thing to build or not, and then you go and build it, or not. And they're running many, many experiments all the time to find that out. This is one of the major reasons why Etsy and Amazon are investing in continuous delivery: so that they can run experiments, work in small batches delivering experiments which are a few days' work, and then gather the data. Who was at Nicole's keynote yesterday? Lots of you, so you know what they found: most of those ideas, most of those experiments, were bad ideas. Ronny Kohavi, before he was at Microsoft, was at Amazon. Kohavi was the architect of Amazon's experimentation platform, and then he went to work at Microsoft and built the experimentation platform that Bing runs on. So both Amazon and Bing have many, many experiments going on at the same time, to find out which ideas are good and which are bad. So I'm just going to conclude with the thing I said in the middle about project management. To be fair to traditional project management, it is starting to move in this direction: the new set of project management texts from the PMI is starting to talk about outcomes and how you measure the outcomes, rather than the traditional stuff. So the latest work in project management is starting to move towards an outcome-based framework, which is great. There are still big problems with it, and with the implementations of it. But I don't like to think about project management anymore; I think product management is the real thing, and in product management we're going to focus on the outcomes and how we're
going to measure the outcomes, and work backwards from that, rather than thinking about how many engineer-hours we're going to spend or how much it's going to cost and so forth. We're going to optimise for the case where we're wrong by using optionality. Experiments are a form of optionality; MVPs are a form of optionality. MVPs are for product ideas, and experiments are for feature ideas, so the experimental approach works at the product level and also at the feature level. We're going to create these feedback loops, both in terms of software delivery and in terms of validating our initial assumptions, so we can validate those ideas from the fuzzy front end without actually having to build out the whole product. How can we validate those ideas much more quickly? Again, if you only want a very low level of precision, you only need a very small amount of information. In order to build those feedback loops, you want to make it economic to work in small batches, which is the continuous delivery piece. And then, most crucially, the story that Nicole told about the engineer from Amazon: the fact that an engineer at Amazon can push an experiment into production against the express wishes of the VP of products. That's the kind of culture you need to create, where engineers are empowered to come up with ideas and test them. And that's probably the biggest problem we have right now: engineers have to do what they're told, instead of being told the outcomes the company needs to achieve and being empowered to design and run experiments to achieve those outcomes. That, for me, is the final frontier of agile at the moment, and most of the big agile frameworks don't talk about it. Thank you very much.