Why Scaling Agile doesn't work, and what to do about it. So my experience with agile at scale looks a bit like this. This is not a term I invented; it was coined by a guy called Dave West at Forrester. He talks about water-scrum-fall. What happens in water-scrum-fall is that someone comes up with an idea for some new product or service they want to deliver. But we can't just start work on that, oh no. First we have to go through a budgeting cycle, which takes many months and involves gathering requirements, estimating them, and doing huge amounts of analysis before some nasty, enormous project plan lands on someone's desk. Then these teams in the middle, these scrum teams, develop in iterations and do all the agile stuff: you know, the stand-ups where they get told what to do standing up instead of sitting down, and acceptance testing, and all this kind of stuff. But nothing actually gets delivered to customers. Oh no, before that can happen you first have to integrate everything, then you have to test it, and then you toss it over the wall to some poor schmucks in IT operations so they can try to somehow make it work.

Unfortunately, if you take this and make the middle bit agile, it typically has very little impact on overall cycle times. If you look at the total cycle time, from the golf course up here to a measurable customer outcome down here, and you make the bit in the middle agile, typically you're not going to see more than a 5-10% improvement. Approximately a third of the time is up here, and that doesn't change; we still have to do all this crap. And the stuff at the bottom takes a good 20% of the time, and we're not changing anything down there either. So this is pretty miserable, but unfortunately it's the reality of Scrum and agile in many organisations. Who works in an organisation where this happens, or maybe you've got a friend in an organisation where this happens? Okay, yes, so this is very common, unfortunately.

And then, if you look at the decision-making process in front of all this, particularly at this point up here, how are these decisions made? I went and gathered data on this. I did a survey with some people from Forrester where we asked executives: please select the statement that most closely aligns with how your company decides which products are built. So how do big companies make investment decisions? What you find is that 47% said a committee decides from potential options. 24% said they use some kind of economic model to make investment decisions, which sounds like a good idea. We put in "the opinion of the person with the highest salary wins out" as a joke, and 13% said that's what they do; and by the way, that's exactly the same as "committee decides from potential options", those are actually the same option. Then 9% said they use a product portfolio approach, which is also the same option. And 7% said they have no systematic approach; good for them for being honest, well done. So the conclusion is that 24% of executives use some kind of economic modelling to make their investment decisions, and 76% of executives don't. That's depressing.
If you look at the rest of what people are doing up here, what are they primarily spending their time on, and what's the output of that activity? Estimation and requirements. Why do we gather estimates and requirements? What do we want to know? Correct: we want to know about cost. However, when we're innovating, building products where we're trying to do something different, the variance in the outcomes is very high. One of my favourite quotes on this comes from a guy called Douglas Hubbard. Hubbard wrote an excellent book called How to Measure Anything, which I strongly recommend everyone read; it's a fabulous book. He also wrote an article that is a very short summary of it, which I recommend if you can't be bothered to read the book. In this article he talks about work he's done over many years looking at projects and their outcomes. He's done loads of work with big companies, doing very complex spreadsheets and estimating costs, and his conclusion is this: even in projects with very uncertain development costs, we haven't found that those costs have significant information value for the investment decision. The single most important unknown is whether the project will be cancelled. The next most important variable is the utilisation of the system, including how quickly the system rolls out and whether some people will use it at all.

Those are the two things you care about. Will the project be cancelled? Will people actually use it? And, part B of that, how fast will the rate of uptake be? In any successful software product, the cost will be dwarfed by the value. If that's not true, you should take your IT budget and invest it in fixed-income assets, index-linked funds, or some other way to get a small but guaranteed return on your investment, and you should not build software, because it's pointless. If you are actually expecting a significant outcome, some multiple of your original investment, the cost is unimportant. What's important is those two things. So we shouldn't be spending a lot of time worrying about costs; we should be spending a lot more time worrying about the outcomes: whether people will use it, whether they'll pay for it, whether the thing is going to be cancelled. People don't like estimating those because they're much harder to estimate. However, it's entirely possible to do. What Dave Thomas was talking about earlier, making sure you can get something out in a few months: the whole point of that is to find out whether someone will actually use it or pay for it.

The other problem with that fuzzy front end, and the budgeting cycle that drives it, is that it tends to lead us to batch up work. The reason we batch up work is that the transaction cost of taking a project all the way through the lifecycle is so high that economically you want to batch up the work; that makes absolute sense. The problem is that you end up batching together a whole lot of stuff in a project, and that stuff has extremely high variance in the value of the things you're going to end up building. One of my favourite case studies on taking a big-batch, project-based paradigm and turning it into a flow-based paradigm is from Maersk Line.
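Before getting into the Maersk story, here's a minimal simulation sketch of Hubbard's point. All the figures (the cost range, the cancellation probability, the value multiplier) are invented for illustration, not taken from Hubbard's data; the shape is what matters: the spread of outcomes is dominated by cancellation and uptake, not by cost uncertainty.

```python
# A minimal, invented-numbers sketch of Hubbard's point: even "very
# uncertain" costs barely move the investment decision compared with
# cancellation risk and uptake.
import random

random.seed(1)

def simulate_project():
    cost = random.uniform(0.8e6, 1.2e6)        # uncertain cost: +/-20% around $1M
    cancelled = random.random() < 0.3          # 30% chance the project dies
    uptake = random.lognormvariate(0, 1.0)     # heavy-tailed adoption multiplier
    value = 0 if cancelled else 3e6 * uptake   # value dwarfs cost... when it lands
    return value - cost

outcomes = [simulate_project() for _ in range(100_000)]
print(f"mean profit: ${sum(outcomes) / len(outcomes):,.0f}")
print(f"worst case : ${min(outcomes):,.0f}")   # driven by cancellation, not cost
print(f"best case  : ${max(outcomes):,.0f}")   # driven by uptake, not cost
```

Vary the cost range and the answer barely moves; vary the cancellation rate or the uptake distribution and it swings wildly. That's the asymmetry Hubbard is describing.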
There's a great paper about that work called Black Swan Farming Using Cost of Delay, by Joshua Arnold and Özlem Yüce. You can go and get this paper; I highly recommend it. They did some simple analysis with the business people to work out how much each of those requirements was worth, and the way they did it was using something called cost of delay. Now, cost of delay sounds scary, but it's actually not. What you're working out with cost of delay is how much you're losing by not delivering that piece of functionality. These numbers are quite big, and the great thing about big numbers is that you don't care about precision. People spend a lot of time worrying about estimating things to the nearest dollar; when the variation is this high, you don't care to the nearest dollar, you care to the nearest, I don't know, $100,000. So you don't need to spend a lot of time estimating, because you don't need a lot of precision. The business people spent a few days in a room, went through thousands of requirements, and got very low-precision estimates, but the results were very interesting. For three of those requirements, not delivering them was costing Maersk on the order of $2.5 million per week. The rest of the requirements were a long tail. This distribution is what statisticians call a power law, and it's pretty typical of the requirements you're developing. Once you actually see that, it becomes really clear what you should do, which is: don't do any of this stuff, and do this stuff really, really fast. But that's hidden in a massive pile of crap you can't even see into, because inventory in software is invisible, which is why it's so hard to apply lean thinking, for example, to software delivery.

So this is a big problem with projects and the project-based metaphor. When you add all this together, the point I'm trying to make, and the reason I say scaling agile doesn't work, is that unless you fix these fundamental problems, it doesn't matter how much you agile or how much you scale that agile: you're still going to end up with the same outcome, which is that everything gets delivered at the same glacial pace, no one gets any value, we're focused on costs and cutting costs all the time, and we fall further and further behind until eventually our organisation dies.

So what should we do? Number one: accept that our requirements will be wrong. This is really hard; people put a lot of time and effort into gathering requirements. There's a guy called Dan North who gives another talk with the same title as this one but says slightly different things. I worked with Dan North for years at ThoughtWorks, where there was a prize for the worst career move of the year, which Dan won one year. He went into a client, and the client sat there with a huge pile of requirements. Dan said, "Nice to meet you all; we're doing agile, we won't be needing these," and put them in the trash. That year he won the award for career-limiting move of the year by doing that. But he was right: the point at which we know the least about whether what we're delivering will actually be valuable is at the beginning, when we have the requirements. The whole point of agile is that the requirements will be wrong. If you don't challenge the requirements, fundamentally you can't be doing agile. Unless you throw away most of those requirements, you're probably going to build something useless, and you'd better acknowledge that reality from the start and do something about it. We spend a lot of time focusing on cost because cost is easy to measure; but typically it's not the important thing to measure, and we should be focusing on value instead.
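To make the cost-of-delay idea concrete, here's a minimal sketch in the spirit of the Maersk analysis. The feature names, costs of delay, and durations are all invented, and CD3 (cost of delay divided by duration) is one common way of ranking the work so the most urgent, quickest-to-finish items go first.

```python
# A minimal sketch of cost-of-delay prioritisation. Low-precision numbers
# are fine: with a power-law distribution, the ranking barely changes.
features = [
    # (name, cost of delay in $/week, rough duration in weeks)
    ("instant booking confirmation", 2_500_000, 6),
    ("new reporting dashboard",         40_000, 8),
    ("settlement batch rewrite",       900_000, 4),
    ("UI colour refresh",                5_000, 2),
]

# CD3 = cost of delay / duration; highest first.
for name, cod, weeks in sorted(features, key=lambda f: f[1] / f[2], reverse=True):
    print(f"{name:32s} CoD ${cod:>9,}/wk  {weeks}wk  CD3={cod / weeks:>10,.0f}")
```

Notice how the long-tail items fall to the bottom immediately; the point of the exercise is to surface the few black swans, not to estimate anything precisely.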
There are some seats at the front here; you can move around the front, come past me, I won't mind, I'll just keep talking. There's a little bit of space at the front. Sorry if you're stuck at the back.

A big part of agile and the continuous methods is about creating feedback loops and nurturing those feedback loops. We have feedback loops: did the thing I built actually pass the test? Did it pass the acceptance test? Did it perform correctly? Is it secure? And then, most importantly, does it actually deliver value to customers in the way we expect?

And number four: the whole point of continuous delivery is to make it economic to work in small batches. The reason we work in big batches, in projects, is that the transaction cost of going through water-scrum-fall is extremely high, so it's not economic to work in small batches. If we can change the economics of software delivery, then we can work in small batches; it becomes economic to do that. And then we can take an experimental approach to product development, which is the goal of what I'm talking about.

So I'm now going to present a few tools to enable you to move in this direction. I talked about how I hate this word "requirements". Whose requirements are they? Are they the customers' requirements? No. Customers don't know what they want; customers know what they don't want once you've built it for them. They're typically the requirements of the highest-paid person, the HiPPO. And those people are always wrong, which is why they get paid so much; something is wrong with this picture. So "requirements" is a crappy word, and I encourage you not to use it. What we have instead are guesses, and it's better to have a more scientific-sounding word for guesses so you don't upset people. I like to use the word "hypothesis", because it sounds like you know what you're talking about.

So, who's heard of impact mapping? Anyone heard of impact mapping? Oh yes, that's great, fabulous: over half the people in the room. That's a big change from the last couple of years. Impact Mapping is a very small book by a guy called Gojko Adzic, and it describes an alternative to coming up with a bunch of requirements and shipping them downstream. It's a very simple tool, which is part of the beauty of it. You start with your organisational objective or your customer objective: what does your customer, or your organisation, actually want? In this case, the example is from financial services: you want to reduce the transaction cost by 10%. That's your measurable goal, and notice it's not "make it less", it's "reduce it by 10%": a specific, measurable goal, so you know when you've got there. The second level is the people who can either help you or stop you from achieving that goal. Here we've got a settlement team in Germany who do the back-end transaction processing; we've got a bunch of traders who are drinking too much coffee and fat-fingering trades; and we've got IT operations, who are miserable and grumpy. The third level is how those people can help or prevent those outcomes. And then at the end, you've got what they could actually do. Now, what typically happens, even in a kind of agile water-scrum-fall process, is that someone upstream of the developers does this in their head, takes one of these things, decides that's the thing we're going to do, and ships it downstream to the developers as a requirement.
So what you get is a bunch of single leaf nodes from these diagrams, and all the rest of the information, which is the actual problem-solving bit, the important bit, just disappears in a puff of smoke; it's never even written down, and no one else in the organisation even thinks about it. That's terrible. It's like taking a beautiful picture and JPEG-compressing it until it's a single pixel, then showing it to someone and saying, look at this beautiful picture, and they're like, that's just a dot, what are you even talking about?

The thing about this exercise, as with much of agile, is that it's not the picture that comes out at the end that you care about; it's the process of creating the picture. The documents you create in agile are typically not important in themselves; what matters is the shared understanding that the people who created the document came to. Which makes it really difficult if you're in a development shop in India and you've just got a bunch of requirements from someone in another country who did all of this, didn't tell you any of it, and just sent you the requirements. I'm sorry about that; it's really hard. Do always question the requirements. That will get you into trouble, and I am sorry, but you should do it anyway. One of the many things I love about India is that people here are so argumentative, and I encourage you to harness that; it's an amazing skill. Use it, fire it straight back at those people: they may not like you for it, but they will definitely be better for it. So if you get a leaf node, you should be asking: why am I doing this? What's the actual outcome we're looking for? See if you can find the people it's affecting and get them on Skype or something, and actually talk to them if at all possible.

By the way, this is not a problem that only happens in distributed development. Jeff Patton has a great story from when he was working as a developer in a financial services company. They had a requirement to build an instant messaging thing for a bunch of traders. He said, I really want to go and talk to them and find out what this is about. The development manager said, you can't talk to the traders, they're very busy. Where are they? They're on the third floor, but you can't bother them. So Jeff Patton says, I'm just going to the toilet, walks off, gets in the lift, goes down to the third floor, goes to the traders and says, hey, I'm from the development floor, please can I talk to you? And the traders are like, no one ever comes to talk to us, that's so nice. You'd be surprised how often people will actually want to talk to you about this stuff, if you can only find them; do make the effort, because it can make a huge difference. What he found when he got there, by the way, is that all those traders sat around one desk and could see each other, and they had no use whatsoever for any kind of instant messaging functionality, because they used the built-in instant messaging functionality called the voice. So this is important: people write things down that they think are a good idea, and they're not. That happens most of the time. Most of the time. So the point of this exercise is to get your customers and the developers, or the stakeholders, or some representatives of them, in a room and talk about the outcome. Agree on that.
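For reference, here's a minimal sketch of the shape of an impact map, using the financial-services example from the talk. The specific impacts and deliverables are invented beyond what's mentioned above; the point is that every leaf node stays connected to the actor, the impact, and the measurable goal that justify it.

```python
# A minimal sketch of an impact map: goal -> actors -> impacts -> deliverables.
impact_map = {
    "goal": "reduce transaction cost by 10%",   # measurable, not "make it less"
    "actors": {
        "settlement team (Germany)": {
            "impact": "fewer manual corrections in back-end processing",
            "deliverables": ["auto-reconciliation of failed trades"],
        },
        "traders": {
            "impact": "fewer fat-fingered orders",
            "deliverables": ["order confirmation step", "sanity-check limits"],
        },
        "IT operations": {
            "impact": "fewer failed overnight batch runs",
            "deliverables": ["self-service deployment of settlement jobs"],
        },
    },
}

# Shipping a leaf node on its own throws the context away; keep the whole path.
for actor, node in impact_map["actors"].items():
    for d in node["deliverables"]:
        print(f'{d}  ->  {node["impact"]}  ->  {actor}  ->  {impact_map["goal"]}')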
You probably won't be surprised, because you probably see this all the time: it's very hard to get people to actually agree on this bit. What's the measurable outcome we're trying to achieve? If you can even find that out. Who's seen stories in the standard format: as a persona, I want this thing, so that I get some value? And who's seen those stories without the "so that" at the end? Right, I see it all the time. People say, we don't need to write the "so that", everyone knows what the value is. No, they don't. And they don't because they're not thinking about it, because people don't want to think about problems. People want to think about solutions, and they focus on the solutions and on writing the beautiful stories, because that's easy. I can write stories all day long; it's super fun. I bet you all can too. Sit there and write stories: wouldn't it be cool if we had this feature? I'd really like to build this. And then you've got loads of stories, and you build it all, and then maybe you're not even around long enough to see whether the product succeeds or fails in the market.

This is why I'm here, by the way. The reason I'm here is that in 2005-2006 I was at ThoughtWorks, working on a product for a year and a half out here in Bangalore; I lived in Murugeshpalya. We built this thing, and then I went off to another office and started working on another project. Later I came back, and I met the project lead for that product and said, so, that thing we were working on, what happened to it? And he said, oh yeah, that really bombed in the market, no one bought it. And I was like, shit. We had delivered it on time, to budget, with all the requirements met. We had succeeded on all the project metrics, in building a product that no one wanted, and that failed. That's why I'm here; standing here right now telling you this is my PTSD therapy from that experience.

So: get all those people together in a room, actually get them to talk through the problem, and then ask, what do we think is going to give us this objective with the least possible work? How can we do the smallest possible thing and still achieve the objective?
Then you pick one, and you'll probably be wrong the first time, because you'll get somewhere into it and be like, oh, this is way more work than we thought; or you'll run a test and find out it's not going to achieve the outcome. So you stop doing that and do something else instead. But having something where everyone can see: here's the problem, here are the stakeholders, here's how they can help or block that objective, and here's the what; having everyone be able to see that is hugely important, because then you know the objective you're working towards, and you can harness the innovation capability of everyone in the team to focus on achieving the outcome. Not: if I complete this task, if I develop this story, and the story doesn't achieve the objective, I fail. That's normally completely hidden from you; you might even get into trouble for asking the question. That's hugely problematic, and no amount of scaling will solve it, because it's the wrong problem.

There's a really nice template that replaces the "as a, I want, so that" one. It's by a guy called Jeff Gothelf, who wrote a really good book called Lean UX. His template goes: we believe that building this feature, for these people, will achieve this outcome; we'll know we're successful when we see this signal from the market. So: what experiment are you going to run to measure whether building this thing will in fact achieve this outcome, without going and building it? The slowest way to test a product hypothesis is to go out and actually build the product. You want to avoid that wherever possible and find a shortcut: get that information without actually building the thing. That is most of what Lean Startup is about: clever hacks to gather data without actually building the thing. The more you can do that, the better, the more time you will save, and the less time and money you will waste building things that people don't actually want, which is what happens most of the time.

For me, UX, and incorporating UX into agile, has been one of the biggest things to come out of the last few years, in particular the focus on experiments: how can we run experiments to validate whether our requirements or stories will actually deliver the expected outcome?
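Here's a minimal sketch of Gothelf's template captured as a structured record, so a backlog item carries its assumption and its cheapest test around with it. The field values are invented examples, not from the talk.

```python
# A minimal sketch: the Lean UX hypothesis template as a structured record.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    feature: str      # we believe that building this...
    audience: str     # ...for these people...
    outcome: str      # ...will achieve this outcome.
    signal: str       # we'll know we're successful when we see this signal
    experiment: str   # the cheapest test that could falsify the whole thing

h = Hypothesis(
    feature="one-click reorder",
    audience="repeat grocery customers",
    outcome="higher repeat-purchase rate",
    signal="+5% repeat purchases within 30 days for the test group",
    experiment="fake-door button measuring clicks, before building anything",
)

print(f"We believe that building {h.feature} for {h.audience} "
      f"will achieve {h.outcome}. We'll know we're successful "
      f"when we see {h.signal}. First test: {h.experiment}.")
```

The design point is simply that the experiment and the kill criterion live next to the feature, instead of disappearing upstream the way the "so that" clause usually does.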
This is a quadrant diagram created by Janice Fraser, who's a really amazing product designer. She divides up user research based on whether it's quantitative or qualitative, and on whether it's evaluative or generative. These are terms from design thinking: generative activities are things like brainstorming, where you come up with a bunch of different ideas; evaluative thinking is where you take a bunch of ideas and decide which ones are actually good. You may know people who are really good at generative and really terrible at evaluative: those are the people who come to your meetings where you've decided we're going to do this, and they're like, but what if we did this, or this? And you're like, please shut up, because we've only got an hour and I want to go and have my lunch. You'll notice that people tend to be good at one or the other of these things, and you have to have both: you have to be able to create a bunch of ideas and then assess them to find out which are actually good. User research can help you with all of this, in both a quantitative and a qualitative way.

On the generative side, when you're coming up with ideas, surveys are a great quantitative tool. The qualitative stuff mainly comes from anthropology, actually: things like contextual enquiry, which is basically observing people solving problems in their natural environment, as far as possible without them knowing you're there. Who's built financial services software in this room? Okay. Who's gone to a bank and watched the teller create a new account? You see them go into one system and do something, then write something down on a piece of paper, and you're like, I really hope they burn that when they're done, because it's got all my personal details on it, and that they don't take it home or give it to someone else. Then they put it into another system, and they take something from that other system and go back to the first system. Watching this, you can see how much of what we do is waste. And this goes back to what Dave talked about earlier: the most important thing we can do is go and talk to the customers and the people who use our software, and get rid of most of our requirements, which are terrible.

On the evaluative, qualitative side, you've got things like hallway usability testing: coming up with wireframes and mock-ups, getting your users to try them out, and watching them as they grapple with it. That can be really sobering. You're like, this is a beautiful UI, it is so beautiful, the CSS is delightful and it works on all browsers; and then you watch someone use it, and you're like, the submit button is right there. It's right there. Dude, it's right there, it's blue, click on the submit button. And five minutes later they're still trying to work it out. That's your fault. It's not their fault for being dumb; it's your fault for building a shitty UI. So go and watch that stuff, and feel sad, and then do something about it.

And then on the quantitative, evaluative side, the gold standard is A/B testing. I wasn't here yesterday, but Josh apparently did a fabulous talk on this. The great thing about A/B testing is that it gives you a cause-and-effect relationship between feature and business outcome. Most of this other stuff is useful, but what it gives you is correlative; with A/B testing you get cause and effect: this feature caused this outcome. That is amazing. It's like crack for product people; I love it.

One of the slides I like to show about continuous delivery is this one: Amazon's deployment stats from May 2011. That's six years ago now; today they're over an order of magnitude faster. They were delivering changes to production on average every 11.6 seconds. This is just production: they were making changes to their production environment on average every 11.6 seconds, and achieving up to 1,079 deployments in a single hour. On average 10,000 boxes were receiving those deployments, and up to 30,000. If you haven't seen this before, it should blow your mind; it certainly blew mine the first time I saw it. Now, this is aggregated across all of their services. They have thousands of services, and not all of them are changing every 11.6 seconds, let's be very clear about that; some of those services are changing very quickly, some of them aren't.
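Going back to the A/B-testing point for a moment: here's a minimal sketch of how you'd check whether variant B actually beat variant A, using a standard two-proportion z-test. This is generic statistics, not Amazon's tooling, and the traffic and conversion numbers are invented.

```python
# A minimal two-proportion z-test for an A/B experiment (invented numbers).
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)             # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))    # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))      # one-sided: is B better than A?
    return p_b - p_a, p_value

lift, p_value = z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"lift: {lift:+.2%}, one-sided p-value: {p_value:.4f}")
```

With these made-up numbers the lift is +0.8 percentage points with a p-value around 0.005, which is the kind of cause-and-effect evidence none of the correlative methods can give you.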
And crucially, as Dave Thomas said, all those services can be independently deployed. You don't need to deploy the whole world in one go in an orchestrated deployment. People sometimes ask me, how do you build an orchestrated deployment system? And my answer is: don't do that. It's a symptom that something is very wrong with your architecture. Fix it so you don't have to do that, so you can deploy those things independently. That's what Amazon does: they can change those things independently. This took them a substantial amount of investment. They had to spend four years completely rebuilding their system to allow them to do it, during which time they basically told the investors to get stuffed, because there would be no new features. That was a bold move by Amazon; it was very expensive and it took a lot of time and effort.

Amongst other things, what it allowed them to do is run experiments in production. There's a guy called Ronny Kohavi, who led the experimentation team at Amazon and then went on to lead the experimentation team at Microsoft. Very smart guy, a Stanford professor, with tons of data from A/B tests, and his data tells him this: evaluating well-designed and executed experiments intended to improve a key metric, only about one third were successful at actually improving that metric. What that tells you is that unless you're running experiments to test your ideas and throwing away the ideas that turn out to be bad, you are wasting about two thirds of your time. You could be spending those days drinking chai and eating dosa in a cafe with a book and delivering the same value to your customers, if only you knew which two thirds of the features you're building deliver zero or negative value. And not only does that kill you in terms of the opportunity cost of not building the valuable features; you also have to maintain those features forever, and they slow down the rate at which you can add new features, because they add complexity to your system. So they kill you in three really miserable and horrible ways.

So I want to talk about a story that some of you have probably heard me tell before, but I'm going to tell it again because it's great: HP LaserJet firmware. I'm going to talk about this as an example of how to do agile at scale by being super lightweight. For those who don't know the story, very quickly: in 2008 this was a 400-person distributed team, so not huge, but not tiny, spread across Porto Alegre in Brazil; Boise, Idaho; and I think either Bangalore or Hyderabad, one or the other. They had this huge problem, which is that they were going really, really slowly; it took them years to get anything done. They tried all the usual things: outsourcing, insourcing, hiring people, firing people. Nothing worked. In the end they were so desperate they asked the engineering leadership for help, which is how you know things are really bad in an organisation. What they did was pretty interesting. The engineering director looked at how they were spending their money, not very precisely, at a low level of precision, and what they found was this: they were spending about 10% of their costs on code integration; 20% on detailed planning; 25% on porting code between different branches, because they had different branches for different ranges of devices, so every time you fixed a bug on one range that was also present on another, you had to port the fix across the branches; and 25% of their costs on product support.
You're spending 25% of your engineering budget on product support: what does that tell you? Quality problem; it's a proxy variable for quality. And they were spending 15% of their costs on manual testing. Subtract all that from 100% and you discover that 5% of the time was spent actually building features. Now, who's heard an engineering manager say, when you ask them, we'd love to spend some time on automation or refactoring, but we can't, because we haven't got enough time? Who's heard that? The reason you haven't got enough time is all the bullshit waste in your process, which means you haven't got enough time to do anything. And if you don't fix that, this percentage will get smaller and smaller and smaller until you can't do anything at all. So you have to fix this problem. They also looked at cycle times: it was taking a week to get code into trunk, they were getting one or two good builds a day on trunk, and it took six weeks to do a full manual regression test, which for a team of that size is pretty typical.

The way they fixed it, and I'm not going to go into detail now, I'll talk about it more in my keynote on Friday, was basically to rebuild the software from scratch, so that they had a single firmware build that ran on all their different ranges of devices, and which would switch features on or off at boot time based on a profile. They used feature toggles, basically, to decide which features would be on or off based on the printer model, driven by an XML file containing the profile of the printer. By building something that didn't require branches for all the different types of printers and scanners and whatever, they could build a single product on trunk and use continuous integration. And they invested a substantial amount of money in automation, so that every time someone checked in, it ran a set of tests taking about two hours on each commit; then it batched loads of commits up and ran another two hours' worth of tests; and if that passed, it ran another two hours of tests on an emulator. They actually had emulators on logic boards in racks running automated tests, and this is on printers, so they were sending signals to the logic boards and waiting for signals to come back, inside an automated test framework. So don't tell me it's too hard to write automated tests for your website. I don't have it today, but I sometimes carry around a copy of the book about this to spank people with, so they stop whining about how difficult test automation is. They also invested a substantial amount of effort in building a simulator, so you could run the firmware tests on your development workstation, which meant developers could reproduce test failures locally and fix them quickly. Then they had a full regression suite that ran overnight. Sometimes people say they're doing continuous integration when what they're doing is an overnight build; these people were doing an overnight build, but it was a complete regression suite: if it passes, the software is releasable, and if it doesn't pass, they fix it straight away. By doing this, and this is 400 engineers working on a 10-million-line codebase, they were getting about 100,000 lines of code changed per day on trunk, 10 to 15 good builds per day, and about 100 commits per day producing those builds.
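Going back to the boot-time profiles for a second, here's a minimal sketch of that feature-toggle idea: one firmware image, with features switched on or off from a per-model profile. The XML format and the feature names are invented for illustration; HP's actual mechanism was certainly more involved.

```python
# A minimal sketch of boot-time feature toggles driven by an XML profile,
# so one trunk build serves every device model. Format and names invented.
import xml.etree.ElementTree as ET

PROFILE_XML = """
<printer model="LaserJet-M601">
  <feature name="duplex"  enabled="true"/>
  <feature name="stapler" enabled="false"/>
  <feature name="colour"  enabled="false"/>
  <feature name="fax"     enabled="true"/>
</printer>
"""

def load_toggles(xml_text):
    root = ET.fromstring(xml_text)
    return {f.get("name"): f.get("enabled") == "true"
            for f in root.findall("feature")}

def boot(toggles):
    # Same binary everywhere; the profile decides what this model exposes.
    for feature, enabled in toggles.items():
        print(f"{feature}: {'enabled' if enabled else 'disabled'}")

boot(load_toggles(PROFILE_XML))
```

The design point is that the variation between models lives in data, not in branches, which is what made single-trunk continuous integration possible.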
They completely removed that six-week regression phase; it just went away, because they made sure the software was always in a releasable state. By investing substantially in test automation and continuous integration, they completely changed the economics of their software delivery process. Much less time spent on integration; I'll talk about planning in a minute; much less time porting code; and product support went down from 25% of costs to 10%. What does that tell you? Quality went up. Most testing was automated. What they actually achieved was an 8x improvement in the amount of money they were spending on actually building features. Their goal setting out was a 10x improvement, and they only achieved 8x, but that's still pretty good. And this is the slide to send your CFO when they want to know about the return on investment of test automation and continuous integration; it speaks for itself.

I want to talk a bit about the planning activities, because they're really interesting. They used to spend 20% of their budget on planning. What would basically happen is that the product people would come up with a big list of features they wanted done in the next year, and the engineers would look at it and laugh, and say, pick two. Then the product people would get really mad and make them do really detailed estimates to show exactly why they could only do two and not more. Then they'd get about halfway through the plan and realise they couldn't, in fact, even do two, and the product people would get even madder and spend even more time re-estimating everything to prove why the engineers were being so slow. That was 20% of their costs. So part of the agreement was: we're going to have a lightweight planning process, one that fits on a single piece of paper. This is it. These are all the components, all the different programs, the different feature sets they were going to build, and they just estimated them in firmware-engineering-months. This is a pattern, by the way, and the pattern is reducing precision: you don't need high precision for your estimate, so you can just estimate in engineering-months.

So let's start at the top and actually prioritise. This is another thing that's really difficult to get product people to do: stack-rank the priorities for features. People love stack-ranking engineers, but they hate stack-ranking priorities, and the world would be a much better place if they swapped those things: stop stack-ranking engineers, because individual productivity is not a thing that exists, and start stack-ranking feature priorities instead. So they got them to actually stack-rank the priorities and initiatives, and then they just went down the list: this one's going to take this amount of time, and so on, until the time starts running out down here, and it's like, well, you're not going to get this done, because there isn't enough time. But there's the plan for the entire program of work, 400 engineers, and it fits on a single page. You can do it in a day, and that's all you need.
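Here's a minimal sketch of that one-page planning pattern: stack-ranked initiatives, low-precision estimates in engineering-months, and a capacity cut line. The initiative names, the estimates, and the capacity figure are all invented.

```python
# A minimal sketch of the single-page plan: walk the stack-ranked list
# until the engineering-months run out; everything below the line is cut.
CAPACITY = 24  # firmware-engineering-months available this planning horizon

ranked_initiatives = [
    # stack-ranked by the product people, highest priority first
    ("common build for all device ranges", 10),
    ("boot-time feature profiles",          6),
    ("emulator test farm",                  5),
    ("new copy/scan UI",                    8),
    ("cloud print integration",             7),
]

remaining, cut = CAPACITY, False
for name, months in ranked_initiatives:
    if not cut and months <= remaining:
        remaining -= months
        print(f"  DO   {name} ({months} eng-months)")
    else:
        cut = True
        print(f"  CUT  {name} ({months} eng-months): below the line")
```

The whole argument happens once, at the ranking stage, instead of being re-litigated through months of detailed re-estimation.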
Then I want to talk about how they actually implemented continuous integration. At the time they were doing this, I hadn't written the continuous delivery book yet, but they independently invented continuous delivery. Because, guess what: continuous delivery isn't really a thing. It's just what happens when you focus relentlessly on improving your process. When people say they want to get better at continuous delivery, I say: don't. Just find out what's holding you back and attack those things, and then keep doing that forever. That's all you really need to do. But it's hard.

It turns out this is written about in a book called Toyota Kata, by a guy called Mike Rother. Mike Rother was an academic who would go around Toyota factories studying what they did, and then write books and papers about it, and then people would hire him to tell them what Toyota did so they could copy it. And they would copy the things, but the things wouldn't work. A classic example is the andon cord. In a Toyota factory, if there's a problem with what you're doing at any time and you can't get it done before the car moves down the production line, you can pull a cord; it stops the production line, you fix the problem, and you build quality in: you don't let bad product go downstream to be fixed later. They tried that in GM factories, and what happened is that no one ever pulled the andon cord, because the managers were incentivised by how many cars came off the end of the production line; if you shut the line down, the managers didn't get their bonus, and they were super mad at you. So people would copy what Toyota did, but it wouldn't work, because they weren't changing the system of work within which it operated. You can't just copy what other people do, implement it, and expect to get the same results.

This is why methodologies suck in software. A lot of software methodologies are post-hoc rationalisations of something that worked for one particular team: someone goes, well, that was really fucking awesome, I'm writing a book about that, and a methodology is born, and then everyone copies it. So don't do that. It's not going to work for your team in your situation; you won't get the same results, because these are complex systems. You're going to have to do the work and find out how to fix your own problems in your own situation. If you do that, you don't need to worry about anything else. And Mike Rother wrote a book about exactly this, because he realised copying wasn't working: let's find out how Toyota problem-solves instead. It turns out that the ability of the people in your organisation to identify and solve problems is actually your competitive advantage. That's what you need to build, and that's what HP did.

First, you understand the directional challenge: for HP LaserJet firmware, the challenge was a 10x productivity improvement; that's what they wanted to achieve. Then you grasp the current condition: for HP LaserJet firmware, you've just seen it; that was the current condition. So you know where you want to go long term; how do you work out where to go in the medium term? This is the other bit that's really important: taking long-term goals and breaking them down into short-range intermediate goals that you believe may or may not get you towards the final goal. You establish a target condition, which you typically want to set about a month out, and you go through this process every month: where are we going, how are we doing right now, where do we want to be in one month's time? And then you don't plan how you're going to get there, because you're innovating, so you don't know. Instead, the people on your teams work out how to achieve it by running experiments on a daily basis. So what does this look like? It's racing towards the target condition.
Every day, your team comes in and asks: what's the target condition we're trying to achieve this month? What's the actual condition right now? What's stopping us from achieving the target condition? What experiments are we going to run to find out if we can solve the problem this way? How soon can we go and learn from that? If everyone does that every day, in a disciplined, scientific way, it works. And the cool thing, and this was the massive aha moment I had, is that this is not just for process improvement: you can use the same approach for product development.

So here is the HP LaserJet firmware team's target condition for month 30. They had milestones every month, and this is their entire list of target conditions for that month. I showed you their long-range planning process, which fits on a single page; this is their program plan for a whole month, for those 400 people, and it also fits on a single page. It doesn't say, here's what everyone's going to do; it says, here are the outcomes we're going to achieve in the next month. Outcomes, again stack-ranked: rank 0, quality threshold; priority 1 issues open less than one week; level 2 test failures responded to within 24 hours; bit release final; priority 1 change requests fixed; reliability error rate at release criteria. So you've got a combination of product and process outcomes that are measurable and have numbers attached, so you know when you're done with them.

And again, notice what you're not doing, which unfortunately is how a lot of the scaled frameworks end up being implemented. They don't tell you to do it this way, and I don't want to slag people off; I'm not against the scaled frameworks, I just think a lot of the time they're implemented badly. You have this thing where it's: here's the epic, we're going to split it into features, then split the features into stories, hand all the stories out to all the developers, and at the end of the month we'll come back and see if we can make it all work. And, oh my god, it's a massive pile of shame; it doesn't work. And guess what: everyone did their job. The problem wasn't that people didn't do their jobs; the problem was that it was the wrong thing to do in the first place, because you can't plan one month of the evolution of a complex system in your head, however many heads you have. It can't be done; that's what a complex system means. So any time you see that, everyone's like, we did our bit, they didn't do their bit; normally that's not the case. Everyone did their bit, but you don't get any cookies for doing your stories; you get cookies for achieving the outcome. The cool thing about this approach is that you list the outcomes, not the tasks. The teams have to work together to achieve the outcomes, and no one gets anything unless the outcomes are achieved, and that encourages people to collaborate on the outcomes rather than just ticking off their individual goals. That is one of the secrets of actually achieving this stuff at any level of scale: don't reward people for doing their individual tasks; reward them for working together to achieve the outcome.
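Here's a minimal sketch of target conditions expressed as measurable checks rather than task lists, loosely modelled on the month-30 examples above. The current-state numbers are invented.

```python
# A minimal sketch: monthly target conditions as measurable checks.
# How the team gets there is deliberately not specified.
target_conditions = [
    ("priority 1 issues open < 1 week",
        lambda m: m["p1_max_open_days"] < 7),
    ("level 2 test failures responded to within 24h",
        lambda m: m["l2_response_hours"] <= 24),
    ("reliability error rate at release criteria",
        lambda m: m["errors_per_million"] <= m["release_criteria_epm"]),
]

current = {"p1_max_open_days": 5, "l2_response_hours": 30,
           "errors_per_million": 12, "release_criteria_epm": 10}

for outcome, met in target_conditions:
    print(f"{'MET    ' if met(current) else 'NOT MET'}  {outcome}")
```

Run daily against real measurements, this tells the team where the gap to the target condition is; the experiments to close it are theirs to invent.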
So I'm going to end with a story from Amazon. There's a guy called Greg Linden who wrote a blog post about this in 2006; the story itself is from before that, when he was working on the recommendations engine team. If you go to Amazon and buy a bunch of stuff, when you get to the checkout it gives you recommendations. If you go to a grocery store, there's chocolate at the checkout aisle; you have kids, I have kids, and you go there and the kids are like, I want chocolate, and you're like, no chocolate for you, it's bad for your health; and you want chocolate too, and maybe you feel like a bad parent. Amazon wanted to do this, but in a personalised way: give you personalised recommendations based on what other people who had the same things in their cart went on to buy. So Greg Linden knocks up a prototype, goes and shows the VP of product, and the VP says: that is a terrible idea, you shouldn't do this, people will abandon their carts; they will not like your idea. So Greg, a little bit sad, goes back to his workstation, brushes up his prototype, pushes it into production, gathers a bunch of data from production, and demonstrates that the feature actually produces a substantial increase in conversion: a big win for the business. He goes back to the VP of product and shows them the data, and the VP does not, in fact, fire him, but says: well, I guess you're right; we'd better actually prioritise this and get it done for real. And he ends the blog post by saying this: I think building this culture is the key to innovation. Creativity must flow from everywhere. Whether you're a summer intern or the CTO, any good idea must be able to seek an objective test, preferably a test that exposes the idea to real customers. Everyone must be able to experiment, learn, and iterate.

So, just to recap. We want to accept that our requirements are wrong. We want to focus on the value we're delivering, not the cost. We want to create feedback loops so we can validate our assumptions as quickly as possible, without actually building the thing. We want to change our process so we're actually able to run experiments. And once we have all that in place, we can take an experimental approach to product development, one that harnesses the creativity of everyone in our organisation. Thank you very much.

[In response to audience questions:] Yeah, I mean, it happens all the time. You're describing the normal way of doing things, and okay, that's fine; my main point is that it's not the requirements that are important, it's the outcomes and the assumptions. So anything that focuses people on problem-solving around achieving the outcomes and testing the assumptions, rather than on the solution, is a step forward. No, I don't think so: I don't think we should be focusing on requirements, I think we should be focusing on the problems we're trying to solve. Requirements tend to be about solutions, and I think we focus too much on solutions and not enough on problems. Thanks very much.