In India, when it's bad news, we say it very softly. Let's think. All right, we're going to get started. We are about five minutes late. But the good news is, Jez is going to go for three hours. And we had to extend the hotel booking so we can continue all the way. So this is going to be a three-hour nonstop keynote. You only get it at Agile India, nowhere else. You need to have dinner after this again, right?

All right, I don't think Jez needs an introduction. He's a very familiar face at this conference. Jez has been coming to Agile India for probably longer than a lot of other people at this conference. 2006 was my first time, I think. 2006. PESIT. PESIT College, yeah. That's where we started. 2005, and then 2006 you came in. Obviously, Jez has written some wonderful books. You've heard from other keynote speakers, including Ron, about how influential some of the work Jez has done has been in the banks and other places. So over to you, Jez. No formalities, I guess.

Thanks, Jude. And I just want to say, I mean, I've had an amazing time here. I hope you all have had an incredible time here. I just want to say thanks very much to Naresh and his team. Let's give him a huge round of applause for such an amazing achievement. I mean, Naresh has been a huge supporter of Agile and he's put a huge amount of personal effort into this conference over the years. So thank you very much, Naresh.

So I've been talking about continuous delivery for seven years now. And if I had a dollar for every time I'd heard someone say, continuous delivery sounds great, but it won't work here, I would probably have at least a bus fare in London or something. Everyone is a special snowflake. This much is true, but that doesn't mean that you can't get better. And fundamentally, continuous delivery is about getting better. So if you're saying you can't do that, then that's probably wrong. So that's the TL;DR. You can leave now if you don't want to listen to the rest of it. But I'm going to keep talking anyway.

So I'm going to first introduce continuous delivery. What is it? Fundamentally, it's about being able to get changes to your system, whether that's features, bug fixes, configuration changes, or experiments, into production, or if you're building mobile apps, into the hands of users, safely and quickly in a sustainable way. And it comes originally from a desire to not be working evenings and weekends. Who here has to work evenings and weekends whenever a new release goes out? Okay, I'm very sorry for you. That's a substantial number of you. That is a sign that something is wrong. That's not something you should accept. It's not something that should be the case. It's a sign that something is wrong and needs to be fixed. And the Continuous Delivery book came from my experiences being in that position, having to work evenings and weekends, and being on a team of people who were like, no, this is unacceptable, and we're not going to do this anymore. And between that team and a whole bunch of other people in the community who've been adding to the canon of techniques over the years, there is no need for that anymore. But it's hard to do.

I think it's important to distinguish between two different kinds of activities. A lot of the lessons from Lean and Agile come from Lean manufacturing and other domains. Software is special. And software is special because we're mixing two things. We're mixing design and delivery at the same time.
Software engineers, when we're building stuff, we're making stuff up at the same time. We're not just following instructions. There's design activity involved in development. And those two things have very different characteristics.

When we're doing product design, we're building new products and services, and that's highly variable. Designing features, designing products is a highly variable activity. Estimates are usually highly uncertain and the outcomes are highly variable. A lot of features deliver very little value. Delivery, on the other hand, is something that should be predictable. Delivery, going from check-in to release, is something we want to make very predictable. We want the fast flow of changes from version control to production. We want to be doing testing of all different kinds continuously. Cycle time should be well known and predictable, and we want low variability in the outcomes. So, two very different characteristics. And this is why you can't apply things like Lean Six Sigma to software. You can't just throw Lean Six Sigma at it, because yes, for delivery we want variability to be squashed, but for design activities we don't want that. And really what continuous delivery is about is the right-hand side, making the delivery part very predictable. And what that does is it enables us to do design much more effectively. It allows us to run experiments. It allows us to get fast feedback on our hypotheses. Getting the right-hand side of this predictable enables us to be much more effective at design activity.

The forebears of continuous delivery: I mean, I didn't invent it. I just built on stuff that was already there. Dave Farley and the rest of us who were involved in creating continuous delivery, we just stole ideas off other people and repackaged them, basically. And the ideas that we stole came from extreme programming. A lot of continuous delivery is just extreme programming, and stuff from the Poppendiecks, who were really the people who took the ideas from Lean manufacturing and applied them to software development. And I remember very distinctly reading the low-price edition of Lean Software Development in Bangalore in 2005 at ThoughtWorks, when I was working out of the Bangalore office, and a lot of that went into continuous delivery. And then finally, UNIX.

So UNIX might seem a strange reference, but actually the UNIX philosophy is having a resurgence right now. These are some points from 1978 from Doug McIlroy, who was head of the Bell Labs Computing Sciences Research Center and also the inventor of the UNIX pipe. And he's talking about the UNIX way of doing things. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information, avoid stringently columnar or binary input formats, don't insist on interactive input. And there, in points one and two, we can see the same ideas that are in microservices right now. So this is basically microservices, but in UNIX in 1978. Point three: design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them. So here you've got some of the same ideas as continuous delivery and agile: deliver early, deliver often.
Four: use tools in preference to unskilled help to lighten the programming task, even if you have to detour to build the tools, and expect to throw some of them out after you've finished using them. So this idea that as developers we're creators and users of tools, rather than relying on extensive manual testing, for example, or extensive manual deployment activities: these ideas are instrumental to continuous delivery and to DevOps, the idea that we're applying this tool-based approach to IT operations. I mean, that's at the core of DevOps.

So a lot of these ideas that we're seeing today in DevOps and test automation, agile, continuous delivery, microservices: these are not new ideas. These are ideas from the seventies and eighties that have been around for 40 years, right? And this is the problem with computer science and computers in general. Every year new waves of people graduate from university, and we're really bad in the software industry at understanding our history and knowing our history. And we make the same mistakes over and over again because we don't pay attention to our history. If there's one thing I've learned over the years, it's pay attention to history, to ideas that have actually been around for a long time and to mistakes that we've made in the past, and don't keep repeating the same mistakes over and over again. So that's my little historical interlude. Back to the topic at hand.

The main reasons I hear people say they can't do continuous delivery are these four. Number one, we're regulated. Number two, we're not building websites. Number three, there's too much legacy stuff. And number four, and I've actually heard this in real life, our people are too stupid. That one makes me mad, because I don't believe that ability is innate. I believe that all of us are capable of learning new things and excelling in whatever we are passionate about. And I'm gonna talk about all of these things, one after the other. But first, I mean, these are the reasons people give, but they're not the actual reasons. There are two actual reasons why people can't do continuous delivery. Number one, because their culture sucks. Or number two, because their architecture sucks. Those are the real reasons why people can't do continuous delivery, but they're not the stated reasons. So, stated reasons and actual reasons. I'm gonna address these one after the other.

Part the first: we're regulated. So one slide that I love to show over and over again is this one. Who's seen this slide before? Okay, a bunch of you, that's cool. So this is from 2011. This is six years old, and Amazon are at least one order of magnitude faster than this right now. But in 2011, in their production environments, they were deploying to production on average every 11.6 seconds. And they were doing up to 1,079 deployments per hour. On average, 10,000 boxes were receiving those deployments, up to 30,000 boxes. Now, this is aggregated across all of Amazon's services. So Amazon has thousands of services in production. These are aggregate numbers, but still pretty mind-blowing. When I first saw this, it blew my mind. If you haven't seen this before, it's probably gonna blow your mind. And the thing I think is important to point out is that Amazon is heavily regulated. They process a large number of credit card transactions, which means they have to follow PCI DSS, which is really strict regulation. They're also a publicly traded company, so they have to follow Sarbanes-Oxley. So, heavily regulated, but they're still able to achieve this.
I also worked in the US government for the last year, at an organization called 18F, which is part of the US federal government's General Services Administration. And in the US federal government, it typically takes between eight and 14 months from dev complete to go-live. And that's mainly implementing and testing security controls. So for a moderate-impact system in the US government, there are over 300 information security controls that have to be implemented and verified before you can go live. And it takes about eight to 14 months to go through that process of validation.

So we wanted to do something to speed that up. There were a number of different things we did, but one of the most impactful things we did, I think, is we built a platform as a service. It's called cloud.gov, and it's completely open source. It uses the open source version of Pivotal's Cloud Foundry, and you can actually go to github.com/18F, and one of the repositories there is the bootstrap repository. You can clone it and start up your own cloud.gov in Amazon. cloud.gov runs on Amazon Web Services, and you can create your own instance. It's all completely open source.

And the idea is this. You've got all these controls you have to implement. Most of them can be implemented at the platform level, and in fact, ideally, a lot of them are implemented at the infrastructure layer. So there are controls around making sure that you have fire extinguishers and physical access control and backup power; those should be implemented at the infrastructure layer. And so in the end, after we'd gone through the process of getting cloud.gov certified, which took about eight months, we're now in a situation where, out of the 325 security controls that have to be implemented for a moderate-impact system, 269 of those are handled at the platform layer. So when you're building a new system, if it's hosted on cloud.gov, you only have to worry about 15 controls, and maybe some of the 41 shared controls. So using cloud.gov, you can get a system into production and live very, very quickly; we're aiming for about two weeks to go through that security process to validate those 15 or so controls that you have to implement. So this is a way that you can actually do continuous delivery and DevOps even in a highly regulated environment.

And cloud.gov itself uses continuous delivery. There's a deployment pipeline for making any changes to the underlying cloud.gov platform, and changes go into the platform multiple times a day. And a lot of these ideas from continuous delivery actually help with compliance.

So one of the key patterns in continuous delivery is the deployment pipeline. The deployment pipeline is this idea that every time we make a change to any part of our system, we should be able to completely reproduce our system from scratch purely from information held in version control: scripts, configuration settings, code, all that stuff should be in version control. We should be able to completely replicate our production environment from scratch just using that information. Any time you make a change, that's going to trigger unit tests that validate that nothing really horrible is broken. If that passes, we're going to trigger acceptance tests, which stand the system up in a production-like environment and run a bunch of more comprehensive tests. If any of those things fail, we fix them straight away.
If those pass, we have a build that can be deployed self-service to a testing environment, an integrated environment, and ultimately we should be able to do push-button deployments to staging environments and production. The great thing about this is you can see every change that's made to the system, which environments that change has been to, what the results of the tests were, who authorized those deployments, what was run as part of those deployments. All the information you need in terms of auditing is available to you for free. That's much better than a piece of paper that says you did a bunch of things that you may or may not have done.

So a lot of what we see in compliance and regulation is actually what I like to call risk management theater: processes that are designed to give the appearance of having effectively managed risk while actually making things worse. A lot of what we see in regulated environments is just people signing bits of paper that say they did something, so that when something goes wrong, they can say, look, I did my job, this piece of paper says so, so it wasn't my fault. That's actually not what we care about. What we care about is making sure we can move fast and achieve higher stability.

I've been investigating IT outcomes in all kinds of different companies for the last four years as part of the State of DevOps report. We created a way to measure IT performance, software delivery performance, in terms of these four metrics: two throughput metrics, lead time for changes and release frequency, and two stability metrics, time to restore service and change fail rate. What we found is that the high performers did better on both of these things. We're used to thinking about throughput and stability as a zero-sum game: if you go faster, you're gonna break things. And what our data and research shows is that that is not true. The high performers achieve higher throughput and higher stability. And that is true across all different domains, including financial services and government. It's also true in all different sizes of companies, from very small startups right the way through to enormous mega-companies. So there's no reason why you can't achieve this. We can do it in the US federal government. No one has any excuses, which is part of the reason we did it: to show that it could be done. So regulation is hard to deal with and it requires a great deal of work, but there's no reason why you cannot apply these principles in regulated domains, and in fact, they will often really help you out.

Part the second: we're not building websites. One of my favorite case studies, which you've probably heard me talk about before, but I don't care, is HP LaserJet firmware. So I'm just gonna briefly review this. Who has not heard me talk about this before? Anyone? Okay, a few of you. That's a good enough excuse for me to talk about it again. So, 2008, HP LaserJet firmware: the team was going really slowly and they were the critical path on all new releases. This is very bad, but it's very typical of a lot of companies. I once did some work for an airline that was implementing premium economy seating. These are the seats that are more widely spaced. It was gonna take them longer to change the booking system to book premium economy tickets than it was gonna take them to change the seats in the aircraft. That's a sign that things are very wrong.
And when software is actually the critical path on your business changing and offering new products, that's a problem. And this is what was happening with HP LaserJet firmware. They were building these custom ASICs for their devices, and the ASICs took one year to fabricate, but the ASICs were not the critical path. The firmware was the critical path. So that was very bad. They tried all the usual things: insourcing, outsourcing, hiring people, firing people, all the usual stuff. Nothing was working. They were so desperate they came to the engineering leadership for help, which is how you know things are really bad.

And the first thing they did was look at how they were spending their money. And what you can see here is they were spending 10% of their money on code integration. 20% of their money was spent on detailed planning. 25% of their money was spent on porting code between branches, because they had different branches for every range of printers; so when you want to fix a bug that's also present in another line of printers, you have to merge the bug fix across the branches. 25% of their costs were spent on product support. What does that tell you? Quality problem. 15% of their money was spent on manual testing. If you subtract all that from 100%, they were only spending 5% of their budget on actually building new features. In terms of their cycle times, it would take them a week to get code checked into trunk. They were getting one or two builds a day out of their build system. And a full manual regression was taking them six weeks.

So what they decided to do, which was kind of scary and crazy, was to completely re-architect. And they had a few goals with their re-architecture. Number one, reduce hardware variation. So they said to the hardware people, you can't keep creating these very different ASICs so that we need to completely rewrite big chunks of the software. We're gonna have a single set of hardware for every device. It's gonna cost you a bit more, but it means we only need to make one build for all these different devices. Number two, we're gonna create a single package for the firmware which switches features on or off at boot time based on hardware profiles. So instead of producing different builds for different features, we're gonna produce one build, and it's gonna turn features on or off based on the hardware profile at boot time. That allowed them to work on a single trunk, a single master, and not have different branches. And that allowed them to implement continuous integration and comprehensive test automation.

I actually watched a guy from a large Chinese hardware manufacturer come and talk to the director of engineering from HP and say, well, how do you do test automation? And he said, well, we built a simulator for the ASIC. And the guy said, you did what? He was like, yeah, we built a simulator. Building a simulator for an ASIC is an enormous engineering investment. It was a lot of money they spent on this, but the benefit was they could run automated tests without actually having the ASIC ready, and they could run those tests on developer workstations. So developers could test the firmware on their workstations without having to wait for an emulator or an actual hardware environment to do it. That makes the feedback cycle much, much faster.
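To make that boot-time feature switching idea concrete, here is a minimal sketch, written in Python purely for illustration (the real firmware would be native code, and every name and feature flag here is hypothetical): a single package ships every feature, and the hardware profile read at boot decides which ones are switched on.

```python
# Minimal sketch (illustrative only): one firmware package for every device,
# with features switched on or off at boot time based on the hardware profile.
# The real firmware would be native code; names and flags here are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class HardwareProfile:
    model: str
    has_duplexer: bool
    has_colour_engine: bool
    ram_mb: int


def enabled_features(profile):
    """Decide at boot which features this device gets from the single package."""
    features = {"print", "status_reporting"}  # common to every device
    if profile.has_duplexer:
        features.add("duplex_printing")
    if profile.has_colour_engine:
        features.add("colour_printing")
    if profile.ram_mb >= 512:
        features.add("job_spooling")
    return features


if __name__ == "__main__":
    # In the real thing the profile would be read from the device at boot;
    # here we just hard-code two example devices.
    print(sorted(enabled_features(HardwareProfile("office-mfp", True, True, 1024))))
    print(sorted(enabled_features(HardwareProfile("basic-mono", False, False, 256))))
```

The point of the design is that there is only one build to test and one trunk to work on; the variation lives in data, the hardware profile, rather than in branches.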
And in the end, after a couple of years of building out their deployment pipeline, they ended up in a situation where they had 400 developers distributed across three countries making changes on a daily basis. They were getting about 100 commits per day into trunk on a 10-million-line code base, about 100,000 lines of code changed per day. And they were getting about 10 to 15 good builds per day coming out of trunk. Every developer pushed to their own individual Git repository, and only once automated tests had been run against that check-in were the changes promoted to stage two, where they were merged with the other check-ins and another two hours' worth of automated tests were run. Only at that point did they get merged into trunk. So the only way you can get into trunk as a developer is having your changes pass the automated tests. That means bad changes can't get into trunk, and they were able to get 10 to 15 good builds a day coming out of that process. Then they were running level two tests, which are some more automated tests, and then level three tests on emulators; they actually had physical logic boards in racks running emulators. And then an overnight regression test gave them a complete regression of all the features within 24 hours. So they completely eliminated that six-week regression testing, because they did fully automated overnight testing. You can find those bugs within a day. You can fix them straight away. The software is always in a deployable state.

Now, this is continuous delivery, but people aren't updating their firmware 10 times a day. That's not what's happening. We're not actually doing continuous deployment, where we're updating our firmware all the time. So why would we care about this? Why would we put in all this effort and investment? Because it turns out that it changes the economics of the software delivery process. We end up spending much less of our money on code integration, on planning, on porting code between branches. Product support goes down from 25% of costs to 10% of costs; what does that tell you? Better quality. A lot of testing has been automated. And they achieved an 8x increase in productivity, measured in terms of the money they were spending on actually building stuff.

So the key thing to bear in mind is there's all this waste in the software delivery process, stuff that's not actually adding value. However, you'll notice that the numbers on the right don't add up to 100%. There's a new activity that's not represented here: 23% of costs are spent on building and maintaining automated tests. Who here is not happy about the state of test automation for the systems they're building? Who here thinks we don't have nearly enough? Okay, a lot of you, right? What would happen if you went to your manager and said, please can we have 25% of our budget to spend on automated testing? Hoo hoo hoo, yeah, that probably, you know. But nevertheless, this is what they were doing, and they achieved an 8x productivity increase. In terms of the economics, overall development costs went down by 40%, programs under development increased by 140%, development costs per program went down by 78%, and the amount of money they were spending on innovation went up by 8x.

This is the mistake people make. People make the mistake of thinking that Lean is about cutting costs. Lean is not about cutting costs. Lean is about investing to remove waste.
And in the medium and long term, that drives down the transaction cost of making changes so that you can move faster. You're investing to remove waste. And yes, you're gonna end up investing a lot more money in test automation and continuous integration in order to do this. But what happens is the overall cost goes substantially down as a result. You have to pay up front for the investment. And this is, I mean, continuous delivery is not about releasing multiple times a day. Continuous delivery is about changing the economics of the software delivery process to make it economic to work in small batches. And when you work in small batches, you get feedback much faster, and then you can build quality in. And by building quality into the product, you substantially reduce costs, you substantially increase quality, and you substantially reduce time to market. That's what this is about. It's not about delivering 10 times a day. It's about changing the economics of the software delivery process.

Part the third: too much legacy. Who works in an environment where you're working with mainframes? Okay, lots of you, cool. So mainframes are actually pretty awesome. Mainframes are basically the cloud, but without network partitions. That's quite good. They have lots of really positive characteristics. So I'm gonna show you a video from a project that was done at Suncorp, Australia's biggest insurance company, about doing continuous integration in a mainframe environment. And I just realized at this point, I haven't tested the audio yet. So I'm gonna play audio over the computer now. Hopefully this will work in some way. It's plugged in. So you're gonna see a little test running here. And run that.

What you're gonna see here is a demo. There's a couple of mistakes in it, but that's okay. The green screens are remarkably durable and remarkably quick, and I found this to be an incredibly good testable endpoint. So they're kicking off a little test here, and this is actually going through an amazing number of workflows. This is creating a new company. This is creating new contracts for that company. That generally takes an analyst about 10 to 15 minutes to do. And what we discovered through this was that green screens are amazingly fast and amazingly durable. Not just the systems themselves, but testing them is very fast. So what's going on at the bottom is jobs are running and they're waiting for the jobs to run. And then they're kicking off batch jobs to run in the background, which is quite remarkable. So what we were doing, to support UAT testing, because we had this kind of capability, was we could set up 500 to 1,000 policies any time we wanted for UAT to run. We could set up training environments with thousands and thousands of policies for people to learn from. But what we found, and we used Concordion, so I'm gonna stop it right there. This is just an example of some of the Concordion output. But I was very surprised that GUI testing worked so well in a mainframe environment. We didn't think it would, but we put a lot of time into the test engineering and working with the system to make it work.

So what they did is they built a driver, driven through Java, that talked to the mainframe through the green screens. So there's a little shim, and then you can write acceptance tests in Java that drive the mainframe through the green screens. And then you can do test automation. You can write acceptance tests that run batch jobs and create large numbers of policies.
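To give a flavour of what those green-screen acceptance tests look like, here is a minimal sketch. The team described above wrote theirs in Java with Concordion; this sketch uses Python for brevity, and the GreenScreenDriver class is a made-up stand-in for the shim that would drive a real 3270-style terminal session, so everything here is hypothetical apart from the shape of the idea: fill fields, press keys, assert on what the next screen shows.

```python
# Sketch of an acceptance test driving a mainframe green screen through a shim.
# GreenScreenDriver is a hypothetical stand-in; a real one would wrap a
# terminal session. The team in the talk wrote the equivalent in Java.

import unittest


class GreenScreenDriver:
    """Fake shim: records the fields and keys it is sent and fakes a response."""

    def __init__(self):
        self.screen = {}

    def fill(self, field, value):
        self.screen[field] = value

    def press(self, key):
        # A real shim would send the keystroke and wait for the next screen.
        if key == "ENTER" and "COMPANY NAME" in self.screen:
            self.screen["STATUS"] = "COMPANY CREATED"

    def read(self, field):
        return self.screen.get(field, "")


class CreateCompanyTest(unittest.TestCase):
    def test_creating_a_company_shows_confirmation(self):
        terminal = GreenScreenDriver()
        terminal.fill("COMPANY NAME", "ACME INSURANCE")
        terminal.fill("COUNTRY", "AU")
        terminal.press("ENTER")
        self.assertEqual("COMPANY CREATED", terminal.read("STATUS"))


if __name__ == "__main__":
    unittest.main()
```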
And what they actually used this to do in the end: they had about 18 different mainframe systems from all the companies they had acquired, and by putting test harnesses around them, they were able to reduce those down to about two or three systems and basically get rid of all the legacy systems and considerably consolidate. And that massively reduced the cost of supporting all these different mainframe systems, because they just got rid of them.

I mean, a key part of what we're trying to do in enterprises is reduce complexity. What "too much legacy" means is there's too much complexity in our environment, and the budgeting process in large enterprises drives us to add more complexity all the time. What we actually wanna do is reduce complexity. That's what allows us to move faster. And you can apply these techniques to reduce complexity: building test automation and applying these continuous delivery ideas, like making sure your code's always testable and doing push-button releases to mainframe environments, in order to drive down complexity. That's really powerful. It frees up enormous amounts of money.

Complexity is not just about mainframes. Complexity is everywhere. It's in poorly written, tightly coupled Java systems that are tied to enormous databases, where the databases are the integration points for hundreds of different systems. That was actually a problem that Amazon had back in the day. It took Amazon four years to re-architect to achieve those continuous deployment stats that I showed you earlier. And during those four years, they did very little new work. They were basically re-architecting their entire system from a big ball of mud into a service-oriented architecture. So, I mean, service-oriented architecture is a really old idea. It's over 10 years old at this point. But people did it badly. It all became about WSDL and sending protocols over the wire instead of what it should actually have been about, which is making services independently testable and deployable.

It turns out there is a way to fix that problem incrementally, and it's called the strangler application pattern. Who has seen the Tomb Raider movie? Okay, Tomb Raider has important lessons for software architecture. This is a temple at Angkor Wat in Cambodia. And what you can see here is there was a tree that grew on top of this temple, and then a little bird came and did a poop, and a seed for a strangler fig grew and basically surrounded the tree and then killed it. And you have strangler figs here in India, which are pretty awesome too, so, same idea.

We can apply this to our systems. What Amazon did, and what has been done in a lot of other companies as well, is you don't do a big-bang rewrite of your legacy systems. What you do instead is you drive incremental, evolutionary architecture change through building new stuff. So you don't rebuild all the old stuff. What you do is you take new requirements and you build new services in a modern object-oriented or functional paradigm, test-driven, using a database per service, and you actually build a service-oriented architecture incrementally. So as new features come in, you build new modules, and those new modules still have to talk to the old stuff, but over time less and less of the functionality in the old stuff is used, more and more of the functionality is in the new services you're building, and gradually you strangle it. But the key thing is you're never done with this.
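As a rough sketch of how the strangler application pattern can look in code, imagine a thin routing layer in front of the legacy system: requests that the new services know how to handle go to them, everything else falls through to the legacy system, and over time more routes move across. The names below are made up purely for illustration.

```python
# Minimal sketch of the strangler application pattern: route what the new
# services can handle to them, let everything else fall through to the legacy
# system, and migrate more routes over time. All names are illustrative.


def legacy_system(path):
    return "legacy system handled " + path


def new_quotes_service(path):
    return "new quotes service handled " + path


# Routes that have already been "strangled" out of the legacy system.
NEW_ROUTES = {
    "/quotes": new_quotes_service,
}


def handle(path):
    for prefix, service in NEW_ROUTES.items():
        if path.startswith(prefix):
            return service(path)      # migrated: served by a new service
    return legacy_system(path)        # not migrated yet: fall through


if __name__ == "__main__":
    print(handle("/quotes/new"))      # goes to the new service
    print(handle("/policies/123"))    # still handled by the legacy system
```

When the legacy system no longer handles any routes, it can be decommissioned; until then the old and new run side by side, which is what makes the change incremental rather than big-bang.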
Evolutionary architecture is an ongoing process. You're always working towards improving your architecture. And what that means is you should always be devoting part of your budget to incremental architecture change, with the goal of reducing complexity. There was actually a big retailer that we worked with when I was at ThoughtWorks where, one year, the VP bonuses were based on the number of systems they decommissioned that year. So as a VP, you only got your bonus if you decommissioned a bunch of systems. And that is one of the few ways that you can encourage people to reduce complexity, because people just want to add more features. No one cares about IT operations. IT operations is just a big cost center, and all that anyone wants to do is reduce that line item. But actually the way to do that is to reduce the complexity of the environment, and we need to be doing that all the time. And that's a developer concern. That's something developers should care about, because the complexity of your operational environment is what's making it so hard for you to add new features.

Who requires an integrated environment that takes more than two weeks to set up in order to do real acceptance testing for their systems? Okay, probably a lot of you. That's an architectural problem. That's not something you should accept. That's something that indicates a problem that you should fix. So how can we reduce complexity? How can we reduce coupling between our environments? Maybe we can build client libraries to abstract away talking to these systems, so that we can put in a mock version of that service or a virtualized version of that service instead. There are all kinds of smart ways to reduce coupling and reduce dependencies and reduce complexity, and that's something we should care about as developers; trying to make things less complex should be part of what we're doing.

Now, who's heard of Steve Yegge's platform rant? Okay, this should be mandatory reading. This link, I think, doesn't work anymore, but if you Google Steve Yegge's platform rant, you will find it. He worked at Amazon during the time when they did this big re-architecture from this big, complex C++ ball of mud to a service-oriented architecture, which was 2001 to 2005, so four years to do that. This is Steve Yegge's summary of the memo that Jeff Bezos, the CEO of Amazon, sent to all their technical staff. He says: number one, all teams will henceforth expose their data and functionality through service interfaces. Two, teams must communicate with each other through these interfaces. Three, there will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team's data store, no shared memory model, no backdoors whatsoever. The only communication allowed is via service interface calls over the network. Four, it doesn't matter what technology you use: HTTP, CORBA, pub/sub, custom protocols, doesn't matter, Bezos doesn't care. Five, all service interfaces, without exception, must be designed from the ground up to be externalizable. That means the team must plan and design to be able to expose the interface to developers in the outside world, no exceptions. Number six, anyone who doesn't do this will be fired. Now, I don't really agree with that one. But anyone who read the New York Times article about Amazon's culture that came out like 18 months ago will realize that this is probably not a joke.
Bezos actually hired an ex-US Army Ranger, who I think had been CIO of Walmart at one point, to come in and enforce this. And this guy would come around to your team, and if he saw in your code that you were talking to another system's database directly, not through a service interface, he would shout at you a lot. And if you carried on doing it, he would try and have you fired. So that was pretty severe. And to have a CEO send a memo like this is pretty crazy. Not many CEOs would send a memo like this, but they were deadly serious about it, because they realized that by fixing their architecture to enable them to deploy those services and test those services independently, they could then do the stuff I showed you much earlier. That's what enabled them to do it. If you're gonna do this, you have to architect for it. It's an architectural concern. And if your architecture won't support it, then you had better re-architect so that you can. And if you're gonna re-architect, you had better not do it in a big-bang way, because what will happen is you will fail. You need to find a way to do it incrementally, while the plane is flying, using the strangler application pattern.

Part the fourth: our people are too stupid. Our people are too stupid. There's a great story about Adrian Cockcroft. Adrian Cockcroft was the cloud architect for Netflix. Netflix are famously one of the biggest users of the internet in America; in fact, at one point they accounted for an enormous amount of internet traffic in the US. And they famously implemented a lot of patterns around using Amazon Web Services at scale. It took them about six or seven years, but they moved everything from their data center into the cloud. Everything runs off Amazon. And so they're pretty well known as being amazing and really cool. And so the CIO of some stodgy old company goes up to Adrian Cockcroft and says to him, Adrian, where do you get all these amazing people from? And Adrian turns to him and says, I get them from you. Because guess what? It's not the people that are the problem. I mean, the problem is that someone is being stupid, but it's not the people you think.

So one of the famous stories about Lean is the story of NUMMI. Who's heard the story of NUMMI, New United Motor Manufacturing Incorporated? A few of you. Okay, this is a cool story. About 20 minutes from where I live in Berkeley, California, is what's now the Tesla factory. But the Tesla factory only came into being a few years ago. The story starts in 1983. In 1983, the Tesla factory was actually a GM factory. And it was the worst factory in the whole of GM North America. Labor relations had broken down to the extent that the workers would drink and take drugs and gamble during their working hours. And they would deliberately sabotage the cars by putting Coke bottles inside the doors, so when you opened and shut the doors, the doors would rattle. So: worst quality cars, miserable labor relations. GM decided to shut the factory down. So Fremont Assembly, as it was called then, got shut down in around '83.

Around the same time, Toyota wanted to create a joint venture with GM. GM wanted to learn about the Toyota production system; they wanted to learn how to build small cars profitably. Toyota wanted to build a plant in North America, because the US government had imposed trade barriers on Japanese companies, because they were producing cars that were too good and too cheap and the US auto industry couldn't compete.
So obviously trade barriers are the solution to that problem, right? So for the new joint venture, they decided to use the old Fremont Assembly plant in Fremont. And then something really crazy happened. The union leaders convinced Toyota's management to rehire the same people. And they sent these people to Japan, to Nagoya, to Toyota City. And they went on a training course that was designed by Toyota Japan's first American employee, a guy called John Shook. They went on this training course, they saw how Toyota built cars, they worked on Toyota's production lines in Japan. And then after this training course, they came back and started work in the NUMMI factory in Fremont. And within a year, they were producing higher quality cars than any other GM plant, and as high quality as the cars that Toyota was producing. The same people. What this tells you is that it's not the people who are the problem. It's the system of work and the system of leadership and management that's the problem.

So what's different? Well, this is a picture from a Toyota factory, in England in fact. And what you can see on the bottom left, there are these kind of lines on the floor here, right? What they are is timing how long it takes for the car to go through your cell. So the way it works is the production line goes past you. Cars come along the production line. You have a certain amount of time to do whatever your job is, which might be, you know, putting in the engine, putting in some seats or whatever.

So here's what happens in the GM factory, pre-NUMMI. The car comes along the assembly line and your job is to put in the seat, let's say. So you put in the seat, you start tightening the bolt, and then the thread goes, and you have to pull out the bolt, and then you've got to get a new bolt, and you put in the new bolt, and oh, I haven't got time to finish it. What happens? Nothing happens. That car goes off down the production line with the seat not put in properly, and you have to go and do the next one. And at the end of the production line is quality control. And quality control looks at the car and is like, no one can drive that. And they send it off to a parking lot to rust, or maybe be fixed. And so a large proportion of the cars that come off the production line don't actually work. They can't be sent to the showrooms and be sold.

What's different about a Toyota production line? So the car comes along, you've got to put the seat in, you're putting in the bolt, you don't have time, the thread goes, you've got to pull the bolt out, you've got to... So when you get to about three quarters of the way along the production line, if you're not done, you can pull this thing called an andon cord, or some factories have a button that you can press now. And what happens is it plays a jolly little tune. Do-do-do-do-do-do. And then a little light comes on, an orange light, and a manager comes and the manager helps you. How good is that? And then if you can't get it done by the time you get to the end of the production line, you can pull the andon cord again, and another little jolly tune plays, do-do-do-do, and a red light goes on, and the whole production line stops.
And then you can actually fix the problem, make sure it's done there and then, and then you can start the production line again, and the car goes off down the production line. And then some time later, you reflect on how you could improve that process. And the workers, I mean, no one tells you you must do this differently. You make suggestions, and then you get to implement those suggestions. So as a worker, you get to change the system to make things better, and you have the power to do that. You have the power to change your own system of work to make it better. And they actually have teams of engineers in Toyota factories who'll come to you, and you can say, listen, if this wrench had a bend in it, it would be much easier to do this job. The engineers will go off and they will come back with a new wrench and be like, how about this? And you'll be like, oh, that's much better, thanks. So as a worker, you have the power to stop the production line. You have the power to change your system of work. And this is what empowerment is all about. Empowerment is not managers telling you to be empowered. Empowerment is you actually getting to make decisions that affect your work.

And the key thing here is you're building quality in. As a worker, your responsibility is to make sure that when the work product leaves you, everything is great, and you have the tools and the power and the responsibility to build quality in. And then you don't need quality control at the end, because everyone is doing their job and building quality in. If you wanna read more about this, which I highly recommend, there's an episode of This American Life which tells this story, and there's an article in Sloan Management Review by John Shook which talks about how to change a culture. And the key thing is you don't change a culture by leaders writing manifestos. You change a culture by changing the way that people do their daily work. That's how you change culture: by changing the system of work. And that changes culture, not the other way around.

This idea about building quality in is actually very old. Who knows what Toyota was doing before they were building cars? Not you, Naresh, I know you know. Yes. Yeah, looms, that's right, that's correct. So this was their breakthrough product, about 90 years ago: the Toyoda Automatic Loom, Type G. It was called the Toyoda because that's the actual family name. Before this loom was released, every loom had to have an operator who would stand in front of the loom and watch what was coming out of it, and if something went wrong, so if a thread broke or it ran out of cotton, they would have to stop the loom and fix the problem. And otherwise they were very, very bored, because they were just staring at this thing. So the Toyoda Automatic Loom Type G could actually detect any failures. It didn't know how to fix them, but it could detect them, and then it would stop and say, I've got a problem, and then the operator would come and fix it.

So this changed the economics. Since the looms stopped when a problem arose, no defective products were produced. This meant that a single operator could be put in charge of numerous looms, resulting in a tremendous improvement in productivity. It also means that we're assigning the responsibilities in an interesting way. The machines are in charge of finding problems, and the humans are in charge of fixing the problems, because fixing problems is problem solving.
Computers can't do problem solving yet, except in very limited cases. What we need humans for is solving problems; what we need computers for is performing repetitive tasks. And look for that in your daily work. If you see things that are repetitive tasks, automate them, so that you can use humans for what they're good at, which is problem solving. That's what we should be doing.

So there's an exact analogy to this in software delivery. What is it? All right: test automation, continuous integration. This is what continuous integration is. Continuous integration is: we have tests that run every time you check in, and any time we find a bug, a light goes on, a buzzer sounds, there's some notification, and we stop and fix the problem straight away.

And by the way, who's doing continuous integration? Hands up if you're doing continuous integration on your team. Keep your hands up, keep your hands up. I'm gonna administer my free certification. Okay, keep your hands up. If not all of your developers are checking into trunk every day, if you're working on long-lived feature branches that don't get merged into master on a daily basis, put your hands down. If everyone is checking into trunk or master every day, keep your hands up. If, when your build breaks, you get it fixed within 10 minutes, keep your hands up. If that's not true, put your hands down. Okay, who still has their hands up? Okay, there's like five of you, six of you. So well done to those people. You are certified continuous integration practitioners. You can email me and get your free certificate afterwards. CI is not running Jenkins against your feature branches and then ignoring the build when it breaks. Continuous integration is working off trunk and fixing problems as soon as they occur, so that we can build quality into the product and not let bad software go downstream to be found and fixed by other people. This is the same idea that was invented 90 years ago, the one W. Edwards Deming talks about: building quality in.

And we know that it works. The research that I've been doing for the last four years with Puppet Labs, Gene Kim and Nicole Forsgren has gathered thousands of data points from about 20,000 people worldwide, across 100 different companies. And what we find is that the things on the left: effective test data management, trunk-based development, automation of test and deployment, keeping everything in version control, doing security as part of the delivery process rather than downstream. Those things together are the practices of continuous delivery, and the data shows that they result in less rework, which is our proxy variable for higher quality. They result in lower levels of deployment pain. They change culture, so you actually get a higher-performing culture with more information flow as a result. You get high levels of IT performance in terms of throughput and stability. It actually helps people identify more strongly with the organization they're working for, because guess what? If you have the tools and the resources and the authority to do your job, it makes you feel better about your job and happier in your job. If you don't have to work evenings and weekends to deploy software, you feel better about going to work in the morning. And it results in lower change fail rates. And we know also that a higher-performing culture and higher IT performance result in higher organizational performance. So these things work, and they impact the bottom line, and they change culture.
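To tie the continuous integration and deployment pipeline ideas together before closing, here is a minimal sketch of a pipeline expressed as a script: a commit stage that builds and runs unit tests on every check-in to trunk, an acceptance stage that tests the build in a production-like environment, and then push-button deployments. The stage bodies are hypothetical placeholders; the shape is what matters: stop the line on any failure, and only promote builds that have passed every earlier stage.

```python
# Sketch of a deployment pipeline. The stage bodies are hypothetical
# placeholders; in a real pipeline they would call your build, test and
# deployment scripts, all of which live in version control.


def commit_stage(revision):
    """Build the revision and run the unit tests."""
    print("commit stage: building and unit-testing", revision)
    return True


def acceptance_stage(revision):
    """Deploy to a production-like environment and run acceptance tests."""
    print("acceptance stage: testing", revision, "in a production-like environment")
    return True


def deploy(revision, environment):
    """Push-button, fully scripted deployment; the pipeline itself is the audit trail."""
    print("deploying", revision, "to", environment)


def pipeline(revision):
    # Stop the line: nothing gets promoted past a failing stage.
    if not commit_stage(revision):
        raise SystemExit("commit stage failed: fix it before doing anything else")
    if not acceptance_stage(revision):
        raise SystemExit("acceptance stage failed: fix it before doing anything else")
    deploy(revision, "staging")
    deploy(revision, "production")  # self-service, whenever you choose to press the button


if __name__ == "__main__":
    pipeline("rev-abc123")
```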
I'm gonna end with a couple of quotes. This first quote is from Taiichi Ohno. He says: improvement opportunities are infinite. Don't think you've made things better than before and be at ease. That would be like the student who becomes proud because they've bested their master two times out of three in fencing. Once you pick up the sprouts of kaizen ideas, it's important to have the attitude in our daily work that just underneath one improvement idea is yet another one.

Continuous delivery is just continuous improvement. That's all it is. It's just making things better all the time and never giving up. Transformation is not a project with an end date. What's characteristic about high-performing companies is that they're always working to get better. They never stop. It's part of their daily work, and it's part of everyone's daily work, to make things better.

And I'm gonna finally end with a quote from Jesse Robbins, who was Master of Disaster at Amazon. He says: don't fight stupid, make more awesome. And if you take one thing away from this, every day go into work and think about how can I make things more awesome for the people around me? And if every day all of us went into work and thought, how can I make things more awesome for the people around me, that's how you implement continuous delivery. Thank you very much.

Do we have time for questions, Naresh? I know, quite short, really. All right, any questions? We have time for exactly half a question. Minus two questions. Everyone wants beer. All right, if no questions, then thank you all. Thanks, Jez, for the wonderful talk, as usual. Thank you, thanks for having me.