So, continuous integration is about making sure that your software is ready, that it's actually working all the time, so that the default state of the software is that it works. But the software doesn't deliver any value to users until it's in production. The key insight of continuous delivery is that we can take continuous integration and extend that idea of keeping the software in a working state all the way through your process to release to users. To do that, we create what's called a deployment pipeline.

This is a very simple example of a deployment pipeline. It's linear; there's no branching out for people to do performance testing and UAT independently and so forth, but it serves to illustrate the idea. What you can see is people checking code in to trunk in version control. That triggers the build and unit tests, which is the standard continuous integration loop. If the CI build fails, it's very important to fix the problem immediately, either by going and finding the file you forgot to check in, or, if you can't fix the build within a few minutes, by reverting your change out of version control. One of the most selfish things you can do as a developer is to leave broken code on trunk, because it means the rest of the team is not working off a known good state. We should be able to get that feedback in about ten minutes or less, preferably just a few minutes. That means running only the unit tests, because they're fast, and keeping your unit test suite well factored and rapid.

Obviously that first build fails, so we fix the problem and get a build that passes the unit tests. At this point we move beyond continuous integration. We take the build we created in that first stage and perform comprehensive automated acceptance testing against it. That takes a bit longer: for a large system, the acceptance tests might take a day or several days to run sequentially, so we typically run them in parallel on a big grid and try to get the feedback in 30 to 40 minutes at most. When those tests fail, again, the first priority of the team is to fix the problem. We need to prioritise keeping the system working over doing new work. That's one of the key mindset changes when you're moving to continuous integration and continuous delivery.

Once we have a build that passes all the automated tests, we can push it downstream to the more expensive processes like performance testing and to manual processes like user acceptance testing. The key thing is that the notification goes downstream, and the people downstream can push a button to deploy the build of their choice to the environment they're using for testing. Then, when we have a build that's passed all the testing stages, it should be possible for the people working in technical operations to self-service any build of their choice, at the press of a button, to staging and to production.
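To make the shape of this concrete, here is a minimal sketch of such a linear pipeline in Python. Everything in it is illustrative: the stage names, the stand-in test results and the notification step are assumptions made up for this example, not the API of any real CI tool.

```python
class PipelineFailed(Exception):
    """Raised when a stage fails; fixing it takes priority over new work."""

def commit_stage(change: str) -> str:
    """Runs on every check-in to trunk: compile plus fast unit tests.
    Should give feedback in about ten minutes or less; if it fails, fix it
    within a few minutes or revert the change out of version control."""
    build = f"build-of-{change}"    # stand-in for compile-and-package
    unit_tests_pass = True          # stand-in for the fast, well-factored suite
    if not unit_tests_pass:
        raise PipelineFailed("broken trunk: fix or revert immediately")
    return build

def acceptance_stage(build: str) -> str:
    """Comprehensive automated acceptance tests, run in parallel on a grid
    so feedback arrives in 30 to 40 minutes rather than days."""
    acceptance_tests_pass = True    # stand-in for the parallel grid run
    if not acceptance_tests_pass:
        raise PipelineFailed("stop the line: keeping the system working comes first")
    return build

def pipeline(change: str) -> None:
    build = acceptance_stage(commit_stage(change))
    # Downstream stages are pull-based: UAT, performance testers and
    # operations self-service any passed build at the press of a button.
    print(f"{build} is a release candidate, available downstream")

pipeline("change-123")
```

Note the shape: the automated stages push a candidate forward, but the expensive downstream stages only pull builds that people choose to deploy.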
The key idea in continuous delivery is that every change results in a build, and every build is a release candidate. The job of the deployment pipeline is to prove that a build is not releasable. We can never prove the absence of defects; we can only ever prove the absence of known defects. So we want to try to find all the known defects, and if we can't, we should feel totally comfortable releasing a build that's passed through these stages to our users. If we don't feel totally comfortable doing that, it means our validations are not good enough and we need to improve the quality of the software.

So in continuous delivery, every build is like the hero in an epic. The hero wants to marry the beautiful woman, but first he must pass a series of tests: get the big pile of gold, fight the evil monster, have an emotional reunion in the underworld with his father. And then, finally, the build gets to marry the beautiful woman. It's like a swayamvara in Hindu culture. Every build is a release candidate, and the job of the deployment pipeline is to prove that it's not releasable. If it can't, then we can release it.

What this does is change the way we think about software development. The default state is that every change should be releasable to our users. In typical software development at scale, that's not the case. What we have instead is "dev complete", which means "it works on my machine", and then, separately, "releasable", and we have to put a lot of effort into stabilisation phases to get from one to the other. We put in countermeasures like release trains, techniques which basically cater to the fact that releasing is very expensive. It's important to bear in mind that countermeasures like release trains and integration and stabilisation phases are not the inevitable consequence of scaling agile. They are a sign that something is wrong with your engineering practices: that insufficient attention has been paid to test automation, to investing in engineering practice, and to a well-factored, loosely coupled architecture.

Continuous delivery is about making it cheap and low risk to deliver incremental change to users, and there's a constellation of techniques; the deployment pipeline is just one. There's a piece about making it cheap to provision and manage infrastructure, and about managing database changes in a cheap, low-risk way. There's a whole number of techniques that we describe in the book and that are out there on the web, which high-performing companies like Netflix and Etsy are practising.

Why do we care about doing this? Martin talks about the fluency model, and when you get to three or four stars there's this idea that the team is actually participating in deciding what's valuable. My favourite statistic on what's possible with continuous delivery comes from Amazon. In 2011, Amazon was deploying changes to production on average every 11.6 seconds, doing up to 1,079 deployments in a single hour to about 10,000 boxes. Now, it was expensive and painful for them to reach that level of fluency, and they needed to change their architecture and the structure of their teams, building small self-organising teams that can independently push their services to production. But what it allows them to do is run A/B tests on all their features. Any time they have a feature idea, they can run an experiment on a very small percentage of users to find out whether that idea will actually improve the top-line revenue metrics they care about.
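As an illustration of what "an experiment on a very small percentage of users" can look like mechanically, here is a small sketch of deterministic bucketing. The hashing scheme, function names and percentages are my own assumptions for illustration, not Amazon's actual framework.

```python
import hashlib

def in_experiment(user_id: str, experiment: str, percent: float) -> bool:
    """Deterministically assign a user to the treatment group.
    Hashing (experiment, user) keeps the assignment stable across visits
    and independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000
    return bucket < percent * 100    # e.g. percent=1.0 puts 1% of users in

# Serve the new feature to 1% of users, then compare the revenue metric
# between treatment and control before deciding whether to roll it out.
if in_experiment(user_id="user-42", experiment="new-checkout", percent=1.0):
    variant = "treatment"   # show the new feature
else:
    variant = "control"     # existing behaviour
print(variant)
```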
Based on a large number of such experiments, the man who built Amazon's A/B testing framework made this observation: evaluating well-designed and executed experiments that were designed to improve a key metric, only about one third were successful at improving it. What that means is that two thirds of the software features we build provide zero or negative value to users. And if you're not measuring the value of the features you're delivering, two thirds of the work you're doing is not adding value, and may in fact be subtracting value. These were good ideas, features that people thought were slam-dunk ideas, and two thirds of them delivered zero or negative value. It's impossible to predict which features will actually deliver value to users, so if you're not testing to discover whether they are in fact valuable to your users, you've got a problem. We have this word "requirements". Whose requirements are they? They're not the user's requirements. Users don't know what they want; users know what they don't want once you've built it for them. They're the customer's requirements, the person who's paying the money.

So, in the last few minutes that I have, I want to give an example of continuous delivery at scale in a non-webby environment. I'm going to talk about the HP LaserJet firmware team. This is the team that builds the firmware for HP's LaserJet printers, and they had a problem: they were going really, really slowly, and they weren't able to get new printers out nearly as fast as the competition. When they looked at what they were spending their time on, they found they were spending it on activities that added zero value. Every time they brought out a new range of printers, they forked the code in version control, and they were spending 25% of their time porting code between branches: when they made a fix to one printer, they had to port that fix to all the other ones, and the same for features they added. The quality was poor; they were spending another 25% of their time on product support, and 20% of their time on detailed planning.

What they decided to do was re-engineer the system, to re-architect it and build it from scratch in a way that would let them build off trunk, with one single firmware build for all of their different ranges of printers and scanners and multifunction devices. They rebuilt the software so that they could have one build, and when the firmware woke up, it would look to see what device it was on and turn on only the features necessary for that device. In that way, they could have one build and they could work off trunk.
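Here is a minimal sketch of that single-build idea: one image that probes its device at boot and enables only the relevant features. The device names and feature sets are invented for illustration; this is not HP's actual firmware code.

```python
# One firmware image for every device; features are switched on at boot
# based on what hardware the build finds itself running on.
DEVICE_FEATURES = {
    "mono-printer":   {"print"},
    "colour-printer": {"print", "colour"},
    "multifunction":  {"print", "colour", "scan", "fax"},
}

def detect_device() -> str:
    """Stand-in for probing the hardware at boot."""
    return "multifunction"

def boot() -> set[str]:
    enabled = DEVICE_FEATURES[detect_device()]
    # Only the features relevant to this device are switched on; the same
    # build runs everywhere, so all the work can happen on a single trunk.
    return enabled

print(sorted(boot()))   # -> ['colour', 'fax', 'print', 'scan']
```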
Between 2008 and 2011 they evolved a very powerful deployment pipeline that enabled them to work off trunk at scale. Bear in mind this is not where they started; this is where they ended up after two to three years of evolving their process. They were able to have a team of 400 people, distributed across three countries, Brazil, America and India, making 100,000 lines of code change every day to a ten-million-line code base, and getting 10 to 15 good builds a day out of their check-ins to trunk.

Any time someone wanted to check in, they'd run a couple of hours' worth of automated tests against their change before it ever made it into trunk; it only got into trunk if that initial set of tests passed. Then, on any build that came out of trunk, they ran level-two tests on a simulator they had built to simulate the actual hardware platform. If a build passed that step, they ran another two hours' worth of tests on actual printer logic boards in racks to make sure the build was good. If that worked, they ran their complete regression suite overnight: about 10,000 hours' worth of tests in parallel. So within 24 hours you got feedback on the quality of your check-in, and if those tests broke, they immediately fixed them.

In this way they got to a state where their software was always releasable. They completely eliminated the need for stabilisation and integration phases, and what that meant was the business could change its mind and introduce new features well after release candidates came out, right up until just before the firmware was actually released. They were able to renegotiate their relationship with the business and move away from detailed upfront planning: if you want to change your mind about which features to build, you can do that at any time, because there's only one day's worth of work in process off trunk at any time, so you can stop what we're doing and have us work on whatever you want. And this is a team working on firmware, building automated tests. It's a powerful demonstration that these techniques work at scale, even in very complex domains. As a result, they were able to massively reduce the amount of time spent on non-value-added activity, increase the amount of time spent on innovation, and completely change the economics of their software development process. They wrote a book on this, and I recommend buying it. It's short, it gives you the value proposition for doing this, and it demonstrates that we don't need things like stabilisation and integration phases or release trains in order to run agile at scale.

This is hard; it's complicated to do. So I want to leave you with one final thought, from Jesse Robbins, who was head of availability at Amazon. The way you implement this is not by creating a programme or a project; it's by everybody, every day, working to continuously improve the process. Jesse's rule is: don't fight stupid, make more awesome. And that's what I want to leave you with. Every day, when you go to work, think about one thing you can do today to make things slightly better for everyone around you. If everybody does that every day, that's how we continuously improve our process, get better at what we do, and create better outcomes for the business. Thank you.