As you can see, this is my dog. By the way, he says hello. I deploy on Fridays, maybe you should too. We'll get down to that. If you want to reach out to me at any point in time, that there is my Twitter handle. Feel free to post any criticism, any suggestions, any things you like or dislike. I will not respond during the talk, but after. OK, let's get going.

As I was running through slightly foggy London this morning alongside the canal, which is beautiful, by the way, I was thinking, as I often do. People that know me know that I like to talk about this. I was thinking about deploying. And I was not just thinking about deploying, I was thinking about deploying on a Friday. And funny thing, it's Friday today. So it got a little meta: I was thinking on a Friday about deploying on a Friday.

When I say deploying on a Friday, what's your reaction? Can we do that? Yeah? OK, can the people that say no please raise their hand? Awesome. I hoped that would be the case. Of the people that raised their hands, who is forbidden to deploy on a Friday by company policy or team policy? OK. And the rest, are you doing it of your own volition? Yes? OK.

So you're not alone. Because whenever I talk about deploying on a Friday, typically the reaction is this: are you crazy? And once again, you're not alone. You just have to Google for deploying on a Friday, and you get this. The internet is awash with memes about it. I don't know who you are, but I will find you. And yeah, that's only going to go one way, right? A 5 PM deployment on a Friday. This is an actual t-shirt you can buy, and there are mugs you can buy as well. I may just get one for kicks. "Should I deploy on a Friday at 5 PM?" No. "What if I just need to know?" So basically: stop. Don't. Don't deploy on a Friday.

And OK, let's not then. Well, that would make my keynote very short, and that's not why I'm here, right? So I want to get down into why people say this. There are a couple of reasons that I've identified over the years.

One of them is that it's simply tradition. We've never deployed on a Friday at this company, so please don't start now. And if you then ask why: well, because the previous person didn't do it, and they told me not to when I got into the company, and so I don't do it either. It's tradition. It may be unwritten, but it's in the company culture nonetheless.

Risk is a thing that is regularly, negatively associated with deployments, with releases. Why? Because they fail. And then the inevitable but also understandable reaction is to reduce your deployment frequency. Things go bad, things fail, so what we do as humans is do less of it, because it hurts. And ironically, the less you do it, the riskier it becomes. We'll get back to that in a little bit. You see this happen at large companies, for example. Financial institutions are notorious for deploying once or twice a year, because they associate all this risk with the release. And the only thing that does is keep confirming their bias, because every half-yearly release basically goes wrong.

Control. If I ask you, as a developer, as a tester, as a team member, as an ops person maybe: are you allowed to decide when you deploy to production? Who says no? OK, I see a fair number of hands. There are companies that actually do not allow technical team members to make that call.
They want certain features to go live only when some other people, in some cases even the CEO, say it's OK to do that. And that's a control mechanism. It's basically somebody without any of the technical knowledge or background saying: you cannot do that on Friday at 5 PM, because I have this idea associated with that thing that it's dangerous, so please don't do it. I've read somewhere in the newspaper that it breaks on Friday, so don't. That's control.

Confidence and trust. The legacy application where you really don't know what is happening, or what will happen when you touch it here and then all of a sudden, over there, something falls apart. We don't have confidence that the changes we make to our system will generally work when we deploy them to production. Maybe we don't trust ourselves with our code. I'm changing something here and I don't really know what I'm doing, because the system is not architected and designed in such a way that it helps me understand what I'm doing. So I really don't trust what I'm doing, and I try to limit my exposure: well, at least not on a Friday.

Time to repair and recover. A lot of companies optimize for the time between failures. They want that to be as large as possible, which is insane, because things fail. But the reason they do that is because their systems are so difficult and so complicated that when they break, it takes a long time to fix them, to restore the system. So if your time to repair is 24 hours or more, then yes, I can understand why you would not want to deploy a change on Friday, because if it breaks, it will push you into the weekend. It will actually force you to deal with that system going into the weekend. And worse, if it's more than 24 hours, it will wipe out your entire weekend. So the time it requires you to repair or recover a system definitely plays into don't deploy on Friday.

And that goes directly into on-call stress. The person that goes on call Friday afternoon. Well, if it's not you doing the deployment: I'm doing the deployment and somebody else is on call. And if I know that my system tends to break after deployment, then, well, maybe if I don't like that person that is on call, I may actually want to do it. Screw their weekend. But let's assume we're all reasonable individuals and we're all team players. So we don't, because we've seen it before, right? We've bloodied our nose on that wall. The person that is on call is going to be hurt by this. Their weekend is going to go into the toilet. So that's not something we want to do either.

So all these things combined, it's pretty reasonable to say don't deploy on a Friday, right? And the funny thing is, I used to say the exact same thing, not that long ago, at least not when compared to cosmic timescales. Six, seven years ago, I used to say something like that as well. Probably without context, maybe because it was tradition, maybe because I heard people in the industry saying it, maybe because I saw the shirt online. I don't know, I can't remember, but I used to say it. And I like my weekends and my evenings just as much as everybody else.

But what about bugs that you find in your system? A critical issue that you find on a Thursday night or a Friday morning? I've seen companies that actually have two streams of software flowing to production.
One is the stream that they advertise: the feature stream. And the other is the "huh, it broke, let's fix it" stream. And they actually bypass everything that is in the feature stream just to get something to production really quickly. There is an organization in the Netherlands, which I shall not name, that used to advertise that they did two releases a year. They actually did four: the two that they advertised, and then a week after each release, the real release that fixed everything the release broke.

So what about bugs? What about 3 PM, is that okay? Or are we freezing the entire day? And what about Thursday? If your system takes 24-plus hours to fix after a broken deploy or a change, and you deploy on Thursday afternoon or Thursday night, then maybe the system is still broken on Saturday. So maybe we should not deploy on Thursday either, and we should say: okay, let's only deploy on Monday, Tuesday and Wednesday.

Friday is 20% of the working week. If you block pushing value to production, value to your users, for 20% of the week, and your competitor isn't, then you're going to lose that battle. That is an important thing. So I'd say: let's improve. Let's do it more often. This is not what your doctor would recommend. Doctor, I broke my knee. Okay, do it again, it will hurt less the second time, I promise you. But in this context, it really is true. Some people associate negativity with failure, and that's not really the case. Failure is not the opposite of success. Failure is encouragement to learn. It's an opportunity. And as Winston Churchill once said, and he was a wise man: success consists of going from failure to failure without losing your enthusiasm. So basically, we keep tripping, and yet we're still smiling and we're continuing happily along. This is what improves us, if we take those failures not as negative data points, but as opportunities to improve.

And the thing is, if you do things in big steps, then they will fail big. If we do one release a year, then what can go wrong will go wrong, and it will go spectacularly wrong, because there are a lot of changes in that release, right? Conversely, small steps, if they fail, will fail small. And fail they will, or some of them at least. But that's fine, because they fail small, so we can recover quickly, and it's cheaper.

And the object of this whole thing is making things effortless. So instead of worrying about the next deployment, will it work, will it screw up my weekend, my evening, will I have to fix things again? No, we're going towards a situation where we actually don't think about that anymore. We just make our change, we test it, and we have sufficient confidence, sufficient trust in our system and ourselves and our team, that we can deploy it to production and it won't break. Effortless.

All this is to reduce risk. Not eliminate it, because that's not possible, but reduce it to such a level that it is manageable. And I came across an interesting tweet the other day. On the left side of this picture is the risk and cost diagram with infrequent releases. What you see is basically inventory. If you don't release things to production and you keep building things, then you build up inventory, right? And once you release, you actually realize value for the customer, or that's what we hope.
While you are building up inventory, you are also building up risk, because you actually don't know whether the thing you're building is the thing that people need. You haven't tested that. It's still lying in your company, nobody's actually used it yet. So it's risk. Whereas on the right side of the picture, you see a deployment frequency that is far greater. What you see is that the amount of inventory, and therefore risk, that you build up is significantly smaller, because we release in small batches, we can learn quickly from those small batches, and if there is a failure, whether that's a technical failure or a business failure, we can easily compensate for it.

This rolls right into the continuous everything mantra. Rather than thinking about start dates and end dates and projects, we're thinking about products. Products that have a life cycle, that we improve day by day by day, and we improve them through a feedback loop that goes on and on and on. We plan, we code, we build, we test, we release, we deploy, we operate, we monitor, and along goes the cycle, and we never stop, and we do this in very small steps. Continuous everything. That's the basis of realizing value quickly for our customers.

And one thing: if you haven't read this book yet, then I absolutely suggest it to you, the continuous delivery bible, if you will, by Jez Humble and Dave Farley. Continuous delivery is all about delivering value to users safely and quickly in a sustainable way, and this is a key point. It's about a tempo that you can achieve and that you can sustain indefinitely. So it's not about crunch time, this needs to go into production right now. No, it's about a tempo that you can maintain indefinitely as a team.

Continuous delivery basically looks at things in a sequential way, deploying or delivering value to production through a number of stages. Stages that do testing or building or variants of those things. In an abstract, simplified way, continuous delivery starts on the left side of the picture with a developer checking in code to GitHub or GitLab. Then a build and test system starts running: it compiles the code, if you have a compiled language, or it builds assets or whatever, and then you run the tests. When all is okay, all the tests are green, we deploy automatically to a staging, testing or acceptance environment; the name doesn't really matter. And then at some point, after testing on acceptance, we decide as a team: this is okay, this is good enough to go to production, and then it's deployed into production. The red arrow indicates that there is human involvement, a human decision that actually says: okay, now we are good to go, we can promote this change, this build, to production.

Continuous delivery specifies that your code should always be in a releasable state. Regardless of the moment in time, you should always be able to deploy the thing that you are working on to production. And if that is true, and we've done the thing with the manual intervention often enough and learned from it, then we can go to this situation: continuous deployment. Where, after we deploy into acceptance, we don't have any manual checks. All our checks are automated, we verify on the acceptance environment whether the deployment there is okay, is successful, and then we immediately and automatically progress into production. So there's no human involvement in this pipeline anymore.
Everything flows from the left to the right of the picture in an automated fashion. All the checks that we do are automated checks. Why would we do this, you might ask? Well, research. For example, the research presented in Accelerate, another very interesting book that you should definitely read if you haven't yet. Their research says that, compared to low-performing teams, high-performing teams have a deployment frequency that is 46 times higher. Their mean time to recover is about 100 times faster, so that's the difference between days and minutes. Their change failure rate is one-fifth that of the low performers: where a low-performing team has five failed changes, a high-performing team has one. And what's especially interesting: the lead time for changes, which is basically how long it takes for a change to be developed by you, a developer, and actually be delivered into production and be used by a user. High-performing teams do that 440 times faster than low-performing teams. And they do all that by incorporating all, well, some of the suggestions that I'm about to give you.

So the idea, the goal, if you ask me, is that if you can deploy on a Friday at 5 PM, you can do it always. Go for the most difficult period that you can imagine, which in this case would be Friday, 5 PM, and go for that. And if you can do that, then you can do it any day of the week. You can do an emergency fix on a Sunday, and it will be the exact same effortless thing as it is on a Tuesday morning, right? Just look at Netflix. They don't have windows that they can't deploy in, because everybody's looking at videos all the time, worldwide. They don't have off hours. So if they cannot deploy during the release of their most popular series, a few years ago that was House of Cards, now I'm not sure, then when can they? So that must be our main effort, I think.

So, people say: okay, but our code is old. We cannot do all the things that you're saying. It's impossible, please go away. Our code is not under test, so we cannot have any confidence in our system. We can never say whether the thing we are changing is actually going to work, and we have an impossible-to-change system, and I don't like my job. And please go away. So, dealing with legacy. Now, legacy is in quotes because legacy is also associated with negativity, and that is understandable, and in some cases even justified. But please do remember that legacy systems earn money. I've consulted on many legacy systems that were actually raking in money, and yet people are like, this doesn't... So yes, there are problems with legacy, and I'll give you a few patterns to deal with that.

The first one is the strangler pattern, named after this type of vine, a strangler vine, and forgive me that I don't know the exact Latin name of the thing, but you can look it up on Google. It grows on trees, on the outside of the tree, until it has enveloped the entire tree, and the host tree can no longer survive and dies. Which is something we potentially want to do with a legacy system as well. Imagine we have a monolithic application that is very difficult, very much not under test, and it's connected to the internet, and it's connected to a database. So far so good.

What we do is insert a proxy between the monolith and the internet. Initially it doesn't do anything, it just forwards the traffic from the internet to the application and vice versa. But then we start adding a service, and I'm not saying a microservice or a mini-service or a monoservice, it's just a service, or it may be a module if you have a system like that. It may have its own database, it may not, but it's going to implement a small piece of functionality. And when that is done, we add a rule in the proxy that forwards the traffic for that particular piece of functionality not to the monolith, but to the service, right? And then we add more services and more services, and we carve out pieces of the monolith, and we make everything nice and loosely coupled, and the proxy gets more rules. At some point, our monolithic application is either not doing anything anymore, or it's reduced to the size of all the other services and has become a service of its own. That's the ideal end state.
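To make the proxy's job concrete, here is a minimal sketch of the routing decision. The URL prefixes and backend names are made up for illustration; in practice these rules often live in an nginx or HAProxy configuration rather than in application code.

```php
<?php
// Strangler-proxy routing sketch. Prefixes and backends are hypothetical.
function resolveBackend(string $requestUri): string
{
    // Functionality that has already been carved out of the monolith.
    $carvedOut = [
        '/search'  => 'http://search-service.internal',
        '/profile' => 'http://profile-service.internal',
    ];

    foreach ($carvedOut as $prefix => $backend) {
        if (strpos($requestUri, $prefix) === 0) {
            return $backend;   // this piece is handled by a new service
        }
    }

    return 'http://legacy-monolith.internal';   // everything else: monolith
}

echo resolveBackend('/search?q=strangler'), "\n";   // the new search service
echo resolveBackend('/checkout'), "\n";             // still the monolith
```

Every time you carve out another piece of functionality, you add a rule, and the monolith quietly shrinks.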
The strangler pattern is very much applicable to web systems, for example, because you can insert a proxy into the web traffic and forward it. But it also works if you do messaging: you have a message bus, and based on the message contents or a key in the message, you can route a message to module X or to the monolith. Very applicable in both cases.

Another pattern is branching by abstraction. Now, this doesn't have anything to do with branches in Git or Subversion; it's a functional branch, a logical branch. Let's assume we have a module of some sort in our legacy system, and there's a bunch of client code connected to it, and they all rely on the old module and its specification. It's not well tested, we don't really know its specification, and there are bugs. But all those clients are completely linked to it, so if we change the old module, those clients start falling apart, and that's not something we want. And we don't want to do it in a big bang; we don't want to change all the clients and the module together, because that's risky.

So what we do instead is this: we add an abstraction layer next to the old module. Think of the abstraction layer as an interface that is well defined. Or, if you have an ugly building, think of it as a facade that you hang in front of the building, right? All of a sudden you don't see the original building anymore, you only see this beautiful, shiny facade. What we then do is start moving those clients over to the abstraction layer. The abstraction layer is going to call the old module, so it abstracts that away, and the clients are going to talk to the facade, one by one by one, until they are all talking to the abstraction layer. And now you have a stable interface which can be tested, can be specified, all the nice things associated with modern code, and the old module is hidden away.

What we can then do is, hey, add a new module which implements the same abstraction layer. And then we can, preferably by a configuration change, switch those clients to the new module without the clients knowing that they're actually talking to something different, right? And that new module is, of course, built with TDD and everything, it's beautiful, it's well-designed. All of a sudden you've carved out a piece of the code that was impossible to maintain, and it's replaced by something that is well maintained. And then we can get rid of the old module. Great. This is a very powerful pattern that unfortunately is not used enough. It allows us to do things in small steps, which we can test and deploy to our heart's content, instead of going for a big bang change.
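As a small sketch of what that could look like, with hypothetical names: the interface is the abstraction layer, an adapter hides the old module behind it, and a configuration value decides which implementation the clients get.

```php
<?php
// Branch by abstraction, sketched with hypothetical names.

// Stand-in for the old module's entry point, so this sketch runs.
function legacy_do_search(string $query): array
{
    return ['doc-1', 'doc-2'];
}

// The abstraction layer: a stable, well-specified interface (the facade).
interface SearchEngine
{
    /** @return string[] matching document ids */
    public function search(string $query): array;
}

// Hides the old, poorly tested module behind the facade.
final class LegacySearchAdapter implements SearchEngine
{
    public function search(string $query): array
    {
        return legacy_do_search($query);   // delegate, warts and all
    }
}

// The new, well-tested implementation of the same interface.
final class ElasticsearchEngine implements SearchEngine
{
    public function search(string $query): array
    {
        // ...talk to Elasticsearch here...
        return ['doc-1'];
    }
}

// A configuration change, not a code change, flips the clients over.
function searchEngine(bool $useNewModule): SearchEngine
{
    return $useNewModule ? new ElasticsearchEngine() : new LegacySearchAdapter();
}

$results = searchEngine(false)->search('deploy on friday');   // old module
```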
So, on branches. While we're talking about branches, and now I mean Git branches, version control branches: who here has ever had a merge conflict? Please don't lie. Okay, who here has ever had a merge conflict that took more than a day to fix? Thank you for your honesty. This is something I caught at a client last year, and this one was relatively benign. It occurred more often than I wanted. This can happen, but what is really at play here?

Continuous integration. That is a term coined by, amongst others, Martin Fowler, way back in the day. And there's one incredibly important line in the idea behind continuous integration: integrate to master, or to mainline, or to baseline, at least daily. To, not from. There's an important distinction. The idea of continuous integration, as it's stated, is that as a team member, it is important that you integrate your work with others, so that it can be tested and verified, and also so that any potential merge conflicts are as rare and as small as possible. Which requires you to integrate your work with others, at least daily, to mainline, not from. Pulling from or rebasing from master is not the same as integrating your work to it. So what you essentially saw in the picture is delaying integration, delaying the integration of your code with other code.

And okay, I know there are people that say: let's use branches to decide when we want to merge a feature and when we want to put a feature live, like a timer, like a schedule. But there are better ways to do that. Because if we endeavor to decouple the deployment from the release, then the deployment becomes a purely technical exercise. We deploy to production, and that is a technical thing. And the release, which is actually allowing customers to use a feature, becomes a business decision, or rather a joint decision between the team and other stakeholders.

How can you do that? Well, with a thing called feature toggles, or feature switches, or feature flags: all different names for the same thing. What they basically mean is that we put sort of an if statement around a piece of functionality, and depending on a condition, the functionality is either dormant or active. And we can add a dashboard for that, for example. So basically you would have a feature that is, based on a flag or a toggle, switched on or off, available or not available to groups of consumers. And this can get very complicated. You can activate feature flags based on region, based on preferences, based on other things.
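As a minimal sketch of that if statement, assuming a flag store that maps feature names to rollout percentages (real projects often use a library or a dashboard-backed service for this):

```php
<?php
// Minimal feature toggle sketch. Flag names, storage and the rollout
// scheme are assumptions for illustration.
function isFeatureEnabled(string $feature, string $userId, array $flags): bool
{
    $percentage = $flags[$feature] ?? 0;   // unknown flags are off

    // Hash user + feature so every user gets a stable yes/no answer
    // instead of flickering between old and new on each request.
    $bucket = crc32($userId . ':' . $feature) % 100;

    return $bucket < $percentage;
}

$flags = ['new_search_page' => 10];        // dormant at 0, fully live at 100

if (isFeatureEnabled('new_search_page', 'user-42', $flags)) {
    echo "render the new search page\n";
} else {
    echo "render the old search page\n";
}
```

The deployment put the dormant code in production long ago; flipping the percentage is the release.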
But what it boils down to is basically this. This is one of my client projects from a few years ago: an old version of a search page on the left, and a new version on the right. What you see happening there is a complete re-architecting of the entire search functionality, going from Solr to Elasticsearch, a new design and a whole bunch of other things. This is not done in a day. It took about a month and a half, and if we hadn't done that with a feature toggle, it would have meant building it on a feature branch for a month and a half, with all the associated merge conflicts and issues. Or we would have blocked deployments for a month and a half. Both very suboptimal. So what we did instead was develop the new page in a number of small commits behind a feature flag. Then, once we thought it was okay enough to release to the company, we enabled the feature flag for people inside the company network, based on IP address. We got some feedback, and then we enabled the feature flag for 10% of live traffic. We saw that the metrics were actually getting better, so we increased it to 50% and then to 100%. That's what feature toggles allow you to do.

There is a real thing, though, called feature toggle debt. If you have too many long-lived feature toggles in your system, they can lead to a combinatorial explosion of possible paths through your code. So if something reaches 100% and you don't need the feature toggle anymore, then please go ahead and remove it.

So let's get back to branching. And I've stolen the next few slides from somebody who talks about this more often as well. Successful branching strategies: don't. And the second: please also don't. And third: don't branch. When I say this, you're laughing a little bit, but I've also seen this happen, the pitchfork reaction. What I mean by this is trunk-based development, on master or mainline or whatever it's called in your version control system of choice. This is not new. For those that are old enough to remember extreme programming, which by now is 18 or 19 years old, I think: trunk-based development was introduced as part of XP. What it basically means is that you have a mainline of development, master, or trunk if you're still on Subversion, and that is where everything goes. We don't use feature branches. Depending on the context we may use a release branch, but we don't use feature branches, to get into the continuous integration mantra of integrating with each other at least daily.

The question then becomes: okay, what about short-lived feature branches? My question then would be: what is short-lived? What is short-lived to you? Who thinks short-lived is three hours, four hours? Okay. A day? More than a day? Okay, you're violating continuous integration, sorry. A friend of mine, and this is not something I thought of myself, I wish I had, called it discontinuous integration, or continuous disintegration. So if you don't want to violate continuous integration, a short-lived feature branch can only be alive for less than a day, and probably a lot less than a day, because you also need to factor in time for the merge, et cetera. So three, four hours maybe, I don't know. But if you're at three or four hours, why would you use a feature branch anyway? It barely exists.

However, pull requests are in essence branches. Who here uses pull requests? Okay, that's almost the entire audience. Using them for code reviews, I assume, right? Code reviews are not un-useful. They are very useful. If you don't have a code review system in place and you move to pull requests to do code reviews, that is way better than not having code reviews.
But there is an even better way of doing code reviews, because code reviews in the pull request fashion introduce flow delay. What I mean by that is that they're asynchronous in nature, right? I develop some code, I create a pull request, and then another person, another developer, has to look at it. They have to verify my change, and they may see some issues with it. They need to get back to me. And all the while this is happening, I have a bunch of context going on. I'm working on a thing, I'm finishing the thing, submitting the pull request. I pull somebody else out of their context to look at my pull request. Then their suggested changes come back to me, and I need to switch to another context. So there's a lot of delay involved. The only time this really works is if you have a sort of SLA agreed within your team, like: if a pull request comes in, it needs to be looked at by somebody within 15 minutes. I don't know, something like that. It's also been observed that with pull requests that are large, and large depends on the language, but typically 50 to 100 lines seems to be the switchover point, people start scanning and stop actually reading. So: yeah, looks good. Merge it. Whereas if it's 10 lines, people will actually look at it line by line and will actually find things.

So there's a better way, I think, and that is pair programming. Okay, maybe not like this, but pair programming is a very useful way of collaborating on code, well, not just on code, but on software development. Because it allows you to do a continuous and inline code review. You're working together. You're talking about code. You're talking about your system. You're talking about everything, and you're doing it continuously and inline. There's no asynchronicity, there's no context switching, because you're both in the same context. There's knowledge sharing, all those beautiful things. It does tire people out, and not everybody is comfortable with pair programming, but it can be taught. Maybe not for eight hours straight, but it is a very useful tool in the box.

Mob programming is another thing that's starting to gain popularity, where basically we have a whole team and one system, one computer. Somebody's doing the typing and the rest of the team is yelling at them. No, that's not what happens, or at least not what you want to happen, I would say. The entire team collaborates on a feature, on a thing, on a fix. Sometimes they break away to a whiteboard and collaborate there, and there's all this energy, and people rotate in and out because they have a meeting or they need to bring their kids to school or something. But the entire team is focused and involved, and the quality is markedly better than doing things on your own, for both pair programming and mob programming.

Now, let's take a little bit of a look at pipelines. This is what continuous delivery is all about: a pipeline to deliver value from the left to the right, the right meaning the user, and the left meaning us. Pipelines should be automated as much as possible. If it's dirty, dull or dangerous, you need to automate it. Luckily, we're typically not in the dirty or dangerous business, but the dull business, well, I've been there, I don't know about you, but I have been. So I tend to automate whatever I can, to allow time for the things that actually matter, the things that really require human insight and decisions.
Taking a look at a sample pipeline: it starts from the left by checking out code, compiling the code, testing, making a package of it, you know, some artifact, and then deploying that to acceptance, verifying it automatically, deploying it to production and, again, automatically verifying it over there.

An important part of a pipeline, any pipeline, is testing. And like everything in the continuous delivery sphere, we do it continuously. We continuously test our things. We can do that, for example, in this way. If you look at the testing pyramid, we have a number of potential testing layers here. Depending on your product, you may not use all of them, but I sincerely hope you have unit tests. Unit tests are cheap and fast; that's why they're at the bottom of the pyramid. Integration tests, which actually link together our components and test them, are slower and more costly to maintain, so there are fewer of them. Then we have acceptance tests, your Behat or phpspec, for example, which, again, are slower and more costly to maintain. And at the top of the pyramid are end-to-end tests: Selenium, click-through things, stuff like that, which are often brittle and expensive to run, because you need a browser and the whole stack to be deployed. So we have only a few of them, not too many. Some companies have the pyramid turned upside down, so they have a whole bunch of end-to-end tests and maybe one or two unit tests. But the unit tests are the cheapest and the fastest to run. So it's important, if you can get away with it, to push as much of your testing down the pyramid, where it is cheaper to maintain and faster to run.

But also, don't forget exploratory testing and user feedback, essential parts of any system. Those can function as your early warning, your tripwire. Depending on the project or product you're working on, there may be a group of people that are very interested in a beta program, for example: getting to see features that are half finished and giving some feedback on them. Those people are your champions, so do involve them, because they will actively let you know when you've missed something. And on the right is monitoring and alerting. No testing system is complete; you cannot test for everything, ever. So use your monitoring and alerting to alert you to conditions that are abnormal. If your error rate starts to go up an hour after your deployment, then you should alert on that. That is part of your testing pyramid as well.
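To show why the bottom of the pyramid is so cheap, here's a self-contained sketch of a plain unit test, using PHPUnit and a hypothetical DiscountCalculator class:

```php
<?php
// The bottom of the pyramid: a plain unit test. DiscountCalculator is
// a hypothetical class, included here so the example is self-contained.
use PHPUnit\Framework\TestCase;

final class DiscountCalculator
{
    public function apply(int $amountInCents, int $percentOff): int
    {
        return $amountInCents - intdiv($amountInCents * $percentOff, 100);
    }
}

final class DiscountCalculatorTest extends TestCase
{
    public function testTenPercentOffOneHundredEuros(): void
    {
        $calculator = new DiscountCalculator();

        // No database, no browser, no deployed stack: this runs in
        // milliseconds, which is why we can afford to have many of them.
        $this->assertSame(9000, $calculator->apply(10000, 10));
    }
}
```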
But if tests are good enough, then they are good enough, because, again, we cannot test everything. Nothing is ever watertight, and no test suite detects every issue. This is why pipeline speed is key, essentially. We're looking for something that delivers our value into production fast enough, and that gives us feedback fast enough. If we have a test suite that takes three hours to run, and I made a commit and after three hours it breaks, then the only human response to that is: we're going to bypass the test suite, because it doesn't work, it's too slow. And I now have five commits lined up that may or may not have fixed the test that I just saw break, and maybe another test will fail. So keep it as fast as possible, preferably within minutes. My personal goal is a pipeline that takes no more than 15 minutes from the check-in to GitHub to the deploy to production. That gives you enough of a window and enough feedback to react.

Let's take a little bit of a look at deployments before I send you all away. The deployment strategy I use a lot is the rolling update. I was looking for interesting pictures to associate with the rolling update. I think this is the Gloucester cheese roll or something like that, where people actually start running after a rolling block of cheese, which is thrown down a hill. And yes, there are injuries, I read. You Brits are crazy. Anyway, the rolling update.

Let's assume we have the internet, and there's a load balancer between the internet and our system. Connected to the load balancer is our application, or one of our services, and we have multiple instances of it, because we want to be highly available. Let's assume that we do, so we have three of those. What we do with the rolling update is spin up version 1.1 of our service. It's not added to the load balancer yet, but we spin it up, and we wait for it to check out, do some automated checks on it, et cetera. Once that is all okay, we add it to the load balancer by swapping it with one of the existing instances. It's now in a position to actually take traffic, and it starts working. None of the users connected to your load balancer are, if this all goes well, aware that this is happening. They only see the system, which is still live, still processing requests and still available. Then we do that for the next instance of the 1.0 service, and then for the third, at which point we've replaced all the instances of the existing service with the new version, and we can throw the last old one away. Nobody ever noticed this was happening. Maybe some feature got added, but everything worked as expected. So: zero-downtime deployments, which is a key thing if you do a number of deployments per day, for example. You cannot go down for every deployment; that would be a waste.
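Spelled out as code, the sequence looks roughly like this. The helper functions are hypothetical stand-ins for your load balancer and orchestration APIs; in practice a platform like Kubernetes does this dance for you.

```php
<?php
// Rolling update sketch. The helpers are stand-ins for a real load
// balancer / orchestration API and just print what they would do.

function startInstance(string $version): string
{
    echo "starting an instance running version $version\n";
    return uniqid("app-$version-");
}

function waitUntilHealthy(string $instance): void
{
    echo "running automated checks against $instance\n";
}

function addToLoadBalancer(string $instance): void
{
    echo "$instance is now taking live traffic\n";
}

function drainAndStop(string $instance): void
{
    echo "draining and stopping $instance\n";
}

function rollingUpdate(array $oldInstances, string $newVersion): void
{
    foreach ($oldInstances as $old) {
        $new = startInstance($newVersion);   // not in the load balancer yet
        waitUntilHealthy($new);              // verify before it sees users
        addToLoadBalancer($new);             // swap the new instance in...
        drainAndStop($old);                  // ...and the old one out
    }
}

// Replace three 1.0 instances with 1.1, one at a time, with zero downtime.
rollingUpdate(['app-1.0-a', 'app-1.0-b', 'app-1.0-c'], '1.1');
```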
So taking all those things together, we come to the pipeline, and ideally we write that down as code as well, just like the testing code and the production code that we have. You have your Jenkins, you have your GitLab, which was presented earlier today. Both do roughly the same thing, even though there are nuances and people may like one more than the other. But our pipeline is written as code, written as stages that each do something: if a stage passes, we proceed to the next stage, and if a stage fails, we get feedback. A simple pipeline, this one is in Jenkins, from a client project last year, could look like this: a flow from left to right, actually delivering things into production. This pipeline took 20 minutes, and I think by now they've improved that to 10, 12 minutes from the start of the pipeline to the deploy into production.

Now, if the pipeline breaks, if something fails, then you want immediate feedback, and there's a whole bunch of systems for that. I've tried the USB-powered rocket launcher once, which is cool; it shoots Nerf darts. We've also tried the LED siren, which basically honks at you and starts flashing whenever the build breaks. Whatever works for your team, whether it's a Slack update, a lava lamp that lights up, or a big monitor that shows the status, as long as you are alerted as a team to deal with the situation. Because the pipeline is what deploys your software to production, so if it fails, well, then you obviously need to deal with that and fix it.

So, this was a talk about deploying on a Friday, given on a Friday, and by now I think you can do it too. Thank you so much.