So my talk is going to be on continuous deployment, even though today it's named continuous delivery. Whoops, they invited me to the wrong conference. My name is Timothy Fitz. I coined continuous deployment. I was looking it up, and it was a blog post in 2007 where I first used the term continuous deployment. I did so while I was a technical lead at IMVU. We were a startup, and we did continuous deployment. I coined the term, but I didn't invent the practice, I didn't start it. There were other companies doing it before IMVU; I've seen references to Flickr doing it in early 2004, late 2003. There were a lot of trends leading to it. I just sort of pointed at it: hey, continuous integration can go one step further. That's in fact where the term continuous deployment came from. Why don't you integrate and then deploy? After IMVU I was the CTO of Canvas, where I actually got to build this from scratch at a company with people who knew nothing about it, and teach them. And in fact, one of our venture capitalists, Fred Wilson, had already seen continuous deployment at Etsy before he joined our board, which I never would have imagined. And then for the last five years, I've been a software consultant. About half of my time, I write code, because I'm the kind of person who, if I don't write code, I go crazy. The other half, I teach and evangelize and do investments and workshops and coaching. You can find me at timothyfitz.com or @timothyfitz on Twitter. These slides are already available online if you go over to my Twitter.

So really quickly, I'm going to cover what continuous deployment is, and what continuous delivery is. That's the basic part of this talk. Then I'm going to talk about the pitfalls on the way to continuous deployment: the lessons I've learned from organizations, where they get stuck, where they have problems, things I don't like that the industry is doing right now. Lots of very opinionated, ranty-style things. That's a little bit more intermediate. And then finally, I want to end on the future of continuous deployment: where I think the industry is headed, what I think we're going to see develop, where some of the opportunities are. That's sort of what keeps me up at night, and it's a little bit more advanced.

So let's start with the definition of continuous delivery. The goal is to produce software in short cycles, ensuring that the software can be reliably released at any time. This is pulled from Wikipedia. If you look at, for instance, the Martin Fowler page on continuous delivery, it's a much longer explanation, but I'm really highlighting the difference between continuous delivery and continuous deployment here: it can be reliably released. So I would say this is deployable at the push of a button. That's the difference between continuous delivery and continuous deployment. I see Jed nodding, so okay, I got it right. We're good. Here, I use the terms deploy and release very specifically. In the classic boxed model of software, they were the same thing: you would ship a thing, deploy it, release it, all the same thing. But very quickly you're going to find that deployment is something very specific. It's copying the code out to servers or to client devices. It's running that code. And it's not necessarily releasing new features. Release is when a feature is turned on for a user, when they first see it. For many things, deploy and release are the same, even in continuous deployment.
But for bigger features, you're obviously going to want to do something different. So what is continuous deployment? My definition is: safe, automatic deployment of frequent, small commits. If you have those things, you're doing continuous deployment.

So, safe. The automated process is ultimately responsible for failures. This means that you are not personally going to get blamed. You're not going to sit in a postmortem and have someone say, you're fired for screwing up and shipping that bug. No, no, no. It is the process that actually guarantees the safety of this. This isn't just a tool or a technology. It's a way of looking at the people in your company. When you look at safety as an industry, as a set of principles, there's a whole bunch of practices that our industry just doesn't do. You'll hear a programmer say, I got burned by that. That hurt when I did that. That failure cost me. And in the safety industry, you'd say, oh, you got burned by that? We should probably have protected you from that. We should probably ensure the safety of your day-to-day work. And in the software industry, we go, oh, now just do more of that, and faster. Don't worry about it, get burnt more. No. Continuous deployment is about safety, first and foremost. We're seeing it time and time again. We're seeing it in statistics. We're seeing it in studies. When people feel safe, and it is a feeling first and foremost, when people feel safe, they do better work. And continuous deployment is about making that happen.

It's also about automatic deployment, not just automated deployment. Automatic is what happens when I commit code and, without my having to think about it, it gets deployed. Usually that means I commit, tests run, they pass, and it gets deployed instantly. But I've seen this work just as well where, for environmental factors, the deployment couldn't happen more often than a certain amount of time. Every two hours, every four hours. Maybe the deploy takes an hour or so, maybe it's slow, maybe there's some bureaucratic process. But getting to automatic here is really, really important, because it means there's no human process. There's no human chance for failure. There's no decision-making process in when and what to release, and so those decisions can't be made wrong.

And frequent, small commits. The smaller your unit of work, the cheaper it is to build out a good deploy pipeline and the fewer failures you're going to see. If I commit 10 or 20 or 30 lines of code and they go wrong in production, I can just revert them. I don't have to think about it. If I'm landing two weeks' worth of changes and something goes wrong in production, now I have a problem. I can't revert. There are schema changes applied. I don't know what's going wrong. I don't know which line it is. Two days later, when I finally debug it, what do I do? Revert two weeks plus two days' worth of changes? The smaller these changes are, the better for everything. It doesn't just make things simpler to debug. It decreases the latency. I have an idea. I build a small thing. I deploy it. I see that it fails in production. I change the way I was going to build the rest of the feature. Really decreasing that latency and getting that feedback cycle to go very quickly is super, super important. It's a big part of continuous deployment. I'm going through this pretty fast because I've said this many, many times in other forums.
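To make "automatic" concrete, here's a minimal sketch of that commit-to-deploy flow. Everything here is a hypothetical stand-in, not any particular tool's API; the point is only that once the tests pass, nothing waits on a human.

```python
# A minimal sketch of "automatic": once tests pass, nothing waits on a human.
# All of these helpers are hypothetical stand-ins, not any real tool's API.

def build_artifact(commit_id: str) -> str:
    return f"build-{commit_id}"            # stand-in for a real build step

def run_tests(artifact: str) -> bool:
    return True                            # stand-in for the real test suite

def deploy(artifact: str) -> None:
    print(f"deploying {artifact}")         # stand-in for the real deploy

def notify(commit_id: str, message: str) -> None:
    print(f"{commit_id}: {message}")

def on_commit(commit_id: str) -> None:
    artifact = build_artifact(commit_id)
    if not run_tests(artifact):
        notify(commit_id, "tests failed; nothing was deployed")
        return
    # Continuous delivery (push button) stops here and waits for a human.
    # Continuous deployment just keeps going:
    deploy(artifact)
    notify(commit_id, "deployed to production")

if __name__ == "__main__":
    on_commit("abc123")
```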
So if you have questions, tweet them at me, and I'll try to send you links to what I've already said in public. So, really high level: continuous delivery is push button, continuous deployment is automatic. They have roughly the same structure otherwise. In fact, the best practices of continuous delivery are almost exactly the best practices of continuous deployment. The only difference is whether you have to push a button between tests and deployment.

I also want to share a little secret. The origin place of continuous deployment, where I learned it and scaled it up and got to understand how it works, was actually doing continuous delivery. Tests ran on every commit, and then you had to shell into a machine and type "I am going to push." We did this because people would commit code and then go get coffee, and then the deploy would fail and have problems, and they weren't around and you couldn't ask them to fix it, and they didn't notice and they didn't roll back. It was problematic. So we didn't have the institutional confidence to actually do continuous deployment. But in practice, during business hours, we were committing 6, 8, 10 times an hour easily, and that command was being run so frequently that if I didn't run it, someone else would. So it felt very much like the continuous deployment that I've been evangelizing and that has worked everywhere else since. So let's just keep this a really, really tight, well-kept secret between us. Sound good? Good.

Also, I'm not going to say delivery and deployment anymore, because they sound almost exactly the same. I'm going to try to say push button and automatic. Hopefully that will stick a little better. Both require a fully automated deploy, and that's expensive to build. Both require significant test coverage, and that's expensive to build. You don't necessarily need great test coverage to start, but you're going to get there over time; you're going to be spending a large fraction of your development effort writing automated tests. And both require deploy pipeline infrastructure. Between the three of those, that's a lot of work. That's most of what people see when they say, okay, continuous deployment or continuous delivery is too expensive for me, I can't afford to do it right now, I can't get management buy-in to build this. So when you talk about either of these concepts, you're talking about mostly the same amount of investment. And that's going to play into why I think everyone should stop targeting continuous delivery and instead always target continuous deployment.

So one of the first problems I see with push-button deployment is that it grows a human QA step before deployment. If it doesn't start with one, it happens very quickly. Because it's so easy: there's already a person there pushing a button, why don't they test the app? Why don't they just check one core behavior? Why don't they check three or four behaviors? And this sets up a really bad feedback cycle. Oh, hey, there was a bug in production. We shipped a change to one of our core behaviors and we didn't test it before we hit that button. So we run a postmortem. We have a meeting. We look around and say, how could we have caught that? Oh, I could just try that. I could just try our application every time before I push the button. That's not that hard. And so we add more human QA steps. And here's the problem. That's pushing us further and further away from our ideal of low latency. That's pushing us further and further away from our ideal of automating these things.
And inevitably it leads to more bugs. I've added a human factor into our process that wasn't there before, and I'm relying on it to catch serious issues. And humans, I don't know if you know this about humans, but they make mistakes. It turns out that a developer at two in the morning trying to push out a hotfix makes all of the mistakes. Every single one you could possibly make. And so you see that more bugs happen as you slow this process down or as you add more human QA, and then you have reduced confidence in your deploy. And this is what really kills you: more bugs lead to reduced confidence, which leads to less frequent deploys. And this is, by far, out of every single mistake that I've seen made at numerous companies — and I've consulted for two, three, four person startups, and I've consulted for Fortune 500 global conglomerates — the same pattern of mistakes I see everywhere. More bugs leading to reduced confidence leading to less frequent deploys. Pushing you further and further away from the ideals of continuous delivery and giving you more and more of the problems of traditional software.

Now, one of the really non-intuitive things about continuous delivery or deployment is that it's much harder to ship a weekly release than it is to ship ten releases a day. Because on a weekly release, if there's a bug, I have to look at a week's worth of changes to figure out what's wrong. And odds are something's wrong; it's been a whole week. Versus: I just committed and it instantly went live. Very easy to see what caused it. Very easy to do the accounting. Very cheap to do the debugging. And so the slower process tends to get slower, and the automatic, fast process tends to stay fast. And that's really the crux. If you have this feedback cycle, the feedback cycle that says, okay, we had a bug and we're going to automate the solution to it, we're only going to consider automated solutions to finding that bug in the future, to preventing that class of failure — that actually leads to increased confidence in the system, and the ability to do more deployments. Kind of counterintuitively, if you have a broken process, I can give you continuous deployment. Maybe it's just to staging, and as long as you agree with me to only add automated things to that process, you will get better over time instead of worse over time. I don't need to do anything else. Usually when I'm brought into a team and they say, okay, how do we get to continuous deployment? I'm like, I'm going to talk to you for two hours and then we're going to sit down and start building it. Because the faster I set up this feedback cycle, the less work I have to do. I don't have to tell you how to automate a solution to finding a class of bugs. You're good programmers, you're good developers, you'll figure that out. That's how IMVU got to where it is. That's how everybody else got there. When you see those presentations about fantastic test platforms and amazing edge-case testing and all of these advanced things, they got there because three or four years ago they found a bug and asked, how do we find that automatically? I need to build a tool to solve that. It's easy when you have a concrete problem.

The other problem with push button is that deploys often become feature-based, and conflating those two is very problematic.
So what happens if, in a continuous delivery, push-button system, I want to release a new feature, so I commit the final lines of that feature onto master, but then I need to wait for the marketing communications side of things, and that takes about a day? What happens if there's an emergency in the middle of that day? How do I deploy? Well, it turns out I'm accidentally unreleasable. Or we go live with the feature early, we rush it out. I've seen that happen. It doesn't play out very well. Especially when you're trying to release a new feature and fix a bug at the same time — probably a critical bug, because otherwise you would just have a code freeze. And so this leads to these hidden unreleasable moments. With an automatic process, you can't do that. As soon as you commit it, it's going to go live. So you can't commit things that aren't ready. Automatic forces you to use feature flippers.

Now, if you're not familiar with feature flippers, one can be as simple as an if statement; a minimal sketch follows below. What you're doing is saying that release is now controlled by software. It can be whatever we want. We have a 19-step feature release process where every person in the company gets to individually sign off and turn the key? That's fine. You can automate that. I don't care. You get to pick. But what it does is turn release into software, and it means we can deploy any time we want, and we know which features are going to go live, because they're hidden behind these feature flippers.

But can't you just use feature flippers with push-button deploys? Isn't the feature flipper one of the gold-standard practices of continuous delivery? Yeah, it totally is. But here's what I find happens. Without this feedback cycle, without this pull, you have a top-down approach to feature flippers. I see this pattern with all of the best practices of continuous delivery. Someone says, hey, feature flippers are great, they would solve this problem, let's make sure we use them every time we need to. And then you have a bunch of developers who are excited and trying to do their best effort, and you have a couple of lagging developers who just don't want to do it, and resist it. Oh, that's stupid, I don't want to write those if statements. And there is a lot of resistance to what I call scaffolding code — code that you write while you're delivering a feature, and once you've released it, you rip it out. Because it makes your code look worse, and without that scaffolding code it would look better. So there's definitely developer pushback, and you have to fight that battle. Versus with an automatic deploy, a developer sits down to start working on a feature that's not ready yet, and they go, oh, crap, I can't do this. I can't do this or I will push bugs into production. And so now the developer is asking you, what's the solution to this problem? And you can point them at feature flippers. This is a much, much, much better model.
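Here's the kind of minimal feature flipper I mean. This is a sketch, assuming a hypothetical in-memory flag store and made-up flag and function names; a real system would back the lookup with a database, a percentage rollout, or whatever release process you want to encode.

```python
# A minimal sketch of a feature flipper, assuming a hypothetical in-memory
# flag store. The point: release is controlled by this lookup (software),
# not by the act of deploying the code.
FLAGS = {"new_checkout_flow": False}   # flip to True when you want to release

def is_enabled(flag_name: str, user_id: int) -> bool:
    # Could be a dict lookup, a database row, a percentage rollout, or a
    # 19-step sign-off process -- it's software, so it can be whatever you want.
    return FLAGS.get(flag_name, False)

def checkout_page(user_id: int) -> str:
    if is_enabled("new_checkout_flow", user_id):
        return "new checkout flow"   # deployed AND released
    return "old checkout flow"       # deployed, but still hidden from users

if __name__ == "__main__":
    print(checkout_page(user_id=42))
```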
This is something I think about a lot when designing processes and matching them to organizations. I think about it in terms of push and pull. A methodology is pushed if adoption is optional and driven by evangelism. It's pulled if adoption is mandatory and driven by necessity. I was into continuous integration very, very early. I worked on Buildbot, one of the first open source CI servers. I didn't do a whole lot with it; I was in high school. But it was very interesting to me. And so when I finally got into the industry and started working, I was like, okay, I'm going to show everyone how to do continuous integration. I was going to write tests. It was going to be great. Now we're going to have a CI server and it's going to run. What did I find? About half of the people would adopt it, and they would have a relatively positive experience. And the other half would hate it and never adopt it. And you cannot do continuous integration if it's half adopted. And I found this story repeated everywhere. You would have test practitioners who loved tests, and people thought it was a personal preference. Oh, you're just a person who loves tests. You'll write them, I won't write them. Oops, I broke your code. Repeatedly. My bad. Tons and tons of stress. If you're trying to drive adoption of something, find a way to have the organization pull it instead of pushing it in. Adoption will go much, much faster.

Switching gears a little bit to the next pitfall I see in continuous deployment: service-oriented architecture. IMVU was a little service-oriented, but it was basically a monolithic codebase that got deployed to multiple targets. It wasn't what you think of as a modern service-oriented architecture, and so we didn't have a lot of these problems. This is something I got to learn about more as I built Canvas and as a consultant elsewhere. In a service-oriented architecture, you often have parallel deploy pipelines. So you have a commit that moves from commit to build to test, and then we have the little push button, and we push that button, and now it's all done. Fine. Everything's great. We can deploy a few things in parallel. This looks really nice, except here's what actually happens: I need to build a feature and it needs to touch both the back end and the front end. So I commit both of those changes. They start walking through their respective deploy pipelines, and then they sit there. Now, probably I'm on the front-end team or I'm on the back-end team, and there's another team that owns pushing the button for the other service. Maybe I can talk to a couple of stakeholders. Maybe I don't even know who can push the button. And what I've seen happen — usually this happens once or twice and people figure it out — is that the buttons get pushed in the wrong order, the front-end change goes live first, and now we're broken. And so service-oriented architecture presents an interesting wrinkle, a complication, for continuous delivery and continuous deployment.

So what do you actually have to do? Well, you have to commit the back-end change first. You have to watch that go out. Then you have to wait while you figure out who can push the button, and wait for them to push it — maybe we waited two days, whatever our delivery cadence is — and then it finally goes live. Now, and only now, can I actually commit the front-end change, which I've been holding off on a branch or locally on my laptop. Then that can finally go out, maybe after another waiting period before someone pushes the button, and then we finally have it. This gets slow and complicated and frustrating, because I can also turn around, push the button on my front-end service before your back-end service, and accidentally ship your commit too early. And the answer generally tends to be, let's add more human processes to fix this.
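To make that ordering problem concrete, here's a minimal sketch of the two-service case. The profile endpoint and field names are made up for illustration; the point is that the back-end change (step 1) has to be live everywhere before the front-end change (step 2) ships, otherwise the front end asks for something that doesn't exist yet.

```python
# A minimal sketch of the two-service ordering problem, with a hypothetical
# profile endpoint and field names.

# Step 1: the back end starts returning the new field. Old front-end code
# simply ignores it, so this is safe to deploy on its own.
def profile_api(user: dict) -> dict:
    return {
        "name": user["name"],
        "display_name": user.get("display_name", user["name"]),  # new field
    }

# Step 2: only after step 1 is deployed everywhere does the front end
# start relying on the new field. Deploying step 2 first breaks production.
def render_profile(api_response: dict) -> str:
    return f"Hello, {api_response['display_name']}"

if __name__ == "__main__":
    print(render_profile(profile_api({"name": "Ada"})))
```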
But that was way too simple an example. No one has a service-oriented architecture with two services. In the real world, it's like 5 or 15 or 500 services. They just seem to grow. Plus you've got databases. A database is basically another service, another deploy pipeline: if you push schema changes out, they have to go out first. And so you end up in a world that looks a lot more like this. And this is a world that makes me sad. I don't hate service-oriented architecture. As an application designer, as an architecture person, it's great; it has a lot of advantages. But the people who think about service-oriented architecture are often not the same people thinking about the deploy pipeline. So the naive way of doing service-oriented architecture ends up with this: a whole bunch of deploy pipelines. Now, automatic deploy makes this nicer, because I can commit to the back end and probably it'll be deployed before I actually start working on the front end, and I can sort of work at my own pace. That makes things a lot nicer. So if you're going to do service-oriented architecture, I highly, highly recommend you do it with continuous, automatic deploys.

But now you end up with this other problem, which is that you've got five deploy pipelines, and that's really expensive. How do you actually maintain great automated deploys, with production monitoring and automatic rollback and all those great automatic deploy practices? It's too expensive. I've had startups come to me and say, well, we've got four engineers and a dozen services and we're going to do continuous deployment. Well, hire eight engineers or get rid of eight services. There's no way you can afford to run twelve continuous deployment pipelines with four people. That's nuts. And inevitably they go, okay, I guess continuous deployment is off the table.

So here are your options. One is templating. You say, okay, here is how we do our deploy pipeline, and everything is going to roughly fall into this template. Sometimes this means standardizing on a language or a framework. The reason that developers themselves often want service-oriented architecture is so that they get to pick their own personal favorite framework or language or toolchain. And that's really expensive. That's an antisocial behavior that says, I want to choose my things at the expense of everyone else, who will eventually have to learn all of the different tools and technologies in our application. That doesn't mean you can't have more than one language. What it means is you need to pick them considering social factors, talking to multiple people on the team, and understanding that there's an extra cost at the deploy pipeline stage.

My favorite solution is a unified deploy pipeline. As much as possible, figure out which of your services are close to each other, which talk to each other frequently, and unify them, so that you have a single build pipeline that eventually deploys to multiple services. This means you can share code across those services very easily, because they're all building out of a single repository. It also means that as a developer I'm not afraid of touching back-end services, even if I've only been working on one part of the system, because I've watched the whole pipeline. I know how the system works. I've watched it happen, I've done these deploys. This is what we built out at Canvas, and when we hired a new engineer, after a few months they had made a change to one of the back-end services and they didn't even know it was a different service. They weren't even aware. It worked so seamlessly that they weren't aware of the complexity being managed. The next thing I want to talk about is another bad habit, another bad pattern I see.
Staging environments. In a nutshell, staging environments are a sign of fear of your deploy. They say, okay, our process is not yet good enough to just push and deploy straight to production. And they have the same problem: staging environments are shared mutable state. Shared mutable state is terrible, and this is shared mutable state across multiple teams and multiple deploy pipelines. I'll go into a larger organization and there will be four or five teams, and they'll tell me, oh yeah, we're doing continuous delivery. Oh yeah, we're doing continuous deployment to staging. That is not continuous delivery or continuous deployment. And what's worse is, because staging has no real users actually touching it, we don't even know if it works right now. Maybe there are 10 or 20 developers using it, maybe there are 50 customers who have asked to be alpha testers. But at the end of the day, staging can be broken overnight and no one will notice. And it's broken regularly. Oh, Ops needs to test a new package, so they just push the change to staging. It turns out it didn't work. But don't worry, they'll fix it in four or five hours.

That wouldn't be too bad if you just said, okay, staging is usually broken, we're going to try to push to staging, but we're going to go live after that anyway. That's not what happens. What happens is another negative feedback loop. You try to deploy to staging. Staging is broken. Maybe it was your change, maybe it wasn't. Maybe staging is just different from production. A myriad of issues cause this. And it leads to less confidence in deployments. And the crappy part is, if you had just pushed that change live, it probably would have worked. You could have skipped staging.

So instead of staging, what should you do? Well, I'm not saying don't run integration tests. Obviously, you want to run your code in as real an environment as you can before you actually go live in production, whatever that means. You don't need identical hardware, or to be running in the cloud in exactly the same way as your production hardware. But the environment needs to be isolated, first and foremost. Only tests should have access to it, at least while the tests are running; it's nice to be able to pull it aside and debug it afterwards. You have to be able to recreate it automatically, 100%, and ideally you do so on every single test run. Now, I know that in certain environments that becomes prohibitively expensive; then do it daily or weekly. But don't give developers direct access to that environment. Don't give operations direct access to that environment. Don't let it turn into a second-class citizen. If someone goes into your staging environment and changes the schema by hand, when are you going to find that out? When you try to commit code and it breaks. And that's an awful, awful feedback loop. The other nice thing is, where you previously had four or five teams all going into staging, they now each have their own mini-environment being stood up by the tests. So they get to test against everyone else's known good revision instead of everyone else's broken version, because they get to pick which versions they actually pull into the test. And that leads to a dramatically better pipeline, a dramatically better organization.
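Here's a minimal sketch of what "recreate it automatically, and only the tests touch it" can look like, using pytest and a docker compose file. The compose file name, the service layout, and the port are assumptions for illustration, not a prescription.

```python
# A minimal sketch of an isolated, automatically recreated test environment.
# The compose file name and the service address are assumptions.
import subprocess
import urllib.request

import pytest

@pytest.fixture(scope="session")
def integration_env():
    # Stand the environment up from scratch: no leftover state from last week,
    # no schema that someone changed by hand.
    subprocess.run(["docker", "compose", "-f", "integration.yml", "up", "-d"],
                   check=True)
    yield "http://localhost:8080"   # assumed address of the service under test
    # Tear it down again so it never becomes a long-lived, hand-tended pet.
    subprocess.run(["docker", "compose", "-f", "integration.yml", "down", "-v"],
                   check=True)

def test_homepage_loads(integration_env):
    with urllib.request.urlopen(integration_env + "/") as response:
        assert response.status == 200
```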
The worst part about shared staging is the blame game, because it's always somebody else's fault that staging is broken. And this just leads to incredible, incredible stress and really negative cross-team interactions. And those cross-team interactions are valuable. If you can't ensure that they're mostly positive, you're going to have a very dysfunctional software organization. And I see this often come down to just staging or not. It makes a difference in how it feels to be on a team.

Switching topics again — I promise I'll tie it all back together. Git. Git is one of the things that's changed since I first started talking about continuous integration. Is Git bad? I use Git every single day. I kind of like it. I kind of hate it. It's got a whole bunch of issues. But there's one really specific problem when it comes to continuous integration: branch-based workflows. Pull requests and code reviews. That's not even continuous integration! I've spent too long on this, I'm sorry. I'm sort of crying because you guys aren't doing continuous integration, like, 20 years later. It's two steps back for the industry.

No, don't get me wrong. The GitHub flow model, branch-based workflows — those are really, really good for open-source software. When I was working on Twisted, when I was working on Buildbot, they called it UQDS, the Ultimate Quality Development System, and it was like 1,000 lines of shell scripts that scripted Subversion and Trac so that you could do branch-based workflows that were connected to tickets and got merged after code review. It was the GitHub flow model in 2005. And it's great for open-source software. But open-source software is not what you are probably doing day to day. You're probably at a corporation. You probably work with a team that you know. You're probably going to continue to work with that team. You probably work out of a single master branch. You probably don't accept contributions from a thousand strangers every year. You probably have a single place to deploy to, or a few known deploy targets. And you probably want to get your code deployed as fast as possible while keeping a high level of quality on everything. And under those circumstances, branch-based workflows don't make sense.

So what's the solution? Never branch. The only time that branching makes sense is when you're spiking or prototyping code that you're going to throw away, that you're just going to get rid of. If you're going to write code and it's going to live in production, then you need to do it using continuous deployment, and you need to commit it frequently and regularly, so that you can integrate with everyone and everyone else can integrate with you. Again, this isn't even continuous deployment. This is continuous integration. I'm just telling you what was said a long time ago.

So then that always brings up the question: are you saying get rid of code review? I'm definitely not saying that. There are a lot of good uses of code review. But there are also a lot of bad ones. The reason that people usually want branch-based workflows is because code review catches bugs, and because it means that terrible engineers can commit code and you can somehow magically turn that into good code. And both of those things actually kind of make sense in an open-source world. But again, in a closed-source, corporate development environment, with small teams working on their own projects, it actually doesn't make sense. If your code review is catching bugs, you have serious problems, and they're further up the pipeline.
If you can automate catching those bugs, then everyone can work dramatically faster. Because what's the turnaround time? How fast are you actually getting code reviews done? Even really good organizations like GitHub — they're deploying all the time, they would say they do continuous delivery, and I might argue with them, but they use this model — and the only way it works for them is that they have hundreds of engineers who are all picking up code reviews. So their throughput is acceptable, but still surprisingly slow. They still have the problem where someone will accidentally have a branch open for two or three days and then have to integrate, and try to land too much code all at once, and push a couple thousand lines of code out and work through all of the problems at once. In practice, it ends up being like a week: two or three days to write code, open a pull request, get a code review, have to fix stuff. That took a couple of days because everybody was busy, and code reviews get deprioritized because everyone wants to work on features and move their scrum tickets along. So it just falls behind and your latency increases. Everything I'm telling you about here is about reducing latency, because reducing latency makes everything better. Do not do this.

So how do you actually do code review? You can do it after deployment. It turns out this works really, really well. There are a bunch of human-level factors that I don't know how to automate. I don't know how to automate readability. I think at the point that we can automate ensuring readability, I'm out of a job and AI has already taken over the planet. Code review is still useful for these human factors: promoting knowledge sharing, minimizing your bus number, making sure there's no aspect of your application that only one person understands. All of these can be done with code review after deployment. In fact, here's what that looks like: everybody commits to master, and then you have a webpage you can go to that lists the commits that haven't been reviewed yet, and anyone can pick one of those up. It works very similarly to a branch-based model, but I'm looking at much smaller snippets, and I'm also not worrying about bugs, because the code has already been deployed, probably been out there for three or four hours. Odds are I'm not going to find any bugs in it.
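A minimal sketch of that kind of post-deploy review queue might look like the following; the reviewed-commits file and the branch name are assumptions for illustration, and a real version would obviously live behind a webpage rather than a script.

```python
# A minimal sketch of a post-deploy review queue: every commit on master is
# already deployed; this just lists the ones nobody has marked as reviewed,
# so anyone can pick one up. The reviewed-commits file is an assumption.
import subprocess

REVIEWED_FILE = "reviewed_commits.txt"

def reviewed() -> set:
    try:
        with open(REVIEWED_FILE) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def unreviewed_commits(limit: int = 50) -> list:
    # Ask git for the most recent commits on master: "<hash> <subject>".
    log = subprocess.run(
        ["git", "log", f"-{limit}", "--format=%H %s", "master"],
        capture_output=True, text=True, check=True,
    ).stdout
    done = reviewed()
    return [line for line in log.splitlines() if line.split()[0] not in done]

if __name__ == "__main__":
    for line in unreviewed_commits():
        print(line)
```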
Another method that I like even more than reviewing individual commits: I don't know what your experience with code review is, but something like two-thirds of code review comments are nitpicks, and they're annoying nitpicks. Oh, you shouldn't have this space here in this if statement. I would use this other library function, it's equivalent in every way, and I don't know why I'm telling you this, but it's my preference. I hate that kind of code review. It's awful. So instead of doing that, do a feature-level code review. Get to two-thirds, 80% complete on a feature, pull it up, and ask: have we got the architecture nailed? Is this great? And then have another technical person walk through it. I've seen this take place on a projector with the entire team in the room, where someone who has never seen a line of the code is there, and your goal is just to explain the code to that person, walk them through it. And inevitably, you're going to come up with a bunch of issues. These are the important issues. These are the issues that you're going to hit a year from now, when you go back and try to read this code and go, what was I thinking? Why is this organized this way? I don't understand it. That's a valuable use of programmer time. So review at a higher level: look at data flows instead of going line by line, look at features in an intelligent way, ask about the architecture. And better yet, if you can build this into your feature release system, you can say, okay, every time we're about to release a feature, we're going to do a review, and we're going to budget a day for it. Now, all of a sudden, you can build the feature slightly wrong and actually have the budget to fix it. Because, I don't know about you, but every single time I build a feature, I get to the end and I'm like, well, that was slightly wrong. Because that's programming. The details matter, and you don't know the details until you actually build it.

So, oh yeah, I wanted to tie all these things together. It turns out — I don't have a slide for this, unfortunately — code review increases your latency. Service-oriented architecture gives you a bunch of deploy pipelines to maintain. Push-button deploys increase your latency. And all three work together in unison to cause organizations that claim to be doing continuous delivery to have two or three weeks of latency between the time someone's done writing code and the time it's actually live and being run by users in production. And that is not continuous delivery. That is not what was intended for you. That is far from the ideal. And so these anti-patterns unfortunately collude to cause a lot of frustration. Everyone's like, well, we're doing all the right practices — all the right architecture practices and all the right continuous delivery practices — and then they don't mesh together. So you need to avoid these big problems. I think I've given one form or another of this advice to every single company I have consulted with over the last five years. It is that common. People keep making the same mistakes.

If you do get to continuous deployment, by far the most surprising thing about adopting it is that deployment becomes stress-free. When people hear about continuous deployment, they imagine an extremely high-stress model. They think, every time I deploy right now it's super stressful, things go wrong, we have postmortems, it's awful. Why would I want to do that more? Why would I do that all the time? Even if it works, even if we don't have problems, I'm going to be sweating nervously the whole time. That's the general intuition, and it's just not true at all. It's actually backwards. On your 100th deploy, you literally forget that it deployed. You committed and stuff just happened. There's no stress in the moment. It's easy to go, oh yeah, we have to push this hotfix. Well, that's fine. It's going to go through all of our normal deploy pipelines. It's nothing special. I used to teach people at IMVU how to handle production outages, cluster failures, anything that required code changes — the worst possible type of bug. It's hard to teach. You know the first mistake people make when they try to fix high-priority bugs, where dollars are on the line and you have to write code with a gun to your head? You don't write a test. You don't follow your basic process. And it turns out you're probably going to screw up, because you're under a lot of stress, and things are going to go even worse. So you say, no, no, we always follow the process.
It doesn't matter if it takes you 10 minutes extra; the company will pay for that lost revenue or something like that. And if you follow the process, you'll probably not push another bug; you'll probably just push the fix.

So, summing up what I've learned from teaching continuous delivery and continuous deployment: continuous anything is great. I'm not here to bag on continuous delivery. Sometimes you have to do something like that. If you're shipping an iOS app, you've got a gatekeeper; you can only deliver once a week. Sure, automatic would be better, but it's not an option. People say continuous delivery and continuous deployment and mean the same thing — I don't care. Language is living; I don't have a say in it anyway. But what I want you to take away from this is to have continuous deployment as your goal. To always be seeking the automated version. To always be trying to deliver software more quickly and more easily. And to push back when people try to slow that down. Because if you don't set out this way, you will get stuck at roughly a one-week delivery cycle. That is the pattern I see over and over again. It doesn't matter how big your company is. It doesn't matter if you're a super well-funded startup or a scrappy bootstrapped one. It's the same problem everywhere. And now I'm out of a job as a consultant, so I guess this is my retirement speech.

Okay, so now I'm going to switch gears. We went through the basic stuff, the intermediate stuff, and now I'm going to blather a little and hopefully it makes sense. I'm going to talk about the future of continuous deployment. Really, that's the future of software. So, picking a project at random, I looked at a 2006 version of Buildbot. The installation instructions were to go off and do it manually. They just listed these packages as requirements. You had to Google them. You had to figure out how to install them. It was all manual. From a Python perspective, there was no way to just pull in your dependencies.

Contrast that with today. I'm working on a data warehouse project, and this is a graph of its Python dependencies. There are 15 explicit dependencies, and those branch out. Each of those dependencies has dependencies, which have dependencies, which lead to 66 different things being integrated into a 5,000-line project. It's been a huge explosion of integrations. And these are integrations that aren't within my organization. I can't tell any of these packages how to manage how they work. I don't know how reliable they are at updating. I don't know how frequent they are. I don't know how much to trust them. I actually deleted the names of the packages we had to pin to specific versions, because I didn't want to call anyone out. Everybody has a package manager now. Even old languages that didn't have them are getting them very quickly. Everyone is stuck in the same boat, having to manage incredible complexity. Projects have more of everything: more libraries, more frameworks, more API integrations, more frequent deployments, more frequent need to take updates, all of these things.

And so what we're starting to see is the continuous deployment of things that you wouldn't traditionally consider standalone software packages. We're seeing continuous deployment of libraries, frameworks, operating systems. For example, Google Analytics. You say, please give me Google Analytics, I have this API, and Google gives you the latest code. Or an experiment. Or something else. You don't care. It's out of your hands. We've been seeing this with operating systems.
With operating systems you can increasingly just say, yeah, auto-apply updates, any hotfixes to my kernel, I don't care when. All of this is towards one goal, and that is to stop wasting time trying to keep up. The world is moving faster, and organizations are spending more and more time just keeping up. If you really want a good example of this, look at JavaScript. JavaScript programmers, if they've been in the industry for more than four or five years, all have the same fatigue. They've all been around the block and seen three different frameworks evolve and become the hottest thing and then die. And they've watched their own marketability as a developer fluctuate wildly; the two are apparently tightly coupled. In fact, this ends up potentially killing whole languages. Perl 6. Python 3 — we're now at something like year 10 of the Python 3 transition, and I started a new project this year in Python 2. I was a huge evangelist for Python, and I could tell about four or five years ago that it's kind of dying, because this transition is so expensive. AngularJS is switching the language it's written in. Ruby on Rails projects that are more than three or four years old — you should just rewrite them; actually upgrading them costs too much. This is a huge problem for our industry that we're going to have to face.

So here's my big prediction. For the next big language and/or framework, the winner will be whoever can sustain evolution across organizations, across integrations, across platforms. And that's a big shift from the current paradigm, which is: whoever has the best marketing, whoever shows me the best static code examples, that's what I want to use. Organizations are starting to get smart enough to notice. I don't know what the timeline is on this. It might be five years, might be 15. You know, I'm only mostly perfect at predicting the future.

So what would that look like? Here's a hypothetical. What if your programming language supported code migration, so that your frameworks and your libraries, when they updated, when they made a breaking change to their API, could ship with it a little snippet of code that would update your code to work with the new API? Now, obviously there are issues with this. But you'd be surprised how much time and effort and stress and frustration goes into something like changing the casing on an accidentally mis-cased class in Python. Some of those changes had to wait for the Python 3 rewrite. So what would that look like? Imagine you write, in a tiny little language: okay, we're renaming this old class to this new class, or we're changing a signature, or we're adding a new parameter, and here's a default, so if the calling site doesn't have anything to pass there, it just automatically becomes null. And we're seeing a little of this right now. There are libraries designed just to help with the transition between Python 2 and Python 3. And in Go, you have gofmt, which most people just use to format their code so it automatically looks correct; there's a single correct version of how the whitespace should be. But it can also be passed crazy command-line arguments to automatically do refactoring. And you'll see in the release notes for Go: hey, we changed this minor thing, we don't think it's a big deal, just run this gofmt command. And I've seen people share: hey, we updated this project, here's the gofmt line.
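As a concrete illustration, here's a minimal sketch of what such a shipped migration snippet might do, written in plain Python rather than the hypothetical tiny migration language. The names OldClass, NewClass, connect, and the timeout parameter are all made up for illustration; a real tool would rewrite the syntax tree rather than use regexes.

```python
# A sketch of a "migration snippet" a library might ship alongside a breaking
# change: it mechanically rewrites calling code from the old API to the new one.
# OldClass/NewClass and the timeout parameter are hypothetical examples.
import re

def migrate(source: str) -> str:
    # Rename OldClass -> NewClass wherever calling code references it.
    source = re.sub(r"\bOldClass\b", "NewClass", source)

    # connect() grew a new parameter; give call sites that don't pass it an
    # explicit default of None, as described above.
    def add_default(match):
        args = match.group(1)
        if "timeout=" in args:
            return match.group(0)
        return f"connect({args}, timeout=None)" if args else "connect(timeout=None)"

    return re.sub(r"connect\(([^)]*)\)", add_default, source)

if __name__ == "__main__":
    old_code = "client = OldClass()\nclient.connect(host)\n"
    print(migrate(old_code))
```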
What I propose is that these migrations are going to become not just standard, but built into the way that we build and develop software. And overall, this all points to one trend, which is prioritizing change in code over static code. I don't care as much about the 10,000 lines of code that are already there; I care about the refactorings and how they integrate with other people's changes. We're in a world where, if I actually count up the number of developers on my 5,000-line project — all the open source contributors, all the contributions, all the patches and changes — it grows to be beyond the comprehension of any one individual. And that's a small project. Can you imagine an operating system? This focus on change, I think, is going to revolutionize our industry. I don't know exactly what form it will take, but I'm really interested and I'm excited for the future. Thanks. Any questions?