How are we all doing this morning? Do we still need coffee? Yes, I know. I left my coffee in the back. I'm so sad. Okay, everyone, please take a drink of coffee for me. Now, how are we all doing this morning? Fantastic. So thank you so much for joining me this morning. As he mentioned, we're going to be talking about Double the Awesome, mostly because I am super excited to be talking about DevOps because I'm a giant nerd. But I think I might be joined by some of my nerd peers. Yay. So have we all heard about the DevOps? Yay, okay. I get to skip the intro. So some of you may have heard of this Accelerate book that covers the State of DevOps reports. Have we heard of the State of DevOps reports? No? Do we need to make hands? No? Who refuses to raise their hands in the morning? Yes. Okay. So there have been the 2014, 2015, 2016, 2017, and 2018 State of DevOps reports. I'm super excited because as of right this very second, the 2019 State of DevOps report is out. So we all get the first review of it. I'm excited. Are you excited? Are you going to pretend to be excited with me? Yay! Okay, here we go. This is what we're going to talk about. Is this DevOps thing even a thing? Maybe not. If not, I'm out of a job. That's fine. Getting better, aka choose your own adventure. This is where Double the Awesome comes from. We're going to talk about performance, productivity, culture, and open source, because that's why we're here today. And then we'll be done. So DevOps, let's get on the same page. When I'm talking about DevOps, I'm talking about these types of things: software development, software delivery. I'm also talking about availability. So some people may have heard of the four key metrics. Lots of organizations are starting to talk about this now. So when I talk about DevOps, I'm talking about measuring it, because I do this researchy thing.
So there are two throughput metrics, speed metrics: lead time, from code commit to code running in production, and deployment frequency, how often you're pushing code. Those are in green; I color-code because I was a professor. I'm also talking about stability. So change fail rate, the percentage of times that you push code to production and it bonks on you. So it can either come all the way down, or it just requires some type of attention: a hotfix, a patch, a roll forward, a rollback. And time to restore service, how long it takes you to bring those systems back up. Those are the four key metrics. Now we also talk about availability. Just keep that in the back of your head. I'll come to it in a minute. Now we have movement. When I compare last year, 2018, and this year, check out those elite performers. Last year, 7% of our data were elite performers. By the way, those four key metrics always move together. I see no trade-offs in speed or stability. You don't have to sacrifice speed for stability or stability for speed. This has been consistent for six years in a row. Okay, so anyway, 7% of our population, of our sample, were elite performers. They were optimizing on both speed and stability. But it was a subset. This year, check it out, that 7% broke out on their own. They've almost tripled. So you know how we keep hearing this whole crossing-the-chasm thing? Have you heard this before? This confirms that. So, yay! Now check out the low performers. They went from 15% of our data to 12%. That means more people are getting better. They're moving out of the low performance group into the medium performance group. Yay! Say it with me. Yay! I love you guys. Thank you for indulging me. Okay, now we have medium performers. Look, they went from 37% to 44%. That's interesting because some of it came from low performers, but some of it came from high performers.
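As a quick illustration of how the four key metrics described above might be computed from a team's own logs, here's a minimal sketch. All of the data structures, timestamps, and numbers are hypothetical, invented for illustration; they are not from the report or any official tooling:

```python
from datetime import datetime, timedelta

# Hypothetical deploy log: (commit_time, deploy_time, caused_failure).
deploys = [
    (datetime(2019, 8, 1, 9),  datetime(2019, 8, 1, 13), False),
    (datetime(2019, 8, 1, 10), datetime(2019, 8, 2, 11), True),
    (datetime(2019, 8, 2, 9),  datetime(2019, 8, 2, 15), False),
    (datetime(2019, 8, 3, 9),  datetime(2019, 8, 3, 10), False),
]
# Hypothetical incident log: (outage_start, service_restored).
incidents = [(datetime(2019, 8, 2, 11), datetime(2019, 8, 2, 12))]
days_observed = 3

# Throughput: lead time (commit -> code running in production)
# and deployment frequency (how often you're pushing code).
lead_times = sorted(deployed - committed for committed, deployed, _ in deploys)
median_lead_time = lead_times[len(lead_times) // 2]
deploys_per_day = len(deploys) / days_observed

# Stability: change fail rate (the fraction of deploys that bonk on you)
# and time to restore service.
change_fail_rate = sum(failed for _, _, failed in deploys) / len(deploys)
restore_times = sorted(end - start for start, end in incidents)
median_restore = restore_times[len(restore_times) // 2]

print(median_lead_time, deploys_per_day, change_fail_rate, median_restore)
```

Medians rather than means are the natural summary here, since these distributions tend not to be normal.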
High performers used to be 48%, which included elite. So they went from 48% to 43%, right? 20 plus 23. I can do a little bit of math in my head, not very much. What does that mean? That means some high performers fell back, probably struggling with some complexity, probably also achieving high performance, giving themselves a gold star on their forehead, sitting back, and declaring vacation. Probably not great. Not that anyone's executive has ever done that. Okay, this is an eye chart. I apologize. Now it's in the report. But some people are going to say, what does it mean to actually be a high performer? Again, deployment frequency is on demand, multiple deploys per day. Or it's a business decision and not a technical constraint. Maybe you're making an app and pushing to an app store, so you don't have to do it all the time. But again, business decision, not a technical constraint. Lead time for changes is less than a day. Time to restore is less than an hour. Change fail rate is between 0 and 15%. Look at the low performers. Deployment frequency between once a month and once every six months. Lead time for changes between one month and six months. Time to restore between one week and one month. Change fail rate, 46 to 60%. I will note that change fail rate looks like it's the same for elite, high, and medium performers. There are a bunch of asterisks there. It's because we report a median, because the distribution isn't normal. They're actually different, and we include visualizations for the fun math nerds out there that want to see what it looks like. They are significantly different. It's just the medians look similar. Okay. So you can benchmark. You can take a look and see where your teams fall. See where you are. You can also go look at the website that I'll note at the back. And there's a quick check, so you can also do it automatically. So how do they compare, and what does it even mean?
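The cutoffs just described can be turned into a rough self-check. This is a hand-rolled sketch based only on the tiers listed above, not the official quick check on the website; the function name and inputs are made up, and you'd supply your own measured values:

```python
def performance_tier(deploys_per_day, lead_time_days,
                     restore_hours, change_fail_rate):
    """Roughly bucket a team using the 2019 thresholds described above."""
    # Elite: on-demand deploys (multiple per day), lead time under a day,
    # restore under an hour, change fail rate between 0 and 15%.
    if (deploys_per_day >= 1 and lead_time_days < 1
            and restore_hours < 1 and change_fail_rate <= 0.15):
        return "elite"
    # Low: deploys between once a month and once every six months,
    # lead time for changes between one and six months.
    if deploys_per_day <= 1 / 30 and lead_time_days >= 30:
        return "low"
    # The full table in the report distinguishes high from medium.
    return "high or medium"

print(performance_tier(3, 0.5, 0.5, 0.05))      # elite
print(performance_tier(1 / 60, 90, 400, 0.50))  # low
```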
If I compare the elite performers and the low performers, we see 208 times more frequent code deployments and 106 times faster lead time from commit to deploy. So this means we can push features faster. But it's not just about features. It's about keeping up with compliance and regulatory changes. It's about being more secure. It also gives us over 2,600 times faster time to recover from incidents and 7 times lower change fail rate. Now, sometimes people are like, this doesn't make sense. But think about it: if we can release more frequently, we're releasing in smaller batches. Think about it in the inverse. If we release slowly, our code piles up. When it finally gets to production, we have a large chunk of code. So we have increased our blast radius. We have increased the likelihood that it will cause problems. When it does eventually cause problems and cause an outage, we've made it much harder to debug what that problem is. And we've made it take much longer to bring the system back up. That time to restore is much, much longer. So when we introduce delays, when we introduce process, we intend to create more stable systems. That's not true. We've never seen that. We tested it in 2014. We tested it again this year. It actually creates instability. Now, I told you to remember availability. Availability is about the promises that we make and keep to our customers and end users. This is how we measure it. It's basically about how well teams can define their availability targets, track their current availability, and learn from any outages that they do have. Has anyone heard the saying, day one is short and day two is long? This is what this is about. So, wait, we do tech just because it's fun? Well, I mean, yes. And it's important because technology delivers value. We see that elite performers are twice as likely to meet or exceed their organizational performance goals. Let me translate this. This is a little bit academic.
We call them organizational performance goals because they encompass a lot of things. Some of it is commercial goals, things like productivity, profitability, market share, like money stuff. Money stuff's nice. Even in not-for-profits, we need money because we need to pay bills. We need to pay salaries. We need to pay something. But it also covers non-commercial goals, things like effectiveness and efficiency, customer satisfaction, achieving the mission goal, serving our constituents. Okay. Hopefully, I've convinced you it matters. Okay. So, like, how do we actually get better? This is how. I also like to call this slide choose your own adventure because you can start by choosing your goal. If your goal is SDO and organizational performance, that stuff we just covered, we cover this in this year's report. Start there and then pick the things that you want to improve. Or if you want to improve productivity, we have a whole entire extra model this year to help improve productivity. So, let's start with SDO and organizational performance. This is the predictive model this year. I mean, pretty straightforward, right? I mean, it's fine. Okay. This is how we read it. We have a dark box around the things that are typical goals, things like SDO performance, software delivery and operational performance. We talked about how software delivery performance is that speed and stability, and we cover this in the report. Like, there are lots of reminders. Availability is the promises that we make and keep to our customers. Organizational performance is like money and efficiency. Now, we can work backward into identifying the constructs that help us get better. Green boxes are constructs. Those are things that help us do things better. Like, we know what continuous delivery is, right? We know what monitoring is, and we talk about this in the report.
The things that are in dark bold, like clear change process and culture of psychological safety, are just in bold because 2019 is the first year we studied them. If it's not in bold, it just means that we've studied it in previous State of DevOps reports, so we've re-validated it. Anytime you see a line with an arrow, you can read that as predicts, drives, impacts. Our research is unique in that we study predictive relationships. It goes well beyond correlation. Anytime you see a line with a minus, that means it negatively predicts, or reduces. So remember how I said organizations sometimes try, with very good intentions, to introduce change approvals, heavyweight things like CABs, the change approval board? They try to do this to increase stability. It doesn't work. Spoiler alert, narrator: it did not work. Heavyweight change approvals reduce speed and stability, software delivery performance. We also see the nice thing with continuous delivery. See how it has that line with a minus sign to burnout? Doing continuous delivery in good ways helps reduce burnout. So because I don't have very much time, I'm just going to cover a couple things. Cloud. Who here is doing the cloud? Cloud is a differentiator for elite performance. However, we have to be doing it the right way. These five essential characteristics come from NIST. I didn't make this up. Here's what we see. Only 29% of respondents who say they're in the cloud are actually doing all five of these things. And elite performers are 24 times more likely to have met these characteristics. This is such a bummer. Also, I mean, if I join a gym and pay for a membership, I have to actually do the work. So when I meet with organizations or execs and they're like, I went to the cloud and I'm not seeing the benefits, I'm like, are you doing the work? Although we probably know this, right? We know this. So, on-demand self-service.
If you go to the cloud but you still throw it behind ServiceNow and someone has to manually approve, it doesn't count. We know this. How frustrating is it when we do the cloud but we still have to put in a service ticket? That sucks. We know this. Okay. Code maintainability. This one's new this year. And I was excited to see what this looks like, because this contributes to CD and helps reduce technical debt. I'll talk about this in a minute. Code maintainability is about having systems and tools that make it easy to change code maintained by other teams, find code in the codebase, reuse other people's code, and, this last one I love, add, upgrade, and migrate to new versions of dependencies without breaking code. Does anyone here have tools that are kind of like this? I know there are a couple of open source tools like this. These are dope, right? These make writing code so much easier. Okay. Yeah, yeah, yeah. I read the report, or I'm about to. So what else is new? This is the productivity model. You can read it very similarly. Lines and arrows are prediction. Lines and arrows with minus signs are reducing. So the goal here is to improve productivity. The thing I like about this is that productivity done the right way helps improve work recovery and reduces burnout. Work recovery is when you can leave work at work. It's when you can detach. It's about work-life balance. But we have to have the right kind of productivity. This is not about lines of code written. This is not about story points, because we all know that writing too many lines of code just leads to bloated systems. And story points are bad because if it's story points, I'm never going to help you with your work, because I have to get my story points done. So lots of researchers are starting to study productivity this way: the ability to get complex, time-consuming tasks done with minimal distractions and interruptions. We know when we've had a good day. I know when I've had a good productive day.
The nice thing about this is that this kind of productivity helps us disconnect. We can shut off our brains when we go home. Now, let's talk about technical debt a little bit. Technical debt reduces productivity because we can never get in that flow. Technical debt was introduced in '92 by Ward Cunningham. He uses this to describe what happens when we don't maintain what he calls immature code. Who here has ever hit technical debt? Okay, so we know this. A few people were like, technical debt, that's not a problem. And I was like, I have a hunch. Let me throw it in this year's research and see. This is what showed up, like what loaded. So loaded is actually a stats term. These are the things that statistically piled on in the analysis. Do these resonate with y'all? Y'all are my people. This is what technical debt is. Known bugs that go unfixed, because we need features. Insufficient test coverage. Problems related to low code quality or poor design. Code and artifacts that aren't cleaned up when they're no longer used, because sometimes the best code is deleted code. Implementations that we don't have expertise in, so we can't debug or maintain them. Incomplete migrations. I feel attacked. Obsolete technology. And this last one: incomplete or outdated documentation or missing comments. I hate it when people are like, it's just documentation and comments. And I'm like, what are you talking about? This is absolutely relevant, because if no one can read your code, it doesn't count. Thank you. I love you. So how can we reduce technical debt? Ward Cunningham also suggested that one of the best ways to deal with technical debt is to allow people to maintain a mental model of the entire system in their heads at the same time. This was in '92, so that's adorable, because what do our systems look like now? Complex. Distributed. With multiple people maintaining them. So one of the best ways is to allow our systems to be flexible, extensible, and viewable.
And the thing that we found is that maintainable code, loosely coupled architecture, and monitoring help decrease technical debt. I love that, because that helps us at least get some type of idea of what our systems are looking like. Okay, I'm going to hurry. I keep hearing culture matters. What does it even mean? This model was studied by the Project Aristotle team at Google a few years ago. Has anyone heard of this before? Psychological safety, dependability, structure and clarity, meaning and impact. They put out this work a few years ago. The interesting thing is that a few people were like, of course they found this to be an important driver for performance, it's at Google. So many thanks to the Project Aristotle team. They shared their research instrument with us. And so we tested this to see if this is also impactful in organizations outside of Google. And indeed, we did find that this model of culture is important. Trust and psychological safety have a positive impact on software delivery performance, organizational performance, and productivity. So these results indicate that teams with this culture see significant benefits in many contexts. I mean, does this sound familiar? Would you rather be on a team where you can trust people and you can depend on their work and you're working on meaningful things? Or trust no one? Okay. So what does this all mean for open source? Community is an asset. High and elite performers continue to use open source the most. Elite performers are 1.7 times more likely to make extensive use of open source software. By the way, this also benefits recruiting. As an aside, low performers are using fully proprietary software the most. This comes with extensive costs to maintain. External search contributes to productivity. Engineers who can use external search and information sources like Stack Overflow and YouTube are 1.67 times more likely to be productive. Faster is better, even in open source.
Open source projects are community driven, with contributors from around the world at all levels. Still, committing code sooner is better. It helps you merge patches faster and prevent rebases. Also, working in small batches is better. Large patch bombs are so much harder and slower to merge into a project than smaller, readable patch sets. Help your maintainers out. Okay. Here's your TL;DR. Take a picture in case you're on Facebook and Twitter. Is DevOps even a thing? How to get better? We covered performance. We covered productivity. We talked about culture and open source. By the way, there's so much more in this report. There's disaster recovery testing. There are change approvals. There's more about open source. There's more about cloud and costs. We also talk about scaling transformation successfully. To get the report, you can go to cloud.google.com. Thanks so much, everyone.