I don't have any stories about going to court. But I'm Mike Stahnke, VP of Platform at CircleCI. You may remember me from past DevOpsDays; I've been here a few times and given a lot of ignites. I usually talk about organizational something-or-other, because I'm in leadership, and technology is interesting, but I find humans to be more interesting. But I like big data and I cannot lie. One of the reasons I changed jobs to CircleCI was this giant data set where I could discover new things about what people were doing: 1.5 million jobs a day, across tens of thousands of orgs.

So let's just go to some hard-hitting stats. The average repository name has 17.6 characters. Yeah. The most common project name is "api". 0.01% of branches have swear words in them.

So we're in Chicago, and one of my favorite things about Chicago is the Booth School of Business at the University of Chicago, because I'm a huge Freakonomics fan. You have Dr. Steven Levitt there, the father of Freakonomics. You have a Nobel Prize winning economist there. And I was like, okay, I've worked on these DevOps surveys before. That's what people say they do. And I have big data that tells me what they're actually doing. So I can see observed preference versus reported preference, because y'all are liars when you fill out surveys. I knew it, and I had to prove it.

So how do I map these things? Well, I want to map the four big metrics from all the DevOps surveys onto something that CircleCI has as a construct, which is a workflow. So I've got time from red to green, delivery lead time as workflow duration, workflow failure rate, and deployment frequency: how often are we running these things?

So let's just get into this. Fastest recovery time: one second. Less than one second. How do you recover that fast? You had two builds going, one worked and one didn't. Worst recovery time: 30 days. You shipped something bad and said, fuck it, don't care.
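The mapping from survey metrics to workflow data can be sketched in a few lines of code. This is a minimal illustration over hypothetical workflow records; the field names and values are made up for the example and are not CircleCI's actual API or schema.

```python
from statistics import median

# Hypothetical workflow records (illustrative fields, not CircleCI's schema).
# "started" is seconds since some epoch; "duration_s" is how long the run took.
workflows = [
    {"branch": "master", "status": "failed",  "started": 0,    "duration_s": 310},
    {"branch": "master", "status": "success", "started": 900,  "duration_s": 290},
    {"branch": "topic",  "status": "success", "started": 1200, "duration_s": 450},
]

# Delivery lead time proxy: workflow duration.
median_duration = median(w["duration_s"] for w in workflows)

# Workflow failure rate: share of runs that end red.
failure_rate = sum(w["status"] == "failed" for w in workflows) / len(workflows)

# Time from red to green: gap between a failed run and the next
# successful run on the same branch.
def red_to_green(runs):
    runs = sorted(runs, key=lambda w: w["started"])
    gaps, red_at = [], None
    for w in runs:
        if w["status"] == "failed" and red_at is None:
            red_at = w["started"]
        elif w["status"] == "success" and red_at is not None:
            gaps.append(w["started"] - red_at)
            red_at = None
    return gaps

recoveries = red_to_green([w for w in workflows if w["branch"] == "master"])
```

Deployment frequency is then just the count of runs per project (or per org) per day over the sample window.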
And the median time is 17.5 hours. 17.5 hours is basically: this is broken, this looks hard, I'm going home, I'll look at it tomorrow. That's what the average person does with their build. Now you might think, huh, that doesn't sound like the things I hear about in those DevOps and continuous delivery books. I thought if it was red, we just had to fix it. Well, they're cherry-picking a little bit there. When those surveys ask you questions, they say "for the primary application or service you work on." Maybe you're really good at that one and just not really good at everything else. But I see it all.

So let's talk about lead time. How long do these builds take? How long does your factory floor take to ship the thing out the other end? The 95th percentile is 28 minutes and 45 seconds. For some places, 28 minutes to get through their entire build, test, validation, and shipping is amazing. For other places, you're crying. So it depends on what you're doing. Some people take 3.3 days. Don't do that. That's bad.

But then you can also see how often these projects are running. I did hand-drawn charts, but some of them were just too dirty when I drew them, so I had to put up real slides. It was terrible. You can see that most things build around once or twice a day. We have things running more than 2,000 times a day. Do you know what that is? Monitoring. Yeah, that's not a build system anymore at that point, but it's still interesting to think about. So how often are people running these builds? You have 29 workflows per project per day at the 95th percentile. We have 85 per day per org. So that's a lot of orgs doing a lot of different builds.
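The median and 95th-percentile figures here are just quantiles computed over the sampled durations. A minimal sketch, using made-up numbers rather than real CircleCI data:

```python
import numpy as np

# Hypothetical workflow durations in seconds; values invented for illustration.
durations = np.array([120, 180, 240, 300, 600, 900, 1725])

# Median and 95th percentile of the sample.
p50, p95 = np.percentile(durations, [50, 95])
```

The same call over per-project run counts per day gives the deployment-frequency percentiles.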
And then on master the numbers are a little different, because most of the time you're doing topic branches or pull requests or tests, and hopefully you're testing before you merge onto mainline. But the thing I found interesting was somebody still running 2,443 builds a day. I think that's a little weird. Overall, in the global data set (these are different samples at different times, usually about a week each, taken at different points just to verify things), roughly 20% is the failure rate for jobs on CircleCI. I don't think that's crazy. It's a little higher for topic branches, a little lower for master.

But we also have a thing called manual approvals, which is fascinating to me. The 50th percentile of manual approvals happens in three and a half minutes. I would never have guessed that approvals happen that fast. And the failure rate is 3% on a job that contains a manual approval, versus 20% in the normal sample set. Now, if your job has a manual approval, it takes 14 minutes and 38 seconds versus the three minutes on average for the others. Some workflows have more than one approval. I don't know how that works, but they do.

And what this chart tells you is that you should be writing everything in PHP, because it's way lower on error rates and failure rates. It could be because they exit zero and don't run tests. I don't know. But the other thing this tells you is that if you square the number of users you have, you get roughly the number of branches in play, which I thought was really interesting. Also, Ruby people collaborate more.

So anyway, basically what I wanted to do is talk a little bit about data sets. I only had five minutes, but I did want to pitch an open space for DevOps metrics and measurements. And since open spaces are up next, I'll go stick this on a board somewhere. Thanks for having me.