Hello everybody, I'm Michael Stahnke. I run the platform engineering organization at CircleCI, and I want to talk about what we've learned from looking at CircleCI's data about how teams are performing.

Here's the shape of this talk: a little setup about the problem set we're asking about and what we're trying to learn, a bit about the data itself, a dive into what the raw data tells us, and then some insight into how we look at that data, what we can derive from it, what we can hypothesize, and the trends and influencing factors we're seeing in the industry.

So, beginning with the setup. There are a lot of DevOps reports out there that talk about high-performing teams and what their metrics look like. To me, those have always been fascinating; they've been some of the best research available in our field. I've helped author a number of them at this point. I worked at Puppet for a long time, and they were the people who started these reports. I worked with them on the questions and the analysis: how teams work, and what makes a team high-performing versus medium-performing versus less successful. You ask a whole bunch of questions and you learn a whole lot about the way teams work.

That's an interesting way to approach this, but now that I'm at CircleCI, I have a whole new opportunity for team analysis: looking at actual behavior, looking at data. The way I think about this is performance derived versus performance described. You describe your performance when you fill out a survey: here's how my team behaves. When it's derived, I can look at the data and say: here's what your team is actually doing, here's how they're actually working. Those two don't always agree one-to-one. That doesn't mean anybody's lying, or even looking at things through rose-colored glasses. It means people have recency bias, or they're thinking about their main project and not the other ones, or whatever the case may be. So with the derived data we try to normalize some of that out and look at what's really going on org-wide, rather than at one specific data point.

In this data set, we have 44,000 organizations using CircleCI and 160,000 projects. That makes it roughly 1,000 times larger than any State of DevOps survey that's existed over the years. A much larger data set should mean a stronger signal, with less chance that four or twenty people at one company filling out a survey skew the data one way or another. I don't think that actually happens in those surveys, because the data analysis is thorough and professional and they can usually sort that kind of thing out, but this is simply a larger data set overall. The other thing we can do this year is look at what's changed year over year.
We started this question set last year, asking a bunch of questions and having our data science people slice and dice the data to figure out the answers. So we have data from last year, and this year we even re-ran last year's data to make sure we had processed it correctly. That gives us trends: what's changed this year versus last. And if you're like the rest of us, your 2020 has turned out quite differently than your 2019 did, so you may see some of those factors show up in the data.

We use 30-day increments for our data sets, and for some of the trending we use multiple 30-day sets so you can see month over month in a couple of spots. Our number of organizations grew a little in 2020, and the number of projects in the sample grew a little too; that's basically just overall platform growth for CircleCI.

Traditionally, four metrics are used in DevOps surveys as the indicators of high-performing teams: deployment frequency, recovery from failures, change failure rate, and lead time to change. Those are the big four. If you cluster high in those, you land in the high-performing group; they're the characteristics of high performers.

How do we map that into a CI world? Deployment frequency really asks how often you initiate your work, so for us that's how often you initiate a pipeline, which becomes throughput in our nomenclature. Lead time to change becomes your pipeline duration, which we shorten to duration. Change failure rate becomes pipeline failure rate, except we flip it to success rate, because it's more fun to talk about success than failure. And mean time to recovery just becomes recovery time in our world.

That was the setup; now I'm going to get into the data. It will come at you pretty quickly, but then we'll get to the insights.

Throughput is where we start: how often do you push code that triggers CI? Most projects are configured to run on a push to a Git server. That could be a single commit pushed, or dozens of commits in a single pull request. A lot of people like to say they run tests on commit, but usually they actually run on push. With throughput, you can see that projects at the 5th percentile are running 0.03 jobs per day, which is not very many, while projects at the 90th percentile are kicking off a workflow 16 times per day. That makes a little more sense. But when you think about the narrative that high-performing teams deploy dozens or hundreds of times a day, I'd say our data doesn't really support that. At the 95th percentile and above you do start to see it: numbers in the 30s, and several hundred at the 98th or 99th percentile. But on average, looking at the mean at the bottom of the chart, people are running a pipeline maybe eight times a day. And that's just throughput; it might not even be a deploy every time. It could just be validation on a code push.
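Before going further into the numbers, here's a minimal sketch of what computing these four metrics from raw pipeline records could look like. This is my own illustration: the record shape and field names are invented, not CircleCI's actual data model, and the recovery-time helper assumes the runs all belong to a single project.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Run:
    started_at: datetime
    finished_at: datetime
    success: bool

def throughput(runs: list[Run], window_days: int = 30) -> float:
    """Pipelines initiated per day over the sample window."""
    return len(runs) / window_days

def duration_minutes(runs: list[Run]) -> float:
    """Median pipeline duration in minutes."""
    return median((r.finished_at - r.started_at).total_seconds() / 60 for r in runs)

def success_rate(runs: list[Run]) -> float:
    """Fraction of pipelines finishing green."""
    return sum(r.success for r in runs) / len(runs)

def recovery_times(runs: list[Run]) -> list[timedelta]:
    """How long the pipeline sat red: first failure until the next green run."""
    gaps, red_since = [], None
    for r in sorted(runs, key=lambda r: r.started_at):
        if not r.success and red_since is None:
            red_since = r.finished_at
        elif r.success and red_since is not None:
            gaps.append(r.finished_at - red_since)
            red_since = None
    return gaps
```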
So people are perhaps not pushing code as often as they talk about: most projects are not deploying dozens of times per day. Why did that become such a popular narrative? Well, a lot of the surveys ask about the primary application or service you work on. It could be that a single project deploys a lot, or it could be that your application, your platform, is made up of dozens and dozens of projects. While any one of those projects may only run one to eight times a day, in aggregate you're still deploying 60 or 70 times a day. If you looked at CircleCI, that's exactly what you'd see: our platform is made up of dozens of repositories, and we deploy about 60 to 80 times a day on average, but any one of those applications may only change once or twice a day.

The difference between 2019 and 2020 isn't large, but you can tell that the 90th and 95th percentiles have really increased the amount of CI they run. What that tells me is that the people who invested in CI are getting more out of it this year than last year. I think that's a good thing, and it's not surprising. Across the board, people are using a little more CI, but that's not the most significant trend: those leveraging CI well are doing so even more. If you had already made the investment in CI, you're leaning into it more heavily and seeing the returns. The other thing we see is that there were fewer developers pushing code worldwide in 2020 than in 2019, which, given all the factors going into 2020, isn't too surprising either.

Now duration: how long does it take to get results? This is the time after a code push until you get signal on what's going on with your code. 5% of our builds finish in less than 12 seconds, and in this exact sample 5% is about 500,000 builds, so that's not a small number. What happens in 12 seconds? It could be a very quick failure. It could be that the entire workflow is something like putting a static file onto an S3 bucket, or modifying some documentation. It could be unit tests that have been really well optimized. From there durations increase quite a bit, but 50% of builds finish under the four-minute mark, the 95th percentile sits at 34 minutes, and the average is right around 25 minutes, because the outliers pull that number up. Most of our workflows time out after about five hours, so that's about the longest you can go, and the long tail is pretty scattered. But the four-minute mark is the one I really want to hone in on: half of all builds finish in under four minutes.
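One way to see why the mean (around 25 minutes) sits so far above the median (around four minutes) is that build durations are heavily right-skewed. This small demonstration uses a synthetic log-normal sample; the parameters are chosen to roughly echo the shape described here, not fitted to any real CircleCI data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for per-build durations in seconds; real CI durations
# are heavily right-skewed, which a log-normal draw mimics.
durations_sec = rng.lognormal(mean=5.5, sigma=1.2, size=100_000)

p5, p50, p95 = np.percentile(durations_sec, [5, 50, 95])
print(f"p5={p5/60:.1f}m median={p50/60:.1f}m p95={p95/60:.1f}m "
      f"mean={durations_sec.mean()/60:.1f}m")
# The mean lands well above the median because the long tail pulls it up,
# the same pattern as the four-minute median versus ~25-minute average here.
```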
I've seen teams that have an SLO around how long it takes to build their software. If things go above that SLO, they stop feature engineering and start engineering how their tests run: whether they need to parallelize more, whether they need to be more diligent about flaky tests, which tests can take longer, how to speed a test up. If things take longer than four or five minutes, they throw a flag and say, hey, it's time to re-engineer our test suite, because what matters to us is cycle time and how fast we can get signal back to developers. Some teams work very, very hard to keep that build time under four or five minutes. Other teams say, hey, 25 or 30 minutes is great for me. At a place I used to work, 25 or 30 minutes would have been outstanding, because sometimes it was seven or eight hours. It all depends on your perspective: what you're building, how detailed and thorough your tests are, how many platforms or permutations you're testing, things like that.

In 2020, most of the average durations across the percentiles are a little larger. That might mean more tests are happening, more investment in validation in certain segments, maybe more complicated workflows; there's a little more usage in 2020 than there was in 2019. So all pipelines are running just a tad longer, with one notable exception: the average pipeline time was actually longer in 2019 than in 2020. What that tells me is that the outliers may be longer, but the average is shorter, so people have been spending time optimizing to get themselves to the lower end of that measurement. Going from 26.76 minutes to 24.6 minutes is about an 8% change, so the average is about 8% faster year over year.

Next is success rate: how often does your pipeline complete with a green status? At the lower end of the percentiles we have 0%, where the pipeline never returns green, and at the higher end 100%, where it always does. Some of our data sample dabbles with CI but never gets a working build. That could be somebody starting their own individual project, a side tool, or someone who never gets a valid configuration, looks at it once, and walks away. Others in the sample see no failures within a month. That could mean a really good test suite and disciplined development practices, or it could mean very few tests, or tests that just return true and can never fail. We see all of those in our workflows, but some organizations really do have workflows that never fail. Overall, the mean success rate is 54% and the 50th percentile is 61%. What I found really interesting is that this barely changed from 2019; there's about a 1% difference at the 50th percentile. We wanted to look into whether it was really that static year over year, and if you look closely at the 50th, 75th, and 85th percentiles, things are increasing a little. People at those percentiles have slightly higher success rates than they did a year ago. I'm not sure it's that significant, but it does seem like the people really leaning into CI are making sure they get more out of it than they were.

The last metric in the data is recovery time: the time a pipeline sits in a failure state. In 2020, the shortest end is about two minutes, and that can happen a couple of different ways. You might have a developer who pushes a config, realizes right away they pushed an error, and repushes immediately. Or you have two developers, one pushes a bad config and one pushes a good one, the bad one finishes, the good one finishes, and now the pipeline looks recovered. And then some of these are way, way longer: if you want to capture 95% of all builds on the system, the recovery time is 3.4 days, which is very different from two minutes. But the mean is 14.85 hours, and what that basically tells me is that sometimes people run a build, stop their workday, and check it the next morning, because that's about the amount of time you're away from your desk on an average weekday. That describes behavior a little differently than most of those DevOps surveys do when they ask how fast you recover from failures. It could be that people read that question and think about production problems rather than development pipelines being red, but of course, in classic continuous delivery methodology, a red pipeline shouldn't be treated any differently than a production-level outage from a developer's point of view. And again, recovery time can involve multiple contributors running in parallel, as we already discussed. Once you get past a certain gap, you can see there's an overnight waiting period that's likely happening: people kick off a build toward the end of their day and check it, or deal with it, in the morning.

In terms of the 2020 impact, recovery time went down in every category. One thing I wonder about is whether that's because more people are at home and paying more attention, because there are fewer distractions. I'm not positive, but it does look like 2020 is slightly better than 2019 in terms of how long pipelines sit in a red state. They have improved year over year.

And now I want to really get to the insights, which I think is the most interesting part, rather than just reading you data off a screen. What does this data tell us? What development practices definitely work? Well, I can tell you that success rate does not correlate with company size at all. We looked at company sizes, team sizes, and org sizes, and asked whether a team of this size has a higher success rate than a team of that size. The answer was pretty much no. The only thing we learned for sure is that if you're a team of one, your success rates are lower than if you're not a team of one. Duration is also longest for a team of one.
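That spread from 0% all the way to 100% makes more sense once you treat each data point as a project's success rate over the window rather than an individual run. Here's a sketch of that aggregation; the input shape is hypothetical, and I'm assuming the percentiles are taken over per-project rates, which is what the 0-to-100 spread suggests.

```python
from collections import defaultdict
import numpy as np

def success_rate_percentiles(runs, qs=(5, 25, 50, 75, 95)):
    """runs: iterable of (project_id, succeeded) pairs for a 30-day window."""
    by_project = defaultdict(list)
    for project_id, succeeded in runs:
        by_project[project_id].append(succeeded)
    # One success rate per project, then percentiles across projects.
    rates = [sum(v) / len(v) for v in by_project.values()]
    return dict(zip(qs, np.percentile(rates, qs)))

# A project that never goes green and one that never fails land at 0% and 100%.
print(success_rate_percentiles([("a", False), ("a", False), ("b", True)]))
```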
So it might be that if you're the only one consuming the output of your software project, you're not trying to make it faster or better, because no one else is relying on you. These could be personal projects or one-off tools for your organization, not things other people depend on. But a team of one consistently has the longest durations we see.

Recovery time decreases as team size increases: the more people looking at the build, the faster the recovery, which makes sense. But it only holds up to a point, and that point was around 200 people, which was actually much larger than I would have guessed. The vast majority of our sample sits with teams in that five-to-nine range and not much above it. Every now and then you get a core project that many teams contribute to, where contributors run into the hundreds because the entire engineering department, or even people outside it, contribute to the main project. But that sample is a bit smaller than the across-the-board set of projects. The longest recovery times, again, come from teams of one. If you're working on your own tool by yourself and you get a red build, you might think, whatever, I'll work on it later. That could be ten days later, two weeks later, or three hours later; it just depends on when it fits back into your schedule.

So in every way we can measure it, performance is better when a team has more than one contributor. You might say, Mike, that doesn't surprise me, you were talking about teams; a team is probably going to perform better. And that does make sense. But if you're doing things on your own, it doesn't look like you'll have as good of outcomes. One thing that tells me is that you shouldn't be the 10x developer who sits in a corner and says, I can do this all myself, because you won't actually get outcomes as good, and we can show that from the data. Ultimately, software is collaborative. That's one of the things we were able to pull out of this pretty clearly: as the number of people grows, you get better outcomes on everything having to do with software delivery.

The next question I asked was: is this "don't deploy on Friday" thing real? That's an old-school line of thinking that goes back to the classic operations days, before DevOps was a big movement: we don't make changes at certain times, that's the best way to stay stable, and "don't deploy on a Friday" became quite a real thing overall. So I wanted to see whether I could find it in our data. Well, we see 70% less throughput on weekends. A whole lot less happens on CircleCI on a weekend than during the week, which also means my AWS bill goes down quite a bit, something I usually look forward to. On Fridays, we see 11% less throughput compared to Tuesday, Wednesday, or Thursday. I picked Tuesday through Thursday intentionally because they're the middle of the week: internationally, those days are weekdays no matter what time zone you're in, whereas Monday and Friday include a piece of somebody's weekend somewhere in the world. But we also see 9% lower traffic on Monday. So it's 11% lower on a Friday and 9% lower on a Monday, and what that really tells me is that Monday and Friday are about the same. I don't think people are holding back on pushing code on Fridays. A lot of people take Fridays or Mondays off to extend a long weekend, and that probably accounts for most of the drop. The other thing that accounts for it is time zones: we use UTC for all of our logs and data gathering, and midnight UTC on Monday morning is still Sunday evening in North America, so there's not a lot of work coming from that segment at that time; it's mostly other parts of the world.
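Here's a sketch of the grouping behind those percentages, assuming triggers are bucketed by UTC weekday; the function and input shape are illustrative, not how our data team actually computed it.

```python
from collections import Counter
from datetime import datetime

def weekday_ratios(trigger_times: list[datetime]) -> dict[str, float]:
    """Compare Monday and Friday volume against the Tuesday-Thursday mean.
    trigger_times are UTC timestamps, one per pipeline trigger. For a real
    window you'd also normalize by how many of each weekday it contains."""
    counts = Counter(ts.strftime("%A") for ts in trigger_times)
    midweek = (counts["Tuesday"] + counts["Wednesday"] + counts["Thursday"]) / 3
    return {
        "friday_vs_midweek": counts["Friday"] / midweek,  # ~0.89 in this data
        "monday_vs_midweek": counts["Monday"] / midweek,  # ~0.91 in this data
    }
```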
Another interesting set of questions I love to ask is about languages and language trends in this data set. Are there certain languages that lend themselves to better outcomes across the CI system, across teams? Generally I say take these with a grain of salt: any time I put up a chart of languages, half of it is to provoke you to think about your language choices and why you made them, and the other half is so you can argue with your developer friends about the choices you each made.

For the languages in our sample, across that giant set of 160,000 projects, here's basically the breakdown. Anything below 0.83% just wasn't counted, because this is the top 20. We have a lot of dynamic languages, a lot of statically typed languages, and a good amount of markup-and-configuration-style languages, so several different styles of language are represented.

On throughput, which is basically how often things run on CircleCI, Ruby is the thing running the most often. That could be because Ruby durations are quick, so feedback comes very fast; it could be that there are just a lot of Ruby projects; it could be a number of things. Dockerfile being at 20 was a little surprising to me, because a lot of people build Docker images via CI. But it may be that those builds just get kicked off less often than other software projects, because the Docker image is the final artifact rather than an intermediate one, like compiling a Go program and then placing it in a Docker container.

On success rate at the 50th percentile, this is the order, and I find it interesting that Vue, CSS, and Shell show up at the top. I don't see a lot of tests for CSS, and even with Shell you don't see many tests; usually it's just actions, and those actions may succeed regardless of what happens. A lot of shell exits zero unless you're very intentional about how you write things. Some people probably do use something like bats, the Bash Automated Testing System, but most people, I think, are just using shell to complete a workflow, maybe copy a file somewhere, and in a lot of cases it's going to return zero regardless.
Dockerfile also usually succeeds, in that you built a Docker container; whether or not the container was what you wanted may require further introspection. The first language with a lot of testing up and down the stack is probably Go, at number eight. What I find fascinating there is that Go is a strongly typed language, and then you get into many dynamically typed languages, and then the rest of the statically typed languages sit lower; number 16 starts with Java, where you get back into static typing. I'm really surprised that Go clusters with the dynamic languages, and you'll see this in a couple of the other slides, even though it has the properties of a statically typed language.

On recovery time, number one is Go, which was a little surprising to me as well. Apparently the people running Go programs pay very close attention to their pipelines and make sure good things are happening. The slower recovery times toward the bottom for some of the statically typed languages make sense to me, because their durations are a little longer, and recovery time is often longer when duration is longer. If you know your pipeline takes 35 minutes to run, you may step away from your desk, go to another call or meeting, and not get back to the result for a while. Whereas if you get your results in three minutes, you can correct them very quickly, so your correction time tends to correlate with your duration.

In terms of duration, the fastest things are shell scripts, HCL, and CSS. Then you get into the JavaScript stack, which runs very quickly, then Go, then those dynamic languages again, and then back into the static languages toward the end. What does this tell us? Not a lot, honestly; the languages mostly cluster together as you'd expect. The one thing I'd call out as a little anomalous is the behavior of the Go programmers. In last year's analysis, PHP showed up in a really good light on most of these metrics, which was really odd to me. I remember making recommendations that everybody use PHP, which I didn't believe but found rather amusing. This year it shows up much more like I'd expect, grouped next to Python and near Ruby. So this year the data seems a little clearer on how these all group together.
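The language tables are essentially a group-by over project-level metrics. Here's a toy version of that cut; the frame, the numbers, and the column names are all invented for illustration, not pulled from the real data set.

```python
import pandas as pd

# One row per project: its dominant language plus 30-day metrics (made up).
df = pd.DataFrame({
    "language":     ["Go", "Go", "Ruby", "Ruby", "Shell"],
    "success_rate": [0.82, 0.74, 0.66,  0.71,  0.90],
    "duration_min": [3.2,  4.8,  6.1,   5.4,   0.4],
})

# Median per language: the 50th-percentile cut the charts in this talk use.
per_language = (df.groupby("language")
                  .median()
                  .sort_values("success_rate", ascending=False))
print(per_language)
```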
One of the other things we looked at was branch information, because a lot of teams build off a feature branch and then merge into mainline. How does that all work? The first question that came up for 2020 was: have people really renamed their branches away from master? There was a lot of talk during the social unrest this year that master is harmful language and that we should remove it from our computer science curriculum and our usage as much as we can. GitHub announced they were going to change the default branch away from master, and upstream Git was updated so you can easily configure a different default name. So did master usage actually decrease this year? The answer is no. It really didn't in any significant way, and I put a "yet" in here, because I don't think GitHub has completed all of the changes. Maybe new projects no longer default to master, but nothing has retrofitted existing projects. Once GitHub does a little more of that work, I think you'll see it take off, but right now we don't see any significant movement away from a branch called master.

Another branch topic: teams are innovating and experimenting on feature branches. You can tell because the success rate on the default branch is much higher than on non-default branches. The default branch in most cases is mainline, main, or master, and the success rate there is significantly higher than on the branches that lead into it, the branches you'd open a pull request or merge request from. We see about an 80% success rate on the main branch, and it's 100% above the 75th percentile, so success is happening a whole lot on mainline, versus only 58% on non-default branches. What that tells me is that people are using the CI system to get signal. Maybe they push something thinking, I'm not sure this is going to work, I'm not positive, but I'll let the CI system tell me; or it runs a complicated integration test they can't easily simulate on a workstation. Either way, what this really supports is that experimentation happens outside of mainline and gets merged into mainline, and that's how people get signal, while also taking really good care to keep mainline clean and green. That's great when you have multiple developers working on something, and it's exactly the practice of trunk-based development: you want the trunk to be clean all the way through.

Duration on the default branch is faster at every single percentile than on feature branches at large, which means branches that are not mainline generally finish slower than mainline. And again, that makes sense: there's new experimentation going on, perhaps your tests are flaky or need optimizing, and that gets fixed by the time changes reach mainline. Recovery time is also lower on the default branch, which is where you'd want it to be lower, at every single percentile: on your feature branches, recovery time is significantly longer than it is on the mainline branch.
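The branch comparison is a straightforward partition of runs by whether they happened on the default branch. A minimal sketch, with hypothetical records:

```python
from statistics import mean

def branch_split(runs, default_branch="main"):
    """runs: iterable of (branch_name, succeeded) pairs.
    Returns (default-branch success rate, everything-else success rate)."""
    on_default = [ok for branch, ok in runs if branch == default_branch]
    elsewhere  = [ok for branch, ok in runs if branch != default_branch]
    return mean(on_default), mean(elsewhere)

runs = [("main", True), ("main", True), ("main", False),
        ("feature/retry-logic", False), ("feature/retry-logic", True)]
print(branch_split(runs))
# In the data described here, this split comes out around 80% on the default
# branch versus 58% elsewhere: experimentation lives on feature branches.
```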
The last set of questions I was really interested in, from our data team and the other people who helped put this together, was: what have we learned about the effect of a global pandemic on team performance? What does our data set tell us? We pulled graphs running from 2019 all the way through 2020, in 30-day increments, and as you can see, there are noticeable dips, or changes at least, in the March and April timeframe.

Throughput went up in March and April. Our peak throughput, at least at the 95th percentile, was in April of 2020, and since April throughput has stayed roughly flat at most percentiles, falling just a tiny bit at the 95th.

In duration, a lot of things went up in March and April. The way I tried to rationalize this is that there were a bunch of developers who had been at conferences, on vacation, generally away from the computer, who returned to the computer in March and started working on things. Maybe they're adding more testing, there are more people directly working on the code bases, and so duration goes up. Then you see a little optimization after that: people saying, hey, I don't like my tests taking longer to return, I'm going to go work on that. Duration increased a little in February, the increase accelerated in March, and then it decreased in April. So the hypothesis is that more tests were being added in March, driving duration up, and in April there was a more concentrated effort on optimization. That adds up when you think about how people were sheltering in their homes: first adjusting, then deciding, okay, I'm going to make this a real thing, I'm going to make it good.

Success rates also increased during the pandemic. As everybody went home and sheltered in place, success rates got higher, and part of that, again, could be that people were paying more attention to the things their company directly depends on; they weren't on the road, they weren't as distracted or sidetracked. I do love the 25th percentile just falling off there, and I wonder if that's just 2020 in a graph, a line saying, no, I can't anymore. Or maybe those are projects that are not core to the business and are just being left alone: they failed, and somebody said, well, I just don't have time to give to that anymore. I don't have a clear narrative there, but my hypothesis is that those are non-core internal projects and tools that were not essential and are no longer getting the attention they once got. Success rates were the highest on record in April of 2020, which again is when you had a full workforce at home, fully engaged, putting in longer days. The hypothesis there is that people were working hard on core business stability in March and April. A lot of companies were going through reductions in workforce and other economic pressures, and the thinking became: this little side thing, I don't know that we want to dabble in it anymore; let's make sure the core of our business is clean, works, and is stable. I think we saw that show up in our data, straight from the developers.

And then recovery time: it hasn't changed dramatically throughout the pandemic period, but it did drop a little, and we saw that in some of the 2019-to-2020 data earlier. Generally, the 95th percentile has dropped quite a bit; some of the people who were really good at recovery are still really good. Recovery time has been improving since April.
The organizations with the longest recovery times have definitely improved, and that's indicated by the top line, that green bar there: the longest recovery times have decreased significantly. And remember, this is golf, not bowling; you want the lowest score to win. So people are recovering faster than they used to, and the hypothesis here is that people have fewer distractions when working from home. They don't get pulled into meetings, or go to lunch, or grab a coffee and forget about the build, then come back going, oh, I was working on that pipeline, I should probably go fix it. There's probably a little less of that happening. And when I say fewer distractions, that's of course for some values of distraction; some of us have very new distractions to work with, whether that's children at home, taking care of loved ones, worrying about sickness, or just doomscrolling all day about what's going on. There's a lot that can be distracting. But I do think people are generally a little more in front of the computer, paying a little more attention to the signals in their delivery.

So let me take you through a couple of final thoughts as we wrap up. When mapped against DevOps survey data, CI users at the 50th percentile show up right between the medium and high performer levels. In other words, when you look at the classic layout of how teams are doing, being average at using CI, at the 50th percentile, generally puts you right on the borderline between medium and high performer, and the more you use CI, the more it pushes you up into high performance. That's high performance in terms of the way the DevOps surveys measure it, alongside the way CircleCI measures your CI engagement; CI engagement correlates very highly with DevOps success overall. So if you're average at using a CI platform, you'll be right on the line, and the people really leaning into it have better outcomes in all four of the critical metrics that these DevOps surveys have been using for years. CI is a critical part of your delivery path, of making sure you can deliver software when you want to, and of making sure your team is high performing.

The last takeaway, which I really loved, is that more collaborators means better outcomes. The more people you have working on this stuff, the better you'll be. Part of it, to me, is that you care about the developer experience of the other developers. If they'd have to wait ten minutes for tests when they could wait five, the people working on it make sure they're only waiting five. And if the pipeline is red and somebody else needs it green to do their work, you're going to make sure it's green. Adding collaborators is a really great thing.
So I hope you found this interesting. It's definitely a dive into the data, and really a cursory survey of the whole set, because there's a lot more to go into, a lot of questions you can dig into, and a lot of questions you can ask the data team to slice and dice. If this is interesting to you, I'll remind everybody that we are hiring. We have been hiring like mad over at CircleCI, and we still have many, many positions open in engineering and data, so if this stuff interests you, you're welcome to apply. And I just want to say thank you. Again, I'm Michael Stahnke, VP of Platform over at CircleCI. You can find me as stahnma on basically any internet service you care to look. I also want to give special thanks to Ron and Melissa, who helped gather all this data and sliced and diced it to answer a lot of my questions. I did not do this all on my own; I am not a data scientist, but luckily I know people who are. So thank you so much.