Hi, this is your host, Tim Bhartia, and welcome to another episode of TFR Let's Talk. Today we have with us Stephen Kim, CTO of Kerek, and Nathan Harvey, developer advocate at Dora and Google Cloud. Stephen, Nathan, it's great to have you both on the show.

Great to be here. Thanks so much for having us.

Today we are going to talk about the State of DevOps survey. But before we go there, I would love to learn a bit more about the DevOps Research and Assessment program, or Dora. Talk a bit about it.

Dora, as you mentioned, DevOps Research and Assessment, is an ongoing research program that's been running for, wow, almost a decade at this point. This research program really seeks to answer the question: how do teams get better at delivering and operating software, and does that matter to organizations? And of course, not surprisingly, what we find is that, yes, technology drives value and innovation across organizations of every shape and size, and that there are proven ways not only to measure but also to improve how teams accomplish that software delivery and operations performance.

Can you talk a bit about what kind of data is being collected, and how, and what we learn from this data?

When we talk about Dora, we really think about, of course, commercial outcomes, how an organization is performing. But then we have to look at some metrics to understand how our software delivery and operations performance is going. For the metrics, we use four key measures of software delivery performance: two that look at throughput, two that look at stability. On the throughput side, we ask: how long does it take for a team to get changes into production, and how frequently are we doing that? We call those, respectively, your change lead time and your deployment frequency. Of course, we want to move faster, but we need to do so safely, in a way that's stable.
And so on the stability side, we ask: how frequently are the changes that you're pushing causing a failure or interrupting your customers' experience? And then, finally, we know that we don't live in a world where there is zero risk; there's always risk in our systems. So we have to look at: when we do have an outage or an incident, how quickly does the team react, respond, and restore that service to our users?

So, to recap, the four metrics are deployment frequency, change lead time, change failure rate, and time to restore service. Taken together, these four measures help us really assess how well any one application or service is performing.

Getting to your question of how we get this data: each year we launch a survey, go out into the world with it, and try to collect as much data as we can. And in fact, we'd love for you to participate today at dora.dev slash survey. In that survey, we ask about those four key metrics, but then we go deeper and ask about the different capabilities that drive performance on those four key metrics. Those capabilities cut across technical, process, and cultural capabilities. Technical: are you using version control? What does your test automation look like? Process: what does your change approval process look like? How much work do you have in process at any given time, and how do you prioritize that work? And culture: how does your team collaborate with one another? When there is a failure, what lessons do you take from it? Or do you just look to figure out who to fire because of that failure? Please, please don't do that. These are the types of things we dig into in the survey each year.

What Nathan is talking about here was actually really useful to me in understanding how Dora arrived at the four metrics.
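For listeners who want a concrete feel for the four measures Nathan recaps above, here is a minimal sketch of how a team might compute them from its own deployment and incident records. The data structures and numbers are made-up illustrations, not Dora's survey methodology:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: (change merged, deployed to production, caused a failure?)
deploys = [
    (datetime(2023, 6, 1, 9), datetime(2023, 6, 1, 15), False),
    (datetime(2023, 6, 2, 10), datetime(2023, 6, 3, 11), True),
    (datetime(2023, 6, 5, 8), datetime(2023, 6, 5, 20), False),
]
# Hypothetical incident records: (outage began, service restored)
incidents = [
    (datetime(2023, 6, 3, 12), datetime(2023, 6, 3, 14)),
]

days_observed = 30

# Throughput: how often we deploy, and how long a change takes to reach production.
deployment_frequency = len(deploys) / days_observed        # deploys per day
change_lead_time = median(d - m for m, d, _ in deploys)    # merge-to-production time

# Stability: how often a change breaks things, and how fast we recover.
change_failure_rate = sum(failed for *_, failed in deploys) / len(deploys)
time_to_restore = median(end - start for start, end in incidents)
```

In practice this kind of data would come from instrumenting the release pipeline and incident tracker, which is what makes the metrics objectively measurable alongside the survey's self-reported answers.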
Obviously, the delivery and performance of a software organization is very complex and nuanced. The things that Nathan is describing here, what we gather in the survey and try to do introspection on, really highlight how the four Dora metrics can be an effective conversation piece, right? You go and say, okay, let's start here, but beneath that surface there are, as we imagine, a lot of different directions we can explore, ask questions about, measure, and gather input on, to get ultimately toward a productive goal. What should we do, right? What might be some efforts we kick off? What might be some focus areas we start looking at a little more closely as a result of the conversation we've had? The fact that Dora is widely accepted has been really, really useful for a company like Kerek, for example, where we come in on a common substrate on which we can embark on a productive conversation.

If you look at the previous 2022 report, can you share some of the highlights that were there? And then we'll see if you're expecting something different this year.

So, as part of the research we publish an annual report, the Accelerate State of DevOps report, and in the 2022 research and the subsequent report, one of the tight focus areas for us was security. We often think about security as the department of no, the department that's slowing down our deployment frequency, but we wanted to test that in our data. And one of the things we saw is that the teams that have better security practices are actually able to have better software delivery practices and better organizational performance. So better security doesn't slow you down. But probably the most interesting thing we found about security was that the number one predictor, the foundational capability that predicts better security practices, was culture. It wasn't a specific tool.
It wasn't a specific technology. It was the culture of your team. How does your team work together? How do we collaborate? How do we learn from one another? Do we have a culture of learning within our organization? It might not come as a surprise, but we have data that backs this up and shows that culture is that key driver.

One thing that Nathan and I spoke about earlier in the week is that the survey gathers responses from humans, while the Dora metrics as described are objectively measurable. What is your lead time? What is your defect rate? What does it mean in terms of remediation? And we talked about how what humans say happens compares with what we go and observe and measure. What we found in our conversation, looking at the information, was that they do align, and they converge over time. That's really interesting to me, because obviously the adoption of best practices, the endeavor to improve developer output and team productivity, is a combination of technology, process, and organizational behavior, culture, really, as Nathan describes. And so I really appreciate how the survey, and honestly, a little bit of a confession, Nathan, sorry, this was the first time that I took the survey, but I was pleasantly surprised by how broadly it captured information from a human perspective, in support of and relevant to the Dora metrics.

Thank you for taking it, Stephen. The more insight we get, the better. One of the things that's key and foundational to Dora is that it is program and platform agnostic. Dora recognizes that this is a journey of continuous improvement, and improving doesn't mean just throwing new tools at the problem all the time. It's us, it's the people in the system, that matter probably more than anything else.
And so your developers, your operators, your QA testers, your security folks: we really want to get at the heart of their lived experience and understand what it tells us. You mentioned a journey of improvement. It's the people who are on that journey, and we have to maintain that commitment over time to really build that practice of continuous improvement. So it is so important that we start with the people.

These surveys, these reports, they help not only the community, the ecosystem, Google, and so on. I also want to look at the folks who are taking these surveys. How does it benefit them? When folks like Stephen take the survey, do they know what they are going to get out of it as well?

I think that's a really interesting question, because of course it helps us all to learn from each other. But how does it help you to take the survey? Well, first, it's an investment, and I want to be very clear about that. Sharing your insights into this data is an investment of your time and your capacity. We do try to keep the survey to about 15 minutes, so it's not a huge investment. But what's your return on that investment? Importantly, and I hear this feedback all the time, Stephen actually mentioned it just a few minutes ago: Dora is about driving discussions. As an individual, as I sit down to take the survey and carefully consider the questions that are posed, I immediately start to identify areas where my team is doing really well, and probably some areas where my team has opportunities to improve, maybe things that I hadn't even thought about. And so after I take that survey, going and sharing that with my team to help drive improvement is, I think, so powerful.
And so the more people you can get on your team to take the survey, frankly, the better. Maybe we'll find places where we're well aligned, and maybe, more interestingly, we'll find places where we disagree on some of the answers. That's where I want to go and focus a conversation with the team. Let's learn from each other.

And on the survey, if you look at organizations, it's not that one person's answers speak for the whole organization; different teams look at things differently. So the more people within the same organization take the survey, the better it is, not only for the organization but for the survey itself.

I should also mention, of course, that the survey is completely anonymous. We don't know who you are or who's taken it. That is, of course, a way to encourage honest responses so that we can get the best insights out of this data.

You were talking earlier about some of the key findings of the previous survey. Based on those, based on whatever you learned, are there any changes, improvements, or new questions that you are asking this year?

We certainly evolve the questions that we ask each year. There are some tried and true things that we ask every year, like those four key metrics. But we also have to be cognizant of how long the survey can be. We want to keep it relatively short, so we have to stay focused; otherwise no one's going to complete a survey if we ask an hour's worth of questions. We'll get no data, essentially. So this year we are focusing in on a couple of things. First, I don't know if you've heard of this term or not: AI. A lot of people are talking about AI. We aren't going deep into AI within this year's survey, but we're trying to get a pulse: where is AI actually being used within your development process today? I think having that information now, here in 2023, is going to help us as we continue this research. So that's one area that's brand new this year.
We're asking a couple of questions about that. We're also doing something this year that I think is interesting with one of those four key metrics, the change failure rate. For change failure rate, we've asked questions like, what is your change failure rate? And we've given you options to select, like zero to 15%, 16 to 30%, et cetera: sort of these buckets. What we find is that there's not a lot of difference across teams, and we hypothesize that that's because the buckets are just too wide. So we've allowed each individual, basically on a slider, to say it's 1%, it's 3%, it's 7%. We think that will give us richer data.

And then the third thing we've done this year, really importantly, to encourage more global participation: we've done some localization of the survey. Did I mention that it's at dora.dev slash survey? If you go there, you can today take the survey in Portuguese and Japanese, and I think we have a couple of other languages, like French, Spanish, and maybe even Russian, coming shortly behind that. Frankly, we're running an experiment: if we make the survey available in other languages, do we see greater global participation? So help us test that hypothesis, please.

Can we talk about the involvement of organizations like Kerek with the survey, what they bring to the table, and of course how it benefits the larger ecosystem?

The things that Dora and the larger community are focusing on are right in our wheelhouse. As we talked about, Dora is a really important conversation starter; it gets the conversations going. And more so, just about every organization is thinking and talking about this, but Dora really guides them toward things that we can go and pick up and then do something with, which is the important part as well, right? Kerek's model has always been to embed and work with the organization.
We go and employ things like lighthouse and pilot models, where we go and prove hypotheses out. And so we can come sit with you and run through a Dora-oriented workshop to take baseline measurements of where you are and what some good targets for you might be, considering what the things asked in the survey, for example, might yield. And then we can work toward specific things that start moving the needle on the metrics that we collectively agree are important to you. So maybe we're on the back side of the assessment part, but once again, starting the conversation is really, really critical. And I think that makes Kerek's interest in the survey, dora.dev slash survey, really important to us. It's really important to us that companies join, in a deliberate, guided, tried and true way, the larger community that is engaged in this journey, rather than trying to do a one-off on their own. They can obviously do that as well, but I think the Dora survey and the Dora conversations will be really, really helpful for the full cycle of getting tangible benefits to the organization out of an exercise that starts with something like the Dora survey.

Yeah, plus one to all of that. We are so lucky to get to work with partners like Kerek. So many organizations see the metrics and use the metrics to help drive conversations, but then what comes next? How do we put that into action? That's where a coach and a partner like Kerek can really come in and help a team start building up those muscles. I really equate it to building up muscles: how do we get better? Well, how do we get better at getting better? Kerek is a great partner to help us along that journey.

Yeah, if I can highlight two specific things that I think are effective and that I'm proud we've done.
Number one is we actually do instrumentation. We go and build dashboards, we instrument your code pipeline and your release pipeline, and we deliver for our clients the baseline and then, on an ongoing basis, the progression, right? And it's almost always up and to the right. Where it isn't, it's a surprise, and it brings about interesting further conversations of, wait a minute, why is that happening, right? Which then leads to further action. So it's a very, very productive one. That's number one.

And number two, the thing that I'm particularly proud of, is that we deliberately infuse culture into our engagements. As Nathan was pointing out, it's about culture and it's about people and it's about mentality. That starts with individuals, but it also speaks to organizational behavior, which then leads you to conversations about everything from empathy to organizational structure and everything in between. There's a lot to be gained from walking through this, taking the experience that not only we, but anyone looking in from the outside, can bring: how do other companies with different measurements do this? What are they doing that might be different from what we are doing? The important part is engaging outside your walls in a deliberate exercise.

Nathan, Stephen, thank you so much for taking time out today, and of course for explaining the importance of Dora and the survey. Thanks for all those insights, and I would love to chat with you folks again when the results of the survey are out, so we can actually discuss what we learned this year.

Thank you. Thank you. Thank you so much.