Hi everyone, this is Ashish Kutela, leading the sales enablement session today, December 6th. We are going to talk about the DORA research that's published annually on the state of DevOps. So let's start with: what is DORA? DORA stands for DevOps Research and Assessment. It's an independent research entity led by some industry thought leaders, such as Gene Kim. What they do every year is run a comprehensive survey across large groups of enterprises that have adopted DevOps, and summarize the results: what's working, what's not working, and key findings that help us understand what's going on in the world of DevOps at the enterprise level. They've been doing this for over five years, with a cumulative total of more than 30,000 survey responses since they started. So we partnered with them. GitLab was a sponsor of the 2018 research, and we wanted to share some results here for you to understand what's working in these large enterprises, where the pitfalls are, and where the enterprises need our help. We also jointly presented a webinar with Gene Kim on this, a recording of which is available as a resource. The report itself is not available for us to distribute to our customers; we did not pay for the distribution rights. The report is available internally for you to consume, and we'll make that link available as well. So... Ashish, I think a key point is that if we want to get people's registration info, we can always have them register for the webinar recording, and the link to that is in the chat. So if you're going to reach out to someone or do an email blast and you want people to come back and watch it, the link is in the chat and you can have people register for the webinar. Absolutely. So let's start with a key term that will be used in the report several times: SDO, or SDO performance. It stands for software delivery and operational performance.
Essentially, the report defines SDO as the driver that helps these enterprises unlock competitive advantages against their competitors. So it's not just the practice of creating software, but also how you deliver it, how you operationalize it, and how you use it to drive the business. SDO is a key term used several times in this study. As they went through the study and looked at SDO performance, one thing that clearly stood out is that cultural challenges, and the efforts to improve cultural practices across different teams in these enterprises, were a key driver of SDO performance. It falls into two big buckets. One: how do you let your teams perform to their true potential through autonomy and trust? Not only in this survey, but also as we talk to large enterprises, we find that a mandate or a handed-down recipe for how to do DevOps usually does not work. However, influencing that change, letting teams have autonomy, and trusting them to do the right things as long as they're improving is a key factor that really helps these large enterprises move forward. Secondly, creating a culture of learning from failures, and from successes: what has worked, what has not worked, what needs to be improved. We do this at GitLab too; we call it retrospectives. So that's a very key thing to remember as we go into conversations with big, large enterprises: how are they influencing this culture, and how are they learning from their failures and successes to move forward? Another key finding this year is that while large enterprises are adopting DevOps and starting to see successes, there are groups who are seeing success and groups who are not seeing as much success, and we'll talk about both of those groups.
Among those who are actually finding that DevOps works for them, there's a key set of enterprises performing much better than those who are generally finding success with DevOps, and the report calls them the elite performers. On some key metrics: they were deploying about 46 times more frequently than the average enterprise doing DevOps, they were much faster to recover from incidents and much faster from commit to deploy, and their change failure rate was about seven times lower. And this is important: not only how do you improve, but how do you take it to the next level and become much better than your peers? They found about 7% of these high performers were really elite performers. This is key, because we want to help our customers fit right into that elite category, so how do we do that? Some of the benchmarks these enterprises use to see whether they're doing better or worse as they adopt these changes are listed here: deployment frequency, lead time for changes, time to restore service if something went wrong, and change failure rate. I won't go through each of the definitions; they're self-explanatory. But as you start to see the difference between elite, high, medium, and low performers, you'll see that everybody wants to be in the elite category. So what does it take to get there? That's what this slide talks about. What were the things that some of these elite performers, or even the ones who were doing well, doing right? The number one thing, among others, was they were doing cloud right. What it means to do cloud right is listed in the right-hand column. It's not just about adopting cloud, but using cloud in ways that matter.
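As a rough illustration of how a team might compute those four benchmark metrics from its own deployment history, here is a minimal Python sketch. The record structure and field names are hypothetical assumptions for the example, not something defined in the DORA report.

```python
from datetime import datetime

# Hypothetical deployment records: when the change was committed, when it was
# deployed, whether the deploy caused a failure, and when service was restored.
deployments = [
    {"commit_at": datetime(2019, 1, 1, 9), "deployed_at": datetime(2019, 1, 1, 12),
     "failed": False, "restored_at": None},
    {"commit_at": datetime(2019, 1, 2, 9), "deployed_at": datetime(2019, 1, 2, 15),
     "failed": True, "restored_at": datetime(2019, 1, 2, 16)},
    {"commit_at": datetime(2019, 1, 3, 9), "deployed_at": datetime(2019, 1, 3, 11),
     "failed": False, "restored_at": None},
]

def dora_metrics(deployments, period_days):
    """Compute the four DORA benchmark metrics over a reporting period."""
    n = len(deployments)
    # Deployment frequency: deploys per day over the period
    frequency = n / period_days
    # Lead time for changes: average hours from commit to deploy
    lead_time_h = sum((d["deployed_at"] - d["commit_at"]).total_seconds()
                      for d in deployments) / n / 3600
    # Change failure rate: share of deploys that caused a failure
    failures = [d for d in deployments if d["failed"]]
    failure_rate = len(failures) / n
    # Time to restore service: average hours from failed deploy to restore
    restore_h = (sum((d["restored_at"] - d["deployed_at"]).total_seconds()
                     for d in failures) / len(failures) / 3600) if failures else 0.0
    return {"deploys_per_day": frequency, "lead_time_hours": lead_time_h,
            "change_failure_rate": failure_rate, "restore_hours": restore_h}

print(dora_metrics(deployments, period_days=3))
```

The point of a sketch like this is that all four numbers fall out of data most teams already have in their deploy logs, which is what makes the elite/high/medium/low comparison actionable.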
And they found in the survey that only about 22% of teams were actually doing this right. The characteristics are listed here: on-demand self-service, broad network access, resource pooling, and so on. If you did these things right, you were 23 times more likely to outperform your peers. That's very telling, and you see it in our customers' environments as well. Another thing these enterprises did well: they're adopting open-source software. The more teams adopt open source, letting their people experiment and work with new technologies, the more likely those teams are to do well, because they have the autonomy to use the software they want, software that actually works for them. Open source is a big component of that. The survey also found that industry didn't matter. We're seeing adoption across industries. Of course we have much more traction in certain industries than others, but if they did the right things, the industry really didn't matter. From high tech to pharma to banking, as we see ourselves, to automotive, there's adoption across all these industries pretty evenly. Some industries are further along the maturity curve than others, but don't be discouraged by the industry you walk into; this success is possible in large companies across pretty much any industry. Another key point that's really important is testing, specifically automated testing. From my experience at previous companies, as well as the surveys and dialogues with lots of large enterprises, testing is often a bottleneck to moving faster and to shortening the feedback loop so bugs are found sooner. The more you can automate your testing and run it on a continuous basis, the better your SDO performance, software delivery and operational performance.
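To make the automated-testing point concrete, here is a minimal example using Python's standard-library unittest. The function under test and its behavior are hypothetical, invented for this illustration; what matters is the pattern of tests that run on every commit instead of waiting on a manual QA pass.

```python
import unittest

def rollout_percentage(deployed, total):
    """Share of servers running the new release, as a percentage (hypothetical helper)."""
    if total <= 0:
        raise ValueError("total must be positive")
    return 100.0 * deployed / total

class RolloutTests(unittest.TestCase):
    # Tests like these run automatically in a CI pipeline on every commit,
    # so a regression surfaces minutes after the change is pushed.
    def test_full_rollout(self):
        self.assertEqual(rollout_percentage(10, 10), 100.0)

    def test_partial_rollout(self):
        self.assertAlmostEqual(rollout_percentage(1, 3), 100.0 / 3)

    def test_rejects_empty_fleet(self):
        with self.assertRaises(ValueError):
            rollout_percentage(0, 0)
```

Run with `python -m unittest` locally or in the pipeline; the same suite gating every merge is what turns testing from a bottleneck into a continuous feedback loop.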
I promised you this term would pop up all the time. We talked about culture, and culture is not just about letting teams have autonomy, trusting them, and giving them the ability to do what works best for them, but also putting in place a culture of learning from failure and improving from it, and not being punitive when something fails. That's really important in these large enterprises. It's easier said than done, but those who get it right do much better. We talked about autonomy. I want to jump quickly to this slide; this slide is important. We talked about what's really working, the key indicators, factors, and habits that large successful enterprises are adopting to be successful. On the other hand, the enterprises who are adopting DevOps but not doing well are looking at the wrong vectors, the wrong metrics, and the wrong habits. They do things almost opposite to what we talked about, or don't factor in some of it, leading to lower deployment frequency, higher lead times, and higher deployment failure rates. We need to guard against that, and we need to educate our customers on how we do this. And I jumped to this slide, which will be available. We talked about how GitLab helps, as well as how GitLab does this ourselves. This slide was meant to convey the concept of MVC, the minimal viable change, and how you can get to smaller batch sizes, et cetera. I want to wrap this up so that we can have some time for questions. To summarize what the survey found, the ways enterprises are improving their software delivery and operational performance come down to four key points. They're adopting the cloud characteristics we talked about, in the ways listed, to really realize the benefits of cloud adoption.
The culture, the practices, the support from upper management, and the investment in technology are all really important. We did not talk about outsourcing, but companies are trying to figure out whether outsourcing works, and how, to help their DevOps transformation. The verdict on that is still not in, but if you do make that decision, it needs to be decisive: you need to empower the teams you outsource to and work very closely with them. And finally, you need to optimize for throughput, stability, and availability as metrics. We talked about the slide with the four key metrics that some of these large organizations keep their eye on, and how they continuously improve and optimize against them, giving their teams the leeway to do that. So I'm going to stop sharing at this point and open it up to questions, so that we can answer any questions you might have and discuss any findings.