Earlier today in our keynote, we heard Gartner speak about DevOps value streams. That is the end-to-end workstream necessary to deliver high-quality software to your end users. One of the purposes of a DevOps platform is to optimize that value stream, to fulfill the agile promise of delivering more value more frequently to your end users. But you can't do that until you understand the stream itself. You can't optimize and remove high-priority blockers if you don't know what those blockers are or why they're blocked, for instance. As a DevOps platform that sits underneath and supports everything you do to deliver that software, GitLab has a lot of data to help you understand that context and make those decisions. But how do you start? In our next session, GitLab's own Larissa Lane will speak about how to use GitLab to access all of that data, understand where your problems are, and ultimately quantify your responses to those problems so you can benchmark how well you're doing and improve. Let's take a listen.

Thank you for joining my presentation about optimizing your value streams to get ahead of your competition. I'm Larissa Lane, the product manager for the Optimize Group at GitLab. The Optimize Group focuses on high-level metrics that provide visibility into your end-to-end development flow and DevOps journey. Before I get started, a note about forward-looking statements: our roadmaps at GitLab are ambitious, and we embrace change without notice. Anything I say in this presentation about future plans could change. Information presented in this session should not be used for the purpose of any material purchase decisions, nor is it a promise of any sort. Today I'll be talking about why speed matters and the different metrics you can use to measure it. I'll be focusing on how long it takes to deliver value to your end users, starting with an overview of the benefits of optimizing value streams.
Then I'll talk about what metrics are available to help you understand the time it takes you to deliver value, and about measuring optimizations to your cycle time. So, just to quickly touch on some of the benefits of moving faster. Firstly, by delivering value faster, you can reduce costs. By becoming more efficient in every stage of the software development lifecycle, you reduce the personnel time it takes to deliver value to end users. If you'd like to read some use cases on how companies have reduced costs by moving faster: in 2020, Forrester conducted a Total Economic Impact of GitLab study. We have the study available for download on our website, and it provides some specific numbers as well as more information about the framework they used to arrive at them. Another benefit of moving fast is increased customer satisfaction. When you're able to deploy more frequently and turn around customer requests more quickly, you improve customer satisfaction and you get feedback faster. And the final benefit I want to call out is that you increase competitiveness. If you're able to move faster than your competition, you can stay ahead of them in releasing new features and also realize revenue from paid features sooner.

There are a couple of terms you'll hear me mention in this presentation. Firstly, a value stream. A value stream is the flow of activities involved in delivering a product or service to an end user. It's all of the activities from coming up with an idea or receiving a customer request all the way through to delivering that value to your end user. Value stream management is the process of identifying what your value streams are, mapping the flow of work required to deliver a value stream, and then tweaking your processes to optimize the flow. So the first step to getting better is understanding your current state. How efficient are you today? Where is work flowing well, and where are things getting held up?
So let's look at some of the key metrics that can help you understand your current state. First, there are the DORA 4 metrics. This is a set of metrics established in 2019 by DORA, which stands for DevOps Research and Assessment, as a benchmark for measuring velocity and stability. Stability is something I haven't touched on yet, but it's an important aspect to consider when pushing for faster delivery; I'll talk more about stability in a minute. The DORA metrics quickly gained popularity and became an industry standard. It's actually one of the most common topics I get asked about when discussing metrics with large enterprise customers. The one you're seeing on the screen is deployment frequency. Deployment frequency shows how often you are delivering value to customers per day. At GitLab, we deploy on average 62 times per day in the GitLab project. Deploying frequently has a number of advantages. It increases competitiveness by making you more responsive to market needs. It increases user satisfaction because your users are getting changes faster. You get faster feedback so you can iterate as needed. And you increase your return on investment by making paid features available sooner and generating revenue sooner. DORA did some research and set benchmarks for what they consider to be an elite performer and high, medium, and low performers. This is a generalization, because daily deployments are not appropriate for all industries or all customer bases for good reason, but in general, a high performer is considered to be a team that is deploying between once per day and once per week. Another DORA metric is lead time for change. This metric measures how efficient your integration, deployment, and release processes are. According to DORA, a high performer takes less than one week to start development work on a change and get it deployed all the way through to production.
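To make the banding concrete, here is a minimal sketch in Python of how you might compute a deployment frequency from a list of deployment dates and sort it into rough DORA-style bands. The function names and the exact thresholds are illustrative approximations of the bands described above (elite deploys at least daily, high between once a week and once a day), not an official DORA or GitLab implementation:

```python
from datetime import date

def deployments_per_week(deploy_dates: list[date]) -> float:
    """Average deployments per week over the observed span of dates."""
    if not deploy_dates:
        return 0.0
    span_days = (max(deploy_dates) - min(deploy_dates)).days + 1
    return len(deploy_dates) / (span_days / 7)

def dora_band(per_week: float) -> str:
    """Rough DORA-style banding, per the thresholds mentioned in the talk:
    elite = at least daily, high = between weekly and daily,
    medium = roughly weekly to monthly, low = less often than that."""
    if per_week >= 7:
        return "elite"
    if per_week >= 1:
        return "high"
    if per_week >= 1 / 4:
        return "medium"
    return "low"
```

For example, two deployments dated a week apart work out to 1.75 per week, which the sketch classifies as "high", consistent with the once-per-day-to-once-per-week band.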
In GitLab, we measure this as the time from when a merge request is merged until it is deployed to production. These two DORA metrics are available through GitLab's DORA API and also through CI/CD analytics in GitLab. The next set of metrics I'll cover are in Value Stream Analytics, or VSA. VSA is a tool in GitLab where you can create custom value streams and map them to your workflows by defining stages. VSA has several metrics that provide visibility into end-to-end flow times. Firstly, lead time. Lead time measures the entire cycle from when a customer request is made until the feature is available to the customer. This is useful if you need to track the average time it takes you to respond to customer requests. Then there's cycle time. Cycle time is the time from when the first merge request is linked to an issue until the issue is closed. This metric skips the design and planning phases and focuses more on the actual development time. And finally, there's the overview metric. This is the sum of all of the stages you've created in your value stream. So these are the stages in my value stream along the top. Some of them are cut off, they run off the edge of the screenshot, but the overview metric shows you the sum of each of those stages. This is a great metric to monitor because it is mapped to exactly the steps in the flow that you are most interested in, and you can customize the stages and events that contribute to the overview time. For example, the screenshot shows the workflow we follow on the Optimize engineering team. We created scope labels that we use to track when workflow items move from one stage to the next, all the way from problem validation through to final verification and deployment of the feature. You can see from the overview tab here that, in our case, it takes us on average about two weeks from beginning to end to build a feature.
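The cycle time definition above is easy to express in code. The following is a small sketch of that definition, not GitLab's actual implementation: given pairs of timestamps (when the first merge request was linked to an issue, when the issue was closed), it computes each cycle time and the median across them. The function names are hypothetical:

```python
from datetime import datetime, timedelta

def cycle_time(first_mr_linked_at: datetime, issue_closed_at: datetime) -> timedelta:
    """Cycle time as defined in the talk: from the moment the first
    merge request is linked to an issue until the issue is closed."""
    return issue_closed_at - first_mr_linked_at

def median_cycle_time(pairs: list[tuple[datetime, datetime]]) -> timedelta:
    """Median cycle time across (linked_at, closed_at) pairs; the median
    is less skewed by one long-running issue than the mean would be."""
    durations = sorted(cycle_time(a, b) for a, b in pairs)
    mid = len(durations) // 2
    if len(durations) % 2:
        return durations[mid]
    return (durations[mid - 1] + durations[mid]) / 2
```

The same shape works for lead time; you would just swap in the request-received and feature-available timestamps.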
With VSA stages, you not only get visibility into your end-to-end flow times, but also the time spent in every individual stage in your flow. This helps you understand which stage your work items spend the most time in and helps you identify bottlenecks. For example, I can see here that work items spend the most time in the Ready for Development stage. Once work items are picked up by an engineer, they only take three days to actually do the coding and one day to do review. I've talked a lot about being faster, but being fast and efficient has little meaning if the quality of your output is terrible. It's important to also measure the quality and reliability of products and services. This is where the other two of the four DORA metrics come in. Change failure rate is the percentage of changes to production, or released to users, that result in a degraded service and subsequently require remediation, for example a hotfix or a rollback. To be a high performer, DORA says that less than 15% of changes should result in a degraded experience. Then there's the mean time to recover metric. This is how long it generally takes to restore service when a service incident occurs, for example an unplanned outage, or how long to release a fix in response to a defect that got released to production. DORA's guide for a high performer is less than one day to make a fix. Now that you understand the current state of your value stream, you can take action to reduce the time of some of the slower stages. Let's say you introduce a goal to have smaller MRs to reduce the complexity of dependencies, testing, and reviews. Or perhaps you want to focus on a more efficient reviewer process to reduce your review time; for example, you can introduce code owners or start using the assigned-reviewer feature in GitLab. Or you might experiment with breaking issues down to a smaller weight so that you can deploy more frequently.
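The two stability metrics just described reduce to simple arithmetic. Here is an illustrative sketch, with hypothetical function names, of computing change failure rate from deployment counts and mean time to recover from incident start/end timestamps; it mirrors the definitions above rather than any particular tool's implementation:

```python
from datetime import datetime

def change_failure_rate(total_deploys: int, failed_deploys: int) -> float:
    """Percentage of changes to production that degraded service and
    needed remediation (a hotfix or rollback)."""
    if total_deploys == 0:
        return 0.0
    return 100.0 * failed_deploys / total_deploys

def mean_time_to_recover(incidents: list[tuple[datetime, datetime]]) -> float:
    """Average hours from incident start to service restoration,
    given (started_at, restored_at) pairs."""
    if not incidents:
        return 0.0
    total_seconds = sum((end - start).total_seconds() for start, end in incidents)
    return total_seconds / len(incidents) / 3600
```

With 12 failures out of 100 deployments, the rate is 12%, inside DORA's sub-15% high-performer band; a mean recovery of a few hours is likewise within the less-than-one-day guide.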
Whatever your goal is, you need to be able to track the impact of changes. To do this, you can use the date selector in VSA to compare stage times across different time periods. If you make a process change in August, for example, you can compare stage times in the months before and after the change. You may also want to measure whether a process change has a negative impact on stage times. This is an interesting use case that a customer recently shared with me: in their case, they rolled out SAST testing to a limited number of projects and then used VSA to see if there was any negative impact on the time it takes to run test and deploy pipelines before determining whether to roll it out to more projects. Another helpful tool for tracking the impact of changes is the deployment frequency graph, and also the lead time for change graph, in CI/CD analytics. This screenshot shows the lead time for change graph, and this is the deployment frequency graph. These were added fairly recently, and they're available in the Ultimate pricing tier.

But what about business value? Measuring speed and reliability is just one aspect of value stream management. It's also important to measure the business value of what you're delivering. This means not just measuring the speed and reliability of your outputs, but also the resulting outcomes for the business. How do you really know if you are delivering the features that customers most want? Are your outputs giving the company a competitive advantage? Are they aligned with the company vision? Are they actually increasing customer satisfaction? Do employees feel like they are contributing to something important and meaningful, and therefore have a high level of job satisfaction and low employee turnover? Not only are business outcomes important to measure, it's also important to measure how much it costs to build specific features or products, so you can understand the return on investment.
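The before-and-after comparison that the VSA date selector gives you in the UI can be sketched as a few lines of Python. This is an illustrative example, assuming you have exported per-item stage durations (in days) tagged with the date each item finished the stage; the function name and data shape are hypothetical:

```python
from datetime import date
from statistics import mean

def compare_stage_times(
    samples: list[tuple[date, float]], change_date: date
) -> tuple[float, float]:
    """Average stage duration (in days) before and after a process
    change, given (finished_on, duration_days) samples."""
    before = [days for when, days in samples if when < change_date]
    after = [days for when, days in samples if when >= change_date]
    return (
        mean(before) if before else 0.0,
        mean(after) if after else 0.0,
    )
```

If you made a process change on August 1st, passing `date(2024, 8, 1)` as `change_date` splits the samples into the two windows you want to compare; a drop in the second number suggests the change helped, while a rise flags a possible negative impact like the SAST rollout scenario above.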
Business outcomes and investment are tricky to quantify. Adding a weight or time spent to each issue will give you some insight into investment. But to truly understand the investment, you need to track the cost of the many other inputs that go into building a product, such as design, product management, and marketing. These are all things we are thinking about at GitLab. For more details on how we are thinking about measuring business outcomes and return on investment, tune in to the presentation by Gabe and Kristen, titled Agile Management with GitLab, Present and Future. While there are plans on the roadmap to make it easier to measure the value of what your teams are delivering, there are tools available today to provide you some insight. For example, the tasks-by-type graph in group-level VSA shows a breakdown of the type of work being delivered. This graph is customizable based on filtering by labels. Consider applying labels to your issues and merge requests to give you greater insight. For example, at GitLab, we use labels such as feature, bug fix, tech debt, direction, SUS to measure our usability score, customer for things that are requested by a customer, and CSAT for issues that address customer satisfaction feedback. Filtering this chart by labels can help you understand how much time is spent on unplanned and abandoned work, how much effort is being made to avoid the accumulation of tech debt, and how much time is spent on high-priority work that contributes to a high-level company goal. I hope this has given you some ideas for how to streamline workflows to beat your competition and grow happy teams and customers. Thank you.
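The label-based breakdown behind a tasks-by-type view can be sketched simply. This is an illustrative example only, not how GitLab builds the graph: assuming you have exported issues as dictionaries with a `labels` list, it counts work items per tracked label, with one item allowed to contribute to several labels:

```python
from collections import Counter

def tasks_by_type(issues: list[dict], tracked: set[str]) -> Counter:
    """Count work items per tracked label, letting a single item
    contribute to every tracked label it carries."""
    counts: Counter = Counter()
    for issue in issues:
        for label in issue.get("labels", []):
            if label in tracked:
                counts[label] += 1
    return counts
```

Tracking labels like feature, bug fix, and tech debt this way gives you the same kind of type-of-work breakdown described above, which you could then chart or compare across milestones.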