We're going to start with how value is defined and tracked with GitLab. GitLab Value Stream Management focuses on increasing the flow of business value from customer request to customer delivery. Its systematic approach to measuring and improving flow helps organizations shorten time to market, increase throughput, improve product quality, and optimize for business outcomes, all within a single application.

We start with Value Stream Analytics (VSA), which is built into GitLab and provides the ability to measure the time spent going from an idea to production, both for each of your projects and for the organization overall, so think the parent group level or a specific subgroup team. Value Stream Analytics displays the time spent in each stage defined in the process. VSA is useful for quickly determining the velocity of a given team, product, or project, and it points to bottlenecks in the development process, which can help management uncover, triage, and identify the root cause of slowdowns in the software development lifecycle.

Here we're viewing the live dashboard of GitLab's actual product value stream. We mentioned gitlab.org before, and this is the dashboard that lets us use our own tool's analytics to build a stronger GitLab product. The stages in this case are the sample process, actually the defaults for anyone using Value Stream Analytics: issue, plan, code, test, review, and staging, plus a QA stage that I added as an example. Let's click over to the default-only VSA, and now we see just the default stages. So tracking is built into GitLab, collecting and showing data from across the software development lifecycle. These default stages come out of the box, they're free, and they don't require customization to get immediate value.
In addition to these default stages, we offer extensive customization so the stages can be best suited to your organization. To show what I mean, I can click on the plan stage and switch over to this tab, and once it loads we have a snapshot into what the plan team has done. They've gone in and customized their flow to align with what best fits their organization, their subgroup within gitlab.org. They have several different stages here: validation backlog, problem validation, design, solution validation, planning breakdown, and a couple more.

Going back to our default Value Stream Analytics, let's walk through a simple example: a fictional workflow of a single cycle that happens in a single day, passing through all six stages. If a stage doesn't have a start and a stop mark, it isn't measured and hence isn't included in the median time, so the example assumes that milestones are created and that CI is configured for testing and setting environments.

An issue is created at 9 a.m.; this is the start of the issue stage. The issue is added to a milestone at 11 a.m., which is the stop of the issue stage and the start of the plan stage. The developer starts working on the issue, creates a branch locally, and makes one commit at noon. A second commit that mentions the issue number is made to the branch at 12:30 p.m.; that's the stop of the plan stage and the start of the code stage. The branch is pushed and a merge request containing the issue closing pattern (a pattern in its description that will close the issue) is created at 2 p.m.
And this is the stop of the code stage and the start of the test and review stages. CI starts running the test scripts defined in the .gitlab-ci.yml file and takes five minutes; this is the stop of the test stage. The merge request is merged at 7 p.m., which is the stop of the review stage. Once the merge request is merged, a deployment to the production environment starts and finishes at 7:30 p.m., the stop of the staging stage.

From this example, we can see the time spent in each stage. Issue took two hours, from 9 to 11 a.m. Plan took an hour and a half, from 11 a.m. to 12:30 p.m. Code took an hour and a half, from 12:30 to 2 p.m. Test only took five minutes. Review took five hours, from 2 to 7 p.m. And staging took 30 minutes, from 7 to 7:30 p.m., as that merge request was pushed into production. So that's just a sample flow of how this might be used.

Now let's look at some of the metrics we can pull from flows like this that we've built out for our teams. Lead time is the median time from an issue being created to that issue being closed. Currently, if we look at the default VSA, we see it's about two weeks from the issue being created to being closed. If we take a look at the plan stage, the lead time there is similar. Cycle time is the median time from an issue's first merge request being created to that issue being closed. So while lead time runs about two weeks, cycle time is only around one day: the issue is created, it's picked up 13 days later when that first merge request is made, and over the course of that day it moves through the different stages, gets merged, and is pushed out to production.
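The single-day example above can be sketched as a quick calculation. This is just an illustration of how the stage start and stop events pair up; the timestamps come from the example, not from any GitLab API:

```python
from datetime import datetime, timedelta

# Timestamps from the example workflow (the date itself is arbitrary).
events = {
    "issue_created":         datetime(2021, 8, 2, 9, 0),    # start of Issue stage
    "added_to_milestone":    datetime(2021, 8, 2, 11, 0),   # stop Issue / start Plan
    "commit_mentions_issue": datetime(2021, 8, 2, 12, 30),  # stop Plan / start Code
    "merge_request_created": datetime(2021, 8, 2, 14, 0),   # stop Code / start Test and Review
    "ci_finished":           datetime(2021, 8, 2, 14, 5),   # stop Test (CI took 5 minutes)
    "merge_request_merged":  datetime(2021, 8, 2, 19, 0),   # stop Review / start Staging
    "deployed_to_prod":      datetime(2021, 8, 2, 19, 30),  # stop Staging
}

# Each stage is measured from its start event to its stop event; a stage
# missing either event simply wouldn't be measured.
stages = {
    "issue":   events["added_to_milestone"]    - events["issue_created"],
    "plan":    events["commit_mentions_issue"] - events["added_to_milestone"],
    "code":    events["merge_request_created"] - events["commit_mentions_issue"],
    "test":    events["ci_finished"]           - events["merge_request_created"],
    "review":  events["merge_request_merged"]  - events["merge_request_created"],
    "staging": events["deployed_to_prod"]      - events["merge_request_merged"],
}

for name, duration in stages.items():
    print(f"{name:8s} {duration}")
```

Note that test and review both start at merge request creation, which is why their windows overlap in the example.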
So it's less than a one-day cycle time from that first merge request being created to the issue being closed. When we look at deployment frequency, this is in line with DORA metrics; it's one of the four, and the one we currently have on Value Stream Analytics. A second one, lead time, will soon be shown here as well.

Now let me show you how someone can create a value stream workflow aligned to their organization. We can go in here, see the different value streams that have already been created, and create a new one. GitLab gives you a couple of options. You can create from a default template: we'll call this one "test default", and you see the different default stages we just went through earlier. If I don't like a stage, say stage two, I can hide it and it disappears. If I want review to be above test, I can move it around. And down here I can add another stage of my own that fits into the default workflow. You select a start event; we have over a hundred different combinations of start events you can use for your organization, and choosing one gives you the end events that align with that start event. The available end events adjust depending on which start event you pick, so the two stay closely aligned.

If we don't want to use the default value stream, we can create from no template. This is where we create each stage from scratch; maybe we don't need the guide from the default because we already have in mind what our organization needs. So we start creating stage one, stage two, stage three, building from scratch with this choice.
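To make the lead time versus cycle time distinction concrete, here's a minimal sketch of how those medians fall out of issue records. The field names and sample dates are invented for illustration, not GitLab's actual schema:

```python
from datetime import date
from statistics import median

# Hypothetical issue records: when each issue was created, when its first
# merge request was created, and when the issue was closed.
issues = [
    {"created": date(2021, 7, 1), "first_mr": date(2021, 7, 14), "closed": date(2021, 7, 15)},
    {"created": date(2021, 7, 5), "first_mr": date(2021, 7, 19), "closed": date(2021, 7, 20)},
    {"created": date(2021, 7, 8), "first_mr": date(2021, 7, 20), "closed": date(2021, 7, 21)},
]

# Lead time: median days from issue created to issue closed.
lead_time = median((i["closed"] - i["created"]).days for i in issues)

# Cycle time: median days from first merge request created to issue closed.
cycle_time = median((i["closed"] - i["first_mr"]).days for i in issues)

print(f"lead time: {lead_time} days, cycle time: {cycle_time} day(s)")
```

With this sample data the lead time is about two weeks while the cycle time is a single day, mirroring the pattern described above.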
If we look at, say, what the plan team has done, we can edit it and see stages that have been organized based on start event labels. The plan team has agreed on naming conventions for the labels they use to organize their work; maybe they're using these on their issue boards for visibility into where an issue sits as part of their sprint. Once your organization has agreed on label naming conventions, you can use these start event labels to organize your stages.

We can also dive into these different stages when we want to look at something specific. Let's take solution validation. I click on it, and it gives me a running list of the issues currently in that stage. If I want to see what's been in this stage the longest, I can do that here. I can dive into that issue, see its description and how it's been built out, and read the collaboration that's been happening around it. If I want to see where it's at, maybe it's stuck somewhere, or what decisions have been made, I can see there's a lot of coordination around setting the appropriate labels, and it has the milestone for the next release, 14.2, as expected. If any problem solving needs to happen, I can do so there. If not, I can at least better understand the time spent: it's been 13 days in this particular stage.

Going back to the overview, I also have a couple of different graphs I can see here. First, days to completion.
This is the average time spent in the selected stage for the items that were completed on each date, limited to the last 500 items. I can organize this by stage; if I only want to see a particular stage or a group of stages, I can do that. Otherwise, the graph shows the average days to completion over the selected time frame. Time frames can be adjusted up here; right now I'm looking at the last month. If I want to compare against the month before that, I click and see a certain number of days selected, 65 days; let's move that back to the previous 30 days or so, 32 days here, and the days to completion adjusts. That lets me compare where we are currently and whether or not we're improving.

Next is tasks by type. To become efficient, we need to understand what type of work our team does, how much of it, and how quickly it moves through. PMs have a difficult time optimizing across features, bugs, technical debt, and security issues, and there's usually no easy way to communicate trade-offs to management. What's more, they often don't know how much of that work is stuck in each stage. This gives us visibility into that data. For each type of task being worked on, maybe a backend issue or a QA issue, I can see the quantity and how many are in progress on a given day, with a trend graph over time for the specific labels we've chosen to organize our group around.

Earlier I quickly went over this, but I want to reference it here: we have this CI/CD section. This is where we're aligned with our value stream work, building out DORA metrics, and deployment frequency is available here.
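As a rough mental model of what tasks by type is counting, here's a small sketch. The labels and issues are invented for illustration; in the product the labels would be whatever naming conventions your group agreed on:

```python
from collections import Counter

# Hypothetical in-progress issues, each carrying the labels a team uses
# to organize its work.
issue_labels = [
    ["backend", "bug"],
    ["backend", "feature"],
    ["qa"],
    ["backend", "technical debt"],
    ["security", "bug"],
]

# Tally how many in-progress items carry each label on a given day;
# charting this tally per day over time gives the trend graph.
tasks_by_type = Counter(label for labels in issue_labels for label in labels)
print(tasks_by_type.most_common(3))
```

Repeating that tally for each day in the selected time frame is what produces the trend lines per label.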
This shows a trend graph of the number of deployments over a certain period; we can look at the last week, the last month, and the last 90 days. We can also see lead time over those same time frames. Going back to the value stream, deployment frequency currently comes through as a metric right here, and soon that lead time will come through as a metric as well.

To wrap up this quick demo overview of value streams, I want to go into DevOps Adoption. This showcases which groups in your organization are using the most essential features of GitLab. It's broken down with an overview and then by the different sections: dev, sec, and ops. The overview shows everything that's been adopted. If I go into dev, it breaks that down by group: it shows approvals, code owners, issues, and merge requests, and which groups are using them. Same with sec and ops: under sec I can see DAST, SAST, and scanning, and ops gives me deployments, pipelines, and runners.

So if I want to see which of my subgroups that roll up to gitlab.org are using these particular capabilities of the GitLab product, I can take a look: this group here is using deploys, but this group is not, and these groups are all using pipelines. It gives me a great snapshot to verify whether you're getting the return on investment you expect from GitLab. You can identify specific groups that are lagging in their adoption of GitLab so you can help them along in their DevOps journey, and you can find groups that have adopted certain features and can provide guidance to other groups on how to use those features.
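If you want these deployment numbers outside the UI, GitLab also exposes project-level DORA metrics over its REST API (GET /projects/:id/dora/metrics). Here's a hedged sketch of working with that kind of response; the payload is canned to keep the example self-contained, since a live call would need an instance URL and an access token, and the exact response fields should be checked against the GitLab API docs for your version:

```python
import json

# Canned example of the kind of daily series returned for
# ?metric=deployment_frequency (assumed shape: one {date, value} per day).
payload = json.loads("""
[
  {"date": "2021-08-01", "value": 4},
  {"date": "2021-08-02", "value": 7},
  {"date": "2021-08-03", "value": 2}
]
""")

# Aggregate the daily deployment counts into a simple summary.
total_deployments = sum(point["value"] for point in payload)
avg_per_day = total_deployments / len(payload)
print(f"{total_deployments} deployments, {avg_per_day:.1f} per day")
```

The same endpoint takes other `metric` values for the remaining DORA metrics, so the aggregation step stays the same once you have the series.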