Hi everyone, welcome. My name is Avishag, and I'm very excited to be here. Today I'm going to talk with you about dev team metrics that matter. We're going to understand what these metrics are, how we can actually measure them, what we can learn from them, and how we can optimize our delivery process by starting to monitor them now. Before digging in, let me briefly introduce myself. I'm Avishag, but all my friends call me Vishi. It's shorter. That's my nickname. I live in Tel Aviv, Israel, as you can probably hear. Hebrew is my main language, so I suffer from accent glitches and the occasional grammar or vocabulary slip, but forgive me for that. I'm a software engineer at LinearB, and in my free time I enjoy dancing. I even used to dance professionally until I joined the army. In 2013 I joined the army in Israel. That's a mandatory thing there; you have to do it when you turn 18. I joined the intelligence forces, where I worked as a BI developer and did a lot of analysis over masses of data. I did that for about two and a half years, and then I joined the industry. In 2015 I started working for Allscripts, also as a BI developer, but I quickly shifted my career and became a software engineer there. I developed systems in the healthcare solutions world, and I stayed at Allscripts for almost four years. So that's about that phase of my life. After Allscripts I joined Cisco, where I also worked as a software engineer. I developed features for Cisco Umbrella in the SWG world, specifically in the backend services there and for the VPN we provided back then. In 2021 I joined LinearB. At LinearB I worked as a software engineer, and now I'm also a team leader there.
I'm hands-on, which means I still write code, and this has been a really, really fun experience for me, because LinearB is a startup. I got the chance to start there when we were a very small group of people, and I've seen the whole process, and it's kind of amazing; I love working there. I did my first degree in computer science at Ben-Gurion University in Israel. It's a very social experience to be part of, and that's where I met a lot of friends who are now working as developers, as you probably are. So enough about me, let's talk about my friends. In Israel right now, the wedding season is on fire. It means that literally every two weeks I go to another wedding, and I have to go to that wedding, because that's just how it is there, okay? You have to. When you go to a wedding, you always socialize and talk with all the guests, and some of them are friends. Two weeks ago I went to the wedding of a friend from the university, which means I met many, many developers. And when you meet your friends from the university, or in general when you meet a group of people, for example at an event like this one, there's always this question: so how are things going at your work? How's your job? Are you having fun or not? And the answer is always binary: either you are very, very happy or you are very, very sad. When you hear the answers, it usually goes like this. The developer starts by telling you, oh, the benefits are amazing, I love the events, and I spend a lot of time playing, I don't know, bowling or snooker or whatever they provide in the office. But... and when the "but" starts, that's when you start to worry about your friend.
So the sad developer actually suffers a lot in his day-to-day job from ongoing conflicts that he isn't able to resolve, because the team he works on never adopted any tool that would help them resolve those conflicts. He's delivering very, very slowly. He might be an excellent software developer who builds all the features and the logic really well, but he struggles to deliver it to production. He always has a lot of on-call and PagerDuty stuff going on in production, and many, many bad things that just make him very sad and make his life literally miserable. He's not having fun, and he's looking to maybe switch jobs or something like that. Those are the sad friends that I have, okay? And the happy developer might not have a lot of benefits. Most of the time he does, because we're in the tech industry, so we know how it goes, but that's not why he's happy. That's not what will keep him in the company for the long term. He's happy because he feels he has a lot of impact, on the product in general and on the product's customers. He feels he works with high quality and high standards. He's also delivering very, very fast, because delivery isn't an issue there; they have tools that help them deliver fast, right? And they also have a really stable production environment, so he isn't suffering a lot from bugs and issues in production. So it basically goes like that. All of these are related to what we call the dev experience, the DX. We saw that there is a deep relation between the dev experience we provide to our dev teams and their happiness in the company. So that's that, and I want you to keep it in mind. Now I want to move on and talk a little bit about DORA metrics: DevOps Research and Assessment metrics.
This is a group of metrics that was defined after years of research by the DORA team, now part of Google. These metrics measure your performance and your pipeline's health, and we can split them into two categories: we want to measure speed, and we want to measure quality. Speed is how fast you do things and how fast you deliver your code, and quality, obviously, is the quality you provide while developing these new things. So DORA metrics are a group of metrics that help you really understand, for your group and even for an individual contributor, what your speed is and what your quality is. The first metric I'm going to show you is cycle time. Cycle time is the first DORA metric that measures your speed. The definition of cycle time is how quickly your work moves from coding to deployed. It's important to understand this part, because as developers, coding is obviously our main job, but it's not the only thing we do. We have meetings, we do designs, we have things that really aren't measured by coding. But when we want to measure our delivery pipeline, we focus on coding, so cycle time refers to our coding flow. When we want to measure it, we actually want to understand how quickly we start to code and take that code all the way to production, and there are many phases along the way. I want to demonstrate it with our own cycle time. In my R&D group, we measure our cycle time across four phases. We have four phases that we measure separately, and the sum of all of them together gives me the cycle time. Now, this is the cycle time of an elite team, a really experienced team that works very, very fast, and we can see interesting things here. First of all, let's understand how it splits, okay? We start with coding, and here's how we measure it.
We start measuring from the first commit that we do, right? The first commit, not the first push. We work with GitHub, so we work with pull requests and we deliver with pull requests. Anyone here familiar with that? If yes, please raise your hand. Okay, great, awesome. So we measure the coding time until the moment we open a PR that is not labeled as draft or WIP. That means that when the pull request is mature enough and we want to move it forward to review, that's the moment we're finished with coding, okay? After that, we measure pickup time. Pickup time runs from the moment the PR is ready for review until someone, from the team or in general, actually starts commenting on it and the whole review cycle gets going. Then comes review time. We measure review time from the moment someone comments on the PR until the moment it's approved, or closed in some cases. The review usually takes a while; it's a cycle by itself, because the review might involve a lot of back-and-forth between the reviewers and you, and you sometimes have to address their comments, so it might take a little more time than the coding itself. And after that, we deploy. In our R&D group, we count our deployments when we cut tags. Once we have a tag that is ready to be published, that's the moment we count as the start of the deployment, and we finish deploying when we recognize it in production, okay? So this is our cycle time and this is how we measure it. But obviously, in many other groups, or sometimes individually, your cycle time might look a little different; maybe you have, for example, a QA phase or some integration phase or many other phases.
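The four phases just described boil down to simple timestamp arithmetic. Here is a minimal sketch, assuming you already collect the relevant timestamps per pull request (for example from the GitHub API); the class and field names are made up for illustration, not any real schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequestTimeline:
    """Hypothetical timestamps collected per pull request."""
    first_commit: datetime       # coding starts at the first commit
    ready_for_review: datetime   # PR opened, not draft/WIP: coding ends
    first_review: datetime       # first reviewer comment: pickup ends
    approved: datetime           # approval (or close): review ends
    deployed: datetime           # release tag seen in production: deploy ends

def cycle_time_phases(pr: PullRequestTimeline) -> dict:
    """Split the cycle into the four phases described above."""
    return {
        "coding": pr.ready_for_review - pr.first_commit,
        "pickup": pr.first_review - pr.ready_for_review,
        "review": pr.approved - pr.first_review,
        "deploy": pr.deployed - pr.approved,
    }

def cycle_time(pr: PullRequestTimeline) -> timedelta:
    """Total cycle time: first commit all the way to production."""
    return pr.deployed - pr.first_commit
```

By construction, the four phase durations sum to the total cycle time, which makes it easy to see which phase dominates for a given team.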
So when you start measuring these phases, and cycle time in general, you really want to understand which phases you actually want to measure. That was cycle time. Another metric associated with speed is deployment frequency: how often your work is deployed to customers. This is another way to measure our speed, meaning how many times we deploy to production. If we do it many, many times, it means we're probably delivering fast and reaching production much faster, and that we're probably going through this whole coding process many times and at high speed, okay? So when this number is high, our speed is high as well. Now, moving on to quality, because speed is not enough by itself. Quality is really important, because if you're delivering fast without high quality, first of all it's bad for the obvious reason, but it can also impact your speed as well. The first thing the DORA metrics measure when it comes to quality is CFR, change failure rate. We want to measure how often deployments result in an issue or a bug. You know this situation: you deliver, and once you're finished and your new service is in production, right away you have a bug, something you didn't find before. This is an event we want to measure, because it indicates whether we really delivered with high quality or not. Another thing we want to measure when it comes to quality is the MTTR metric, mean time to restore: how quickly you can recover from issues. So let's say you already have an issue in production.
Another way to understand your team's quality, and your development quality in general, is to know how quickly you can resolve an issue once you have one. If you do it fast, it suggests that your quality in general is good, because you probably found the issue in the code a lot faster, you quickly understood where you needed to fix it, and you probably also fixed it really well because you had a lot of tests, for example. So your quality is high when this metric is actually low, okay? So this group of metrics, again, these are the metrics you want to start measuring, and you can even start measuring them today; I'll explain in the next few slides how. They're really important when you want to get the full picture of your group's performance, and especially when you want to plan ahead for the next quarters and so on. Another thing worth mentioning here is the point from my first slide, about the happiness of your development team. When these metrics are good, it means your dev pipeline is healthy, and it also means your developers are happy. So keep that in mind; I'll explain more about it in the next few slides. Now, another metric that isn't officially part of the DORA metrics, but really affects all of them, is PR size, pull request size. We saw that there is a really deep relation between pull request size and our speed and quality. So which do you think is better, big PRs or small PRs? If you're a small-PR person, raise your hand; big PRs, this hand, okay.
So you know the answer already, and we've heard it before: big PRs are not good for you. Today I'm going to show you the research we did and give you its actual results, so you'll really see the effect of big PRs on your pipeline. First, the data: we had many pull requests from our customers (they allowed us to use them, of course), 3.9 million comments, and about 25k developers, so this is a really big amount of data. And we discovered a really huge thing here: PRs under 200 changes will probably get merged faster. The reason, first of all, is that big PRs are very scary to review. When I get a big PR, I don't know what to do with it. I'll probably return it to the developer and tell him: organize this for me, I cannot review it. As a reviewer, when I get a small PR, I'll probably do a much better review as well, right? Because I have less to review, everything I find, I find very quickly, and I leave many comments on small PRs. When I get a really big PR, either I just return it to the developer (find a solution yourself, I don't know, go handle this thing), or, if I'm not a good reviewer, I'll probably approve it right away, and that's a behavior we don't allow in our group. So this is a really huge thing. Another thing we discovered, which also reflects your speed, is that when you work with small pull requests, you actually increase your deployment frequency, right, the other DORA metric we discussed. So this is something that also affects your speed and can really change your delivery, and the happiness of your pipeline as well. Now let's compare two teams, one that adopts the small-PR approach and one that doesn't. We can see the relation between these two.
So teams that work with really big PRs have a really high cycle time, while teams that work with small PRs, which tend to be between 20 and 100 code changes, have a shorter cycle time. This is an average over a week, and it's not always like that, but this is the trend we saw across a lot of data, in many, many teams we researched, so you're probably following the same pattern, and once you measure your own metrics you'll be able to see this kind of difference too. These are the most important metrics to measure, and you can start measuring them today. How? You can do it by monitoring webhook events on your repos and on your organization; there are many free plugins that can help you monitor them. You can also use a spreadsheet, or plugins for Jira or whatever project management tool you're using, and you can also use free tools that are out there in the market, for example Jellyfish, LinearB (us), and so on. These metrics are worth monitoring, and if you want to understand your team's speed and quality, you want to start doing it now, so you'll have a clear picture of how healthy your pipeline is and also how the developers in your group are doing. Because not every developer will tell you that they're unhappy because they're delivering slowly. In the one-on-ones we do in our teams, it's not always the developers who will tell you that, but whether you're an experienced developer or a junior one, you can really see it for yourself once you start monitoring these metrics. Now let's say we've already started monitoring our metrics. Let's talk a little bit about the most common delivery process.
So this is the delivery process we follow in our R&D group, but it applies to most companies, and to most individual processes, as well. It goes like this. You start by coding; this is the delivery process itself, not the cycle time. When you're finished, you open a pull request, and you have this pull request cycle of pickup and review and all of that. Then you merge it, and after the merge part you have an integration part. Sometimes you might integrate your code or your features prior to the merge, so it really depends: sometimes the integration comes before, sometimes after, and most of the time it's on both sides of the merge phase. And after you're done with that phase, you deploy your code to production. When we want to calculate cycle time in our R&D group, it sits perfectly within this process, okay? But the other metrics we want to measure live in other places as well. CFR, which reflects your quality, is affected by your code and your PR, because when your code is high quality and the PR cycle, meaning the review, is high quality, that affects the CFR metric, right, the change failure rate: we'll have fewer issues when we deploy if we do those two parts with high quality.
Cycle time, as I mentioned, spans the whole process. Deployment frequency is measured when we deploy, and MTTR there as well. But looking at the whole process, we want to boost it, meaning we want to speed it up. The ways we've done that so far, the things we're familiar with these days, are all related to the dev experience in general. We need to keep in mind that our goal is to provide the best dev experience. That applies whether you work in an enterprise, in any company, in a startup, or even as an open source developer, an individual contributor: when you have a good development experience, you're more likely to contribute faster and with higher quality as well.

The way we did it, and the way we're doing it right now in most of the companies I've worked at, is CI/CD processes. For most of us here, CI/CD, continuous integration and continuous deployment, is a given thing. But it wasn't always given to us. I remember when I was working at Allscripts, in some teams we literally had to copy DLL files from the local machine to the servers. CI/CD was not always there to help us automate our processes. Who here is familiar with CI/CD? Okay, great. So we have many tools that help us do it: Jenkins, Drone, GitHub Actions, and many more. This really boosts our process: many things we had to do manually in the past, ten or fifteen years ago, we now do automatically, without the human mistakes we might make, and a lot faster, because it's all automated and triggered by actions or by a cron schedule, whatever. So CI/CD really helps us in that part, but we see there's a gap: not all of the process is really covered by CI/CD, and we see that developers still suffer from what we call idle time.

Let's zoom in to the code-PR-merge part of the process. We'll see that once the pull request is alive, it starts a cycle of its own, split into pickup time and review time. And we see something really funny: the coding itself (this is average data) doesn't take that much time, less than pickup and review. And the pickup and review time, it's not that I'm reviewing a PR for five days, right? What causes this? Who can guess? I mean, this is not an interactive talk, so I'll tell you the answer: it's caused by the transitions we make and the distraction they bring. We also know it as context switching, human context switching. In the computer science world, we know that context switches are very, very expensive, and in our delivery process they're very expensive too: first, because this is the number one time consumer in the process, as we can see in the data, and second, because developer working hours are very expensive. So this is really something we want to reduce. This isn't really work; most of it is actually idle time that we want to optimize away for the developers, okay?

As I mentioned before, this is the common process, and we want to improve it by getting into that part somehow. So how can we reduce this idle time? We can start by understanding that not all pull requests are the same. We have small pull requests, and we sometimes have big pull requests that we can't avoid, but we want to somehow categorize them and give more context to their reviewers. This is just an example, more of a concept: pull requests can be split into risk categories. For example, a really small PR with only README file changes will probably be low risk, but when you're changing some security definitions or some, I don't know, environment variable changes and things like that, it might be higher risk. So you want to somehow understand that before you even open the PR, and this is what we call CM. Along with CI/CD, we're now giving you CM, continuous merge. Now, this is not a product, it's not a tool yet; it's more of a concept, okay? It's a new concept that we want to bring into our process, and we want to spread the word about it. Continuous merge, along with continuous integration and continuous deployment, provides full coverage of the whole delivery process, okay? So we now have this continuous merge. We don't know yet exactly what it is, but if we have automation right there that can speed up the idle-time phases, it will really affect the whole delivery and the whole process itself. And it's even more effective because this is the beginning of the process, so if we boost it, it will dramatically affect the whole process in terms of speed. This is very interesting and something to keep in mind. Continuous merge refers to the pull request lifecycle; we're really talking here about pull requests and the coding part of our job, not the other parts of what we do. And this is really something we can start bringing into our processes today.

As a concept, there are two ways to do it. The first is to make our pull requests more attractive to review, and we can do that by delivering small PRs for review. Not everyone is experienced with that, but we saw that a really good tech design prior to the coding itself can really help. When you work in what we call micro tasks, instead of the big tickets, the big stories you want to promote where you start coding right away, if you have a really good, deep tech design beforehand, you're more likely to create and work on small micro tasks, and that way, in general, you'll be more likely to deliver small PRs
for review and into the pipeline itself. So that's one way to do it. Another way is to make your pull requests attractive through automation. There are many tools and plugins, "hey, I opened a PR, please review" bots and things like that, that you can plug into your processes right away, and a lot of them are free, so feel free to reach out to me afterwards and I'll tell you what they are. So that's the first way.

The other way is to automate your actual merge, and this can be done by defining rules for each repository, because each repository has a different personality. Each repository and each project can be treated totally differently; it depends on the business it holds inside. If we define rules for each repository, we'll be able to automate our merge, and not only the merge, we can automate any kind of process. For example, in our R&D group, we defined a rule that in our specific big repo we have to have at least two reviewers, and if we change code in that security file over there (it's called security_file.py), someone from the security team is automatically assigned as a reviewer, without the developer having to do anything. These small things really matter, because it's five minutes here and ten minutes there, and at scale, across a big mass of developers, it's a lot of time. So these are the two ways to adopt the CM approach.

Here I'm going to show you a few examples of how we did it. At LinearB we do dogfooding, which means we use our own product. We have these bots: for example, for small pull requests for review, you know, the small pull request that's only a version bump, bump version blah blah blah, axios, whatever, we don't want to waste a lot of time on that kind of pull request. So we send a request right away, not to a channel but to an individual, when we're looking for immediate approval. The reviewer sees it right away, without having to click a link and go to GitHub; he can approve it straight from Slack without having to do anything, and without any concerns about the pull request he might get. So that's one example.

Another example we adopted is estimated time to review. When we ask for someone's review, we attach an estimate, computed by an AI algorithm we developed, of how long we think the review might take. This gives the reviewer a little context, and he knows how to plan his day. He knows that if it's a two-minute review, I'll just go ahead, review this PR, help my friend and make him happier; and if it's a longer review, okay, I need an appropriate time for it, I'll just block it in my calendar and move on. So those are two examples of how we made our pull requests more attractive to review.

And another thing we're working on right now, we call this solution gitStream. This is the automated categorization we're trying to develop for our pull requests, and we do it with a set of rules for each repo. I don't have a full example here, but we have rule definitions in a YAML file for each repo, and also at the org level, so we have default rules for the whole org and specific rules for each repo, like the example I gave before. So if, for example, we're changing only markdown files in some of the paths in the repo, we can get auto-approve or auto-merge.

In general, we're here to make our world and our dev experience better. Whatever and wherever you're working on, whether you're an individual contributor or working at a startup or an enterprise, dev experience is one of the main things to start investing in, to provide the best way for your people to work. Just keep that in mind. If you have any questions, I'll be happy to answer.

Okay, I think there are no questions... right? Yeah, great, great talk. Just a question: how do these metrics relate to developer happiness? I understand you said that a healthy pipeline ensures happiness, but is it just correlative or causative? Sorry, the last sentence? Does a healthy pipeline essentially mean a happy developer, or are there other metrics? So, developer happiness isn't really one thing; it's a combination of many, many things, right? You have the benefits, you have the social environment and the team environment, you have the setup; for some developers even the setup is really important. But we saw in our field that dev experience in general affects developer happiness. Think about it: when you're working at scale, on a multi-repo project, working on many, many things, you want to focus on the features you want to deliver and less on the nonsense around them. You don't want, for example, to treat the dev, staging, and production environments differently; you have environment variables that you set up somewhere and automatically feed to your machine, and that's one way to reduce the noise the developer has to handle. By the way, this is usually the job of the developer, or of DevOps, but it's all related to the small things that just make the developer's life easier, and therefore happier. So yeah, I hope that answers your question. Okay, thank you.
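The rule-based merge automation described in the talk (a minimum-reviewers default, auto-assigning the security team when a security file changes, auto-approving docs-only changes) can be sketched as a tiny rule engine. This is a toy illustration under assumed names: the file patterns, the team name, and the shape of the result are made up, and this is not the actual gitStream rule format or LinearB's implementation.

```python
def review_requirements(changed_files: list) -> dict:
    """Toy per-repo rule engine in the spirit of the talk's examples.
    The file patterns and team name below are hypothetical."""
    decision = {
        "min_reviewers": 2,      # assumed repo-wide default
        "extra_reviewers": [],
        "auto_approve": False,
    }
    # Docs-only changes (only markdown files touched) can be auto-approved.
    if changed_files and all(f.endswith(".md") for f in changed_files):
        decision["auto_approve"] = True
        decision["min_reviewers"] = 0
    # Touching a security-related file pulls in the security team.
    if any("security" in f for f in changed_files):
        decision["extra_reviewers"].append("security-team")
    return decision
```

In practice, rules like these would live in a per-repo (or org-level) configuration file and run automatically when a PR opens, so the developer never has to request the extra reviewer by hand.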