All right, and we're ready. Hello, everyone, and welcome to today's group conversation. My name is Delia Havens, for those of you who've not met me before; I'm the Director of Engineering for the Ops product teams.

For today's conversation I thought I'd keep it simple. I picked one topic that's been consuming a lot of my time, and that I also have a lot of interest in, so we'll talk about that. But before we do, I wanted to quickly send a welcome to two people who joined our team recently: Peter and Reuben joined the Monitor team. We're so excited to have you on board. We also have a developer joining the Secure team in November. We're a long way from our actual hiring targets, so if you have referrals, please send them our way. I will shamelessly continue to ask for that, because we have great people and we would love more great people to join our team. So please don't hesitate to refer your friends, and I'm happy to have conversations with anyone you think would be interested.

All right, cool. On to our topic of conversation for today. If you joined in August, I introduced this MR that we were putting together regarding throughput. Throughput in general is one of the simpler metrics I've worked with, and the reason I like it is its simplicity. It's basically tracking the number of things you can do from a particular start point to whatever you consider to be a deliverable. In our case, what I'm hoping to achieve is to get a sense of how much of our code effort makes it into production. There are a lot of things we do in support of that: code reviews, meetings, design, all the things that go into deciding what we're actually going to develop. But at the end of it, we're simplifying that and tracking down to, probably, the MR that makes it into production. That's the unit I'm targeting to count. And I want to open this up.
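A minimal sketch of the metric as described, counting merged MRs per week; the dates and the ISO-week bucketing are illustrative, not from the talk:

```python
from collections import Counter
from datetime import date

def weekly_throughput(merge_dates):
    """Throughput: number of MRs that made it into production, per ISO week."""
    # Key each MR by its (year, week) pair and tally.
    return dict(Counter(d.isocalendar()[:2] for d in merge_dates))

# Hypothetical merge dates for a team's MRs
merged = [date(2019, 9, 2), date(2019, 9, 4), date(2019, 9, 12)]
print(weekly_throughput(merged))  # {(2019, 36): 2, (2019, 37): 1}
```

Nothing is estimated here: the counts only reflect work that was actually delivered, which is why the trend takes a few weeks to build.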
So that's why I kept it to one slide. Please ask any questions; I'm happy to clarify. Challenge me on it: that's the best way to have some of these conversations. And if you're not familiar with throughput, contrast it with velocity, which some of you know. With velocity you focus on estimation, using story points or weight. In a sense, you could think of it this way: velocity is a way to estimate the work; throughput is an actual reflection of work that's been completed. So throughput does need some time to build a trend, because you're not estimating your throughput; you're reflecting how much you actually delivered, or how many of these units actually made it into production.

A couple of links here, just for reference, and hopefully to get you more familiar with the topic. The first link is from the iteration that I introduced in August. The next two links are conversations we're having right now around iteration two, and a big part of iteration two is: how can we apply this model across the whole engineering team? I want to be really clear; I understand this may seem like a big change. But my hope is that by tracking that number, we start to give teams autonomy over how they want to execute, but with accountability, because now we have data to show that we're on the right track, or that when we introduce a change, it's actually having the success we're hoping for.

That's all the material I prepared, so I'm going to stop sharing and open it up for questions. Let's see here if I can get back to my chat. And feel free to jump in, because I think it's really great when someone asks their question verbally. So jump in, and I'll try to grab questions from the chat as we go.

Okay, I guess I'll take Gabriel's question: how does throughput relate to cycle analytics?
That's a great question. The way I've used this before is that you combine throughput with cycle time, and now you're able to determine not only how many units you've achieved per week (or whatever time period you're tracking), but you can also categorize those units and pair them up with cycle time. You can say: when we go out to fix a bug, our cycle time is two days, so we're able to turn around defect resolution in two days and get it into production; whereas when we take on an issue, it takes a month to get through. Then you can have a conversation around, well, how can we improve that cycle time? So I think this is really great, because I don't think one metric should be used independently. Metrics are just data, and the data is there to facilitate healthy conversations. The more of these data points you have, hopefully the healthier a perspective you can have on what the problem is, what bottlenecks you may want to address, and things like that. Does that answer your question, Gabriel? Feel free to ask any follow-ups; I'm happy to clarify.

Yeah, kind of, a little bit. My initial thinking was: whatever results come out of this experiment, we should try to make them into a feature inside GitLab.

Yes, absolutely, I couldn't agree more, and this is what I love about this company: we're building the product for us. I'm happy to point you to an epic I started in the Insights project to define some of these metrics, and Victor is also tagged on it. I'm hoping that some of these rough implementations translate into building it into the product, because I know I'm not the only engineering leader, or engineer, who would make use of that data. So yeah, great question. Thank you.

Kelly, I'm not familiar with LeanKit, but it sounds like that might be something I should look into.

Yeah, so it was a Kanban board.
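A sketch of the pairing described in that answer: average cycle time grouped by category, so the two-days-for-bugs versus a-month-for-issues conversation has numbers behind it. The labels and dates below are hypothetical.

```python
from datetime import date
from statistics import mean

# Hypothetical MR records: (label, opened, merged)
mrs = [
    ("bug",     date(2019, 9, 2), date(2019, 9, 4)),
    ("bug",     date(2019, 9, 9), date(2019, 9, 11)),
    ("feature", date(2019, 9, 1), date(2019, 9, 29)),
]

def cycle_time_by_label(records):
    """Average days from opened to merged, grouped by label."""
    days = {}
    for label, opened, merged in records:
        days.setdefault(label, []).append((merged - opened).days)
    return {label: mean(d) for label, d in days.items()}

print(cycle_time_by_label(mrs))  # bugs turn around in 2 days, features in 28
```

Throughput says how many units landed; this second view says how long each kind took, which is where the "how do we improve that?" conversation starts.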
I don't know if it's still around or not, but I used it for a number of years. Basically, you could go through the whole cycle time and efficiency, and it would give you nice little bar graphs. Well, not all of it is bar graphs, but it would provide that. So take a look at the link, if it helps.

Yeah, no, I appreciate it. I just put it into Google and I'll take a look; I haven't used it before. Some of the ways I learned this were built in-house, because we were basically pulling it off of Jira. The reason I'm excited to be at GitLab is that we can turn it around and fit it into our product. But this is great; I will spend some time with it.

Cool. Oh, and just so you know, I've asked the DevOps... not the DevOps, the Ops backend team, managers and developers, to join and help facilitate this conversation. We've spent the last quarter working with the team on breaking things down into small MRs and getting feedback: is this healthy? How much overhead do we have? We were tracking some things manually, but I try to avoid manual tallying, if you will, so we're working with Max's team on building a dashboard so that moving forward this is pretty automated. But please ask questions; I think they'd have a great perspective as well. And anyone from the Ops team, feel free to jump in and grab a question if it speaks to you.

Okay, I might have to play this card and hold this call hostage till we get more questions.

One question.
Okay, go ahead.

I'm wondering how you're going to use this data, because I'm afraid of the big differences between merge requests. Since this is the unit you'll be counting, depending on the work you're actually doing, you can end up with just one or two merge requests, or dozens of them. For example, if you're working on a CE and EE feature, you'll have one merge request in EE, maybe a back-port in CE, then two other merge requests for the documentation. But in the end, you bring just one piece of value to the product, right? So I think this is more an engineering metric. How would it make sense, for an iteration, to have a big number here and a smaller number there? How would you leverage that? Do you mean crossing the data with the labels of issues, like bugs and so on?

No, that's a great question, and we're probably going to have to apply it and learn. But from my experience with this: one of the things I like about throughput is that it incentivizes small MRs. The idea is to try to standardize on delivering the smallest thing, with quality, to production. Let's make sure we're not just putting up MRs that are hidden behind a feature flag and provide no value, or any of the other things you wouldn't expect in an MR. So focus on the smallest thing. But, like you said, things are not going to be exactly equal. However, one of my ways of adopting metrics is to look at trends and not raw numbers. Raw numbers are helpful if you need to dig in, and that's applicable in some cases. But overall, what we'll be looking for is a trend line. It's not so much "this team can deliver 20 units every week." I'd be looking at the trend line and saying: your trend line is around this average; maybe we hit a bottleneck, or maybe there was a challenging situation and our throughput dipped down. So come back and have a healthy conversation
around why that happened. But specific to your question: we're trying to get away from the raw-number focus, because that's not the goal. The goal is to focus on delivering the smallest bit that you can, and over time you'll start to normalize these activities. It does mean you'll have to accept that some things are going to be a little bit bigger, and some things may be a one-line change, and that's okay. We shouldn't try to optimize by breaking MRs down into two lines of code just to make the numbers match.

If I can add to that: totally agreed, not every merge request is created equal, and we have to live with that. Now, the two specific examples named were CE and EE needing two merge requests. That's bad; we shouldn't do that. So we have the quality team, Rami's team, working to combine it into a single repository so we can get rid of all that unnecessary brain damage. The second thing is a separate MR for the documentation. I think that's an anti-pattern. I get it, people want to get stuff in before the deadline. But how are you going to review code when you don't know what it does or what it's supposed to do, right? In the documentation we say, this is what it's going to do; then we have the code, and you make sure the two conform. How can you do a code review without documentation? So I think it's really bad that we're saying, oh, I'll do the documentation later.
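Stepping back to the trend-line reading described above: one way to watch the trend rather than the raw weekly count is a trailing moving average, which smooths one-off dips so they prompt a conversation instead of a verdict. The weekly numbers and four-week window here are made up.

```python
def moving_average(weekly_counts, window=4):
    """Trailing moving average over weekly throughput counts."""
    out = []
    for i in range(len(weekly_counts)):
        # Use up to `window` most recent weeks, fewer at the start of the series.
        chunk = weekly_counts[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

weeks = [12, 14, 9, 13, 15, 6, 11]  # hypothetical MRs merged per week
print(moving_average(weeks))
```

The dip to 6 MRs barely moves the averaged line; a sustained decline would, and that is the signal worth discussing.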
No, the documentation is necessary to do the review in the first place. So I think documentation as an afterthought is an anti-pattern, and we should get rid of it. That way it will be one merge request, and yes, we'll still have one-liners and thousand-line merge requests. So Dalia's points are all valid, but the two specific examples are both anti-patterns, and we should get rid of them, in my opinion.

Thanks, Sid. If I can add on top of that add-on, too: I think one of the other things we haven't talked about yet is the opportunity this gives us to have a really good system for talking about improving our velocity, or rather our throughput, in a healthy way. Improving throughput as a metric, right? Because what you can do is say, okay, maybe one team is getting 10 merge requests done and one team is doing 20. As long as we set these standards and avoid these anti-patterns, like Sid was just talking about, we can also go in and start saying, okay, let's set a KR: let's improve throughput by 20%. And we can assume that, as long as we're not violating these anti-patterns, we're actually finding other ways to increase the amount of work, to increase the number of times we're shipping something, in a helpful way. This is one of the few things where, if you start gaming the system, you actually make it better, because of smaller iterations. So yes, please try to game the system.

My favorite thing about throughput, actually, Sid, is that it's our nature to say, how can I make you think this number is great, and game it. And I'm like, go for it, knock yourself out. But I do want to highlight something. I understand that smaller may seem like we're adding overhead. Understand that I'm very aware of overhead, and we need to reduce it as much as possible. But the goal of small iterations has been really, really powerful. Not only are the changes small, but reviewers are having an easier time and maintainers are having an easier time.
We're seeing things go into production much faster. So yes, be aware of overhead; we should continue to improve and reduce it. But recognize that, while smaller things means we're creating more MRs, that's a much better application, if you will, than having three weeks' worth of work, spending a week just to review it, and then multiple days of addressing comments, and so on. I'm hoping that either you see it in application once you start adopting it, or you feel free to tag one of the Ops developers; they can probably give you some examples of things they went through.

Okay, Tune, I think you put a question in the chat. Did you want to vocalize it? I know you're on the call.

Yeah, basically I was wondering if throughput is something that would be taken into consideration when you're talking about promotions, going from one level to the other, like senior to staff.

Personally, I'm not a big fan of team metrics being used for individuals. Data is really important, and there are some things that could help a manager have a performance conversation, if that is a concern. However, I would hesitate to target throughput, because of the nature of it. If we say, to be a staff engineer you need to push out 25 MRs per week, there's an easy way to game it, and that's not the goal. As you move from senior to staff, the problem you're trying to solve is more complex; it does require more time. Actually, what I've seen happen is that as you move up, your throughput may go down, but you're supporting the team, you're coaching, you're facilitating other people to be more successful. So I would hesitate to use throughput in that way.

If I may, I just have a concern, if I can step in.

Yes.

All right, perfect. Thanks.
I mean, all right, my only concern is that we'll need to make sure the review queue is improved accordingly, because it's a lot easier for a reviewer to stick to incremental changes. And in this case, that means we no longer require people to have random access to that queue. I mean, it's not super clear to me how this review queue is organized, and we often have the problem that reviewers are not available, or we don't know who to assign to. With this model, we'll need to commit more reviewers to that queue. Does that make sense to you?

Yeah, no, Felipe, that's a very good point, and you're right. That's something we've seen in practice, where we push an MR and it sits for multiple days waiting for a review. We're going to have to solve that problem for throughput to be successful; not for the metric in its own right, but because the goal is to deliver smaller and faster things to production. So we do need to work on how to get reviewers to review things quickly. One of the things I've done in a prior life is starting with an SLA: saying an MR should get a review within X number of hours, or within a day, something to that effect. That is definitely something. One of the things I'm working on in the background is increasing our maintainer pool. Our maintainers are extremely valuable.
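The SLA idea mentioned above could be tracked with a small check like this. The 24-hour threshold, MR identifiers, and record shape are all hypothetical; the talk only says "X number of hours or within a day."

```python
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=24)  # assumed SLA; the real number would be agreed per team

def overdue_for_review(open_mrs, now):
    """Return ids of MRs that have waited longer than the SLA for a first review.

    Each record is (mr_id, submitted_at, first_review_at_or_None).
    """
    return [mr_id for mr_id, submitted, first_review in open_mrs
            if first_review is None and now - submitted > REVIEW_SLA]

# Hypothetical queue: one MR unreviewed for two days, one reviewed within hours
queue = [
    ("ops!101", datetime(2019, 9, 10, 9, 0), None),
    ("ops!102", datetime(2019, 9, 11, 16, 0), datetime(2019, 9, 11, 18, 0)),
]
print(overdue_for_review(queue, now=datetime(2019, 9, 12, 10, 0)))  # ['ops!101']
```

A list like this surfaces exactly the "it sat for multiple days" MRs that undermine the throughput goal.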
They uphold the quality of the product, but right now we're growing the engineering team to a size that leaves us with a very tough ratio of maintainers to developers. A lot of the feedback I get is that the last week before the freeze is a nightmare for most reviewers and maintainers, because of the pressure that comes with it. So one thing that will really help is that as we deliver things incrementally, and more frequently throughout the release, that pressure should start to ease off that last week. And then we really need to focus on improving and coaching reviewers, so that the maintainer role doesn't involve digging in as deep, and also on how we can grow the next set of maintainers, so that we have a good ratio of maintainers to developers on the team. It'll also help with our community contributions, for sure. So thank you for bringing up that point.

For me, it's another one of those ways where there's almost no bad way to game the system, right? As long as we avoid the anti-pattern of just not reviewing things, anything we do to make that happen more quickly is going to help.

Sorry, Tommy.
Can you... I missed that. The anti-pattern of not reviewing things?

Right, so I was saying that's another place where we're going to have pressure to game the system in order to keep the throughput up. As long as we avoid the clear anti-patterns (we could say one way to get faster is to just not have people review things), and obviously that's not the idea.

I think the idea is that the smaller you make your changes, the easier they are to review. And the smaller you make your changes, the less you'll have at the end of the release cycle.

Exactly. And there are perhaps some other ways we could try to speed that up as well. But as long as we avoid the clear anti-patterns, just about anything we can do to make reviews go faster, or create more opportunities for review, is going to help us move faster with our throughput.

I'm curious about your roadmap, or your vision, for this. Is this a first step that will then evolve into something more, something that takes context, or the type of work, into account? Something kind of like the idea of cycle analytics, which takes in the whole life cycle from idea to production? Or is throughput the ultimate goal? Because I think this is a first step that could be improved with additional dimensions.

Yeah, I'm happy to have conversations around it, but for me, throughput is the goal. We're talking about adopting more CD practices and focusing on delivering faster and smaller bits, and throughput helps in the sense that you don't need the ceremonial estimation.
You don't need to put your effort toward velocity, story points or weights, and so on; instead, use that effort to break things down into the smallest deliverable unit possible. So to me, throughput is the metric I'm looking for, and I like it for its simplicity and for how closely it aligns with CD.

And to enhance it: one of the next iterations (it was actually kind of easy to implement, so I don't know if we want to count it as its own iteration) is that within throughput, we're also going to categorize the different MRs. We're going to start labeling MRs based on what areas of investment they line up with: whether they're features, security issues, or potentially impediments, technical things that we're doing. That, along with the throughput number, gives us an idea of how much we're investing on each front. We're all here to build great products, so we want to make sure we don't get into the notion of focusing on only one category versus the others. Having the throughput categories is also going to help us keep that balance. And this is going to be very positive for PM engagement in prioritization, because they'll have a full view of the team's bandwidth, and having those categories outlined is an easy way to have the conversation around what balance we want to have, or what balance we currently have. Thank you for the question. And please, comment in the issues. We're adopting this model, and we need it to be the GitLab model, so if there are obvious things we should improve, by all means, let's have that conversation.

I love that quote you said: instead of spending time estimating, spend time breaking things down. Is that from Sid? Is it in the handbook?
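The investment-balance view described above is just the throughput count split by category label and turned into percentages. The label names and counts here are hypothetical.

```python
from collections import Counter

def investment_balance(mr_labels):
    """Percentage of merged MRs per investment category."""
    counts = Counter(mr_labels)
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical month of merged MRs, one investment label each
labels = ["feature"] * 10 + ["bug"] * 6 + ["security"] * 2 + ["technical-debt"] * 2
print(investment_balance(labels))
# {'feature': 50.0, 'bug': 30.0, 'security': 10.0, 'technical-debt': 10.0}
```

This is the shape of the conversation with PMs: not "how many MRs," but "half our bandwidth went to features; is that the balance we want?"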
I think it is in that last MR that I put up, Sid, but I will go back and check on it. The last MR is 12816. Sorry, yes, 12816 exactly, which pushed the throughput definition into our handbook.

Yeah, it says, under line 132: instead of spending time sizing and figuring out the weight, we should put this effort toward breaking issues into the smallest deliverable.

Cool. All right, any final questions? This has been really great.

Yeah, no one will ever ask a final question, Dalia, so don't bother asking for it.

Okay. It's hard; we're figuring out these group conversations. But I think the whole initiative is amazing. We have this value of iteration, but how we do it and how we measure it, we're still figuring out, so thanks for your help; I really appreciate it.

Awesome. Thanks, dude.

Felipe says, yeah, last questions before five, four... cut it off. All right. No, this has been great. We are 25 minutes in, so I'll go ahead and end the call. Thank you so much. Please use the links and contribute to the conversation. Schedule a coffee chat; I'm happy to talk to anyone interested in the topic, and we'll keep at it. Iteration is our value. Cool. Thanks, everyone. Have a great rest of your day. Thank you.