All right, thank you all for coming to my session. I'm surprised there aren't more people here interested in data and open source communities and how we're able to drive developer efficiencies. I guess everybody's already completely maximized all the efficiency out of their developers. So my name's Lee Faus. I'm the Global Field CTO at GitLab. I've been in the developer tool space for about 20 years, and in the software space for 30. I started off as a high school teacher and was an adjunct professor at NC State University. My time in the developer tool space goes back to 2000, when I worked for a company called TogetherSoft. TogetherSoft did UML modeling for Java and J2EE. Then I went to work for Red Hat, and between those two companies I got really involved in the open source community. I was a huge fan. I was a contributor to Tomcat in the early days, and a contributor to the Eclipse Foundation. Anybody here ever use Eclipse? Anybody here use SVN or CVS with Eclipse? Oh, all the time? So if you hated it, it wasn't me. If you loved it, it was me. I actually wrote the virtual file system layer for Eclipse for all of the version control systems. I ended up running my own consulting practice for about five years, doing a lot of DevOps and a lot of automation; I had a little bit of an operations background. I've been a CTO and a VP of engineering. The talk I'm going to give today is near and dear to my heart. I spent four and a half years at GitHub and saw a lot of this when I was there. Now I'm seeing it at GitLab, because at GitLab there's a lot that we do in the open core community: the open source users and the people who run self-hosted Community Edition will actually turn on the heartbeat functionality, and that heartbeat functionality sends us data about their instances.
Some of the stuff I'm gonna talk about today are aggregates of things we've learned, both from the academic community and from our user community, about how we can drive better efficiencies into our enterprise organizations around developer workflows. So, developer efficiency. Why is developer efficiency important? Well, when we go out and talk to our user communities, and especially to the executives, one of the first things they say is that it takes way too long for a developer to get onboarded and get to their first commit as a new employee. We have customers that tell us it takes anywhere from six months to a year before a new hire feels comfortable actually writing code back into their existing code bases. Now, I give this talk all around the globe, so picking an average salary is tricky. If I say $300,000 to a group in APAC, they'll say we'd never pay that amount of money. And when I'm in the US and I say $100,000, they're like, where are you finding those engineers? So take $100,000 as a rough global average, because it makes the math easy. If it takes me six months to onboard a new engineer, I've already invested $50,000 into that engineer before they were ever productive, before they wrote their first line of code. The other thing we find is engineers that move between projects. If you're using best-of-breed tools, where every team can choose its own tools, it will take them up to three months to understand what the tools are, where they should be collaborating, and where the system of truth is. And that's gonna cost me $25,000 just for moving between projects.
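That math is easy to sketch. Here's a tiny back-of-the-envelope illustration; the $100,000 global average is the speaker's figure, and the function name is my own:

```python
# Back-of-the-envelope cost of an unproductive ramp-up period,
# using the talk's ~$100,000 global average salary.

def ramp_cost(annual_salary: float, months_unproductive: float) -> float:
    """Salary paid before the engineer ships useful code."""
    return annual_salary * (months_unproductive / 12)

avg_salary = 100_000
print(ramp_cost(avg_salary, 6))  # new-hire onboarding: 50000.0
print(ramp_cost(avg_salary, 3))  # moving between projects: 25000.0
```

The same function covers both cases from the talk: six months of onboarding burns $50,000, and a three-month project switch burns $25,000.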
In really large enterprises, one of the interesting things we hear from these users is: what happens to new users who also move between projects in their first six months? We actually see that timeline get even longer. The other thing we talk about is attrition. I have a really funny graph I use when I'm doing more of a comedy skit around this: on a project with high security and high value, risk runs as a line from the bottom left-hand quadrant, and underneath that risk line is where my developer attrition bar sits. Because I'm a developer who's not really doing anything fun; I'm doing a lot of maintenance. I might be the mainframe developer, and those people have a tendency to either want more money, or want to be promoted into a management role, or want to move to a faster-moving project so they can be one of the cool kids at the company. So, developer attrition: if I take $100,000, there are a lot of websites out there that will tell you attrition costs 3x salary. Between the loss of the engineer, the cost of recruiting, and the cost of onboarding, replacing that person is gonna cost 3x whatever you're paying them. Some of your senior engineers today might be making $250,000 or $300,000 a year; you're talking about close to a million dollars in loss. And a lot of companies don't take that into account when they're looking at how they build developer efficiencies. The one I'm really gonna dig into today, because it's one I actually helped study at NC State and I'm gonna pull some relevant data out of it, is developer context switching. So, how many people here are developers today? Okay, a good number of you.
How many of you, when you were in the office, or if you're back in the office today, get tapped on the shoulder when you're in the middle of writing a really important piece of code? You're troubleshooting, you think you've almost got it, and somebody comes and taps you on the shoulder and says, hey, I've got a quick question, can I bother you just for a minute? Yeah. That context switch makes it really costly to re-engage, to get back into the flow of where you were. The other thing we find with the community we talk to, and there are a lot of different factors I'll go through in this talk, but one that's really intriguing to me, is that developers will reintroduce bugs they've already closed. If I'm, let's say, on step six of problem solving in my head, and somebody comes and taps me on the shoulder, I go, oh yeah, hold on, let's go to the whiteboard, I'll draw it out for you, and I'm gone for 30 minutes. I come back and I'm like, where was I? You may actually revert back to step four or step three before you can get back to where you were on that original problem. Now, how does all this apply to open source? Well, when you're in open source, a lot of times you're a lone wolf. You're out there on an island, you fork the code, you're trying to make a change, and you don't have somebody standing over your shoulder helping you out. But at the same time, you most likely have a regular day job, and somebody pings you in Slack or Microsoft Teams, or your manager pings you about a high-critical item you've got to go work on.
Well, now in the open source world, you had this thing you were all excited about: man, I'm ready to go push, create my PR, I'm gonna move this thing back into the main code line. And just when you're about ready to hit the button to open the PR, you get pulled away. Now all of a sudden you come back and you're like, wait a second, what did I do again? Why was I making this change? I've even seen developers, when I talk to contributors to open source, who will never open the pull request, because they change their mind and decide it's not that important. They'll say, ah, you know what, somebody else will fix it; they're smarter than me, somebody else is gonna go fix it. What we don't realize is that every fix is critical. Moving the timeline of that change set forward is very important. One of the changes I made, I'll never forget this, when I was at GitHub: Twitter Bootstrap was going from version 2 to version 3, I was using it to build an application, and I found this issue and thought, I actually think I know how to fix this. I know what to do. I got all excited, I forked the code, I made my change, and I opened my pull request. Man, the maintainers of Twitter Bootstrap, all these people, started commenting: this is shrewd, this is so smart, man, I wish we would have thought of that. And I'm sitting there going, that's right, I'm gonna become a contributor. Here I go. And then I wake up the next morning and get a nice little email notification: your pull request is being closed. What? Everybody was just talking about how smart I was, how awesome I am. What happened? What they did was, based on my change, they realized they could refactor at a higher level inside the code base and apply that change across multiple components at once, instead of just the one component I was working in.
But they never even linked back to my original pull request. They never said, this is where it originated, this is where it started. Man, you know what, that sort of turned me off. And we see this all the time in enterprises as well. We talk about innersourcing, we talk about wanting to get people involved and engaged, and so many times it just doesn't happen that way. So, this is for all the quants in the room. This was a study done in 2018 about context switching. On the left-hand side, and I'm not gonna read all of these to you, are the different situations people get pulled away for. Oh, I've got an item with a nearer due date; okay, let me re-shift my priorities to go do yours. Across all these different factors, there were some things in here I never even thought about. There are task-specific factors and context-specific factors. The one thing I found very intriguing in this paper, and anybody can hit me up on LinkedIn or on Twitter, it's just Lee Faus; I found out I'm the only Lee Faus on LinkedIn, which is really surprising; feel free to reach out and I'll send you a link to the article. Self-interruption had the biggest impact. Self-interruption was the biggest cause of delay in getting a change out. And I started reading, like, what is self-interruption? That sounds dirty. What I found out is self-interruption is: I need another coffee. I need lunch. I need to go take the dog out. And I was like, I never even thought about these things. I always thought about the things that happen in the office, but now, with a lot of people doing remote work, there were things they did that were introducing unforced interruptions they never even thought about.
Another thing that was very interesting is the difference between switching within the same project versus switching to a different project. There's a big gap in what happens when I switch projects. I didn't realize that a lot of the engineers queried in this particular study switched between common libraries and different projects they worked on; they owned more of the full stack, versus just owning a component in a particular part of the application. And switching between projects matters, because when you look at GitLab or GitHub or Bitbucket, every repo gets its own project, unless you have a monorepo where you try to put everything inside the same project. And I started thinking, man, have we been organizing our repositories and our projects incorrectly? Are we actually inducing unforced context switches that make you rethink how you do your work? So, out of this study, we've actually got four universities in Europe right now comparing open source and enterprises using GitLab, just using our core open source product, and calculating measurements out of it. They're trying to figure out how we drive efficiencies: how do I organize my projects? How do I create groups and subgroups? What about my CI/CD workflow? Is it better to break pipelines apart, or to have everything together? Should I be doing monorepos? We have opinions on all of these questions, but unfortunately, everybody has an opinion. So we thought it'd be good to study it and really figure out what the data says. This is a three-year study we kicked off at the beginning of this year, and some of the data coming back is very interesting. One of the things I used to do as a VP of engineering, back when we were in the office, is give all of my engineers a block from 10 o'clock until three o'clock.
And no meetings were allowed to happen between 10 and three. My developers knew: hey, this is my block. And I would tell them, you fit your lunch in there whenever you wanna take it; that's inside your block. Or if you're doing intermittent fasting and wanna wait until after, whatever you wanna do, that is your block. But I would not allow anybody, sales people, marketing people, docs people, nobody, to bother the engineers during that block. One of the things we found is that we closed more bugs in that block, and the developers would spend time outside those hours building features. They started realizing that in that block, when they knew they weren't gonna be interrupted, they could close out the small and medium-sized bugs and get them out the door really fast. But if they were trying to problem-solve, or think about something new and be creative, they wanted to do it outside those hours, because they knew they were gonna go away and come back, go away and come back, sometimes figuring it out at 2 a.m., which I know happened to me a lot. So part of what this data teaches us is how important data is to the flywheel of continuous improvement and continuous delivery. As we think about that flywheel effect, with the four main building blocks, plan and create, integrate and verify, deploy and operate, monitor and improve, between those four blocks we end up creating value streams. How many people here are maintainers of an open source project? A few? So, one thing I took a little bit of issue with: I was listening to one of the keynotes this morning, and they said developers are really good at writing code. And I'm like, that's not the genesis of an open source project.
The open source projects I've been involved in exist because there was a pain that nobody would agree to address internally, so somebody decided to build the fix externally. And there was a value statement associated with that. Number one, it's a pain that I feel, and when I've talked to eight other engineers, they all have the same pain. So I'm gonna go build it outside my organization, where we can move a lot faster; I'll do it nights and weekends. There's a value statement I can apply to it: I'm trying to reduce cost, I'm trying to improve security, I'm trying to make it easier to onboard new engineers. Whatever those value statements may be, you had a pain you were trying to solve. That's why we write code. I don't know of any developer who says, hey, I'm just gonna sit down in VS Code and see if I can fill up 10 tabs of code. What they wanna do is solve a problem. And when we look at a platform, you see where GitHub's going; it's becoming more of that all-in-one solution. With GitLab, you've got your planning, you've got your security scanning, you've got all of that. What we're learning is that it's not about the feature functions. People really don't care what tool does the SAST scanning. They don't care how their artifact got built. They don't care how it made it into production. What they care about is the data that surrounds it. How long did it take to build? How long did it take to get to production? How many criticals did I actually package and release into production because we created an exception? How do I audit and plan to make sure that doesn't happen again? How long does a merge request or a pull request take? How long from the initial commit to actually closing it? How many people reviewed it? How many reviews required a new commit or a new change?
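Once those events all land in one system of record, questions like these reduce to simple queries. Here's a minimal sketch over a made-up event log; the field names and timestamps are my illustration, not a real GitLab schema:

```python
from datetime import datetime

# Hypothetical merge-request event log (invented for illustration,
# not a GitLab data model).
events = [
    {"type": "commit", "at": datetime(2022, 11, 1, 9, 0)},
    {"type": "review", "at": datetime(2022, 11, 1, 14, 0), "requested_changes": True},
    {"type": "commit", "at": datetime(2022, 11, 2, 10, 0)},
    {"type": "review", "at": datetime(2022, 11, 2, 11, 0), "requested_changes": False},
    {"type": "merge",  "at": datetime(2022, 11, 2, 12, 0)},
]

# How long from the initial commit to actually closing it?
first_commit = min(e["at"] for e in events if e["type"] == "commit")
merged_at = next(e["at"] for e in events if e["type"] == "merge")
hours_to_merge = (merged_at - first_commit).total_seconds() / 3600

# How many reviews required a new commit or a new change?
rework_rounds = sum(1 for e in events if e["type"] == "review"
                    and e.get("requested_changes"))

print(hours_to_merge)  # 27.0 hours from initial commit to merge
print(rework_rounds)   # 1 review round forced a new commit
```

The point isn't the code; it's that none of these questions can be answered at all when the events are scattered across disconnected tools.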
Those types of things all drive efficiencies, and when I'm able to see all of that from a single UI, in a single context, from my IDE all the way through to my collaboration platform, what we are now determining inside of GitLab is that there are two types of input. I've got manual input: go create a merge request, go create a new issue; somebody types it in. And then I have automated data, and that's gonna come from my automation, my CI/CD: I'm gonna run a SAST scan, generate a report, produce an artifact. All of that data needs to go to one system of record. And unfortunately, when you try to build that system of record yourself, you lose context. One of the examples that came out of the study we did: we were like, man, there's this one team at this company with rock star quality. They were releasing four to five times a week, there were no bugs, their code quality was an A across the board. Everybody was like, you should replicate that team, because they're awesome. Yeah, well, unfortunately we missed some context. They had one phenomenal rock star developer who was doing like 100 commits a day, who understood the platform, had been at the company for 15 years, understood all the different tools, and knew how to work around the system. They knew what data was gonna be collected, so they made sure to write code in such a way that it was always gonna report back: hey, we are awesome. And then there were eight other people on the team doing like two or three commits a week. Well, guess how data works? When one person is doing 100 commits a day, all the data's gonna be skewed towards that one individual.
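Why one outlier skews everything is plain statistics: the mean chases the outlier, while the median tells you what a typical teammate does. A toy example with invented commit counts, roughly matching the team in the story:

```python
from statistics import mean, median

# Eight people at 2-3 commits a week, plus one doing ~100 a day
# (call it ~500 a week). Numbers are invented for illustration.
weekly_commits = [2, 3, 2, 3, 2, 2, 3, 2, 500]

print(round(mean(weekly_commits), 1))  # 57.7 -- looks like a rock star team
print(median(weekly_commits))          # 2 -- what most of the team actually does
```

Any aggregate that averages over the team without that context will keep reporting "awesome" while eight of nine people are doing something completely different.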
So one of the things we have to take into account is that if we try to extract this data, put it into a data lake, build a lakehouse, and do all the analysis around that data, we're gonna miss the context. And when you miss the context, you're not gonna understand where you can build efficiencies. So this flywheel effect is really critical: thinking about where that data's gonna come from, and how you use that data to reorganize your teams. I have a lot of discussions with new engineering managers about how to get new engineers, people just coming out of college, committing earlier. One of the studies I did when I was at NC State was about pair programming. And one of the things we found very interesting is that you would think if I took two A students and paired them together, they would knock it out of the park. Nope, they became a C student. Both of them, because both of their opinions had to be right, and neither one could agree on how to solve the problem together. But you know what the best pairings were? Two B students became an A student. Two C students became a B student. I could take an A and a C student, and they both became an A student. So when we think about how we organize our teams and put these pairings together, it's actually really critical. Now, I'm not sure how your teams are organized today, but Spotify has a really interesting model for organizing teams. And Spotify will tell you this is not one-size-fits-all; you cannot just go into your organization and say, hey, if you do this, everybody's gonna be great. But the way they do it is: you have a product owner and you have a squad, and the squad will never have more than 10 people. Then I have multiple of those, and up at the top there's a tribe, which is sort of like a line of business.
And then underneath that line of business, cutting across, I'll have, let's say, a UI/UX team. That's my chapter that goes across; I'll have a UI/UX person embedded in each one of those squads. I'll have an SRE or a platform engineer; that will be a chapter that goes across. And then between the chapters, I will collect people from like teams that are at opposite ends of the spectrum, and those are the people I will move between projects, because they can learn from each project. So I'll take a lower-performing squad and a higher-performing squad, figure out where the chapters are, take the chapter individual who helped make that particular squad high-performing, and move them over to another squad. A chapter is a like function across all the different squads: UI/UX, an engineer, a project manager. The goal, the way they thought about it, is that you embed that specific role in each squad. Now, one of the things they found out is that it got really expensive to go out and hire an SRE who was 100% allocated to a particular squad that only released once a quarter, because the SRE basically sat there and waited until the end of the quarter and said, okay, here's everything we need to fix. So now what you see in a lot of teams is fractional allocation, and that doesn't represent very well when you're talking about people, because a person gets cut up in half or into quarters, and they're kind of like, well, which quarter of me do you want? But that's really what we're seeing now inside a lot of organizations. Now, from our perspective, what we're learning out of open source, and from a lot of people who are building automation into their tooling, is that the automation is really the data aggregator.
So DevOps is now becoming the data aggregator, and what we're doing is creating two different teams on either side of that data. On one end is a platform engineer, my SRE, looking at all of my operational statistics, my SLAs and SLOs. On the other side, I've got product operations, and product operations is using that data to understand how to reorganize, or which features of which products to light up across the organization to become the most efficient. Now, inside of open source teams, the way it normally works is there are four different roles you'll play: maintainer, contributor, collaborator, or observer. When GitHub, at GitHub Universe, presented the State of the Octoverse, they talked about 90 million users. Well, I can tell you as an ex-employee, about 90% of them are observers. They're going out there and using it like Stack Overflow: let me go find a Rust project, because I've got some Rust things I need to do, and let me see how somebody else solved this problem; then I'll copy, paste it into my project, and see how I can make it work. When we talk about collaborators, collaborators are usually people who find bugs, and they come in and open issues. Now, collaborators are the easiest to get frustrated, because they're gonna raise their hand and say, hey, I've got an issue, and then that issue's gonna sit dormant for about three years and never get worked on. And they're gonna try to understand why nobody's working on their stuff. Well, the only way you're gonna get your stuff fixed is to become a contributor.
The way you become a contributor is by signing the contributor license agreement, getting your company to buy in, and then going and fixing it on your own. And sometimes that's hard. There are a lot of really smart developers out there, and it's kinda hard to get over the imposter syndrome: am I good enough to contribute code to this project? And then ultimately you have the maintainers. These are the admins of the project; they're the ones who determine who can be a maintainer. If you're not a maintainer and you wanna contribute, you fork the project, and from that fork you can open a pull request or a merge request to get your contributions in, because nobody wants to give read-write access to all of their repositories. When you look at this, we can apply it internally as well. I joke with everybody inside the enterprises I work in, and it's really funny: I'll say, we all know where that rock star team is that everybody holds in high esteem. And I keep waiting for somebody to say, yeah, we don't have one of those. Nope; everybody says, yep, that's Susan's team, yep, that's Bill's team. I know exactly who I go to when I've got a problem that nobody else can solve. Those maintainers are probably gonna be less than 5% of your overall workforce. So if you've got 10,000 engineers, only 5% of them are gonna be rock stars. And those are the people you can't afford to lose. Those are the people who understand the business, understand the challenges, contribute the best quality code, and everybody else learns from them. And then you've got contributors. Contributors are wannabe maintainers, okay?
They have ideas, they've got things they wanna get done, but the only way they can do it is to contribute to other projects. Then your collaborators are gonna be developers, project managers, people who wanna interact with the community but aren't really comfortable writing code. And the rest are going to be observers. One thing to think about: those observers include the people you just hired, who are on their six-month ramp and trying to figure out where they fit and how they actually get to contributing. So, I joke, because I have to talk to a lot of people who don't know what developers do. When I talk to them, I'm like, this is what you think a developer does, right? Accept the task, code the task, deploy the task. Yeah, yeah, that's all our developers do; that's all we ask them to do. I'm like, okay, hold on. Let's dig into what your developers actually do. We sit with the project management team, and they say, hey, I wanna let you know I think this should only take us three hours. And you look at them and say, yeah, this is more like a three-week task; we should break this down. And then you've got your task dependencies and accepting tasks; they clone repos, they review other people's MRs, they've got all the QA tasks they need to do. All of those things. Imagine, in an open source project, if you were the maintainer, this is what you do every day. And then people wanna say, I don't understand why my MR isn't getting approved. Because maintainers have their regular job during the day, and they're coming in and looking at these things afterwards. They're trying to get into your mindset, to figure out how you tried to solve the problem, to see if it's something they want to accept or not.
And that's actually much harder. Has anybody here ever been a professor, adjunct or term? A little bit? Okay, sure. So the hardest thing to do is grade somebody else's code. Right? Because you're gonna say, I'm sorry, you solved this problem incorrectly, and they're gonna come back and say, but it executes. And you'll say, well, but there are some efficiencies, and they'll say, you never talked about efficiency in the problem statement. There are lots of ways to solve a problem, so you have to really think about what they were actually trying to solve. And when we talk about building efficiencies, this is where we need to create them. So, the talk of the town over the last week is ChatGPT. One of the things that's really interesting, and I actually did this, I wish I would have taken a screenshot: I asked it to do an automated code review for me on a pull request that I had opened. Let me tell you, my code sucks. There were like 18 requests for change that came back from ChatGPT. So at what point are we gonna be able to automate all these other things we currently have to do manually, to move things along and create a more efficient workflow? Now, obviously you're gonna want somebody to check whatever it says you should do. But it's been so busy that I haven't gone back in, because one of the things I really wanna try is having it write a test plan for me. That's one of the things I think people really struggle with: create me a test plan for this code, or give me a regression test plan. Show me what all the different tasks are, then point it at an API and see if it can actually figure out what that should look like. So, this is very common.
If you don't know this workflow by now, well, I've been giving this talk for eight years. This is the cycle we think about when we're using Git: creating branches and getting feedback into that loop. You've got your accepting tasks, your plan and create, on the left. One of the differences in GitLab is that you don't need a commit to create a merge request. I can have an issue and create a merge request without a commit. In GitHub, to create a pull request, you have to have a branch, and you have to have a commit on that branch, before you can create the pull request. The way we talk about it at GitLab is that this allows me to start a conversation. I might have UI/UX, I might have a design, I might have a test plan that I need to follow, so I can have a discussion before I ever write a single line of code. Now, some people in the open source world are just moving really fast through the code, so doing a branch and a commit sometimes makes the most sense, and a PR inside of GitHub can be just as powerful. But once you're in there, you have this loop effect, and you wanna create as much automation as possible to build as much efficiency as possible. What we wanna do inside of this is get that feedback loop, the analytics and the data, to the developer as soon as possible. And it has to be relevant to the current change. So inside of GitLab, we have this thing called the merge request widget. In GitHub, you have to go down to the bottom to get your status that came from Jenkins or whatever. If you've got Snyk plugged in, that's gonna come in through an issue comment. If I've got Checkmarx, that's gonna come in through another issue comment. And all those things are gonna be in different issue comments; you're never really gonna know what the current, quote unquote, status is.
So inside of GitLab, what we have is your code quality metrics, your test summary, your metrics, all the changes. You can look at the details, the security scans; all that data is right at the developer's fingertips. They know immediately which things they may need to change. Up at the top, I can see my pipeline, all the different stages with all the different jobs, and I can immediately see where my code may have failed. If one of those little check marks is an X, I can click on that X and it'll take me straight to that segment of the failure in the logs, to show me where the error occurs. It's contextual. It allows me to build efficiency into my process by knowing where the failure actually occurred and letting me go fix it. I don't know about you, but I go to Jenkins and I sit there reading through an 80-page log file, copying and pasting it somewhere so I can do a find, search for "error", and hope I can spot something that means something to me, and then try to fix all of them at once. So here, all of those items are surfaced, and down at the bottom, when I've got a blocked merge, I can see immediately why it's not able to be merged. I'm giving you the steps; I can tell you. This data, we're mining it and surfacing it based on what people told us in those efficiency conversations at the beginning: I don't know what's keeping me from merging. Okay, that's data we need to surface to the developer. Now, how soon do you think it's going to be before I can see that the source branch is 322 commits behind the target branch, and ChatGPT says, I think you should go ahead and do a pull? How about I just go ahead and do the pull for you?
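The widget pulls that data from CI job artifacts declared as reports. A minimal `.gitlab-ci.yml` sketch might look like the following; the job names and the scanner command are illustrative, but `artifacts:reports:junit` and `artifacts:reports:codequality` are the documented keys that feed the test summary and code quality panels.

```yaml
# Minimal sketch: jobs whose report artifacts feed the merge request
# widget. Job names and the code-quality command are placeholders.
stages:
  - test

unit-tests:
  stage: test
  script:
    - pytest --junitxml=report.xml   # any runner that emits JUnit XML works
  artifacts:
    reports:
      junit: report.xml              # surfaces as the MR test summary

code-quality:
  stage: test
  script:
    - ./run-code-quality.sh codequality.json   # placeholder scanner
  artifacts:
    reports:
      codequality: codequality.json  # surfaces as MR code quality changes
```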
So those are the kinds of things we want to make sure we surface to an engineer. Now, this is another interesting study. The links are down at the bottom; again, hit me up on LinkedIn or on Twitter, you can find me and I'll send this to you. Through the Association for Computing Machinery, there's research showing that how we measure developer productivity has been wrong. All the years we thought, hey, I'm going to measure you based on the lines of code you write? That was wrong. I'm going to measure you based on the number of bugs you close? Wrong. There's actually this SPACE framework that describes the five different things you need to measure to understand how productive a developer is or isn't. It starts with satisfaction and well-being. How many of you have participated in an NPS survey, whether in a community you're a part of or inside your company, that asked about your developer happiness? All right, got a few people. A lot of people say, oh, that's all warm and fuzzy. I say, no, there's some really good data you can get out of those: you can know which projects people want to be a part of and which ones they don't. Then we've got performance, the outcome of a system or process, like the outcomes coming out of my CI/CD. There's activity: how often am I interacting with other developers, across other projects as well as my own, and what does that look like? Communication and collaboration: how frequently am I helping others inside my organization? And then efficiency and flow: how quickly can I get my changes out the door? This is roughly an eight-page report written by a group of researchers, one of whom, Nicole Forsgren, co-wrote the original Accelerate book around all the DORA metrics and things like that.
So it's a really interesting study to go out and read. With that, I'll end my talk and open it up to see if anybody has any questions. Any questions? Yes. So it's definitely something we're looking at. One of the things I raise with our engineering team all the time is that we have merge requests that have been open for, like, three years. At what point do we just decide to close one? And how do I set a marker on that item for when I should reevaluate it? We're looking at effective ways to make that meaningful to an engineer, because we know developers move between projects, and the item has to still be relevant to me. If I moved to another project and it was something I worked on three years ago, it may not be relevant anymore. So is there a way, through AI/ML, to determine who it would potentially be important to now? Especially if there's existing code and we're suddenly seeing the same area of code being investigated for change, and somebody may have already started a change, opened and closed but never merged, that you could just build on. Those are the things we want to figure out, because there's so much code out there that has been abandoned. On the flip side, we also have a lot of dead code sitting in systems that we never refactor out. So again, how can I start to use AI/ML to refactor those things out? Hey, I just ran a security scan of your project, and I want to let you know there were 480 lines of code we could never even hit based on the call trees.
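The call-tree idea can be sketched in a toy form: build a graph of which functions call which, walk it from an entry point, and flag whatever is never reached. This is nothing like a real security scanner; it only follows direct top-level calls in a single module, and the sample source is made up.

```python
import ast

# Toy sketch of call-tree reachability: find functions in a module
# that are never reached from a given entry point.

SOURCE = """
def used():
    helper()

def helper():
    pass

def dead():
    pass

def main():
    used()
"""

def unreachable(source: str, entry: str = "main") -> set:
    tree = ast.parse(source)
    funcs = {n.name: n for n in tree.body if isinstance(n, ast.FunctionDef)}

    def calls(node):
        # Names called anywhere inside this function's body.
        return {c.func.id for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}

    seen, todo = set(), [entry]
    while todo:
        name = todo.pop()
        if name in seen or name not in funcs:
            continue
        seen.add(name)
        todo.extend(calls(funcs[name]))
    return set(funcs) - seen

print(unreachable(SOURCE))  # 'dead' is never called from main()
```

A real scanner resolves methods, imports, dynamic dispatch, and reflection, which is exactly why it needs the kind of in-memory tree described above, but the flag-then-open-a-merge-request workflow is the same shape.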
Maybe that should just become an open merge request, and I should have somebody go investigate it so I can remove the code, because if I'm building an in-memory tree to security scan it and I can't hit it, there's no way anybody else is going to hit it either. All of this is about using that data in a way that allows us to become more efficient: reducing the lines of code inside a code base, figuring out where people have already started a change but abandoned it, moving things faster through the process. There's a lot we can learn from the community that we want to empower and bring back to our enterprises. Yes. So time blocks are the most effective. The other thing I've seen be effective is basically requiring somebody to self-declare. The hard part is we have so many tools on our desktops today. How do I let Slack know, hey, don't bother me? How do I make sure I don't check my email and get a notification popping up that a new email just came in from my boss? How do I turn off all of that noise? That seems to be the biggest thing. When I would be coding, I used to have one of those "on air" signs outside my cube, so I would let people know, hey, don't bother me, I'm in the middle of something. There needs to be a way for an engineer to let other people know, hey, I'm working on something important. We all know that if you're trying to reach the CTO or the CEO of your company and you look at their calendar, you don't just go and override somebody else's meeting; you try to find a slot. So we've got to get better at knowing when our engineers are available and when we can ask them.
The other thing is that a lot of times it's not the engineer who needs to block time; it's about changing the culture of the other people who are interacting with the engineers. We need to get better at being asynchronous. One of the things I love about GitLab is that Slack is not a "hey, I need you to respond now." It's "I need you to respond when you're available." I've had Slack threads go four or five days before I respond, and the first message back is, hey, thanks for responding. It's not them being snarky. I can tell it's an honest, hey, I'm glad you were able to catch up, you gave this some thought, and now you were able to respond to me. It's a cultural thing. I talk about this a lot: we bring in tools today without thinking about the impact on process and culture, and then all of a sudden the executives want to know why all the tools we brought in aren't delivering the business value they thought they would. And I say it's because you never actually changed the culture and the processes to make those tools effective. People think Slack and Teams are just a way to ignore email and reach you faster. No, that's not what they're meant to be. They're supposed to be a place where we can collaborate. So I think you have to change the culture on the other side, and one way you can impact some of that culture is by enforcing those time blocks and telling people, you cannot bother me during this time. Other questions? Well, great. Thank you all for attending. It was great meeting you all.