Good morning, folks. My name is Andrew Grimberg. I am with the Linux Foundation's release engineering team, and we're going to be covering some learned best practices. This is kind of an intro course. Honestly, it's targeted at fairly new developers, but folks that have been in the industry for a while will learn some things as well.

The Linux Foundation itself, who I work for, is creating the greatest shared technology investment in history, enabling open source collaboration across companies, developers, and users. Y'all are actually participating in that just by coming to this event, even those online. The release engineering team at the Linux Foundation adheres to release engineering practices: we are there to improve the continuous integration systems, and the jobs that projects run through those systems, for the projects that contract with us to do work for them.

So, with that out of the way, let's get into source code management: how, what, and when to commit. Primarily, we're going to be focusing on Git. That is pretty much the industry standard at this point. In our experience on the Linux Foundation release engineering team, there are three primary Git systems that we interact with on a fairly regular basis: you've got GitHub, you've got GitLab, and we also manage a system called Gerrit for a lot of our projects out there. So we are working in various different Git systems, and they're all similar in a lot of ways.

So, what is a commit? A commit is an object in the SCM that contains a message and a change against your code base. Empty commits are the ones that are basically just the message; merge commits and tags are all forms of an empty commit. But if you have a change, there's going to be data in there. And changes affect a branch of code. The default branch in Git is now main; it was historically master. For those that haven't interacted a lot with projects out on GitHub, a lot of them are transitioning to main, many may still have master, and we have a mix across all of our projects right now.

So, what goes into a commit? Code changes. The changes to the code base that you're making are the diffs, and that's actually what a commit is: all we store is the difference between what was in the repository and what you're bringing in. Binary objects are a source of contention among many developers. On the release engineering team, we strongly discourage them, because every single binary object stored in the system must be stored in its totality for each commit. If those are large objects, your repository bloats very quickly, and every time somebody clones it, they have to pull down all of that data. Even if they're doing a shallow clone, they're still going to end up pulling in some of that data. It's not really good. If you need version control of binary objects, we really recommend that you start working with Git LFS (Large File Storage), but that requires a lot of extra setup and plugins for folks to use.

So, telling your commit story. There's a couple of ways we can talk about this. I like to start with two very different pictures here. The first one is from XKCD: this is historically what a lot of people's private branches start looking like over time.
You might actually see this in some other code bases out there, and it's not very useful; you can't really tell what's going on. Now, we on the release engineering team have recently started enforcing certain standards on commit messages. The second picture is a recent commit log from one of my own personal projects. What you'll find there is that it's very easy to determine what's going on with everything: you'll see features, you'll see chores, you'll see fixes, and they're all clearly defined as to what they are. Part of that comes from some of the ideas that we propose people follow.

So, commits themselves. We recommend the following seven rules, which come from a blog post from several years back by Chris Beams on what commit messages, and actually commits, should be like.

First, separate your subject line from the body with a blank line. Git itself understands this concept: when you are using the command line tools for Git, some of those tools produce email messages, and the subject line of your commit message is that first line, so we want you to separate it out.

Second, keep that subject line to 50 characters or less, because, as I just mentioned, email. Git was designed for the kernel development community, and they are all email based. A lot of these rules fall into the "be nice to the people that are using email" category.

Third, capitalize your subject line. This has actually become a bit of a point of contention lately, particularly because, as you may have noticed in my previous slide, I have these components at the start of each of my subject lines that act as a kind of topic. That's called conventional commits, or semantic commits, and that standard, if you want to call it that, is actually all lower case. But if you're following the best practices here, you should be capitalizing.

Fourth, don't end your subject line with a period. Again, those subject lines end up going into email, and a trailing period makes things a little harder to deal with in some ways. It's not a finished sentence; you want it to flow well into other things.

Fifth, use the imperative mood, because the imperative mood makes it a very directed statement.

Sixth, wrap your body at 72 characters, because, again, email. If there are email discussions happening on mailing lists, 72 characters allows the message to survive a couple of replies without breaking things up.

And seventh, use the body to explain the what and the why, not the how. The code itself is going to tell you the how.

So, just reiterating: your body tells the story, and the story is why you're trying to do this. What is it you're trying to change? Make that subject line as useful as possible, and focus on one type of change at a time. Keep your changes as small as possible; it makes them easier to review. You want to convey as much as possible in that subject line, but you leave the story in the body to go with it.

Use commit footers for tracking information. Don't stick your issue tracking into the subject line; that pollutes the subject line and makes it harder to get other useful information into it. There are additional things you're going to want to consider putting into your footers.
The Linux Foundation release engineering team, and actually any project that's part of the Linux Foundation, is supposed to be using what's referred to as the Developer Certificate of Origin. That's the Signed-off-by line; it's basically attesting that you have the right to contribute this code. You also want to put your links to bug tracking in the footers, and any other metadata that your SCM system might use. For instance, Gerrit requires an additional metadata footer called Change-Id, which allows it to track iterations of a patch through its life cycle.

So again, commit in small, atomic pieces. This makes it easier to focus on particular changes. You want each piece to be as testable as possible, and it should be something that's easily reviewable. We don't like to see commit bombs. They're not useful to a community in general, and they're not nice to committers or maintainers, because they have to go through a lot of code and try to determine whether that entire change is something they can take in without breaking things. If you do it in small pieces, it's a lot easier to actually understand.

And commit regularly. A lot of people don't do this, honestly. I recommend it to my team and I practice it myself: as I'm working through a piece of code, I'll make commits on my local branch. I might squash them together a little bit later, but at the end of the day I'm using them as save points. SCMs are great for this, because you start working on one thing, you're sure it's going to work, and then you find out it doesn't; but you've got that commit back there and you can roll back quickly. Push your work up for review early, too. It doesn't have to be ready for people to actually review; mark it as work in progress. But it's out there, so if you have a problem with your system, you've got a backup out in the cloud already. It also means people can start helping you determine if there's a better way to do things, or they can start reviewing it earlier.

So, Git workflows. I'm actually running through this a lot faster than I expected. All right. We're going to focus on GitHub specifically, mostly because it's the easiest platform to get onto out there. GitLab works basically the same way; pretty much the only difference is that on GitHub these are called pull requests and on GitLab they're merge requests.

The basic workflow is this. Make sure you have a clone, obviously, then pull down the latest changes from the origin or the upstream. Make a local branch; I always recommend you work on a local branch and not on the main branch itself. Make your changes, then commit and write a good message, per our previous discussion. You repeat these steps as needed as you build up your set of changes, and I do mean a set of changes. You want each of these commits to be atomic pieces. They can go up as a single PR or merge request, all at one time, but you want them to be in small pieces so that they're easily reviewable. Then you push up your local branch for review and open your PR or merge request against the origin or upstream.

So, again, this is kind of a pseudo-code version of that. I've gotten to the point where a lot of people ask me, how do you go about all this work? This is my workflow. I always follow this workflow, and it doesn't matter which SCM system I'm working on.
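Roughly, that pseudo-code boils down to something like the following. This is only a sketch: the repository, branch name, file names, and issue reference are all made up for illustration, and your project's exact conventions will differ.

```bash
# Get a copy of the project and make sure the default branch is current
git clone https://github.com/example-org/example-project.git
cd example-project
git checkout main
git pull origin main

# Always work on a local topic branch, never directly on main
git checkout -b fix/config-crash

# ...edit code and tests, run the local test suite...
git add src/config.py tests/test_config.py

# Commit with a conventional subject, a wrapped body, and a DCO sign-off
# (-s adds the Signed-off-by footer; Gerrit-based projects normally get a
# Change-Id footer added by a commit-msg hook as well)
git commit -s \
  -m "Fix: Handle missing config file gracefully" \
  -m "Loading a project without a .projectrc currently raises an unhandled
exception. Fall back to built-in defaults and log a warning instead,
so users can see which settings are actually in effect.

Issue: EXAMPLE-123"

# Repeat the edit/commit steps, keeping each commit small and atomic,
# then push the topic branch up and open the PR or merge request
git push origin fix/config-crash
```

On Gerrit-based projects the final push usually goes through the git-review tool instead of a plain branch push, but the shape of the loop is the same.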
I always work on local branches; I always work on topic branches. It makes it easy to have multiple changes in flight when I have a lot of bug fixes I'm working on, or feature requests in progress. If you do it this way, you're not polluting your main branch. You're always working against that main branch, but you're not polluting it with your local changes, because if a reviewer says, hey, this needs to change, you can go back to your local copy, make your changes, and push that back up, and you don't have to do a weird rebase dance against everything on your main branch anymore. If you've got other things in flight, that's really hard to work with. So that's why we recommend doing it that way. In fact, we started using a tool called pre-commit, which has a plugin that allows us to basically force people to use topic branches. As long as they've installed it on their local machine, it won't even let them make the commit unless they've switched to a topic branch. It helps enforce that mentality.

So, code review. Here's our recommended approach. Always do a self-review before you push your change up. We see this fairly regularly with fairly new developers: they make all these changes, they think it's ready, but they don't do a very thorough review of it themselves. They might see one or two tests pass locally, so they push it up for review, and then the continuous integration system starts flagging things left and right, because they didn't run all of the test cases locally, or they didn't verify certain semantics that the project requires and that may be enforced by the continuous integration system. So don't send your change up just because local tests pass. Make sure you're following all the requirements of the project you're working on.

If you're reworking code, this one is kind of vital: if you're reworking code because somebody asked you to, because a change was requested, make sure you've hit all of the requests from that person or those reviewers. If there are changes that need to happen and you agree with them, make sure they're all taken care of. If there's something they're asking for that you don't agree with, make sure you've started a conversation about that before you actually push up your changes. We see this fairly regularly: somebody pushes up a rework and they've skipped three to five different things that were requested of them. They saw two or three things that were requested and fixed just those, and then we're constantly going back, telling them you need to fix this, you need to fix this, chasing people to make the fixes, versus them just taking care of everything from the get-go. It helps you and it helps the maintainers of your project a lot.

Are you rebased against the latest head before you push up your change? This is something we see a lot as well: somebody will push up a change and it's five, ten, maybe twenty revisions behind where the latest code is. Sometimes those will merge cleanly; a lot of times they won't, and then you're dealing with resolving bad merges. A lot of the projects we work with on GitHub have decided to use the GitHub requirement that code has to merge cleanly before it can go in; it has to be on the latest head before it's even mergeable. That does require people to do a lot of rebasing, but it's kind of what you need to do.
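In practice, getting a topic branch back onto the latest head is just a fetch and a rebase. A minimal sketch, with the same made-up branch names as before:

```bash
# Pick up whatever has merged since you branched, then replay your work on top
git fetch origin
git rebase origin/main

# Fix any conflicts, re-run your local tests, then update the open review.
# --force-with-lease is the safer way to rewrite a branch you've already pushed.
git push --force-with-lease origin fix/config-crash
```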
Have you included all of your relevant documentation and tests? If you're creating a new feature, you had better have tests, and you had better have documentation. The unit testing you can enforce with continuous integration, making sure people have actually included it; documentation is more of a social contract. The release engineering team at the Linux Foundation socially enforces that documentation must come in with changes. If you are making a change and it is adding a feature, we require documentation, unless that documentation already exists. If you are making a fix, we require at least a change log entry saying what the fix is. If you are refactoring code, that doesn't necessarily require a change log or documentation, but, like I said, we're getting into the social dynamics of the project at that point.

And the final thing: does your code pass the standard linting for the language you're working in? Python has a linter. Go has a linter. Whatever it is, there's a linter out there, so make sure you're passing it before you push up your change. Our continuous integration (again, I'm referencing the release engineering team) runs linting as part of its cycle. If you don't pass linting, you don't get merged. We literally don't allow merges to happen unless the continuous integration system says it's okay to merge; we've denied our maintainers the ability to merge code that doesn't pass continuous integration. You may be finding more and more projects starting to do that now: their tests are complete enough that they can say, if you don't pass, you can't merge. We still have a lot of projects where people will override the CI, but it's becoming less and less common.

Then ask for your review. You've gone through all your local checklists, you're ready to go, so go ahead and push your code up. The GitHub UI is going to give you a link saying, hey, you just pushed this change, do you want to open a PR? It will detect all the changes between your topic branch and the target branch, usually the main branch, and allow you to open a PR, automatically using your last change as the cover letter, so to speak. If you're doing this from the command line, you'll get a URL in the output when the change is pushed up.

Raising a PR may trigger automation. That's not a guarantee, but it may: either GitHub Actions nowadays, or some other CI system. GitHub has introduced a requirement that the first time you raise a PR, a maintainer has to say, yes, this PR can run. There may be a code review that happens beforehand, but after a couple of times you won't have to worry about that anymore, because the CI system will just pick up and start running on its own. That was done as a preventative measure against people abusing the system. And then the maintainers of the project will get notified in some way of the change being requested: they'll get an email, or they might see something in their dashboards, whatever it is. That doesn't necessarily mean they're going to reply right away.
I have changes open in some open source projects that have been sitting for a month or more without a response. It's a little frustrating; I understand that. And that's where you want to start thinking about pinging the maintainers. By that, I mean you leave them a comment; if you know who the maintainers are specifically, you can call them out in a comment on the PR or merge request. But I strongly recommend against putting their name directly in the commit message. That's not socially acceptable in a lot of cases. You want that kind of discussion, trying to get people's attention or having the conversation, to happen in the review system, not in the commit message. The commit message itself is going to be the record of what the change is doing, but you don't want all the discussion around it in there. The discussion is metadata to the commit. If you were to change SCM systems in the future, you'd end up losing the discussion around why the change happened, but you'd keep that commit message history, and that's what matters most in that case.

Now, as a maintainer or committer reviewing somebody's code: go ahead and read that PR cover letter as well as all the commit messages. This is my workflow. I'll read through what it is they're asking for, I'll take a look at that, and I'll look at the commit messages related to all of the code before I start a review of the code itself. Generally, as a committer, I wait until the CI system has come back and said everything's ready to review, just so I know they're passing at least our basic requirements. I don't generally want to review code that the CI system is going to outright reject, because, as I already said, our repositories require that you pass CI before we can even merge. If you don't get through CI, your code isn't really going to get reviewed, and you want to think about that when you're raising changes for folks to look at.

Then evaluate the code the same way I was recommending you evaluate your own. Make sure the code coming in meets the standards of the project. If you see code that needs to be cleaned up, now is the time to ask for it; why ask for it after it's already in? The best time to clean code is when it's coming in. And evaluate whether it's solving the problem in a satisfactory way and abides by your project's coding standards.

There's a little bit more here: evaluation of code security. You as a maintainer are kind of on the hook to evaluate whether this code is coming in secure. Now, you may not be as well versed in security as you might want, but if you are the maintainer of a project, you're going to be held on the hook for making sure the code coming in is secure, that it's not going to introduce a really visible issue. There might be some edge cases that slip by you; that's going to happen, we're all human. Hopefully your CI system is set up in a way that it can start catching those for you, though.

If there are issues with the code, leave actionable requests. I've seen this a lot in the nearly decade I've been doing release engineering at the Linux Foundation: people will make requests of contributors, but they aren't really actionable requests. They might open a discussion, but it's not a very actionable discussion. They're just kind of like, "I don't think this works," or something else like that.
That doesn't give the contributor anything to actually work with. You want to leave people actionable items; the more actionable a request is, the easier it is for them to work with you to get through that code review.

Be polite. It's hard, especially when you're all remote. There are flame wars. Be polite; there's somebody on the other side of that, unless it's a bot. And even then, be polite. I'm polite to the bots, too. Granted, the bots don't really talk back to me, but I try to be polite to them.

Don't be afraid to ask for changes. In fact, when I get a new release engineer on my team, especially if they haven't worked in open source, one of my first requests is for them to go start reviewing code. I ask them to go look for changes that need code to be updated, and I want them to do that because I want them to be comfortable asking others to make changes. The more comfortable you are asking somebody to make changes to their code, the more likely you are to accept people asking you to make changes to your code, too. It's a two-way street. And if you're asking for something that they don't agree with, debate it politely, please. We don't want to encourage flame wars.

Do you see issues in the code around the code that you're reviewing? Whenever you open these up in GitHub or GitLab or Gerrit, you're going to see the code around the change to some extent as well. If you see an issue around the code that doesn't pertain to the change itself but needs a fix in some way, my recommendation is: don't ask them to make that fix right then. It's not relevant to the change they're trying to make, unless it is, and that's actually not too common in my experience. Instead, go open an issue in your issue tracker and let them know: by the way, I saw this, I'm opening this issue over here. You don't need to fix it now, but it would be nice if you'd go fix it later. They might not, but somebody else might pick it up. The point isn't to fix all of the code around their change right then; the point is the change they're working on. Keep focused. This is part of the reason why we ask for atomic changes in small pieces.

Then work with that contributor through the various iterations. Your contributors may be drive-by; it does happen. A drive-by contributor comes by, they raise a change, it might not pass CI, you say, hey, this needs to be fixed this way, and they never respond to you. That will happen. You've got a couple of options there. You could let the change die, or, if you've got the time and you actually feel the change is worthy, you could pick it up as a contributor or maintainer and run with it yourself. My general take is to try to get the contributor to update it, make their changes, fix things up. The more you do that, the more likely they are to engage with your project, and the more likely they are to engage with other open source projects, which enhances the open source pool in general.

And finally, be patient. These things take a while. My team is fairly fast-moving in a lot of cases: we see changes come in, they get reviewed, and sometimes they get closed out right away. I've seen changes come in and get completely reviewed, through CI and everything else, in 20 minutes or less. It happens. And then we have the changes that take weeks, sometimes months. At one point, working with one of my developers,
we went through 45 iterations of their change before it was merged in. It will take time sometimes, but it's worth it. We had a lot of discussion on that change, and a lot of other people came in and made recommendations and suggestions as well. Usually, when a change takes that long, it ends up being a really good change at the end of the day.

So, validation testing. Every language out there has some sort of test framework, at least if it's in general use. If you're the creator of a brand new language and you don't create a unit testing framework for it, shame on you. But every language in general use has a testing framework. I don't know what language you're working in, but go learn its testing framework, or, if nothing else, learn the linting at the barest minimum. Testing your code should happen locally if at all possible. There are times when you can't test everything, so make sure your test suites have smoke tests or quick tests that can happen locally, and then the harder, more intense tests happen in your continuous integration system. You want people to be able to validate that, hey, this at least passes a sniff test; it's going to get past the basics.

Practice test-driven development. This is a slow process, especially if you're not used to it. You write your tests first: understand your problem, write your tests. Your code shouldn't exist yet, so make sure those tests are failing because the code doesn't exist. Then write the barest minimum code that you need to actually get that test to pass; that might include hard-coding variables. What you want is to make sure that initial part is passing so your basic logic is right. Then you go back and start refactoring: take out those hard-coded variables, make it more generic, more useful, and make sure the tests keep passing throughout all of this. (I'll sketch that loop out in a moment.) Once all of this is ready, you make your commit locally (see the earlier discussion about good commit messages), make sure you've got your documentation included, especially for new features, include your tests in that commit, and then push it up.

Your tests need to be part of the change you're raising, especially for new features. If you don't have those tests in there, how is the continuous integration system going to know how to test the code you're proposing? It can do basic linting, and that's pretty much all it's going to be able to do, maybe some semantics or other bits, but unless it knows how to test it, it can't test it. Tests should be written against both positive and negative outcomes. I have encountered a lot of cases where somebody writes tests and they only think about the positives; they don't think about how it's going to fail. Or they only think about the failures and not about how it's going to pass: okay, I've hit all my failure conditions, I should just pass everything else. What about the edge cases? What about the corner cases? Think about them and make them part of your change.

Well-written tests are going to allow you to refactor your code later without breaking things. We have a homegrown library that we use in our CI environments for a lot of our projects; they all are homegrown. It has some unit tests, but it didn't have a lot at the very beginning; this was before we really became rigorous about it. There are times when we refactor code and, because we may not have tests around it yet, it gets through CI's basics and we break something else, because we didn't have good unit tests.
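As promised, here's a rough sketch of that test-first loop in shell. The function, the file names, and the tiny hand-rolled test harness are all invented for illustration; they're not our actual library, and most languages have a proper test framework you'd use instead.

```bash
# Step 1: write the test first. It must fail, because version_greater()
# does not exist yet.
cat > test_version.sh <<'EOF'
#!/usr/bin/env bash
set -eu
source ./version.sh

# Positive case: a newer version compares as greater
version_greater "2.10.0" "2.9.1" || { echo "FAIL: 2.10.0 > 2.9.1"; exit 1; }

# Negative case: a version is not greater than itself
if version_greater "1.0.0" "1.0.0"; then
  echo "FAIL: 1.0.0 is not greater than itself"; exit 1
fi
echo "PASS"
EOF
bash test_version.sh   # fails right now: version.sh does not exist yet

# Step 2: write the barest minimum implementation that makes the test pass.
cat > version.sh <<'EOF'
#!/usr/bin/env bash
# Succeed if $1 is a strictly newer version than $2.
version_greater() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}
EOF
bash test_version.sh   # now prints PASS

# Step 3: refactor, re-running the test after every change.
```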
The code that we do have good unit tests on, though, we can refactor pretty much willy-nilly, and we won't break anybody in production, because our tests validate that we're doing what we need to do.

As I said, test-driven development is a little slow, especially at first. I have a developer who had never done test-driven development before, and I said, you need to go do this thing, but I want to see your tests beforehand, and I want to see them failing. He went and wrote the tests, and it took him two or three days, whereas the code, once he wrote it to get past all the tests, only took him a day. But he came back at the end of that and said, that was so much easier. Because with previous bits of code he'd done at other jobs, he'd write the code, do some local manual testing, things would work, and then it would get out there and break, and he'd have to go refactor without knowing what was actually working. Test-driven development lets you do this in a more sustainable way, one that allows other people to pick up the code later down the road.

Continuous integration and continuous delivery. My team works with basically three primary CI/CD systems out there. Our largest usage is actually in Jenkins; that's the first icon there, the little butler. We've also picked up a lot of GitHub Actions since GitHub Actions showed up on the scene not too long ago, and then we've got GitLab CI out there as well. These robots are out there to help you improve your code faster. Use them.

A little bit of history here. As I just mentioned, GitHub Actions showed up on the scene not too long ago. GitLab CI has actually been around for a long time; it works against their own SCM platform, GitLab itself, or against GitHub, so you can use it against both. Jenkins has been around for a very long time; it's an old beast, dating back to around 2004. I've been using Jenkins since before it was called Jenkins; it was originally called Hudson. It's probably the most flexible CI platform I've ever worked with. It's old, it's venerable, I understand that; people don't like the UI anymore, but it still does its job. And the nice thing about it is that it works against any SCM system out there. So if you're running some bespoke Git system that's not GitHub or GitLab, you can still run Jenkins. You can run some of the newer CI systems too, but Jenkins is pretty much guaranteed to work, so we like it. We work with the others, and we're moving more to the others because they're a little easier, honestly, but there are things Jenkins does that the others still can't do. So think about it and use the tools wisely.

The purpose of your CI system is to validate your code; that's its number one purpose. Its number two purpose is to deploy that code, though not everybody uses that. My team writes libraries. We don't deploy code; we just test code, and other people pick up our libraries and deploy with them. But the main purpose is validation that things are happening in a way that's mechanically sound. Many projects, like I just said, use it to deploy: I see people using it to deploy websites, deploying to production, using it in conjunction with Kubernetes, all sorts of things. If you can imagine it and script it, it can be done with a CI system. One of my favorite statements about CI systems is that they are bash as a service, scripting as a service. They're also useful in other ways, too.
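To make "bash as a service" concrete: a verify job on any of those three systems often boils down to a short script along these lines. The specific tools are placeholders; substitute whatever linter and test runner your project actually uses.

```bash
#!/usr/bin/env bash
# Sketch of a CI "verify" step. Everything here should also be runnable
# locally, so contributors can check their work before pushing it up.
set -euo pipefail

# Run the same hooks developers run locally (formatting, commit checks, etc.)
pre-commit run --all-files

# Language-standard linting: flake8, golangci-lint, checkstyle, whatever fits
tox -e lint

# Unit tests; a change that adds a feature should bring its tests with it
tox -e py3
```

The CI system's job is mostly to run that same script on every change and refuse the merge when it fails.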
You also have to think about the fact that these are as-a-service things, and if they're public, they could potentially be virus as a service. Think about that and just be careful about what you're doing. If you have a CI system, and I strongly encourage you to have one, make sure it's validating your code, at least doing the very basic linting of your code. And as I've mentioned a couple of times, if you have a good set of tests, consider removing the ability of your maintainers to merge code without CI passing. It will force people to make sure the tests are passing: if the tests don't pass, they can't merge their code, they can't get their fun little change in, and now they have to actually think about the code a little bit more. We have a lot of people that are really scared of that. They're like, but our tests aren't good. I'm like, fix your tests. You fix your tests, and all of a sudden your code is more testable, and everybody wins.

So, in conclusion, a few resources for you. The Linux Foundation's release engineering team maintains living documentation at docs.releng.linuxfoundation.org. A lot of what I've talked about here comes out of that, and there are additional sources in there as well. There's my reference from the earlier slide on semantic, or conventional, commits. There's a link right there for the Developer Certificate of Origin, that Signed-off-by line. It's a very small website, a static page about yay long, that tells you exactly what the DCO is all about. And finally, the Linux Foundation mentorship program: especially for green developers, this is a great way to get your feet into open source and to get a little more experience out there.

And I am actually at the end of my slides early. Are there questions? Yes. Okay, so the question is: if I were to pick up an orphaned change as a reviewer, should somebody else review my fixes to that change? Yes and no. Yes, if your project is big enough to have multiple maintainers. No, if your project is small. It's a "what's the size of your project" question. My team is around 12 engineers, so it's very easy for me to say, hey, I'm going to pick up this change that somebody orphaned, and I need somebody else to review it. But there are projects that we help that only have two contributors, and they might be in literally diametrically opposite time zones; they might not be online at the same time. Let's say it's a critical change; what are you going to do then? If you've got a breaking change and you need it through CI and review, there are times when you as a maintainer are going to have to pick up those orphaned changes and get them through. That said, orphaned changes are rarely critical to production. Generally what I've found is that if somebody has a critical change coming in, they're not going to orphan it if it's fixing something that's breaking; they're going to care about it. Documentation changes are where you're more likely to see something orphaned.

Are there other questions? Yes, sir. So the question is: do we get a lot of pushback when we're trying to implement new processes? Yes. Let's take, for instance, the conventional commit messages that you saw earlier. That is actually a process change that we implemented on our own repositories back around May of this year. I was working on an earlier version of this talk.
I was doing some additional research on new best practices that have shown up since the last time I worked on something like this, and I stumbled on conventional commits. Ooh, this is interesting. I started doing more reading and I'm like, this is really interesting. I like what it's doing; I like what it's forcing on people. I raised it with my team and we had some discussion, and I had some dissent. Our operations team (because we did this on our internal repositories for operations as well) had a lot more dissent. They're like, I'm not making all these kinds of changes; what are these for? But we went ahead and said, okay folks, you need to start doing this, though we didn't hard-enforce it. Then, a little bit later, we actually started hard-enforcing it with a pre-commit plugin. The pre-commit plugin lets people validate locally, but we also run it through our CI system, so our CI system will validate everything pre-commit does. If there's a pre-commit plugin configured, our CI system is going to pick it up and check it too. So people have a way to make sure they aren't breaking things before they push, with pre-commit, but then we have our CI system saying, thou shalt not pass, unless they've done that as well.

When we turned that on, I got a lot of pushback, because all of a sudden people couldn't get their changes in: their subject lines were greater than 50 characters, or they weren't specifying a proper topic. But within about two days people fell in line, partly because we didn't turn it off, but also because it forced them to think about the changes they were making and to describe them better. And that's one of the keys, especially when it comes to commit messages. You may have noticed earlier that my seven rules of a commit focus primarily on the commit message and not on the content. That's because the commit message itself is so critical to people later understanding what the change was all about. In an earlier edition of this talk, I had a picture of a commit that literally changed a single character, but the commit message was almost a page long. The reason was that that single character, which happened to be a period, was fixing a bug that only occurred in a very specific instance, one that only some of our CI systems were picking up and not others. The deep dive into the analysis of that bug is what the commit message contained. So that commit message was not just saying, hey, I made this fix; it was: here's how I found the problem, here's why I made the fix, here's what that fix is actually trying to do. And in the future, if somebody comes back and says, why is this that way, they can go: oh, now I know. On this particular version of this particular distribution, this doesn't work, but it does if you do this, and then it works on all distributions.

So do I get pushback when we start pushing these kinds of changes? Yeah. Do we get pushback from communities when we recommend this to them? Sometimes. For instance, again, conventional commits: my team sits in an advisory capacity on a lot of technical steering committees for projects, the ones that are using our services, and we recommended to one of our projects that they might want to consider conventional commits. They deliberated on it for about a month, and then they decided, you know what? We're going to start doing this.
And they got some pushback from their community, but all of a sudden all of their commits started becoming a lot clearer as well. Because, again, as I said, it forces people to think about what they're doing. It forces people, especially if you're using conventional commits, to make their changes small and atomic. You don't end up with somebody producing a cleanup of documentation and, oh, by the way, adding this little feature on the side. If somebody's reviewing that and the change says it's a feature but all it's doing is fixes, they're going to get pushback, because what the commit message says it's doing and what it's actually doing are very different. Or there's scope creep in the change. That's one of the things we try to convince people of: avoid scope creep in your changes. If you see that something else needs to be fixed, say this whole file set needs to be linted, do your linting as a separate change.

Any other questions? Yes? Can you repeat that again? So the question is: what's the balance between pushing a critical feature, or fix, I suppose, versus test-driven development? My take on that, and some people disagree with me on this, is that if it's critical and you don't have tests around it, you need those tests before you push out your fix, because you want to make sure you don't have a regression in the future. Now, if you're one of those sad people that goes and fixes things in production and then rolls it back into your code base, you're not going to have those tests; you're not going to verify that it's actually fixing things or not breaking something else. It is a very tight balance. If the fix you're trying to push out is going to keep your company from losing millions of dollars, you might want to go ahead and get it fixed and then put in your tests; I understand that. It's a judgment call, but in general you want to make sure the test exists before you get it merged in. Don't get me wrong, my team does sometimes do those quick fixes, like you say. We do those. We try to avoid it, and every time it happens, I'm like, why? Why did we skip the process here? That's where your social contracts come in.

Anybody else? No? I don't see anybody chatting at me online either. All right, folks, thank you very much. It's been a pleasure. These slides should be available online if you're interested; I did upload them to the system, so supposedly they'll get released with the video later. I am available on Twitter and other places as tykeal, and I will be in the Slack rooms throughout the rest of the conference. So, pleasure. Thank you.