All right, so welcome to the Code Review Best Practices session. If it helps you to follow along on the slides, they're posted on my Twitter, which is @xjmdrupal. I asked this morning for them to be posted on the session page; I'm not sure if that happened or not. If they're not there yet, okay, we're going to get to "no photos" in the next slide, so no photos, please. That's what the red lanyard means.

So right now, I am unmasked. This is the first time I have been unmasked in a public building since the pandemic started, so I would really appreciate it, for my safety if not for yours, if you would put a mask on if you're in the audience. You can probably hear that I'm a little hoarse. I'm hoping this is just a mechanical thing from climbing the five flights of stairs up and down to my Airbnb, but just in case it's not, maybe you want to put a mask on, please. Second note before we get started: I ask that people not take photos of me, for privacy reasons. Photos of the slides are fine. Photos of the room are fine, so long as no one wearing a red lanyard appears in them, and there is at least one other person in the room who has one. So look for that when you're framing your photos and cropping them or posting them afterward. Please don't photograph me, and please respect the lanyards.

So hi. All of that out of the way: my name is Jess. I'm xjm on Drupal.org, and I'm a Drupal core committer. Now, there are thousands of people who contribute to Drupal core, but as a core committer, I'm one of about 15 people worldwide who can actually accept or merge changes into the primary Drupal core code base, meaning that they become a part of everyone's applications. I'm also a member of the security team and a Drupal core release manager, so I actually create many of the Drupal releases that you can install on your sites each month.

For more than a decade, I was funded full time to perform these roles for the community. Now, volunteer contributions are an essential part of our community, but it is unsustainable and unhealthy to expect volunteers or part-time contributors to perform the responsibilities of a full-time salaried position, especially for a time-critical role like release manager. When I had full-time funding, it allowed me to respond quickly to emergencies when other committers couldn't because of their day jobs or time zones or their families, whatever. It also helped me provide critical support for the strategic initiatives that you hear talked about in the keynotes. And I also, by the way, created more releases than any other committer in Drupal's history. So that's my one feather in my cap forever.

I was completely unemployed for three months; I was laid off twice this past year. But this month, between two sponsors and the support I'm receiving on Patreon, I managed to get back to about 60% funding. So: from 100% to 60% funding, just to do release management of core. You or your organization can actually help sustain me by making a small sponsorship on my Patreon page, which is patreon.com/xjm. I'm not hard to remember. I did have to take a few billable hours of work alongside my sponsorships in order to make one of them work, and I might be seeking a third sponsor in 2024.

Now, one organization that I'm obliged to introduce before I get to the content of my session is Hero Devs, who've been funding me for 16 hours per week. That means they're giving me 16 hours per week just to do release management work on core.
So they're working on building a Drupal 7 never-ending support product, basically a fork of Drupal 7 that will be maintained after its end of life. FabianX, the Drupal 7 core framework manager, is working with them on it, so it should be pretty legit. The platform will also scan for security vulnerabilities, and in fact Hero Devs will also have a security reporting and patching process for any new vulnerabilities that are discovered that affect Drupal 7 only. We're still in the process of negotiating with the security team what happens, you know, after a Drupal 10 vulnerability is disclosed, if it also affects Drupal 7. And it's not just for core either; they'll also be focusing on key contributed modules. I don't know too much about this never-ending support myself. I know a lot of people are going to be very excited about it, but if you are interested, I can put you in touch with the people on their team who know more about it. So yay, thank you Hero Devs for funding my time today, so that I'm not just here as an unpaid itinerant.

All right. So the primary goal of peer code review is to catch bugs in new or changed code before it goes into production. Peer code reviews also help code health increase over time rather than decline. It's not just about the code quality, though: software is written by human beings, for human beings, so it's important that your code review process reflect that. Put simply, peer code review is also a tool for helping you build stronger teams. By reviewing each other's changes as they're developed, you build a shared understanding of your code base across your team. If done properly, it should foster open, constructive communication. It can also help create a sense of pride and shared ownership in your projects. Code review is also an excellent opportunity to mentor junior developers; it's some of the best onboarding there is, really. And finally, peer code review will help reviewers and code authors alike write better code the next time, because reviewers learn from the process too.

Of course, if you want your peer code reviews to be a positive experience for your team, you need to have a positive review culture. Now, I'm not big on emotional labor myself; it's very difficult for me, and I find it exhausting. But peer code review is a process where it really is worth putting in that extra time to make sure everyone involved feels safe and supported. The most important thing is that everyone involved in review should treat each other as equals. Start from a place of humility. Everyone brings their own knowledge, experience, and reasoning to the table. We do peer code reviews precisely because we're not very good at catching bugs in our own work, and that's true for the reviewer as well. In the years that I've had commit access to core, I've noticed that a lot of people sometimes just do anything I say without questioning it, which sounds like it would be great. But actually, I want to hear the author's feedback on my feedback, especially if they disagree, because I make mistakes too. That's part of why we have a peer review process.

It's also important to have empathy for others during the peer review process, to consider their needs and constraints. If something seems wrong or confusing, ask a specific question about it; this will help both of you understand what's going on. If something technically works but could be improved, phrase your improvements as suggestions.
And finally, when you disagree with a code author or others involved about something in the code, encourage others to explain their perspectives. Even if something definitely needs to be changed, it's helpful to have misunderstandings and additional perspectives documented for future reference, in case that code needs changes in the future.

So I structure my code reviews in three parts, in the hopes that they'll be well received by code authors. I try to start by thanking the code author for something specific that they did well. This helps them read the review positively and shows them that I'm aware of the work that has gone into the issue. If I dive straight into asking them to change this, that, and the other thing, it might seem like I missed the value of the work or the big picture of what they're trying to do. Next, in my comments about specific lines or aspects of the code, I try to provide references backing up my recommendations. Even if you as a code reviewer have seen a particular problem dozens or hundreds of times, the code author may not have. So provide them with policy documentation, references to other code, and the like. Providing references helps the code author internalize best practices a little bit at a time, and it also sets a good model for future reviewers, so that everyone learns from it. Finally, after I've finished my detailed review, I close with a summary and an actionable next step. If I've given a lot of feedback, I'll write up a paragraph about what I think the most important concerns I've raised are. It's always good to help contributors understand what the path to done is, and that starts with an actionable next step.

Now, before I get into any more specific advice about the technical aspects of code review, I need to explain some background concepts from psychology. Of all the tools we use to do peer code review, the most important one is our brains. So it's important to understand the limitations on our brains and how we can enable them to do their best work.

The first concept I have to introduce is that of decision fatigue, which, simply stated, is that after we make many decisions, our ability to make additional decisions becomes worse. If you've ever come home after a long day and found yourself just sort of staring into the refrigerator as if it held answers, or been completely overwhelmed by the array of choices in a large supermarket, this is probably what's going on. We make hundreds of decisions every day. Many of them are completely trivial: what to say in a text at any given moment, what to have for breakfast, you know. But when you make many decisions in a row, your brain gets fatigued, and symptoms of that fatigue can include feelings of exhaustion or mental fog, impulsiveness, and procrastination, actually avoiding making further decisions. And code review involves a lot of decisions. For every single line we read when we're doing a code review, we have to decide whether the code is right, what other side effects it might have, and whether there's a better way to make the same change.

Another important concept is that of directed attention fatigue, or DAF. When we have to focus on a single task, like we do when we do a code review, our brains have to suppress stimuli from distractions. These distractions can be external: background noise, the kids, the pets, messages from coworkers in Slack. They can also be internal: random unrelated thoughts passing through our minds, feeling hungry, worrying about the bills.
Finally, distractions can also come from the code itself: code style errors, bugs in nearby code, and so forth. And that in particular is something we'll talk more about later. Unfortunately, the mechanism in our brains that allows us to push aside these distractions and focus on the task at hand gets fatigued when overused. Symptoms of that fatigue can include restlessness, irritability, and apathy, none of which are good for your health or for a positive review culture. Directed attention fatigue can also cause you to miss social cues, which can make it harder to communicate empathetically with the person whose code you're reviewing. And finally, it can also cause misperceptions, confusion, impulsiveness, or poor decision making, all of which naturally will lower the quality of a code review.

It's also worth pointing out that distractions are an accessibility issue. Directed attention fatigue does affect neurotypical people, but it can also be compounded for folks with ADHD. There's been research showing that the same mechanism in people's brains is stressed in both directed attention fatigue and ADHD. The important difference, of course, is that DAF is a temporary condition that you can recover from. It takes time for that distraction suppression mechanism to recover after it's been stressed, which is probably one of the reasons that many folks with ADHD use a strategy of taking frequent short breaks in their work, with a timer. Overall, you should treat distractions, both in the code and in your work day generally, as accessibility and inclusivity concerns. This is something that I'm trying to get better at in my own reviews as well. It's definitely a process.

So with that context, let's talk about some research that's been done on how different factors can increase or decrease the effectiveness of our code review process, the effectiveness of our brains. References for the research are in the speaker notes of the slides I tweeted, and that hopefully are on my session page, I don't know.

The key concept to understand for this section is that of defect detection. This means: of the existing potential bugs or flaws in a code change set, how many are actually identified by a given process? Higher is better here, which you might not think at first (higher defects sounds like a bad thing), because it means you've caught more bugs before they can become bugs after the code is committed or deployed. So early defect detection is really valuable in the industry. According to one of the sources I looked at, even low-impact bugs are twice as costly to fix once the code is in production, and a critical bug can be as much as 100 times as costly. I'd say for Drupal core, the equivalent of code being in production is probably it being in a tagged core release. So for me, that's the line.

Now, something you might not have expected: a formal peer code review with multiple reviewers is on average twice as effective as automated testing for defect detection. That's comparing each of them used by itself, before you've combined them. This is actually amazing: people simply reading the code are twice as good at finding bugs as software that can actually run it. Now, like I said, combining both is even better; I'm not advocating that you abandon your whole test coverage suite. So I have some slides to illustrate this that compare the effectiveness of peer code review with different kinds of quality assurance.
First off, a word about unit tests. Unit tests can catch anywhere from 15% to 50% of the bugs within their scope, or 30% on average, but that average doesn't actually mean much because those error bars are so wide. The reason it's so variable with unit tests is that their effectiveness depends on the scope and encapsulation of the change set; they're unit tests, so they're not good for testing your whole application. Furthermore, misusing them in place of integration tests can lead to a very high rate of false positives. This is something we experienced in the early years of adopting PHPUnit for Drupal core. And false positives are also distractions in your code review, so use them with caution. You really want a unit test to test a single unit.

Static analysis is pretty great and can catch 33% of potential defects. That's actually more than typical unit or regression tests. We've recently started doing static analysis on all Drupal core issues, and it's been fantastic. Integration tests are typically the most effective automated tests, catching about 35% of these potential defects. So as an aside, when in doubt, write an integration test over a unit test or regression test. If you're time constrained and feel like you don't have time for the full suite of coverage, do the integration test first.

Now, compare all of the above with a formal peer code review process, which will catch 60% of defects. Again, this is peer code review by itself, before any tests have even been run. Finally, for context, I've included a statistic about the effectiveness of large beta tests with over 1,000 participants. Such beta tests identified 75% of potential defects. Just think about the time and cost it takes to set up a beta test of that scale, to say nothing of the free time that you're getting from your beta test participants. A formal code review costs a fraction of that, but still might find as much as 80% of the same defects. So peer review is worth it even if for some reason you don't have automated tests at all, which hopefully you do.

Like I said, those statistics are about using each tool by itself, which isn't very real-world practical. Peer code review is even more effective when combined with other strategies. If you combine design reviews, automated testing, QA, and peer code review, especially in that order, over 90% of the bugs are found before they make it to production. So you should still write integration tests, still use linting and static analysis tools, and have all of that run before your peer code reviewer even takes a look at the issue. That will reduce the number of defects in the code before it gets to the peer reviewer, which in turn makes them even more effective, because there are fewer things they have to focus on. In summary, peer code reviewers are your most cost-effective resource across tools, people, and technology, so you should optimize for them in your development processes.

So how do you do that? How do you optimize for a good code review? I'm going to spend the next several sections exploring how you and your team can work together to optimize for your code's reviewability. The graph on the slide compares the defect density found in review, on the y-axis (that's the vertical one), with the number of lines of code under review, on the x-axis. Higher defect density in this graph is a good thing: it means you're catching more of the bugs that already exist in a piece of code early, before the code goes into production. Lower defect density, on the other hand, is bad.
These huge change sets out here with 700 or 1,000 lines of code under review didn't have fewer bugs per line; they just completely overwhelmed the reviewer to the point that they couldn't find them anymore. There's a pattern of exponential decay: reviewers are really good at spotting bugs in small change sets, but the reviewer's effectiveness drops off rapidly as the size of the change set increases. So why does this happen? The reason is that reviewing too many lines of code at once can cause both decision fatigue and directed attention fatigue. We've run right up against two of those limitations in our brains. The best way to fix it is to reduce the number of lines of code under review: split the change set up into smaller logical pieces. Fewer lines to review means fewer decisions and fewer distractions.

Based on the data, it's best to review around 200 lines of code at a time. Now note, this is 200 added and removed lines. It includes the removed lines as well, because as a reviewer you actually need to read those so you understand what the code was like before, to ensure the change is correct, that there aren't regressions, and so forth. So you add the absolute values of the added and removed line counts to get the total line count of the code under review. Now, if that sounds a little too much like middle school algebra for some of you, I have an example to illustrate it. This is a random example from core's commit history. This change added 113 lines of code and deleted one, so that adds up to a total of 114 lines of code under review. That's a good change set size; folks were able to review it within an hour.

Also, in general, when we're talking about upper limits: you should never review more than 400 lines of code at a time, and even then, you should reserve change sets that large for simple one-to-one replacements, scannable things. For example, replacing a deprecated function's usages across your code base, if it's a one-to-one replacement where there are no other decisions you have to make about it.

And here's an example of problematic change set scoping from a different issue, also from core. This issue was a bit much for me. I felt physically exhausted when I got about halfway through reviewing it, and I checked, and sure enough, the scope of the change set was just too darn big. I'm only showing the very bottom of the diff stat here, but the issue changed 94 files, with 229 additions and 201 deletions. That adds up to a total of 430 lines of code, which is over our recommended maximum, and that's part of why it felt so tiring for me.

Another thing that helps is to never spend more than 60 minutes doing a code review at a time. Note that 60 minutes is sort of the recommended maximum for neurotypical people, and it won't work for everyone. You might find, for example, that 30 minutes of review followed by a short break works better for you. And between those 30- or 60-minute reviews, you need to take breaks so that your brain can recover. Deal with other distractions that have come up, but also just take a minute for your brain to rest. Some folks might find that they work better with shorter segments: 20 minutes of code review, a short break, and then a long break after four cycles. That's an idea I got from the ADHD community.

So, speaking of distractions and code review, let's talk about handling nitpicks, which requires a drink of my tea before I dive into it.
So nitpicks themselves are trivial, but how you handle them is actually a really important part of your code review that does need to be discussed. To me, reading code style errors is like watching a speaker who has food in their front teeth. Imagine a little fleck of something green stuck right up here. I checked in the mirror before this session to make sure there wasn't actually something green stuck up there as an illustrative example. Now, it would have no bearing whatsoever on what I'm saying, but you still might not be able to push that out-of-place thing aside to concentrate on what I'm saying. And that's how I feel when I review a change that has code style or documentation errors. They are minor, they are unimportant, and they are utterly distracting. I can't see your code if your whitespace is messed up. I don't understand your inline comments if they contain grammatical errors. It is a real problem. Code style errors are distractions that increase your directed attention fatigue and make the review harder on the reviewer. Reviewers are not just being jerks to you when they comment about them. Trivial decisions about fixing them also add to decision fatigue for both the reviewer and the author: do we have to fix this here? How do we fix it? What's the rule? Blah.

But on the other hand, nitpicky reviews, like I alluded to, are really discouraging to code authors. They've spent hours or days fixing something, and then instead of evaluating the design or appreciating their work, you get lost in the distractions of formatting issues or spelling errors. It can seem oppressive, or like a lack of appreciation for the contributor's work.

So how do we fix that problem? How do we resolve those two separate concerns? On the one hand, code style errors get in the way of effective code review; on the other, they are the least important thing about any change set. The answer is to use automated tooling that runs immediately, every time you submit a code change, before it gets to your peer reviewer. Instead of dozens of tiny nitpick decisions, you have one binary decision: did it pass the automated coding standards checks?

The Drupal core community has now spent a full decade trying to make core compliant enough with its own coding standards so that we can do this. And when I talk about this, it's hard for me not to be melodramatic about just what a positive thing this has been for our community, because I've been working on that project the whole time. It has made our code review process so much healthier and so much more focused on what actually matters.

Here are some of the automated tools you can use: PHPCS, PHPStan, CSpell, ESLint, and Stylelint. Core has a script that will automatically run a number of linting checks, including these tools, on the core code base. These run both as part of Drupal.org's automated testing and as a pre-commit hook when committers like myself commit changes to HEAD. You can also configure jobs for these tools using GitHub Actions, GitLab CI, or your own continuous integration or QA process. Now, I won't go into detail about how to set them up and configure them; there's documentation on the internet about that, and my slides, like I said, are at least posted on Twitter, so you should be able to find the list again if you didn't choose to take a picture of it right now. I strongly encourage everyone to use these tools, both for contributed projects on Drupal.org and for client projects.
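Just to make that concrete, here's a rough sketch of running PHPCS locally on a Drupal project. This assumes you've installed the drupal/coder package, which provides the Drupal standards for PHPCS, and the module path here is just a placeholder for wherever your custom code lives:

    # Install the sniffs for the Drupal coding standards:
    composer require --dev drupal/coder

    # Check your custom code against the Drupal standards:
    ./vendor/bin/phpcs --standard=Drupal,DrupalPractice web/modules/custom

    # Automatically fix most of what it finds:
    ./vendor/bin/phpcbf --standard=Drupal,DrupalPractice web/modules/custom

The same check can then be wired into whatever CI you use, so it runs on every push before a human ever looks at the change.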
In each case, Drupal core already provides a default rule set that you can use to make your project compliant with core standards, but you also have the option of developing your own rule set if your organization mandates specific coding standards.

Now, automated tooling can't catch every nitpick-type problem. I have an example here of a minor documentation fix, where someone should have used the @see PHPDoc tag for a little one-line doc block, but they used a short sentence instead. For those of you who are able to see it up there: they had the word "Uses" instead of an "@see", and a period at the end of the sentence that wasn't really necessary. Now, GitLab in particular makes it very, very easy to deal with this kind of problem. As a reviewer, instead of typing out a comment explaining why this is wrong PHPDoc, I can just use the insert suggestion feature, which is the leftmost button in the GitLab comment widget when you're looking at a merge request; it looks like a little piece of paper with a caret on it. You just highlight the lines the nitpick is about, click that insert suggestion button, and then edit the snippet. The merge request author can then adopt your suggested change with just one click. They can even batch suggestions and commit them all at once in a single commit. So for a number of reviewers in our community who are notorious for spotting these little things (the period shouldn't be there, the period should be there), we've empowered them to fix their own nitpicks rather than burdening an entire review cycle with them.

So with all that background information, let's now talk about how we can optimize our change sets so that peer reviewers can do their best work. Remember, human reviewers are the single most efficient resource you have to catch issues early, safely, and cheaply. So lean into that. There's something quite important about your peer reviewers: they are your first insight into what experience someone is going to have down the road maintaining this code. If the reviewer can't follow your code just by reading it, chances are that's going to happen again in the future when someone else looks at it for the first time. So optimize for readability and reviewability. It might even be you looking at that code in a few years, right? You don't want to be sitting there going, oh my goodness, did I write that? That was so long ago. I'm so embarrassed. You want to be going, ah, thank you, past self, that was very clear; you explained it to me quite well. I've had both experiences with my own code, and I'm sure many of you in the room have as well.

Now, while smaller change sets are easier to review, it's definitely possible to take that too far. First and foremost, you shouldn't ever commit something that leaves your main branch in an unshippable, incomplete, or broken state. If you were using Drupal about a decade ago and wondered why Drupal 8's release date kept slipping later and later than what was promised, it's because of this right here: we didn't keep the code base in a shippable state, so we ended up with hundreds of critical issues blocking Drupal 8's release. Another mistake people make is making a diff smaller by separating out the test coverage or the documentation. This isn't a good idea, because the tests and docs are an important part of understanding the change, and a diff that's missing them isn't optimized for reviewability.
This affects the immediate experience for the reviewer, the shippability of your code base, and the usefulness of your git history, for that matter.

Finally, scope creep. (Another drink of tea.) You're editing a file and you see something else that's wrong a couple lines above, so you think: why not fix that while I'm here? It's a simple problem; I know how to fix it. The problem is that this adds distractions for the reviewer and forces them to think outside the context of the primary change. It also creates an ever-expanding set of changes that need to be in the issue, blurring when the issue is really done. Remember, the path to done is important for contributors, especially in an open source project, but also, I mean, especially when you have actual timelines to deliver something for a client.

Reviewers can make this mistake too, by the way, especially when they're reading moved or context lines in the diff. If lines have been moved from one file to another, it's very easy to read them as if they're new code and start giving feedback on them. I've made that mistake myself plenty of times, but it's actually not helpful feedback at that point, because the person is just changing the factoring, and git just doesn't render that well; it's very difficult, without extra extensions, to get git to render a move in a way that makes sense to our minds. So instead of fixing that one little thing near the changes, especially in moved code, restrain yourself. Get in the habit of filing separate follow-up issues or tickets for unrelated problems you spot while looking at a merge request.

Now, sometimes you'll get a reply from a code author that a certain pattern you pushed back on is used elsewhere in the code base, even if you've documented a real technical issue with the approach, or even if it's just "we don't do this anymore." And there are two kinds of code authors here. Some code authors respond to your feedback with great enthusiasm, and then go and change all instances of that same pattern everywhere else in the file, or even the whole code base, which is definitely scope creep. Others push back on your pushback and want to add new code that uses a known bad pattern just for the sake of consistency, just because the bad pattern exists elsewhere; especially in a large application, that's not helpful. Both of these are the wrong approach. File a follow-up: write good new code now, and deal with old bad code later.

You also shouldn't fix out-of-scope things even on the same line. When I was first working on Drupal core in 2011, this was a mistake I personally made a whole lot. I would think: oh, well, we're updating this line, we might as well fix this other thing too. I can improve the wording. I can improve the formatting. It'd be cleaner with a ternary, because we can use those now, because PHP 7, whatever. Bad idea; don't do it. It's still scope creep, and it will still make your peer reviews less effective.

And here's why scope is relevant even on a single line. Some kinds of changes are easily scannable if you use a word diff: replacing all instances of a misspelled word, fixing one coding standard, or replacing a deprecated method with a new one. In Drupal, by the way, it's also useful for reviewing the contents of t() when someone's changed a string inside a t() call; a line diff doesn't help you there, but a word diff will show you which words actually changed. All these changes are easily reviewed with git diff --color-words.
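If you want to try it, the invocation is just this (the branch names here are made up for illustration):

    # Show which words changed on each line, instead of whole-line diffs:
    git diff --color-words main...fix-writable-spelling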
If you have not used it before, it will change your life, I promise. So in the diff on the slide, a change is standardizing how the word "writable" is spelled throughout the code base. Serious issues here, people. Let's say I thought of a better way to word the method doc block that says "whether this is a writable storage"; it sounds a little awkward to me, I don't know. It could be tempting to say, well, I'm already changing this line, so I might as well improve the rest of it. The problem with that is that it completely changes the kind of review that's needed. If the change is only standardizing a couple dozen instances of how "writable" is spelled, that's very easy for me to scan; it'll only take a couple minutes. However, if we suddenly add a bunch of rewritten docs, I have to use different parts of my brain. Instead of just checking the spelling, the reviewer has to think about the whole sentence. Is the new version actually an improvement? Is it grammatically correct? Are there other errors being introduced? What's more, the reviewer will now have to read the entire method to decide whether or not the updated documentation is actually correct. Suddenly, what was a two-minute review is instead going to take 20 or more, depending on how many lines we decided we wanted to improve. The approach we should take instead is to get the simple change set in, the one that just fixes the spelling, and then consider a follow-up task for the other improvement.

Note that git diff --color-words can also take a regex to specify which patterns should count as a word break for the purposes of the diff. I find that a pattern of a single dot is very useful for reviewing things like replacing one function call with another. The example on the slide is from a recent change we made to adopt the new PHP 8 str_starts_with() function. The original merge request lumped together adopting str_starts_with(), str_ends_with(), and str_contains(). (If you don't know what these are and you're a PHP developer, look them up; you'll think they're cool.) It changed like 300 lines with these replacements, which meant there were 600 total lines, because it was changing old PHP functions with position matching and whatnot to the new functions. And there were also nine or ten different patterns of replacement, because you can use all kinds of different string functions with different logic in PHP 7 and earlier, and you have to figure out which of the new functions each one should become.

I spent like two hours trying to review those 600 lines, because the contributors who had already done the work were very insistent that they had already done the work. So I did my best, but after two hours, I just felt like my brain was melting out of my ears, and I couldn't finish. That's a clear sign that despite all the effort that had already gone into the issue, it was not well scoped. So what I did is I split the issue up with git add -p, moved some of the changes to one merge request and the other different patterns of changes to different merge requests, and that way it was much easier to review.

So after it was split up, for the first step we have to review only three things. We're scanning, and we're looking for three things each time we look at a changed line: Does the original code have a check for whether strpos() is zero? Are the comparison operator and the zero removed? And does the updated code have the correct logic?
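To make those three checks concrete, here's a small sketch of the two shapes this replacement takes. The variable names and values are made up for illustration, and str_starts_with() requires PHP 8:

    <?php
    $haystack = 'Drupal 10';
    $needle = 'Drupal';

    // Pattern 1: a strict check that the needle is at position 0
    // becomes a direct replacement.
    var_dump(strpos($haystack, $needle) === 0);     // bool(true)
    var_dump(str_starts_with($haystack, $needle));  // bool(true)

    // Pattern 2: a strict check that the position is NOT 0 becomes a
    // negated call. (The two are equivalent even when the needle is
    // absent entirely, because strpos() then returns FALSE, and
    // FALSE !== 0 is still true.)
    var_dump(strpos($haystack, $needle) !== 0);      // bool(false)
    var_dump(!str_starts_with($haystack, $needle));  // bool(false)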
If it was originally checking that the position was not zero, then the str_starts_with() call should be negated, because it's about what the string starts with. Otherwise, if it was checking that the position was zero, then it's a direct replacement for the strpos() call. So there were only three decisions to make about each line, and very simple ones at that, rather than a dozen, and there were only 100 lines I had to deal with in that first step rather than 600. Both of these things improve the reviewer's ability to give a good review and decrease the time that large change sets spend fighting merge conflicts in the issue queue, especially for Drupal core.

So now let's talk about the most important questions to ask during your peer code review. The first thing you should always do is ask yourself whether this change should even be made in the first place. For Drupal core, there's a corollary to that: maybe this shouldn't be a core change; maybe it should be done in contrib instead. There can be pressure in open source to merge a change because other contributors have done the work on it, but unfortunately there really are changes that make the application worse, or at least less maintainable. One example of this is that in core we get a lot of requests for specific configurations or new options to be exposed in the core user interface. Sounds great, right? More features, more configurable Drupal. Except that more elements exposed in that user interface leads to a decline in usability and accessibility, because more elements on the page are distractions. Same problem for the front end.

The second question to ask is sometimes the only one an inexperienced reviewer might know to ask: does the change completely resolve the issue, without regressions? It's up to you to define what "completely resolve the issue" means in a smart way for your situation. The "without regressions" part is also important, and it means that anything that might introduce a behavior change, like a bug fix, should also probably be manually tested.

The third question is whether the code quality is better after the change than before. Generally, a change that decreases code quality should only be made temporarily, in critical situations. For example, in a core security release we'll often add duplicated code instead of creating the new API that you would expect, in order to reduce as much as possible the risk of any theoretical custom code breaking due to naming or subclassing issues. After the security release goes out, we file a follow-up issue to properly add that new API in a minor release. Note also that this point says "better" and not "perfect". If something seems to be getting stuck in analysis paralysis, refocus and ask whether the proposed change is at least better than the current state or not. And if someone wants to improve it further: follow-up.

Fourth question: does the change set meet standards for usability, accessibility, testing, and documentation? These four categories also correspond to what we call the Drupal core gates, and for each of these categories, Drupal has minimum requirements that must be met: specific accessibility testing that should be done, specific test coverage and manual testing that has to be included, specific types of documentation changes that are required, both in the code base and on Drupal.org, etc.

And finally: is it the best fix we can come up with right now?
This can be a harder question to answer, especially for someone who's newer to the application under review. Having multiple peer reviewers really increases your chances of finding optimizations and writing cleaner code in this category. I always value working with peer reviewers who are good at identifying unnecessary API surface, for example, because it makes the code surface cleaner, smaller, and easier to understand, reduces the chances of regressions being introduced, and reduces maintenance needs.

Now we're almost done, and I have some final tips about how to do a peer code review. These aren't from any particular source; they're just xjm's random thoughts. The first piece of advice is to pay attention to the bigger picture of what's being changed. To really understand a change, you need to understand the API and its effects. Read the whole method. Look up the callers. Trace the call stack in your IDE or the API documentation, or even step through it in the debugger if you can't just parse it by reading. Do all of that until you understand the bigger picture. If you only review code in its own context, you might miss something important.

Recommendation number two: grep, or searching the code base. After Drupal.org and git itself, grep is the tool I use the most in core code reviews. I grep for a huge range of things, and probably for at least one thing in a majority of the issues I review. If I don't understand something, or it looks kind of funky, I check for other places the same pattern might be used in the core code base. If the author is only fixing one instance of something that I think might be present in other implementations, I grep. If I want to compare what the author is doing to other uses of the same API, I grep. Be curious, and think not only about the code in front of you, but also about the code base as a whole.

And for my final recommendation: be curious about why something is wrong in the first place. You'd be amazed how many people never ask this question, especially when there's dead code, unused variables, incorrect documentation, or just strange bugs. It really is worth looking into how the code got to be that way in the first place. For example, if you delete dead code without understanding how it became dead code, you might be removing the last evidence of an unreported bug. If you don't understand the history or context of an issue, you might inadvertently make it worse.

The best tool for learning the history of a line is git log -L (that's a capital L). A lot of people know how to use git blame, but more often than not, git blame will not give you the information you need. A few months back, I had someone ask me to clarify a to-do comment in code related to an issue I had never seen before, because they saw with git blame that I was the last one who had made a commit that changed that line. In reality, all I did was commit a bulk update patch that changed all the Drupal.org URLs in code comments so that there wouldn't be a redirect. If they had used git log -L instead, the situation of who actually added the to-do would have been a lot clearer, and I have the example for that on the slide. Now, you can't always stop there, because git isn't great at tracking when lines are moved from another file, like I said earlier. So sometimes you have to check out the last commit before the first one you see in the list and then repeat the process.
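As a rough sketch of both of those habits, with placeholder patterns, paths, and line ranges standing in for whatever you're actually investigating:

    # Find every other place in core that uses the same pattern:
    grep -rn 'some_pattern' core/ --include='*.php'

    # Show every commit that touched lines 100 through 120 of a file:
    git log -L 100,120:core/modules/system/system.module

    # Or trace a single function's history by name:
    git log -L :system_requirements:core/modules/system/system.install

    # If the trail goes cold because the lines were moved in from another
    # file, check out the parent of the oldest commit listed and repeat
    # the search there:
    git checkout <oldest-commit>^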
But eventually you will get to the relevant commit, and you can look up the issue from our commit messages to figure out what actually happened in that issue, and usually find your answer.

So as a final summary, I have tried to reduce the most important considerations for code review to six points that fit on one easily photographed slide. But you'll have to wait; they come in one by one. First and foremost, foster a supportive review culture. Manage nitpicks with tooling like code style testing, static analysis, and merge request suggestions. Practice good scoping and optimize for reviewability. Decide as early as possible whether a change should even be made in the first place. Ensure that all your changes either maintain or improve the code base's code quality, usability, accessibility, test coverage, and documentation. And finally, ask yourself why issues exist to begin with; look at the big picture and consider better alternatives for the solution. Thank you.