All right, so it is now 3 p.m. Hello everyone, and welcome to Code Review Best Practices. I apologize in advance: as some of us discovered during the break, I had to bring an ancient dinosaur of a laptop with me to this conference, and the version of Apple's operating system that does live captioning does not install on this machine. So I apologize, but the slides are posted online, so you should be able to download them and follow along if that's helpful for you. They're on my Twitter, @xjmdrupal, and they're also in the session description on the session page.

Just one other note before we get started. Is this mic picking me up? I feel like it's very far away. It's good? Okay. I ask that people not take photos or videos of me for privacy reasons. That's what the orange lanyard means, so if you see it on other people, please respect that as well.

So hi, my name is Jess. I'm xjm on Drupal.org, and I'm a Drupal core committer. There are thousands of people who can contribute to Drupal core, but as a committer, I'm one of about a dozen people in the world who can actually accept changes into Drupal core's code base, making them a part of everyone's Drupal applications. This means that one of my primary responsibilities is reviewing other people's code. I'm also a member of the Drupal security team and a core release manager, so I actually create many of the releases that you can install on your sites each month.

I'm funded full-time to perform these roles for the community by a company called Agileana. We like to describe Agileana as a small but mighty government contractor. We support awesome clients like NASA, the White House, the National Archives, and the Smithsonian.
So it's cool. I'm the only full-time Drupal contributor at Agileana, but other developers at the company are encouraged to participate in our contribution Fridays, with 15% dedicated contribution time. There are a couple other folks on the team who help with core, and the company also helps maintain several contributed projects as well. It's a great company with a positive work environment that puts people and health first, and we're also hiring. So if you're interested in hearing more, talk to me afterward, or you can find the booth in the exhibit hall downstairs.

So, the primary goal of peer code review is to catch bugs in new or changed code before it goes into production. Your code reviews also help code health increase over time rather than decline. It's not just about the code quality, though. Software code is written by human beings, for human beings, so it's important that your software development process reflects that.

Put simply, peer code review is also a tool for building stronger teams. By reviewing each other's changes as they're developed, you build a shared understanding of your code base across the team, and, if done properly, it should foster open, productive communication. It can also help you create a sense of pride and shared ownership in your projects. Code review is also an excellent opportunity to mentor junior developers; it's some of the best onboarding there is. And finally, peer code review will help reviewers and authors alike write better code next time, because reviewers learn from the process as well.

Of course, if you want your code reviews to be a positive experience for your team, you need to have a positive peer review culture. Now, I'm not big on emotional labor myself.
I find it exhausting. But the peer code review process is one place where it really is worth putting in that extra effort to help everyone involved feel safe and supported.

The most important thing is that everyone in a review should treat each other as equals. Start from a place of humility. Everyone brings their own knowledge, experience, and reasoning to the table. We do peer reviews precisely because it's difficult for us to catch errors or weaknesses in our own work, and that's true for the reviewer as well. In the years that I've had commit access to core, I've noticed that many contributors just sort of do anything I suggest without questioning it, which you might think would be nice. But actually, I want the author's thoughts on my feedback, especially if they disagree, because I make mistakes too.

It's also important to have empathy for others involved in the review process, to consider their needs and constraints. If something seems wrong or confusing, ask a specific question about it; this will help both of you understand what's going on. If something technically works but could be improved, phrase your improvement as a suggestion. And finally, when you and others disagree about something in the code, encourage others to explain their perspective. Even if something definitely needs to be changed, it's helpful to have misunderstandings and additional perspectives documented for future reference.

So, I structure my code reviews in three parts to improve the chances that they'll be well received by the code authors. I try to always start by thanking the author for something specific
they did well. That helps them read the review positively and shows them that I'm aware of the work that's gone into their changeset. If you dive straight into asking someone to change this, that, and the other thing, it might seem like you've missed the big picture or the value of their work.

Next, in my comments on specific lines or on aspects of the code, I try to provide references backing up my recommendations. Even if you as a reviewer have seen a certain problem a hundred times, the author may not have, so provide them with references to policy documentation, references to other code that's relevant, or the like. Providing a reference helps the code author internalize best practices a little bit at a time, and it also sets a good example for other reviewers to follow, so that everyone can learn from it.

Finally, after I've finished my detailed review, I close with a summary and an actionable next step for the issue. If I've given a lot of feedback, I'll highlight what I think are the most important concerns or issues to address. It's always good to help contributors understand what the path to done is, and that starts with an actionable next step.

Now, before I get into any more specific advice about the technical aspects of code review, I need to explain some background concepts from psychology. Of all the tools we use to do code review, our brains are the most important, so it's important to understand the limitations on our brains and how we can enable them to do their best work.

The first concept to introduce is that of decision fatigue, which, simply stated, is that after we make many decisions, our ability to make additional decisions becomes worse. If you've ever come home after a long day and found yourself staring into the refrigerator as if it held all the answers, or been completely overwhelmed by the array of choices in a US supermarket, this is probably what's going on. We make hundreds of decisions every day.
Many of them are completely trivial: what to say in a text at a given moment, what to have for breakfast. But when you make many of these little decisions in a row, your brain gets fatigued, and symptoms of that fatigue can include feelings of exhaustion or mental fog, impulsiveness, and procrastination or avoiding making further decisions. And code review involves a lot of decisions: for every single line, we decide whether the code is right, what other side effects it might have, whether there's a better way.

Another important concept I want you to think about is directed attention fatigue, or DAF. When we have to focus on a single task, like when we do a code review, our brains have to suppress stimuli from external distractions. Now, the distractions can be entirely external: background noise, the kids, the pets, messages in Slack from your co-workers. But they can also be internal: random unrelated thoughts passing through your mind, feeling hungry, worrying about the bills. Finally, the distractions can also come from the code changeset itself: code style errors, bugs in nearby code, and so forth, and that's something we'll talk about more later on.

Unfortunately, the mechanism in our brains that allows us to push aside these distractions to focus on the task at hand gets fatigued when overused, and symptoms of that can include restlessness, irritability, and apathy, none of which are good for your health or for fostering a positive review culture. Directed attention fatigue can also cause you to miss social cues, which can make it harder to communicate empathetically with the person whose code you're reviewing. And finally, it can cause misperceptions, confusion, impulsiveness, and poor decision-making, all of which will lower the quality of your code review.

It's also worth pointing out that distractions are an accessibility issue. Directed attention fatigue does affect neurotypical people, but it can be compounded for folks with ADHD. There's actually been research that shows the same mechanism
in people's brains is stressed in both directed attention fatigue and ADHD. The important difference, of course, is that directed attention fatigue is a temporary condition that you can recover from. It takes time for that mechanism to recover after it's been stressed, which is probably one of the reasons a lot of folks with ADHD use a strategy of taking frequent short breaks with a timer, and so forth. Overall, you should treat distractions in the workday, and in the changeset itself, as accessibility and inclusivity concerns. This is something that I'm trying to get better at in my own reviews as well, keeping the little asides out of it. It's so tempting to narrate and put my thoughts in the review, but it's not actually helpful for everyone reading.

So with that context, let's talk about some research that's been done on how different factors can increase or decrease the effectiveness of our collaborative development processes. References for these are in the speaker notes in the slides, which I've tweeted and which are on the session page, as I mentioned.

The key concept to understand here is that of defect detection. This means: of the existing potential bugs or flaws in a changeset, how many are actually identified by a given process? Higher is better, because it means you've found more defects before they can actually become bugs in production. And early defect detection is really valuable; it's much better to find a bug before it ships. Seems obvious, right?
According to one source that I looked at, even low-impact bugs are twice as costly to fix once they're in production, and a critical bug can be as much as a hundred times as costly to fix after it's in production. For Drupal core, the equivalent of "being in production" is probably after it's been released in a tagged version of Drupal.

And then, something you might not have expected: a formal code review with multiple reviewers is, on average, twice as effective as automated testing for defect detection. This is actually amazing: people simply reading the code are twice as good at finding bugs as computers that can run it. Of course, combining both is even better, and I have some statistics to illustrate this that compare the effectiveness of peer code review with different kinds of quality assurance.

First off, a word about unit tests. They can catch anywhere from 15 to 50 percent of bugs within a changeset, or 30% on average. I didn't show the error bars on the slide, but that's because unit tests were the only tool with such unpredictable, inconsistent results, and it varies so widely because the usefulness of unit tests depends on the scope and encapsulation of the changeset. Furthermore, misusing unit tests in place of integration tests can lead to a very high rate of false positives and a lot of technical debt, all of which are distractions.
So use with caution. Static analysis is pretty great and can catch 33 percent of potential defects, more, actually, than typical unit or regression tests. We recently started doing static analysis on all Drupal core issues, and it's been awesome. Integration tests, finally, are the most effective automated tests, catching 35% on average. So as an aside: when in doubt, write an integration test over unit or regression tests.

Now, compare all of the above to the formal peer code review process with multiple reviewers, which will catch 60% of defects. Again, this is peer code review by itself, before it's combined with any of these other tools.

Finally, for context, I've also included a statistic for the effectiveness of large beta tests with over 1,000 users. Those beta tests identify 75% of defects in code on average. Just think of all the time and cost it takes to set up a beta test of that scale, not to mention all the quote-unquote "free" work that you get from your beta test participants as well. A formal code review costs a fraction of that, but it still might find as much as 80% of the same defects, even if for some reason you didn't have automated testing at all, which I certainly hope you would by the time you got to having a thousand-person beta test. But anyway.

So like I said, those statistics are about using each tool by itself, but peer code review is also even more effective when combined with other strategies. If you combine design reviews, automated testing, QA, and peer code review, it can increase the defect detection rate to over 90%. So you should still write integration tests, and still use linting and static analysis tools, and in fact you should have all of that run before your peer reviewers even start looking at the code. That'll reduce the number of defects in the code before it gets to your peer reviewer, which in turn will make them more effective.

So in summary, peer code reviewers are your most cost-effective resource for code quality, so you should optimize for them in your development
process. Well, so how do you do that, you're thinking? How do you optimize for a good review? I'm going to spend the next several sections exploring how you and your team can work together to optimize for code reviewability.

So, the graph on this slide compares the defect density found in a review, which is the y-axis, the vertical axis, with the number of lines of code in the review set, which is the x-axis. Higher defect density, again, is a good thing here; it means that you're catching more of the bugs that already exist in a piece of code early, before the code goes into production. Lower defect density, on the other hand, is bad. Now, these changesets out here with 700 or a thousand lines of code in them? They didn't have fewer bugs. They didn't even have fewer bugs per line. They just completely overwhelmed the reviewers, so that the reviewers couldn't see the bugs in them anymore. The curve has a pattern of exponential decay: reviewers are really good at spotting bugs in small changesets, but the reviewer's effectiveness drops off rapidly as the size of the changeset increases. So why does this happen?
The reason is that reviewing too many lines of code at once can cause both decision fatigue and directed attention fatigue. So the best way to fix it is to reduce the number of lines of code under review: split the changes into smaller, logical pieces. Fewer lines to review means fewer decisions and fewer distractions.

Based on the data, it's best to review up to 200 lines of code at a time. Note that this includes removed lines as well, because you need to read those too, to ensure that the change is correct and that there aren't regressions. So you add the absolute values of the added and removed line counts to get the total. I've got an example, since that probably didn't make much sense: the change here added 113 lines and deleted one, so that's 114 total lines of code under review. A good changeset size.

And in general, you should aim to never review more than 400 lines of code at a time. Changesets that large should be reserved for simple one-to-one replacements or the like, for example replacing deprecated function usages across your code base.

So here's an example of a problematic changeset, also from another core issue. It was a bit much; I felt physically exhausted every time I got more than about halfway through, and sure enough, when I checked, the scope of the changeset was just too darn big.
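Incidentally, if you want Git to do that added-plus-removed arithmetic for you, `--numstat` makes it easy. Here's a minimal self-contained sketch; the throwaway repo and file names exist only so the example runs on its own:

```shell
# Rough sketch: "lines under review" = added lines + removed lines,
# since removals have to be read too. The throwaway repo below is
# only there to make the example self-contained.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'one\ntwo\nthree\n' > example.txt
git add example.txt
git commit -qm 'initial'
# Simulate a change under review: one line modified, one line added.
printf 'one\nTWO\nthree\nfour\n' > example.txt
# --numstat prints "added<TAB>removed<TAB>file"; sum both columns.
git diff --numstat | awk '{ added += $1; removed += $2 }
    END { printf "lines under review: %d\n", added + removed }'
```

In this toy example it prints `lines under review: 3`, because the modified line counts as one addition plus one removal, and the new line adds one more.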
I'm only showing the very bottom of the diff on the slide, but this issue changed 94 files, with 229 insertions and 201 deletions. So that's a total of 430 lines of code under review, over our recommendation, and that's part of why the review was tiring for me.

Another thing that helps is to never spend more than 60 minutes doing code review at a time. Note that 60 minutes is sort of the recommended maximum and won't work for everyone; you might find that 30 minutes at a time works better for you, for example. And between those 30- or 60-minute reviews, you need to take breaks so your brain can recover: deal with distractions that have come up, but also just let your brain rest. Some folks may find that they work better with cycles of 20 minutes of code review followed by a short break, and then, after several of those cycles, one longer break.

So, speaking of distractions in code review, let's talk about handling nitpicks. Nitpicks themselves are trivial, but how you handle them is actually a really important part of your development process that does need to be discussed. To me, reading code style errors is like watching a speaker who has something stuck in their teeth. So imagine, I guess in an unmasked setting you have to use your minds here a little, a fleck of something green stuck right up here. It has no bearing on what's coming out of my mouth, on what I'm saying, but you still might not be able to concentrate on my words if you're distracted by that one little thing. And that's how I feel when I review a changeset that has code or documentation style errors. They are minor, unimportant, and utterly distracting. I can't see your code if your whitespace is messed up.
I don't understand your inline comments if they contain grammatical errors. It is a real problem: code style errors are distractions that increase your directed attention fatigue and make the review harder on the reviewer. Trivial decisions about fixing them also add to decision fatigue for both the reviewer and the author.

But on the other hand, nitpicky reviews are really discouraging to code authors. They spend hours or days fixing something, and then, instead of evaluating the design or appreciating their work, you get lost in the distractions of formatting issues or spelling errors. It can seem oppressive, or like a lack of appreciation for the contributor's work.

So how do we fix the problem? On the one hand, code style errors get in the way of effective code review, but on the other, they are the least important thing about a changeset. The answer is to use automated tooling that runs immediately, every time you submit a code change, before your peer reviewer looks at the issue. Instead of dozens of tiny nitpick decisions, you have one binary decision: did it pass automated coding standards checking?
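As a sketch of what that one binary decision can look like in practice, here's a tiny gate function you could call from a Git pre-commit hook. The `phpcs` invocation in the comment is an assumption; substitute whatever your project actually runs:

```shell
# Sketch of the binary gate a pre-commit hook can apply.
# Pass in your project's real lint command; the phpcs example in this
# comment is an assumption, e.g.:
#   run_lint_gate 'vendor/bin/phpcs --standard=Drupal web/modules/custom'
run_lint_gate() {
    lint_cmd="$1"
    if $lint_cmd; then
        echo "coding standards: pass"
        return 0
    else
        echo "coding standards: FAIL -- fix style errors before review" >&2
        return 1
    fi
}
```

Saved in `.git/hooks/pre-commit` (made executable, with `run_lint_gate "your command" || exit 1` at the end), this blocks commits that would otherwise send style noise to your reviewers.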
Now, the Drupal community has spent nearly a decade incorporating automated coding standards checking into the core process, and also actually making core comply with its own coding standards, which is another problem entirely. It's kind of difficult for me not to be melodramatic about just how much this has made things better. Our community health has improved so much as a result of this, and it makes us so much more focused on what actually matters.

And here are some of the automated tools that you should use: PHPCS, PHPStan, cspell, ESLint, Stylelint. Core has a script that will automatically run a number of linting checks, including these tools, on the code base, both as part of Drupal.org's automated testing and as a pre-commit hook that committers like myself run before we make a commit. You can also configure jobs for these tools using GitHub Actions, GitLab CI, DrupalCI for contrib, or your own continuous integration or QA tooling. I won't go into detail about how to set them up.
There's information linked at the bottom, and you should be able to find the list again on my slides. I strongly encourage everyone to use these tools for both contributed and client projects. In each case, Drupal already provides a full ruleset that you can use to make your project compliant with Drupal's standards, or you have the option of writing your own ruleset if you work for an organization that mandates its own coding standards.

Now, automated tooling can't catch every single kind of nitpick-type problem. I have an example here of a minor documentation fix, where someone should have used the @see PHPDoc tag but didn't. Fortunately, GitLab makes it super easy to deal with this kind of thing with the insert suggestion feature. You just highlight the lines that your nitpick is about on the merge request, click the insert suggestion button, which is the leftmost one in the widget, this little one with a caret and an underscore, a teeny tiny thing that no one knows what it is, and then edit the snippet inline in the box. So you're basically just hand-editing the code for their merge request in a way that they can accept with a single button click. It's great. It saves you the time of having to write out an entire sentence saying "you should use an @see tag here," and it saves them the time of having to make this nitpicky change.
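For reference, the Markdown that the suggestion button drops into your comment looks roughly like this; the `@see` target here is a hypothetical replacement line, and the exact fence syntax can vary by GitLab version:

````markdown
```suggestion:-0+0
 * @see \Drupal\Core\Entity\EntityInterface::label()
```
````

Whatever you put inside the `suggestion` fence replaces the highlighted line when the author clicks "Apply suggestion."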
That's not really that important in the greater scheme of things. So with all that background information, let's talk about how we can optimize our changesets so that peer reviewers can do their best work. Remember, human peer reviewers are the single most efficient thing you have to catch issues early, safely, and cheaply, so you should lean into that.

There's something quite important and magical about your peer reviewers: they're your first insight into what experience someone is going to have maintaining this code years down the line. If the reviewer can't follow your code just by reading it, chances are that's going to happen again in the future when someone else looks at it. So optimize for readability and reviewability. It's essential that someone in the future can understand the code easily, and actually, it might even be you in a few years. You don't want to find yourself saying, "wait, did I write that?" It's much better to say, "ah, thank you, past self, that was very clear." I've had both reactions to my own old code; I'm sure many of you have as well.

So, while smaller changesets are easier to review, it's also definitely possible to take it too far. First and foremost, you shouldn't ever commit something to your main branch that leaves it in an unshippable, incomplete, or broken state. If you were using Drupal about ten years ago and wondered why Drupal 8's release date ended up being years later than was initially promised, it's because of this right here.
We didn't keep the code base in a shippable state, so we ended up with hundreds of critical issues blocking Drupal 8's release. We no longer do that, which is why we have scheduled release dates now, and it's fantastic.

Another mistake is making the diff smaller by separating out test coverage or documentation. This isn't a good idea, because tests and docs are actually an important part of understanding the change; a lot of times I read the test coverage to understand what the issue is even really about. A diff that's missing them isn't optimized for reviewability. It affects the immediate experience for the reviewer, the shippability of your code base, and the usefulness of your Git history. So keep it together.

Finally, scope creep. You're editing a file and you see something else that's wrong just a couple lines above, so you think, why not fix that while I'm here? The problem with this is that it adds distractions for the reviewer and forces them to think outside the context of the primary change. It also creates an ever-expanding set of changes that need to be made in the issue, blurring when it's really done. Reviewers can make this mistake too, by the way, when reading moved lines or context lines in the diff; I've done that myself many times before. So instead of fixing that one little thing near your changes, get in the habit of filing separate follow-up issues for unrelated problems that you spot when you're looking at something.

Sometimes you'll get a reply from a code author that a certain pattern you pushed back on is used elsewhere in the code base, even if you've documented a real technical issue with that approach. Some code authors take their response to your feedback too far and change all the other instances of the same pattern across the code base; it happens so often you'd be amazed. And then others will push back on your pushback, and they'll want to add code that uses a known-bad pattern just for consistency with the rest of the file. Both of those things
are the wrong approach. File a follow-up: write new, good code now, and fix old, bad code later.

You also shouldn't fix out-of-scope things on the same line as the one you are changing. When I was first working on core in 2011, this was a mistake I personally made a lot, thinking, oh well, we're updating this line, we might as well fix this other thing with it. Bad idea. Don't do it. It's still scope creep, and it will still make your peer reviews less effective. And here's why scope is relevant even on a single line.

Some kinds of changes are easily scannable if you use a word diff: replacing all instances of a misspelled word, fixing one coding standard, or replacing a deprecated method with a new one. All of these changes are easily reviewed with git diff --color-words. If you have not used it before, it will change your life, I promise.

So, in the diff on the slide, a change is standardizing how the word "writable" is spelled throughout the code base. Now, let's say that I thought of a better way to word the method docblock that says "whether this is a writable storage." It could be tempting to say, well, I'm already changing this line, so I might as well improve the rest of it. The problem with that is that it completely changes the kind of review that's needed. If the original changeset is only a couple dozen instances of standardizing how "writable" is spelled, that's very easy for me to just scan; it'll only take a couple minutes. However, if we suddenly add a bunch of rewritten documentation, I have to use different parts of my brain. Instead of just checking the spelling, I have to think about the whole sentence: is the new version actually an improvement? Is it grammatically correct? Are there other errors being introduced? What's more, I'll now have to actually read the entire method to determine whether the new documentation is accurate. When we're just checking spelling, I don't actually have to review the rest of the code, right?
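Here's a minimal self-contained sketch of what that scannable word diff looks like; `--word-diff=plain` is the uncolored cousin of `--color-words`, and the throwaway repo is only there so the example runs on its own:

```shell
# Demonstrate a word-level diff on a spelling-only change.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'Whether this is a writeable storage.' > doc.txt
git add doc.txt
git commit -qm 'initial'
# The only change: standardize the spelling of "writable".
echo 'Whether this is a writable storage.' > doc.txt
# --word-diff=plain marks just the changed words, rendering the line as
# "Whether this is a [-writeable-]{+writable+} storage." -- one glance
# per hunk is enough to verify the change.
git diff --word-diff=plain -- doc.txt
```

`--color-words` produces the same output with color instead of the `[-...-]{+...+}` markers, which is what you'd use interactively.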
Suddenly, what was a two-minute review is instead going to take 20 or more, depending on how many lines we decided we wanted to improve. The approach to take instead is to get the simple change in first, the spelling fix, and then have a follow-up for further improvements.

Note that git diff --color-words can also take a regex to specify what should count as a word break for the purposes of the diff. I find that specifying a pattern of a single dot is really useful for replacing things like one function call with another. So, the example on the slide is from a recent change we made to adopt the PHP 8 str_starts_with() function. The original merge request for this lumped together adopting str_starts_with(), str_ends_with(), and str_contains(), and replacing all the various other string functions that PHP has that could possibly be replaced by those. It changed like 300 lines, which meant 600 total added and deleted lines to review, and there were nine or ten different patterns for replacements in there. I spent like two hours trying to review it, and I felt like my brain was melting before I could finish. It was a clear sign to me that it was not well scoped. And so, despite the fact that the people who had created this patch probably just used an automated PhpStorm replacement, you know, it took them like five minutes to make the patch, I ended up splitting the issue up myself with git add -p into five separate issues, so they could be handled in a more manageable way for myself and other reviewers.

So after it was split up, the first step, again just the one function, str_starts_with(), had three things to review: does the original code have a check for whether the strpos() result is zero? Are the comparison operator and the zero removed? And does the updated code have the correct logic?
If originally it was checking that the position was not zero, then the function should be negated; otherwise, if it was checking that the position was zero, then it should be a direct replacement for the strpos() call. So now there were three decisions I had to make about each line, rather than a dozen, and there were only a hundred lines to review, rather than 600. Both of these things improve the reviewer's ability to give a good review, and decrease the time that the changeset will be stuck in the queue fighting merge conflicts. And indeed, the first issue was committed long ago, the second step was also committed long ago, and the third one is in progress now.

So now let's talk about the most important questions to ask during a peer code review. This is probably the main content of the session, if anything is. The first thing you should always do is ask yourself if we should even be making this change in the first place. For Drupal core, there's a corollary to that, which is that maybe this shouldn't be a core change, but should rather be handled in contrib. There can be pressure in open source to merge a change because other contributors have done the work, but unfortunately, there really are changes that make the application worse. One example of this is that we get a lot of requests for specific configurations and new options to be added to the core user interface. Sounds great, right? More features! Except that the more elements are exposed in the user interface, the more the usability and accessibility tend to decline; additional things on the page are also distractions.

The second question is sometimes the only one an inexperienced reviewer might know to ask: does the change completely resolve the issue, without regressions?
But the "without regressions" part is important, and it means that anything that might introduce behavior changes, like a bug fix, should also be manually tested.

Now, the third question is whether the code quality is better after the change than before. Generally, a change that decreases the quality of the code should only be made temporarily, in certain critical situations. For example, when we do a security release, we'll often add duplicated code instead of a new API, in order to reduce as much as possible the risk of any theoretical custom code breaking downstream due to naming or subclassing issues. After the security release goes out, we file a follow-up issue to add the proper new API in a minor release. So there's a temporary drop in code quality, but then an increase in the next minor. Note also that this says "better" and not "perfect": if something seems like it's going to get stuck in analysis paralysis, refocus and ask if the proposed change is at least better than the current state or not.

Fourth question: does the changeset meet standards for usability, accessibility, testing, and documentation? These four categories also correspond to what we call the Drupal core gates. For each of these categories, Drupal core has minimum requirements that must be met: specific accessibility testing that should be done, specific test coverage and manual testing that must be included, types of documentation that have to be added, and so on.

And finally: is it the best fix we can come up with right now? This can be a harder question to answer, especially for someone who's newer to the application that's under review, but having multiple peer reviewers really increases your chances of finding optimizations and writing cleaner code.
I always really value working with peer reviewers who identify unnecessary API surface, especially because a smaller API surface makes the code cleaner and easier to understand, reduces the chance of regressions being introduced later, and reduces the maintenance burden. So that's just one example of the kind of thing that falls under that last question.

Now I have some final tips on how to do a peer code review. Most of the rest of my presentation is on the internet somewhere, either as someone else's work or mine, but these are things I haven't talked about before that are important.

First bit of advice: pay attention to the bigger picture of what's being changed. To really understand a change, you need to understand the API it affects: read the whole method, look up the callers, trace the call stack in your IDE or the API documentation, or even step through it in the debugger, until you understand the bigger picture of what's going on. If you only review the code in its own context, you might miss something important.

Recommendation number two: grep, or searching the code base. After Drupal.org and git itself, grep is the tool I use the most in my code reviews, by far. I grep for a huge range of things, probably something in at least two-thirds of the issues that I review for core. If I don't understand something, or it looks funky, I grep. I check for other places in core where the same pattern is used, or isn't. If I think a patch is fixing only one instance of something and I suspect the same bug might be present in other APIs as well, I grep. If I want to compare what the author is doing to other uses of the same API, I grep. Be curious, and think not only about the code in front of you, but about the code base as a whole.

And for my final recommendation: be curious about why something is wrong in the first place. So many people go straight to fixing a bug without stopping to ask, but how did this break? If there's dead code, unused variables, incorrect documentation, strange bugs.
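To make that grep recommendation concrete, here is a minimal sketch. The directory layout and the format_date() pattern are invented for illustration; they are not real Drupal core code, just a stand-in for the "is this bug also somewhere the patch doesn't touch?" question.

```shell
# Create a tiny throwaway "code base" with the same pattern in two modules.
rm -rf /tmp/grep_demo
mkdir -p /tmp/grep_demo/module_a /tmp/grep_demo/module_b
echo 'format_date($timestamp);' > /tmp/grep_demo/module_a/a.php
echo 'format_date($timestamp);' > /tmp/grep_demo/module_b/b.php

# "Where else is this pattern used?" - every call site, with line numbers.
grep -rn 'format_date(' /tmp/grep_demo

# "The patch only fixes module_a - does the same bug exist elsewhere?"
# List matching files, then filter out the one the patch already covers.
grep -rl 'format_date(' /tmp/grep_demo | grep -v module_a
```

The same two-step pattern works on a real checkout: grep for the construct the patch changes, then subtract the files the patch touches to find what it missed.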
It really is worth looking into how it got to be that way in the first place. For example, if you delete dead code without understanding how it became dead code, you might be removing the last evidence of an unreported regression. If you don't understand the history and the context of an issue, you can inadvertently make it worse.

The best tool for learning the history of a line is git log with a capital -L. A lot of people know about git blame, but more often than not, git blame will not give you the information you need. A few months back, someone came to me asking me to clarify a TODO comment in code related to an issue I had never even seen before, because they saw with git blame that I was the last one who had made a commit that changed that line. In reality, all I did was commit a patch that did a bulk update of all the Drupal.org URLs in the core code base so that they wouldn't cause redirects. That was the only thing I committed. If the contributor had used git log -L instead, it would have been clearer, and I have an example of git log -L for that exact change on the slide. Now, you can't always stop there, because git isn't great at tracking lines when they're moved between files. So sometimes you might have to check out the commit right before the last one in that list, find out where the lines were removed from, and then repeat the same git log -L process. But eventually you will get to the relevant commit, you can look up the issue that led to that commit, and you can usually find your answer.

So as a final summary, I've tried to reduce the most important considerations for code review into six points that fit on one easily iPhone-photographable slide. You're welcome. First and foremost, foster a supportive review culture. Manage nitpicks with tooling like code style testing, static analysis, and merge request suggestions. Practice good scoping, and optimize for reviewability, which is a very difficult word to say that I
have said wrong a lot of times. Decide as early as possible whether a change should even be made, and ensure that all your changes either maintain or improve the code base's code quality, usability, accessibility, test coverage, and documentation. And finally, ask yourself why issues exist to begin with, look at the big picture, and consider better alternatives for the solution.

So that's all the structured content I have. Thanks, everyone, for listening. Here's a list of five references that I recommend anyone interested in this topic read. The first two are the secondary sources I got most of the research that I presented from; you go to the website, they summarize it, and they have references at the bottom for the primary sources. The third is a resource from Google on their engineering best practices for peer code review. I found it was consistent with a lot of Drupal values; I was nodding my head a lot, and there were things in there that I had never seen written down in our documentation, where it was like, oh yes, I do that too. So it's definitely worth a read. The fourth item up there is our handbook page on core's automated development checks, which also links references for all the individual tools, like PHPCS and ESLint, so you can figure out how to set those up for your Drupal projects. And the final link is drupal.org/core/scope, the core issue scope policy. Anyone who knows me knows I love this document. It's very detailed, and it's going to seem totally ridiculous the first time you look at it, but it has actually really improved our core development process compared to how things were 10 years ago. It includes examples of good and bad issue scope, and it should be easier to understand after you've seen this session. Thank you.

So we have... yeah, here we go. Let's keep that up on the screen, and we have a good eight minutes for Q&A. Seven minutes, sorry. Any questions?
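For readers of the transcript: the git log -L workflow from the summary can be sketched in a throwaway repository. The file name and commit messages below are invented for the demo; the point is that git blame names only the newest commit touching a line, while git log -L walks the line's whole history.

```shell
# Build a two-commit history in a scratch repository.
rm -rf /tmp/gitlog_demo
mkdir -p /tmp/gitlog_demo && cd /tmp/gitlog_demo
git init -q
printf 'first\nsecond\nthird\n' > example.txt
git add example.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'Add example.txt'
printf 'first\nSECOND\nthird\n' > example.txt
git -c user.name=demo -c user.email=demo@example.com commit -qam 'Change line 2'

# git blame reports only the most recent commit that touched line 2:
git blame -L2,2 example.txt

# git log -L traces the full history of that line, diff by diff,
# including the commit that originally introduced it:
git log --oneline -L2,2:example.txt
```

When a line was moved between files, rerun git log -L from the commit just before the earliest one listed, in the file the line came from, as described above.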
They are posted. They're on the session page and Twitter, but you might also want to take a photo; people's workflows differ. Sorry, repeating the question for the recording: the question was, am I going to post the slides? Yes, absolutely.

It's very useful to... so actually, the Google document, the third resource on the other slide (I'll go back down), that how-to-do-a-code-review document: Google has standards like prefixing all nitpicks with "nit:", and so on and so forth. The practice I used to use was that I would rearrange my Dreditor review, back in the day, so that you could see which things were nitpicks; the important architectural questions were up at the top, and the questions that I didn't know the answer to and wanted their feedback on were at the bottom. Unfortunately, GitLab makes that sort of process more difficult, because you're not composing one long thing and then posting it to the issue; you start to read and start to comment on lines in the diff. There's an obscure setting to try to post it as a batch review, but I've found it very buggy and unstable, and it's lost entire reviews before. So that's something to consider, especially while Drupal.org is in this transitional state where we have our Drupal issues, but then we have GitLab as well. What I do is post a comment on the issue saying that I'm starting a review, for my feedback sandwich, and then I try to indicate in each bullet which category of feedback it is, and then summarize again in a separate comment on the issue. It's a bit in flux right now; it's a little messy, and I haven't quite found the right approach. But it's definitely a good idea to make it very clear, because it's so easy, if you're just going along, to give people your feedback and ask, like, "I have a question: is it like this?" And then they think that your question is
blocking, or, you know. So it's definitely a good idea. Plus one. Whenever possible, and the longer the review is and the more comments there are, the more important that structure becomes.

Anyone else have questions? Yes, sir, in the back.

So, what do you recommend when... right, okay. So when there are too many changed files, you have a couple of different options. You can obviously add to your .gitignore; you can try to look at it locally, using git locally. I guess it depends: are you reviewing someone who is only doing a vendor update, or are they making changes to code as well? Those are two different kinds of things. Now, we do large vendor updates for core all the time. Our npm dependencies are actually packaged and shipped as files in Drupal core, so we have to do these kinds of reviews as part of our process, where there are large dependency updates, and we actually have a whole process for it. First, we just audit what's being updated by Yarn. We'll look and see: okay, it's fixing X, Y, and Z security vulnerabilities.
We'll glance and see whether anything is related to anything that could be relevant to us. Then, we have some assets in core that are built assets, built by the npm dependencies, and if those built assets are changed as a result of the change to the npm build, that means we need to look at those assets and make sure, first of all, that a security issue isn't being introduced, or that there isn't some sort of API change affecting us that we might need to account for. And so we actually have a sort of complicated process right now where we unminify the built assets, do a diff of them, and reminify them; that's the sixth step of it.

But depending on what you're reviewing, it can be helpful just to do the vendor updates as a separate commit, or to use the command line; it has infinite options for excluding directories or looking at only one file. I switch back and forth all the time between the graphical user interface that GitLab or GitHub provides and the command line, for that reason, because there are just so many options for how to review. So many commands: if you want to review the changes to only one file, you can use your normal git diff between whatever and whatever, then a double dash, and then the full path to the file, and it will show you the changes to that file only. You can use strategies like that for anything related to a vendor update. I'm not sure if that's what you were getting at; did that answer cover what you were asking about, or did you have a different kind of large change? What's that? We have a few more minutes left. I'll give one last chance for questions.

Question? Okay. All right. Well, thank you, everyone, for coming.
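For transcript readers, here is a sketch of the single-file diff trick from that last answer. The repository layout is invented; the ":(exclude)" pathspec is a standard git feature for skipping a directory such as vendor.

```shell
# Scratch repo with an application file and a vendored file, changed together.
rm -rf /tmp/gitdiff_demo
mkdir -p /tmp/gitdiff_demo/vendor && cd /tmp/gitdiff_demo
git init -q
echo 'app v1' > app.php
echo 'lib v1' > vendor/lib.php
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm 'Version 1'
echo 'app v2' > app.php
echo 'lib v2' > vendor/lib.php
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm 'Version 2'

# Double dash, then the full path: show the changes to one file only.
git diff HEAD~1 HEAD -- app.php

# Or review everything except the vendor directory.
git diff HEAD~1 HEAD -- . ':(exclude)vendor'
```

On a real merge request you would use the two branch names instead of HEAD~1 and HEAD; the pathspec arguments after the double dash work the same way.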