Thank you. Hi, everyone. Can you hear me okay? Lovely. Hi, I'm Nina Zakharenko, and I've been writing code professionally for over a decade now. I've worked at some companies that you might have heard of, like HBO, Meetup, and Reddit. These days I work at Microsoft as a cloud developer advocate, and what that means is my goal is to make Azure easier to use for all Python developers. Now, that up there, that's our mascot, Bit. So if you think he's pretty cute, come find me after the talk; I have some stickers to give out. And today we're going to talk about code review skills for Pythonistas. These slides are available online if you want to follow along or share them with your co-workers; the link is just right up there. Now, show of hands, how many of you have been part of a code review? Okay, almost everyone. Now another show of hands, how many of you have only had positive experiences with code review? Oh my God, one guy. We're going to talk after this talk; I really want to know about your experience. Have you been coding for one day? Okay, I'm sorry, I'm sorry. So after this talk, you're going to learn some new skills to change that, and we're all going to have a lot more positive experiences doing code review. Now, I want to try something kind of new with this talk. If you're learning something new, if you're excited about something that you've heard today, share a tweet using the hashtag EuroPython. And you can at-mention me; my Twitter username is NNJA, that's like ninja but without the I. I'm actually going to experiment with that right now: I'm going to live-tweet a picture of all of you. So thank you for coming. Now, what are we going to talk about today? I'm going to share with you the proven benefits of code review. They're not just anecdotal; they're based on research and case studies. We're going to talk about setting standards.
We're going to go over tools for automation that are going to make reviews better and easier for Python developers because everybody likes it when their job is easier. And I'm going to give you specific examples of how to review code helpfully, how to be an effective reviewer, an effective submitter, and how we're going to use code review to build a stronger team. Now, what are you specifically going to walk away with from this talk? Well, if you're totally nervous, I'm going to give you a comprehensive overview of code review best practices. If you're intermediate, we're going to talk about tooling automation. And if you're a total pro, we're going to talk about the hardest part. And that's the people factor. Now, this talk is not one size fits all. I'm offering you suggestions and you're going to need to adjust them based on your own work situation. And there are factors like team size. So taking the time to review for a team of two is going to be harder than for a team of 10. The product type makes a difference. So if you're in an agency, you might have tighter deadlines, more incentive to just try to push code out the door. If you're working in open source, the rules are just kind of different, you know, because a paycheck offers an incentive that's pretty hard to compete with. And when you're coworkers, you have a lot more motivation to try to collaborate and work well together. And barriers in open source can happen because of things like the committers speak different languages. Now, the last thing I want to talk to you about in terms of factors for consideration is defect tolerance. So what is a defect tolerance? It's the rate of failure that's acceptable in your software. If you're dealing with money or aircraft navigation systems, medical equipment, you should have a much lower defect tolerance than something like a mobile game or a blog. Because I assume that none of you would want someone getting stuck in an MRI machine because of your code. No. 
It's happened. Yes. So why do we even bother with code reviews? Because under the surface, they can appear really frustrating. I mean, just look at this developer. She is so angry at the idea of code reviews that she is eating her laptop. She's so mad. And we don't want that to be us. So some of these apparent code review frustrations: people think it adds a time demand, and that can be especially noticeable on a smaller team. It adds process, and everybody hates process, right? Reviews can bring up egos, team tensions, and personality incompatibilities. And sometimes we run into that one smart dev who thinks that their code is just too good to be reviewed. So how do you change these attitudes? You have to show the benefit. In the short term, frustrations are inevitable. But like with all things, with practice, you'll have much better velocity over time. And you want to find bugs and design flaws; you want to identify them before the code is complete. Multiple case studies from IBM, AT&T, and Microsoft have all shown that code review can lower the bug rate by about 80%, and it can increase productivity by about 15%. And at the end of the day, the goal is to find bugs before your customers do, right? You don't want to end up with egg on your face. And reviews help us feel a sense of shared ownership and knowledge. We're in this together. No developer is the only expert. You gain familiarity with different modules through the process of doing reviews. And we want to decrease the lottery factor. What exactly is the lottery factor? It's a measurement of how much concentrated, specialized knowledge belongs to just one individual team member. So in this example, when the New York City subway vending machines go down, there's only one person who knows how to reboot the system. And his name is Miguel. And he shuts off his phone on his way home. That's bad enough, right?
But what if Miguel wins the lottery tomorrow and decides that he never wants to work another day in his life? That's a huge problem. So you want to think about: would one person sabotage the whole project if they left tomorrow? So to quickly review code review benefits: we can find bugs before our customers do, we have a sense of shared ownership and shared knowledge, and we reduce that lottery factor. How do we do that? With consistent code. You have to remember that the code is not yours; it belongs to your company. Your code should fit your company's expectations and your company's style, not your own. And reviews need to encourage consistency for code longevity, because let's be real, nobody stays at a company forever. The person laughing, have you stayed at your company forever? That's why you're laughing, right? No? Okay. And code reviews need to be universal and follow guidelines. It doesn't matter how senior or junior you are; if only senior devs review, that's a huge bottleneck. Everyone on the team should do it. I worked with a developer in a Java shop who insisted on formatting his code C++ style instead of Java style. There's just a little bit of difference in where the opening brace for a function goes: Java, same line; C++, a new line. And this just made for complete nightmare diffs, and lots of frustration among the team members maintaining his code. And when I protested and said, we can't do this, I was told, well, that's okay, he's allowed to do that. He's Bob. He's the most senior engineer. And unfortunately, this is a true story. I see a few smiles of solidarity. You need to remember that inequality breeds dissatisfaction. Nobody is special. Now, let's jump right in. How do we do that? We need a style guide. Style guides separate personal taste from agreed standards. And many of you are thinking, well, okay, isn't that what PEP 8 is for in Python? No, not really. PEP 8 only really scratches the surface; it offers suggestions.
But there are lots of other great style guides to choose from. Google has its pyguide. Plone has a great style guide. And it doesn't matter which one you pick. Just agree upon it beforehand. Pick it, stick with it. Because great code bases are going to look like they were written by an individual, even though they were actually written by a team. Python has useful tools that help in this process, called formatters. autopep8 is the least strict; it just formats code to adhere to PEP 8. Black is the uncompromising Python formatter. It has almost no configuration options, so there is no room for disagreement: you pick a line length and it does everything else for you. And then YAPF is very configurable; it even allows you to specify a style guide to follow. So choose what works best for you. There's a really cool Black demo written by Jose Padilla, and you can check it out at black.now.sh. After my talk, of course, for those of you who have your laptops out, try out Black on some of your code by pasting it in. Now, VS Code also has amazing support for Black. If you don't know what VS Code is, it's a free, open source IDE with amazing Python support via an installable extension. It's cross-platform: it runs on Linux, Mac, and Windows. All you have to do to get this working is pip install black and update a few settings, and then you can have VS Code use Black to automatically format your code on save, which is pretty cool. Now, Bob, maybe he wasn't in the wrong. Maybe C++ style formatting is in fact better. But the problem was that he wasn't following convention, and consistent code is easier to maintain by a team. Now, one more thing. You need to remember that code review should be done by your peers and not by management. The end goal of this is not to get someone in trouble. Any bugs that are found during code review should never, ever come up in performance reviews. That's why we do code reviews in the first place. And you don't want to point fingers.
You want to maintain a no-blame culture, because failure is inevitable. And when teams review code, the team becomes responsible for that code quality, and not just one individual. This idea needs management support, because if you get in trouble for sharing your mistakes, you'll brush them under the rug next time and you won't be able to learn from them. Support your teammates. It's not a competition. And when code reviews become a positive process, developers are going to expect their changes to be reviewed. They're going to want it. They're going to look forward to it. They're going to be excited by it, and not just accept that it's part of a process they have to go through. So, some code review fundamentals: it should be done by your peers, you need a style guide and preferably a formatter for consistency, and you need a no-blame culture. Now, how should we review code? There are two sides to this coin: being a good submitter and being a good reviewer. I'm going to talk about how to be a good submitter first. This little comic summarizes a lot about code reviews. The one on the left is saying, well, no need to double-check this change list; if some problems remain, the reviewer is going to catch them. And the one on the right: no need to look at this change list too closely; I'm sure the author knows what he's doing. You've all kind of been in this position, on one side of the story or the other. You need to be careful not to get rubber-stamped. So what is rubber stamping? It's when a submitted solution is so complex that the reviewer thinks it's just totally obvious that the author knows what they're doing, and they just rubber-stamp and approve the code without bothering to fully understand it. Don't be too clever, because submitting overly complicated code is a surefire way of getting rubber-stamped. I have a developer friend who used to love showing off how smart he was by checking in all these complex, over-engineered solutions.
And he stopped doing it when he realized that he was punishing himself: it meant that he always ended up being the maintainer of his code. Yeah. So remember that readability counts. And I think Russ Olsen said it best: good code is like a good joke, it needs no explanation. If you feel like a piece of code is confusing, it probably is. Leave a comment, either in code or in your review tool, but better yet, refactor it so that it's more readable. I find this process a little bit easier to think about in stages, ranging from before I even submit the pull request to after the review has been completed. So let's go through those. At stage zero, before we even do the submission, what kinds of things do we need to do or think about before starting that review? The most important thing is providing the context. You need to help the reviewer. Why did you write this code? What was your motivation? Link to the underlying ticket or issue or bug report in whatever tool you use. If there's not enough context in the ticket, provide extra context. For larger PRs, consider providing a detailed change log. And remember to point out any unintended side effects of your code. Now that you've provided the context, remind yourself that you, all of you, are the primary reviewer. So review your own submitted code as if you were giving a review. This is going to let you anticipate any problem areas. For some reason, when I look at it on GitHub, I just start catching new things that I don't see in the context of my editor. I don't know if that's a thing that everybody does. But yeah, thumbs up. Okay. Was that a middle finger? Sorry. And as the primary reviewer, it's your responsibility to make sure that your code works and is thoroughly tested. Don't rely on others to catch mistakes. Always QA your own stuff. And yeah, you don't want to be known as that person. One way you can be a better submitter is by trying a checklist before you submit code. What makes a good checklist?
You want to put the small stuff on there. Did you check for any reusable code or utility methods? Did you remove your debugger statements? Are there clear commit messages? And you want to use it to check for the big stuff too. Is your code secure? Is it going to scale? Is it maintainable? And is it resilient against outages? For this process, I highly recommend a book called The Checklist Manifesto. It's a book about how checklists help doctors make surgery safer, and how checklists help pilots deal with disaster better. And guess what? Checklists can help developers too. Great. So now we're at stage one. We've submitted the review. Remember that you're starting a conversation. Don't get too attached to your code before the review even starts. You need to anticipate that there are going to be comments and feedback, and you need to acknowledge that you might make a mistake or two here and there. Because the entire point of a review is to find problems, and problems will be found. So don't be caught by surprise comments. The next stage is optional: the work-in-progress PR. I personally like this system a lot. As a rule of thumb, you submit them when your code is 30 to 50% done. It's a great idea for bigger projects and more complicated tasks. For you perfectionists, this is a hard one. It's going to drive you crazy. But don't be afraid of showing incomplete and incremental work. I know it's hard to let go and not want to make everything beautiful and perfect, but that's not what a work-in-progress pull request is about. Because when your code is a work in progress, you can get feedback like architectural issues, problems with your overall design, and design pattern suggestions before you approach being done. It's going to save you a lot of time. You don't have to rewrite a finished product; you can make modifications while you still have a chance. At stage three, we're almost done. We can see the finish line.
At this point, the kind of feedback that we'd prefer to expect is stuff like nitpicks, variable names, requests for more documentation or comments, and any small optimizations. Because as the code evolves, it should become more firm. No one wants to hear "change it all" at this phase. It's unproductive. And if you do hear that, it means that somewhere you've had a breakdown of communication. So this process prevents wasting time and effort on bigger, more complex pull requests. Now, you want to have one review per PR. At this point, you want to ask yourself, did I also end up solving an unrelated problem? If the answer to "did I only solve one issue with my pull request?" is no, you want to break up your code into multiple reviews. And you can even branch your code off a feature branch; that will help keep the diff small while you have work in progress. And we want to keep reviews really small to prevent reviewer burnout. Case studies have shown that reviews become less effective when a reviewer looks at more than 500 lines of code in a session. Keep it small and relevant. If a big review is unavoidable, make sure that you give the reviewer some extra time. And we can use automated tools and static analysis to really streamline this process. First, a linter. Hopefully all of you have linting set up; if you don't, leave my talk right now. This might be the most important thing I have to say today. Go set it up. So what is code linting? It's an automated way of checking syntax. Or, if you want to get fancy, you can set it up to check for style guide violations. Here's an example of a Python linter. A linter can integrate into your code editor, and this way the reviewer doesn't have to waste any time pointing out syntax problems. My favorite, pylint, is a great linting option for Python. It has lots of configuration options and integrations: coding standards, error detection, refactoring help, IDE and editor integration.
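To make that concrete, here's a minimal sketch of the kind of subtle bug a linter can flag automatically (the variable names here are invented for illustration):

```python
# A stray trailing comma after an assignment silently creates a tuple.
# This often happens when cutting and pasting arguments during a refactor.
timeout = 30,   # oops: this is the tuple (30,), not the int 30
retries = 3     # this is a plain int, as intended

print(type(timeout))  # <class 'tuple'>
print(type(retries))  # <class 'int'>
```

Code like this is perfectly valid Python, so it won't raise an error on its own; it just quietly misbehaves later, which is exactly the kind of thing static analysis is good at catching.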
I'm going to show you one of my favorite rules, because you want to take the time to learn your linter and its arguments. For me, I don't know if any of you do this, but this is a really common gotcha: refactoring parameter arguments. I cut and paste them out, and then I end up with this trailing comma. It's really hard to notice, right? There's a trailing comma after the assignment to bar. And when you do this, the type ends up being a tuple, and you end up with all sorts of vague errors and test failures. It's kind of non-obvious, and tracking down this sort of bug has messed up my day multiple times. So, pylint to the rescue: this rule has been available since version 1.7, and it's called trailing-comma-tuple. You can use vulture to find dead or unreachable code in Python. It uses static code analysis. This doesn't work very well when the code is called via introspection, and because Python is dynamic, vulture can make mistakes, so it's good practice to double-check the results. But it really helps keep a code base clean. In the sample code, I have three methods: foo, bar, and baz. We call foo and bar, but not baz. And when we run vulture on this file, it tells us with 60% confidence that the function baz is unused. Now, Git pre-commit hooks. They allow you to short-circuit a commit and make checks before the code even reaches your repository. That lets you do things like run a linter, check syntax, check for to-dos or debugger statements or unused imports, enforce styling with your tool of choice (autopep8, the Black formatter, sorting imports, et cetera), and then reject or accept the commit depending on whether your conditions pass. That sounds great, right? It also sounds like a lot of work, and we developers are lazy. Thankfully, someone has already solved this problem in a really nice package. If you don't want to write your own pre-commit hooks, check out pre-commit.com. It's an amazing library with a lot of resources.
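As a rough sketch of what configuring it looks like (the `rev` versions here are illustrative and may be out of date; the hook ids come from the pre-commit-hooks and Black repositories), a minimal `.pre-commit-config.yaml` might look like this:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0          # illustrative version; pin whatever is current
    hooks:
      - id: trailing-whitespace    # trim trailing whitespace on commit
      - id: check-merge-conflict   # catch leftover conflict markers
      - id: check-json             # verify JSON files parse
      - id: debug-statements       # catch forgotten breakpoints/pdb calls
  - repo: https://github.com/psf/black
    rev: 24.3.0          # illustrative version
    hooks:
      - id: black                  # auto-format staged Python files
```

With that file in your repository root, `pip install pre-commit` followed by `pre-commit install` registers the Git hook, and every commit runs the checks first.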
It gives really nice, well-formatted command line output. And as a bonus, it can test your hooks without actually trying to commit, which becomes really tedious when you're writing your own pre-commit hooks from scratch without this framework in place. And it's got lots of support for nice hooks in Python: an autopep8 wrapper, flake8 and pyflakes to lint your source, checking the AST to see whether a file is valid Python or not, checking for debugger statements, and, for Python 3.7, breakpoint calls. And there are tons of other hooks that are not Python specific, like trimming trailing whitespace on commit, checking for files that have merge conflict strings in them, verifying JSON, et cetera. Tons of time-saving features. Now, tests. There are probably years of pre-recorded talks about Python tests, so I will touch on this very, very briefly: write them. Yes, please write them. Tests really need to be passing for somebody new to your code base to know if they can meaningfully contribute. They let you identify problems immediately. You need to know the status of your overall code health, and you can't know that if you have failing tests. And nobody wants to work with the scumbag programmer who commits untested code. He's a bad guy. Now, continuous integration. What is it? It's an automated build with every push. You can use it to run your linter and run your tests, and you can set it up so that it happens a few times a day or when a new pull request is opened. There are lots of tools available for this: Travis, CircleCI. CPython uses VSTS, that's Visual Studio Team Services, which supports multiple platforms: Mac, Linux, Windows. And many, if not all, of these products are free for small teams or for open source projects. And all of them integrate with your GitHub pull requests. So it doesn't matter which one you pick; find the one you like.
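To anchor the "write them" advice, here's a minimal sketch of what a test looks like (the function and file name are invented for illustration; with pytest installed, you'd run this with `pytest test_slugify.py`, and it's exactly what your CI would run on every push):

```python
# test_slugify.py -- a tiny, self-contained example of a unit test.

def slugify(title: str) -> str:
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Code Review Skills") == "code-review-skills"

def test_slugify_handles_extra_spaces():
    assert slugify("  Hello   World ") == "hello-world"
```

Small, fast tests like these are what let a reviewer trust that a green build means the change is safe to look at.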
Coverage is the percent of code that's executed when a test suite runs. It gauges how effective your tests actually are. And coverage.py is an especially great tool; it can generate nice HTML reports. And remember we talked about defect tolerance earlier? If you have a low defect tolerance, for example, if your code fails, is someone going to get stuck in a spaceship? Well, then your coverage should probably be at around 100%. Coverage tools integrate into GitHub too. For Python there's coverage.py, there's coveralls.io, and lots of different products that work with different languages. Automation saves everybody time. Now, we're at stage four, the last stage: the reviewer has finished looking at your code. You need to remember to still be responsive. If the reviewer left comments, reply to each one. It doesn't have to be an essay: thumbs up, thumbs down, resolved, won't fix, whatever. If you're not going to fix something, make sure that you explain why and that you've come to a mutual understanding. Don't just ignore the reviewer's comments. That's not nice. And remember, at this point it's still a conversation. After the first pass, you want to let the reviewer know when your code is ready to be reviewed again. Lastly, don't bikeshed. What is bikeshedding? It's arguing over minor and marginal issues while more serious ones are being overlooked. It's people arguing about what color to paint the bike shed before the house is even done. There's a great website, by the way: bikeshed.com. If you've gone back and forth more than three times, just step away from the keyboard and talk. Use your words. If you're co-located, that's great: stop by, have a conversation. If you're remote, hop on a video call. But importantly, record the results of the conversation in the pull request so that you maintain the context of what was talked about. If you're not co-located, there's an amazing feature in VS Code called Live Share.
It's a tool that allows real-time sharing and collaboration between two VS Code instances. It lets you keep your own editor, your fonts, your themes, your keyboard shortcuts, because everybody knows that that's important, and you don't need to learn anything new. You can just edit collaboratively and navigate independently. It's really an amazing feature for remote teams; it takes a ton of pain out of the process, and I have a link in my slides to download the extension. Fight for what you believe, but remember to gracefully accept defeat. If you disagree with the reviewer's comments, don't just go radio silent. Carefully explain what the reviewer might have missed. Try to open a friendly discussion until you understand why the reviewer left the comment; maybe the reviewer missed your thought process, so try to clarify. Say back to the other person what you think they're trying to say, in your own words. Sometimes arguments happen because people think that you don't understand what they're saying. But at the end of the day, maybe you're just wrong. So learn to gracefully accept defeat. It's okay; we're all wrong sometimes. And don't take that feedback personally. Use it as an opportunity for growth. Admitting that you don't know something is really hard, and it's a great way to feed that imposter syndrome. But don't take it personally. You are not your code. We're all working towards the same goal: to ship bug-free code. And be grateful. Offer thanks that someone spent the time to review your code. So how do we be a great submitter? We provide the why, the context. We review our own code first. We expect a conversation. We submit work in progress. We use automated tools to help us out. We're responsive. But when necessary, we also accept defeat. Now, we spent a lot of time talking about how to be a great submitter. Let's cover the other side of that story. I love this comic.
The therapist asks, why do you think you're so hostile in code reviews? And the dev laments, if only I had been more popular in high school. Code reviews should not look like an appointment with your therapist. You need to approach them objectively and without ego. You need to leave emotions behind and have empathy towards others; there is no room for hostility here. And most importantly, you need to have empathy towards yourself. Maybe check in before you start. Are you hungry, angry, tired? Are you dehydrated? Do you need a coffee or a water, or maybe a walk? Remember that all of these feelings can affect the review process. So take care of yourself before you take care of others. And our language really affects the process, so be objective. You can say something like, "this method is missing a docstring" instead of "you forgot to write a docstring." Reviews are a learning opportunity, not a chance to catch someone being wrong, and this type of phrasing really helps. Try to ask questions instead of giving answers: would it make more sense if we did it this way, or did you think about doing it this other way? Try to offer suggestions, things like "it might be easier to do this" or "we tend to do it this way." Suggestions are always better than ultimatums. And avoid these terms: simply, easily, just, obviously. They're really condescending, right? If it was so obvious, the submitter would not have done it in the first place. They might be missing context or be unaware of a concept. And "well, actually" is another one to avoid. You say that one when someone says something mostly correct, but you interrupt them to make a minor correction. It's not worth it. This is an amazing rule from the Recurse Center, and I highly recommend reading the Recurse Center social rules. They're amazing. Okay. So I'm not sure if any of you practice yoga, but this happens in my class all the time. The teacher just twists herself into a pretzel somehow.
And then she tells the students, now simply touch your feet behind your head. I don't think any of you would consider this simple. I certainly don't. So try to remember this little guy when one of those words pops out of your keyboard or your mouth. These rules apply to written and spoken conversation alike. To be effective, you really need to give clear feedback. Strongly support your opinions for maximum impact. Share the how and the why: how you would implement something, why you think the change is necessary. Link to docs, blog posts, and any other resources that might back up your opinions. And don't feign surprise if somebody doesn't know something, even if you consider it to be a basic concept, like "I can't believe Dave doesn't know about the singleton design pattern." Cut down on the snark and innuendo. This isn't the time or the place; reviews bring up a lot of emotions already, so don't do that. Stay away from critical emoji-only feedback. Don't leave a sad face or a thumbs down with no comment. That sucks, and it's not clear feedback. And remember to compliment good work and great ideas. I like to leave a thumbs up when I see good refactoring, some elegant, clean code, or anything that catches my eye. Reviews shouldn't be all about the bad. And for large reviews, when you're leaving a lot of comments, maybe think about leaving at least one compliment. And don't be a perfectionist. This tweet says it a lot better than I can: the goal is to write better code, not exactly the code that you would have written. For those big issues, you don't want to let perfect get in the way of perfectly acceptable. Try to prioritize what's important to you; usually 90% of the way there is good enough. When you press for complete perfectionism, you take ownership away from the person who wrote the code. It takes away their feelings of accomplishment and creativity. So think about how those last comments might affect them.
But really, it's okay to nitpick syntax issues, spelling errors, poor variable names, and missing corner cases. What's the harm of letting a few of these pass by? Well, there's kind of a broken window theory of code: if I see sloppy code, I assume that it's okay to check in sloppy code too. Save those nitpicks for last. After you're done addressing all the big stuff, maybe bring a few of these up. But be sure to specify whether your nitpicks are blocking the merge or not. That's important. Is this something you'd like to see fixed, or is it something that must be fixed? And as a pull request submitter, you can think of nitpicks as a compliment. I mean, if the rest of your code is so wonderful and well written that the small stuff sticks out like this, you did a great job. As a reviewer, you want to avoid getting burned out. Studies show that you should only be looking at about 200 to 400 lines of code at a time for maximum impact. In practice, reviewing between 200 and 400 lines over 60 to 90 minutes lets you catch about 70 to 90% of the bugs. So if 10 bugs existed in the code, a properly conducted review would catch between 7 and 9 of them. And the studies also show that above 500 lines, the ability to find bugs drops dramatically. If the code stops making sense or you're too tired, you might miss something. Take a break. A good rule of thumb when you work at a company is to try to complete reviews in 24 to 48 hours. This is especially easy when reviews are small, you know, 500 lines of code or so. It lets you look at reviews incrementally and not let them build up. It also means that the code is fresh and the submitter is primed for questions. This rule is flexible for open source projects, but try to respond at least within 48 to 72 hours, just to keep up excitement and momentum, even if you won't have time to do a full review. Nobody wants to end up like this guy: still waiting and waiting and waiting. So how do we be a great reviewer? We have empathy. We watch our language.
We give clear feedback. We give compliments for good work. And we're not perfectionists. Avoid that burnout, and complete those reviews in 24 to 48 hours. Don't leave your teammates hanging. Yeah, you can see all the grains of sand. Success. So, code reviews sound like hard work, and they are. But now you know that you can really reap the rewards, and you can use that as an advantage when somebody new joins your team, because code reviews can really build a stronger team. First-day vibes: no matter how much experience you have, whether you've been doing this for a day, like the guy who's never had a bad code review, or for 20 years, your first-day vibes are always "I have no idea what I'm doing." So newbies, that new person joining your team, they might not have experience being reviewed, and they might not have experience giving reviews. So try to remember what it felt like when you yourself were introduced to the process. Ease them into it. Because if you've never had somebody look at your code before, it's really easy to get scared and think, you know, what are they all going to think of me? And when you're onboarding, that first submitted PR is always the hardest, so be extra kind with comments and give extra context. Encourage the new person to read recently completed reviews before trying to give one of their own. That first review that they do should be really small. Don't let somebody stumble around in the dark on the wrong path. Share the style guide so that expectations are set. Maybe do the first few reviews as a pair. And remember that expectations are good, but don't be rigid. You want to evaluate any new ideas or best practices that the new employee might have brought over from a previous organization. They might be really important, and it's good to change the way that you work. And everyone should be a reviewer. So pair program; use it as a mentorship opportunity. Don't muscle out newer or inexperienced devs.
This is a great opportunity for them to become more experienced, because when your team succeeds, you succeed. And my friend Sasha said it best. She gave a great talk, Giving and Getting Technical Help. She said, hiring senior engineers is hard; instead, you can hire junior engineers and grow them into functional, productive parts of your team. And code review is one of the fantastic ways of doing that. So if you're not doing reviews, you're missing a huge opportunity. Code review is provably shown to improve code quality across all kinds of organizations and code bases. Think about what's blocking your team from trying some of the techniques that we talked about today, and figure out ways of eliminating those blockers. So remember: allocate the time. Develop the process. Don't force it. It's not one-size-fits-all or a one-stop fix. Use it in addition to tests, QA, et cetera, for maximum impact. And it's not just anecdotes. I personally have become a much better programmer by participating in code reviews, especially at a company that had a great review culture. I've used it as an opportunity to learn from others, and I started anticipating what a reviewer might point out before even getting to the review process. So what did we learn? Code reviews. Co-workers who are good at code review are worth their weight in gold. My co-worker Sarah said this, and I 100% agree with her. And most importantly, we learned that reviews decrease WTFs per minute by increasing code quality long-term. That's a scientific fact. I'm an expert, trust me. We all know this is the only valid measurement of code quality. Everybody wants to work with the team on the left and not the team on the right. And less WTFs lead to happier developers. Thank you all so much. And if you want to learn about Python at Microsoft: aka.ms/python. Okay. So, any questions? Just go to the microphone, and I'll also take questions in the hallway. Question in the hallway?