And now we're getting ready for our next session, and we have Vinicius Gubiani Ferre. Oh, sorry. Can you introduce? Can you say your name? Sure, sure, I will. That's okay, I have a slide just for introducing myself. Oh, that's not me. You're from Palo Alto? Not really, I am from Brazil, actually. Oh, yeah. And are you calling in from Brazil today? Yes, I am right now. It's 10:45. Is this the first European event you've been to? Yeah, it is. Okay. So are you ready? Yeah, just let me share my slides. Yes, I think we should get going, and I'll let you share your screen and start with your talk. Can you see my slides? I will stop my video, and you also appear in the channel, so everything's fine. Hold on. Okay, should I start? That's okay. Yeah, please go ahead. Oh, great. Thanks. Well, okay, let's do this. Guys, I'll be talking today about effective code reviews: the edge between hard skills and soft skills. For those who do not know me, this is my first EuroPython conference. I work at Azion Technologies, which is an edge platform to build and run low-latency applications and do real-time data analysis. I am a Python back-end developer at Azion Technologies. I am, slash was, a cyclist, at least before the pandemic. I'm a TV series and movie lover, addicted to quality assurance, and an open source contributor. I contribute on a daily basis to the Python Portuguese documentation, because here in Brazil not everybody speaks English, so that was the best way I found to help on a frequent basis. And according to GitHub, I'm also a guy who invests a lot of time in code review. This graph is maybe a bit outdated; it's from about a year ago. It might have changed a bit, maybe a bit more weighted toward commits nowadays, but you can check how it looks right now on my GitHub profile at the end of this presentation. So, what is this code review talk all about?
The summarized version: code reviews are a software quality assurance activity in which one or several people check a program, mainly by viewing and reading parts of its source code, either during the coding itself or at the end, and at least one of the people involved is not the code's author. That sounds simple, right? Well, not exactly, because machines are predictable and human beings are not. So I'm going to tell a bit of a story. Once upon a time, a developer goes to work, starts his day, and sees some really bad code, which is really, really hard to work on. It's ugly, and place your favorite adjectives over here for how you like to describe bad code. That guy decides to be brutally honest about the author who wrote the code, and sometimes tells him directly, or tells the whole company, commenting on the coding skills of the author or the nasty work that was done. And if that happens every now and then, with some frequency, eventually that developer is let go from the company and has to find a new job, even though that person is a great programmer, knows the language and all the frameworks from A to Z, et cetera. That might be one scenario. Another possible scenario is that the person is really, really annoyed with the bad code he sees frequently, but decides to keep his opinion to himself. And that leads to frustration, anger management issues, and sometimes deep, obscure desires to hurt or even murder somebody else, usually the code's author. I mean, look at this guy, he seems distressed. He's smashing those sheets of paper as hard as he can, but that's not actually going to make the code better. Believe it or not, I used to feel like this guy before working with code reviews. I worked at companies that didn't have code reviews; developers just passed code forward, got the task done without too much quality assurance on top of it. And that was really annoying.
So the objective of this talk is to help you guys out with lessons learned, so you don't end up like this guy, or like the previous guy who got fired, or have to go to therapy because you are stressed from work. So, code reviews: why do they exist? Mostly for these reasons. To transmit knowledge: lots of companies have what they call the bus factor, which is when a project only has a single core developer. That can be dangerous for the company, because if that developer goes on vacation, quits his job, or, in the worst case, literally gets hit by the bus and dies, then the project sometimes dies along with him. So to train the next generation of developers in the company, you have to pass knowledge on, and code review is awesome for that. To stimulate collaboration on the project. To ensure your changes are on the right track, because every now and then you think you're done with a task, you ship it off for code review, and it turns out you didn't understand the task very well and have to either do it over from scratch or redo 30-50% of it. So it also helps prevent you from having to start over from scratch. To enforce good practices when coding, and to ensure good quality in the final product. You can see that I crossed out "code", because in most companies, including Azion, code is not exactly the final outcome the client is actually demanding; code is more a means to achieve something. It's a tool, let's call it that. Now I'll lay out three golden rules for healthy code reviews. And by healthy, I actually mean you don't end up stressed like the guy banging his head against the wall, or fired. The first rule is: don't take any comment as a personal offense. I had a teacher who used to say: be smooth with people and harsh with problems.
There are a lot of people who, with just one, two, or a few comments, already switch to a defensive position, which sometimes is not actually defensive but more like an offensive position, like they're almost starting a fight with you, a literal fight. My advice in this case is: you don't have to protect your code as if it were your own child. You are the author of the code; you have your DNA in the code, let's call it that, since you are the author. But in the end, you don't own the code. The owner of the code will be either the community, or the organization, or the company that is actually paying your salary to write it. So as soon as you realize that your code is not your child, you'll be all right. You'll be more willing to accept advice and feedback. The second rule is: listen to feedback. That sounds obvious, but most people actually don't want to listen to other people. There's a popular saying that we have two ears and one mouth; therefore, we should listen twice as much as we speak, right? In the end, you realize that everybody works for the same company with the same goals and objectives, or almost the same, depending on the team, and everybody's there to help each other out. But of course, since you will be participating in code reviews, you have to distinguish feedback from "I will screw you back", because if somebody was harsh with you, you're not supposed to pay them back in the same currency. So don't screw somebody else over because you think they were impolite with you. And the third golden rule is: accept the fact that you might be wrong, because after all, part of being human is making mistakes. In the end, after a while, you realize that teamwork is much better than being a lone wolf, a bounty hunter, or, as some human resources people like to call it, a rock star, a ninja, a rocket scientist developer, whatever they call it; I see it very often.
Even if you truly are a rock star or a ninja, those people are usually right 90-95% of the time, so there is still a small percentage of the time when they are actually wrong. This is known as Chuck Norris syndrome, or "I never fail". My advice for these people is: be humble, because that will create a better image of you for your team, and you will need it when you don't know how to do something, some specific task. So, regarding feedback: most of the feedback we give, to be honest, will not exactly be pleasant feedback; it will mostly be halting the delivery of a task, or negative feedback. So every now and then we have to say some nice words about the good things we find in pull requests. Okay, nobody is in fourth grade or kindergarten anymore, so you don't have to say nice words all the time, but every now and then, when you see something you really, really liked, you can use words such as "good job" or "awesome"; if you're into gaming, you can also say stuff like "flawless victory", "KO", "monster kill", or any other approval phrases. If you're not confident enough in the context of a pull request to approve or reject it, you can use acronyms such as LGTM (looks good to me) or IMHO (in my humble opinion); that is the way we do it back at Azion. And since code reviews are every now and then nervous and tense moments, you can always lighten the mood using emojis, memes, and funny phrases, such as this one I brought from previous times. In this case, my buddy was maybe being a bit too redundant, since he had the word "choice" three or four times in a single line. I decided to use this meme to explain to him that he was maybe raining in the pool, which is unnecessary: there's already enough water in there. In this other case, a buddy of mine was, God knows why, trying to check if a variable somehow, some way, changed its value, and I have no idea why he attempted to do that. Maybe just to get a test passing, and he forgot to remove it before opening the pull request.
I just literally gave him an upside-down smiley emoji saying, okay, what are you trying to do over here? And in this case, a colleague of mine just reminded me that fixing tests is unavoidable. Since you are producing new code or changing existing code, if tests break, you will have to deal with fixing them. That is a fact of a developer's life. Some people every now and then ask me how to tell whether code is good or not, but there's no truly specific metric for that. I usually consider two metrics. One is the number of curse words that you either speak or think while reviewing a pull request: if you're cursing just a little, it's probably good code; if you're cursing a lot, it might not be. This image became very, very real for me about two or three months ago, when I was reviewing some JavaScript code. I basically work with backend and Python, which has very nice conventions for the language and frameworks. JavaScript, which I don't work with on a daily basis, seems so much more permissive; it allows you to do pretty much anything you want, and I was looking at that thinking, God, no, why? I was laughing and nervous at the same time. Another metric we use every now and then is the number of comments left on the pull request, though of course that depends on the size of the pull request itself. So, we've talked about good feedback and negative feedback. When you are rejecting a pull request, always explain why you are doing so, because the author or authors might think you are not being reasonable or fair with them. Reasons for rejecting: you believe the pull request is not achieving what the issue is asking for, or you believe the pull request will break something, either because of the code that is present or because of some code that is missing from it. Those are my biggest reasons for rejecting.
And besides always explaining, you have to watch out for how you explain it to the authors. So here are a few tips. Always, always be nice and try to show some empathy with the author, since he probably put a lot of work into it; you should try to put yourself in the author's shoes. Use collective words such as "us", "can we", and "all" instead of individual ones like "you", "him", or "her". Those individual terms sound like you're blaming the author, which is not the case; you're just commenting. But if you use "us" and "can we", that collective language creates a sense of synergy, shows that you're trying to help, and soon everybody notices they're in the same boat. If it's going to take a while to explain (you should always try to be clear and straight to the point), you might as well tap the author on the shoulder and ask if he has five minutes to spare for a quick chat. Of course, that might be a bit hard during the pandemic right now, but you can do it using Skype, Slack, or whatever instant messenger you guys are using at your company. And you can always raise questions instead of giving the author the direct answer: "What if you do it this way? It looks more efficient because of X, Y, and Z." When you raise questions, you leave the author thinking about it, and he might arrive at conclusions he hadn't thought of before. When you are on the other side, as the developer who opened the pull request, you have to keep an eye on these things. Isn't the pull request getting a bit too big? Is it possible to break it down into smaller chunks, which are easier and faster to review? A good size for a pull request is usually 250-300 lines of code, not counting style changes from tools such as black, PEP 8 fixers, and isort. Beyond that size, the pull request starts to become a bit of a Godzilla or a Megazord; it becomes tiring to review. So that's an interesting metric. People ask me: can I open it as a WIP, a work in progress?
You can, you definitely can, especially if you're not sure what exactly you are doing, because, let's be honest, how many times in life are we sure what we're doing? I'm not exactly sure what I'm doing right now, to be honest. Just mark it as a WIP, and preferably open it as soon as you pass 50% or 60% of the changes, because if you only submit your pull request when you believe it's done, you might be totally off track and have to do it over from scratch. That happened to me and a buddy of mine a couple of times, maybe four years ago, something like that; every now and then it happens. Always place style changes, such as those from black, PEP 8 fixers, and isort, in separate commits. Why? Because those can be automated using pre-commit and other tools, so it's not worth wasting time on discussions about them; you can automate them and should preferably skip them in code review. Prior to opening: did you pass the tests on your machine? It's nice if you run them locally, because at some companies you might have a CI queue, so you're actually fighting other people for CPU resources. If you can run the tests locally, that's great, because you're doing everyone else a favor. Is the pull request clear? What about the description, or images that show off what was changed? It's very likely that some of the reviewers will not be from your team, so they might not be up to date with the context of what's going on or what the change is for. You have to describe it as clearly as possible, so that somebody who joined the company today would understand what's going on in the pull request. Is this a fix or a feat (feature)? Where are the tests? Some people will demand those on your pull request.
I know I do, because every feat and fix should have tests, either to ensure that you are achieving the requirements desired for that specific task, or to ensure that a bug that entered the code shall never return again, at least hopefully. And finally, you should review your own pull request prior to opening it, as carefully as you can, or, as I personally like to put it, consider the next programmer a psycho who knows exactly where you live. Because the same way you don't want to work with crappy code, like the previous guys I presented, the one who got fired or the one who kept his opinion to himself, you should always release code that you are proud to work on, at least for yourself. And if you happen to find code that's not that good, improve it, even if it's just one line at a time, because your buddies will thank you for that. So when opening your pull request, always keep an eye out for leftover pdb breakpoints, comments that are not necessary, and unsuccessful rebases or merges; catching that kind of stuff will save a lot of time in back-and-forth discussions and pipeline CPU time. So review your own pull request, please. What should you keep an eye on as a reviewer? If the tests or the pipeline are failing, don't waste your time, because some code will probably come in to fix the tests or the pipeline, so you might end up wasting time if you start reviewing something that's not passing. If you see any typos, in comments, tests, variable names, or classes, you can and should comment on that. About two months ago, a buddy of mine said he didn't understand my pull request. I went to check it out: I had written "not" when I actually meant to write "now". After changing that single letter, he totally got the meaning of what I was trying to achieve. Azion is a Brazilian company, but our code base is all written in English, so every now and then a Portuguese word slips through the code, and we have to comment on pull requests to keep everybody on the same page.
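The "every feat and fix should have tests" rule can be sketched like this. Everything here is hypothetical, invented only to illustrate the pattern of shipping a regression test in the same pull request as the fix: a `normalize_username` helper that once crashed on surrounding whitespace, now fixed, with a test that pins the bug down so it can never silently return.

```python
# Hypothetical helper that once had a bug: it mishandled usernames
# with surrounding whitespace. The fix strips the input first.
def normalize_username(name: str) -> str:
    """Lowercase a username and strip surrounding whitespace."""
    return name.strip().lower()


# Regression test shipped in the same pull request as the fix,
# so the bug shall never return again (at least hopefully).
def test_normalize_username_strips_whitespace():
    assert normalize_username("  Alice ") == "alice"


def test_normalize_username_plain():
    assert normalize_username("Bob") == "bob"
```

The point is not the helper itself but the habit: the test names the exact behavior that broke, so a future change that reintroduces the bug fails loudly in CI instead of slipping past review.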
There are logic errors that you should comment on, but of course you have to be sure that there really is a logic error in there; it might backfire if you're not sure. My suggestion is to raise it as a question: "Maybe I didn't understand this correctly; is this okay?" That way you're not asserting that it's wrong, you're raising a doubt. And performance optimizations: they are a tough matter to discuss, but if you can optimize small things, they might turn out to be very, very beneficial. For example, there might be one specific place where you hit the database, and that line runs like 10,000 times in a single hour. If you prevent that from happening, you're already saving CPU resources on the database. Again, some other stuff you might keep an eye on: follow style conventions, project architectural guidelines, and good practices adopted by the company and the community. Create documents inside the company; we have several of those, including one for code reviews, which this talk was originally based on. For commit messages and pull request titles, I suggest a specific pattern: the Jira or bug tracker issue number, followed by a prefix such as fix, feat, docs, or chore, and a small message, all of it fitting within 50 or 60 characters. If you can't explain it in under 50 or 60 characters, that is a good sign your pull request was too big and you should have broken it up beforehand. It's a way of learning. You might stumble into situations where the author doesn't give up, so my advice for that is to negotiate. You'll hear a lot of stuff: the sprint is too long, we're already way behind, that kind of thing. And don't use force push; use --force-with-lease, which makes the reviewer's job much easier. To sum it up, guys, code review comes down more to people than to technology. You have to find a way to express yourself without anybody getting harmed in the process, without hurting anybody's feelings.
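The database example above can be sketched with a simple cache. The `query_plan_name` lookup and its call counter are hypothetical stand-ins for a real database hit, just to show the shape of the optimization using `functools.lru_cache`:

```python
from functools import lru_cache

CALLS = {"count": 0}  # counts how often the "database" is actually hit


def query_plan_name(plan_id: int) -> str:
    """Hypothetical stand-in for a real database lookup."""
    CALLS["count"] += 1
    return f"plan-{plan_id}"


@lru_cache(maxsize=1024)
def cached_plan_name(plan_id: int) -> str:
    """Same lookup, but repeated calls with the same id skip the database."""
    return query_plan_name(plan_id)


# A hot path that would otherwise hit the database 10,000 times in an
# hour now hits it only once per distinct id.
for _ in range(10_000):
    cached_plan_name(7)

print(CALLS["count"])  # prints 1
```

In real code you would also think about invalidating the cache when the underlying row changes; this only illustrates why spotting repeated hits in review is worth a comment.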
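The title pattern suggested above can even be checked mechanically. This is a sketch under an assumed exact format of `ISSUE-123 prefix: message`, since the talk doesn't pin down the separators; `is_valid_title` is a hypothetical helper, not an Azion tool:

```python
import re

# Assumed shape: "<ISSUE-123> <prefix>: <short message>", max 60 chars.
# The prefixes come from the talk: fix, feat, docs, chore.
PATTERN = re.compile(r"^[A-Z]+-\d+ (fix|feat|docs|chore): .+$")


def is_valid_title(title: str, limit: int = 60) -> bool:
    """Return True if a commit/PR title follows the suggested pattern."""
    return len(title) <= limit and PATTERN.match(title) is not None


# Examples:
is_valid_title("PROJ-42 fix: handle empty payload on upload")  # True
is_valid_title("did some stuff")                               # False
```

A check like this could run as a pre-commit hook or CI step, so the "does the title fit in 60 characters" discussion never has to happen in the review itself.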
In the end, it's hard, because dealing with people is actually harder than dealing with machines; I guess you guys will agree with me. Well, thanks a lot for the opportunity to speak at EuroPython 2020. As we'd say at Azion: move to the edge. Thank you. And I don't speak Russian or Chinese, so you guys please pronounce these for me. If you have any questions at all, please contact me on Discord or any of these means over here. I'd love to see your feedback. Thank you, guys. Thank you. Yes, we have some short questions. Let's see. Can you hear me? Sure, I can. Okay, the first question, from Thomas: how do you balance time, effort, and depth of code review? How can we avoid scratching only the surface of the code? That is a good question. We have a rule at Azion that code needs to be approved by two people. You could argue that some people just approve blindly; I believe you shouldn't do that. That is more of a personal feeling, I guess; the company doesn't enforce a specific amount of time. But I do admit that I invest a lot of time in code review, and I've already prevented several bugs from going to production. I'm really proud of that. Okay, thank you. And one more question: do you recommend any particular tool for code reviews? For example, a web application, pull request discussions on GitHub, or Atlassian Crucible? Sure, we use GitHub itself for the pull requests on GitHub, and we also use GitLab; we have about half of our repositories in each one of those. I know some people use Gerrit, I think that's the pronunciation. There are lots of tools. I've also been looking into automation tools, I'm not sure if somebody would ask about that: I liked them for finding security issues and maybe some other stuff, but they will not produce human-readable feedback the way a person does. That's my feeling about it. Okay, final question, from Thomas: can you recommend how to discuss a pull request with the owner of a repository who sometimes acts like a benevolent dictator for life?
Yes, let's see, recommendations for that case. That depends a lot; I'm not sure if you're talking about specific open source code or company code. Ah, great, I see you updated the question. If you are in a company, I recommend always being polite. He'll probably fight back; that is very common. But if you truly think that something might break if that code goes through, you should maybe involve your manager, or even his manager, or both managers, like the tech leads, because that might happen. I've seen stuff that I flagged on a pull request go ahead anyway and explode in production just as I had told them, while I stayed quiet, sipping a coffee from afar, just looking at it and glancing at my watch, that kind of stuff. Okay, thank you very much for the questions and for the talk, and we can continue this in the Discord talk channel. But for the moment, everybody at home, please give some applause for this pre-recorded talk.