Welcome to this talk on improving code quality with static analysis. My name is Joe Purcell. I've been part of the PHP community since 2003, and I'm a senior developer at Digital Bridge in Chicago; we build e-commerce solutions for mid-sized companies in the Midwest. I gave this presentation in Dublin last year. And in Dries' keynote there, I was inspired to give a thank you to all the people who have helped shape my career. I could say many great things about each one of these people. Larry Garfield, known as Crell (he's down there at the bottom), was asked to leave the Drupal project about a month ago. That's left me with a lot of confusion and frustration. I've had many conversations and read the public information available. One thing I will say is that, to me, he's been an inspiration for why I'm here presenting. And I want to trust him; I have no reason not to. But more importantly, the Drupal community is much larger than Dries or Larry. And that's why we're here. We're here to grow and to learn from each other.

Who are you? If I said "cyclomatic complexity," how many of you would know what that is? Raise your hands. Okay, great. I assume that you value code quality and want to get better. The takeaway for today is that you'll leave with more information about how to improve your development workflow, or improve your project, using static analysis.

Let's start with a story before we get in. This example here is a picture of a sign that says "private customer parking only, all others will be toad." The word is spelled like the frog, T-O-A-D, not T-O-W-E-D like you're going to tow a car, which is funny. Humans are pretty forgiving in this case. We know this is either someone making a joke or maybe a typo. Either way, it's kind of funny. And there's an urban legend around this called typoglycemia.
The idea is that if you take any word and scramble the letters in the middle, you can still understand what the word is. So I've written a phrase here: "who needs a spell checker?" I hope everybody uses a spell checker, right? Your browser has one built in. What's interesting here is that to humans, spelling isn't required for comprehension, but for computers it is. They're not so forgiving.

And that brings us to a story. This goes back to 1962. The U.S. was launching its first interplanetary mission, to inspect the planet Venus. It cost $18.5 million; today that's about $150 million. Within minutes of activating the launch sequence, they noticed that the guidance system was not responding properly to commands, and they had to self-destruct the vehicle. After inspection, here's what happened. This was back when humans would transcribe information by hand, and when the program for the guidance system was being copied, a smoothed value wasn't used. The smoothed value was indicated by a horizontal bar above one of the variables, and that bar was missed, so the raw value was used instead. There was also a hardware failure, and that hardware failure combined with the buggy code is why they had to destruct the vehicle. Some know this as the most expensive hyphen in history. The point here is clear: computers will do exactly what you tell them to. No more, no less.

Now let's go to a more modern-day example. Have you heard of "goto fail," the iOS bug from a couple of years ago? The code shipped in September 2012, in iOS and OS X, in the SSL library. The problem was that this bug allowed anyone to perform a man-in-the-middle attack on an SSL connection with an iOS or OS X device. It wasn't fixed until about a year and a half later. Now, for those who are not programmers or don't have a keen eye for what's going on here, let me explain. You see the two "goto fail" statements. The second section here is how it would actually get executed.
So it would evaluate the conditional and say "goto fail," and even if it didn't pass into that branch, it would run the second "goto fail" anyway. The problem is that any code after that second "goto fail" would never get executed. It's a very simple example, but Martin Fowler pointed out in his blog post that this bug could have been caught by static analysis. The point for today is: if you're not using static analysis, you're wasting time. You could put more time into training, or configuring your IDE, or what have you, but we're going to talk about how you can improve your process with static analysis. I'm sure all of you who are developers are very familiar with the grind on poor code: you make a change, and a week later you have to fix it or revise that section of code again.

So here's what we're going to cover. We're going to talk about static analysis and present a definition so you understand it more clearly. We're going to go through some examples of what static analysis can find in code. And we're going to talk about how to do continuous inspection, which is incorporating static analysis into your development process.

First of all, what is static analysis? It's essentially a spell checker for code: anything that you can learn from code without executing it. Going back to the "goto fail" example, if we ran a static analysis checker on it using the Drupal community's code sniffs, you would see a few things. The handshake method should not use a goto statement; the Coder code styles, and PHPMD as well, suggest not using goto statements. Inline control structures are not allowed; that's a Drupal community rule. You know how you can type an if conditional and not put the curly brackets? That would have gotten flagged by a static analysis checker. And lastly, you would get an error because the second "goto fail," even if inline control structures were allowed, is indented too far. So how is this done?
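To make the shape of the bug concrete, here is a hedged sketch of the same pattern translated to PHP. The original was C code in Apple's SSL library; this function name and its logic are illustrative, not the real code.

```php
<?php

function verifySignature(bool $hashMatches): int
{
    $err = 0; // 0 means "success"

    if (!$hashMatches)
        goto fail;
        goto fail; // the duplicated line: indentation suggests it is
                   // guarded by the if, but it always executes

    // Dead code: the real verification below can never run, so the
    // function returns "success" for any input. A static analyzer
    // flags both the inline control structure and the unreachable code.
    $err = -1;

fail:
    return $err;
}
```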
How do these tools do this? I mentioned PHPCS, which is what Drupal Coder uses, and PHPMD. All they're doing is reading a file off disk and looking for patterns. And there are a lot of tools that do this. There's PHP Copy/Paste Detector, which looks for any copy-and-pasted code; we all do this, but you're not being DRY that way. There's PHPLOC, a very simple tool to gather information about how many lines of code there are compared to comments, and so on. There are many other tools that do different types of reporting, and many other languages have very similar tools.

So let's try them out. We'll go through two examples. One check would be global variable access. If you're writing code and using a global variable, that's typically frowned upon for a number of reasons. Essentially, what that check does in PHP is look for any reference to the keyword global. You could do this yourself using grep, but PHP Mess Detector, or PHPCS, is a tool that aggregates checks like that. Another example would be an unused variable or method parameter. If you've ever made a spelling error in a variable name, it can surface as an unused variable, and this check would go in and find it. Here's an example that I pulled out; these examples are all from Drupal. The second line here assigns getEntity to a variable, but nowhere in this method is it used. Kind of interesting, right? You can run a program to help you spell check your code.

The point is that when you're writing code, there are certain things that smell like a bug, or smell of poor coding. If it stinks, change it. There's a book called Refactoring; I recommend reading it, it's a good book. And it references the question: how do you know when to change a diaper? Well, if it stinks. I think that principle applies well here, because as we go through these examples, you might be thinking, "well, I've done that, and it made sense at the time."
Well, the point is that there will be false positives, or cases where it makes sense, and static analysis is a tool that can help you identify when it doesn't. So I have gone through and categorized some of the checks that we're going to walk through. I want to go through each of these checks, like cyclomatic complexity, so that if you ever see them in one of the tools, you know what they mean. I've categorized them by topical areas that make a project harder to work with.

Readability might be things like code style. If you are doing code review and you read thousands of lines of code, it's very nice to have a consistent format. Or if code is indented very strangely, that's readability too. Maintainability: software is always mutating; we're always making changes to it. So how easy is it to maintain and apply updates? Extensibility: you have a client who wants a new feature; how easy is it going to be to add it? Security: one prominent topic for static analysis is security, and there are tools that focus just on security because of its importance, but there are checks we'll talk about that relate to it. Testability and complexity: the cyclomatic complexity example applies here. Some code is a lot harder to test, which makes it harder to maintain. And correctness: you can write code that meets the business requirements, but is it efficient? We'll go through examples of each.

So, global variable access. We already looked at this one. Why would this be something you wouldn't want in your code? Testability, first of all. You have to ensure that when you execute this method, you're setting the state properly. If you don't, or if you have one test that sets it and then run another test that sets it to a different value, you might run into a flaky test. And there's no sanitization of this variable here.
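To make the globals discussion concrete, here is a minimal hypothetical sketch; the function and variable names are invented, not from the Drupal example on the slide.

```php
<?php

// Hypothetical sketch: a function that reads global state. A static
// analysis check for global access flags the `global` keyword itself.
function tax_rate(): float
{
    global $region;                // flagged: global variable access
    return $region === 'EU' ? 0.21 : 0.0;
}

// Why this hurts testability: suppose test A sets $region = 'EU' and
// asserts 0.21, while test B forgets to set $region at all. Test B
// passes when run after test A (the state leaks through) but fails
// when run alone: a flaky test. Passing $region as a parameter, or
// injecting a config object, makes the function deterministic.
```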
The global keyword is aptly named, because any place during execution time can modify the variable. Are you ensuring that a malicious user hasn't injected code, or set this variable to a value that might introduce an exploit? And then extensibility: there's no abstraction you can apply to a global variable, because it's not a class. You couldn't do type inference or subclass it to add behavior.

The other example we talked about already was the unused variable or method parameter. It's unclear what that entity variable is doing. Was this a bug? Did the author intend to use it but just forget? Or was it an artifact of refactoring at some point? We don't know. If you're assigned a ticket, or you see a ticket in the Drupal issue queue, and you're trying to debug this method, reading every line of code, you read this and you don't know what to make of it. And correctness: if getEntity is an expensive call, you're wasting effort there, wasting execution time. We could make this method more efficient.

Dead code. We looked at this with "goto fail." I don't think this example shows it clearly enough, but in this method that I found, there was a section of code that got executed, and, just like "goto fail," there was code afterwards that never got executed. The problem is that you're going to ask: why is this code here? Was it a bug? Did the author intend to use it, and it got lost in a refactoring and never reincorporated? And then correctness: if that code isn't getting executed, you're wasting lines of code that people are going to be reading.

Number of public methods. Now, this one's a little controversial, because for some classes it's nice to put a lot of functionality, a lot of behavior, into a single class, just to have an easy point of reference. But there's a limit, right?
As humans, there's a limit to how much we can understand and easily maintain in a single class. If you have a lot of methods, it's usually an indication that you're violating the single responsibility principle, if you're familiar with SOLID design. So if you were asked to extend the EntityType class, you have to make sure that you're doing it in such a way that you're accounting for every single one of these methods. In this example, there are almost 70 public methods on the EntityType class, so you have to account for every single one of them to extend it properly. And then correctness: as I mentioned, the single responsibility principle usually applies in this case. If you're using PHP Mess Detector, it'll throw this kind of alert, and you can set a threshold. I think the default might be 100. No, that's not right; 50. I think it's 50. Anyway, each one of these checks typically has a default.

Use of statics. I know frameworks like Laravel encourage the use of statics with facades. And even in Drupal you might use them; in this case, we call loadMultiple. There are other entity API calls that you make as static calls, or if you're not injecting a service, you might be getting the logger service, for example, and that's a static call. Well, one of the challenges there is testability. You can't stub the loadMultiple method, because the code is calling the WorkspaceType class directly. You would have to essentially replace that class, which I don't think you can even do. So when it comes to testing, you're going to have to make sure that whatever loadMultiple is doing, you're accounting for it. If it's making calls to the database, you need to make sure the database is available. Statics like this make your code harder to test. If you were actually creating an instance of a class, you could mock it.
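A hedged sketch of the contrast being described; WorkspaceType and loadMultiple come from the slide's example, while the surrounding class names are illustrative, not real Drupal code.

```php
<?php

// Static call: pinned to the WorkspaceType class, so a solitary unit
// test cannot substitute a double for it.
class ReplicatorStatic
{
    public function add(array $ids): array
    {
        return WorkspaceType::loadMultiple($ids); // hard to stub
    }
}

// Injected collaborator: the test's setUp() can pass in a mock storage
// object and fully control what loadMultiple() returns.
class ReplicatorInjected
{
    public function __construct(private EntityStorageInterface $storage)
    {
    }

    public function add(array $ids): array
    {
        return $this->storage->loadMultiple($ids); // mockable
    }
}
```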
Or, if you were injecting the WorkspaceType object, then in your test, during setup, you could just replace WorkspaceType entirely and control what loadMultiple returns, so you'd have finer control over what you're testing with this add method. And then extensibility: you can't subclass WorkspaceType. If you have a static call, you'll always be calling that exact class; you won't be able to override it. Oh, one thing to mention here with statics, too: sometimes you use statics to deliberately control extensibility. Sometimes you don't want people to extend. For example, with the Drupal class, there's an intention to have a consolidated place for calling the container or the logger, et cetera. So there are some framework decisions where static use might make sense.

Missing doc comments. Drupal does a really good job with documentation. In this example, I'm picking on a contrived module, and this method didn't have a doc comment. It's so nice to see a module like the Group module, which has excellent documentation: you can read through the whole thing and know exactly what's going on, rather than having to read the lines of code and do the computation in your head. In this case, that documentation isn't there. So if you want to update this method, you have to run through the whole computation. You need to go to the doReplication method and understand what it's doing, because one of my immediate questions looking at this is that the update method looks like just a wrapper around doReplication, and I don't actually know why it's here. And then extensibility: if you want to add a feature related to this update method, you have to understand it before you can extend it. In some ways, you could consider missing documentation a critical bug.

Great. This one's a fun one, because it's lots of fancy words that mean something really simple: NPath or cyclomatic complexity.
I'm combining the two because they serve a similar purpose. The purpose of either metric is to quantify how much complexity there is in your method or your code. Cyclomatic complexity counts the number of control structures, the if statements, switch statements, et cetera, and tallies one for each. The difference between cyclomatic complexity and NPath complexity is that NPath counts those control structures, but it also counts operators. So if you're using && or an OR operator, it'll count those as well. NPath counts the number of paths through the code, including operators; cyclomatic just looks at the number of control structures.

This is a very handy way to know how many tests to write. If you have a cyclomatic complexity of three, you have roughly three independent paths, so you probably want at least three tests to cover them if you're doing unit testing. If you have high values for either of these, there might be a problem with readability. If you've ever read nested foreach loops, or nested conditionals, you know these things are hard to read. Maintainability: try to debug this. This disableEntityTypes method has a cyclomatic complexity of 15 and an NPath complexity of 420. When you're trying to debug it, you have to think through maybe not all of those paths, but many of them, which makes it harder. And correctness: sometimes a high number is an indication that you're violating single responsibility. You might be doing computation on information that isn't directly this class's concern.

Efferent and afferent coupling. Also kind of fancy words, but something very simple. Efferent coupling means the class in question knows about many other classes. Afferent coupling is the opposite: many other classes know about this one class. Why is this important?
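A minimal sketch of how the two metrics diverge; the function is illustrative and the counts are computed by hand, so treat the exact numbers as approximate, since tools differ slightly in what they tally.

```php
<?php

// Two independent if statements: cyclomatic complexity is 3
// (two decision points + 1), but NPath complexity is 4 (2 × 2),
// because NPath multiplies the branch counts of sequential structures.
function describe(int $count, bool $verbose): string
{
    $label = 'item';
    if ($count !== 1) {   // decision 1
        $label .= 's';
    }
    if ($verbose) {       // decision 2
        $label = "$count $label";
    }
    return $label;
}
```

Because NPath multiplies while cyclomatic adds, the two drift apart fast, which is how a single method can show a cyclomatic complexity of 15 alongside an NPath complexity of 420.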
Well, efferent coupling is very important when you're trying to test a class. In this example, MultiversionManager has 15 classes that it knows about. If you want to write a test for this class, and you want to do unit testing, not a kernel test but a solitary unit test, you would have to write a test double for all 15 of those dependencies in order to control the indirect inputs and outputs of the method under test. This also means that if any of those 15 classes change, you may have to refactor your test, even if the change is totally unrelated. Afferent coupling has a different impact. There, if you have many classes that know about your one class, such as the Drupal class, and you want to make an API change, a breaking change, you have to change all of those other classes.

It's a metric. I don't know exactly what the threshold is; I think it's 10 in PHP Mess Detector. Again, you can configure that, but when you see this alert, it's an indication that the code is going to be hard to extend. The high coupling means you'll have a lot of classes to be aware of when extending this one, and if you want to override the functionality, it's going to take a lot of effort. What's nice is that you can know this at a glance: you run static analysis, you get the number, and you can have a gut reaction to how hard it's going to be. For testability, you'd have to write a test double for each one of those classes. And correctness: usually, if a class is aware of this many classes, you might be violating single responsibility. Again, you as the human will know. If it stinks, change it.

Space before parenthesis. Code style is a valid quality metric. When you have a code base as large as Drupal's, and you're reviewing someone else's code, it's very nice to see a consistent format. Whether curly brackets on the same line make more sense than not.
I think that's a separate conversation; more importantly, having that consistency is very important. So, this example here, and I actually ran into this last week: you'll notice on the fourth line you see getTotalPrice on the far right. That's actually a method call, and the brackets are on the next line. So, readability: your first reaction is, this looks different. Now you have to read each character to pick out why. And maintainability: let's say I didn't need to change this line of code, but I saw it and want to fix it because it's hard to read. If I fix it, I might be creating a merge conflict with some other ticket that's RTBC and ready to get merged. In Drupal, there is kind of a strict set of rules that you have to follow, so checks like this are very nice to surface before someone makes a contribution.

All right, so we've gone through a number of checks. How do you do this? How do you do continuous inspection? How do you tie this into your day-to-day, week-to-week work? I'm going to use some examples from Code Climate. Code Climate is different from a lot of tools that have one static analysis engine; they have many, some for PHP, some for JavaScript, et cetera. And it's free for open source; their goal is to be free for any open source project. Right now, they only support GitHub projects. There are many other tools than Code Climate, but I found it to be the most broad. I've created a repository on github.com with some examples of how to configure Code Climate, and I'll show this slide at the end as well if you want the link. I'll show a quick example just so you have some context for what I'm about to talk about next. In the Code Climate YAML, you specify what engines you want: PHP Mess Detector, and I'm able to configure Drupal Coder to apply here as well.
Now, when I originally wrote this Code Climate YAML, they didn't have the Drupal code sniffs installed, but I was able to contribute that back, because their static analysis engines are open source. If you wanted to write your own static analysis tool, you could contribute it back too. So anyway, I've got PHP Mess Detector looking for .php files, .inc files, .module and .install files, and then I'm also running PHP_CodeSniffer, which is where the Drupal code style applies. And then there's a ratings section. One of the things we're going to talk about is the GPA: an indication of the overall health of the project, based on the number of issues combined with the weight of the code, the lines of code. In the ratings section I'm just looking at my custom modules, and then I have some exclude paths, because I don't want to rate core, which doesn't apply to my project, and I don't want to rate contrib either; I just want to look at my code. So that's a quick example. The PHP Mess Detector XML file is a configuration file just for the PHPMD tool; I also have that example up on GitHub if you want to use it. The point here is that there's configuration involved in getting to what I'm going to show you next.

There are two main ways you can tie in static analysis, and this is the moment; these are the key takeaways for today. One is in your development workflow; I mentioned continuous inspection, so we're going to talk about that next. The other is doing a code audit. A code audit might apply at the end of a sprint, et cetera. We'll get to that.
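For reference, a sketch of a .codeclimate.yml along the lines just described. The keys follow Code Climate's classic engine-based format as I recall it; current versions of the platform may expect different keys, and the paths are illustrative.

```yaml
engines:
  phpmd:
    enabled: true
    config:
      file_extensions: "php,inc,module,install"
      rulesets: "phpmd.xml"          # the separate PHPMD XML config
  phpcodesniffer:
    enabled: true
    config:
      standard: "Drupal"             # the Drupal Coder sniffs
ratings:
  paths:
    - "modules/custom/**"            # only rate our own code
exclude_paths:
  - "core/**"                        # don't rate Drupal core
  - "modules/contrib/**"             # or contributed modules
```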
So, the development workflow. There's a book called Continuous Integration, and I love the story it tells. The idea is: imagine you could hit one button on your keyboard, and everyone's code is merged and integrated, your tests are run, code that needs compiling is compiled (if it isn't PHP; say it's Java or something), your database integrations run, inspections run, and then it deploys. Imagine you had one button to do all of that. If you're doing continuous integration and you're not doing the inspection step, you're missing a critical part. Static analysis, or inspection of your code, should be done in parallel with your automated tests. So if you're doing continuous integration, also consider doing static analysis checks, because if you're not, you are wasting time.

I've identified some features that I think any tool needs to help you in your day-to-day workflow. One key feature is isolation of violations by commit or pull request. I've seen some activity on Drupal.org for getting Drupal Coder involved in the issue queue, but one of the challenges there is: how do you make sure that a reported violation applies only to my code? If you're looking to incorporate this on your own project, you may not want to start out on day one by just running PHP_CodeSniffer, because it's going to come up with tons of errors, and you may not have time to fix all of them. So a good tool, in my opinion, is able to isolate only the changes you've made when you're doing code review. Next, the ability to address false positives; those will happen. Some tools let you dismiss a violation: okay, we're aware of this, it's intentional; maybe it's a static call to a logger and there's a reason for it. Then, an indicator of overall health. If your static analysis tool can't aggregate the number of issues, or the impact of those issues, so that the whole project team is aware, then I think you lose value. You also want weighted impact.
Not every violation is the same. A code style violation is different from someone throwing in a whole bunch of global variable accesses; those two things are valued differently. Then, open source integration, like phpcs. This is important because an open source tool lets you collaborate on the checks. And finally, the ability to run the static analysis locally, if you're using PhpStorm or what have you. For these reasons, I think Code Climate is a good fit; it covers almost all of these.

So your day-to-day workflow is going to look like this. You run static analysis in your editor; I hope you all have static analysis configured in your editor. If you're using PhpStorm, you can configure it to lint your code, whether that's CSS, HTML, or PHP; there's linting, or there are code style checks, for all of that. So you run that in your editor, you make your commit, you push it up, your automated tests run, and in parallel your static analysis runs. If it fails, which means I've contributed a violation, I see a notification and revise. If it succeeds, the code is reviewed and merged.
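As a sketch of that pipeline, here is a hypothetical CI configuration in GitLab CI syntax; the job names, paths, and the assumption that the tools are installed via Composer are all illustrative. Static analysis runs in parallel with the test suite and fails the build just like a failing test.

```yaml
stages: [check]

phpunit:
  stage: check                 # jobs in the same stage run in parallel
  script:
    - vendor/bin/phpunit

static-analysis:
  stage: check
  script:
    - vendor/bin/phpcs --standard=Drupal modules/custom
    - vendor/bin/phpmd modules/custom text phpmd.xml
```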
Here's what this looks like on GitHub. Aidan Feldman at 18F: they're using Code Climate, and this is what it looks like. If he gets a pull request and he sees green, and he sees green on Code Climate, he doesn't have to look through for code style. It's already done for him; the computer did it, so they can just merge. Now, if there is a failure, it'll show up red. He can click on "details," and that takes him to a page that shows the specific violation. The interface has changed a little bit, but we have some actions here. One action is to say: yes, this was understood, I can clearly see there was a reason for this. So you click that it's an accepted violation, it shows green, and you can merge. You also get to see, per pull request, the overall impact of a violation on the quality of your project. So maybe it's late in the game, you see some violations, and you really don't want to send the code back for revision; it needs to get merged because you've got to get this out the door. You can see the impact on the GPA, just as a gut-check feeling for how bad it is, what the overall impact is.

Now, the second way this ties into your day-to-day or week-to-week is the code audit. You can do this during planned refactoring. There is kind of a misconception in the agile community that you can iterate from a bicycle to a race car. It's not really true; you need planned refactoring along the way, because you will accumulate technical debt. So maybe you plan a certain period of the project. Let's say it's a year-long project; maybe in month six you plan two or three days to just focus on cleanup. Or maybe you use it in a sprint retrospective: "team, I know this was a difficult sprint, we pushed some stuff out the door; how bad did we do, and what cleanup do we need to do?" With a tool like Code Climate, you can get a list of the GPA ratings by file. I know the image is very washed out here, but I know that I've got
a handful of files rated F, which means there are a lot of violations relative to lines of code. So maybe at the end of the sprint we say, all right, let's spend two or three hours and go clean up some code. That's great value to the project, and hopefully at the end of the sprint you don't have a lot of tickets open, so you can make these kinds of changes without creating merge conflicts.

Another way, and this one I love: there's a tool called PhpMetrics. It generates static analysis reports for you with a web interface, with lots of graphs, and it has good documentation. I've taken one of those graphs here. Efferent coupling is along the horizontal axis, which means that as you move along that axis, the class knows about more classes, so it's going to be harder to test and harder to extend. The vertical axis is cyclomatic complexity, so the further up you go, that might be an indication of a single responsibility problem; it's going to be hard to test, et cetera. What's interesting is looking at this and seeing what bubbles up to the edges. You have DrupalKernel, and the Drupal class, because it knows about all these things; it's got the logger, the container, et cetera. FormBuilder: very complex, a lot going on there. The entity API. And I don't know if you've ever looked at ArchiveTar, but it's just one big file, immense complexity, not broken out into different classes. What's nice about this chart is the immediate take: I don't have to be a technical person. I can look at this and ask my team, "hey, entity API, how come all these classes are out here? Is that something we might be able to improve on, refactor, make easier to maintain? ArchiveTar is really complex; could we spend some time breaking it out into other classes, reducing the complexity, making it easier to debug?"

So, what not to do. I've seen some bad things, ways in which static analysis can work against you, and we'll go through some examples. Static analysis is only as good as it's configured to be. So,
for example, Code Climate, and I think this is a poor choice, though I don't have a better suggestion: out of the box, you throw code at Code Climate, it runs, and it gives you a GPA. But it's not configured; it's just a best guess at what checks and violations make sense. You could end up in a situation where you're running PHP Mess Detector on JavaScript code, which doesn't make sense. So it's only as good as it's configured to be. A corollary to this: if you ever see a GPA, or see someone talk about quality metrics for a project, don't take it at face value unless you know it was configured correctly, unless you know someone took the time to say, "here are the directories that are actually our code, versus contributed directories," et cetera.

It's not a replacement for your brain. There are some checks that I think could be improved, and some checks apply better in certain situations than others. If you're going through the process with your team of incorporating static analysis, make sure everybody's on the same page; take time to educate your team. If you're on a Drupal project, I've done some of the legwork for you: I've created a curated PHP Mess Detector ruleset incorporating the Drupal Coder style checks, and you could take that and run with it. But if you're not in Drupal, or you're writing custom code, just make sure your team is aware before you take this on.

And don't compare GPAs of projects that use different configurations; that doesn't make sense. Different projects are going to value things differently. Saying you comply with PSR-2 is easier than saying you comply with the Drupal coding standard; the Drupal standard is way more specific than PSR-2. So it's kind of unfair, or, correction, it can be unfair, to make that comparison. Just be aware that GPAs might be computed in different ways, so it may not be a one-to-one comparison. But don't avoid static analysis because it's too much to tackle.
It's similar to trying to get 100% test coverage with a unit test suite; it's the same principle. You can shoot for 100% compliance with your static analysis tool, it's just really hard. That goal is good sometimes, but at other times, especially with a project team, it may not make sense; it may not be valuable to your client. Spend 20% of the effort and you'll get 80% of the benefit here.

And don't run static analysis only locally; the results should be shared, just like tests. If your automated tests are failing, your team should be aware. You're doing code review, you merge a PR, the tests run on develop and fail: the team should be aware, because that impacts the team. In a similar way, static analysis impacts the other people on the team.

I like this graphic; it shows the cost of owning a mess over time. Robert Martin, Uncle Bob, has a book called Clean Code. You have high productivity in the beginning, but over time, and we went through some examples earlier, the mess builds, and if you're not planning time to refactor or clean as you go, you lose productivity. So you're going to save time by focusing on this. You're not wasting time looking for code style: we looked at that pull request that was green; if I'm reviewing that, I don't have to look for code style, that's already done, and I can just focus on functionality. Does it meet the business requirements? You're not worrying about obvious bugs or typos. I have broken production before because I mistyped a variable name; that would get caught with static analysis. If you set this up properly with your team, you can clean as you go, and you know which parts are hard to change, so you don't have to keep grinding on code. With a team you can be reflective: this is what looks hard; can we spend some time to refactor?

So what's coming next for the Drupal community? I think we could agree on a static analysis tool and a configuration for Drupal 7 and Drupal 8 core and contrib, as well as projects. We have Drupal Coder, which has checks, plus PHP Mess Detector and PHP Copy/Paste Detector. We could agree on a process for versioning these: say you've got 100% compliance, and then a new violation check is introduced and rolled out; people should be aware of that. How does that happen? What does that process look like?

We could start using static analysis on core and contrib; there's a ticket out there. I'm optimistic; I think Code Climate could solve this now. I've had some conversations with people on the infrastructure team, and I know there are challenges here, but at least for your own projects, if you're on GitHub, you can use Code Climate today if you want.

We could ensure the community has the same checks on their projects. Drupal Coder is great, but the infrastructure isn't there. We have this problem with tests: trying to run Drupal's tests locally is a challenge, and we have the same challenge with static analysis. Could we have more public infrastructure to use for our own projects? With Code Climate you can create an engine; we could have a Drupal engine, so if you're paying for a private project you could just say, here's the engine I want to run, and the Drupal community can keep it up to date. We could ensure editors can integrate with the same checks, so we don't have the case where Drupal.org is running checks that you're not running on your local machine.

And a GPA could be used to compare contrib modules. Right now it's based on, well, I've used this module before, or I've heard this other person has used it, great, I'm going to use that; or you look at popularity, how many downloads. During the module acceptance process, when someone submits a new module, the GPA could be a quick gut check of how well they've done on code style. It could also be a good check for, are they writing documentation? These things might be helpful for the community.

Now there are some longer-term interesting things I want to include. Imagine the security team could have a check written for any security issue they find. If they find a pattern, imagine it being pushed out in an update to a Code Climate engine; everybody just hits rebuild on their project and can see whether that violation is there. That would be amazing. Ruby's Brakeman already does something similar to this. Imagine if automated code quality spike tickets could be published: a module merges code, it drops below a certain GPA, and a ticket is created saying, hey, can you spend some time to clean up your code? We could use analysis reporting as a way to manage Drupal API calls: especially as we go into Drupal 9, we don't want to reinvent the wheel. Could you write static analysis checks that flag deprecated API calls? Yeah, you could do that, and then publish that update and say, hey everybody, we're deprecating this API call; run this check, and immediately you're aware: oh, I need to go fix these three things. It can also indicate improper usage; the Linux kernel team does something similar to this. And lastly, you could use these checks for auto-correction: imagine you submit your patch and PHP Code Beautifier and Fixer automatically fixes these problems for you. That's pretty cool.

Now, we talked about Mariner 1. Mariner 2 launched 36 days after the Mariner 1 failure, and it was the first successful interplanetary mission. We learned some very interesting things about Venus, such as that its atmosphere can melt lead. With this kind of discipline in our community, I think we can do great things; I think we can do what we do a lot better. So thank you. Come to the sprints on Friday, you can see the slides up here, and find me after the talk if you want to chat more. Thank you.