Yeah, we're going to talk about sloppy code today. My name is Casey Kinsey. There are a bunch of links there where you can stalk me on the internet. And I'm a developer consultant from Fayetteville, Arkansas. By day, I'm the president and chief hacker of my currently one-man consulting shop, Lofty Labs. And by night, I am the co-founder of a project called Heapsort, which is a career website and job board for full-stack web developers. These slides are available online at the Lofty Labs website. So you can go there right now. They're built in HTML and JavaScript, so you can use them to follow along. There will be some code samples, and I think you'll be able to see them easier. And that's all contingent upon my $5 DigitalOcean droplet not crashing when you all go to pull it up. So before we get started actually dissecting the code, I want to spend a moment talking about where sloppy code comes from. I think the obvious place is inexperience. That's the first thing you would say, and it makes sense. We were all beginners at some point. But that's not what we want to talk about today. I'm not concerned about that. If you're new to Django and Python development and you're here in the crowd today, then you've done the best thing you can do, which is to come here, and you're going to learn a lot over the next couple of days. I'm more concerned about the sort of sloppiness that comes from things like an emphasis on development speed, pushing code out the door quickly, strict deadlines, and an incentive to cut corners. And there's also something I've noticed in my time working as a consultant, as a contractor. It's a psychological disposition: the lack of long-term ownership of a project, and what that does to the code that gets delivered. If you don't have long-term ownership, you lose the incentive to make sure that the project is stable for the next person that has to inherit it. And this is a little bit off topic.
But I wanted to point out something. As I was looking at this, there's sort of a sweet spot between these two points, number two and number three, lack of long-term ownership and an emphasis on development speed, that occurs within a concept that's somewhat fashionable in our industry right now. And it stems from this idea of rapid prototyping, which is really popular with modern frameworks. Because one of the benefits of modern frameworks is that in the long term, we can build something that's very complex and sophisticated, but in the short term, we can build something that's relatively complex and sophisticated without a whole lot of time being spent. But I'm going to say something controversial, because I think rapid prototyping is a lie. And I don't mean that the concept itself is wrong, or that it's something we were lied to about. I think it's a lie that we tell ourselves. And it has everything to do with this word prototyping, what we call prototyping. Because in the rest of the engineering world, a prototype is strictly a proof of concept, and those prototypes never see production. This is a prototype Tesla Model S. Obviously, this car is not driving around the highway somewhere, and it never showed up on a showroom floor looking like that. It's an inherent part of the engineering process when you build something physical that at a certain point, the prototype gets thrown away, and you take what you've learned and you build what you use in production. When engineers build a new type of suspension bridge, it's built on a very small scale. Obviously no car ever drives across a model the size of this table. But they take what they learn and they throw that out, because you can't bend the metal and create something new from it. It's inherent in the process. And that's not something that happens in software engineering.
And I would be wrong to say that it's not a benefit to us as software engineers that we don't have to throw things out to go to production. We can bend the metal, take what we have, make it better, and we don't have to start from scratch every time. And while that's a benefit, it can also be used very much to our disadvantage, because it's not inherent in the process that a prototype stops being a prototype. In software engineering, prototypes go into production all the time, because we always use the word prototype to our advantage. It's not fast, or it's not very efficient? Well, that's okay, it's a prototype. Oh, all right. But then one day the client comes and says we need to ship it right now, or that deadline comes up, and suddenly it's not a prototype anymore. We slap the word beta next to it, and beta is just another word for a prototype that went into production, right? So I want this to be in the back of your mind as we talk about the rest of the topic today. I want you to think about it because it's important to understand: this isn't the only way that sloppy software gets produced, but it's one of the big ways that I've seen in my career. Understanding where sloppy code comes from helps us to fix it, because we understand the mindset of the person who built it, and it also helps us not to be the next person that bumps the version number and pushes the prototype out into production. Okay, to work on a sloppy code base you're gonna need some tools, and you're gonna need to be familiar with those tools. The first of which is a good editor or integrated development environment. And there are tons of options here, but what we're really looking for is something that offers code navigation tools. Being able to quickly move around a project is really important when you get a new project you've never seen before; being able to highlight a symbol and then immediately go to where it's defined, okay?
That'll help you learn where everything lives. And then beyond that, you have code introspection. This is where your editor has a Pythonic understanding of the code you're working on, and you get things like code completion, or being able to work with a model and call one of its methods and see what the signature looks like, know all the keyword arguments that that method accepts. That's a huge plus working with unfamiliar code. And then lastly, refactoring tools, which is anything stronger than just a brute-force find and replace across a bunch of files. Being able to find methods that you wanna change and introduce a variable to them, and tools that let you find everywhere that it's used across the project, so you're not just blindly changing everything or manually doing it one by one. So those are pretty broad. There are a lot of tools that give you all of that and more. I personally use PyCharm. I think it's great. There's also PyDev, which is built on Eclipse. Emacs, who uses Emacs? And yeah, people get enthusiastic about Emacs. And I understand. I don't personally use it, but it can literally do anything. You can e-file your taxes with Emacs. So if you can stomach the learning curve, Emacs is another great one. You're gonna need to be familiar with the debugger. And no, the Django debug toolbar doesn't count. That's not the type of debugger I'm talking about. I'm talking about a full debugger. If you're not familiar with one, then you need to find one and just get familiar with the basic operation of it. I'm not an aficionado of debuggers. I use the one that ships integrated with my IDE. There's pdb, ipdb, pudb... pretty much any combination of the letters P, D, B, optionally with a fourth character, is a legitimate Python debugger. And then for bonus points, like I said, have that integrated with your IDE. Rather than just running it, being able to set breakpoints as you're editing, that's gonna be really crucial.
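If you've never used one, the core workflow is just: plant a breakpoint, run, inspect. Here's a minimal, hypothetical sketch (not from the talk's slides) of where a pdb.set_trace() call would go:

```python
# A minimal pdb sketch: drop a breakpoint right before the suspicious
# line, then step with n/s and inspect variables with p.
import pdb


def merge_counts(pairs):
    """Sum up (key, count) pairs into a dict of totals."""
    totals = {}
    for key, count in pairs:
        # pdb.set_trace()  # uncomment to break here on each iteration
        totals[key] = totals.get(key, 0) + count
    return totals


print(merge_counts([("a", 1), ("b", 2), ("a", 3)]))  # {'a': 4, 'b': 2}
```

With the breakpoint uncommented, running the script drops you into an interactive prompt where n steps to the next line, s steps into calls, and p totals prints the variable; the IDE-integrated debuggers the speaker prefers do the same thing with a click in the gutter.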
You're gonna wanna know how to use coverage.py, which is a tool for checking test coverage. You run your test suite and it tells you how much of your code base was exercised. It's super easy to use. You just run your test command through coverage. This is sort of my go-to starting point for coverage configuration, the .coveragerc file. It tells coverage to check everything in the current directory and below, and omit things like migrations, WSGI files, settings files, things that aren't normally exercised by your test suite that would arbitrarily bring down your coverage report. And then you generate a report. It tells you all your files, how many statements were run, how many were missed, gives you your percentage. Very easy. And then there's Pylint and pep8. These are both tools to help you in the actual writing part of writing code. Pylint helps you clean up poorly structured code. It's really useful during refactors. PEP 8, if you're not familiar with it, is Python Enhancement Proposal number 8, which sets the de facto standard for Python style, stylistically how Python code is written. And then pep8, lowercase, is the Python package that looks at your code and tells you when you break those rules. pep8 will keep your code legible. It has one major caveat, and that is it'll make you irreversibly anal about Python code; you will become intolerant of non-PEP 8 code. And it's worth it, because you end up writing good-looking code, and when you have that standard in place, you know that the next guy will be able to read what you've written. And then again, having these integrated with your editor or your IDE really helps. It's better than just running it against your code base; you get the feedback from these tools immediately as you're writing code, which is very helpful. Okay, you've just gotten your hands on a sloppy code base.
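As a concrete starting point, a .coveragerc along those lines might look like this (the omit patterns are illustrative, not the speaker's exact file; adjust them to your project layout):

```ini
# .coveragerc: measure everything at and below the current directory,
# but skip files a test suite doesn't normally exercise.
[run]
source = .
omit =
    */migrations/*
    */wsgi.py
    */settings*.py
    manage.py
```

Then run the suite through coverage and generate the report: coverage run manage.py test, followed by coverage report -m to see per-file statement counts, misses, and percentages.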
You've been given the keys to version control, and you're gonna start looking through it, and your immediate reaction is going to be to start tearing it apart, right? You're gonna start digging through the code, seeing what you've gotten yourself into. You're gonna see things you hate and you'll wanna change them. Don't touch anything. At this point, everything in the project is hot lava. And if you attended DjangoCon last year, I gave a talk about hot lava. Well, hot lava made an appearance in it. When I was a kid, I played a game: the floor was hot lava, and if you touched it, you died. So don't touch anything yet. First, you're gonna have to do some prep work. First thing, make a copy of the code, okay? Use your version control of choice. Check out a copy of the branch as it stands when you get it, and move it into a separate branch. This makes it really easy to use tools like diff. As you're making edits to code, you'll inevitably delete something that you need back, or you'll need to see the way it originally worked, and you can do that quickly. Even better, when it's all said and done, you can get a list of how many lines of code you changed and use it as your badge of honor. And now you wanna get the project started up. But again, we're not touching anything. Our goal is to get it running in its last known working configuration. So you create a virtual environment. Don't get attached to it; they're disposable. If the project shipped with a dependency list, use it. That'll save you some time. Otherwise, you're gonna have to manually figure out everything that you need. If the project shipped with tests, count your blessings and try to get them running, keeping in mind that if the project really is in a state of disarray, there's no guarantee that the test suite that you were delivered was actually passing in the last known working configuration. So if it shows you configuration problems, that'll be helpful.
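That copy-first step might look like this with git. The throwaway repo here is just so the commands run end to end, and the branch names are my own, not the speaker's:

```shell
# Sketch of the "copy first, touch nothing" step, using git.
# (Throwaway repo so the commands are runnable end to end.)
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name "You"

echo "print('as delivered')" > app.py
git add app.py
git commit -qm "Code exactly as it was delivered"

git branch legacy-baseline        # frozen copy of the last known working state
git checkout -qb cleanup          # all of your edits happen on this branch

echo "print('refactored')" > app.py
git diff --stat legacy-baseline   # compare against the original at any time
```

The frozen branch gives you instant diffs against the delivered code, and at the end, git diff --stat legacy-baseline is the badge-of-honor line count the speaker mentions.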
But if you start seeing logical issues, obviously failing tests that are caused by the code itself failing, then don't spend too much time with it. You're either gonna have to fix those tests or throw them out later. And then be sure to freeze any changes you make. You can't remove any dependencies at this point, but if you add some, or if there wasn't an original dependencies list, freeze all of that so that you have it for later. Now you're gonna need test coverage before you get started. But we're not gonna go too crazy here. We're looking for integration test coverage for high-level Django concepts, and we're looking for integration or unit test coverage, whichever is most appropriate, for low-level and business-critical processes. Okay? And when I say integration tests, for our purposes we're talking about an end-to-end test of the Python and Django code. Our goal is coverage here. We wanna cover as many lines of code in the project as possible, because we're looking for landmines. We're looking for things that blow up unexpectedly. We wanna see that when we make a refactor in this component, it didn't break something in that component over there. We're essentially automating the process of clicking on every page on the site and making sure you didn't get an error message. Automated smoke tests. And this is the type of test that I'm talking about, an example of how it'd be written. This is from Heapsort. In this case, we're testing the homepage as an anonymous user. We're just hitting the page, checking the status code is 200, and then I like to use this assertion for this kind of test to make sure that the appropriate template was loaded. Sometimes views can construct their own responses; sometimes they can return the output of another view. So we wanna make sure that the user is kind of seeing what he's supposed to see. And notice that we're not checking any data on this page.
We're not checking to make sure that the appropriate things were rendered there. We're just making sure they ended up in the right place and an error wasn't thrown. And this is for an anonymous user. So when you write these types of tests, they're very quick and easy to write, because you don't have to do much, but you do have to catch every user path. So if a user's logged in, you would want separate tests for that. On Heapsort, we have two types of users: we have our candidates and we have our employers. So there's three tests on Heapsort for everything like that. If the view implements a form, you wanna test GET and POST. So all of the user paths for every view, but very simple tests. Now for unit tests, our goal is not full unit test coverage. That's not something that you necessarily have to strive for at this phase, because as you go through rewriting all this sloppy code, you're gonna negate the work that you do writing unit tests. Now, you can take testing as far as you want here. If you believe in test-driven development, then go for it, and you will wanna do this. But in my opinion, and I'm not a test-driven developer, I know that I'm gonna write unit tests and then have to throw them away as I rewrite the code. Instead, we wanna focus unit testing effort on core components where accuracy is critical. This goes back to the business-critical processes. And this is an example of that type of unit test. We're looking at another test from Heapsort. We sell job ads on the site, and after 30 days, those ads expire. That's business-critical to us. That's what enforces our value add. So we want this type of test to make sure that an ad no longer shows up on the website after 30 days. And that's enforced with a manager. We have a manager here, the active job manager. Now this isn't the purest example of a unit test. We're still invoking database machinery and all of that. But that's okay. We're not using the test client. We're not doing this from the page level down.
We're specifically testing the real figure, 30 days, and making sure it reacts properly. That's the type of unit test you should strive for when you first get your hands on a project like this. So how much is enough? Integration test coverage for all Django views and user paths. Management commands are really easy to test. You just import call_command and call it. Potentially third-party APIs, if you can do that with the test client. We want unit test coverage for business-critical processes and other logic. It says non-view, but really it's non-testable with the Django test client. So that's billing processes. When you're charging people money, you wanna make sure that not only do they get charged, but they get charged the correct amount and only once. Transactional communication. If your project sends out emails or otherwise notifies users, that's business-critical. Asynchronous tasks. If you have an asynchronous task defined, more than likely that's business-critical, and it's not something you can catch without writing unit tests. And we're gonna shoot for 95% coverage. I think that's the ideal scenario. I mean, more is better, but that's the point that I would really feel comfortable moving on from. Coverage of 90% is actually pretty easy in most projects, especially when you write really broad integration tests, like the examples that I gave. Even on a medium-sized project, one or two developers can get 90% test coverage in a couple days at most, writing that kind of broad test. So we're gonna shoot for 95%, and we want no individual files uncovered. So if you have one file out there that's at 50% coverage, that's not good enough. We want that level of coverage for all of the files. It's not as hard as it sounds. And then, and only then, is it time to start hacking on the project. And you're gonna be really happy that you took the time to put those tests together. There's no telling what you're gonna find when you first get your hands on this type of project.
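The slide code isn't reproduced in the transcript, but stripped of the ORM machinery, the business rule that unit test pins down, ads disappear after 30 days, reduces to a date filter. A plain-Python sketch (names and data shapes are illustrative, not Heapsort's actual code):

```python
from datetime import datetime, timedelta

AD_LIFETIME_DAYS = 30  # the business-critical figure from the talk


def active_ads(ads, now=None):
    """Return only the ads posted within the last 30 days.

    Plain-Python sketch of what an 'active job' manager would enforce;
    each ad here is a dict with a 'posted' datetime.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=AD_LIFETIME_DAYS)
    return [ad for ad in ads if ad["posted"] > cutoff]


now = datetime(2014, 9, 1)
ads = [
    {"title": "fresh", "posted": now - timedelta(days=5)},
    {"title": "expired", "posted": now - timedelta(days=31)},
]
print([ad["title"] for ad in active_ads(ads, now=now)])  # ['fresh']
```

In the real project the same rule lives in a custom model manager, so every query site gets the filter for free, and the unit test asserts that an ad older than 30 days drops out of the manager's queryset.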
And so this is not an all-inclusive list, obviously, but what follows is sort of my greatest hits, if you will, of design patterns, or anti-patterns, that I've encountered working as a consultant. The first type, I call it the Patchwork Quilt of Dependencies pattern. And this is a real diff from a project I was working on just a couple weeks ago. I removed 133 external dependencies before I even got started. I'd never seen anything like it. The problem with third-party dependencies is that all of them introduce complexities. Whether or not it's good to use one is just a matter of determining if the ends justify the means. Each dependency is going to be a potential obstacle to upgrades, okay? Imagine if I decided I wanted to upgrade from Django 1.5 to 1.6 on that project, and I had to test 133 external dependencies to make sure they were all Django 1.6-compatible. And when you have a project like this, the project ends up being a lot of configuration that just makes all of these different things work together, sort of like a system of plug-ins, and that's it. And these sort of low-level Django extensions often conflict with each other. They make debugging a nightmare. You go to look at some process, something simple seems to be going wrong, and it turns out there are four or five layers of abstraction between where you thought the data was coming from and where you want it to be, and that's a pain. And then lastly, third-party modules installed from VCS, GitHub, things like that, make no stability or availability guarantees. So if you're pulling someone's code, especially if you're just pulling the master branch of some repository out there, code can go into it any day and it can disappear tomorrow, okay? Excuse me. So you wanna trim the fat. Anytime you get a project, you wanna audit the dependencies. Remember that virtual environment that we created in the beginning? Good, throw that shit away. Because you're gonna start over.
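One way to ground that audit is to list what the code actually imports and diff it against the requirements file. A stdlib-only sketch of my own (not a tool the talk mentions) that pulls the top-level package names out of a source file:

```python
import ast


def top_level_imports(source):
    """Return the set of top-level package names imported by `source`."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            # level > 0 means a relative import, i.e. not a dependency
            if node.module and node.level == 0:
                names.add(node.module.split(".")[0])
    return names


sample = "import os\nfrom django.db import models\nfrom . import utils\n"
print(sorted(top_level_imports(sample)))  # ['django', 'os']
```

Run that over every .py file in the project, subtract the stdlib and your own app names, and anything in requirements.txt that never shows up is a strong candidate for the "installed, not even integrated" pile.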
You're gonna install the absolute bare minimum of requirements that you know you need: South, obviously Django, core APIs. And then you're gonna go through the requirements list you either updated or created, and you're gonna audit each one, one by one, researching them to know exactly what they do, why they're used, and how they work. And that's gonna put each one into one of three categories. It's either totally unnecessary or not used; of the 133 dependencies I removed, 80% of them were installed, not even integrated, so those were easy to get rid of. Weight off my shoulders. Or it's potentially unnecessary. You feel like it's something you could get rid of, but you know you have to rewrite some of the code in order to achieve that. So you make a judgment call at that point. If it's low enough effort, you may do it right then, or maybe it's part of some larger refactoring task later. You wanna get rid of that dependency, you're gonna rewrite the module that uses it anyway, so you're gonna come back to it, but make a note of it. And then lastly, you'll find other things that turn out to be necessary, and you leave those in place. Then you run your tests. The tests help you to make sure you've gotten all of the dependencies that you absolutely need installed, and that anything you removed wasn't actually necessary. If you miss something, the tests let you know, and you go back to the previous step and audit that dependency, same rules. And you do this until all of your tests are passing. And now is the best time to actually upgrade your packages. One of the things I said was a complexity introduced by dependencies is that they block upgradeability. At this point, your project is at an all-time low of third-party dependencies, with the least amount of compatibility issues. It's a good time to try upgrading Django or any other package as far as you wanna go. Upgrade, test, repeat. The next pattern: the monolithic app of death.
Monoliths are an organizational nightmare. We're talking about one giant Django application that contains all of the functionality in your project. The problem with that is you can't succinctly evaluate the functionality of any one component, because you have to dig through the functionality of every other component that sits there alongside it. And for the same reason, nothing's portable. You can't pull any of these things out into their own module. You can't use them in another project if you wanted to open source them. And it's full of import star. Yeah, I heard someone cringe out there, and I don't blame you, because I have no idea what I just imported. And when you have everything in one giant application, well, it kinda makes sense that you would do it, because every view is in one file, and every form that gets used by a view needs to be imported into that file, so you do it. But it's really hard to take this apart. We're talking about good old spaghetti code. I have no idea what depends on what. So to fix this, you're gonna create a sane app structure, obviously. And one of the other really big benefits to building the test suite upfront is that at this point, you have a really good cross-section of how everything in the project works. You've forced yourself to do it. And so if you have a monolith on your hands, you've probably already been thinking about how you wanna restructure this. So you just implement it. You're gonna get rid of those import stars. Pylint's really helpful here, so you just remove them. Pylint reports back to you every symbol that's being called in the file that hasn't been imported, and then you turn those into static lists of imports. So now you know where everything's being imported. And then you migrate models to their new applications. I've found that the best way to do it is to move the models, and everything else sort of follows. You move the model.
Every form that depended on that model now can't find it, or you've changed the import. You remove the import of the model, and Pylint tells you which forms were using that model. You move the forms. Every view that depended on the form is now reporting that it can't find its form. You move the views. And at any point in this process, if you think you've lost something or you wanna make sure you didn't miss something that you need to move, you run the tests. Now, migrating models across apps happens a lot in these sorts of patterns. It's kind of a pain. If you're pre-production, you don't have any real live data out there in the world that has to migrate. Just squash the migrations and start over; it's worth it. Otherwise, you need to migrate the data across. If you're using South, go here. Stack Overflow is usually something that I find to be reactive; I go there after I have a problem. This is one thing that I keep bookmarked at all times, because there are great answers here and awesome discussions of techniques for using South to move models across easily, preserving your relationships, preserving things like content types. Those are things we don't think about that have to change in order to move from one app to the next; the labels all change. If you're not using South, the new Django migrations... I haven't had a lot of opportunities to upgrade projects to the latest version of Django using the new migrations. It seems like it's a little bit of a challenge from what I've researched. It looks like you're kind of stuck doing things the old-fashioned way, creating copies of the models and manually writing migrations and moving it over, unless you are comfortable writing raw SQL in your migration files. So it's kind of a pain, but it has to be done. Now we have the every-model-is-an-app pattern, which is like the exact opposite of the monolith. It causes the same problems, though. It's another organizational nightmare.
Again, you can't succinctly evaluate the functionality of any one component, because all of its functionality is distributed across the entire code base. Again, nothing is portable, and you have a different type of import woe: the cyclical import. Anyone ever had to deal with cyclical imports? It's like one of the worst problems, I think, to have to solve. And it makes sense, because if every model is in a walled garden, at some point they're gonna have to interact with each other. At some point this app needs to import that app, and the other way around; inevitably this happens. You're gonna do the same things here that you would do with a monolith. Same challenges: you gotta rebuild an organized structure and move the models across. Everything else tends to follow. The good news is it's probably less work, because some of your apps will host the others, so you don't necessarily have to move every model. I guess my point there is that between these two design patterns, this is the one you'd rather get. And then we're gonna move to the next one: receivers everywhere. The Django signal dispatch system is so powerful and useful because signals allow us to decouple models which initiate side effects on other models, especially in third-party code. The easy example here is user profiles. For a lot of us that came to Django even before Django 1.5, I would say probably the first signal we ever wrote was one that created a user profile when a contrib.auth User was created. And signals are great for maintaining data integrity. When data changes here, I need data to change over there; otherwise I lose data integrity. There's a reason why most of the signals that ship with Django are built into the ORM and model machinery: pre-save, post-save. But they're not always useful, because receivers can hide important functionality from developers, especially you, inheriting this project. There's probably stuff lurking around there in signals that you won't necessarily see.
And they become an unnecessary layer of abstraction between two models in the same application. If two models are in the same application, in the same models.py file, what need is there to decouple them with signals? And they can implement overzealous business logic: logic that belongs in the views but occurs at the database level when things change in the ORM. So you wanna be on the lookout for signal receivers that interact between two database-related models. I'm talking about a database relationship: foreign key, many-to-many join table. Or signal receivers in which the instance simply operates on itself. Because in both of these cases, the logic really belongs in the model. I mean, pre-save and post-save, those are your receivers. You can do this in the model definition itself, because it's a matter of data integrity. It belongs here. There's no reason why the model can't change a property of itself. It has access to all of its relationships. So you can do that here, and there's no way for it to accidentally be hidden from me. When I look at this model, I know what happens when it gets saved. And then what you really wanna watch out for is business logic. And I'm not talking about database transactions. I'm talking about user transactions: user clicks a button, some side effect happens. That doesn't belong in signals. I've seen this happen in five or six different projects, okay? This is a signal receiver that sends an email. And there's a special type of terror that only happens when you're playing around in a Python shell and realize that you just sent out emails to every customer in your database. So don't do that. Don't write this. This is business logic, right? We don't want to build a system where you can't noodle around in the ORM without fear of some side effect happening. Unless you are in the business of building a product whose value add is we will send you an email every time a database row gets updated, don't do this. So look for that.
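The "instance operates on itself" case is easy to show without Django: instead of a pre_save receiver mutating the instance from some other file, put the derivation in the model's own save method, where nobody can miss it. A plain-Python sketch with a hypothetical Invoice model (in Django you'd override Model.save() the same way):

```python
class Invoice:
    """Sketch: data-integrity logic lives in save() itself, in plain
    sight, instead of hiding in a pre_save receiver registered
    who-knows-where in the codebase."""

    def __init__(self, subtotal, tax_rate):
        self.subtotal = subtotal
        self.tax_rate = tax_rate
        self.total = None  # derived field, set on save

    def save(self):
        # Recompute the derived field on every save. When you read this
        # model, you know exactly what happens when it gets saved.
        self.total = round(self.subtotal * (1 + self.tax_rate), 2)
        return self


invoice = Invoice(subtotal=100.00, tax_rate=0.0875).save()
print(invoice.total)
```

The behavior is identical to the receiver version, but a developer inheriting the project sees the side effect right in the model definition, which is exactly the visibility argument the talk is making.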
When you get a sloppy project, the point is to go track down all of the receivers. Even if they don't need to be rewritten, you wanna find them all, because logic can be moved away from the model that initiates it. These receivers can be registered anywhere. So go find them all and make sure you're aware of everything that happens there. And for things like this, kill it with fire. The next pattern: context processors and middleware. I couldn't think of a better name for this pattern, because they are cool. They're so powerful. There's so much you can do. And there's something kind of special, a feeling you get when you're like, I've designed a system that's sophisticated enough to warrant using middleware to implement something. That's kind of a cool feeling. And it reminds me of when I discovered list comprehensions in Python, okay? For like three months, I had just resolved to never write a for loop again in the traditional way. Yeah, one line, so cool. Because it was neat. List comprehensions were powerful and I just wanted to use them all the time. And I think that's part of the mentality of why context processors and middleware get so widely abused. Context processors can impair performance quickly. You're running logic, often database queries, every time a template gets rendered. And it's another place where business logic hides. You're looking around in this template, and you don't know where this context is coming from. It's not in the view. It's hiding in a context processor somewhere. You should evaluate your context processors when you get a sloppy project and see if the logic is better suited as a template tag. This is common when you have common template elements, things like data that gets presented in the header and footer. Maybe it gets included on the base template, and so you need it on every template that gets rendered, it seems like, so it gets put in a context processor. But now you can't build a template that doesn't have access to this data.
And you get 10 context processors, and they can conflict with each other. This is a great time to use assignment tags. Perfect use case. Change it into an assignment tag. Use it in the template in the same way. But now you can build other templates that don't necessarily have this logic, and there's less stuff piled up in your context processors. There's also context processor logic that should just be refactored into views. So this is a much more inefficient use of a context processor. I mean, maybe it gets used in five different views, and for whatever good reason, those five views live in different applications. But this is a perfect use case for class-based views. Create a base class or a mixin. And even if you're not using class-based views, there's no reason why you can't move it into sort of a neutral location, import it, and use it there. So audit context processors. Middleware is potentially even more dangerous, because it can run on both the request and response cycle, and you can really modify things and make bizarre changes, bizarre to you when you don't expect them, right? You can mess with the requests and responses. I don't have examples for how to fix it, because it's so powerful and there's so much you can do with it. Whatever middleware you have will be esoteric to your project. But the question to ask yourself is: do I need this logic to execute on every single request and/or response, wherever it's implemented? Or maybe even better: do I need this logic to execute when Googlebot comes to the project, okay? If you don't need it when Googlebot comes, there's a good chance you should engineer your way around it. And this isn't even a pattern. This is the absence of any sort of pattern: good old undocumented code. And I'm not talking about code that didn't ship with a readme. I mean, there's not a comment in the whole thing, okay? Like, what does that do? I don't know if you can see it. If you're following with the slides, you should be able to.
But I mean, we have these ambiguous variables. We have classification, classifications, level, class level, and string. We're accessing iterables directly by explicit index. I have no idea what any of this does, and there's not one line of commentary here to help me out, okay? The only thing worse than code like this with no comments is code like this with just enough comments to piss you off. "Returns a dictionary." Thank you for that. This is kind of complicated, I better drop a hint: "Returns dict." And there's no easy way out of undocumented code. You can try to brute-force your way through it and say, you know, we're going to assign this to one developer, he's obligated to write documentation for the whole project in a week, and we'll knock it out all at once. I think that's impractical. You're going to be reading through all of this code anyway. So document as you go. And make it policy. Don't just say, I'm going to document as I go. The rule is, if you edit the code, you document it. If you have to dissect some code like the example we showed to understand how it works (and you probably will, when you write tests), then you have to write comments letting others know what you find. So for every module, function, method, and class, you write a docstring. No exceptions, even if they're really stupid ones. Sometimes that happens. But if you get in the habit of always writing one, that covers most of your documentation effort. These are the workhorses. So at the very least, when I look at a class, when I look at a method or a function, I know what comes in and what I can expect to come out. Like the previous example: long multi-step logical branches where data is being transformed from one format into another. Inevitably you'll have to read through that and figure out why it's doing it.
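As a minimal sketch of that docstring rule (all names here are hypothetical, not from the slide): even a short docstring tells the next reader what comes in and what comes out.

```python
def group_by_level(records):
    """Group (name, level) pairs into buckets keyed by level.

    Args:
        records: an iterable of (name, level) tuples.

    Returns:
        A dict mapping each level to the list of names at that
        level, in input order.
    """
    buckets = {}
    for name, level in records:
        # setdefault creates the bucket on first sight of a level
        buckets.setdefault(level, []).append(name)
    return buckets
```

Nothing clever is happening in the body, and that's the point: the docstring alone answers the "what comes in, what comes out" question without forcing anyone to trace the loop.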
And when you do, leave a comment letting the next guy know, or letting yourself know when you come back. You'll forget how it works. Break it apart with comments. If it's difficult to read, okay, where there's ambiguity, leave a comment. And then I've got a list of other things to be on the lookout for. Not patterns in and of themselves, but stuff you can expect to find everywhere. Hard-coded URLs. You're going to want to try and remove those before you do any rewriting. You want to be able to move views around the project without all of your tests breaking. Broad try/except blocks, okay. Try, except, pass. This is not PHP. We don't have a silence-all-errors symbol for a reason. And even just try/except on all exceptions, even if it doesn't pass. If you catch every type of exception instead of a specific one, at some point you're going to silence a more helpful error in order to raise a less helpful one. So be very specific with try/except blocks. We talked about VCS requirements earlier. Fork those. Whoever owns the project should have a fork of each one. So if you're the long-term owner, fork it into your GitHub account. If the client is the long-term owner, make them create a GitHub organization, okay. They can have their own fork of it. It'll never go away. They're in control of it. If the original project gets updated, they can pull those upstream changes in, but now they're the ones in control of it. Watch out for misnamed concepts in code. And this covers everything from typographical errors, like misspellings, to concepts that totally had their name changed. Bite the bullet and rename them, and don't keep using them with the wrong name, because it just gets harder and harder as you go. I had a big regret. I worked on a project for like six months, and we had this concept called Boost. And at some point, for a business reason, that concept was renamed, to the end user, Easybook.
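An aside on the try/except point above, as a minimal sketch (JSON parsing is just a stand-in example): catch the one exception you actually expect, not everything.

```python
import json


def load_settings(text):
    # The sloppy version you'll inherit silences *everything*,
    # including a NameError from a typo inside the try block:
    #
    #     try:
    #         return json.loads(text)
    #     except:
    #         return {}
    #
    # The specific version catches only the failure we anticipate;
    # any other bug still surfaces with a helpful traceback.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return {}
```

With the bare `except:`, a genuine bug elsewhere in the block would be swallowed and reported as "settings were empty", which is exactly the "silence a helpful error to raise a less helpful one" problem.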
And then shortly thereafter, I passed the project on to someone else, and I never had the chance to go through and update all that code. And I know that poor guy was getting these tasks and assignments like, we need to update the Easybook pages. And he went to the code, and it didn't exist. And what the hell is all this Boost stuff? So rename those things. It makes life a lot easier. When you get sloppy code, you'll realize that stuff like this is what made it really sloppy in the first place, a lot of the time. Someone just didn't do the due diligence to rename that. And then, premature configuration. We talk about premature optimization a lot, and this is sort of a type of it. I've gotten code that never left a developer's machine and never had any real work done on it. It wasn't in any way close to finished, but it was preconfigured to use like five different key-value store backends and NoSQL databases and Celery. We had no requirement for asynchronous tasks, but it was all there. And if you leave that in place, it always ends up getting in the way. It gets in the way when you try to do deployments. Sometimes developers will override really obscure settings that you don't expect to have been configured. And so weird things like that come up. The point is, don't be afraid to throw all that out and start with fresh configuration on a really sloppy project. You have a copy of the original code. You can get it back at any point. Don't bang your head against the wall if you don't have to. So just to recap. Write tests first. All dependencies introduce complexity, so review them. Organization is important. Monoliths on one end, every-model-is-an-app on the other: there's a sweet spot in the middle. It's a broad area, but you want to be somewhere in there. Check all of the signals and receivers in the project. Make sure you know what they do. Same thing for context processors and middleware. Bad stuff usually hides there. And document the code as you go. Oh.
And then lastly, don't do any of these things and then hand the project off to someone else because it's just a prototype. Thank you.