Welcome everybody. When I was 15 years old, I was writing a computer game a little bit like this one. I wrote it on a C64 computer in assembly language. Is there anyone here in the room who has tried doing that? A few people, great, congratulations. Well, this was my first bigger program written in assembler. And the way it went was: I wrote a couple of lines and tried to run the program. Usually the computer would crash, and I switched it off, waited a few seconds, switched it on again, loaded the compiler, loaded my source code, and the entire process repeated. Needless to say, you won't program very fast doing this, because my average debug-run cycle was about 10 minutes. And this led to a lot of frustration. At the moment, my game is 24 years behind schedule. Only later did I learn that on the C64 we had devices like this one: a small cartridge that you could plug into the back of the computer. It had a button on it; you push it and you jump right into the compiler, can edit your code and continue where you stopped a few seconds ago. But I was totally unaware of that. And basically the aim of my talk is to protect us from pitfalls like this one in Python projects. Now, I'm not going to talk about anything fancy or totally new here, because what I want to address is a common problem that I have observed happening when people are new to Python or a little bit more advanced in Python. After you have mastered the Python basics, you pretty soon figure out that there are plenty of libraries. For instance, if I want to do data analysis, then I need to find a library for data analysis, like pandas. If I want to do signal processing or Fourier transformation, then I Google a library for that and find SciPy, for instance. Or if I want to do web development, then I will find Django, and so on. These things on the left side are easy to find. Even if I don't know them, I may have a hunch that something like this must exist.
But the ones on the right side: if I have no idea that something like interactive debuggers exists, then I won't be looking for one. The same goes for automated testing, and the same goes for all the tools in the Python ecosystem that help support and maintain our code. And I want to shed a light on these dark spots. So if you walk out of this room and say, "I haven't learned anything new," then consider this an additional safety check. Pretty much like pilots before starting an aeroplane: they have a checklist that they go through. Do we have clearance from the tower? Are there any other planes on the runway? Do we have enough fuel for an emergency landing? And so on. I think it's good to have something like that in a programming project too, and this is why I call it best practices. My talk is split into three parts: I would first like to talk about debugging, then show an example of testing, and then talk about maintenance. Debugging. When you talk about debugging in Python, the first thing that comes to your mind might be print. Now, print is something that I consider a bit problematic, even though I do it a lot, because it's like shooting holes into a building to see whether there's a fire inside. Every time you add a print statement to a piece of code, there is a risk that when deleting the print statement, you delete one line too many without noticing. And this is why it's worth keeping in mind that there are other debugging techniques. For instance, the interactive debugger. For instance, logging: the standard logging module is really an excellent way to produce diagnostic information. If your program is bigger than a couple of screen pages, then logging becomes superior to print after a while. You need to know your introspection functions like dir, type, isinstance and a couple of their cousins. And I put code review here as a best practice that helps with debugging a lot. We had a tutorial on debugging yesterday.
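As a small sketch of the logging and introspection ideas just mentioned — the game function and all names here are made up for illustration, not code from the talk:

```python
import logging

# One basicConfig call replaces scattered print statements;
# raise the level to logging.WARNING later to silence the diagnostics.
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger(__name__)

def move_player(position, direction):
    """Hypothetical game function, instrumented with logging."""
    log.debug("moving %s from %s", direction, position)
    deltas = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = deltas[direction]
    return (position[0] + dx, position[1] + dy)

new_pos = move_player((2, 2), "up")

# The introspection cousins: what is this object, and what can it do?
print(type(new_pos))               # <class 'tuple'>
print(isinstance(new_pos, tuple))  # True
print([n for n in dir(new_pos) if not n.startswith("_")])
```

Unlike print calls sprinkled into the code, the log lines can stay in place permanently and be switched off with one configuration change.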
If you were attending some other part of the conference, the exercises are on GitHub; you can try them out by yourself. But what about the really tough bugs? Something does not work and does not work, you stare at the code and go on staring at your code without any progress. I'm sure this has happened to most of you in the room. At least it still happens to me after 28 years of programming. And I have figured out something very nasty over the years: it's not that I'm lacking some additional best practice for debugging here, because nine out of ten times the problem is inside my head rather than in the computer. And fortunately there are ways to fix that. So I would like to add to those best practices of debugging four very elementary things: sleep, talk, read and write. Most of the time, if I spend more than 15 minutes on a bug and don't find the solution, then I'm probably tired. Often it helps to talk to another person, explaining what you do, to realize what the problem really is; maybe I'm looking in the wrong spot, and explaining usually helps. Sometimes my knowledge is limited, and reading the manual of the library, this time for real, helps. And if all of these fail, writing down what the problem is, formulating a couple of hypotheses or at least ideas of what the problem might be, could lead to progress. Most of the time I'm lazy and take a break, and this has solved lots of bugs for me in the past. Testing. The check icon here is actually a bit of a provocation, because it suggests something that automated testing does not do: testing does not prove the correctness of your program. I know that many Python developers love automated testing. I like to write automated tests and run them; it gives a feeling of achievement. But there's a pitfall, and it's the one in the bottom right corner: there's always the possibility that if my tests pass, both the code and the tests are incorrect.
Even if I try hard to keep my tests as simple as possible. So tests by themselves do not prove the correctness of the code, but they have the potential to prove the presence of bugs. So if I see a failing test, I know that somewhere something is wrong. Now, how can we write good tests? Let's imagine we are writing the game, this time in Python, not in assembly, because that proved too tiresome. We have a figure that is pushing these blue boxes around. It can only push one box at a time, cannot push any boxes through the wall, and so on, until it reaches the exit here in the bottom right. Now, how could a test for this situation look? The first thing you can think of is a fixture in pytest. I learned today, talking to the pytest core developers, that it's a good idea to place fixtures in a file called conftest.py, because they get automatically imported into all your test files. Now, fixtures are actually pretty straightforward: you decorate a function with the pytest fixture decorator, and the name of the function will be available as a variable in all your test functions if you put it there as a parameter. So in this case we could have a level parameter available in our test that contains a parsed version of our example game situation. Actually, I did something additional here: I parameterized the fixture, so there are two versions of the playing field supplied to the test, one with empty spaces and one with dots on the playing field. So I can have two fixtures, or more, in one by parameterization. And then we can use this in a test function. I like to group my test functions into classes. With pytest this is fortunately a lot easier than it used to be with unittest; I have less boilerplate code. So I can write a normal test function with just an assertion that is self-sufficient, or I can use the level parameter here. Note that I'm not importing level anywhere; this gets automatically filled in by pytest. And this test function will generate two tests for me.
One for each of the variants in the fixture. What else can we do? The third most important thing that I would like to emphasize about automated testing is test parameterization. So we can have one test function with multiple examples, like here. With this parametrize decorator we say we would like to try out all the examples in the list, like having a move that goes first up and then left, after which the playing figure should end up on the square at x=2, y=2. It's even possible to build failing tests with this, or tests that we expect to fail. So we still write only one test function, but with this one we generate eight tests in total. It saves a lot of code, and the code becomes actually very readable. And if we end up in a situation where our test code is ridiculously easy to read, much easier than the code we are testing, then we are on the right track. So we execute this code by writing pytest — this is another thing I learned this morning: the old invocation py.test with the dot has been deprecated, so we can use pytest without the dot in the middle. We see that all the tests execute. This test actually uses a window. And we see 34 passed tests in the entire test set, not only the ones that I showed on the slide; there are a few more running in the back. Plus two that are expected to fail because I marked them with the xfail decorator. Now, how much testing code should you write? In my opinion, this depends quite a lot on the size of your project. If your project is small and prints an obvious result anyway, then maybe a manual test is enough, unless you want it to be continuously integrated. Sometimes I still write test code in the main block of a small Python module. I can make automated pytest functions out of this quite easily later if the project grows, and add some fixtures as the thing grows further.
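The fixture and parameterization ideas described above can be sketched roughly like this. The level layout, the game functions and the target coordinates are invented for illustration; they are not the speaker's actual code:

```python
# conftest.py (or the test module itself) -- discovered automatically by pytest
import pytest

LEVEL_SPACES = "#####\n#@ $#\n#####"
LEVEL_DOTS = LEVEL_SPACES.replace(" ", ".")

def parse_level(text):
    """Turn the ASCII drawing into a grid (list of rows)."""
    return [list(row) for row in text.splitlines()]

# One parameterized fixture supplies both playing-field variants,
# so every test that takes a `level` argument runs twice.
@pytest.fixture(params=[LEVEL_SPACES, LEVEL_DOTS])
def level(request):
    return parse_level(request.param)

def final_position(start, moves):
    """Toy stand-in for the game engine: apply moves to an (x, y) position."""
    deltas = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    x, y = start
    for m in moves:
        dx, dy = deltas[m]
        x, y = x + dx, y + dy
    return x, y

class TestMoves:
    @pytest.mark.parametrize("moves, expected", [
        (["up", "left"], (2, 2)),      # up then left ends at x=2, y=2
        (["down", "right"], (4, 4)),
        # a test we expect to fail, marked so the suite still passes
        pytest.param(["up"], (0, 0), marks=pytest.mark.xfail),
    ])
    def test_final_position(self, moves, expected):
        assert final_position((3, 3), moves) == expected

    def test_level_is_rectangular(self, level):
        # `level` is injected by pytest from the fixture above, no import needed
        assert len({len(row) for row in level}) == 1
```

Running `pytest` on this file generates one test per parameter set plus one per fixture variant, which is how a single handful of functions can expand into dozens of executed tests.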
And if the program keeps growing and growing, then at some point it might be helpful to switch on testing tools like Jenkins or Travis for continuous integration, or tox for testing multiple Python versions. When I speak about the size of the project, this can mean different things. It could be absolute volume in lines of code, but it could also be the expected lifetime of the project. So if I expect code to be maintained for two weeks, then I would not worry too much about testing; I'm writing a throwaway program. If the program needs high dependability, so if it needs to be extra safe, then doing more tests and reviews and things like that is also a good idea. In the final part of my talk, I would like to elaborate on maintenance. Python has a fairly sophisticated ecosystem of maintenance tools, and they serve the single purpose of keeping your code in good shape, with PEP 8 being a layer of paint on your program. As many of you probably heard in the talk of Anand a while ago, making beautiful code is a virtue, and Python has nice tool support to help you with that. Instead of picking a few must-have tools, I tried to throw in some that keep recurring, with Git in the middle being no coincidence. So if there's anyone not using Git or version control yet at the moment, this conference is a good starting point to learn it, because you won't be getting anywhere without version control. But there are many other tools as well. Some of them are interchangeable, like PyScaffold, recently surpassed in my personal ranking by Cookiecutter. There is Sphinx for documentation, virtualenv and pyenv for managing your Python installations and libraries, pylint and pyflakes for watching your coding style, and so on. Now, what can you do to keep an overview of all of these tools? I would like to mention just two possibilities here.
One of them: Magdalena Rotter is going to give a talk this Friday where she's going to present an overview of all the different configuration files that you can find in a well-maintained Python project. So this is on Friday afternoon. If this is still too far away in the future, I recommend you take a look at coala. Some of you may have noticed that there's a flyer in the conference bag. I visited the coala booth yesterday, the developers gave me a quick introduction, and I was able to run coala within five or ten minutes. coala is a framework that hosts many linting tools, that means tools like pylint or mypy or other tools that analyze the quality of your code, and not just for Python. And I thought, how awesome is that? I can check not only my Python files, I can also check my HTML templates and JavaScript code and whatever has been accumulating in my project, and get everything from one tool that tells me how good it is. So how does that work? coala brings its own configuration files that mainly contain a list of the tools that you want to switch on for a given type of file. For some reason, coala calls these different linters bears. So you have a list of bears in this configuration file, which in my opinion is kind of cute, and I need to say thanks to the developers for doing that; I really appreciate it. And you can put in some additional parameters. Once you have this coala file set up, you simply write coala --ci, and coala starts scanning your entire code base recursively, analyzes all the Python files and comes up with a huge list of comments and suggestions for potential improvements. This ranges from sorting imports to style checks or even running a static type checker on Python. So please feel encouraged to try coala out. I'm not brave enough to call this a best practice yet because it's a rather new tool, but I hope to make that statement next year.
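A coala configuration of the kind described above might look roughly like the following. The section names, file globs and bear names here are reconstructed from memory of coala's documentation, so treat them as assumptions rather than a verified setup:

```ini
# .coafile -- one section per file type, each listing its "bears"
[python]
files = **/*.py
bears = PEP8Bear, PyLintBear

[javascript]
files = **/*.js
bears = JSHintBear
```

With a file like this in the project root, `coala --ci` runs non-interactively over the whole code base and reports the findings of every enabled bear.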
Now, this is the overview of testing, debugging and maintenance practices that I wanted to give here. And there's one more thing that I would like to do. I got into this topic and liked it so much that I wrote a book about it, and I have three copies with me that I'm happy to give away. So after the Q&A session, I'm holding a bag open here; if you put a piece of paper with your name inside, I will draw three lucky winners at the end of the session, and you can read more about best practices there. Some of the content, like the debugging tutorial, you will find on my GitHub profile. That's it for my talk. Thank you very much for your attention. Okay, so we have some time for questions. Any questions? Hi. One thing I noticed that you didn't really mention was virtualenv, and kind of the mismatch of packages between repositories and pip and compiling from source. Could you repeat the question, please? So it seems virtualenv is a bit missing. Surely it's a good practice to use virtualenv so you have the correct versions of the packages. So what's the best practice for getting the versions of packages right? Well, using the requirements, the dependencies file. Do we have requirements.txt here at the top of the cloud? So yes, this is the number one way to go for getting the right versions, because pip can deal with it, conda should be able to deal with it, and it saves you some trouble. Any more questions? Sure. So one thing I was going to say about the requirements thing is that I work with software where the version number is not fine-grained enough, so we actually use Git commits for the actual checkout. But it relates to this: if in your requirements file you don't pin exact versions, and in the future packages get updated and functions get deprecated, it ties into your maintenance issue in that future users won't be able to use your software.
So it's just a key point there, if you've not really pointed that out before in talks or in your book. It's an interesting one; this is one of the tougher problems. I'd be a bit careful with leaving a certain version number in your dependencies forever, I wouldn't do that; rather check it from time to time, try a few different versions if a newer one comes out. But I would also not feel comfortable with leaving the version number empty all the time, especially not if you are planning to automatically deploy the code. If you are running the program manually, then okay, I'd feel comfortable with it, but not if you have any automated pipeline running in the back of it. Is your book also available as an e-book? Yes, but unfortunately you have to pay for it. But yes, it is. Fair enough. Okay, no more questions then. So that's it.
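To make the version-pinning trade-off from this Q&A exchange concrete, a requirements.txt can mix pinning styles. The package names and version numbers below are invented examples, not recommendations from the talk:

```
# requirements.txt -- pip version specifiers, from strictest to loosest
pandas==0.20.1      # exact pin: reproducible installs, but review it regularly
requests>=2.10,<3   # range: picks up bugfixes, blocks breaking major releases
scipy               # unpinned: always the newest release, risky for automated deploys
```

As mentioned in the answer above, exact pins suit automated deployment pipelines, while looser specifiers are tolerable for programs you run and watch manually.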