Well, perfect. So it's a pleasure to have you here, and you're going to talk to us a little bit about Cookiecutter as well, right? Yeah, you're definitely right. Cool. I am personally interested in this talk as well, because I really like Kedro, and Kedro is like one step after Cookiecutter Data Science; it's inspired by it. But we can have a chat about this later on. So, yeah, the floor is yours. Take it away.

Okay, then welcome to my talk, Python Table Manners: Cut the Cookie Gracefully. I'm Wei Lee. I'm a software engineer at Rakuten Slice and also a volunteer at PyCon Taiwan; as you can see, this is a T-shirt from PyCon Taiwan. I'm also a maintainer of Commitizen, which is a tool I will mention in this talk.

Today, I will illustrate how to first clean up your table before you get your dinner, and make sure that you put the correct tableware on your table. Then we'll learn how to use this tableware elegantly. Because there may be too many steps to remember, you'll need some mnemonics. And if you're asking others, like Git, to help you, you should say please; on such an occasion, you'll want to speak formally. When we take our knife to cut the cookie, we'll ensure our own safety. And in the last step, we'll serve the cookie.

Starting from dependency management. This might be how we used to start a Python project: we create a virtual environment and then activate it, and after that, we freeze some packages into requirements.txt. But sometimes we just forget to activate the virtual environment, or we forget to add a package to requirements.txt. So we can use tools like Pipenv. Pipenv can manage your virtual environment and your packages at the same time, so you no longer need to manually sync up your virtual environment and your requirements.txt. It also generates hashes for the packages Pipenv downloads from PyPI, so it can ensure that you get the same package the next time you install from it.

You can initialize a virtual environment through pipenv install, and this is what an empty Pipfile looks like. Pipenv uses Pipfile and Pipfile.lock to manage dependencies as an alternative to requirements.txt. The interface is pretty much the same as pip: you just type pipenv install with which package equals which version. If you add requests to your virtual environment, it will update the Pipfile like this, and it will add this section to your Pipfile.lock. These hashes are generated from the code Pipenv downloads from PyPI at that moment. If the next time you download it the code has changed, even though the version is still 2.22.0, Pipenv will raise an error, so you are guaranteed to get the exact same package next time.

But sometimes you just don't need everything in your production environment, so you can install packages into your development environment only by adding a --dev argument, and they will appear in the dev-packages section of your Pipfile. And because we now set up our virtual environment and manage our dependencies at the same time, we need to run our Python program inside the virtual environment. You can do so with pipenv run python your_program.py, or something like pipenv run python manage.py runserver to start a Django web server. But some people might say that Pipenv does not update that frequently, although it was just updated two months ago, that it can be really slow, and that it does not sync up with install_requires in setup.py. Maybe you could try Poetry.
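Before moving on to Poetry: the talk refers to the Pipfile a few times without the slide being visible here, so here is a minimal sketch of what such a Pipfile might look like. The requests and pytest entries and the Python version are only illustrative, not the speaker's actual slide content.

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
# added by `pipenv install requests==2.22.0`; the hash-pinned entry goes into Pipfile.lock
requests = "==2.22.0"

[dev-packages]
# added by `pipenv install --dev pytest`, so it stays out of production installs
pytest = "*"

[requires]
python_version = "3.8"
```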
The concepts behind Poetry and Pipenv are alike, so I'll leave the commands here as a reference. For releasing a package, I would recommend using Poetry, because you don't have to manually update dependencies in both Pipfile and setup.py; Poetry will do that for you. But for a Python application, I'd say both Poetry and Pipenv work for me.

Testing. Python comes with unittest, a unit-testing framework, in its standard library, but it borrowed its concepts from JUnit in Java. So today I want to introduce pytest. Why should we use pytest? Because pytest is considered more Pythonic, and it's compatible with the old unittest style. In unittest, you need to use assert functions like assertEqual, assertTrue, or assertFalse, and so on. But in pytest, you just need to memorize assert, and the rest of the syntax is the normal Python you use every day. pytest also provides better test discovery and advanced features, and it comes with plenty of plugins.

So this is how we run pytest. Actually, after we set up our virtual environment, we should always install packages into it and run our Python programs inside it. This is what a unittest-style test looks like: in unittest, we use setUp to prepare all the data needed in our test cases. But because unittest borrowed its concepts from JUnit, its setUp function is camelCase, which is considered non-Pythonic, and we have to inherit from the unittest.TestCase base class. And as I mentioned previously, we need to memorize the assert functions. In pytest, we use fixtures to prepare individual data for individual test cases, so we no longer need the setUp function. We also don't need to inherit from a base class, and we don't even need a class to run pytest. And the assert function becomes much easier to memorize, because you only need to use assert, and the syntax afterward is just how you use Python daily (a short sketch of this appears a bit further down).

And this is my configuration for pytest. I use pytest.ini to configure pytest, because setup.cfg is not recommended for pytest configuration nowadays. And after pytest 6.0 was released, you can even configure pytest through pyproject.toml. These are the plugins I use in almost every one of my Python projects. You can use pytest-mock to replace objects that are hard to test: if your program interacts with AWS, GCP, or other third-party services, you don't want to actually interact with them, because it will cost you money, so you'll want to use a fake object in your unit tests. pytest-cov can show you which portion of your program is not covered by your tests. And pytest-xdist can accelerate your test run by distributing your tests across multiple cores.

Coding style. As Python programmers, we not only want to write correct code, we also want to write elegant code. We can do so with flake8. flake8 is a tool that can enforce style consistency across your Python project. It can also check for possible errors before you actually run your program, and eliminate bad coding styles. In this example, I redefined the os library as a string, which could be a possible error, because after this line you can no longer use functions like os.getcwd(), since os is now a string. And I added an extra space here, which is considered a bad coding style. After running flake8, it will tell you where the errors and bad smells are. This is my configuration for flake8.
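To make the unittest-versus-pytest comparison above concrete, here is a minimal pytest-style sketch with a fixture; the fixture and test names are invented for illustration.

```python
import pytest


@pytest.fixture
def numbers():
    # prepares the data for this test case, replacing unittest's camelCase setUp method
    return [1, 2, 3]


def test_sum(numbers):
    # no TestCase base class and no assertEqual to memorize, just a plain assert
    assert sum(numbers) == 6
```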
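And here is a small sketch of the kind of code flake8 complains about, along the lines of the os-as-a-string example just described; the exact warning codes can vary with flake8 versions and plugins.

```python
import os

os = "no longer the os module"  # flake8 flags the redefinition of the unused import (F811)
total = 1 +  2                  # and the extra space after the operator (E222) as a bad smell

# after the reassignment above, os.getcwd() would raise AttributeError at runtime
```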
In this section, I will introduce a lot of tools related to coding style. By following this configuration, all these tools will not conflict with each other.

Pylint. The functionality of Pylint is pretty much the same as flake8, but it can generate more detailed reports. We use the same code and run Pylint, and this is the report it gives me. If you run Pylint with the reports flag, it can generate an even more detailed report, which you can compare with your previous Pylint run to see how your coding style improved between the two checks. I use pyproject.toml for configuring Pylint. I also used .pylintrc to configure Pylint, but I found that the default .pylintrc contains too many default values, which is distracting and makes it hard to find the thing I really want to configure. So I think pyproject.toml, or setup.cfg, might be a better way to configure Pylint.

In the Python community, type annotations are now encouraged, so we now have tools like mypy to do static type checking. By doing so, you can avoid possible runtime errors, because mypy checks the types before the program ever runs. Type annotations also enhance readability: they don't just work like docstrings, they are machine-checkable documentation. In this example, values is annotated as a list of str, but we pass a list of int into it, which would be a runtime error. Tools like flake8 and Pylint won't do anything about this error, but mypy will: it will tell you that you should actually pass a list of str instead of a list of int (a short sketch of this appears a bit further down). The first argument indicates that you want to check all the files with the .py extension, and the second one ignores the errors about your third-party libraries not being type-annotated, because what we actually care about is whether our own code is type-annotated. And this is how I configure mypy through setup.cfg: it just turns on, in the configuration, the arguments I showed on the previous page, so after this you no longer need to type the arguments.

We can even take one step forward by fixing the style automatically with Black. Using Black is really easy: you just run black . and it will fix all the styles under your current working directory. This is how Black reformats the code. The red part is the old code before Black fixed it: backslash line continuations are not recommended nowadays, so Black rewrites the line break. It will also tidy up the formatting and add a space between the comment marker and its content. Actually, we use Black precisely because the Black code style is not configurable. You cannot tell Black how to format, or say "I don't like this portion of the Black style"; no, you cannot. You have to accept the Black code style for your whole file. You cannot even add an ignore mark to format only a portion of your code; you have to format the whole file. This leads to no more arguments about which style is better, so you can focus on what really matters: the features you want to deliver to your customers. "There should be one, and preferably only one, obvious way to do it." That's from the Zen of Python. And this is my configuration for Black. It is not for the formatting itself; it just tells Black which files to include and which to exclude.
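Going back to the mypy example described above, here is a small sketch of the kind of mistake it catches; the function name and the exact error wording are illustrative.

```python
from typing import List


def join_values(values: List[str]) -> str:
    # flake8 and Pylint see nothing wrong with the call below, but mypy checks it statically
    return ", ".join(values)


# mypy reports something like:
#   Argument 1 to "join_values" has incompatible type "List[int]"; expected "List[str]"
join_values([1, 2, 3])
```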
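And a small sketch of the kind of rewrite Black does, as described a moment ago; the variable names are invented, and for longer lines Black wraps the expression in parentheses rather than joining it.

```python
first_value = 1
second_value = 2

# before Black: a backslash line continuation and a cramped comment
total = first_value + \
    second_value  #total of both values

# after Black: the backslash is gone and the comment gains a space after the "#"
total = first_value + second_value  # total of both values
```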
In this Python file, I randomly import some libraries. But according to PEP 8, we should sort our imports in the following order: first, standard libraries; second, third-party libraries; and third, our local application or library imports, with a blank line between each group. After running isort, it will group these imports for you. In addition to grouping them, it sorts the libraries alphabetically, so the next time you want to find which library is imported, it will be much easier than with a random order. And this is my configuration for isort.

These are all the coding-style-related tools I use in my Python projects, and these are our commands for formatting and linting. But that's just way too many commands, so we need a tool like Python Invoke for task management. It's like a Makefile, but written in Python. I'll demonstrate how we use Invoke in a report-generator command-line tool in practice. This is how we install our analyzer and run unit tests before we use Invoke: we need to memorize these long commands. But with Invoke, the commands become much shorter. You might say that even if they are much shorter now, we still need to memorize them. But no, you don't. Actually, the only thing you need to memorize is invoke -l, which lists all the tasks you have implemented. So how can we implement these commands? With tasks.py: you add this file to the root of your Python project and then move the commands into tasks (a small sketch of such a tasks.py appears a bit further down). And best of all, it can ensure that your Python program runs inside your virtual environment, by adding the virtual environment prefix in your tasks.py.

After you introduce a bunch of tasks, you might want to modularize them through the concept of namespaces. As you can see, the command now becomes invoke build.develop. But wait, your command became even longer now, so now we need auto-completion. Invoke comes with a completion script: you can generate the script for each of your shells through this command, and after that you can type invoke build. and press Tab, and it will show you all the options you can choose from. So why not just use a Makefile? Well, because we are Python developers, and some tasks might not be easy to handle through shell script; also, shell syntax can differ between shells. With Python Invoke, you can combine the power of Python and shell scripting. It's the best of both, actually.

And because people might forget to run the checks even after we made them much easier, we can ask Git to run the checks for us through pre-commit. So how does pre-commit do the checks for us? pre-commit can run commands before we do Git operations like git push and git commit. We first need to tell pre-commit what commands we want to run. In this example, I first use a local repo, which means it will run local commands on your computer. The first hook is style-reformat: I check it at the commit stage, so invoke style-reformat runs at that stage. The second one is style-check: I do the style check when I do git push. There are also some existing hooks that are commonly used; pre-commit has a repository called pre-commit-hooks. For example, I have introduced end-of-file-fixer and trailing-whitespace, which will remove the trailing whitespace in your files, except Markdown files. And popular projects like Black, isort, and even flake8 have their own hooks on GitHub as well. So after we configure it, we need to install it into our local repository through pre-commit install. Because I mentioned I will use both push and commit, I install the hook types pre-commit and pre-push in this example.
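Here is a minimal sketch of a tasks.py along the lines described above, defining the two tasks that the pre-commit hooks run; the pipenv prefix and the exact commands are assumptions for illustration.

```python
# tasks.py, placed at the root of the project
from invoke import task


@task
def style_reformat(c):
    # `invoke style-reformat`: the task the pre-commit hook runs at the commit stage
    c.run("pipenv run black .")
    c.run("pipenv run isort .")


@task
def style_check(c):
    # `invoke style-check`: the task the pre-push hook runs before pushing
    c.run("pipenv run black --check .")
    c.run("pipenv run flake8")
```

Listing them is then just invoke -l, and Invoke exposes the underscore names with dashes on the command line.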
After that, when you do git commit, it will run end-of-file-fixer and trailing-whitespace and do the style reformat. And when you want to push your code to your remote repository, it will run the first two and then do the style check, without the style reformat.

Speaking of Git, we might want to cultivate a Git commit convention. If you are like this guy, your git log will look like this, and it will be really hard to find the right version to roll back to when your system goes down, because all the commit messages just say "update"; you cannot distinguish them. So Commitizen is here to help. By using cz commit, which is a command from Commitizen, you get a user interface that first asks you which type of change this is. It also gives the user a hint that you should not mix lots of different changes into a single commit. For example, you should not put a bug fix, a feature, and refactoring into a single commit, which would make it really hard to review. Then it asks you about the scope, the subject, and whether this commit is a breaking change. You might want to add some more details to your commit, and then you might want to reference your GitLab issue, GitHub issue, or a Jira ticket. This is the commit we just generated through the previous steps. And if you keep using Commitizen, you will see this kind of git log, which is much more readable than "update".

Commitizen also comes with some advanced features. It can prevent you from not using Commitizen, because people still forget to use Commitizen and sometimes they just use git commit with "update". And because the rule we just used is Conventional Commits, we can also use customizable commit rules. And because our commit messages are standardized, we can automatically bump our project version and generate changelogs through the functionality of Commitizen. And best of all, I will hold a sprint tomorrow and on Sunday, so let's sprint; join us in the virtual sprint room.

So, security issues. You might have seen this kind of warning on GitHub: it tells you that some of your libraries might have security issues, and that you should upgrade the library to a certain version. You can do the same check locally through Safety, by running safety check. In this example, it tells me that pycrypto 2.6 might be dangerous, so I need to update to a higher version. And if you are using Pipenv, you can just run pipenv check; it searches the database of known vulnerabilities for you. And Bandit: Bandit can do static analysis to check for common security issues in your Python program. In this example, Bandit tells me that in this portion of my code I have a medium-severity security issue, along with how confident it is that this is actually an issue. It will also tell you why this might be a problem and how you could fix it.

But not all the warnings should be fixed. In this example, it tells me that I should not use assert, because asserts might be ignored under some Python configurations. So if you are using assert to identify your user in your login system, you might have a potential vulnerability; but we still need to use assert in our test cases, right? So you should add the tests directory to the exclude section of your Bandit configuration. And if you don't want to ignore a whole file, just some section or some lines of your code, you can add # nosec at the end of the line of code where you want to ignore the warning.
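A small illustration of the assert warning and the # nosec marker just described; the function here is invented.

```python
def load_config(path: str) -> str:
    # Bandit's B101 check flags assert statements, because they are stripped when
    # Python runs with the -O flag, so they must never guard security decisions.
    # This one is only a developer sanity check, so we silence the warning on this
    # single line with "# nosec" instead of excluding the whole file.
    assert path.endswith(".toml")  # nosec
    with open(path) as config_file:
        return config_file.read()
```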
So, cool. Let's talk about cookies: project templates. You might want to use all these manners in all your Python projects afterward, but configuring them every time is really time-consuming. So we can create a project template once and initialize projects quickly afterwards through Cookiecutter. This is my Cookiecutter template; it contains all of the tools I just mentioned, and also a GitHub Action and some documentation-generating tools. The only thing you need to do is pip install cookiecutter, then pass it the URL of my Cookiecutter template. It will first ask you some questions, like what your project is called; in this example I typed "python-table-manners". My template also lets you choose your dependency management tool. And this is what the generated project looks like.

So how do you make a template? You first need to add a cookiecutter.json: each key is a variable you want to fill in in your template, and its value is the default value for that key. In this example, it will ask the user which dependency management tool they want to use. And this is the project structure of my template: the upper part is the Cookiecutter configuration, and the lower part is the template used to create the project. Cookiecutter follows the syntax of Jinja. This is an example from my template of how I initialize the environment: if the user chooses Pipenv as their dependency management tool, this task will contain the command pipenv install; if they choose Poetry, the tasks.py will run poetry install instead. And sometimes you might want to run some check or some operation before or after the project is generated. You can do so by adding the hooks pre_gen_project.py or post_gen_project.py. In this example, if the user does not choose Pipenv as their dependency management tool, I remove the Pipfile for them, because the Pipfile is not needed in the other cases (a small sketch of such a hook appears after the Q&A below). And again, this is my Cookiecutter template.

So, your journey toward table manners is now complete. There are other interesting tools that I don't have time to mention today, and you might want to take a look at them. And these are the related talks I suggest you watch. So, does anyone have any questions?

Yeah, we can also check on the Discord channel. I think we have time for one question. I just don't want to do this to you. So, um, let's see. So, yes, Gus is asking: do you find that Commitizen slows down your workflow? He finds it hard enough to split his commits up and not commit everything at once.

I value readability more than efficiency. Sure, you can put everything into one commit, but if you want to roll back to a certain point, it will be hard to find. And if you put something like refactoring, a feature, and a bug fix into one commit, and you suddenly find that the bug fix doesn't actually fix anything and you want to roll it back, then after you roll back that commit, your feature will be gone and your refactoring will disappear as well. So I would still recommend keeping each commit small and simple.

Perfect, perfect. Well, thank you, thank you very much. For everyone that has questions: I have loads of questions here for you, but unfortunately we don't have any more time. I just made a block of about four of them for you to answer on your channel, and we have a few more. I hope that answered your question, because Gus also has a few more questions for you. So please just go onto Discord; the channel is the one for the Python Table Manners talk.
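As referenced in the template section above, here is a minimal sketch of a post-generation hook along those lines; the variable name dependency_management_tool and the file layout are assumptions, not necessarily what the speaker's template uses.

```python
# hooks/post_gen_project.py
# Cookiecutter renders this file with Jinja and then runs it from the root of the
# freshly generated project.
import os

# the Jinja placeholder below is replaced with the user's answer at generation time
dependency_tool = "{{ cookiecutter.dependency_management_tool }}"

if dependency_tool != "pipenv":
    # the Pipfile is only needed when the user picked Pipenv
    os.remove("Pipfile")
```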
Thank you very much.