All right, okay. And what are you going to talk about? Today I will be talking about difficulties we can face during development of Python code. I will share my screen. All right, so give a big hand to Michal. Go for it.

Okay, so before I start, a few words about me. I have eight years of experience in Python, five of them commercial experience developing Python code. Today I will share my knowledge, best practices, and the best tools I use during project development.

My presentation is divided into three main sections, each covered with a small story. First, we'll start with the package formats available in the Python world; we'll talk a bit about the history of Python packaging and the confusion related to it. Then we will switch to virtualenvs: what they are, what they are for, and some great tools for developing Python code inside them. Finally, we will focus on package managers, and we will try to clarify which are the best and what the advantages and disadvantages of the various package managers are.

But before that, we have to say that the Python world of packages, virtualenvs, and package managers looks like this. Because of the history, we have old documentation pointing to package formats that are no longer used, and the same goes for package managers: it is not obvious which of them we should use. Today I will try to clarify the right path, and I hope this picture will end up much simpler.

So at the beginning, we'll start with a story related to AWS. Recently I had a few projects that were hosted on AWS, and I was working mainly with AWS Lambda functions. Before we started our work, we had to choose from many tools for deploying the Lambdas: we could use Terraform, the Serverless Framework, boto3, CloudFormation, and many other tools. In the end we decided on Terraform. Deploying a Lambda looks like this: we have to prepare a zip file with our Python code, and we also have to add the libraries required, for example, for communicating with the database, like SQLAlchemy, because Lambda itself contains only the Python standard library. During development we noticed a problem: Terraform was constantly rebuilding our code on the Lambdas. The reason was that we were using different wheels, because in the beginning we were packaging by hand and each developer was using a different platform. What a wheel is, I will explain within a few slides, and I will also tell you how we fixed this issue.

But let's start from the beginning. We have Python files, and we can share them within the community. For example, with the Bottle web framework it is possible to create a small web application in a single file, and we can just share that file. But that is not convenient or reliable for most projects; we need something better. The first thing we come across is the source distribution, which is simply an archive containing the Python code. It may contain other source files, such as C sources for CPython extensions. We can define scripts that will be available in our terminal; we can of course include a README, our setup.py, and setup.cfg (I will say a little more about setup.py later); and we can also include package data, like, for example, the trained model of an artificial neural network.
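To make that concrete, here is a minimal, hypothetical sketch of what such a source distribution might declare; the package name, model file, and entry point are all made up for illustration:

```python
# setup.py -- a minimal, hypothetical example (names are made up)
from setuptools import setup, find_packages

setup(
    name="mypkg",
    version="0.1.0",
    packages=find_packages(),
    # non-Python files shipped inside the package, e.g. a trained model
    package_data={"mypkg": ["models/net.pkl"]},
    # a script made available in the terminal after installation
    entry_points={"console_scripts": ["mypkg-cli=mypkg.cli:main"]},
)
```

Running `python setup.py sdist` would then produce a `dist/mypkg-0.1.0.tar.gz` archive that can be shared or uploaded.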
A source distribution also includes metadata, so we know exactly what we have installed in our system. And here we come to the conclusion that each time we want to use such a package, especially if it contains C source files, we have to compile it, and that takes time. We would like an instant package that is ready to use, and that is what binary packages offer.

The first approach in Python was eggs. They were introduced with setuptools in 2004. There was no need for building or compilation; an egg was both a distribution format and a runtime installation format. The first approach to listing what we have installed in our system was the egg-info directory, which held all the necessary information about what was installed. A funny thing about eggs: it was possible to install many versions of a given library, and it was even possible to import an egg directly. So what was the problem with them? There was actually no official PEP for eggs, and each package maintainer had their own directory layout inside the egg, so the Python community had to do something about it and standardize.

That is where wheels came in. A wheel is also a binary format; it was introduced in 2012 together with PEP 427. With wheels we have a standard for distribution and a standardization of the metadata: instead of egg-info we now have a dist-info directory containing the metadata, and this is the current standard for building binary packages. Wheels do not contain Python bytecode, but they may contain other pre-compiled code, for example C libraries for CPython extensions. A very big advantage of wheels is that they are versioned and follow a naming convention. In the name we can find the supported Python version and the specific implementation (since Python has several implementations); there is also information about the application binary interface, which differs between macOS, Linux, and Windows; and the architecture a given wheel is built for, such as Intel or AMD, and whether it is for 32 or 64 bits. For now, some wheels are importable directly (pip's own wheel, for example), but that is not supported by PEP 427 itself.

Here we can see example wheels. The first is a universal wheel of the pip library, which works on both Python 2 and 3. The other is one of the NumPy wheels, built for CPython 3.6 on macOS 10.9, for Intel architectures, both 32 and 64 bits. Currently there is no build farm on PyPI; maintainers just have to create all the necessary wheels themselves. We can create our own wheel with this command, and there is a great talk about wheels by Elana Hashman. If you want to know about the application binary interface on Linux, the ELF format, and how wheels are maintained and built, you can just watch it by clicking on the links provided here. She also supports two projects, manylinux and auditwheel. The manylinux project provides Docker images of baseline Linux distributions for building wheels that work across the most popular distributions, and auditwheel is a command-line tool for checking whether a wheel is suitable for a given platform and that nothing is missing. So if you want to know more, just click on the links. And this is the end of the first section.
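Before leaving packages, a quick hedged illustration of building a wheel and reading its file name, reusing the hypothetical package from before:

```
$ pip install wheel                 # the bdist_wheel command comes from the wheel package
$ python setup.py bdist_wheel
$ ls dist/
mypkg-0.1.0-py3-none-any.whl
# name format: {name}-{version}-{python tag}-{ABI tag}-{platform tag}.whl
#  py3  -> any Python 3 implementation
#  none -> no ABI constraint (pure Python)
#  any  -> any platform and architecture
```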
Oh, okay, one more thing before we move on to virtualenvs, just to show how a package is built up. At the deepest layer we have the standalone module, which is a single Python file. If we have more Python files, we can make a source distribution out of them. And on top of that, if we compile it, we have a wheel for a certain platform. So that is the summary of packages, and now we can switch to virtualenvs and the next story.

During my journey of developing Python code at university, I had many projects. I installed Ubuntu on my laptop and started developing many projects at once, installing many libraries into the global system Python. And suddenly I saw a picture like the one here: while installing many packages, I had reinstalled one very important dependency used by Ubuntu's graphical engine, and I had to reinstall everything from scratch. It was a very quick lesson in why we should use virtualenvs.

So what exactly is a virtualenv? It is an isolated directory holding a Python interpreter and all the necessary libraries, completely isolated from the global system, which is good. We have a few tools for this. One is the built-in venv, and there is virtualenv, which has recently been rewritten; there was a great talk today by Bernat Gabor about rewriting virtualenv, which I fully recommend. Together with virtualenv there is a useful companion, virtualenvwrapper, which provides convenient commands for creating virtualenvs, deleting them, and switching between them, making Python development easier. How does a virtualenv actually work? It puts the directory of the specific virtualenv on the PATH before the global system, so every time we type `python`, it is first looked up in the virtualenv and not in the global system. From today's presentation I heard that it will be possible to create virtualenvs for different Python versions, which is very good, but those versions must already be installed and present on the system for virtualenv to use them.

In the Python world we have many implementations of Python. We have the main one, CPython, which is written in C. We have PyPy, which is written in (a restricted subset of) Python. We have Jython, written in Java; IronPython, whose backend is written in C#; MicroPython, which is used for embedded systems; and Stackless Python, which avoids the C call stack. If we want to develop a library and test it against many versions and, let's say, many implementations of Python, we have to have each of them installed on our system. There is a tool that does this, called pyenv. It makes it possible to install many versions and many implementations of Python and offers switching between them; there is also automation for creating virtualenvs and switching between them. Here I have prepared a set of very useful pyenv commands. But before that, I just want to tell you that the installation of pyenv is very simple: it can be done with the script provided on GitHub called pyenv-installer, and if you don't trust the script, you can run its commands one by one. One very important thing: you have to install the dependencies required for compiling the Python versions.
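As a sketch of that installation on a Debian-like system (the one-liner is the one documented in the pyenv-installer README; the dependency list here is abbreviated, so check the pyenv wiki for your distribution):

```
# Build dependencies for compiling CPython (abbreviated list)
$ sudo apt-get install build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev libffi-dev
# Run the pyenv-installer script
$ curl https://pyenv.run | bash
```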
All the dependencies, listed per distribution, can be found here in the frequently asked questions, so be aware of that. Also be aware that if you want to use Jython, for example, you have to have Java on your machine, and if you want to use IronPython, you have to have C# (.NET) on your machine. pyenv installs everything into your home folder; all versions are isolated from the global system Python and simply placed there.

After you have prepared everything, if you want to list all the Python versions available to install, you use this command. Sometimes there is a need to check out the GitHub repo locally just to pick up the newest versions that have appeared in the community. To install a certain version, we use this command; we can uninstall a certain version with the corresponding command, and the same command is used for uninstalling an environment. To list all installed Python versions, we can use this command. A very important thing: if we want to know where exactly our virtualenv is stored, we have to use the `pyenv which python` command, because the normal `which python` will show the shims directory, which is the mechanism pyenv uses; the real path is shown by this command. We can also activate and deactivate environments by hand.

There is also the possibility to scope the availability of a certain Python version in our terminal, and the hierarchy of these commands is as follows. If we just want to test some Python version, we can use the `pyenv shell` command. We can attach a certain version or environment to a directory, so that when we switch into that directory, we get that environment. And we can set `pyenv global`, which shadows the system Python, which is very good, and which is consulted last in this hierarchy. I will show it in a quick demonstration.

So here are all my pyenv versions. I have MicroPython installed, I have Miniconda3, and there is also Anaconda. These are all the versions available through pyenv: we have versions of Anaconda, IronPython, Jython, MicroPython, Miniconda, PyPy, and so on. Currently my global version is 3.8.2, which shadows my system Python. When I change, for example, into a project directory, my virtualenv is activated automatically. If I want to test MicroPython, I use the shell command, which takes precedence over both the environment from the directory and my global Python; and when I am done testing, I can unset it and I am back to my global Python. This is very convenient during development, and I highly recommend it. And that was actually the last slide of the second part, about pyenv.
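Before we leave pyenv, here are the commands from this section gathered in one place; the version and environment names are only examples:

```
$ pyenv install --list            # everything available to install (all implementations)
$ pyenv install 3.8.2             # compile and install a version under ~/.pyenv
$ pyenv uninstall 3.8.2           # remove a version or an environment
$ pyenv versions                  # list what is installed locally
$ pyenv virtualenv 3.8.2 myproj   # create a virtualenv (pyenv-virtualenv plugin)
$ pyenv shell micropython-1.12    # highest priority: this shell session only
$ pyenv local myproj              # attach to the current directory (.python-version)
$ pyenv global 3.8.2              # lowest priority: shadows the system Python
$ pyenv which python              # real interpreter path; plain `which python` shows the shim
```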
Now we switch to package managers, which is a big topic and a big source of confusion in the Python community, because there are a lot of choices to make before we start a project. I'll begin with my short story about dependency hell. In one of my projects I had many libraries, each of which had many dependencies, and most of them shared the same dependencies, but in different versions. In that project we were using the pip package manager with a requirements.txt file for reproducibility of the environment. After a few runs we noticed that sometimes our application did not work because of a wrong dependency version, and we wondered why. It turned out that in some cases the order of lines in requirements.txt changed which dependency version we got: the first library, for example, required version 1.0.0 of some package while the second required 1.0.3, and pip was not resolving these dependency versions; it just installed the first version, and then, on the second line, uninstalled it and installed the second version. So we wanted to know how to manage that, and we switched to a different package manager. I will show you plenty of them, but before that, a short history of what came first.

At the beginning we had easy_install, which used eggs for installation. It was provided with setuptools in 2004, it installed packages and their dependencies, and there was no PEP for easy_install. With standardization came pip, released in 2008. It uses wheels or source distributions and introduced requirements.txt for freezing our environment, so we know exactly what is installed in our virtual environments. The good news is that recent versions of pip can check the hashes of the packages, so we get more security, but it has to be enabled explicitly. As I mentioned, though, pip does not resolve dependency conflicts. Here is a table comparing easy_install with pip: the main differences are that easy_install uses eggs, allows installing many versions of a given library, and has no PEP behind it, while pip uses wheels and is covered by official standards. But as I said, there is still the problem of dependency resolution.

Before I move to the tools that address it, a short introduction to the setup.py script, which is used for packages and libraries we want to install into our environment. We fill in the necessary information, and the setup.py script provides the means to build packages, distribute them to PyPI, and install them into our virtualenvs. On the slide is an example piece of setup.py code with the version, description, authors, and packages; we can also define additional non-Python files, like SVG files or other data. I don't want to focus on setup.py in this presentation; I mention it only because it is used by the next tool I will talk about.

So, pip-tools: it is a package manager built around two commands, pip-compile and pip-sync, and it resolves dependencies properly. It uses setup.py and a requirements.in file to resolve all dependencies; after compiling, we get a requirements.txt in which all dependencies are resolved and pinned, so we can reproduce our environment. The second command, pip-sync, checks whether exactly the pinned versions of the libraries are installed in our environment and reports any inconsistency. So here we need three files for resolving dependencies, and if we want separate dev, test, UAT, and prod environments, we have to maintain a requirements.in and requirements.txt pair for each of them, which is not so convenient.
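A minimal sketch of that pip-tools workflow; the two dependencies are hypothetical:

```
# requirements.in -- only the top-level dependencies
flask
sqlalchemy>=1.3

$ pip-compile requirements.in     # resolve and pin everything into requirements.txt
$ pip-sync requirements.txt       # make the virtualenv match the pinned file exactly
```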
Then we have a different tool, Pipenv. It can manage virtual environments: if we install something and no virtualenv is activated, it automatically creates and activates a virtualenv (with a generated name) and installs there; when a virtualenv is already activated, it installs into the current one. It uses the Pipfile and Pipfile.lock files. The Pipfile contains two sections: one for local development packages used while building the project, and one for the production libraries. Each time we add something, Pipenv automatically resolves all the necessary dependencies and tells us whether there are conflicts that cannot be resolved — for example, one library requiring version 2.3 of a dependency while a second library requires a version below 2. It is a very useful tool for pointing out such situations. And of course, once we have a stable environment, we lock it, and the Pipfile.lock is then used simply for installing in our CI/CD pipelines, Docker images, and so on. Pipenv also uses hashes, so we get security there too.

There is an alternative to Pipenv, which is also great: Poetry. Poetry has essentially the same features as Pipenv, but it is faster at resolving dependencies, as I will show. It manages environments, installs packages, and resolves dependencies, but it also offers a convenient way of creating packages and publishing them to PyPI, so it is more useful for libraries.

I will also say a few words about a package manager that is not pure Python: Anaconda, which bundles Python libraries plus other data-science libraries. Miniconda is the smaller version of Anaconda, and conda is the package manager. The main problem with conda is that it does not use PyPI, only its own channels; its packages follow a different standard, so there is no compatibility between PyPI and conda channels. And conda packages do not work with Python virtualenvs, so be aware of that. It is mainly used for machine-learning projects, and we can use conda alone or conda together with another package manager. If we use conda alone, we have to choose a good channel, because libraries are often outdated and maintained by different people. The safest approach is to use conda together with another package manager: conda for the non-Python packages, and a Python package manager for getting the latest libraries from PyPI. This is mainly useful for machine learning.

Also, recently there was a release of the PDM package manager, which implements PEP 582. PEP 582 is about storing the libraries inside the project directory, so there is no need for virtualenvs; everything lives within the project directory. It also implements PEP 517, the standard interface for building and installing packages from a source directory. So this is a great feature, but it is in its early stages.

And finally, we can sometimes use pipx, which does not really need to resolve dependencies across packages: it installs one package per environment, and if you install a second package, it goes into a different virtualenv. There is also a possibility to inject two packages into the same environment, but that can end with unresolved dependency conflicts. So in a sense pipx sidesteps dependency problems precisely because it installs each package on its own.
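As a hedged sketch of the day-to-day commands for the options just discussed (the package names are only examples):

```
# Pipenv: Pipfile + Pipfile.lock
$ pipenv install requests         # resolves and records under [packages]
$ pipenv install --dev pytest     # recorded under [dev-packages]
$ pipenv lock                     # write the fully pinned Pipfile.lock

# Poetry: pyproject.toml + poetry.lock
$ poetry add requests             # resolve, install, and record a dependency
$ poetry add --dev pytest         # development dependency
$ poetry build                    # build sdist and wheel
$ poetry publish                  # upload to PyPI

# pipx: one isolated environment per application
$ pipx install black
```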
So here I have just gathered all the information I have already covered; you can check it later. It is separated into three tables with annotations; I will skip it here, it is just for reference. And finally, I made a benchmark, and surprisingly Poetry turned out to be the fastest of the package managers I chose, which is why I use it so often.

Finally, a few words about my setup. I mainly install pip locally in my home folder, so anything I install with the user option lands locally on my Linux system. Then I install pipx and Poetry and the other tools I need that are common across projects. Then I install pyenv for managing many Python versions and virtualenvs, and I start a project by creating a virtualenv and developing. Here are all the resources I used for this presentation; feel free to look through everything, and that's the end of my presentation. Feel free to ask me questions.

Thank you, Michal. See, people loved your talk — see those claps. So, okay, we've got two minutes; I think I can only go through one or two questions. Matthew is asking: how did wheels help you solve your problem with Terraform and AWS? Actually, we just had to create a Docker image that was common for every developer, and that solved it. And then we created AWS layers, where we keep all the libraries.

Right, okay. Oliver is asking: how is pyenv in comparison to virtualenv — any preference, and why? I love using pyenv because I can easily switch between versions. I currently work on a project where I have five repositories and five different Python versions; I don't have to install them globally, and I can switch between them easily.

Okay. What's the best way to maintain package versions between requirements.txt, Pipfile, and setup.py? Could you repeat? What's the best way to maintain package versions between requirements.txt, Pipfile, and setup.py? I think... I commonly use Pipfile.lock or poetry.lock, because they resolve my dependencies. pip-tools is okay, but it is too unwieldy when you need many requirements files for many environments, I think. I don't know whether I answered your question, but we can talk afterwards or the next day.

Cool. So we've reached the end of the time. There are still three questions here; I urge people who have questions to go to this talk's channel on Discord. Thanks for coming, and thanks Michal for the talk. We're going for a short break.