So this is a tutorial on building and deploying Python releases using continuous delivery. This is Martin; he'll be running the tutorial itself. I'm Jussi, and I'm going to interject with a bit of background info a few times. The plan here is to build a release pipeline live, and we'll see how that succeeds. You can do this tutorial on your own computer if you want to. If you want to show the requirements, just the next slide. Yeah, so if you do want to do this on your own computer, that looks a bit weird, but those are the requirements. Worth mentioning that the test.pypi.org account is not the same one as an actual PyPI account; it's a different instance. We'll be using test.pypi.org in the tutorial. Nothing prevents you from doing the same thing on PyPI itself; the steps are identical. So if you don't have those but you want to follow the tutorial on your own, please get the account and so on. And you should mention that this is a longer session than a lot of the others. Yes, if you haven't noticed, we might be spending a bit of time here, because we are actually building the pipeline. I wanted to mention that some of the processes Martin is going to use are streamlined for the tutorial. For example, he commits directly to the main branch. Obviously that is not a best practice, but it keeps this tutorial moving. Now, one last thing I wanted to mention before I let Martin do his thing is basically why we are on stage here, because neither of us is really a Python packaging expert. We're developers, among other things, on a project called python-tuf. You don't need to know any more about that, except that it's a supply chain security project. So we thought we would like to do a bit more than the bare minimum and actually do a good job. And for personal context, why I wanted to put some effort and work into this when working on a Python project like this: I've been building some large custom Linux systems and done release and supply chain management for projects like that. Coming into the Python ecosystem, it was a bit of a shock, maybe, that attitudes were a bit lax in this release and build area. What I mean is the belief that because we trust the maintainers of an open source project, transparency or auditability of a release or a build system is not that important; that we can just believe that when a binary appears somewhere, it's probably good, because it was done by a trusted maintainer. I think that isn't correct. I think these processes should be as open as the source and the development practices. Yeah, so we did quite a lot of work in python-tuf and ended up with a build system and release process that we're quite happy with. We believe it's secure, transparent, and especially ready for future improvements, so that we're able to keep up with the ecosystem if it improves its practices. And for this tutorial, Martin has taken the generally applicable parts from our build system and is going to show how to make a build slash release system out of them. Yeah, go ahead. Hello from me as well. First, before I start, as we have already said, this is a tutorial, so I expect that some people will follow along. And we have a colleague of ours, Joshua Lock; yes, you can see him in front. If you have any problems, he will gladly help you, because I know that sometimes when you do something live, it tends to break.
That's why we want to make sure everything works fine. So now I'm going to start by showing you a simple repository prepared for the purpose of this tutorial. It's a base to start from. Let me zoom in; I hope you can see it well. For the people following along: what we want to do is take this repository, make a fork of it, and then build on top of it everything we want to achieve today. So first, I want to ask, is there anyone who wants to follow the tutorial but is not ready with a TestPyPI account? We are going to need it later. If somebody needs more time, please tell us; otherwise, I'm going to proceed right now. So I'll do what everyone is supposed to do and fork this repository, create a fork, and then we clone it locally by again clicking the code button and following the simple git commands. So git clone, and we paste the link. For the purpose of this tutorial, I wanted to use as few applications as possible, which is why I will only use VS Code and the browser. So we open the repository. I will just zoom in a little bit. I'm using a virtual machine; that's why it takes a moment to connect. OK, so let me show you the files we have already prepared in this tutorial. First, we have chosen a simple MIT license, as what we're going to show is not something particularly important to anyone. Then we have a simple readme; it doesn't contain anything important. Next, we have a .gitignore file: later I'm going to create a virtual environment called .venv, and I don't want to commit it to my repository. We don't want .pyc files, of course. And it covers the dist directory, which is not created at the moment, but later on we are going to use it for our built artifacts. Lastly, I want to mention requirements-build.txt. We have decided to pin all of our dependencies in this tutorial because we want to achieve reproducibility, among other things. My colleague will later describe why this is important. And we also need this file, instead of just manually typing the install commands, because we're going to use it in our continuous deployment on GitHub. So those are the simple files, and I don't want to forget there are two more. We have a .github/workflows directory. I want to mention that the naming of this directory is important, as GitHub is going to look in exactly this directory for its workflows. Later on, you will understand what a workflow is, if you don't know yet. And also, I have created a simple copy.txt file. In this file, we have some actions with their commit references, in order to make it easier in this tutorial to copy them and follow along. And finally, of course, an important file that we're going to change a lot is pyproject.toml. In this file, we have decided to use hatchling as the build backend. The reason for that is that it supports build reproducibility, again, and it's also officially recommended by the PyPA tutorial for packaging Python projects. And again, I want to mention that we have decided to pin hatchling, and I will let my colleague explain why this is important. Yeah, maybe a good time to talk about the reproducibility stuff in general. This is not a packaging tutorial, so we're not going to spend much more time on this. But since the reproducibility aspect strongly affects our ability to do cool things with the release pipeline, I'm going to talk about it a bit.
So binary reproducibility of packages, of the release artifacts, can be a bit of work, and it might not be immediately obvious why it's useful to get bit-by-bit reproducibility for your builds. But I've got a few reasons. First of all, it just leads to higher quality: non-reproducibility isn't always, but often, a sign of bugs. The second is that reproducibility makes build systems more transparent. This is what I talked about earlier: it's just good if you can see what's being done and confirm that that is actually what was done. And it enables many security improvements down the line, once we can trust that rebuilding is going to give us the exact same results. So what does it mean in practice? Reproducibility is a word a lot of people use, and it doesn't always have the same meaning. What we're talking about is full binary reproducibility, just because with Python that's an achievable goal. If your project is not pure Python, then it gets a bit more complicated, but we're talking about pure Python projects, which are fairly easy to get reproducible. So what it means: controlling your build environment over time, and over the different machines you might be building on. For example, pinning build dependencies to a reasonable degree; you can't always pin absolutely everything, but you should try. And the second is ensuring that the build is repeatable. For example, Martin mentioned hatchling: we use that and not, for example, setuptools, because setuptools creates source tarballs that are slightly different every time you build them, and we didn't want that. Things like that. Yeah, I guess that's about it for reproducibility. And I want to show a couple more things in this file. Here you see this section, the two hatch build target tables. These are needed in order to make sure that we have the correct files in our releases; we don't want any metadata that isn't necessary. Then I want to mention that this pyproject.toml is intentionally a minimized version of a real pyproject.toml. So don't do it like we did it here, as that's not the purpose right now. And yeah, we have all of the usual fields. Now, we are going to change one attribute, the name attribute. We need to add the name of our project here. When you add the name of your project, consider that the name you're going to use lives in a global namespace, meaning the name is not connected to your username in any way when users try to install it. That's why we are going to create a name that is unique for our tutorial. So I can call it tutorial, then mv, which are my initials, and then, for example, oss-2022, something like this. Please don't create a project with the same name, as you'll create trouble for me later. We'll need this name again, and I will show you why in a second, so you can copy it. And we're going to create two folders. The first one is the usual src folder, and underneath it I'm going to create another folder with the name of my project. The reason for that is that when somebody installs your Python project, this makes it easier to then import it. That's why I'm pasting the same name as the name of my project. And as what's inside is not important here, I'm just going to add an empty __init__.py file so we can actually import this module.
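Putting that together, the starting point looks roughly like this. This is a sketch, not Martin's exact files: the hatchling version is a placeholder for whatever pin the repository actually uses, the project name follows the pattern he described, and the two build target tables are left schematic. Note that the import package under src/ conventionally uses underscores where the project name has dashes.

    # pyproject.toml (sketch)
    [build-system]
    requires = ["hatchling==1.6.0"]   # pinned for reproducibility; exact version is illustrative
    build-backend = "hatchling.build"

    [project]
    name = "tutorial-mv-oss-2022"
    version = "0.0.1"

    # the two hatch build target tables Martin mentions, limiting what goes into releases
    [tool.hatch.build.targets.sdist]
    [tool.hatch.build.targets.wheel]

And the source layout he creates:

    src/
        tutorial_mv_oss_2022/
            __init__.py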
OK, we have everything that we want in this initial state. So what we need to do now is create a virtual environment, so you can follow me. Sorry, when it's in person, everything happens. OK, we have python, then... what's wrong? Are you in an existing directory? Yes, it does say release-tutorial deleted. So I think you need to get out of the directory and enter it again. OK, I'm not sure why my command prompt looks that way, but it should work now. OK, sorry about this. So now we want to make sure that we activate this environment, and we're ready. We want now to create a build, but in order to create a build, we want to install everything we have in our build requirements. I'm not on your Wi-Fi. We had some problems with the Wi-Fi earlier, and we connected to his hotspot, but no, not true. Are you good? Yes, I think I am. And we're ready. Now we build our project, and after this build is completed, we see that the dist directory is actually created, the one we have already ignored in .gitignore. It contains two files, a wheel and a tar.gz. I won't go into details, as this is not a Python packaging tutorial. The reason we want to do all of this is to showcase what a person would do to upload a Python project to test.pypi.org by hand. Then we're going to move all those steps into GitHub continuous deployment, in order to better understand why we need each of the steps in our continuous deployment. That's why we are doing what we do now. The last thing we need is to install twine, because without twine, we won't be able to upload. Maybe worth mentioning that having twine on your development machine, or even building on your development machine, only happens once, so that we can do the initial project creation on PyPI. And after that, it'll come up that you can basically create project-specific tokens for PyPI; it'll become clear later. And now we're ready to actually upload our builds, using everything we have already created in this directory, to TestPyPI. So python3, twine; we give the upload command for twine. Then, because we're using TestPyPI, we need to provide a repository. And finally, we need to provide the folder where our built files and metadata are located. You either want to call it as a module with python -m, or just... Ah, yes, sorry, I forgot -m. Now it will ask for username and password. With the process we're building here, we're generally trying to avoid this step, having to type your PyPI password on your command line, but this once we have to do it to get the project created. I have pasted my password, and our project is released; we can view it at the link printed at the end of the command. And yeah, sorry, I used 2022 at the end of the name, so the version of the project isn't really clear, but for the purpose of this tutorial that's not so important. And here is our TestPyPI project. I will let Jussi explain why we decided to use TestPyPI, because this is also a decision we made for the tutorial. Yeah. Yeah, so Martin uses TestPyPI here because it's the right thing: we don't want to pollute the real repository with example packages. And for the same reason, we suggest you use it if you're doing this on your own, for figuring out and testing the actual release pipeline.
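To recap the local flow Martin just walked through, the commands were roughly the following. This is a reconstruction: the .venv and requirements-build.txt names match the repository description above, and with older twine versions you may need --repository-url https://test.pypi.org/legacy/ instead of the named testpypi repository.

    # create and activate a virtual environment (ignored via .gitignore)
    python3 -m venv .venv
    source .venv/bin/activate

    # install the pinned build dependencies, then build a wheel and sdist into dist/
    python3 -m pip install -r requirements-build.txt
    python3 -m build

    # one-time manual upload to TestPyPI, just to get the project created
    python3 -m pip install twine
    python3 -m twine upload --repository testpypi dist/*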
But there is something you need to know about test.pypi.org if you haven't used it before: it's not safe to install packages from there, not even your own package, basically, because if you have dependencies, then those dependencies come from the test server, and no one knows where they come from. So, use it to test the release pipeline; don't install packages from there. Yeah. Now, we have managed to do our first task, but now we get to the important part of this tutorial. Before we start actually creating our first workflow, we're going to commit our changes. git commit. Okay. Then, we are going to create a new file inside .github/workflows. The file name is not important right now; it's not important at all, actually, you can use whatever you want. And first, we are going to provide a name for the workflow. This is what you're going to see on GitHub as the name of what you're running, so we use a name that describes what we want to achieve. The next thing we want to provide is the trigger of this workflow, when it will be activated. You can do this with the "on" attribute, then "push". This means that when there is a git push, this will be triggered; but more precisely, we want to say that when there is a push of a tag starting with v. Let me just write it. This syntax means: when we push a tag starting with v; it doesn't matter what the other characters after that are. By the way, if there are any questions during this tutorial, as this is a tutorial and not a talk, I'm happy to take questions as we go, so please raise your hands. Yeah. Can we get a microphone? Yeah, they will handle it. Ah, okay. Yes. Thank you. I was just going to ask, what's in your requirements-build.txt? It's just build. Okay, brilliant. Okay, thank you. Someone else, I saw another hand. Is there any more stuff we need to install? Because the internet is horribly slow and it's still downloading the twine dependencies. Yeah, we're sorry about that. What else? Let me see: build, and we will need sigstore for later... no, that's not part of the main tutorial. Okay, yeah. I think build and twine are the only things we need to install for now; those are the two. Yeah, sorry about that, we can't help with the internet. That's why we decided, by the way, to use mobile internet for this particular case. Any other questions? Okay, I don't see any for now, so I can continue. Even if you missed the step with twine, that's fine; I just wanted to showcase how much effort is required to actually upload your package by yourself. What is needed later is to push your changes to GitHub; I hope that won't require so many internet resources, let's say. Okay, we want to make our first step, our first job actually. In GitHub workflows there are so-called jobs, where you identify particular tasks you want to do. Each of those jobs can contain multiple steps, and we're going to use that in this tutorial. So we're going to call our first job "build". Then we need to say which operating system we want to use; we're going to use ubuntu-latest, okay? And now we are going to write our steps. First, you need to provide the name of your step. What we are going to do in this step is set up Python. In GitHub Actions, you can use actions that were created by somebody else, and we're going to do that in this example.
What you will do is first write "uses", and then you need to go to .github/workflows, to the copy.txt file. Here, what we want to copy is the setup-python action on line seven, with its commit reference. For this action, we need to provide an argument. We can do that with "with", and what we want to set is the Python version. Keep in mind the Python version should be a string; that's why I'm using quotes. It's a mistake somebody could make when following me. And that's our initial workflow, and we are going to test whether it works. We first commit our changes, okay? So now I want to mention that in order to see how this workflow runs, we need to create a release, as we have mentioned that in order to run it automatically we need a tag with a specific version, and we're going to simulate what a real project would do. That's why the commit message will be "release 0.0.2": the initial version is 0.0.1 and we're going to bump it. I don't think you've actually changed the version number. Not yet; I made a mistake, I forgot to actually make the release. In order to make a release, you go to pyproject.toml and bump the version attribute. Just to make sure there is no confusion, I will bump it straight to 0.0.3, because I already created the commit with the other message. Now let's push our changes. Hopefully I'm not... yes. Are we sure? Yes, yes, it's uploaded, everything is fine. Let's verify that everything is here. What do you expect? Okay. And now we're going to create a tag so we can actually run our workflow. The command for this is git tag, then the name of your tag. It's really important that it starts with a v, otherwise the workflow won't start. And I'm going to give a little message, "initial workflow"; it doesn't matter what message you give. In a real situation, signing your tags is a good idea. Yeah, sorry, I just haven't set that up for this tutorial. And I'm going to push the tag. In order to do this, you add --tags at the end, like this. We see that we have a new release here that we didn't have before, and when we go into Actions, we can open it. It will load. Yes, and we see that everything is working as expected: CPython version 3.9.13 is set up. By the way, one thing to be careful about if you want to achieve reproducibility: make sure the local Python version you're using in your virtual environment is the same as the one you're using in your GitHub workflow. Otherwise, you won't have reproducibility, at least not by default. I mean, hopefully you would, but we saw some small differences. Quick question. Yes. Is there a reason why you're not using the use_scm_version capability? Can you repeat that? Is there a reason you're not using use_scm_version and just doing the versioning from the Git tag? Version for what? So is there a reason you're not using use_scm_version and just driving everything from the Git tag for the versioning, rather than having to bump it by hand? Oh, for the Python project? No, no, that seems like a good thing to do. Thank you. We haven't thought about it, honestly. Okay. Just like this. So now we have our initial workflow, but as you have maybe already figured out, you're not here just for that. So we are going to move on.
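For reference, the workflow at this stage looks roughly like this. A sketch: the file name, the commit hash (which in the tutorial comes from copy.txt), and the exact Python version are placeholders.

    # .github/workflows/release.yml (the file name is up to you)
    name: Build and release

    on:
      push:
        tags:
          - 'v*'        # only run when a tag starting with "v" is pushed

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - name: Set up Python
            # pinned to a commit reference for reproducibility; hash is a placeholder
            uses: actions/setup-python@0000000000000000000000000000000000000000
            with:
              python-version: "3.9"   # a string, matching the local Python version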
Now we are going to finish the build job, and later we are going to add one more job, which will be upload. So what we want to do right now is use a step to check out our code and make it available. Again, we are going to use a third-party action. That's why I am going into the .github/workflows directory, to the copy file, and again I'm copying the commit reference of that action. We're using commit references, again, for build reproducibility. Yeah, and while finding the first hash is always a bit of work, you need to figure it out from somewhere, but then you can set up Dependabot, for example, on GitHub, and it will just suggest updates as they're made. So it doesn't make things any more difficult in practice. Yeah. Then, there are no arguments required here. The next step we are going to do is install our build dependencies. We can directly run a command for that, as this is something trivial. That's why we need requirements-build.txt, as I mentioned before. And now we are ready to build our project. We run the same simple command as we did locally. The last thing we are going to do is showcase what we mean by build reproducibility. In this really simple Python project we can achieve byte-by-byte reproducibility. We know that in real life this is a lot harder to do, but we can at least showcase it in this perfect world. shasum; then -a, this is for the algorithm. And finally, we want to list the SHA-256 sums for all files in our dist directory. That's what we want to do in this step. And we want to verify that everything is working and push it to GitHub, so we commit our changes. Again, I'm going to create another release, and before I commit, I want to make sure that this time I won't forget. So I go to pyproject.toml and bump the version number. Are there any questions so far? Okay, I'm going to add this, and now we're ready to commit the changes. Let's say "build job". And let's push the changes and make sure that we have what we want. Yes, "build job", everything as expected. Now we're going to tag our version to make sure we can run the workflow and build our new release. Again, git tag; the name of the tag should start with v, I remind you again. And we are ready to push this tag to GitHub. Now we can see we have two releases with two tags, and we can go to Actions and see that our workflow has started. While we wait, what we can do is build our new release locally and make sure that the shasums printed on GitHub match those printed locally. So we make a new build. And now, when it's ready, we're going to print the shasums: shasum -a, then we provide the algorithm we want, and then the folder where our files are located. Because we have more than two files here, let me do it like this: we want to make sure we're looking at the last two lines, because the other ones are from the older releases and I don't want to remove them right now. And we will go to the build. All of the steps are completed; let's see the shasums. The first one ends with 9ee, and locally, 9ee. The last one, 986, and 986. So it seems that, at least for now, everything is fine, as we expect. We have the build job, but our task is not complete: we have to create the upload job. But in order to do that, we have some chores to do first.
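At this point the build job is complete; a sketch of how it looks, with the commit hashes again standing in for the pinned references from copy.txt:

    build:
      runs-on: ubuntu-latest
      steps:
        - name: Set up Python
          uses: actions/setup-python@0000000000000000000000000000000000000000
          with:
            python-version: "3.9"
        - name: Checkout code
          uses: actions/checkout@0000000000000000000000000000000000000000
        - name: Install build dependencies
          run: python3 -m pip install -r requirements-build.txt
        - name: Build
          run: python3 -m build
        - name: List shasums
          # byte-by-byte reproducibility means these sums can be compared to a local build
          run: shasum -a 256 dist/*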
Before I show you what we need for that, I will let Jussi explain some of the advantages of using GitHub, and why we care to build our releases on GitHub instead of building and releasing locally. All right. The next step is that Martin is going to deploy to PyPI from someone else's computer, and I know that some people don't appreciate the idea, or think that's not a good direction to go. So I wanted to step back and reiterate some of the reasons why we are doing this. First of all, once you have this set up, it is easier; there's just less room for human error. Anyone who's maintained release pipelines that have existed for multiple years knows how it looks: there might be 15 steps on a release to-do list that you have to follow very carefully, with manual steps in between. It's very easy to end up in a situation where doing releases isn't fun or easy, and you really don't want that when you're busy. The second point, again, is transparency for build systems and release processes. It's just good to have them somewhere other people can see what's happening, even if you don't get fully audited; just seeing how it works is good. And we're not saying that building locally is wrong; it's great, we've done that too, as long as the result is verified to be the same as the more auditable system's. Yes. And if we have time at the end, we'll show maybe a more professional way to ensure that than just checking the shasums by eyeballing them. Now, it's true that there are some pitfalls to doing it this way as well. GitHub Actions is just one example of a system that can do this; these systems are really complicated, and it's pretty easy to do something there that means you aren't any safer. So I'll mention a couple of things about GitHub Actions specifically that probably apply to the other systems pretty well. One is that the actions you use, the third-party actions, are build dependencies. You need to treat them as such; don't just use them and assume they're safe. Second point: leaking secrets is really easy. We're going to be adding a secret to our project soon, and basically, if you add a normal secret to GitHub's secret storage, anyone with push access to your repo can leak it any time they want, and you probably won't even find out. That's why you should use environments to limit the exposure of your most important secrets, and we'll cover that. Finally, when you've limited the exposure of your secrets, make sure that you only execute the minimum amount of code with access to them. Don't run all of your build commands in the section that has access to those secrets; compartmentalize the whole thing so that secrets are only available to the actual command that needs them, for a purpose. Yeah, that's it. And by the way, Jussi mentioned that you want to limit how many steps and how many actions have access to your secrets. That's one of the reasons why in this tutorial we show you two jobs: the build job and the upload job. The upload job is really short; as you're going to see, it contains only two steps, exactly to limit how many steps and actions actually have access to our secrets. So now, what we want to do is go to your GitHub page, and from here I'm going into my project settings, and inside your project settings there is a menu called Environments.
It's underneath Code and automation, and you want to click it; we're going to create a new environment, as Jussi mentioned. The name I'm going to use is not important right now, but it will be important later; we are going to call it "release". Configure the environment, and you see multiple options for protecting your secrets; we're going to apply each one of them. The first option we want to make sure we use is required reviewers. What is this? Before a workflow run can use this particular environment, somebody has to approve it. This is really useful in a real release, where you want to verify that the release was started by someone you trust and that it contains the files you expect it to contain. So I'm going to add myself, and when you add yourself as a required reviewer, make sure you click "Save protection rules", otherwise it won't be saved. The next protection mechanism we add is called deployment branches. This limits which branches or tags can access our secrets. So it's not only that we say a reviewer has to see this; we also say, let's make sure that no branches other than, say, main or deploy can have access to this environment. And while we're not doing it this time, you should also look at the tag and branch protection rules in GitHub; they really work beautifully together if you combine those two. So now we click "All branches", and instead of all branches we want to say that only selected branches can have access. Now we add the deployment branch rule, and we can say that all tags starting with v can have access to our environment. As you remember, we are using the same pattern for our tags, so we should be able to access this environment later, with those same tags. And finally, now we're going to add the secrets. So I will show you how to do it, but I will disconnect my screen for a little bit so I can add my secret without leaking it to the internet and everywhere. What you want to do is click your account, then you want to go, this is the TestPyPI page, maybe I haven't mentioned, so you click your account, you go to account settings. It will ask you for your password, and when you scroll down you will see API tokens. Then click "Add API token", and you can give it a name, "token for tutorial", let's say. And you want to make sure that the scope of the token is applicable only to your project. When you click "Add token", you're going to see your token generated, and you will see a "Copy token" button. You want to click that button, and the next step is to go to the environment. Underneath environment secrets, you want to add a secret and give it a name. I'm using the name PYPI_TOKEN for ease; later in my workflow I'm going to use that name, so you can decide what name you want. And you copy the content of your token and paste it here. So excuse me for a moment while I disconnect; I hope that everything will work after that. I am ready, and hopefully there won't be any problems. Yes. Okay, you see there is a PYPI_TOKEN added, and we can continue. So, are there any questions about that part? Okay, no questions, then I will continue. We open VS Code and we can continue with our workflow. So we are going to add a new job, but first I want to mention something important: jobs don't share any information with each other.
That's why we want to make sure that the build artifacts we have from the build job are available to the upload job as well. For that we need one more, this time truly last, step in the build job; we're going to call it "build artifacts". Yes, artifacts. It's a third-party action, and we do the same process: we go to .github/workflows/copy.txt, copy the commit reference, and paste it. We have two arguments that we need to provide. The first one is called name: this is the artifact name under which our build artifacts will be available after this job completes. We're going to use "build-artifacts". The next one is called path: this is the place from which our workflow takes everything and puts it into build-artifacts. So now we're ready to make our next job, which is called upload. We need to provide the operating system, the same one we're using. Now I'm going to say that upload needs to wait for build to finish before it can start; this is what we do with this syntax. Next, we are going to provide the environment. I want to mention here that the environment name is the same as my GitHub environment name, with a lowercase r; so if you have created an environment with another name, just make sure that everything here matches. And now we have two steps to add. The first one is self-explanatory; it's again a third-party action we're going to use, so I go to copy.txt again to copy the commit reference. We have two arguments to add, the same as above: name, which is where we're downloading from, where the files are located; and path, where those files will be stored. And the last step we want to add is called, not store, but publish; we're not going to store them, but publish them. Again, it's a third-party action, one that underneath calls twine for us. And everything else so far has, as opposed to that, not actually been third-party but from GitHub itself. This one is from PyPA, which is the Python Packaging Authority, so we figured we'd trust those guys. We need to provide a couple of arguments. First is the user; the user here is __token__, with double underscores. Then the password; the syntax here is important, it's the template syntax to access a secret. Our secret is located in secrets. plus the name of the secret we have added; here the name should match exactly. The final argument we need to provide is the repository URL. And you wouldn't actually need this if you were using the actual PyPI. Yeah. In this case, yes. To make sure I don't mess up something with the URL, I have added the TestPyPI URL here. The API endpoint, I want to be clear: because if you just write test.pypi.org, you'll have problems; you need to make sure you add legacy at the end. So I went to copy.txt and pasted the URL. And we're ready with that. We have download, we have publish. And we want to make sure we create a new release in order to start our workflow. Let's go to pyproject.toml and bump the version. Then we are going to commit our changes, and we push them to make sure that everything is as expected before we add tags and make it official. So we go to the code and look at the final commit. "Upload job." Yes, everything seems to be correct.
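For reference, here is roughly where the workflow stands now: the artifact-passing step at the end of the build job, and the new upload job. Again a sketch; the commit hashes are placeholders, the secret name matches what Martin created above, and the repository_url input name reflects the publish action as of this era (newer versions also accept repository-url).

    # last step of the build job: make dist/ available to other jobs
        - name: Build artifacts
          uses: actions/upload-artifact@0000000000000000000000000000000000000000
          with:
            name: build-artifacts
            path: dist

    # the second job
    upload:
      runs-on: ubuntu-latest
      needs: build              # wait for the build job to finish
      environment: release      # the protected environment configured earlier
      steps:
        - name: Download build artifacts
          uses: actions/download-artifact@0000000000000000000000000000000000000000
          with:
            name: build-artifacts
            path: dist
        - name: Publish
          uses: pypa/gh-action-pypi-publish@0000000000000000000000000000000000000000
          with:
            user: __token__
            password: ${{ secrets.PYPI_TOKEN }}
            repository_url: https://test.pypi.org/legacy/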
So let's make a new tag, again starting with v, with a little description, and we can push this tag; --tags at the end, of course. So it's pushed. Let's see our releases; it should be here... it's here. Let's see the Actions. What we can see here is that we have two jobs started, and the upload job is waiting for the build job to finish before it can start. I'm not going to verify the shasums again this time, but in a real scenario you won't eyeball shasums; you'll have some way to verify that the signatures are correct, probably, and that everything is as expected, before you actually approve the upload job. And the build job has completed, and the upload job is waiting for me to actually approve it before it can start running. So as I said, I'm not going to validate it again, in order not to repeat myself. But here you have "release needs approval", and you need to click "Review pending deployments". This says to me that this workflow run wants one-time access to the release environment, and I approve it. This is the point where the action gets access to the secret, for the upload. And when it's finished, we should have a new release. We have automated the process of building our project, making sure everything inside it is what we expect, and the deployment itself. And our deployment is ready. Ah, no new version was visible because I had refreshed an old page; and we have our release here. We have applied multiple protection mechanisms here, but we see that it's not ideal, as we haven't shown how to actually verify that what was built on GitHub is actually what you want and what you want to release. Yeah, though I would say this is already an improvement on a lot of release pipelines that I've seen. But yeah, we do have some future improvements as well; this was basically what we have for the tutorial itself. Can you show the slide, please? Yes. So, like I said, a couple of future improvements that you might want to consider. One is signing releases. This is obviously something that in most release pipelines is part of the picture, and we did not include it in the actual tutorial for a few reasons. One of them is that there just isn't a standard way of signing, a way that Python developers would expect. So we thought that just adding something like GPG isn't that useful; then it's just signatures thrown somewhere that no one uses. But we did plan to show you a demo of using Sigstore, because that looks very promising. I think it has a chance of becoming the ecosystem's choice for signing, and a high chance of becoming supported by the ecosystem more widely, maybe with some PyPI support or something like that. There is another reason we didn't include it in the tutorial, and that is that the Sigstore tooling for Python is kind of new, just a few months old. It's improving rapidly, but there may still be some rough edges. So that's why it's not in the tutorial. The other thing that you might be interested in is the build side, making that tamper-proof. There is a SLSA GitHub Action, I suppose it is, that you might want to look at. It's interesting, but it was slightly out of scope for the tutorial here, and maybe also out of scope for, let's say, the average Python project; maybe it's a bit complicated for that. So yeah, do we have time for a demo? I suppose we do. Yeah, it's a couple of steps, so it's not something complicated.
But this is a demo, so I don't expect you to follow along, as we have prepared some things beforehand. I want to mention that I have created a repository where you can look at all of the steps and what we did today, step by step, if you want to do it on your own when you're at home or wherever. So this is the URL; I'm sorry, I haven't added it to the last slide, honestly. But I have also cloned this repository, and I'm going to use it to show you what we have done with Sigstore. Yeah, and I could mention some general details about Sigstore. I'm not going to talk about how it actually works. But basically, Martin is going to use his GitHub identity and authentication to cryptographically sign the release locally, on his machine. And what we're doing here is committing the signatures to the actual Git repository. It's not the usual way to do things, but we can do that because we are reproducible: you can sign a release before the release is made, because you know what it's going to be. Yeah. And then, what you can do with that, when you have a signature available somewhere, in our case in Git, is that the build system can verify that Martin has actually signed the release, that it was Martin and not someone else, and also that his local build results actually match what is built on GitHub. Yeah. And then, of course, everyone else can also verify that Martin has signed the release. But yeah, go ahead. Okay, so as I said, I have it locally, so I'm going to open it. I want to mention a couple of things that have changed compared with our tutorial. First, in pyproject.toml you can see that I have added a release script. This script, I'm going to show it to you; I won't go into details, it's just a simple utility script that verifies Sigstore certificates and signatures, and is also able to sign. You can use Sigstore as a CLI tool directly, but we wanted to automate things so we can use it without having to worry about passing six or seven arguments to sigstore verify or sigstore sign; this way we only need one or two arguments. So I will make a new terminal, okay. And what we want to do first is make a new release of this project, so I will bump the version in pyproject.toml. And here I'm going also to build the project now, not directly commit my changes. This build is done just so it can be signed. Yes, and the build is basically thrown away afterwards. Also, I have added sigstore to requirements-build.txt, as we are going to use it. Okay. And now what we want to do is call the release script we have shown: I'm going to give the command to sign the release, and I provide the version of my release, so 0.0.16. The version helps our script identify the files that it needs to sign. Now Sigstore opens a browser tab that asks me to authenticate, to prove that I'm the one who actually owns this email address, because we have hard-coded the email address for ease. Okay, everything is successful, and we have signed our release. The signatures are in the signatures directory. I have a lot of signatures from my tests; I'm sorry about that, I haven't deleted them. But what is important for us are these four: we have a certificate and a signature for the wheel, and a signature and certificate for the tar.gz file. So both of our built artifacts are now signed.
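Under the hood, the script wraps the sigstore-python CLI. A rough sketch of the equivalent direct invocations; the exact flag names have changed between sigstore-python releases, and the file names and email here are placeholders, so treat this as illustrative and check sigstore --help for your version:

    # sign an artifact; a browser tab opens for OIDC authentication,
    # and a .sig signature plus a .crt certificate are written next to the file
    python3 -m sigstore sign dist/example-0.0.16-py3-none-any.whl

    # verify an artifact against its signature, certificate, and the signer's email
    python3 -m sigstore verify \
        --certificate dist/example-0.0.16-py3-none-any.whl.crt \
        --signature dist/example-0.0.16-py3-none-any.whl.sig \
        --cert-email martin@example.com \
        dist/example-0.0.16-py3-none-any.whl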
As Jussi mentioned, because the build is reproducible byte by byte, we are going to commit those changes, push them to GitHub, and then on GitHub verify that what we have built there is actually what we want. That is the step we left missing in the tutorial. So I'm ready to commit my changes, the signatures, and I'm also adding what changed inside pyproject.toml. Then we want to push those changes and make sure that everything is already in main. Ah, yes, here it's master. I see my last commit: it contains the version bump, the certificates for both files, and the signatures. Now I'm ready to make a tag for my release, again starting with v, as I have set up the same rules. And before I push, I want to show you the step in our workflow where we actually make the verification that everything works as expected. Everything here is the same as in the tutorial; the step we have changed is the list-shasums one. Here, what we do is run a command: we take the version from the tag, that's just the GitHub Actions syntax, and we call our release script with "verify", giving it the version and my email address. And so what this does is look at the dist directory, where the previous step just built the artifacts on GitHub, and verify that those build artifacts are signed by Martin, with the signatures in the signatures directory. Yes. Now I'm going to push it. And at this point, anyone else can verify the signatures in the same way. If you have access to the sources, you can build your own release and verify that Martin has signed exactly that; or alternatively, once the release is made to PyPI, you can download those files and verify that they have been signed by Martin. And when I look at the release, the build is not ready yet, so if there are any questions... You've just signed the release with your own email address, but I can imagine a use case where you're working as a team and want to sign it as a company, for instance, so that your customers can know that the company has signed the release and not an individual employee. How would you do that? Well, one thing that you might want to do is, for example, have the build system sign it for you; if you want to go really professional, then the SLSA link that we had on the slides is what you want. But if you wanted to do this same thing, just sign on the build machine using GitHub's credentials, that actually works too. And it's actually very cool, because it also includes the details about the sources and so on that were used. That would maybe be the path that I would take with that. Otherwise, I kind of think that for things like source code and builds, it makes sense to have actual people signing things. And yeah, it makes sense that at some point in the supply chain pipeline you want a company stamp on it or something like that, but maybe not here, I don't know. Are there any other questions? Okay, so let's look at what happened. We have successfully built our project, and we can see in the "verify release signature" step that our script reports what was verified, and that it was verified successfully. The upload job is again waiting for us to say that everything is okay, and this time we made sure that everything really is okay. Now we can click the review deployment button and allow it to make a release. And yeah, from here it's something that we have already seen, so it's not that interesting, but it will deploy the next version of my project to TestPyPI.
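The changed verification step might look something like the sketch below. The release script's name, subcommand, and argument order are hypothetical, since the script is specific to Martin's demo repository, but the shape of the step, deriving the version from the pushed tag and verifying against a hard-coded signer email, follows what he described:

    - name: Verify release signatures
      # github.ref_name is the pushed tag, e.g. "v0.0.16"; the script name
      # and arguments are illustrative placeholders
      run: python3 release.py verify "${{ github.ref_name }}" martin@example.com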
Again, if there are any questions, we will gladly answer them. Yeah, I think we've covered all the material we have, so questions are welcome now. There's one. So first of all, thanks for the talk and tutorial. I was wondering if you use Poetry or something similar to ensure that you are using the right versions when creating a build or installing a package? For building, we basically just did that manually, as you saw here. And if I think about python-tuf, where this work comes from, we don't use Poetry or things like that. We do have Dependabot enabled, so I suppose that does roughly the same thing in that area. Yeah, well, in Poetry you can have a poetry.lock file with all the dependencies, which pins dependencies to exact versions. So when you build or install your package, you can ensure that you have the same versions. Right, yeah. I mean, we do use requirements files plus Dependabot, nothing else. Then I suppose we're done. Yes, thank you for your time. Yeah, thanks.