Hello everyone. The next presentation is on Packit — Fedora release automation — by Mati, Laura and Franciszek.

Hello everyone, I'm Franciszek, and together with Laura and Mati we will shortly describe what Packit is and how it can help you with Fedora releases. So, what have we prepared for you? For those who don't know Packit, we will shortly describe it. Then we will describe how the Fedora release process actually looks, so you can see how we can help with that, and also what our future plans are. And since we are scared of your questions, we've prepared a few of them ourselves.

So, let's quickly start with the Packit project. We have a project called Packit, and it actually has two goals: to bring upstream and downstream together, or closer together. Upstream is something like GitHub or GitLab — the place where the source code is created and maintained. And then we have downstream, like Fedora, RHEL or other distributions. We are trying to bring those two ecosystems together: to bring downstream feedback back to upstream, and also to help get the code from the developer right into the distribution, so that you can, for example, easily install the application on your laptop.

This is basically most of the system we use. On the left you can see the Git forge — GitHub or GitLab. Packit can act as a CI system: you can trigger Copr builds or run tests through it from your pull requests, commits or releases (a minimal configuration example follows below). That's the CI part, but we also have automation: as a reaction to an upstream release we can do a lot of things, and this will be described later on. And if you don't believe us that it makes sense to use Packit, just ask the users or the people maintaining these projects whether and how happy they are — you might know a few of them.

And these are the avatars you might come across: the main contributors to Packit, the people from the Packit team. But since we are an open source project, there are more people around — we participate in Google Summer of Code, the Red Hat Open Source Contest and other activities, and since we are open source, anyone can help us to help other maintainers.

So, let's start with the Fedora workflow. In order to explain how we automated the release process in Fedora, let's first have a look at what the workflow looks like. On the one end we have upstream — the place where the development happens; as Franciszek said, GitHub or GitLab. On the other end we have the user, who just wants to install the application on their Fedora operating system. In upstream, the trigger is a release, for example a GitHub release. When a release happens, usually a source archive is created, and this archive first needs to be uploaded to the so-called lookaside cache, which is basically a storage of source archives. Then we have the distribution Git, dist-git: a Git forge where each Fedora package has a repository, and in this repository there are multiple Fedora packaging related files. Most important is the spec file, which contains information about the package and instructions on how to build it. The other important file in this Git is the sources file, which contains the name of the archive in the lookaside cache and its hash so that it can be verified — a line such as `SHA512 (example-1.2.3.tar.gz) = <hash>`. So that was dist-git.
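As a minimal sketch of the CI part mentioned above — assuming an upstream repository with a spec file, whose name here is only a placeholder — a `.packit.yaml` could look roughly like this:

```yaml
# .packit.yaml in the upstream repository; example.spec is illustrative
specfile_path: example.spec

jobs:
  # build the package in Copr for every pull request
  - job: copr_build
    trigger: pull_request
    targets:
      - fedora-all
  # run the package's tests (via Testing Farm) on those builds
  - job: tests
    trigger: pull_request
    targets:
      - fedora-all
```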
As a next step, when dist-git is updated with the new version of the software, the software needs to be built via the official Fedora build system, which is Koji. And as the last step we have the Fedora update system, Bodhi. The builds need to be submitted to Bodhi, and after fulfilling some predefined requirements — for example after some period of time, or after receiving positive feedback from other maintainers — the update can be pushed to stable, which means the new version can actually be installed by end users, for example via DNF.

This is the picture which shows you the whole workflow. As you can see, it is not that straightforward. So let's have a look at how Packit helps with this process. Here you can see exactly the same picture, but now with some details on how Packit helps. We can basically divide this process into three steps which are covered by Packit.

The first is syncing the release. Packit can bring the upstream changes downstream, meaning it can upload the archives to the lookaside cache and then open a pull request with, usually, a spec file change where the version is bumped (the changelog can be updated as well) and a changed sources file. For this, users can utilize two jobs: propose downstream (`propose_downstream`) and pull from upstream (`pull_from_upstream`). They provide basically the same functionality, but the difference is in the trigger.

Let's start with propose downstream. It needs to be configured directly in the upstream repository, so Packit needs to be installed as a GitHub App or as a GitLab integration, and then Packit can react directly to the GitHub or GitLab releases. The benefit of this is that there is also feedback in the upstream Git repository, in the form of commit statuses on the release commits. On the other hand, there is the disadvantage that some Fedora package maintainers don't have access to the upstream repositories, or the code isn't hosted on GitHub or GitLab at all.

Therefore, we recently implemented the pull from upstream job. This job reacts to the upstream release monitoring (release-monitoring.org), which monitors a whole bunch of upstreams — besides the basics such as GitHub or GitLab, it can be PyPI or npm. In this case, the user adds the Packit configuration directly to the dist-git repository, so the upstream repository doesn't have to be touched at all.

Since the workflow can vary across different packages, users can also customize the behaviour via configuration options, so that, for example, the changelog generation can be customized, additional files can be synced from upstream to downstream, or mismatches can be resolved when the upstream tags differ from the versions, i.e. the naming schemes are different.

Here you can see an example of a pull request created by Packit. Basically the only manual step for the maintainer is to review this pull request and, if they are satisfied, merge it. Then, if there is a Koji build Packit job configured — the configuration you can see on this slide — Packit checks for the new dist-git commits with spec file changes and creates the builds automatically. By default, Packit reacts only to the PRs merged by Packit itself, but this can also be customized via the configuration. And if there is also a Bodhi update job configured, Packit checks for successful Koji builds and can automatically create the Bodhi updates.
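For illustration, the three downstream jobs just described could be configured in a dist-git `.packit.yaml` roughly like this — the spec file name is a placeholder and the branch aliases are only examples:

```yaml
specfile_path: example.spec  # illustrative

jobs:
  # react to upstream release monitoring and open a dist-git pull request
  - job: pull_from_upstream
    trigger: release
    dist_git_branches:
      - fedora-all
  # build in Koji once a dist-git PR (merged by Packit, by default) lands
  - job: koji_build
    trigger: commit
    dist_git_branches:
      - fedora-all
  # submit successful Koji builds to Bodhi
  - job: bodhi_update
    trigger: commit
    dist_git_branches:
      - fedora-branched  # branched releases; rawhide updates are created automatically
```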
So that was quickly it for the automation of the process, and now Mati will go through some predefined, frequently asked questions.

Right, so when we attend conferences and talk about Packit, we usually get a lot of questions, and those are usually the things people are interested in. One of the most asked questions is where we store the configuration. Packit is basically a CI: you just want to commit, to create pull requests, and not really care about it. So how do you configure it? You put the configuration file in your repository. If you use Packit in upstream, you of course put it in the upstream repository, and for the respective jobs — for example propose downstream, which creates the updates in dist-git — you put it into the upstream branch whose releases you would like to react to. For the other jobs, like pull from upstream, which should not touch the upstream repository at all, you put it in the downstream, in the default branch; it then reacts to releases and creates updates for all the branches you wish. And with respect to Koji builds and Bodhi updates, they are of course the downstream part of the automation, so the configuration is simply stored in the downstream.

People don't really like mixing the packaging files, and any maintenance they need, into the upstream projects. So one point worth repeating is that with pull from upstream you don't need to touch the upstream repository at all — in fact, you don't even need to have Packit allowed in your dist-git, because all that's needed is the configuration file. We don't need any access to your repository: we just create forks and open pull requests, and the changes are reviewed by the maintainers.

Another question is where you can see the results. We run Copr builds, Koji builds and Testing Farm jobs, and you can see most of the feedback in the commit statuses — which is a bit more complicated with GitLab — but apart from that we also provide the Packit dashboard, a work in progress, where you can see all of the jobs with their respective logs. You can also subscribe to Fedora notifications.

And if something fails, we usually create an issue. You can configure this: if a downstream job fails — pull from upstream, propose downstream, or Bodhi update creation — you can configure any repository where we create the issue (a configuration sketch follows below). And of course failures happen; there are many services involved, and there's the network, so these things can happen — only this week we had two GitHub outages. So what can you do? In case of GitHub you can just retrigger via the re-run checks, since GitHub offers this functionality, and apart from that you can post a comment: if it's prefixed with `/packit`, we react to it and do our job there.

All right, since this is automation, one thing a developer needs is very good documentation, so that they know how they can customize it and what it allows them to do. We have very extensive documentation — some parts are even duplicated, so there are many things you can find there — and if you search, you will find it. If not, just ping us and we will fix it. And this brings us to the current plans and what we plan for the future.
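As a sketch of the failure reporting just mentioned — the repository URL is hypothetical, and `issue_repository` is the configuration key as we understand it from the Packit documentation:

```yaml
# where Packit should file an issue when a downstream job fails
# (pull from upstream, propose downstream, Bodhi update creation, ...)
issue_repository: https://github.com/example-org/example-package
```

Retriggering afterwards works as described above: GitHub's re-run checks, or a comment such as `/packit build` on the relevant pull request.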
Lately you can see what our current goals are on the GitHub Kanban board — we have it public. You can see all of the current epics we are working on, how the work is going, et cetera, and of course you are welcome to contribute. Just to sum it up, we are right now working on making the downstream automation more robust: we need to retry a lot of stuff, for example on the network errors I mentioned, so we are working on retrying that. Apart from that, we are also working on monorepo support, which has been a frequently requested feature. We have also recently implemented support for building VM images with Image Builder, which is kind of in a beta phase right now, and we would also like to be able to test those created VM images in Testing Farm, which would be a great addition to our project. And I guess that sums it up. Also, if you are interested in Testing Farm, the next talk in this room is actually about it, so make sure to join that talk. And now, Q&A.

Q: Is there any script to generate, say, composed source files? Like, if you need to grab more files from upstream and create one archive from them — is it possible to automate that in Packit?

A: Yes. You can tweak the various steps we do — we have so-called actions, so you can redefine what we think should be done by default. Various things can be redefined; for example, instead of downloading the source, you can build the archive yourself, or do anything else (see the configuration sketch below).

Q: I mean, can you, for instance, download all the files and, I don't know, change them somehow?

A: Yes, yes. It needs some tweaking, but a lot of stuff can be done.

Q: And how is the spec file updated? Can you have some script to update it, or can you create some template for it?

A: By default we bump the version, correct the source, and add a new changelog entry, but you can also define this step yourself.

Q: I mean, can you have your own script?

A: Yes. Basically, in the actions you can define anything — any script, any command — and it will be run in an isolated environment. Any more questions? Maybe, worst case: let us know about your specific use case and we can happily help. That usually works best when people reach us via Matrix, for example.

Q: It may be a very silly question — how do you accommodate not just RPM-type packaging, but also Debian/Ubuntu-type packaging as well?

A: For now, we support just the RPM-based backends. For upstream projects you can still use Packit, for example as a CI system using Testing Farm, since that part is independent. But for now we don't have the capacity to support, for example, the Open Build Service. We would be really glad to support more, but as a team we don't have the capacity to implement that. If any student is willing to spend some time on it, for example as a bachelor's thesis or as a GSoC project, we would be really glad to mentor that.

Q: Are the Koji builds done automatically, or does a Koji build need to be triggered by the maintainer?

A: No manual trigger is needed — the Koji builds are also done automatically by us, as a reaction to the dist-git commits with spec file changes.
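To illustrate the actions discussed in the answers above, a hedged sketch: the action names are taken from the Packit documentation as we recall it, while the script path and the environment variable should be treated as assumptions rather than verified API:

```yaml
# when upstream tags look like v1.2.3 rather than 1.2.3
upstream_tag_template: "v{version}"

actions:
  # replace the default archive creation with your own script, e.g. one
  # that grabs extra files from upstream and bundles them into a single
  # archive; the path is hypothetical, and the script is expected to
  # print the resulting archive path
  create-archive:
    - ./scripts/make-combined-archive.sh
  # provide your own changelog entry instead of the generated one
  # (PACKIT_PROJECT_VERSION assumed to be exposed to actions)
  changelog-entry:
    - bash -c 'echo "- Update to ${PACKIT_PROJECT_VERSION}"'
```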
Q: Do you have any estimate of how many more builds you are creating compared to what a maintainer would do by hand — how much more workload do you put on the Koji system? Also, are you submitting the tasks with normal priority or with some lower priority?

A: Regarding the numbers: we know how many builds we are doing, but we don't compare it against what the users would do themselves, so no numbers for that. And regarding priorities, we use the default one. It can also be tweaked in the sense that we check who triggered or who merged the dist-git pull request, so you can react only to a subset of changes — for example, we don't automatically build every commit on the main branch or commits like that, so we don't mix and match with someone else's work. But if there is a request for some other tweaking, we can think about it or help to implement it. And regarding the Koji builds, we can also do scratch builds directly from upstream, if that is something interesting for someone.

Any more questions? Let's thank our speakers for their presentation.