Hello, my name is Martin Sehnoutka, I work as a software engineer at Red Hat, and I would like to share with you my experience with Packit, a tool for easier integration of your upstream projects into Fedora. First of all, let me stress that there is a Hamming distance of one between Packit and Packet; Packet is a sponsor of All Systems Go, and it's something completely different. Today I will try to explain what problems Packit is trying to solve and how you can get started with both the CLI tool and the service.

First, let me briefly introduce packaging. I think you all know that packages are a very popular way to consume software, and I personally almost never install things from upstream, except maybe some nightly compilers or something like that. So it would be really nice if the interaction between downstream and upstream were a little bit better, because downstream maintainers often don't interact with upstream at all. I have a nice example from software I used to work with: a very popular FTP daemon implementation called vsftpd. This is our package, and as you can see, we carry almost 60 patches on top of the upstream version, because the interaction is, well, there is no interaction.

So when a packager wants to do an update, there are a few tasks to perform: download the source tarball, upload it to our dist-git repository, modify the spec file, refresh all those patches, build the RPM, create the update, et cetera, et cetera. And it's mostly boring, especially for upstream developers, because I guess if upstream developers tried to support all Linux distros, they wouldn't do anything else. That's why we are trying to help with packaging. So Packit has these goals.
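The manual update steps just listed correspond roughly to a fedpkg session like this. This is an illustrative sketch, not from the talk; the package name and version are made up, and the exact subcommands should be checked against the fedpkg documentation:

```shell
# Manual Fedora update -- the workflow Packit tries to automate (sketch):
fedpkg clone vsftpd && cd vsftpd        # check out the dist-git repository
# ...edit vsftpd.spec: bump Version, add a %changelog entry, refresh patches...
spectool -g vsftpd.spec                 # download the new upstream tarball
fedpkg new-sources vsftpd-3.0.4.tar.gz  # upload it to the lookaside cache
fedpkg commit -m "Update to 3.0.4" && fedpkg push
fedpkg build                            # build the RPM in Koji
fedpkg update                           # create the update in Bodhi
```

Every one of these steps has to be repeated for every release and every branch, which is the boring part the talk refers to.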
It's trying to partially automate the task of packaging software, and I must stress that it's only partial help, because it's not possible to completely automate the process of packaging. It's also trying to create easier interaction between downstream maintainers and upstream developers. And finally, in cases like the one I was showing with vsftpd, it provides an easier way for maintainers to manage dist-git, with something called source-git, which I will explain later.

You can use Packit either through the CLI or as a service. And again, please note that the Packit CLI is not a client for the service. Right now it's actually more capable than the service, because the service has some problems with authentication against Fedora infrastructure, so it currently cannot do everything the command-line client can do.

So what can the command-line client do for you? Well, as I said, it's about bringing upstream and downstream together. At the top you have the upstream repository; currently I think only GitHub is supported, maybe GitLab, but they are definitely working on it. If you want to, for example, create an update for a Fedora dist-git repository, you can generate it automatically from a GitHub release: whenever there is a new release, you just type `packit propose-update` and it will download the sources, upload them, modify the spec file, and propose this change as a pull request to Fedora dist-git. You can also use it to trigger builds in Copr, our community build service, so when you are working on new features in your upstream project, you can quickly see whether the RPM still builds. And you can also build the RPM in our build service and then propose the update.

This is how it looks when you run the Packit command-line tool, and it looks very simple, right? Well, unfortunately, in order to execute such a simple command, you need all of this configuration.
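With Packit, the flow above shrinks to a few commands. This is a sketch; the subcommand names are as I remember them from around the time of the talk, so check `packit --help` for your version:

```shell
packit status           # compare upstream releases with what is in dist-git
packit srpm             # build a source RPM from the current checkout
packit build            # build the RPM in the Fedora build service
packit propose-update   # open a PR against Fedora dist-git for the new release
packit create-update    # submit the built RPM as an update in Bodhi
```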
So if you want to know the status of your project in Fedora, you need to create a configuration file in `~/.config/packit.yaml`, you need to generate a GitHub token, and you need to generate a Pagure user token; if you don't know it, Pagure is the web UI we use for dist-git. Unfortunately, you also need a Kerberos ticket for Fedora, and you also need an SSH key in your Fedora account. That sounds like a lot of work, but unfortunately this is how Packit works right now. When you do all of this, you can, as I said, create a source RPM, propose the update, build it, and create the update in Bodhi.

So let me show you some examples. This is a dist-git repository, oh, sorry, this is our repository, that is completely maintained by Packit, and as you can see, all of the updates in the pull requests were generated automatically. I didn't have to do anything except accept these PRs. Well, this part is not really interesting; the interesting part is here: it was able to automatically update the spec file, create a changelog entry, and upload the new sources. Then, when you merge this PR, you can easily build the new RPM and propose it to Fedora.

Okay, let's go back. Yes, so again, this needs some configuration: there is this little YAML file in the upstream repository. Even though it's pretty long, I don't think it contains any magic. It basically just specifies where to find the spec file, and how the package is named in Fedora versus the upstream name, because sometimes there is a difference: the downstream package has a different name than the upstream project, simply because the name was already taken. The rest is just about which branches you want to build, et cetera. Nothing complicated.

Yeah, so this is what I have already mentioned: I don't think that all upstream developers have Fedora accounts, and unfortunately there is nothing I can do about that right now.
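As a sketch, the two pieces of configuration might look roughly like this. The key names are approximated from my memory of the Packit documentation, so treat them as illustrative and verify them against the docs for your Packit version:

```yaml
# ~/.config/packit.yaml -- per-user config with credentials (approximate keys)
github_token: <your GitHub token>
pagure_user_token: <your Pagure API token>

# .packit.yaml in the upstream repository -- per-project config (approximate keys)
specfile_path: vsftpd.spec
upstream_project_name: vsftpd      # name of the upstream project
downstream_package_name: vsftpd    # Fedora package name, if it differs
dist_git_branches:
  - master
  - f31
```

On top of this you still need the Kerberos ticket and the SSH key in your Fedora account, as mentioned above; those cannot be put in the file.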
You just need one. Or you can wait until the Packit developers remove this requirement.

And what do you do if you are a maintainer and you don't have access to the upstream repository? This is a common case for maintainers at Red Hat. In that case, you can create something they call a source-git repository. It's a fork: on GitHub, you create a fork of the upstream repository, you rebase commits that contain the spec file and maybe some downstream changes on top of it, and then you automatically generate the content of the dist-git repository from it. Even though it sounds almost the same, it's much easier. Actually, when I was maintaining the FTP daemon, I did exactly this just for myself, but it was a little fragile because it was a shell script. This time it should be more robust, and I hope it will work.

And finally, what can Packit do for you? As I said, it's about partial automation, and people often ask me whether Packit can generate the spec file for them. The answer is no, because in the general case it's impossible to generate the spec file. For example, in the project I maintain, the spec file simply contains information that cannot be found anywhere else, so it's impossible to generate it when there is nothing you can use as a source. But you can still use all those utilities for generating Ruby packages, Rust packages, Python packages, et cetera, et cetera.

But this still looks like a lot of work. Yes, microphone, please. There.

Audience: Theoretically, if the project had a Makefile with a DESTDIR option, couldn't you just `make install` into a temp directory and then figure out what it installed?

Yes, in case the project has a Makefile, it should be possible to determine what it installs. But, for example, we don't have a Makefile. We just don't.

Audience: No, I'm saying if the project is on GitHub, it probably has a Makefile. If I do a `make install` with DESTDIR, you can generate a spec file based on the file paths that were installed.
Well, that's the problem: it doesn't work like this for every project. Just as you have gem2rpm and rust2rpm, you could probably have something like make2rpm that would use Makefiles. But, for example, in the project I work on we have just a setup.py, and setup.py does not contain all the information, because it's not supposed to be used for a system-wide installation. So the spec file contains some data that cannot be found anywhere else, and it's pretty complicated to do this for an arbitrary software project.

But still, that's a lot of typing in the terminal, so can we do better? And of course we can, with the cloud, because that's the current answer to everything. You can use Packit as a service. It's available in the GitHub Marketplace under continuous integration, and you can install it and then enable it for your repository.

It currently has only a few features. The first one is build-on-push: whenever you create a new pull request in your project and push to that branch, it triggers a build in Copr, our community build service, so you can immediately see whether the project at least still builds. It looks like this, very simple: you push, and in a few minutes, or hours, you should get an RPM.

Now why am I saying a few hours, maybe? Well, Copr is sometimes a little busy; this is the task queue. If you are from the US, for example, you are lucky, because the peak is usually during working hours in Europe, so if you work during other hours, you should be fine. And yeah, the unfortunate thing about Copr working with Packit is that, in order to somehow manage the build instances, each Copr user can run only seven builds in parallel, and since Packit uses a single user for all its projects, it can build only seven projects in parallel. So if Packit becomes popular, this will become a problem.

And these are the features that it should have, or will have.
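Going back to the audience's DESTDIR idea: a minimal, self-contained sketch of staging a `make install` into a throwaway directory and listing what would become the `%files` section of a spec. The tiny Makefile here is made up purely for illustration:

```shell
tmp=$(mktemp -d)
# A stand-in project: a Makefile that installs one script.
# (printf is used so the recipe line gets a real tab character.)
printf 'PREFIX ?= /usr\ninstall:\n\tinstall -D -m 0755 hello.sh $(DESTDIR)$(PREFIX)/bin/hello\n' > "$tmp/Makefile"
echo 'echo hello' > "$tmp/hello.sh"
# Stage the install into DESTDIR instead of the live system...
make -C "$tmp" install DESTDIR="$tmp/stage"
# ...and the staged paths are what a generated spec would list in %files.
find "$tmp/stage" -type f
```

As the talk points out, this only helps for projects that actually drive their installation from a Makefile (or similar); a setup.py-only project gives you much less to work with.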
The first one, the blue one, should automatically create the updates, as I was showing earlier: whenever it receives a webhook from GitHub, it should create the PR automatically. But there is a problem: when you create the new release, you can just wait. There is nothing you can observe, nothing you can do to check that Packit is working. There is simply nothing. All you can do is contact the upstream developers and ask them, hey, is there any log in your cluster? So that's a pity. I tried to use this feature, and unfortunately it didn't work for me, so I had to generate the updates by hand.

Now, the other feature that should work is the other way around. When you change the spec file in dist-git, and this happens a lot, for example with mass rebuilds before a Fedora release, there is a new entry in the changelog, and you need to propagate this change to upstream. Packit again should automatically create a pull request for the upstream repository containing these changes. I can show you how it looks: this is the upstream repository, and when there was a change in dist-git, "Rebuilt for Python 3.8", Packit automatically generated this pull request on GitHub. So I could just merge it, and the downstream and upstream spec files were synchronized.

Okay, so, yes, and finally, this is not yet implemented: it should also create the new RPM automatically and propose it as an update. From what I've heard from the Packit developers, this is currently blocked on the authentication system used by our Fedora build service and update service, and I hope they are working on it.

Okay, this is what I was talking about. And of course, there are still some unresolved questions. For example, what to do with the RPM changelog: again, everybody has a different opinion, so there is simply no general way to generate the changelog so that everybody is happy.
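In the service, these per-repository behaviours are declared as jobs in `.packit.yaml`. A sketch, with job and trigger names approximated from the Packit documentation rather than taken from the talk:

```yaml
jobs:
  - job: copr_build             # build-on-push: build every pull request in Copr
    trigger: pull_request
    metadata:
      targets:
        - fedora-rawhide-x86_64
  - job: propose_downstream     # on a GitHub release, open a PR against dist-git
    trigger: release
    metadata:
      dist_git_branches:
        - master
```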
I'm personally happy with one line saying that there is an update to the newest upstream version, but some maintainers want a list of new features or something like that. So again, something that is very difficult to do in general.

Some future plans. As I've already mentioned during the talk, there are still some problems with the current implementation, so they should mostly stabilize it and fix bugs. I hope they will improve the user experience, because as I said, when you do something and it doesn't work, you have no idea what went wrong. For example, there were failing Copr builds for a few weeks, and we had no idea what was wrong and couldn't do anything about it. So I hope this will get better. And finally, this is something I'm really looking forward to: they promised to spin up a Fedora VM for each Copr build, so that when you have the RPM, you can actually run some integration tests. At least for me, that's the point of building the RPM: so I can test that it still works with the rest of the system. That would be really, really nice.

Finally, the Packit developers would like to hear your input. First of all, is this interesting for you? And if yes, are you willing to include the packit.yaml configuration file in your upstream repository? And are you interested in knowing whether it is still possible to build your project for Fedora and whether it works with the rest of the system? You can use their GitHub issues, I use them a lot; here you can see the URL, github.com/packit-service, or you can use the IRC channel or this email.

Yeah, I was a little bit faster than I expected. So thank you, that was it.

Audience: If the goal is to also automatically build upstream stuff, I think this solution would actually fit 70 or 80 percent of upstream software, which is better than doing everything manually. And the other thing I was wondering: how would that solve your FTP daemon issue, where you had 50 patches?
With the source-git approach, you create a GitHub repository with the sources, you mark the commit that is the upstream release, and on top of it you have all those patches, but this time as commits. Then you generate the content of the dist-git repository from this standard Git repository. It's actually easier than — no, in this case not, because upstream releases new versions as tarballs and there is no upstream Git repository. Oh yeah, I should probably clarify: in this particular case they don't have a Git repository, but if you have a project with 50 patches, then there usually is a GitHub repository or whatever. For example, if there is a project that has a GitHub repository and doesn't want to include the packit.yaml file, you would fork the project and then again rebase the commits with the packit.yaml file, the spec file, and the downstream patches on top. So whenever there is a new release, you just pull in the changes from the upstream repository and rebase the patches again. I know it sounds almost the same, but when I was maintaining these packages, it was actually much easier than maintaining all those files by hand.

Audience: Hi, yeah, I wanted to ask, because you mentioned adding a packit.yaml file, which is necessary, but there is also the spec file. Are there restrictions on the spec file? Must it be just one source, or can it have other sources and patches? Because all of that goes into the upstream repository; it's not really pulled from the Fedora side in Pagure, right?

Yes. Actually, you don't even have to have the spec file in the upstream repository. They have a concept of actions, or hooks: when Packit triggers some action, it can run a little script written in the YAML file, so you can use curl, for example, to download the spec file from somewhere else.
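The actions idea can be sketched like this in `.packit.yaml`. The hook names `post-upstream-clone` and `get-current-version` are how I recall Packit naming these extension points, and the URL and script paths are made up, so check the Packit docs before copying this:

```yaml
# No spec file in the upstream repo? Fetch it in a hook instead (sketch):
actions:
  post-upstream-clone: "curl -O https://example.org/specs/myproject.spec"
  get-current-version: "python3 scripts/version.py"   # custom versioning script
specfile_path: myproject.spec
```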
So basically you don't even have to include the spec file in the upstream repository, but the packit.yaml file is mandatory; and if they don't want to include even the YAML file, then you have to fork it and use the source-git approach.

Audience: Okay, two questions. First, about automatic versioning: you said that Packit automatically increases the version, but the versioning scheme may vary between packages. How do you deal with it when Packit gets the versioning wrong?

Yes, so this is again a complicated question, and again, you can use a custom command. For example, we used a Python script to generate the new version, and Packit would just call this script to generate the version for it, because in the general case you have no idea how to automatically generate the new version. So whenever upstream uses some unknown or specific versioning, they can always include their own script to generate the version.

Audience: Okay, thanks. And the second question: any plans for non-Fedora spec files?

Non-Fedora specs. Very good question. I was wondering about this as well, but I don't think they have any plans right now, because they are pretty busy just implementing features around Fedora. So I think they don't have any plans right now, unfortunately. But yeah, I was thinking about this as well, because for example I was maintaining Wireshark, and they have a spec file in their upstream repository that can be used with both SUSE and Fedora. Therefore I don't think they would be interested in changelog entries like "Rebuilt for Fedora" or something like that.
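The source-git flow described above can be shown as a self-contained Git sketch. Repository names, tags, and file names are invented, and the local "upstream" repo stands in for the real upstream project:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# A stand-in for the upstream project: a repo with a tagged release.
git init -q upstream
git -C upstream config user.email upstream@example.com
git -C upstream config user.name Upstream
git -C upstream commit -q --allow-empty -m "upstream work"
git -C upstream tag v1.0.0
# The source-git fork: packaging files live as commits on top of the release tag.
git clone -q upstream source-git
cd source-git
git config user.email packager@example.com
git config user.name Packager
git checkout -q -b source-git v1.0.0
touch .packit.yaml project.spec
git add . && git commit -q -m "Add packaging files"
# Upstream publishes a new release...
git -C ../upstream commit -q --allow-empty -m "more upstream work"
git -C ../upstream tag v1.1.0
# ...and updating the source-git branch is just a rebase of the packaging commits.
git fetch -q origin --tags
git rebase -q v1.1.0
git log --oneline   # the packaging commit now sits on top of v1.1.0
```

Downstream patches would be further commits on the same branch, replayed by the same rebase; the dist-git content is then generated from this repository.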
Audience: I mean, in the end there are already a lot of CI systems, like Travis or CircleCI or AppVeyor, so some projects are willing to add even more configuration files for CI stuff. But I think you might be able to look at those existing files and, if the automatic detection works and there are build steps in them, use those for this integration. And second, if you build stuff, you of course need to make sure there are no false positives, so maybe there should be some kind of database where you log whether there was a successful build, and only afterwards, if it breaks in the future, you report issues, and not before, to avoid false positives.

Yeah, I'm pretty sure there have been a lot of efforts like this. As I said, when I was a package maintainer, I tried to implement something like this myself, using whatever was available, but none of it works for every project. So this is a brand new project to solve this, again.

Any other questions? There is one over there.

Audience: I missed some of the talk, so maybe this has been answered before: in Packit as a service, are there any plans to allow Koji builds instead of just Copr builds?

Yes, there are, but there is a problem: if you want to trigger a Koji build and create the update, you need to authenticate using Kerberos, and as far as I know there is currently a problem with running Kerberos inside an OpenShift cluster. They are trying to debug what's wrong and how to do this, but there is no specific date I could promise you for when this will be available, and that's just for the Koji builds.

So if there are no more questions, thank you for listening.