Hello everyone. I'm Stef, and I have loved putting Linux together, integrating it. I've contributed to over 100 different projects in the open source ecosystem, and that's in many ways why I was one of the folks who built Cockpit. Cockpit brings together a whole bunch of stuff and has done awesome things, but it talks to about 90 to 100 different parts of Linux in order to do its job. When we were doing that, we had to integrate it, we had to test it, we had to make sure it worked together, and so I got dragged into the strange world of continuous integration and testing. We're trying to make the operating system more integrated and work better together. So I'm a CI freak, and I often get teased about this. I work for Red Hat. Hey, I'm Tomasz, and I like containers. So, a self-maintaining package. What do we mean by that? What we really want is for the package to do its dirty work itself, to have that taken care of for you automatically, so that instead of something you have to take care of bit by bit by bit, you have something you train and take care of at a much higher level. But why would you want such a thing? Why are we looking at such a thing? Why is it an interesting problem for us to solve? It's much more fundamental than that. We bring together tons of packages into a Linux distro, Fedora. There's, what is it, five digits worth of packages, somewhere between 15 and 20,000, and we integrate them. And yet most of those packages upstream do not have any immediate feedback on how they affect the rest of the distribution, the other 15,000 packages. We all contribute back to upstream individually, and this is awesome. You can read it right in the basic description of how to be a package maintainer: send your work upstream. But as a distribution, we do that very, very poorly. The distribution itself is not really represented in the upstream projects; it's not on their radar.
They know it exists, but it's not in the workflow of those projects. Some upstream projects bring it into their workflow. I know that systemd does this. Cockpit does this. There's a bunch of them that do this, where a pull request, a proposed change to the project, immediately gets tested. But they had to figure out how to do this themselves, and it takes forever; it took them six months to a year to implement something like this. They figure out how the change works on the distribution, Fedora, Debian, RHEL, whatever, and bring that feedback back immediately to say, hey, you broke shit. But it's tough for this to happen for each of those projects. It's difficult to make this work. So let's be clear. We probably have a lot of people here who are excited by automated packaging. And that's nice, but keep in mind, it's not exciting for most upstream projects. It's like the nasty stuff that they have to do. It's cleaning up the mess of the baby. The baby is nice, but they don't want to keep taking care of that part. That part's not the highlight. So having the latest bits integrated with the rest and usable, that's exciting, but the mechanism to do it is not. We already talked about the example of systemd, but keep in mind, systemd, awesome as it is, is completely inert by itself. It needs a whole bunch of other things to run. It needs to be integrated with those things, and most importantly, when it breaks shit, the people working on it need to know right away. So what if you could immediately know if your upstream change works in Fedora? What if your new upstream release automatically landed in Rawhide? What if we had something like this, just a mock-up: a pull request happens, a change is proposed, and a Fedora packaging service comes along, similar to Travis, similar to CircleCI, Semaphore, and so on, and says, we packaged this, and here you can try it out, but it doesn't work.
Or: we packaged this, and it will land in Rawhide when this pull request is merged. What if we could do that? What would we need to make this happen? Tomasz, what would we need to make this happen? Okay, let's go through the list. First of all, it's not an easy task, as you can imagine. We need to package the upstream software, right? So in Fedora, for example, we need spec files. Then we need tests: okay, we packaged it, now we need to verify that the software works. So we need tests. What's next? We have these two, which is pretty nice, but if we don't use the spec file, don't run the tests, and just blindly put it into the distribution, we have no idea whether it works. So we need to build a gate. We need to make sure that if the tests are failing, we don't put that content into our distribution, and we keep going until all is green. And finally, it would be very nice if all of this was done automatically. So I, for example, as a package maintainer, wouldn't need to go and fetch those tarballs and edit these lines in spec files, and then type build and wait, like, 30 minutes, and then it failed, and now I need to do it again. I would love to have a packaging service which would do all of this for me, and all I would really need to do was say: yeah, I approve this change. Do it. Let's talk about all these points a little bit more. So, spec files. Wouldn't it be nice if we had spec files upstream? I can see that this is a very controversial topic, because some upstream projects don't even care about spec files. They don't understand them. Or maybe they think they understand them, and then you try to consume them downstream and realize that they are trying to support, like, five different distributions, and the spec file is horrible and unusable.
So maybe spec files upstream is not the best solution; somewhere it can work, somewhere it doesn't. There's also another way to solve this. In Fedora specifically, we have a bunch of tools; I don't know what they are officially called, but I call them spec generators. As input, you give the name of the upstream project; as output, you get a spec file. So we could use such a mechanism: for example, there is a new package on PyPI, I use the pyp2rpm tool, I get a spec file, and I can use it right away and build a package in Fedora. Obviously, one problem is the changelog, because you need to populate it, and if the changelog doesn't make sense, if it's, like, 1,000 lines of commit messages, that's not useful, right? So we need to figure out how to do it. The other thing is the release number in name-version-release. The release number is specific to the build system, and the build system cares about it. So why should we, as package maintainers, have to manage release numbers? They should be automatically populated by the build system. Okay. Tests. Upstream projects have tests, right? So we can easily run them and see if the software works in our environment. Then we need distribution tests, or rather we already have them, as Adam Williamson spoke about in his talk. So we just run all these tests, and when they pass, we are pretty sure that the software works in Fedora Rawhide, or in the distribution of our choice. But the thing is that we need to use the tests which are coming with the change. So if there is a new upstream release, we should use the tests which are coming from that release. And finally, every project runs or invokes their tests in their own way. So we need a standard way to invoke them. For example, we would have a definition that you have to run make tests, and everything will run. That would be the API. So we need such a standard way.
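The spec-generator idea above can be sketched very roughly: a script fills a minimal spec template from package metadata and leaves the release number and changelog to the build system. Everything here is an illustrative assumption, not the output of any real generator; the %autorelease and %autochangelog macros stand in for the "build system owns release and changelog" idea.

```shell
# Hypothetical sketch of a "spec generator": fill a minimal spec template from
# package metadata. The package name, version, license, and URLs are made up.
name=example-lib
version=1.2.3
cat > "${name}.spec" <<EOF
Name:           ${name}
Version:        ${version}
Release:        %autorelease
Summary:        Example generated package
License:        MIT
URL:            https://example.com/${name}
Source0:        https://example.com/${name}-${version}.tar.gz

%description
Generated package; release number and changelog are owned by the build system.

%changelog
%autochangelog
EOF
echo "generated ${name}.spec"
```

A real generator would of course pull the name, version, license, and dependencies from the upstream metadata (PyPI, crates.io, and so on) rather than hard-coding them.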
So let's look at a diagram of how this would work. The whole nice green box is our upstream project. We have a bunch of different branches for the releases of our software. And you would have automation to bring it automatically into Fedora dist-git, into the different Fedora releases. In this case, the upstream project is all cool: they put the spec file and the test wrapper, the standard way to invoke the tests, in their branches as part of their development. Many projects do this. We know of many that we maintain that do this; this is not fictional. And you can see certain branches on the Fedora side tracking that automatically, without any intermediate party. The automation would need to take into account who made the change, whether they put a GPG signature on a certain tag, or whether a certain identity pushed into the Git forge. And then it could land in these branches in Fedora. Okay, thank you. What if the upstream project doesn't care about spec files and doesn't want to have spec files in their repository? Then we can create a new Git branch, on GitHub or GitLab or Pagure or somewhere else, and take all the upstream code plus all our downstream changes, which is the spec file, tests, test wrappers, and even additional commits which are fixes on top of the master branch. You would have such a branch, use it to track the upstream release, and use it downstream. It would be a single branch, and we could use it to populate multiple branches. The benefit is: for example, there's a new upstream release, and you need to populate three different branches in Fedora. This way, you would only set it up in one, and you would benefit in all three. All right. And this really is using Git in the way that God intended. Well, Linus intended, same thing.
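A minimal sketch of that source-git style branch, with made-up repo, branch, and file names: the downstream branch starts from an upstream tag, layers the packaging bits on top, and can then seed several Fedora release branches.

```shell
# Sketch of a source-git style layout; all names here are illustrative.
git init -q srcgit-demo
git -C srcgit-demo -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "upstream: initial code"
git -C srcgit-demo tag v1.0.0        # in real life this tag would be GPG-signed
# Downstream branch: the upstream tag plus the packaging bits on top
git -C srcgit-demo checkout -q -b fedora/rawhide v1.0.0
echo "Name: demo" > srcgit-demo/demo.spec
git -C srcgit-demo add demo.spec
git -C srcgit-demo -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "downstream: add spec file and test wrapper"
# One downstream branch can seed several Fedora release branches
git -C srcgit-demo branch fedora/f30 fedora/rawhide
git -C srcgit-demo branch fedora/f29 fedora/rawhide
git -C srcgit-demo branch --list 'fedora/*'
```

When a new upstream tag appears, the automation would rebase or re-branch the downstream commits onto it, verify the tag signature, and fan the result out to the release branches.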
You have a branch, you have another branch with some different stuff on it, you may push this to a different repo than the upstream one, and you're essentially using the tools, including GitLab and GitHub, and the standard workflows that people are used to, in the way that they were designed. Okay, so the other thing we need to make all of this happen is gating of Rawhide. Right now, Rawhide is not gated. Whenever there's a new upstream release, it lands in Rawhide, and that's it. It's very easy to break Rawhide if your upstream release changes something. So we need to build that gate and use it. We will do it together with Fedora engineering; we already started the discussions, and hopefully it will be done sooner or later. With the gating, we get increased stability, because only proven, tested content will land in Rawhide. At the start, we can't enable it on everything. We'll probably start with some core packages, some important packages, and then we start to onboard more and more. An important thing is that if you are the owner of a package, and some other package is, for example, a dependency of yours, you can contribute tests to that package and say: okay, please also include my test suite when you are updating your package. And whenever my test suite breaks, it means you probably introduced a breaking change; an issue gets filed, and we can work on it together. But I don't want to find out after it's already landed in Rawhide, and it's already broken, and there are bug reports coming, and there's fire on the roof, and I need to fix it very quickly. Let's catch it while we are still working on the code. And finally, we want the automated packaging service. We call it Packit; that's the name of the project and of the team. It will be a set of tools; we are just starting.
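At its core, the gate described above is just: run every test that ships with the change, and refuse to let the update land unless all of them are green. A toy sketch, where the tests/ directory and the outcome file are made-up stand-ins for the standardized test entry point and the real gating machinery:

```shell
# Toy sketch of a Rawhide gate; the layout and messages are illustrative only.
mkdir -p tests
printf '#!/bin/sh\nexit 0\n' > tests/smoke.sh   # placeholder for a real test
chmod +x tests/smoke.sh
failed=0
for t in tests/*.sh; do
  "$t" || { echo "FAIL: $t" >&2; failed=1; }
done
if [ "$failed" -eq 0 ]; then
  echo landed > gate-result
  echo "all green: update may land in Rawhide"
else
  echo blocked > gate-result
  echo "gate closed: update stays out of Rawhide" >&2
fi
```

The point of the single entry point (the "make tests" convention above) is exactly that a gate like this can run every package's tests, including tests contributed by dependent packages, without knowing anything project-specific.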
It will be a set of tools you can easily run on your laptop to do all the automation, or we will provide it as a service: we will run it for you, you can just enable it for your project and use it, and all your packages will be updated. The hardest thing about the project was the name; we were trying to figure it out for, like, three months, but finally that's it. One of the things we would love to explore is opening pull requests for new upstream releases: when there's a new upstream release, the tool or the service would create a pull request with all the changes, and you as a maintainer would just review the changes and say, yeah, okay, it looks good, all the tests are passing, let's just ship it; or no, it's broken, I need to fix it, and you just fix it. At the same time, Fedora engineering, with one of their services, is trying to do the same thing, so in the end it's an implementation detail who actually implements it, whether it's us or Fedora engineering; we just need the feature. We also want to bring feedback from downstream back to upstream. Whenever there is a new upstream release and it breaks Fedora, we can easily create an issue on their tracker, or send them an email and say: your new upstream release doesn't work; maybe it concerns you, maybe it doesn't. And it didn't land? Yeah, absolutely, it didn't land, right. We tell them: here are the logs, please try to figure it out, and maybe in the next release it will work. Okay, so what are the benefits? When you are using this workflow, when you are using GitHub to develop your packages, you can use the tools you know. For example, right now it's very hard to contribute to Fedora: you need to become a member, you need to use the tooling. If you are using GitHub for that, you just clone the repo, make the changes, push, and create a pull request; it's very easy.
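The release-watching part of that workflow boils down to: compare the latest upstream version against the version in the spec file, and if upstream is newer, bump the spec on a branch and open a pull request for the maintainer to review. A toy sketch with made-up versions and file names; the actual pull-request step is only echoed here:

```shell
# Toy sketch of the update flow; versions, files, and messages are made up.
printf 'Version: 1.1.0\n' > demo.spec     # what downstream currently carries
upstream_version=1.2.0                    # would come from the upstream release tag
current_version=$(sed -n 's/^Version: //p' demo.spec)
if [ "$upstream_version" != "$current_version" ]; then
  # Bump the spec; a real service would do this on a branch and open a PR
  sed -i "s/^Version: .*/Version: ${upstream_version}/" demo.spec
  echo "would open pull request: update ${current_version} -> ${upstream_version}"
fi
```

The maintainer stays in the loop: the service prepares the change and runs the tests, but merging the pull request remains a human decision.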
So we can use the tools we know and keep using them. Everyone can contribute, and we can benefit from modern techniques like linters or CI, or I don't know what else. And finally, I can even go and fork some package, like systemd or the kernel, make some changes, push them, and see if it works in Fedora, or maybe even in RHEL. We are also planning, when someone does such a fork and changes something, to create a repository with the updated packages, so you can install it on your laptop and use it right away, with the changes you just made. The title of the talk is auto-maintain, and that might be, it's actually even confusing still for me, like what does it mean to auto-maintain. The thing is that you, as a package maintainer or upstream developer, are still responsible for the content; it's still your baby. We don't want to mess with your baby, we just want to keep the dist-git up to date. So whenever you make changes upstream or in the source-git repository, we just take the changes, move them downstream, and tell you the results. We never land anything broken; you are still making all the decisions; it's still up to you. We are just doing the hard work, we are just changing diapers. Yeah, it's your baby, we just want to change the diapers for you. We need t-shirts. As I said, this is not an easy task, this is not an easy change; there are still many things we need to address. For example, provenpackager changes in Fedora, where we need to change 3,000 spec files: could this system be used for such a thing? We don't know, maybe yes, maybe no. As Stef said, for some people packaging is not exciting, but for some people it is. What do we do about that? We don't want to take packaging away from people who love it, so maybe they won't want to use such a system.
And finally, this also means that we will close the gap between upstream and downstream, and again, this might be disturbing for some upstream communities. So we need to work and figure out how to do this. We are almost running out of time. Yeah, well, let's wrap up then. Do we want to go to Q&A? Well, let's just say, I think here's where we want to get to. Fedora should be the de facto place to land upstream work. It should just happen as a side effect of doing the work. If you have set everything up, if you are the right person whose identity Fedora has signed off on, it should just land. And it should be packaged as part of a complete Linux. We should provide the feedback that's necessary downstream and the automation that's necessary to accomplish this. So yes, let's go to questions. Okay, so how can you help? Give us feedback, please give us use cases, become an early adopter, or ask us questions right now. Yes, please. So the question is, which parts of this exist today, if any, and whether it's pie in the sky. I would say that 90% of the ingredients exist. Rawhide gating is one of the key things that doesn't exist yet, and we're basically tying them together. Adam wants to answer this? Another question. Okay, so we're not saying invent all the tools; it's tying them together in a workflow that has this effect on upstream. That's the job of the team: to work together with the people who own these tools, to bring them together and do that. So yes and no. Dominic has a... I just want to point out it's not completely pie in the sky, because we have packages that do that, so it's not completely made up. Yeah, by the way, a bunch of the examples I had, like Cockpit, land automated releases from upstream into Fedora, and of course into Debian as well, all of these places, automatically, every two weeks, without touching the Fedora tooling, just by signing a tag in Git.
Obviously, if people come and want to join in on the effort and work together on it, that's great. So it's obviously a harder problem than the cases where it's trivial to run the tests in any environment, but I know that Debian has solutions for this with autopkgtest. They do. They extract sources, sometimes even build them, and then run the tests inside of their environment; there are many solutions to this that we can try. In general, though, if we cover the ones that are easy in Fedora, the thousands of packages that do work, and leave the exceptions, so that we do the hard work of getting them onboarded later, we'll prove the idea before we try to solve every really tough problem. Any other question? How much time do we have? One more? Three minutes. We've got time. Dennis, that's definitely a good discussion point. So the question is, can we make some of this work for new packages, can we make the new-package workflow easier? That has been brought up, and I think it's plausible. And I think it's worth discussing. I don't think we have an answer there, but we definitely discussed this in the last couple of days. We haven't touched on it much, so we can definitely put it on the roadmap and start thinking about how to do it. That's a good question. So, how do we envision it working: upstream maintainers taking care of more Fedora packaging, or Fedora maintainers contributing packaging work upstream? Or both, right. In general, I think it's both, or all three, really. First: in many cases, if you can contribute upstream, and they're open to the idea of having a spec file there, and an invocation for the tests and so on, in a way that works, then yes, it's always good to contribute to the community directly. And especially for Rawhide, we want that to be tight. But if it doesn't work, then we have the option of branching upstream with Git and doing the work there.
And again, the human tasks, the creative tasks of packaging, crafting the spec file just so, and all of that, stay with the packager, and we let the mundane tasks be done automatically. That happens, and it lands in Rawhide. In addition, after a branch happens from Rawhide to a release branch of Fedora, there's a lot of good packaging work that needs to be done there. And although the same tooling, which you can run on your laptop and so on, is perhaps interesting to use there, it's not really the focus of this effort. There's tons of packaging work to be done there to make sure that security fixes are applied appropriately and the right decisions are made as far as rebases, backports, and so on. There's tons of activity for a distribution to do there. In fact, it's the main interesting part of the distribution, I would say: both the work of crafting how the distribution comes together, which we hope happens in source git, and then making sure it's well maintained and continues to work well after branching. We're out of time. Thank you for coming.