So, my name is Pavel Valena, I work for Red Hat as a package maintainer. I take care of Ruby, RubyGems, Vagrant, Ruby on Rails, and various other gems in Fedora, and this talk is about a collection of tools and scripts I put together during my work. I found that enhancing the workflow was the way to go for me, and I hope you find the same. So, why did I do it? I wanted to do packaging consistently, in a standardized way, and to have more checks. If I automate something, I can run it all the time, not constraining myself by whether I remember to do it, or whether it makes sense; I just run it and I get results. And I also like it: I like getting stuff done, seeing how everything works by itself, and being able to do new things, like rebuilding all the gems we have in Fedora with all the new versions to see how they work, and using them even before they are released. Even if the updates don't get merged, I can already use or test them in the meantime. So, what's the codebase? It's something that I just collected, and it's subject to change; I tried to keep every script minimal and every check separate, and pull requests are welcome. It's mostly in shell, using various tools like Mock and fedpkg, and it's highly specialized for Ruby gems, but there are also some generalized tools, for example for working with Copr. Workflow is something that should, from my point of view, be enhanced all the time, so that's the main thing here, I think. So let me know if you know a better way to do something. And why a documented workflow? There are many ways to do something, but there should, I think, be a best way to do it, maybe not for every package, but in some general way there should be guidelines, and they should all be in one place, from my point of view. So this is the codebase, in case you're wondering; you can give it a go. And now to the details: what it actually does.
So the usual workflow is: I check for updates, try updating, run some tests, create a pull request, the pull request gets reviewed, and it gets merged and built. What I automated actually stacks on top of each other: there's a script to check all the updates, one to run all the bumping, building and testing and to create pull requests, and together they form a pipeline which can even merge and build by itself. So how does it look? I have a check-update script which runs in a loop; it checks whether there is an update on rubygems.org, and if an update is detected and there's no pending pull request yet, it runs the update. That is generally a complex way to gather logs from various tools and to create a pull request, if the update succeeds, the Copr build succeeds, the Koji build succeeds, and all the checks succeed. So that is how the pull request gets created. It's not like every script does everything itself; they are layered on top of one another, so the functionality can be reused again and again. In the second step there's another script running in a loop, which checks whether there's a build present for the commits in my branch, which I call the rebase branch. When there's a commit but no build, it checks the status of the pull request; if there's an LGTM in the comments, it actually merges the pull request, and if there's an error, it doesn't merge anything, so don't worry about it. Then it runs a build, so the update gets built and merged, and even the bug gets closed, because the bug is referenced in the changelog. So if the pull request gets created and someone else writes LGTM, my script merges and builds it; I didn't actually touch anything, and the package was updated, reviewed, and tested.
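The update-check step above essentially boils down to comparing the packaged version against the latest release on rubygems.org. Here is a minimal sketch of that comparison in shell; the `needs_update` helper and the commented-out rubygems.org API fetch are my own illustration, not the actual script:

```shell
#!/bin/sh
# Return success (0) when the upstream version is newer than the
# packaged one. sort -V gives proper version ordering (1.10.0 > 1.9.0).
needs_update() {
    packaged="$1"; upstream="$2"
    [ "$packaged" != "$upstream" ] && \
        [ "$(printf '%s\n%s\n' "$packaged" "$upstream" | sort -V | tail -n1)" = "$upstream" ]
}

# Hypothetical fetch of the latest version from the rubygems.org API:
#   upstream="$(curl -s "https://rubygems.org/api/v1/gems/$gem.json" \
#       | grep -o '"version":"[^"]*"' | head -n1 | cut -d'"' -f4)"

if needs_update "1.9.0" "1.10.0"; then
    echo "update available"
fi
```

Only when this reports an update, and no pull request is pending, does the heavier update machinery run.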
But there are also standalone scripts which are used by the tooling: a tool for Copr builds, which runs the build, fetches the logs, checks whether the build succeeded, and stores the logs on the host; the same for Koji builds; various tools for working with Pagure, and for getting the bug that says a package update is available; and also a forking tool, because I need pull requests created, but I don't always have a fork already. As I run this on all the Ruby gems in Fedora, I often don't even know what a gem does or who maintains the package, so that's a necessity. What about bootstrapping? There are also packages which depend on each other; I said I'm a Ruby on Rails maintainer, so I did work on this, and I found out that I can use the same tooling for package updates. I have a script for the sources preparation and for handling the build order, and a script doing the bootstrapping, but that's essentially it. There's some testing on top of all the packages, and after building and testing everything in Copr, I simply build it in a side tag, and it gets merged into Fedora. So, what are the caveats? It's still a prototype. There are some hardcoded variables, but I try to eliminate them all the time. There are some dependencies which are needed to run this. There are also mock configs; I expect various Copr repositories to exist, but I would like all of those to be configurable, as well as the folders for results and logs. Since this is a new upstream project, I might have missed some scripts that I did not include, because I have a lot of scripts on my machine internally. As I've said before, this is mostly aimed at Ruby gems, so I'm very interested in enabling other packages, general packages without the rubygem- prefix, but that's not done yet. As always, there are some configs that need to be handled, like being able to create the pull request, so you need an API token and mock configs.
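Handling the build order for interdependent packages like the Rails stack is essentially a topological sort, and coreutils' `tsort` can do that directly. A sketch, assuming the dependencies are kept as "dependency dependent" pairs (the gem names below are made-up examples, not the real dependency data):

```shell
#!/bin/sh
# Each input line is "dependency dependent": the first package must be
# built before the second. tsort prints one valid build order.
build_order="$(tsort <<'EOF'
rubygem-activesupport rubygem-activemodel
rubygem-activesupport rubygem-activejob
rubygem-activemodel rubygem-activerecord
EOF
)"
echo "$build_order"
```

With real dependency pairs extracted from the spec files, the same one-liner yields the order in which the Copr builds have to be submitted.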
I didn't do any integrations yet, so I will certainly be investigating how to integrate with rebase-helper and what could be reused. The same with Packit, but that's still more GitHub-focused, so I'm not sure how or why I would do that. tmt is good for testing; I plan to do testing better in CI, but I don't currently know how to get the tests from CI. When I get them under tmt, I can run them on Testing Farm, which would be nice. And there's probably an endless amount of other tooling that could be integrated, or that could replace some of the scripts, so let me know. Now for some of the demos; I hope my screen will be legible. So, top left, there's the net-ssh package, which is up to date. I simply run the update script, which tries to update the package, but there's no need to update because the version is current. The second half of the screen is a package that does need to be updated, with the same script. It pulls the upstream GitHub repository, because the tests are packaged in a standalone file: it actually parses a command out of the spec file and executes it to get an archive, then it creates the sources file and a .gitignore entry, does the commit with all the changes, and runs a Copr build, which unfortunately fails, as there is some issue. Following on, this is the update script: it checks for the updates and basically runs what you have seen on the previous screen. There are some parsing issues, but apart from that, there was an attempt at an update, which also fails, as there is manual work to be done. But I didn't have to investigate beforehand; I simply get a log saying that the update failed and needs manual intervention. There are various options; for example, if I just want to test some package, I simply run it. "Pavel, would you mind making it bigger?" Yeah, I can make it bigger, sorry, I have a large screen. So, as you can see, it runs the mock build as well. Let's get back to that later.
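The "parses a command out of the spec file" step can be as simple as pulling out the comment lines that precede the tests-tarball `Source` entry and running them. A rough sketch; the comment convention, the helper name, and the spec file name are all my assumptions about how such a script could look:

```shell
#!/bin/sh
# Ruby spec files often carry the commands to regenerate the tests
# tarball as '#' comments right above the Source1: line. This prints
# that comment block, stripped of the leading '# ', so it can be
# reviewed and then executed.
spec="rubygem-example.spec"   # hypothetical spec file name

extract_source_cmd() {
    awk '/^Source1:/ { for (i = 1; i <= n; i++) print buf[i]; exit }
         /^# /      { buf[++n] = substr($0, 3); next }
                    { n = 0 }' "$1"
}

if [ -f "$spec" ]; then
    extract_source_cmd "$spec"
    # eval "$(extract_source_cmd "$spec")"   # actually run it (with care)
fi
```

Executing arbitrary commands from a spec comment is obviously something to do only inside a throwaway environment such as a mock chroot.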
This is the check-build script, which goes through all the same repositories and checks whether the expected NVR matches the current NVR in Koji, and whether there's a commit in the pull request that corresponds with my branch. It does various other checks, like whether the email in the commit is the same as in the pull request, and in the changelog entry, and that it's my commit. Yeah, this is the first one. So, these are the checks that get run, for example the dependency check, which succeeded because there are no dependencies which should block this, and various other checks. It didn't run the Copr and Koji scratch builds for some reason, but this is the error message; it's the same message that will be in the pull request. There are a lot of pull requests on my src.fedoraproject.org account. So, how about Rails? It actually runs the builds for Rails now, because I did start an update: I updated Rails two days ago, but now another version is out. I simply run the script, which resets the folders as symlinks, so as not to check out the repository every time; I think it hardlinks one, because it's half a gigabyte, and it wouldn't be nice to check that out for every package. Then it runs the update. It actually tries to recreate the original source RPM first, and then creates the proper tests, tarballs, et cetera, same as a regular update. We also have a gem comparison tool, which compares the differences between gems, and so on, until it rebases all the packages, and then I can run the real build. There are also monitoring tools which I have. There's a status tool, which checks for me what the status of the gems in the current directory is, or of any packages generally. I specify the branch which I want to be on, and if there are some commits, it writes a one-line message per commit, for me to know what the status of the repository is.
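The NVR check described above needs little more than splitting an NVR and comparing it with what Koji reports. A trimmed-down sketch; the package name, the expected version, and the commented-out `koji latest-build` query are placeholders of mine, not the actual script:

```shell
#!/bin/sh
# Extract the version from an NVR such as rubygem-foo-1.2.0-1.fc39:
# it is the second-to-last dash-separated field.
nvr_version() {
    echo "$1" | sed 's/^.*-\([^-]*\)-[^-]*$/\1/'
}

# Hypothetical query for the latest build of the package in Rawhide:
#   actual="$(koji latest-build rawhide rubygem-foo --quiet | awk '{print $1}')"
actual="rubygem-foo-1.2.0-1.fc39"   # placeholder for Koji's answer

expected_version="1.2.0"
if [ "$(nvr_version "$actual")" = "$expected_version" ]; then
    echo "Koji has the expected version"
else
    echo "version mismatch: $(nvr_version "$actual") vs $expected_version"
fi
```

The same split also backs the email and changelog-entry comparisons, which just match strings from `git log` against the pull-request metadata.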
And there are other status tools as well, for example the one for monitoring testing. This one actually lists all the failed packages from the Copr logs: it parses the Copr logs it downloaded, and those are the packages that failed; for the other packages it either succeeded or is still running. And if I give it the name of a package, I get the log with the failure. "Hey Pavel, just a gentle reminder that you have five minutes left." Okay, yeah, this is actually right on time. So, questions? You're welcome. There's actually a lot of stuff I didn't go into: a lot of other tools, and various options for all of the tools that I've mentioned that extend the functionality. For example, newly I check whether the mock buildroot is already being used, and I have multiple mock buildroots sharing the same cache data, so I can spawn buildroots which fit my needs, for example with the Copr repository attached, and work on multiple packages at the same time.
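The failed-package report is basically a grep over the downloaded Copr logs. A simplified sketch, assuming a hypothetical `logs/<package>/builder.log` layout (the real layout and failure markers may differ; `error: Bad exit status` is rpmbuild's usual build-failure message):

```shell
#!/bin/sh
# Scan each downloaded builder.log and print the name of every package
# whose log contains an rpmbuild failure marker.
failed_packages() {
    for log in logs/*/builder.log; do
        [ -f "$log" ] || continue
        grep -q 'error: Bad exit status' "$log" && basename "$(dirname "$log")"
    done
    return 0
}
failed_packages
```

From there, showing "the log with the failure" for one package is just printing `logs/$package/builder.log`.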