Hello everyone, my name is Sergey, and this is my colleague Pavel. Today we'll be talking about the automation journey we've had on the project we work on, Linux System Roles. Linux System Roles is a set of Ansible roles for managing Linux subsystems, such as networking, storage, and so on. In the modern world it is critical to automate low-level, labor-intensive, repetitive tasks, and we're lucky to have many tools that enable this automation, such as GitHub Actions, Packit, and so on. Today we will cover two main topics: the automatic release of GitHub repository content to Ansible Galaxy, and how we release the content as a Fedora RPM using Packit.

To begin with, here is how we release content on GitHub itself. We have a script that developers launch when they need a new release, and this script does three things. First, it uses the Conventional Commits format to decide what the new semantic version should be. Second, it generates the changelog for the new release. Third, it pushes a PR with the updated changelog and the new version to GitHub for review. Once the developer reviews and merges that PR, a workflow runs automatically and does two things: it creates a Git tag and a GitHub release, and it publishes the repository content to Ansible Galaxy.

First, though, I need to explain what the Conventional Commits format is. You can see the format on screen: the type of the change, then an optional exclamation mark, which marks whether the change breaks the API, and then the title of the PR. One note here: we initially used the Conventional Commits format for the commits themselves, but it turned out that commits are aimed more at developers; they contain many technical details that users don't care about at all.
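The PR-title format just described can be sketched as a tiny parser. This is only an illustration of the convention, not the project's actual release script; the regex and function name are ours:

```python
import re

# Conventional Commits-style PR title: "<type>[!]: <description>"
# e.g. "feat!: drop support for EL7" or "fix: handle empty config"
TITLE_RE = re.compile(r"^(?P<type>[a-z]+)(?P<breaking>!)?: (?P<desc>.+)$")

def parse_pr_title(title):
    """Split a PR title into (type, is_breaking, description),
    or return None if the title does not follow the convention."""
    m = TITLE_RE.match(title)
    if not m:
        return None
    return m.group("type"), m.group("breaking") is not None, m.group("desc")
```

For example, `parse_pr_title("feat!: drop EL7 support")` yields `("feat", True, "drop EL7 support")`, which carries everything the release tooling needs: the change category and whether it is API-breaking.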
So instead, we decided to use this format for PR titles, which can be higher level and simply describe the feature that the PR introduces. You can see an example of our new Git log using this format, or rather the titles of the merged PRs.

The Conventional Commits format, first of all, allows us to determine the next semantic version, and this is done using the type of the pull request. If it has the exclamation mark, meaning it introduces a breaking API change, we bump the major version; for features we bump the minor version; and for other changes we bump the patch version. And of course a patch can carry the exclamation mark too, in which case it bumps the major version as well.

The second thing that Conventional Commits allows us to do is generate the changelog. Again, we use the type to automatically put PR titles into the new features section, the bug fixes section, and the other changes section. On the right of the screen you can see the updated changelog with the formatted entries: the new release, the date, and all the sections we need.

To sum up the process: the developer runs the script; the script collects the PRs merged since the last release and processes them to identify the new semantic version and generate the new changelog; a pull request with the updated changelog is pushed to GitHub; and the developer goes and merges this pull request.

After this we have another layer of automation, which is a GitHub workflow. As you can see on the screen, this workflow runs on pushes to the main branch, and only on pushes that change the CHANGELOG.md file. It then creates a GitHub release and propagates the new repository content, in our case to Ansible Galaxy. From here on we continue with releasing the Fedora RPM using Packit, which Pavel will cover. So let me take it from here.
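The version-bump rules Sergey described can be captured in a few lines. This is a sketch of the logic, not the actual script; it assumes PR titles in the `<type>[!]: <description>` form shown earlier:

```python
def next_version(current, pr_titles):
    """Return the next semantic version, given the current version
    ("major.minor.patch") and the merged PR titles since the last release."""
    major, minor, patch = (int(p) for p in current.split("."))
    bump = "patch"
    for title in pr_titles:
        type_part = title.split(":", 1)[0]
        if type_part.endswith("!"):  # breaking API change wins outright
            bump = "major"
            break
        if type_part == "feat":      # at least one feature -> minor bump
            bump = "minor"
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

So a release containing only `fix:` PRs takes 1.2.3 to 1.2.4, one with a `feat:` PR goes to 1.3.0, and any title with the `!` marker goes to 2.0.0.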
So Packit is a tool that automates common packaging tasks, and the Packit service in particular automates proposing Fedora releases from GitHub releases. The service is triggered by a release on GitHub. It then updates the RPM spec file, which means it bumps the Version field and updates the changelog; it uploads the source tarballs to the lookaside cache; and it opens a Pagure pull request with those updates. It doesn't perform the update itself, which is why I say it proposes the Fedora release: a Fedora maintainer still has to review and merge this Pagure pull request into the Fedora dist-git. This is the only manual step in the process. Then, when the Packit service sees that the pull request is merged, it performs a Koji build and a Bodhi update.

Today there was actually a talk specifically by the Packit team about this tool; you can watch the recording, and they also have a booth. So I will not go into the details of how to configure the service, because you can check that talk or the documentation. I will only give you a very brief overview: you create a packit.yaml in the upstream Git repository and configure the propose_downstream job, which proposes that downstream pull request. Another option is to create the packit.yaml in the Fedora dist-git instead and configure a job called pull_from_upstream. This is useful if you don't have commit access to the upstream project, if you are not a member of the project, because then you can configure everything on the Fedora side. And you should create a packit.yaml in the Fedora dist-git anyway, because you need it to define the other two jobs, the ones that do the Koji build and the Bodhi update.

So this is quite simple, or at least it looks quite simple, and quite useful. But during this process we encountered a few problems, and I believe that if you are a Fedora maintainer interested in using Packit, you would probably encounter them as well.
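The dist-git variant of this setup can look roughly like the following packit.yaml. The job names (`pull_from_upstream`, `koji_build`, `bodhi_update`) are real Packit jobs, but the package name, URL, and branch choices here are placeholders, not our actual configuration:

```yaml
# packit.yaml kept in the Fedora dist-git (names are illustrative)
downstream_package_name: example-package
upstream_project_url: https://github.com/example/example-package

jobs:
  # Open a Pagure PR against dist-git when upstream publishes a release
  - job: pull_from_upstream
    trigger: release

  # After the dist-git PR is merged, build in Koji...
  - job: koji_build
    trigger: commit
    dist_git_branches: [fedora-all]

  # ...and submit a Bodhi update once the build succeeds
  - job: bodhi_update
    trigger: commit
    dist_git_branches: [fedora-branched]
```

The upstream variant is the same idea with `propose_downstream` in place of `pull_from_upstream`, with the file living in the GitHub repository instead.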
So I thought I would share what we encountered and how we resolved it. The first problem is where to actually maintain the spec file, because Packit by default assumes it can find it in the GitHub repo. But this means that any changes made in the Fedora dist-git, for example changes someone proposed via a pull request on Pagure, would get overwritten during the next sync by Packit from GitHub. The solution is to keep the spec file in the Fedora dist-git as the primary version. But then Packit needs the spec file and it is not available, so we fetch it from the Fedora dist-git when Packit does the update. Fortunately, Packit has actions, which are essentially hooks that can be executed during the process. So we set up a hook, a simple shell command, that downloads the spec file from the Fedora dist-git for Packit to use. And we actually need to download all the files that are included from the spec file as well, because we use includes in our spec file.

Another option is to configure Packit in the Fedora dist-git, as I showed on the previous slide, instead of in the GitHub repo, and to use pull_from_upstream instead of propose_downstream. The reason I haven't done that in our case is that this job was not available yet when I started this automation. So as you can see, automation, like any other software development, is subject to change, and you are often lagging behind the newest features available in the tools.

Anyway, another problem we encountered is the RPM changelog, because by default Packit collects all the Git commit message summaries from the GitHub repo and uses them as the changelog entry for the RPM, or it can optionally use the GitHub release description as the changelog entry.
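The spec-file-fetching hook can be expressed in packit.yaml roughly like this. `post-upstream-clone` is a real Packit action name; the URL and package name are placeholders, and a real setup would also fetch any files the spec file includes:

```yaml
# packit.yaml: fetch the authoritative spec file from Fedora dist-git
# before Packit processes it (URL and package name are illustrative)
actions:
  post-upstream-clone:
    - "curl -sfO https://src.fedoraproject.org/rpms/example-package/raw/rawhide/f/example-package.spec"
```

This way dist-git remains the single source of truth for the spec file, and Packit merely consumes a fresh copy on every sync.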
But both of those options produce quite verbose changelogs, which is contrary to the Fedora packaging guidelines: they state that changelogs should be brief, and in particular they must never contain an entire copy of the upstream changelog. Of course we wanted to comply with the packaging guidelines, so the solution was again a custom action, one that Packit calls when it needs to collect the changelog entry. It's a simple shell command that echoes "Update to version X", where the version is provided by Packit in an environment variable.

The next problem was related to multi-source RPMs, because our presentation is actually simplified: it talks about one repository on GitHub, but our RPM is assembled from many individual repositories. This means we have multiple Source tags, and we need to update them if any of the source tarballs changes, and Packit needs to upload those tarballs to the lookaside cache. To update them when any source tarball changes, we again use a custom action, one that regenerates the part of the spec file where we have the Source tags. That part is marked in the spec file; the spec file is actually used as a template for this generator, and the generator replaces all the Source tags with the new Source tags that are needed. As for uploading the new sources to the lookaside cache, Packit now supports multi-source RPMs: if any Source is a URL, Packit uploads it to the lookaside cache. This was actually implemented by me, because I needed it for this project, but now everyone who uses multiple source tarballs in an RPM can benefit from it.

In conclusion, I'd like to say that it is really critical to automate all the repetitive tasks that you can. And to live up to this standard, we developed this presentation using the tool Marp and the Marp GitHub action, which converts a Markdown file into a presentation and then, again using GitHub Actions, publishes it to GitHub Pages.
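The changelog override mentioned a moment ago can be sketched in packit.yaml like this. `changelog-entry` is a real Packit action whose output becomes the RPM changelog entry; the exact environment variable name is an assumption on our part and should be verified against the Packit documentation:

```yaml
# packit.yaml: replace the auto-collected changelog with one brief line,
# per the Fedora packaging guidelines
# (PACKIT_PROJECT_VERSION: assumed name of the version variable
# Packit exposes to actions)
actions:
  changelog-entry:
    - 'echo "- Update to version ${PACKIT_PROJECT_VERSION}"'
```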
So now let's proceed to the Q&A section, if anyone has any questions. Also, just so you know, I have another slide with references on every subject we covered today, so you can go and click on the links and study everything. And we'll be here around the corner for some time after the presentation, so please come by if you'd like. Thank you.