Welcome to our session about Packit. As Luzka said, it's been two years since we started working on this project; it's been quite a journey and we'd like to share it with you right now. There are three of us here: myself, Tomas, and then Franta and Hunor. I will start at the beginning and the guys will follow up with some more interesting in-depth topics. None of this work would be possible without the whole team, and you can see their avatars right now; these are their GitHub avatars, so you can go ahead and look them up on GitHub, who they are and what they're working on, if you're interested. I'm really grateful to have all of these people on our team because they're amazing engineers, we have an awesome manager, and it's been a pleasure to work with this team. Okay, so let's start. Let's talk about what Packit actually is, because we have 46 people on right now and I'm pretty sure some of you know our project and some of you don't. Packit is a CI solution for upstream projects using RPMs. This means that if you are an upstream maintainer and you have a project, you can easily use Packit to integrate it with downstream distributions such as Fedora Linux or CentOS Stream. It works with github.com and gitlab.com as well, and you'll hear Franta showcase it a little bit. Another thing: aside from the CI part, you can also use Packit to deliver your new upstream releases into Fedora Linux. It's being used by many projects now, and you'll see later from Hunor which projects and how they are using it. And if words are hard to imagine, let's see some pictures. This is how the Packit-as-a-Service GitHub app looks. You can go to the GitHub Marketplace, look at it, and set up a new plan, which is free. You just add it to your repositories and you can start using it almost right away.
There's just one glitch: we need to approve you to use it, for legal reasons. And then you are able to start building your projects for Fedora, for EPEL, or for CentOS Stream. Let's have a look at how you can experience it in your GitHub pull requests; it looks like this. This is a screenshot from Anaconda, and you can see they're using GitHub Actions to do CI, aside from Packit. You can see that the build in Rawhide passed and the build in Fedora ELN failed, because ELN is broken this week and I'm still waiting for it to be fixed. And you can see that their installation test actually passed in Rawhide, so their change is not disruptive. So this is how it looks, and this is how it can work for you as well. If you're interested, you can set it up, or let us know and we'll be happy to help you. As I said in the beginning, it was quite a journey when we started the project two years ago. For those of you who know how it feels to start a new project, when you write the first lines of code, write the README, or set up the operations scripts: in the beginning it was what I called punk development. Basically it was just a proof of concept to try whether this would work. And we saw that it can work; it just took a lot of time to make it work reliably and to support different use cases. You can read the list of things we didn't have two years ago, but let's just forget that and move on to what we actually have today, which I would say is more interesting. So right now, what I'm really amazed by is that we have a very clearly defined development workflow: all our repositories have a CONTRIBUTING.md, and if you are a new contributor you can go read it and be able to create a contribution to any of our repositories easily. And if that's not enough, we'll be happy to help you in pull requests, or with defining issues, or even chat with you on IRC on Freenode.
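To make the setup concrete, here is a rough sketch of what a minimal `.packit.yaml` for pull-request builds could look like. This is not shown in the talk; the package name is made up, and the exact key names (for example `specfile_path` or the `metadata` nesting) vary between Packit versions, so check the Packit documentation for your version.

```yaml
# .packit.yaml -- illustrative sketch only; key names may differ
# between Packit versions, see the Packit docs.
specfile_path: my-package.spec        # hypothetical package
upstream_package_name: my-package
downstream_package_name: my-package

jobs:
  - job: copr_build
    trigger: pull_request
    metadata:
      targets:
        - fedora-rawhide
        - fedora-34
```

With something like this in the repository root, each pull request would get Copr build statuses like the ones in the Anaconda screenshot.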
And this was really helpful for us because we frequently participate in Google Summer of Code or the Red Hat Open Source Contest. For those of you who would like to participate in these, it is really important to define your development workflow so that new contributors don't have a hard time trying to create contributions or set up the development environment. The next thing is that we test a lot. We have different types of testing: pull request testing, and daily tests which run against the production service. This helped us improve stability a lot. If we ever regress, which happens from time to time, when we have a new deployment and suddenly some things break, we can revert, and we can spot it within minutes or hours. Back then, our main problem was that we had outages but we didn't even know about them. We literally had users telling us, hey, this hasn't worked for two days, and we were like, oh really? We didn't know. These daily tests really helped us improve this. The things in red will be spoken about later: the staging environment, which you'll be able to use and which Franta will describe better, and Hunor has some very nice slides about monitoring. The last thing I wanted to mention for this slide is that if you have a project which is a service, as Packit is, you suddenly need different roles in your team. If your project is just a library or a binary, it's not being deployed; it's just code which is released and then put into downstream distributions. But if your project is a service, you suddenly need to watch the service, make sure that someone looks at alerts and tries to address them, deploy things into production, write changelogs, all of these things.
So what we did in the past year is we defined rotating roles within our team, and we keep rotating them every sprint between the team members. With these roles we know who is responsible for what in a particular sprint. This really helped us spread the load across the whole team, and at the same time every team member gets to try different things, and we can even swap if someone likes watching the alerts more than releasing new things. This was also very helpful. The other thing we did was split the team lead role into product owner and team lead, and we actually have two product owners in our team right now. It's very helpful to have multiple tasks spread across these roles so that one or two people are not overloaded with responsibilities, and the younger engineers can grow in their roles and become senior, which I think is super helpful. I completely recommend this to other teams as well. So that's how our things are today, and you'll hear more about it from the guys later. Right now I would like to talk a bit about source-git, which is our initiative to change how packages are maintained downstream. We want to introduce a modern development workflow for CentOS Stream 8 and for 9, which is coming out soon. It's going to work differently for the two, because those distributions, CentOS Stream 8 and 9, are in different stages of development. In the coming weeks you should be able to see the source-git repositories become available for CentOS Stream 8 in GitLab, and for 9 it will come later. We would also love to bring this to Fedora Linux as well, but we had a big challenge in that the infrastructure move took a lot of resources from the team, and we couldn't find a place where we would host these repositories in Fedora Infra. So if you have an idea where that place could be, please come talk to us.
We'd love to discuss that possibility, but that's where we are for now. For those of you who don't know what source-git is, let's have a look. If you're familiar with how things are maintained in Fedora, RHEL, or CentOS Stream: we have these repositories called dist-git, which contain spec files and additional sources, plus hashes of the upstream tarballs for the corresponding releases. This is problematic if you need to change the code, because you need to change the code somewhere on the side, generate a patch file, integrate it into this repository, and then create a pull request whose changed file is a patch. It's not actually changing the code, so it's really tough to review and hard to work with, and all the people who do this have invented their own ways of interacting with it, so it's very scattered. With source-git we're trying to make the repository look like the upstream repository, with the downstream changes as just additional commits. In this picture, the part which is grayed out as upstream is literally the upstream history up to the 2.33 release of glibc, and on top of it we have additional commits, which are again code changes, but also the spec file and the configuration for Packit. They are just additional commits, and Packit is able to work with this: it can transform this type of repository to dist-git, so we can still do production builds in Koji the same way we do now, but the development happens in a different, more convenient way for developers.
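One way to picture the layout Tomas describes: a source-git repository is the upstream history plus a few downstream commits, one of which adds the spec file and a Packit configuration. The snippet below is an assumption on my part, sketching what that configuration could contain; in particular the `upstream_ref` key, which tells Packit where upstream history ends and downstream commits begin, should be checked against the Packit source-git documentation.

```yaml
# Packit config inside a source-git repo (illustrative sketch).
# upstream_ref marks the last upstream commit/tag; everything on top
# of it is treated as downstream patches when converting to dist-git.
upstream_ref: glibc-2.33
specfile_path: glibc.spec
```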
And I have another example of this. We actually worked with Florian Weimer recently; he wanted to bring the 2.33 upstream release to Fedora 34, and he used source-git for that. For him it was as easy as cherry-picking a few commits from upstream so that the package would build correctly in Fedora 34, and that was it. He didn't have to create any additional files or set up fedpkg repos just for the sake of being able to create those files. I was really amazed at how well it worked; in the end he got a successful build, and now we can push it to Fedora 34 and have this glibc update in there. That will be all from me on source-git, and I'd like to hand it over to Franta to discuss what's new. Okay, so let's take a look at what happened in Packit in the last year. We have implemented a lot of new things, fixed some bugs, introduced new bugs, and fixed some of those, so let's see some new stuff. Our main feature is creating Copr builds, and we've added the possibility to edit the settings of those Copr projects. Now you can use custom projects and custom owners, so we can build in your own projects. You can also set the visibility on the Copr project page, and additional repositories to be used during the build. By default we set the projects to be removed after 60 days, but you can now disable that. If you use a custom project and custom owner for building in Copr via Packit, we can help you with updating those settings: we will send you a comment and you have multiple options. You can grant us admin permission on the project and we will do the update for you, or you can do the update yourself.
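As a hedged illustration of the Copr settings Franta mentions, a job could point at a custom owner and project and keep it from being garbage-collected. The key names below (`owner`, `project`, `preserve_project`, `additional_repos`) reflect the Packit configuration of that era as I understand it, so treat them as assumptions and verify against the documentation.

```yaml
# Build in your own Copr project instead of the Packit-managed one
# (illustrative; verify key names against the Packit docs).
jobs:
  - job: copr_build
    trigger: pull_request
    metadata:
      owner: my-copr-user          # hypothetical Copr owner
      project: my-project
      preserve_project: true       # opt out of the 60-day removal
      additional_repos:
        - copr://my-copr-user/deps # extra repo enabled during build
```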
The next big feature is that we've added new triggers: by now you can also build for new commits to branches, and for new releases. In combination with the previous options, you can now use Packit to maintain Copr repositories that are stable: for example a Copr repository for the main branch, a stable branch, or a development branch, giving your users quicker access to your code. The next feature is support for building in Koji, to match more closely the behavior you have in Fedora. By now you have two options: scratch builds, which are testing builds that are not going to go anywhere, or you can set up a custom tag which we can build into, so your builds are preserved. The slides are a bit slow. Yeah, here's another example of the commit statuses we put on your pull requests; by now there are some new targets you can build in. Fedora ELN is newly added, and you can also build for EPEL 7 and 8, and for CentOS Stream. Here we need to note that the centos-stream target will be split into centos-stream-8 and centos-stream-9 to make this clear. Another killer feature we have is running tests on those builds, and that was a really tough year, because we don't run the tests ourselves in our infrastructure; we use Testing Farm to do the tests for us. About Testing Farm: there was a workshop right before our talk, so if you want to know more and you missed it, watch it on demand. Testing Farm was switching its API, its infrastructure, and the whole architecture to improve it a lot, and sadly the old cluster slowly died and could not be resurrected. But fresh news: a lot of work was done in the last weeks and days on both sides, the Packit side and the Testing Farm side, and now you can try the new version of their API on the production version as well.
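The new triggers could be combined roughly like this; again a sketch with made-up names, and the Koji job name and its options are my assumption based on the upstream Packit docs of the time, not something shown in the talk.

```yaml
# Illustrative: commit and release triggers plus a Koji scratch build.
jobs:
  - job: copr_build
    trigger: commit            # rebuild on every push to main
    metadata:
      branch: main
      targets: [fedora-rawhide]
  - job: copr_build
    trigger: release           # stable repo fed by upstream releases
    metadata:
      project: my-project-stable
  - job: production_build      # Koji; stays a scratch build unless
    trigger: pull_request      # you configure a custom tag
    metadata:
      targets: [fedora-rawhide]
      scratch: true
```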
You get a better results page and also full TMT support; TMT is the format you can use for defining tests, and the same test definition can be used across the Fedora ecosystem, in Packit upstream, and, via the tmt tool, also locally. So: one test definition for multiple environments. If you don't want to define your tests, by default we run an installability test. If you just add the tests job to your configuration file, we will use Testing Farm to install your package from the Copr repository and verify that it can be installed. So that's another nice feature. Let's move on. A lot of simplification and cleanup was done on the config file. It is now much cleverer and much more configurable. An interesting part, for example, is that you can have inheritance and overrides: some options are set generally for all jobs, and you can then specify overrides for single jobs. You can also have multiple jobs of the same type, which was not possible a year ago, and we also introduced a packit validate-config command-line command that you can use to verify your configuration file. As Tomas said, in June we made our staging environment widely available. It is another GitHub application, so we now have the Packit-as-a-Service and Packit-as-a-Service-stage GitHub applications, and you can easily install the staging one either together with the production one or on its own. If you want to help us, or give us feedback sooner, before the code lands in production, we would be glad if you tried our staging environment. There are some caveats, some functionality is not available on stage yet, but we would like to work on this more. More information can be found in a pinned issue in the packit-service repository in our packit namespace on github.com. Next up is our documentation. We improved it a lot over the last months and year; I think it's much cleaner now and more readable.
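For the testing part, two small sketches. First, a tests job; with no test definitions in the repository, this falls back to the installability check Franta describes. Second, a minimal TMT plan, the same fmf-format file you could run in Fedora CI or locally with `tmt run`. The key names are assumptions to be checked against the Packit and tmt docs.

```yaml
# .packit.yaml snippet (illustrative): enable Testing Farm tests
jobs:
  - job: tests
    trigger: pull_request
    metadata:
      targets: [fedora-rawhide]
```

```yaml
# plans/basic.fmf (illustrative): a minimal TMT plan
summary: Basic smoke tests
discover:
    how: fmf        # pick up tests defined in fmf metadata
execute:
    how: tmt
```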
So I hope you can find the relevant information more easily. What you can also find there are our blog posts about production: we deploy weekly, and with each deployment we write a short blog post describing what's new, what you need to think about, which issues were fixed. So if you want news from the Packit land, take a look at those. Since Packit is built on top of many external systems, gathering information from them or triggering them, it's really, really hard to be stable, but we try hard. In the last year, and mostly in the last weeks, we spent a lot of time on this and added retries and retries and retries on many levels. Our service should be more stable now, and we also have so-called babysitting tasks that take care of dangling results, for cases where we haven't received any results, and fix them afterwards. So hopefully this will be much better now. Another thing Tomas mentioned is that we participate in upstream actions and projects as well, like the Red Hat Open Source Contest or, for example, Google Summer of Code. Last summer we had two projects in Google Summer of Code, and one of them was GitLab support for Packit Service. So now you can use Packit Service on GitLab instances. For now, it works for our users only on gitlab.com, but if you want to try it on some different instance, all that's needed from our side is to create a new user on that instance and add it to our configuration, so that's pretty easy. There are not so many users by now, but let us know if you find problems with it; we would like to work on this more, and also the CentOS Stream development workflow is done on GitLab, so this area should receive a lot of care. Yeah, the other project was about creating a dashboard. So now we have a nice dashboard that lists the latest Copr builds, Koji builds and so on, and also the projects.
So take a look; and building on that, we are currently working on using this dashboard to show the result pages, which will be much more user-friendly than the current result pages. Yeah, and the last piece of news is our GitHub project, the Kanban board, which we introduced to provide some visibility, because we've started to work on CentOS Stream but still want to work on the upstream part, the Packit Service projects for GitHub and GitLab. We want to use our time correctly and work on the important parts and important issues. So we've created this project board where we track what we are working on and what we have planned, so you can easily see it, and if anyone has an issue whose priority needs to be raised, let us know, so we know that you need something badly and that we need to work on it. And that's probably all the new stuff; I'm going to hand over to Hunor with some more interesting numbers. Thank you, Franta. So let's have a look at what actually happened in the last year. I spent the last days trying to dig out some numbers from all the metrics and databases we have, to learn more about what happened in the Packit world since mid-February last year. Just looking at GitHub: we had 45 contributors, merged over 1300 merge requests, and touched on 970 issues, of which, if I remember correctly, 190-something are still open. So there's still a lot of work left to be done. From these numbers, the one I was a little bit surprised by is the high number of contributors, which is like four times the size of the team assigned to work on this project. And yes, of these 45 some are bots, and the bots are actually the most active ones, but it's good to see that, with all the Google Summer of Code projects and internships and external contributors, we have so many people contributing to this project.
Yeah, this is our monitoring dashboard, a few screenshots from it. This shows the last week of GitHub events we received, and then all the Copr builds we triggered. There is a difference in the numbers between these two slides: not every GitHub webhook we receive actually leads to a Copr build. Sometimes, as we learned, people just install the application on their repositories but then never bother to configure it, so there is a somewhat high noise-to-signal ratio between these two. SRPM builds: this is what most PRs and branches go through before getting to the Copr build. SRPM builds are actually done in our own Packit infrastructure. We did more than 7,000 of them in the last year. Most of them were successful, some of them failed; if you ask me, that's the piece of our infra and process whose stability I would like to improve even further, I just find that 5% a bit annoying. Then Copr builds: that's a high number, and I hope I did the numbers right, because it turns out that we ran like 47,000-ish Copr builds in the last year. We have a small percentage of pending ones, which is actually a bug we have on our roadmap to fix: sometimes we just forget about builds and never update their status in the database. And now, if you participated in the poll, you might have already guessed that we processed, or contributed to, over 1,800 PRs, and this is a breakdown of all the repositories where we did this. This number doesn't include the test repositories we run for ourselves. And yes, I saw somebody in the chat mentioning Anaconda: we made over 600 PR contributions there. So those were the runs on PRs. We have the other feature where, as Franta mentioned, we can build when a push happens to a branch. This is not used that much yet, but still, around 700 of these happened during the last year.
And this feature, in combination with custom Copr projects, is really nice, because if you are maintaining some packages in Copr, you can basically get builds of new versions of them just by working in GitHub. Testing Farm tests, because that's the other big feature we are running: I guess this pie chart shows the stability issues we were having with Testing Farm during the last year. And yeah, we have the same bug here: we sometimes forget about tests being triggered and they just get stuck in the running state. Dist-git activity: that's the feature I would like to see being used more often. Currently in Fedora dist-git we have 22 packages which received some kind of contribution from Packit Service during the last year. This means that whenever you do a release, you can set up a job so that Packit takes that release and proposes it as a PR to the Fedora package. So usage is not that high here; I think we need to work to get more users on this, because it's really convenient. In general, we try to push ourselves to be a straightforward way to testing and releasing in Fedora. While preparing for this talk, Tomas and I were discussing yesterday that a few years back there was a lot of frustration about developing on GitHub but having no easy way to test your work in Fedora and get feedback quickly. And that stays the main goal of the Packit project. So that was our session. You can find us on packit.dev and in the #packit channel on Freenode, and I guess we are ready for questions. So, the question is: does source-git work best when integrated directly upstream, or is it preferred to set it up on a fork? Tomas, do you want to answer this? I can try. So, it works best when set up in upstream.
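The release-to-dist-git feature Hunor describes is configured as a job as well; a hedged sketch, with the job name `propose_downstream` and the `dist_git_branches` key taken from my reading of the Packit docs rather than from the talk.

```yaml
# Illustrative: on each upstream release, open PRs against Fedora dist-git.
jobs:
  - job: propose_downstream
    trigger: release
    metadata:
      dist_git_branches:
        - fedora-rawhide
        - fedora-34
```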
We already mentioned Anaconda, and I would like to use them as an example here, because they set up Packit for their main branch and now also for the Fedora 34 branch, so that when they are working upstream on new features or changes in Anaconda, they directly get feedback on whether their change works in CentOS Stream, in Rawhide, or in different releases. And that's the closest you can get while developing new code: when you're creating new code, you want to know as soon as possible whether it's going to break someone in the future, downstream. If this is not possible, if the upstream developers are not interested in Fedora Linux or in the downstream, then setting it up on a fork is the second best way, and the third way would be to create the source-git repository yourself and maintain your downstream package like that. But setting it up in upstream is definitely the best way, and we can definitely help you set it up if you want.