So, hello everyone, and welcome to today's presentation about Packit and its downstream integration capabilities. My name is Laura, and here are my teammates Maja and Matej. Let's start with what we have prepared for you today. Even though I see some friendly faces, so some of you definitely already know Packit, we will do a quick introduction of Packit and its core functionality so that everyone is on the same page. Then Matej will guide us through the Packit downstream automation to Fedora. After that, Maja will talk about the automation for virtual machine images, and at the end we will very briefly cover the future plans of Packit and answer any of your questions.

So, what is the Packit project, and who is the Packit team? Packit has two main goals: validating upstream changes happening in a repository on GitHub or GitLab, while the changes are being developed and before they reach downstream distributions such as Fedora or RHEL; and bringing upstream releases downstream, specifically to Fedora. To do this, Packit operates on both GitHub and GitLab, and that is the form we will mostly refer to today, Packit as a service; there is also a CLI tool which you can install on your Fedora distribution and run locally.

So, the first point, validating upstream changes: you can do this via multiple so-called jobs that you can configure Packit to run, and the most frequently used one is the RPM build. For the RPM build, Packit can react to any pull request, commit, or release in your upstream repository, take the changes, forward them to the Copr build system, and then provide feedback in the place where the change was developed. On the screenshot you can see the GitHub UI with the commit statuses, and there you have a link to the dashboard, where you can see the information about the build, the time, the logs, and everything you need.
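As a hedged sketch of the RPM build job just described (job and field names follow the public Packit configuration schema; the spec file path and build targets are hypothetical examples):

```yaml
# .packit.yaml in the upstream repository (illustrative sketch;
# "hello-world" is a hypothetical package name)
specfile_path: hello-world.spec
jobs:
  # Build an RPM in Copr for every pull request and report the
  # result back as a commit status
  - job: copr_build
    trigger: pull_request
    targets:
      - fedora-latest-stable
      - fedora-rawhide
```

With this in place, the Copr build status and a dashboard link show up directly on the pull request.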
Then the other very frequently used job is tests, which exercises your upstream changes in a very similar manner; for this we use the Testing Farm infrastructure, so similarly you will get feedback about the tests in the GitHub UI, as you can see, and then you can be redirected to the Testing Farm infrastructure and check the logs there. There are also other jobs that you can configure, such as the Koji scratch build or the VM image build that Maja will guide you through.

And then there is the second part, the Fedora releases. This I will leave to Matej, since he will show you in great detail how we do it. In this diagram you can basically see all the services Packit interacts with. On the left side we have the git forge, GitHub or GitLab. Then we have Copr and Testing Farm, and on the right side there are the Fedora-specific things like dist-git, Koji, and Bodhi, and Packit ties everything together; it is the integration of upstream and downstream. And if you still don't believe me, you can check the icons on the screen: these are only some of our users, hopefully satisfied with Packit. And to cover all the parties, on this slide you can see the Packit team, or rather our GitHub avatars. And now I will pass it over to Matej.

Okay, so how does the code get to Fedora Linux? I will do a quick poll to see how deeply I need to cover the topics I will talk about. Raise your hand if you have ever been in the packager group and have done any releases in dist-git. Okay, that's maybe a half, so I will go into the details. And how many of you have ever used Packit? Okay, that's a bit less, but still enough. Okay, so how does it work? As a user of my favorite Linux distribution, I don't care about any of the stuff in between, right? There is an upstream, someone develops a project, I develop my app, I do some releases, and as a user, what do I want to do?
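The tests job mentioned above can be sketched like this (again a hedged example following the public Packit schema; the target is an arbitrary choice):

```yaml
# .packit.yaml (illustrative): run the upstream test suite in
# Testing Farm for every pull request, using the Copr-built RPMs
jobs:
  - job: tests
    trigger: pull_request
    targets:
      - fedora-latest-stable
```

Testing Farm then reports its own commit status with a link to the test logs.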
I want to run `dnf install` and `dnf upgrade`: get the application itself in the beginning, and the latest version later. I don't care about anything in between, and that in-between is exactly what I will go through, because someone needs to do it. So what happens in between? As I said, in the upstream someone develops and releases versions, and then someone needs to get them downstream. How do we do that? Of course there are many Linux distributions, and each of them has a different way of building packages; in our case we will talk about RPM-based distributions.

So what do we do? We have something called the lookaside cache. When someone releases a new version, we take the sources that we need to build the RPM from and upload them to the lookaside cache, which stores the sources. And that's still not enough, right? Are the sources the only thing I need? Of course not; I need to know how to build them. So in this case, you can see that we have a git repository, dist-git, and what do we keep there? We keep some files that are arbitrary; in our case we sync our configs and also the current version of the project that ran the last job. Apart from those, you can see a spec file, which is basically the series of steps used to create the RPM package, and a `sources` file, which references the archives in the lookaside cache.

And we're still not there, right? We have sources and we have a recipe to build them, but we still don't have the packages. So what do we do? We have a build system called Koji. You can trigger a build there; what does it do? It takes the spec file, it takes the sources, and it produces an installable RPM package, so we can install it and use it. However, we get a package, but that's still not in the distribution. So what do we do next? We have an update system, Bodhi. We have the RPM build, and we need to get it to the users, so we create an update, and what happens next is that it goes through a testing stage.
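To make the dist-git contents concrete, here is a hedged, minimal sketch of what such a spec file can look like (the package name, version, and maintainer are all hypothetical; real spec files vary widely):

```spec
# Illustrative spec file for a hypothetical "hello-world" package
Name:           hello-world
Version:        0.2.0
Release:        1%{?dist}
Summary:        Prints a friendly greeting
License:        MIT
URL:            https://example.com/hello-world
# The referenced tarball is what lives in the lookaside cache
Source0:        %{url}/archive/v%{version}/hello-world-%{version}.tar.gz

%description
A tiny demonstration package.

%prep
%autosetup

%install
install -Dm755 hello-world %{buildroot}%{_bindir}/hello-world

%files
%{_bindir}/hello-world

%changelog
* Mon Jan 01 2024 Jane Doe <jane@example.com> - 0.2.0-1
- Update to 0.2.0
```

The accompanying `sources` file is just a checksum line per archive, for example `SHA512 (hello-world-0.2.0.tar.gz) = <hash>`, which tells the build system what to fetch from the lookaside cache.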
I can install it via the instructions in the update; I will show them in more detail later. So you test it, you can provide feedback on it, you can give karma, and if you find bugs you can basically postpone the release to stable. If you are satisfied, you give positive karma and the release can get into stable. And then we're there: the user can install it and is happy.

Okay, so I've gone pretty quickly through those steps, but now I will go through them one by one with a real-life demonstration and show how we can help with each of them. So, today I released a new version of our package. Also, a warning to photosensitive people: I will be switching a lot between the slides and the browser, so be careful. And what do we have? We have a release; it starts as a pull request. What can we see there? There is a changelog, and we also keep a spec file in the upstream, so you can see the bumped version and also a new entry in the spec file. So what happens next? It gets merged. I have a prepared release; I called it the DevConf release because it's for this demonstration, and I released it an hour ago. I wanted to do this live, but we have a lot of releases for Fedora and EPEL, so I didn't want to take the chance of waiting ten minutes for the PR to propagate to dist-git. So I did it beforehand, and we can see them right here: these are the pull requests that we create. There might be a question: why do we create pull requests at all? The reason is that the maintainer has the last word. They have to decide whether it meets the quality requirements and the packaging guidelines. So we create those pull requests, you review them, and then you can merge them. For the live demonstration I will pick the one for the latest Fedora, and we can see the changed files; there was also enough time for the Zuul pipeline to run. And what do we have? We can see the version of the package that created this pull request. We can see the changed version.
We can see the updated changelog, and we have also uploaded the... why do we do that? We have also uploaded the new source archive to the lookaside cache, so we can use it for the build. So what am I going to do? I am going to merge it, so we can talk about our integration while the service does its job.

We have seen a lot of PRs, and the reason is that we run two instances: one for testing and one for production, and at the same time we test both propose-downstream and pull-from-upstream. So we create a lot of PRs: propose-downstream and pull-from-upstream, times the two instances, times each Fedora branch we target. So there are a lot of pull requests. And of course, I mentioned the Koji builds where we build the RPMs, so we can automate that too, and we can also automate the Bodhi updates.

So, as you can see, for getting a release from upstream to downstream we have two jobs: one is propose_downstream and the second one is pull_from_upstream. I will describe the difference between them, but as I said, what we do is take the archives, upload them, and create the pull request with the needed changes. So what does propose_downstream do? You can see a pretty simple definition: which job it is, that it gets triggered on a release, and where we propagate the changes, in this case Fedora. Propose_downstream lives in the upstream repository, so it is a good fit for maintainers who also have control over the upstream repository: you create the config, you enable the GitHub app, we listen for releases, and if you have it configured, we do our job. The other one is pull_from_upstream, and there was demand for this feature because not every upstream likes to have distribution-specific packaging files in their upstream repository. So what can you do?
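A hedged sketch of the propose_downstream definition just described (job and field names follow the public Packit schema; the branch list is an example):

```yaml
# .packit.yaml in the upstream repository (illustrative):
# on each upstream release, open a dist-git pull request
jobs:
  - job: propose_downstream
    trigger: release
    dist_git_branches:
      - fedora-rawhide
      - fedora-latest
```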
You can switch to pull_from_upstream: you define the job in dist-git, which lives directly in Fedora, and we simply listen to a different service that provides notifications about new upstream releases, so you don't need to touch the upstream repository at all. Okay, we also have advanced configuration: there are actions, so you can configure the way the pull request is created, and also the changelog, and if you want to sync some files you can configure that as well.

And by this time, hopefully it got merged, so there should be a running build... and we don't have it yet, unless I missed it. Okay, there's no build yet, so I will carry on; this slide is basically what I've just said, so we can skip to the Koji build. So what has happened here? We have the changes included in the downstream; what do I need to do? I basically just need to build the RPM, right? In this case we have the koji_build job, and we listen only for our own commits and our own pull requests. The reason is that mass rebuilds happen, and we would otherwise trigger builds for those too, which we really don't want; another case is rebuilding in a Koji side tag, but I won't go into those details. So we don't want to run for everything, only for what we know is ours and makes sense for us. And that brings us to Bodhi updates: after the Koji build is done, what do we do?
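The dist-git side of this pipeline can be sketched as follows (a hedged example; job and field names follow the public Packit schema, while the upstream URL and branch choices are hypothetical):

```yaml
# packit.yaml placed in dist-git (illustrative): the upstream
# repository needs no Packit configuration at all
upstream_project_url: https://github.com/example/hello-world
jobs:
  # Open a dist-git pull request when a new upstream release appears
  - job: pull_from_upstream
    trigger: release
    dist_git_branches:
      - fedora-rawhide
  # Build in Koji once the dist-git pull request is merged
  - job: koji_build
    trigger: commit
    dist_git_branches:
      - fedora-rawhide
  # Create the Bodhi update from the finished Koji build
  - job: bodhi_update
    trigger: commit
    dist_git_branches:
      - fedora-latest-stable
```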
We create a new update in Bodhi; you can review it, and of course you should test it before pushing it to stable, and after it gets to stable the user is happy and can use it. So let's have a look if we got... yeah, we have one build running, and I'm not really sure that build will make it to the end of my section, so let's keep it for later, and I will give the floor to Maja, who will describe getting upstream code into virtual machine images.

Yeah, now we will talk about how we can take our upstream changes and get them into a complete operating system, to be able to test them inside the system. First of all we need the changes; in the example we will use this simple project where we added a change, and the change is just a change in the color of the Hello World sentence. After this we should put these changes into an RPM, but this RPM should be shared with others; it is not released yet, and the way to do this is to put it into Copr, a shareable repository. The last step is to use Image Builder from console.redhat.com, and Packit... oh yeah, and the final image can be any of these, so you can build many different kinds of images, and Packit can help you with these different steps.

You should create these two jobs inside the packit.yaml. The first job is really simple: it's the copr_build job, where you tell us how we should create the Copr build and for which target. The second job is a bit more complicated, but most of this data is there to customize your image, so it is needed by Image Builder. We are saying here that we want an Amazon Web Services image, that we want the hello-world package installed in the final image, and that we are sharing the image with our Amazon Web Services account ID. And this is almost it; just one more small thing: by default, if you don't specify anything, Packit will create a temporary Copr project to put your custom RPM in, and the image builder job will use that Copr repository. In case you want to customize your Copr project, you need to put these keys, owner and project, in both the copr_build job and the vm_image_build job. We already said that we can share the final image with our Amazon account ID.

And the final step: when you are ready, when you want to test your change, you can comment on your pull request with this comment. We don't do this automatically, to save some resources; we think it's not always worth building an image for the latest change, since an image is something big, so we let you tell the system when you need it. And what you get at the end is this link: a link to Amazon Web Services that brings you to your image. There is also another way to do the same thing, but without using the service: you can do it from your command line. This time you need to create a token to be able to access the Image Builder service, and you need to put this token into your user configuration, ~/.config/packit.yaml; after you have done this, you can simply build your sources in Image Builder through this command. This time you will get a link to the Image Builder service, and in the Image Builder service you will find the Amazon Web Services link. Both ways you arrive here: you can launch your Amazon Web Services instance and verify that we really have changed the color of the Hello World sentence. And that's all for this section.

Okay, so if you are interested in any other functionality that was mentioned, here is our documentation page, where you should be able to find what you need. Now let's talk a little bit about the present and future of Packit. In Packit we are still developing new features, but of course we focus on some things more than on others. Currently we are trying to make the downstream automation more robust, fitting multiple use cases of downstream packaging, and a big focus is on the VM image build, where at the moment the functionality is really simple, as you could see, but we are definitely trying to make it more flexible and customizable.

So these are all the steps we have been through; the only thing left is the Q&A. But before that, if you want to get in touch with us, there are multiple ways: you can reach us on Matrix or in Slack, we have a Fosstodon account, and if you have an issue you can also check our namespace on GitHub; we will definitely answer you there. Also, if you don't know what to do next at the conference: we have, or are already having, a Packit booth today and tomorrow, and there are some interesting talks you could see. We mentioned Testing Farm in the beginning; Testing Farm is having two talks, and if you want to hear how our users use Packit, you can go to the Journey of Automation talk. And that's it, so now it's time for your questions, if you have any.

Well, in real life package maintainers have to deal with many patches, rebasing patches, and so on and so forth. I understand that it's nearly impossible to automate rebasing patches, but are there any best practices for using Packit in such real-life situations? Thanks.

Could you repeat the question for me?
Yeah, sure. So the question is: when you have a lot of patches in the downstream, how should you use Packit with that and automate it? Yes, so we realized that there are some issues with patches when you have a lot of them, and we had a source-git initiative, led by our former colleague. The idea was to basically unpack the sources and have a mirrored repo that acts like the unpacked archive, so you have the whole commit history, and on top of that you have the patches as commits, and you are able to rebase them locally. We also tried to automate it, and I'm not really sure what state it is in, so you could probably reach out to us at the booth and talk about it in more detail. And maybe just to add: Packit tries to cover mostly the cases where the packaging is really simple, and to automate that type of package. Any other questions? Yeah?

We were talking about building virtual machine images here; I was just wondering if you're planning to add building container-based images, for example, and how that relates to this, or if there is any plan in that direction, because you're building virtual machines, so I'm thinking about the container work.

Do you want to answer? So, with Packit you can skip the Copr build, you can have the VM image built, and you can do whatever you want with it; you can also take the artifact, push it to a container registry, and upload it yourself. Actually, Testing Farm builds all of its own VM images, and we are not releasing Fedora-based container images using Packit. Okay, so that answer came from Mira from Testing Farm, so good. And just to add: in Testing Farm you can either use the Copr build built by us, or you can skip this step completely. I also forgot to mention that in our future plans we would definitely like to integrate the building of VM images with Testing Farm, so that is something to look forward to. Okay, any other questions?

In the meantime, let's see: the Koji build is done, and we should have... yeah, we have the update there, created nine minutes ago, and we can see the changes, and somewhere there should also be the instructions on how to install it; you can test it, provide karma, and if you are satisfied it can get shipped to stable.

And are there automated tests? I think we have Zuul, and in Bodhi we don't have tests after that, but we also have Testing Farm for this, so why would we test here?

Okay, I'm curious: you showed us that Packit creates some kind of file during the sync; why do you keep the reference to the version of Packit that ran the job, why is it there? You mean the README? In this?
Yeah, it can be used for debugging; that's mostly what it's there for. In production we roll out a new deployment once a week, so we should be able to tell which version ran the job, and in the case of staging it tracks the main branch, so as we merge, it gets deployed to stage. With this it is easier to check when a bug was introduced.

As you described, this job has some definitions stored somewhere, and I'm curious what the places are where this can be stored, because I can imagine that not every upstream project is okay with having this definition in the main repository. Yeah, so in general, if you are using the GitHub or GitLab integration and you want to validate changes upstream, you will have the configuration in the root of the upstream repository; but if you want Packit only for automating the releasing to Fedora, you can place the configuration only in the dist-git repo and not touch the upstream at all. So, to sum it up: in the upstream you can run Copr builds, upstream Koji scratch builds, and tests, and on the downstream side you can define just the release part: a release creates a pull request, once it gets merged we build in Koji, and then we create the Bodhi update. Any other questions? Yes, Peter?

Are there any plans for supporting other git forges? Yeah, so it's a topic that we have discussed multiple times, and currently there is some ongoing discussion with an outside contributor, so we might consider it, but it's not our primary plan right now. Okay, anything else? If not, then thank you for attending this talk, and see you at the booth if you want to talk about anything else.