So hello everyone, my name is Laura. I'm from the Packit team, and today I'm here with Simon from the Testing Farm team. We both work at Red Hat, and today we will talk about Packit and Testing Farm and their integration together.

So let's start with the agenda for today. First, we will very briefly talk about Packit, then we will switch to Testing Farm: Simon will explain Testing Farm, its users, and how it works. Then we will switch back to Packit and deep dive into its features. Finally, we will talk about Packit plus Testing Farm and their integration, and we will also talk about the use cases and the users, and in the end we will show you some numbers, graphs, and statistics.

So, starting with Packit. Packit is an open source project that tries to bring upstream and downstream closer together. Packit has two main goals. The first one is to validate upstream changes downstream, so it's kind of a CI system that works on GitHub or GitLab. The second goal is to bring upstream releases to downstream, automate the process, and make it easier for Fedora package maintainers.

Today we will talk about Packit mostly in terms of the Packit service, which operates on GitHub and GitLab and reacts to events in the Fedora dist-git. But there is also Packit as a CLI tool that you can install on your Fedora system and run locally.

When the last Flock happened in Budapest, Packit was in its beginnings and it didn't have that many users, but since then the Packit user base has grown rapidly. Here you can see the logos of just some of our users, for example Podman, systemd, Cockpit, and a lot of others.

Yeah, so now I will hand it over to Simon.

Thank you, Laura. So I have some points to keep me on track, so I don't go off into the weeds. First of all, Testing Farm is infrastructure as a service: it's a service that you can run your tests on and get results. There's storage for artifacts.
There are queues and so on. But it's more than that, because you can run your tests on multiple OSes, on multiple versions of those OSes, and on multiple architectures. It scales; your testing will scale with it.

Yesterday, maybe some of you were at Adam Williamson's Fedora CI talk; he talked a little bit about how Fedora CI uses Testing Farm, and the CentOS Stream CI uses Testing Farm as well. That's on the public ranch, and there's also a Red Hat ranch, which all the RHELs are tested on. And you can actually use that ranch as well: you apply for permission to use it, you change your configuration in Packit, and you're able to run your tests in the internal ranch too, if that's allowed. And of course, Testing Farm is used by Packit.

So, Testing Farm generally, I mean, there are lots of moving components, but there's one API endpoint that your test is submitted to. You submit a JSON POST request to the API, then the ranch is selected, your request waits in a queue, and it gets picked up by a worker. The system under test gets created and installed with the fresh OS of your choice that you specified, and then the pipeline starts to execute your tests on that fresh system. The plans run, the results and the artifacts are stored, and you can access these even after the VM is destroyed, because that's what the artifact storage is for.

So you may ask: what's the benefit, why should I use Testing Farm? You know, I could probably hack something together myself. Yeah, you probably could. The benefit of using Testing Farm is that it scales to all the different versions of OSes, but also that you don't have to maintain that infrastructure, and you don't have to pay for it.
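The submission flow Simon describes boils down to one JSON document POSTed to the Testing Farm API endpoint. The sketch below is illustrative only: the field names follow the public API documentation as best remembered, and the token, repository URL, and compose are placeholders.

```json
{
  "api_key": "YOUR-API-TOKEN",
  "test": {
    "fmf": {
      "url": "https://example.com/your-tests.git",
      "ref": "main"
    }
  },
  "environments": [
    {
      "arch": "x86_64",
      "os": { "compose": "Fedora-Rawhide" }
    }
  ]
}
```

The response carries a request ID that can then be polled for the state of the run and, once finished, for links to the stored results and artifacts.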
So, Testing Farm, as the slide says, is open to any Fedora or CentOS Stream contributor, team, or special interest group. Testing Farm is also open to any public project, service, or initiative which Red Hat or Fedora maintains or co-maintains. And of course, Testing Farm is available to any Packit user.

So yeah, Testing Farm can be thought of as a backend for CI. But first we need to talk about tmt a little bit. In order to use Testing Farm, your tests need to be managed by tmt. I don't know if any of you have seen or used tmt before; tmt stands for Test Management Tool. There is the notion of hierarchy and inheritance; these are two things that tmt will allow you to do really well. You can have core attributes that all your tests have access to; a very simple example is a version number or something. Then you have your tests, and you have your plans, and then there are stories. Stories are actually more of an optional thing; at a minimum you need tests and plans, one test and one plan at a minimum. But stories will help you to know why you wrote the test. Why is it written that way? Do you really want to optimize it, do you really want to change it, or was it written that way for a certain reason? They help you understand your tests, and of course they help other people understand your tests as well.

So yeah, tmt also runs locally on your computer; it's a tool. That's how you would develop your tests: you would first try them out on your laptop and see how it's going. tmt will create the SUTs, the systems under test, using VMs or containers, and because of that you don't have to worry about cleanup: once the test is done, it'll just destroy the VM and not contaminate your laptop, your workstation.

One thing to know about tmt is that it's not restrictive. You can write your tests in any language you want, in any testing framework you want.
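As a concrete illustration of "one test and one plan at a minimum", a tiny fmf layout might look like the sketch below. The file names, the shell script, and the duration are illustrative, not taken from the talk.

```yaml
# plans/smoke.fmf - a minimal tmt plan
summary: Run all tests found in this repository
discover:
    how: fmf          # discover tests from the fmf metadata in this repo
execute:
    how: tmt          # let tmt execute each discovered test
```

```yaml
# tests/smoke/main.fmf - a minimal tmt test
summary: Basic smoke test
test: ./smoke.sh      # any executable, written in any language or framework
duration: 5m
```

Locally, `tmt run` provisions the VM or container, runs the plan, and destroys the environment afterwards, which is the cleanup behavior Simon mentions.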
Basically, you just have to call your test with tmt for it to be able to run in Testing Farm. So say you're using pytest or something: just call that test through a wrapper or whatever you like, and then it will run. It's a test management tool, it's not a test-writing framework, so it's very flexible.

Another thing you can do: when you submit a test to Testing Farm, part of the request is the location of your tests, a URL to where your code is in a git repo, and in there it expects to find a plan pointing to your tests. So you could actually have just one plan in there, with the URL of another repo where all your tests actually live. So you don't technically have to keep all your tests with your code, if that doesn't work out for some reason.

Yeah, and one more thing to note: with Testing Farm you have all that scale, but with Packit you have even more, and Laura is going to tell us more about that.

Okay, so in the beginning I mentioned the two main goals of Packit, and now we will deep dive into them. We will start with Packit as Fedora release automation. So that I can explain it, I will very briefly go through how new code actually gets to the user of the Fedora operating system. I assume everyone knows this process, so just very briefly: at one end we have the upstream, the code, and on the other end we have the user who wants to install the latest, greatest change. So there is a release, which can happen for example on GitHub or GitLab. Then, as a next step, the source code needs to be stored somewhere, and for that there is the lookaside cache, which is simply an archive storage, let's say. Then we have the distribution git, the dist-git, which is in Pagure; that's where the packaging-related files live.
So we have the spec file there and the sources file, and these need to be adjusted for the new change. Then there is of course Koji as a next step, which is the official Fedora build system, and Bodhi, the Fedora update system. After Bodhi, here comes the change, and the user can install it via DNF, for example.

So how does Packit fit into these steps, and how can it help with them? Here you can see all these steps on one screen, and on the right side you can see how Packit covers everything in the middle, between the upstream and the user and the installation of the new software. Packit basically has jobs that can be configured, and for the Fedora release automation there are four jobs that can be configured; they cover syncing the release, building the updates in Koji, and then bringing those updates to Bodhi.

So let's start with the first one: syncing the release. That means we need to bring the changes from upstream to downstream. What needs to be done is that the archives need to be uploaded to the lookaside cache, and then the spec file needs to be updated, probably the version and the changelog, and the sources file as well. For this you can use one of the Packit jobs, either propose_downstream or pull_from_upstream, and you will choose based on multiple factors.

If you are the upstream maintainer of the package, you can configure propose_downstream. propose_downstream is configured directly in the upstream repository, so you need to place the configuration file in the upstream git repo, and then Packit will react directly to the release on GitHub or GitLab. The benefit of this is that Packit also provides you the feedback, the results, directly in the GitHub or GitLab interface. Here you can also see a snippet of the configuration, which needs to be placed in your upstream repo, and then a screenshot of how Packit provides feedback about the job: the propose_downstream finished successfully, you get the link, and you can go and see the result.
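A propose_downstream configuration of the kind shown on the slide generally looks along these lines. This is a sketch: the spec file name and the branch list are placeholders to adapt to your package.

```yaml
# .packit.yaml in the upstream repository (illustrative sketch)
specfile_path: hello.spec        # placeholder package spec file
jobs:
  - job: propose_downstream
    trigger: release             # react to a new upstream release
    dist_git_branches:
      - fedora-rawhide
      - fedora-stable
```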
And the result is the PRs created in dist-git, which I will show you in a while. But then, of course, sometimes you have a package in Fedora and you don't have access to the upstream git repository; you don't maintain that code. In that case you can utilize pull_from_upstream. For pull_from_upstream, the only thing you need to do is place the configuration file directly in dist-git. You add the pull_from_upstream job, and after that Packit will react to the upstream release monitoring messages and do exactly the same process: it will bring all the changes to Pagure, to the Fedora dist-git. As I mentioned, here on the screenshot you can see the PR that Packit opened: the version is changed, the changelog entry is added for the new version, and the sources are updated as well.

Okay, so what's next? After the maintainer of the package reviews the change, the PR in dist-git, and is satisfied with the result, they can wait for the CI, and if everything is green, merge the pull request. But then, of course, the new changes should be built in Koji, and it can be tedious to do this every time there is a new release. Packit can help with this as well. The only thing, again, is to add a little configuration for the koji_build job in the dist-git repository, and after that, each time a pull request is merged in dist-git, Packit comes, takes the changes, and builds them automatically. Here you can see some packages built by Packit in Koji.

Okay, but there is also another step, and that is the Bodhi updates: again a manual step, and a very repetitive one. So how can Packit help? There is the bodhi_update job.
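Taken together, the dist-git side of this automation, the pull_from_upstream and koji_build jobs just described plus the bodhi_update job, might be configured roughly like this. A sketch only; the branch names are placeholders.

```yaml
# .packit.yaml placed in the dist-git repository (illustrative sketch)
jobs:
  - job: pull_from_upstream
    trigger: release              # react to upstream release monitoring messages
    dist_git_branches:
      - fedora-rawhide
  - job: koji_build
    trigger: commit               # build in Koji when a dist-git PR is merged
    dist_git_branches:
      - fedora-rawhide
  - job: bodhi_update
    trigger: commit               # create the Bodhi update after a successful Koji build
    dist_git_branches:
      - fedora-stable
```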
So, again, a code snippet. You can put this in your configuration file, and Packit will watch for the successful Koji builds. Once there is a successful Koji build, Packit comes, takes it, and creates the relevant update for the particular release.

Okay, so that was it for the release automation. Now let's check the other aspect of Packit, and that's Packit as a CI solution. Previously, when we were talking about the downstream automation, Packit was mostly configured directly in dist-git. But if you want Packit as a CI solution, you want to validate things in upstream, so the setup needs to be done there.

So first, you need to enable the interaction with Packit, either on GitHub, where for example you can see the screenshot of the Packit GitHub application here, or on GitLab as an integration. You just do a few clicks and install Packit in your namespace or repository. The next step is that your namespace needs to be allowed, so you just provide your Fedora Account System login and we do the automatic matching: a very quick step. And then almost the last step: you create the configuration file, where you put what you want Packit to do for you. And if one of the things you want from Packit is RPM builds, you also need to place an RPM spec file, or at least add some script for how to obtain the RPM spec file.

Okay, so after setup, what can Packit do for you? The most used jobs that Packit can do are the RPM builds.
For the RPM builds, Packit uses the Copr build system. Basically, you can configure Packit to build your RPMs for any pull requests, commits, or releases. For example, if you configure Packit to react on your pull requests, then with each pull request Packit comes, forwards the new code changes to Copr, the changes are built there, and Packit provides the feedback about the builds in GitHub again. As you see on these screenshots, we provide the links to the Copr web UI, the logs, and everything you need.

One more note: with RPM builds you can validate your changes, but you can also, for example, configure the builds for the pushes to the main branch, or for the releases. Then you can have a dedicated Copr repository, and users can consume the builds from there directly.

Another CI job that you can configure is the VM image builds. These are a follow-up to the RPM builds: if you want to also create a VM image build, you just post a simple comment, as you see on the screenshot, and Packit will come, check whether there is a built RPM, take it, and create the VM image build for you. For this, Packit uses the Red Hat image builder. As you can see on the screenshot, you will again get everything you need in the GitHub UI: you have the links there, the status, and you are good to go.

And finally, what we are here for: the tests. As you have probably gathered by now, Packit uses Testing Farm for the tests, and the configuration is very similar to the other jobs. So, how does this work?
The user enables Packit, as I previously talked about, and sets it up in the upstream repo. Then, optionally, they also configure the build job. After the RPMs are built, Packit forwards to Testing Farm the packages, the NVRs of the Copr builds: it sends the request to Testing Farm and then checks for the results. Once the results are there, Packit provides you the feedback again, as you've seen in the screenshots.

As for the configuration, the tests can again be configured for pull requests, for branch pushes, or also for releases. You can see that the configuration is really simple: you specify the trigger, and then you specify the targets you want to run the tests on.

So let's have a look at more use cases, how you can utilize the tests via Packit. As Simon already mentioned, there is also the Red Hat ranch of Testing Farm, and it is really simple to utilize it via Packit. It is basically one configuration option, and that's use_internal_tf. So you enable this one, and of course you need to reach out to us, we need to allow you, so that you can use testing in the Red Hat infrastructure, but that's it, you're good to go.

Then, for example, if you have some really resource-intensive tests which you don't want to run on each pull request, on each push to a pull request, but want to run only manually, on a comment, you again specify one more configuration option, and that's the manual trigger.
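Putting the CI pieces together, a build-plus-tests configuration using these options (and the target mapping and fmf options covered next) might look like the sketch below. The spec file, targets, distro names, and external test-repo URL are placeholders.

```yaml
# .packit.yaml in the upstream repository (illustrative sketch)
specfile_path: hello.spec
jobs:
  - job: copr_build
    trigger: pull_request
    targets:
      - epel-7-x86_64
  - job: tests
    trigger: pull_request
    use_internal_tf: true        # run in the Red Hat ranch (needs prior approval)
    manual_trigger: true         # run only after a `/packit test` comment
    targets:
      epel-7-x86_64:
        distros: [centos-7, oraclelinux-7]    # one build target mapped to two test distros
    fmf_url: https://example.com/separate-tests.git   # fmf metadata kept in another repo
    fmf_ref: main
    # tf_extra_params can pass anything else the Testing Farm request accepts,
    # e.g. extra artifact repositories for reverse-dependency testing
```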
So after that, when you are ready to test your changes, you can just post a comment, and Packit will react to that.

Another useful thing that can be done via Packit: there is this configuration option, tf_extra_params, and here you can specify anything you would specify in the request to Testing Farm. One of the things you can specify is some additional artifacts. So if you want to do some reverse dependency testing, cross-project testing, you can just specify some repository in the artifacts, and we will send these parameters to Testing Farm, and you can ease your reverse dependency testing like this.

Then there is another use case: if you want to define some custom mapping between your build and test targets. Here you can see we have an RPM build job, a Copr build job, and the targets are configured for EPEL 7 and EPEL 8. But you want to run tests and define some mapping, so for the builds with the target EPEL 7 you will run the tests on CentOS 7 and Oracle Linux 7, for example. It's possible to define a one-to-one mapping or a one-to-N mapping.

And another thing that was already mentioned: if you have your fmf metadata somewhere else, not with your code, you can specify the fmf_url that points to some other repository, and you can also specify the fmf_ref. Packit will forward this to Testing Farm and everything will work.

And now Simon will talk a little bit about some interesting Packit usage examples.

So, when preparing for this talk, Miro and I looked through some of the stats on some of the users. There were a couple more, but these were interesting, and they were running a lot of tests. The first one, these guys, Strimzi: they actually contributed to Packit. You can take a look at how they did it; they documented it. Maybe you already read this, I'm not sure.
But they don't use any of the building; they basically use the Testing Farm infrastructure to run their tests, but they don't do any building. And Cockpit, of course, uses Packit: they run the same tests that Fedora CI does, but they do it with Packit. And this project, Skupper: there is a default plan that will run if you have no tests defined. So even by just enabling the Packit service, the Packit integration, in your repo, what you'll get is this sort of sanity check: it builds your packages and tries installing them. So at a minimum, just by enabling it, you get that functionality, and this project is using that. So there are several use cases; you don't have to use all of the functionality, and at a minimum you still benefit a lot.

Statistics from Packit. Okay, so just for the sake of having some numbers to show you, like how many users actually use Packit, you can see some numbers for the past year. The most used job is the RPM builds in Copr, the Copr builds; as you can see, in the past year there were 76,000 builds. And then of course the Testing Farm usage: there were more than 40,000 Testing Farm runs. And as I already mentioned, the downstream automation: as for the syncing of the release from upstream to downstream, there were more than 700 runs of the sync. And here you can also see the activity of the Packit bot in dist-git; you can see it has been really active in the recent months. And also some badges we earned.

And then we have the statistics also for Testing Farm. On the image below you can see the numbers: in 2023 it is 680,000, and that's projected, since it's not the end of the year yet.
Yeah, so you can see it is really growing. And as for the distribution of the users of Testing Farm: firstly we have the Fedora CI, but then there is also Packit, with around a third of the usage of Testing Farm. So, really nice.

And if you would like to try Packit and Testing Farm, together or separately, it's up to you; you can check out our documentation, both the Packit and the Testing Farm documentation. And one more thing: if you ever want to contribute, we are very open to contributions. We will share the slides with you, and you can definitely check out the links. We are really happy to help anyone who would like to do some contributions. As Simon mentioned, the Strimzi team helped us and implemented some awesome features, and the same applies to tmt and Testing Farm.

Yeah, so I just want to mention that for Testing Farm, the code is public and you can contribute to it, but we don't yet have a nice developer guide or any kind of style guide or community guide or anything like that, so you might feel a little bit lost. But if you have the confidence and you know what you're doing, go ahead and make merge requests; it's up to you. We just don't have it very welcoming yet, so to speak.

Okay, and lastly, get in touch with us if you are interested in anything we have talked about. Here is some contact information: Matrix, email, and we also have a Mastodon account. So yeah, make sure to get in touch with us, and now it's time for your questions, if you have any.

Thank you. I have a question, probably related to Testing Farm: if my test requires specific hardware, is it possible to define that somehow?

Yes. If you look at the tmt documentation, you can see that you can specify specific hardware. In the public ranch you'll have access to x86 and Arm; in the internal Red Hat ranch you'll have access to Beaker, which is full of interesting hardware.
So this is not just about VMs; with Beaker, there's bare metal access. With the public ranch it's only VMs, just for cost reasons.

All right, yeah, I see. Thank you.

Are there any resources for learning about how the... fmf is the syntax and tmt is the tool, right? Are there any resources for how to actually use fmf? Because I guess the way it works is a bit different from how other traditional CI systems work.

So, you don't really have to... fmf is technically like a library, or rather it is a separate project, but you don't have to know about it; the documentation for tmt is sufficient to help you use the test management tool. And it's YAML, so it's not very complicated. Maybe I can go back to the slide... all right, we don't have a link to the tmt documentation here. If anybody's interested in that, I'll send it to you, but you can just google tmt and you'll find the documentation, and it should help you get started. And if anybody's interested, maybe we can do a workshop later or something.

Any other questions? Okay, so if not, then thank you for coming.