Test one two, check, check, one two. So, welcome and thanks for joining. We'll be talking about upstreaming Fedora CoreOS. My name is Clément Verna, I'm one of the engineering managers working with the CoreOS team, and today I'll be presenting together with Ellen. Hi, I'm Ellen, I'm a product owner on the CoreOS team as well. So, yeah, we'll get started. We're going to talk a little bit about what Fedora CoreOS is, how we're building and testing it right now, and our plans for where we're going with it. So, what is Fedora CoreOS? We're now an official Fedora edition. We became an official edition in Fedora 37, together with Workstation, IoT, Cloud and Server. This means we're more integrated with the Fedora release process, including the release blocker and go/no-go processes. We focus on single-node use cases, not clusters. We're the successor of two container-first OSes, CoreOS Container Linux and Fedora Atomic Host, and we've incorporated ideas from both: the provisioning stack and cloud-native expertise from Container Linux, and the Fedora foundations and upstream focus from Atomic Host. So, the philosophy of Fedora CoreOS comes from the merge between Container Linux and Atomic Host. The idea was to take the best of each project, which resulted in the following key values. The first is automatic updates. This is something that only Fedora CoreOS does in Fedora; the idea is for FCOS users to not have to worry about the OS. Then automated provisioning, which makes it easy to have one or a thousand nodes that are all the same. And then immutable infrastructure, which leverages the OSTree technology to make it easy to update nodes. So, our release model is different compared to other Fedora variants. Instead of major releases every six months, FCOS releases every two weeks on three different streams. This differs from other Fedora editions, and the model is linked to the automatic updates.
So, in order for users to trust automatic updates, the stable stream has to be stable, and this model allows testing in advance of the stable stream to avoid issues. We are supported on many platforms, and a few of those are now integrated into our automated testing. Some of those are cloud or virtualization platforms and some are bare metal options. Some of the automated testing happens during our release process on x86_64 and aarch64. We also have builds on s390x, but we don't have automated testing on those yet. Here are some statistics on our nodes. The red line you're seeing there is the transient line, and the blue is the static line. Transient includes systems that have been live for less than one week, and static is systems that have been live for more than a week. The dip in the graph was due to an infrastructure issue that we had. The number of CoreOS instances keeps going up and continues to grow: we're reaching over 40,000 instances, out of which 30,000 are up for more than one week. And some statistics on Fedora releases: you can see the success of the automatic updates, as the majority of FCOS instances are running on the latest version of Fedora. So, why do we still see older versions of Fedora? OKD, the community version of OpenShift, is using FCOS. Updates there are managed by the OKD admin, and a specific version of OKD is tied to a major version of Fedora. And for the architectures, we're most popular on x86_64, and FCOS is the most popular Fedora variant on aarch64. Okay, so I'll hand over to Clément. Yeah, so, we wanted to take a bit of a deep dive into how Fedora CoreOS is released and the release engineering behind it. Since we don't have the same release model as other Fedora variants, we have a bit of a different process. So, how does a package update land in Fedora?
So, the beginning is very traditional. It all starts with a commit in dist-git, where the packager updates their software, does a Koji build and a Bodhi update, and it reaches the stable repos of Fedora. Then, that's where the Fedora CoreOS magic sauce happens. In Fedora CoreOS we have this concept of lock files, which I'll talk a bit more about. Then we do our Fedora CoreOS build, the Fedora CoreOS tests, and then the release. Let's look at this in a bit more visual way. So, a commit in dist-git; I took the example of OpenSSL. The packager does their work, builds it in Koji, then pushes the update to Bodhi. As you can see, the update reaches the stable repos, so now it's available for every user in Fedora. The thing is that, up to this point, this update has never seen a Fedora CoreOS system. We went through a lot of testing, people have been providing feedback in Bodhi, but it has never been tested on Fedora CoreOS. So that's a big point we want to work on: the Fedora CoreOS release cycle comes quite late in the packager's process. To get this stability for Fedora CoreOS, and to let users keep automatic updates on, we have this system of lock files. For each release of Fedora CoreOS, we just fix the package versions. For example, going back to OpenSSL, we would say that the next stable stream release of Fedora CoreOS is going to ship with OpenSSL 3.084, and that's going to be the fixed version for two weeks. Then, two weeks later, when you get a new automatic update, you get a new version. That's really how it's done. Everything is stored in a Git repository on GitHub, so you can have a look at it, and you can see all the versions of all the packages that are shipped in Fedora CoreOS. To do this, we have a bot that does just that periodically.
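To make the lock-file idea concrete, here is a rough sketch of what an entry in one of those per-architecture lock files looks like; the repository is coreos/fedora-coreos-config on GitHub, and the package versions below are invented for illustration, not real pins:

```json
{
  "packages": {
    "openssl": {
      "evra": "1:3.0.8-2.fc38.x86_64"
    },
    "systemd": {
      "evra": "253.7-1.fc38.x86_64"
    }
  }
}
```

Each stream carries a file like this per architecture, so a given two-week release always composes with exactly these versions.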
It looks at the new builds that are pushed to the stable repository in Fedora and bumps those versions, creating the commit automatically. Once we have this, we have a scheduled test job that runs every day. And that's the first time, pretty much, that we try to build a new version of Fedora CoreOS with the new content. You can see it's a Jenkins pipeline with different stages: we fetch the new content, build the new OSTree, sign the OSTree, and then we build the QEMU image so we can start our testing. In Fedora CoreOS, most of the testing is done using kola, which is a test runner that we maintain and develop. In the happy path, everything works. But sometimes tests start failing, and in that case quite a lot of investigation needs to happen to understand: with the new updates we've consumed from Fedora stable, what has changed to cause the failure in the Fedora CoreOS tests? If we go back to the release process, you can see that the feedback loop comes quite late; we really only start to test for FCOS near the end. So if we have to make any change, or if an update to a package actually breaks tests for Fedora CoreOS, we can only provide the feedback a long time after the packager did the work. So, what are our options in release engineering? We have three options. If there is a failure, first comes the investigation. We open a bug, say okay, this test started to fail, start to look at it, try to do the diff: which packages have changed, which package could be the cause of the test failure. If it's an upstream issue with the update, we can then file a bug, in Bugzilla or as a GitHub issue, with the upstream project. This was a recent case where a change in systemd broke Fedora CoreOS.
So, we report the issue, it gets fixed upstream and then flows down into Fedora. We also have the possibility, with our test runner, to snooze tests. If we see that a test starts to fail, but it's not something very important that we consider will break the stability of the system, we can say, okay, let's snooze or ignore that test for the next week or so while it gets fixed, but we are not going to block the release. If it's a bug that is a bit more important, for example the systemd one, we have the ability to lock the package to a previous version. So if a systemd update is breaking Fedora CoreOS, we say, okay, we have identified a bug there, we're waiting for upstream to fix it, but for the next release of Fedora CoreOS we're going to ship the previous, working version of systemd. That's where the system of lock files is quite useful, because we can play with the versions that we release to users and keep that stability. We also have the ability to fast-track an update. If we know that a bug fix we are waiting for is already in Bodhi, waiting for people to test it, and it has not reached the stable repos yet, we can fast-track that update: we reference the Bodhi update, take the actual version, and release it in Fedora CoreOS before it reaches Fedora stable. That's also why we have all those different streams. Users will be very familiar with the next, testing and stable streams, but we also have development streams. We have a Rawhide stream that tries to get us early feedback on the changes happening in Rawhide, so we test early and get to see what is going to break Fedora CoreOS in the future. And we have next-devel and testing-devel, which is where a lot of those nightly builds happen and where we run our CI and the tests.
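The snoozing, pinning and fast-tracking just described are driven by files kept alongside the lock files in the config repository. As a hedged sketch, an overrides entry might look roughly like this; the field names follow the fedora-coreos-config conventions as best I recall them, and the versions, issue URL and update ID are invented:

```yaml
# manifest-lock.overrides.yaml (illustrative)
packages:
  # Pin: hold systemd at the last known-good version until upstream fixes the bug
  systemd:
    evr: 253.7-1.fc38
    metadata:
      type: pin
      reason: https://github.com/coreos/fedora-coreos-tracker/issues/0000
  # Fast-track: pull a fix from a Bodhi update before it reaches the stable repos
  podman:
    evr: 2:4.6.1-1.fc38
    metadata:
      type: fast-track
      bodhi: FEDORA-2023-0000000000
```

Snoozed tests live in a similar denylist file, each entry carrying a tracker link and a date after which the test starts blocking again.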
I think we consider it a little bit like our main development branch; it's where the development is happening. And for users, the next and testing streams are really where we encourage people to test, provide feedback, and tell us in advance if something stopped working for their system or their workload. So, what we would like to improve in the next Fedora releases is how we can upstream our testing. It's all about feedback loops; I talked a bit about that. I quite like this saying: feedback loops are better when they are shorter. As we've seen, we were only testing quite late in the whole process. The idea here is to provide very early feedback: when the packager builds their package in Koji, trigger a Fedora CoreOS build, run a subset of our Fedora CoreOS tests, and loop that feedback back onto the Bodhi update. So the packager would know, when they create the update in Bodhi, whether their update has impacted the Fedora CoreOS build or not. The idea is really to reduce a lot of the effort that goes into investigating why tests are failing late in the development cycle, and have the feedback directly on the update. Instead of having to work out which package was the root cause of the issue, or dig in and look at the diffs, we would directly see in Bodhi: okay, when systemd or the latest kernel came in, it actually broke that test for Fedora CoreOS. There is a lot less investigation needed. So, how would we be able to do that? Actually, it should be relatively easy, in quotes, because a lot of the infrastructure is already in place. We already have testing on updates, and it's about plugging our Fedora CoreOS release engineering testing into the rest of the testing infrastructure used by Fedora.
So, there is a system called ResultsDB, which is really a database with all the test results that are run on updates and on composes. For us, it would be interesting to look at updates. Pretty much when a Koji build is successful, we would have a fedora-messaging message that we would catch, trigger our Fedora CoreOS testing, and report the results of those tests into ResultsDB. Bodhi is then able to get the results through ResultsDB and display them on the update. This is already working for other tests; you can see here that for each update we already run quite a lot of tests for the other variants of Fedora. So, alongside, for example, an update base-system-logging test, we could have something like an update Fedora CoreOS kernel test or whatever. The packager would be able to see on their update if there are any failures, or if any required tests failed, and so on. If you are not familiar with them, the tests that have a small star, an asterisk, are tests marked as required for the update to be able to be pushed to stable. That's the gating. Yeah, Adam? [Audience comment.] Okay, so they are not marked by the packagers; there are distro-wide policies that define some gating, required tests for updates. Thanks. So, yeah, the big difference is that we would run all the Fedora CoreOS tests before the packages land in the Fedora repos. If it breaks Fedora CoreOS, it would not reach stable. And it would let the packagers get that early feedback and know how their update impacted Fedora CoreOS. It avoids a lot of later work in communication, going back and saying, okay, this was already in stable, but it broke that build, can we get it fixed, and so on.
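As a tiny sketch of the plumbing described above (catch a bus message, then trigger FCOS tests), a consumer could filter for completed Koji builds like this. The function is hypothetical, not Fedora infrastructure code; only the message topic and Koji's state numbering are real.

```python
# Hypothetical filter for fedora-messaging messages: trigger FCOS tests
# only when a Koji build completes (in Koji's state enum, 1 means COMPLETE).
KOJI_TOPIC = "org.fedoraproject.prod.buildsys.build.state.change"

def should_trigger_fcos_tests(message: dict) -> bool:
    return message.get("topic") == KOJI_TOPIC and message.get("new") == 1

messages = [
    {"topic": KOJI_TOPIC, "new": 1},   # completed build -> trigger tests
    {"topic": KOJI_TOPIC, "new": 3},   # failed build -> skip
    {"topic": "org.fedoraproject.prod.bodhi.update.request.testing"},
]
print([should_trigger_fcos_tests(m) for m in messages])  # → [True, False, False]
```

In the real service the results of the triggered run would then be posted to ResultsDB so Bodhi can display them on the update.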
So, it's really about saving a lot of time for people working on packages. So, how do we gate packages that fail the Fedora CoreOS CI tests? We enforce passing Fedora CoreOS tests before pushing updates. For that, it's critical to make rpm-ostree variants first-class citizens in Fedora. This will benefit all the rpm-ostree-based variants, like CoreOS, IoT, Silverblue, et cetera. Future considerations: building a common, minimal, shared Fedora OSTree image, either to use as a base for other variants to layer packages on top of, or to use to create other OSTree commits. Thank you very much, and does anybody have any questions? Is it on? Yep, yep. So, I have some more detailed questions which I'll come talk to you about after, but I'm curious: how long does your pipeline take? So, it all depends on how much testing we want to do. The original idea would be to run just a subset of core tests that we think are important, and it would probably be less than an hour, between 30 minutes and an hour. That's good, thanks. Thank you for providing us insight into how Fedora CoreOS works. Since my background is providing feedback from downstream to upstream, I can see that what you proposed, providing feedback when the build is done, is an improvement. But honestly, I don't think that's enough. I feel that in one or two years we'll be seeing a presentation where you go even further. So, I'm just thinking about whether we could do something better, like, for example, providing that feedback when the upstream release is created, or even in pull requests, going that far. Yeah, I think that's definitely maybe a talk for the next two years. There is a saying that I like: how do you eat an elephant? One spoon at a time. So that's the small step forward. I think we are in a situation where we really want to integrate a bit better, and as I said, the infrastructure is there already.
So, this is maybe a low-effort, high-impact step that we can take forward. But yeah, I definitely think this shouldn't be the end of the effort; if we put that in place, we should go even further and shorten the feedback loop even more. Definitely. So, I have a question. Maybe it's too in the weeds, so feel free to say we can talk after. With the Podman Desktop team, we rely on FCOS for delivering Podman on Mac and Windows. One of the issues we run into sometimes with releases is that Podman Desktop has a release cycle, Podman does, and FCOS does. So what ends up happening, like when Podman 4.6 came out, is it seemed to just have missed an FCOS release by a day or two. Then we're waiting for it to go to stable in Bodhi for like a week, and then the next FCOS release is another wait, so in the end we made a release and our users couldn't get the latest Podman until like a month after the release announcement went out. I'm intrigued by your new model of having an additional test build, and I'm wondering what the implications are there: can that somehow help speed this up, or is it orthogonal? I think what would help a lot, and it's not necessarily tied to the feedback loop and doing our testing earlier, is more about having a minimal Fedora OSTree image that could have a different lifecycle, or just follow the normal six-month release of Fedora, where you get the updates normally. This could be a base for all OSTree systems, Fedora IoT, Fedora CoreOS. It could become what is actually consumed by the Podman Desktop team. And yeah, I think that could be something to explore, whether it fits better. Lifecycles and release cadences are a tough topic when you try to align everything.
Yeah, I think we have some ideas around having that core set of packages that is useful in all the variants, and that could have a different lifecycle or consume the updates in a different way. I think that's a future consideration. Does that mean it will happen? I think we definitely have the idea to do it. You know, the CoreOS group is still part of the Fedora change process and so on, so that would be something we have to propose to the community, have discussions, and see if there is interest. But yeah, that's definitely something that has been on the minds of people working very closely on CoreOS in the last few years. Thanks for the presentation. I have two questions. First up, how would this Fedora core image be different from Fedora Minimal, except for the base being OSTree-based? Would there be a significant difference between what Fedora Minimal is today and what this Fedora core would be? So, by Fedora Minimal, you mean the container image? Yep. First, there will be the kernel, since in the container image you don't have the kernel, but yeah, that could be a good start. It would differ from Fedora CoreOS in that in Fedora CoreOS we have a lot of things that are very opinionated towards running containers, for example Podman and Docker, where in this minimal Fedora OSTree image we would not necessarily have those. I think we would look at the bare minimum to have an operating system that just about boots, and then let users or different working groups or editions customize that to their needs. So yeah, the package set that is in the Minimal container image could be a good start for that. Okay. I have another question: currently the kola tests kind of upgrade, if you will, from one particular stream version to another, like 38 to 39, and so on and so forth.
But with automatic updates you can go all the way from 35 to 39, and in the recent past that has failed; there was an error migrating from 35 to 39. So, is this going to fix that? So, when we have updates like that, for example from 35 to 38, there is a mechanism that I didn't put in the slides: we have barrier releases. We can force the system to go through a specific update, so you wouldn't jump directly from 35 to 39; we would first update the system from 35 to 36, because something needs to happen at 36, then 37, and so on. So we have that concept. In terms of earlier feedback preventing that, I don't think that would be the end goal, but it might help. If we are able to catch bugs earlier and prevent packages from going to stable because they broke some of those update paths, that would help, but I don't think it would be a catch-all. I think we could still have cases where switching from one major version to another brings some issues. All right, thank you. Anybody else have questions? I have a lot of questions. I walked in right during the Count Me part, which is relevant to my interests. This connects to Podman Desktop: do you have an estimate of how many of the running CoreOS systems reporting in there are Podman Desktop? Nope. No, yeah, it's hard to differentiate; there's no way of telling the Podman Desktop machines apart, because they just use stock FCOS images, so they report as CoreOS. I wonder if we could try to do something; I'm just wondering now in which segment they sit, like, are those systems that live longer than one week or not? No idea. Yeah, I'm not so sure. I think maybe we could try to get some kind of rough idea, but we don't necessarily have a very...
So if we move to that core minimal image, could that be built as a Podman machine image with some kind of metadata that you could then track? Yeah. Any other questions? Thanks very much. This question is a little tangentially related to the talk, but there's a request for Silverblue support on a new platform, and I'm thinking if we started working on that today, it might be... On which platform? Silverblue? It's specifically the Asahi platform, but it could be any platform; that's just an example. So, I've seen Colin do a couple of talks, and bootc is the new way for OSTree-based operating systems. If you're porting some of this OSTree-based stuff to a new platform, and say you're thinking it might be ready around Fedora 40, should you bite the bullet and start looking into bootc, or is that, I know, bleeding-edge stuff? That's a big topic. I don't know how many people are familiar with the work that Colin Walters has been doing. There are two topics: there's bootc, and there are OSTree native containers. I think what you're mentioning is more on the OSTree native containers side, where the rough idea is that you treat your operating system just as an OCI container. So you would have a base; coming back to this minimal core OSTree definition, this could be a Fedora OSTree core image that we provide as a container image, which is an OSTree system: it's the root filesystem of your system. And you can just use that in a Containerfile. You would say FROM that minimal Fedora OSTree image and then run your commands in your Containerfile to customize it. So if you need a specific kernel for a specific platform, you could then rpm-ostree override replace the Fedora kernel with the kernel you need for that platform, and so on. There's a lot of work happening on this now.
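A minimal sketch of the OSTree native container workflow just described, assuming the published Fedora CoreOS base image on Quay; the package being installed is only an example:

```dockerfile
# Treat the OS as an OCI image: start from the published FCOS base...
FROM quay.io/fedora/fedora-coreos:stable

# ...then customize it like any container image. For a platform-specific
# kernel you would use `rpm-ostree override replace` here instead.
RUN rpm-ostree install vim && \
    ostree container commit
```

The resulting image can then be pushed to a registry and rebased onto by running systems, which is what makes the container build pipeline double as the OS build pipeline.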
It's still a bit fresh, but I think that's definitely the future of how we want to release, at least for Fedora CoreOS, and I think long-term all the OSTree variants. And bootc is related to this in that it's what allows you to install and deploy those container OSTree images. So they are linked, but not necessarily the same thing. bootc would mean that you can start from any Fedora variant or edition; you could even start from your Fedora Workstation on that platform and use bootc to convert that Fedora Workstation into Silverblue for that platform. I think that's a whole other talk; there are a lot of details to go into. We can talk about it after if you want. Any other questions? Awesome. Thanks everyone. So, hello everyone. My name is Laura, I'm from the Packit team, and today I'm here with Simon from the Testing Farm team. We both work at Red Hat, and today we will talk about Packit and Testing Farm and their integration together. So, let's start with the agenda for today. Firstly, we will very shortly talk about Packit. Then we will switch to Testing Farm: Simon will explain Testing Farm, its users, and how it works. Then we will switch back to Packit and deep-dive into its features. Finally, we will talk about Packit plus Testing Farm and their integration, and we will also talk about the use cases and the users, and at the end we will show you some numbers, graphs, and statistics. So, starting with Packit: Packit is an open source project that tries to bring upstream and downstream closer together, and it has two main goals. The first one is to validate upstream changes downstream, so it's kind of a CI system that works in GitHub or GitLab. The second goal is to bring upstream releases to downstream, automate the process, and make it easier for Fedora package maintainers.
So today we will talk about packet mostly in terms of packet service, which operates on GitHub and GitLab and reacts to the events in the Federa disk it. But there is also a packet as a CLI tool that you can install on your Federa and run it locally. So when the last block happened in Budapest packet was in its beginnings and it didn't have that many users. But since then, I think packet user base has grown rapidly. So here you can see just some of logos of some of our users. So for example, Podman, SystemD, Cockpit, NM State and a lot of others. Yeah, so now I will hand it to Simon. Thank you, Laura. So I have some points to keep me on track so I don't get off into the weeds. So first of all, testing farm is a infrastructure as a service. It's a service that you can run your tests on and get results. There's storage for artifacts, there's queues and so on, but it's more than that because you can run your tests on multiple OSes and multiple versions of those OSes and on multiple architectures. So it scales, your testing will scale. Yesterday, I don't know, you guys probably, maybe some of you were at Adam Williamson's Fedora CI talk. He talked a little bit about how Fedora CI uses testing farm and CentOS Streamzool CI uses testing farm. That's in a public ranch and there's also a Red Hat ranch, which Red Hat, all the rails are tested on. And yeah, you can actually use that ranch as well. You get the, you apply for permission to use it and you change your configuration and packet and you're actually able to run your tests in the internal range too, if that's allowed. And of course, testing farm is used by packet. 
So, Testing Farm generally: I mean, there are lots of moving components, but there's one API endpoint that your test is submitted to. You submit a JSON request, a POST request, to the API; then the ranch is selected, your request waits in a queue, it gets picked up by a worker, the system under test gets created and installed with the fresh OS of your choice that you specified, and then the pipeline starts to execute your tests on that fresh system. The plans run, and the results and the artifacts are stored, and you can access these even after the VM is destroyed, because they're in artifact storage. So, you may ask: what's the benefit, why should I use Testing Farm, I could probably hack something together myself? Yeah, you probably could. The benefit of using Testing Farm is that, well, it scales to all the different versions of OSes, but also you don't have to maintain that infrastructure and you don't have to pay for it. So, Testing Farm is, next slide, is it? Yeah, Testing Farm is open to any Fedora or CentOS Stream contributor, team, or special interest group. Testing Farm is also open to any public project, service, or initiative which Red Hat or Fedora maintains or co-maintains, and of course Testing Farm is available to any Packit user. So, Testing Farm can be thought of as a backend for CI, but first we need to talk about tmt a little bit. In order to use Testing Farm, your tests need to be managed by tmt. I don't know if any of you have seen or used tmt before; tmt stands for Test Management Tool, and there is the notion of hierarchy and inheritance. These are two things that tmt lets you do really well. You can have core attributes that all your tests have access to; a very simple example is a version number or something. And then you have your tests and you have your plans, and then there are stories. Stories are actually optional, but at a minimum you need tests and plans.
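Going back to the submission flow at the top of this section, the whole thing boils down to one JSON POST. A hedged sketch of such a request body; the field names approximate the Testing Farm v0.1 API, and the repository URL and compose name are placeholders:

```json
{
  "api_key": "<your-api-key>",
  "test": {
    "fmf": {
      "url": "https://gitlab.com/example/my-project",
      "ref": "main"
    }
  },
  "environments": [
    {
      "arch": "x86_64",
      "os": { "compose": "Fedora-39" }
    }
  ]
}
```

The `test.fmf.url` field is the Git repo where Testing Farm expects to find your tmt plans, and each entry in `environments` requests one OS/architecture combination to run them on.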
You need one test in one plan, at a minimum. But stories will help you to know why you wrote a test and why it's written that way: do you really want to optimize it, do you really want to change it? It was written that way for a certain reason. It'll help you, and of course other people, to understand your tests. So, yeah, tmt also runs locally on your computer. It's a tool, so that's how you would develop your tests: you would first try them out on your laptop and see how it's going. tmt will create the SUTs, the systems under test, using VMs or containers, and because of that you don't have to worry about cleanup: once the test is done, it'll just destroy the VM and not contaminate your laptop or workstation. One thing to know about tmt is that it's not restrictive. You can write your tests in any language you want, in any testing framework you want. Basically, you just have to call it with tmt for it to be able to run in Testing Farm. So, say you're using pytest or something: just call that test through a wrapper or whatever you like, and then it will run. It's a test management tool, not a test writing framework, so it's very flexible. Another thing you can do: when you submit a test to Testing Farm, part of the request is the location of your tests; it's a URL to where your code is in a Git repo, and in there it expects to find a plan for your tests. So you could actually have just one plan in there, with the URL to another repo where all your tests actually are, so you technically don't have to keep all your tests with your code if that doesn't work out for some reason. Yeah, so one more thing to note: with Testing Farm you have all that scale, but with Packit you have even more scale, and Laura's going to tell us more about that.
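To make the tests-and-plans minimum concrete, a minimal tmt plan might look like this; the file path is illustrative:

```yaml
# plans/smoke.fmf — a minimal plan: discover tests from fmf metadata, run them with tmt
summary: Run the smoke tests
discover:
  how: fmf
execute:
  how: tmt
```

A matching test would live in something like tests/smoke/main.fmf pointing at your script, and locally you could exercise the whole thing with `tmt run --all provision --how container`, which is the throwaway-VM/container workflow mentioned above.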
Okay, so in the beginning I mentioned the two main goals of Packit, and now we will deep-dive into them. We will start with Packit as Fedora release automation, and so that I can explain it, I will very shortly go through how new code actually gets to the user of the Fedora operating system. I assume everyone knows this process, so just very shortly: at one end we have the upstream, the code, and at the other end we have the user who wants to install the latest, greatest change. There is a release, which can happen for example on GitHub or GitLab, and then as a next step the source code needs to be stored somewhere; for that there is the lookaside cache, which is simply an archive storage, let's say. Then we have the distribution Git, dist-git, which is in Pagure; there are the packaging-related files, so we have the spec file there and the sources file, and these need to be adjusted for the new change. Then there is, of course, Koji as a next step, which is the official Fedora build system, and Bodhi, the Fedora update system. After Bodhi the change arrives and the user can install it via DNF, for example. So, how does Packit fit into these steps, and how can it help with them? Here you can see all these steps on one screen, and on the right side you can see how Packit covers everything in the middle, between the upstream and the user installing the new software. Packit has jobs that can be configured, and for the Fedora release automation there are four jobs. They cover syncing the release, building the updates in Koji, and then bringing those updates to Bodhi. So, let's start with the first one: syncing the release, which means we need to bring the changes from upstream to downstream.
What needs to be done is that the archives need to be uploaded to the lookaside cache, and then the spec file needs to be updated, probably the version and the changelog, and the sources file as well. For this you can use one of the Packit jobs, either propose downstream or pull from upstream, and you will choose based on multiple factors. So if you are the upstream maintainer of the package, you can configure propose downstream. Propose downstream is configured directly in the upstream repository, so you need to place the configuration file in the upstream Git repo, and then Packit will react directly to the release in GitHub or GitLab. The benefit of this is that Packit also provides you the feedback, the results, directly in the GitHub or GitLab interface. Here you can also see a snippet of the configuration, so this needs to be placed in your upstream repo, and then you can also see a screenshot of how Packit provides feedback about the job: the propose downstream finished successfully, you get the link, and you can then see the PRs created in dist-git, which I will show you in a while. But of course, sometimes you have a package in Fedora and you don't have access to the upstream Git repository, you don't maintain that code. In that case, you can utilize pull from upstream. For pull from upstream, the only thing you need to do is place the configuration file directly in dist-git and add the pull from upstream job, and after that Packit will react to the upstream release monitoring messages and do exactly the same process, so it will bring all the changes to the Fedora dist-git. And as I mentioned, here on the screenshot you can see the PR that Packit opened: the version is changed, the changelog entry is added for the new version, and the sources are updated. Okay, so what's next? 
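The two sync jobs described here are configured roughly like this. This is a hedged sketch based on the Packit documentation; `hello.spec` and the branch list are placeholders:

```yaml
# .packit.yaml in the upstream repo (propose_downstream variant);
# "hello.spec" is a placeholder name
specfile_path: hello.spec
jobs:
  - job: propose_downstream
    trigger: release
    dist_git_branches:
      - fedora-all
---
# .packit.yaml in dist-git (pull_from_upstream variant), for when
# you don't maintain the upstream repo; Packit reacts to the
# upstream release monitoring messages instead
jobs:
  - job: pull_from_upstream
    trigger: release
```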
After the maintainer of the package reviews the change, the PR in dist-git, and is satisfied with the result, they can wait for the CI, and if everything is green they can merge the pull request. But then, of course, the new changes should be built in Koji, and it can be tedious to do this every time there is a new release, so Packit can help with this as well. The only thing is, again, to add this little configuration for the Koji build job in the dist-git repository, and after that, each time a pull request is merged in dist-git, Packit comes, takes the changes, and builds them automatically. Here you can see some packages built by Packit in Koji. Okay, but there is also another step, and that's the Bodhi updates, again a manual and very repetitive step, so how can Packit help? There is the Bodhi update job. So again, a code snippet: you put this in your configuration file, and Packit will watch for successful Koji builds, and once there is a successful Koji build, Packit comes, takes it, and creates the relevant update for the particular release. Okay, so that was it for the release automation. Now let's check the other aspect of Packit, and that's Packit as a CI solution. Previously, when we were talking about the downstream automation, Packit was mostly configured directly in dist-git, but if we want to use Packit as a CI solution, we want to validate things upstream, so the setup needs to be done there. So firstly, you need to enable the interaction with Packit, either in GitHub, for example here you can see the screenshot of the Packit GitHub application, or in GitLab as an integration. So you just do a few clicks and install Packit in your namespace or repository. The next step is that your namespace needs to be allowed, so you just provide your Fedora Account System login and we do the automatic matching, so a very quick step. And then almost the last step: you create the configuration file. 
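The two dist-git snippets mentioned here look roughly like this. Again a hedged sketch based on the Packit documentation; the branch aliases are just examples:

```yaml
# dist-git .packit.yaml: build merged changes in Koji, then create
# Bodhi updates from the successful builds
jobs:
  - job: koji_build
    trigger: commit
    dist_git_branches:
      - fedora-all
  - job: bodhi_update
    trigger: commit
    dist_git_branches:
      - fedora-branched   # rawhide doesn't need Bodhi updates
```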
There, you place what you want Packit to do for you, and if one of the things you want from Packit is RPM builds, you also need to place an RPM spec file, or at least add some script for how to obtain the RPM spec file. Okay, so after setup, what can Packit do for you? The most used job is the RPM builds. For the RPM builds, Packit uses the Copr build system, and basically you can configure Packit to build your RPMs for any pull requests, commits, or releases. Then, for example, if you configure Packit to react to your pull requests, with each pull request Packit comes, forwards the new code changes to Copr, the changes are built there, and Packit provides the feedback about the builds in GitHub again. As you see in the screenshots, we provide the links to the Copr web UI, the logs, and everything you need. One more note: with the RPM builds you can not only validate your changes, you can also, for example, configure the builds for pushes to the main branch or for releases, and then you can have a dedicated Copr repository and users can consume the builds from there directly. Another CI job you can configure is the VM image builds. These are a follow-up to the RPM builds: if you also want to create a VM image build, you just post a simple comment, as you see on the screenshot, and Packit will come, check whether there is a built RPM, take it, and create the VM image build for you. For this, Packit uses the Red Hat image builder, and as you can see in the screenshot, you again get everything you need in the GitHub UI: you have the links there, the status, and you are good to go. And finally, what we are here for: the tests. As you have probably gathered by now, Packit uses Testing Farm for the tests, and the configuration is very similar to the other jobs. So how does this work? 
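A Copr build job of the kind described could be configured roughly like this. A hedged sketch; the spec file name and the Copr owner/project are placeholders:

```yaml
# upstream .packit.yaml: build RPMs in Copr
specfile_path: hello.spec     # placeholder name
jobs:
  # validate every pull request
  - job: copr_build
    trigger: pull_request
    targets:
      - fedora-all
  # optionally also build pushes to main into a dedicated Copr
  # repository that users can consume directly
  - job: copr_build
    trigger: commit
    branch: main
    owner: my-copr-owner      # placeholder
    project: my-copr-project  # placeholder
```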
The user enables Packit, as I talked about previously, sets it up in the upstream, then optionally also configures the build job. After the RPMs are built in Copr, Packit forwards the package NVRs of the Copr builds to Testing Farm, sends the request, and then checks for the results. Once the results are there, Packit provides you the feedback again, as you've seen in the screenshots. As for the configuration, the tests can again be configured for pull requests or branch pushes, or also for releases. You can see that the configuration is really simple: you specify the trigger, and then you specify the targets you want to run the tests on. So let's have a look at more use cases for how you can utilize the tests via Packit. As Simon already mentioned, there is also a Red Hat ranch of Testing Farm, and it is really simple to utilize this via Packit. It is basically one configuration option, use internal TF. You enable this one, and of course you need to reach out to us, we need to allow you, so that you can use testing in the Red Hat infrastructure, but that's it, you're good to go. Then, for example, if you have some really resource-intensive tests that you don't want to run on each push to a pull request, but only manually, on a comment, you again specify one more configuration option, and that's the manual trigger. After that, when you are ready to test your changes, you can just post a comment and Packit will react to it. Another useful thing that can be done via Packit: there is this configuration option, TF extra params, and there you can specify anything you would specify in a request to Testing Farm. So one of the things you can specify is some additional artifacts. 
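Putting the options mentioned here together, a tests job might look roughly like this. A hedged sketch based on the Packit docs; the target alias is just an example:

```yaml
jobs:
  - job: tests
    trigger: pull_request
    targets:
      - fedora-latest-stable
    use_internal_tf: true   # run in the Red Hat ranch (needs approval)
    manual_trigger: true    # run only on a "/packit test" comment
```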
So if you want to do some reverse dependency testing, cross-project testing, you can just specify some repository in the artifacts, we will send these parameters to Testing Farm, and you can ease your reverse dependency testing like this. Then there is another use case: you may want to define some custom mapping between your build and test targets. Here you can see we have an RPM build job, a Copr build job, and the targets are configured for EPEL 7 and EPEL 8, but you want to run the tests elsewhere and define a mapping. So for the builds with target EPEL 7, you will run the tests on CentOS 7 and Oracle Linux 7, for example. It's possible to define a one-to-one mapping or a one-to-N mapping. And another thing that was already mentioned: if you have your FMF metadata somewhere else, not with your code, you can also specify the FMF URL that points to some other repository, and you can also specify the FMF ref. Packit will forward this to Testing Farm and everything will work. And now Simon will talk a little bit about some interesting Packit usage examples. So when preparing for this talk, Miro and I looked through some of the stats and some of the users, and there were a couple more, but these were interesting and they were running a lot of tests. The first one, these guys, Strimzi, actually contributed to Packit. You can take a look at how they did it; they documented it. Maybe you already read this, I'm not sure, but they don't use any of the building. They basically use the Testing Farm infrastructure to run their tests, but they don't do any building. And Cockpit, of course, uses Packit. They run the same tests that Fedora CI does, but they do it with Packit. And this project, Skupper: there is a default plan that will run if you have no tests defined. So even by just enabling the Packit service, the Packit integration in your repo, what you get is this sort of sanity check. 
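The target mapping and the external-metadata options described above could be sketched like this. This is a hedged illustration based on the Packit documentation; the repository URLs are placeholders, and the exact artifact types accepted under `tf_extra_params` should be checked against the Testing Farm request documentation:

```yaml
jobs:
  - job: tests
    trigger: pull_request
    # one-to-N mapping from build targets to test distros
    targets:
      epel-7-x86_64:
        distros: [centos-7, oraclelinux-7]
      epel-8-x86_64:
        distros: [centos-stream-8]
    # FMF metadata kept in a separate repository (placeholder URL)
    fmf_url: https://example.com/me/my-tests
    fmf_ref: main
    # extra fields merged into the Testing Farm request, e.g. an
    # additional repository for reverse dependency testing
    tf_extra_params:
      environments:
        - artifacts:
            - type: repository-file
              id: https://example.com/extra.repo   # placeholder
```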
It builds your packages and tries installing them. So at a minimum, just by enabling it, you get that functionality. This project is using that. So there are several use cases; you don't have to use all of the functionality, and at a minimum you still benefit a lot. Statistics from Packit. Okay, so just for the sake of having some numbers to show you how many users actually use Packit, you can see some numbers for the past year. The most used job is the RPM builds, the Copr builds: as you can see, in the past year it was 76,000 builds. And then of course the Testing Farm usage: there were more than 40,000 Testing Farm runs. And as I already mentioned, the downstream automation: as for the syncing of the release from upstream to downstream, there were more than 700 runs of the sync. Here you can also see the activity of the Packit bot in dist-git; you can see it's really active in recent months, and also some badges we earned. And then we have the statistics for Testing Farm as well. On the image below you can see the numbers: in 2023 it is 680,000, and that's projected, since it's not the end of the year yet. So you can see it is really growing. As for the distribution of the users of Testing Farm, first we have Fedora CI, but then there is also Packit, with around a third of the usage of Testing Farm. So, really nice. And if you would like to try Packit and Testing Farm, together or separately, it's up to you; you can check out our documentation, both the Packit and the Testing Farm documentation. And one more thing: if you even want to contribute, we are very open to contributions. We will share the slides with you and you can definitely check out the links, and we are really happy to help anyone who would like to do some contributions. As Simon mentioned, the Strimzi team helped us and implemented some awesome features. And the same applies for TMT and Testing Farm. 
Yeah, so I just want to mention that for Testing Farm, the code is public and you can contribute to it, but we don't yet have a nice developer guide or any kind of style guide or community guide, so you might feel a little bit lost. But if you have the confidence and you know what you're doing, go ahead and make merge requests; it's up to you. We just don't have it very welcoming yet, so to speak. Okay, and lastly, get in touch with us if you are interested in anything we have talked about. Here is some contact information, so Matrix, email, and we also have a Mastodon account. So yeah, make sure to get in touch with us. And now it's time for your questions, if you have any. Thank you. I have a question, probably related to Testing Farm: if my test requires specific hardware, is it possible to define that somehow? Yes; if you look at the TMT documentation, you can see that you can specify specific hardware. In the public ranch you'll have access to x86 and ARM. In the internal Red Hat ranch you'll have access to Beaker, which is full of interesting hardware. So this is not just about VMs? No, you can use bare metal. With Beaker there is bare metal access; with the public ranch it's only VMs. Okay. Just for cost. All right, yeah, I see. Thank you. Are there any resources for learning about FMF, which is the syntax, and TMT, which is the tool? Any resources for how to actually use FMF? Because I guess the way it works is a bit different from how other traditional CI systems work. So you don't really have to; FMF is technically a library, it is a separate project, but you don't have to know about it. The documentation for TMT is sufficient to help you use the test management tool. And it's YAML, so it's not very complicated. And maybe I can go back to the slide. 
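For the hardware question above, a TMT plan can state hardware requirements in its provision step, roughly like this. A hedged sketch based on the TMT hardware specification; the values are examples, and whether they can be satisfied depends on the ranch (for example, Beaker in the internal ranch for bare metal):

```yaml
# plan snippet: request machines matching these hardware constraints
provision:
  hardware:
    memory: ">= 8 GB"
    disk:
      - size: ">= 100 GB"
```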
Yeah, if you go to the... where do we have... oh, we don't have a link to the TMT documentation. If anybody's interested in that, I'll send it to you. But you can just Google TMT and you'll find the documentation, and it should help you get started. Yeah, if anybody's interested, maybe we can do a workshop later or something. Any other questions? Okay, so if not, then thank you for coming. Up next: Don Naro, scaling the Ansible community to new heights. Good luck. I'm pleased to be talking to you today about some of the work we've been doing on the Ansible community team to basically help grow and strengthen and sustain the Ansible community. I'm gonna start out with some intros of our team and some of the things we're gonna talk about today. By the way, the slides are linked in the schedule, so if you wanna grab them there, you can find them. So, the Ansible community team at Red Hat: we are all kind of funded positions to work with the Ansible community, and I thought it'd be great to introduce some of the people who do some of the work that I'm gonna talk about today. So there's Andre. He works a lot with collections. He's been doing some great work with execution environments and doing a lot there. There's Carol, who is really kind of the engine of the team. Like, we'd be nothing without Carol. Carol does amazing outreach, organizing events, and is just kind of almost like our spiritual guide. There's Anwesha, who's the release manager for the community releases. She also does great outreach; she's very active there, and just all kinds of things. Then there's me. I just, I don't know, sometimes I just try and make it look like I do stuff. Carrying on: there's eight of us in total. There's Greg, who's the team lead and community architect. Just unbelievable insights and experience from Greg, and really, he's an awesome dude. 
There's Sandra, who's the docs lead, and she's also kind of like a project manager for us and keeps us on track with a lot of things. Leo from Argentina, who also does a lot of outreach. He's working on labs, and he gets involved in some of the schools with Red Hat ambassadors; I'll tell you a bit more about that. And also Walter. Walter is kind of the guy who helps us interface with Red Hat a little bit. He advocates for us, and he helps make sure that we, and when I say we, I mean the entire Ansible community, have the direction and support that we need from Red Hat to succeed. It's really a critical role. So, this is all about work that we've done in the past year to build and grow the community. One of the things, if you go to ansible.com today, you'd notice that there's a lot of Red Hat focused content, about the platform and products that Red Hat sells, and there's not a terrible lot for the community. It exists, there's a community page, but what's been missing is the central place: where can the Ansible community come online and have discussions? I think one of the things we've been dealing with in the community is a lot of fragmentation. This comes in different ways, like ansible.com, but also discussions. There are things that happen in GitHub issues and GitHub discussions, and there are lots of decisions and chats that happen in these various places, and without a central space it's hard to have a view into everything that's going on. It also makes it extremely difficult to know when a community decision happened, to have a historical record. When you get several years down the line from a decision and it's like, well, how did this come to be? You lose that record, and that's an important thing for the community to have. So we've been building a community website. Again, the links are in here. 
We've got some fantastic help from the Fedora community; this is one of the places where things have come together. Mo, who's here, has graciously given us some great advice to help us get started. Also, Michael Scherer gave some great tips for choosing a static site generator, and static site generators come and go; we've had to rebuild things in the Fedora community a couple of times. What's important is that you abstract the content away from that, choose tools that the community uses, and meet them where they work. So we're actually going with Nikola, which is built in Python and uses Jinja templates, so it's all familiar tools and tech for the Ansible folks. So, building this: this is the wireframe, but if you go to our repo you can see the work in progress; it's deployed to GitHub Pages. You can join us on Matrix if you wanna get involved. We're gonna start building out the final thing based on the wireframes we've got, and this will serve as that central point for the community on the web, where you can get access and know where to go, and kind of have a home on the internet. Along with this, we're launching a community forum on Discourse. Greg, again, has been spearheading this; I think he's wanted this for four or five years at this point. And it's coming together, again in collaboration and cooperation with the Fedora community and working with CDCK. We will have a place where community discussions can take place and where we can have those historical records of decisions. A lot of the discussions and community topics and votes that go on use GitHub, which kind of works, but it's a little awkward sometimes: the flow for having a vote, then closing the vote and closing the discussion can be a little awkward, and it can be difficult for new people to get involved with that. 
So obviously, having a central forum and a place to discuss all things in the Ansible community is gonna be great, and that's coming real soon, so stay tuned. Another big part of the Ansible community and the team and the work that we do are meetups and events and all the outreach that goes into them, and we have Ansible community days. There have been a couple already this year, and there's gonna be one in Berlin soon. The community days are a time for everyone in the community, whether you're an individual contributor or not, whether you're just a user, an enthusiast, or even a Red Hat customer; you can come along and talk about Ansible and share, and it's a place for everyone to get together. The contributor summit is more an opportunity, I think, for contributors to come and talk to Red Hat teams and work with Red Hat engineering to find new solutions and all that good stuff. Some upcoming meetups and events: I already mentioned the community day, but there's stuff going on all over the place. DjangoCon should be really cool too. One of the things, I mentioned Anwesha: she has created a meetup organizers toolkit, and this is to provide the community with a set of resources that will help you plan and carry out Ansible meetups successfully. It's available here, and it's also something that would benefit the Fedora community, and we kinda wanna share it because it's broadly generic for anyone who wants to organize a community meetup. This is something that Anwesha noticed: hey, we need this. People are trying to organize meetups and it's not always going so well, because they might be new and don't know how to do it, so hopefully this will facilitate a lot more in-person interactions. 
Leo, as I mentioned earlier, this is him at a university in Buenos Aires, and he's going in and working with Red Hat ambassadors to reach out to students. Ansible is this great starting point for open source: if you wanna be a Python programmer, if you're interested in sysadmin or DevOps kind of work, there's something for everyone with Ansible, so getting into universities is something that Leo's just started, but it's really great work. So, community releases. Just a quick show of hands, because I'm doing a lot of talking and I kinda see people out there; I wanna get a bit of interaction, I like some conversation. How many of you have downloaded Ansible from PyPI, pip install ansible? Really? Okay, okay, there we go. I'm sure some people in the virtual audience have. Well, so that's the community release, and that involves Ansible Core and a bunch of Ansible tools, like for running playbooks, as well as community collections. That release process has been handled by Anwesha Das since Ansible 7 RC1. She started as the shadow release manager; now she's the release manager. She's been doing great work with those uploads and building and making them available to the community. And of course, we've had some help from friends along the way, like Christian and Sandra and the steering committee. Also, one of the things this team has been doing this year is trying to open up more and give back to the community, giving them ownership of some of these processes and making sure that they not only have visibility but can participate and drive this. It doesn't have to come from someone who's a Red Hatter, and the processes shouldn't be behind Red Hat firewalls or whatever. 
And Anwesha's been documenting this process with the steering committee, and she's also been automating a lot of it through GitHub Actions and different workflows, so that it'll be easier for people from the community to get in and handle that process. One of the things I've been working on is what's become known as the docs lift and shift. We kind of had this point where we were trying to get all these community docs initiatives going: creating new content, revamping stuff, and restructuring. We restructured the user guide, which was just this big honking chunk of all these different topics, and being able to retrieve and find information within it was a lot of work, so we decided to break things up. We're doing things like that to improve the docs and to get more contributors in through the documentation. And one of the things we found was that with some of the people on the core team, we were just kind of getting in each other's way a little bit: there were sanity tests, and some of the needs that the core team had didn't exactly align with what the community needed. And there's a lot of end user documentation, most of the collections and stuff like that, stuff that's owned by the Ansible community and the steering committee. So we've created a separate documentation project. That's been a little controversial, because some people think that documentation should always belong in the same repository as the code, the thing that it's documenting, that kind of docs-as-code approach, which I adhere to myself. But we just got to the point where it made sense to create a separate documentation project so we can really accelerate some of those efforts. And so far it's been really successful; we've seen a lot of community engagement with it. So that's been great. 
We've been adding new workflows and doing all kinds of fun stuff there. Another thing I've been working on is revamping the Ansible docsite. The docsite, just to quickly disambiguate the term I'm using here: this is just a set of static HTML pages, kind of the landing page when you go to docs.ansible.com and navigate around. It's those top-level HTML pages that sit in front of the actual documentation once you drill down, so it's the main entry point for most new Ansible users. At the start of the year, this is a snapshot, actually from around when I joined; I'm still fairly new to Ansible myself, around May 7th last year. There were these big cards that took up all this real estate, and the focus was more on, you know, here's the community, then there's the platform, the downstream offering, and then there was some Core stuff. So it was focused more on the tools and not really taking the user's perspective too much. So we decided to create a more user-journey, user-centric approach, and to start doing that, we started identifying personas. A persona is just a representation of the user, the person who's looking at the content. We defined a few; you can find them here in Markdown, because we put everything in Markdown, in the open, so it's in plain text and it's there for contributors to look at, not in some kind of slide deck or whatever. We focused on the needs, the attitude, and the knowledge of the personas. We identified them and then asked: what do they need? What type of content? What's their attitude? The attitude helps you determine the level of verbosity. Say, a Python programmer is gonna want all the programmatic options and their expected behavior. 
But if it's an SRE somewhere, they just want: when the red light's flashing, show me the remediation. And then the knowledge also helps you tune in more and meet the needs, because a hobbyist is gonna have a different set of knowledge than, say, a solutions architect. So, once we had our personas, we decided: what do we do with those? What's the next step? Well, I started as new to Ansible, and I came over from JBoss; I've spent a lot of my career in middleware, for my sins, in a past life, I think. But I was really super familiar with Kubernetes, and for me this was like the Kubernetes journey. It seemed like a very sort of abstract thing: these different milestones could be applied to most projects with technology. Someone starts out, they become aware, then they evaluate, then they adopt and start using, then they scale out. These milestones describe the progressions that you would go through. And we decided to start with human motivation as the first thing, and we started mapping out the journeys against those milestones for each of the personas. Again, we've got those in Markdown; you can check them out there if you want to find out a bit more. And once we had the persona journeys mapped out, at each step... ten minutes left? Is that ten minutes? Sorry, just so I'm clear. Quarter past? Like, ten minutes in. Yeah, ten minutes roughly. But we're running into lunch, so it's okay if you run over. Okay, yeah. I've got like 40 slides here; I've been practicing, so I'll try. I think ten minutes would be good. But hopefully you guys are into this anyway and everybody's having a good time. So you've got the milestones, and then the steps underneath them to complete them. 
And so we've got these things, and we're gonna build the new docsite based on them. So when somebody comes in, they're not gonna see, like, oh, here's community docs or here's platform. You're gonna see the entry point and you're gonna see these journeys: how do you do something with Ansible? So once we had those things, we decided to make them available to the community. Again, we're trying to use tools and tech that the community is familiar with, so we created this Jinja docsite. Naming things is actually one of the hardest things to do in tech, and I still kind of hate the name of that repo, but it's gonna go away. But yeah, the Jinja docsite. And when we started building the new docsite, making sure the community gets into it was vital, and this is actually the first thing we came up with. I went wild, I went bold at first. Part of the idea there goes back to Cunningham's Law: we wanted to get feedback from the community, and if we released this great, really high-polish site, people would just say, yeah, it's great. But we're building this for the community. So we intentionally put some things in there that didn't really fit, with kind of bold colors, and it seemed to work; we got a lot of really good feedback. Sandra, who I mentioned before, the docs lead, was fantastic at going out to the community and finding out what people thought; she hit Reddit, Matrix, we have this docs meeting, and you can see her here. We got a lot of feedback. A lot of it was super critical, a lot of it was like, yeah, this isn't so good, but we kept going. We gathered feedback through the Bullhorn newsletter, we iterated quickly over it, and over time things started to trend more positively. And then we released our journey-based docsite. 
So if you go to docs.ansible.com now, you'll see each of those milestones, and then the steps you need are direct links into the documentation. So you just get in and you find where you're going much quicker, and it's mapped to actual things that you're doing, your tasks; it's not mapped to a product or a tool or something. And you'll see there are sections for each of the personas that we identified. So that's some work that we've been doing. Along with that, we've actually done a lot with the documentation in the past year. One of the things I noticed, particularly when I was joining and trying to navigate around, was that there's this whole ecosystem of all these projects in Ansible, and they were all hosted differently: some of them on Netlify, some on Read the Docs. And some of them were even third-party, kind of forks of the documentation, or mirrors, that were on Read the Docs. So you couldn't trust anything from just looking at the URL or looking at the site: is this Ansible, or is this some third-party thing? Which is fine, but it's hard to know if it's official or not. And there are docs in Markdown in some repo. So you get the idea: there are docs all over the place, but it's hard to know what is Ansible community and what is not, and what can I trust? And it all looks different. So one of the things we've done recently is we got the Ansible namespace on Read the Docs, and we put all of the Ansible projects under that namespace. So now there's a consistent URL, and if you go to the Ansible Read the Docs page, you can see all the projects that are in the ecosystem. There are deterministic URLs, and there's also a community theme that we've been applying, so you get a consistent look and feel while you're browsing the documentation. 
And this really helps build trust with the community and create a cohesive identity. We've also been working on removing barriers to entry and making sure that community users can get in with Ansible and get up and running quickly. Coming from, say, the JBoss world, you go in and the first thing you see is a hello world. When I joined, I went to docs.ansible.com and it was like: where is that? How do I start using Ansible? There was a link to a quick start video that went to a Red Hat site, and the video didn't load. Then there was another link that took me back to docs.ansible.com, so there was this loop. I remember spending about ten minutes just trying to answer the questions: what is Ansible, and how do I even use it? So we've been working on quick start and getting started guides; Andre on our team has done a fantastic job there. He's also been working on execution environments, which are basically container images that act as the control node. This is something we noticed was causing confusion in the community; a lot of people didn't even know what an execution environment is. It's a thing that gets talked about a lot more downstream than in the community, even though it should be available to the community. So working on that is one of the points we want to fix, to make it easier for community users to get in and start using things quickly. Also, Leo has been working on community workshops. You can find a couple out there, but they're a little bit out of date, and there's a Red Hat one that's mixed with AAP stuff, which is all good to learn from, but it's not quite Ansible community. We've got some new stuff coming up on Instruqt, based on our personas: new community users, one of the main personas, and then more advanced users.
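The execution environments mentioned a moment ago are defined declaratively and then built into a container image. As a rough sketch (the file follows the ansible-builder version 3 format; the base image, collection, and Python package named here are placeholder examples, not anything from the talk):

```shell
# Write a minimal, hypothetical execution-environment definition
# (ansible-builder v3 schema; the names below are illustrative only).
cat > execution-environment.yml <<'EOF'
version: 3
images:
  base_image:
    name: quay.io/fedora/fedora:latest   # assumed base image
dependencies:
  galaxy:
    collections:
      - ansible.posix                    # example collection
  python:
    - jmespath                           # example Python dependency
EOF

# Building it would then be (requires ansible-builder; not run here):
#   ansible-builder build --tag my-ee:latest
grep -q 'base_image' execution-environment.yml && echo "definition written"
```

Running `ansible-builder build` against a file like this is what produces the container image that then acts as the control node.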
So real soon we'll have a whole set of workshops that you can go through online, self-paced. Now, that's kind of a whirlwind tour of all the stuff we've been doing. One of the things I wanted to do when I came to Flock, and this is actually my first Flock, but I've been a Fedora user for years now, is have conversations about how we can get the Ansible community and the Fedora community to come together. We've learned so much from the Fedora community already, and we want more of that. I think the toolkits we've been working on would be great for Fedora community users, and even stuff like ansible-test should run on Fedora. I was talking with Kevin about how all the Fedora infrastructure uses Ansible, and maybe we can help there, update some of the syntax, and find some ways to improve it. So this is a direct call to the Fedora community: let's hang out. Why not? And as always, we're totally open; please come and join us. You can find us on Matrix, you can find us on Mastodon, you can subscribe to the Bullhorn newsletter if you want all the info about what's going on in the Ansible community, and you're welcome to come share your news and add to the Bullhorn. The Ansible community meets weekly, every Wednesday, and there are lots of special interest groups. I mentioned docs; I know there are a couple of documentation people in the room, some tech writers. Please come and join us. There's tons of work, whether you're looking to get involved in open source, or you're really experienced but don't know Ansible that much and would like to come hang out with us. We're on Matrix. We're your friends. Thank you. Any questions, or anything? Do we have time for that? I think it's lunch, so we can take one or two questions if anybody has any. Well, you know where to find us if you do have questions. Thank you very much. Thank you.
We break for lunch now. Thanks, everyone. So, good afternoon. I hope you had your lunch and you're all ready for the next session. I'm Sumantro, and I'll be talking a little bit about Toolbox. If you don't know what Toolbox is, I'll give you a brief intro of why we have Toolbox, what it's used for, and where we stand with the project. At this point, Debarshi and I are maintaining the Toolbox RPMs and the images. So, a couple of things about me. I work for the Fedora QA team, and my primary job is to test packages and compose images, and anything new that comes in as part of test days. Other than that, I do a few other things in Fedora. One is making sure the Council understands, or rather works with, whatever new community stuff we want to roll out: objectives, Fedora Ambassadors, and a few other things. I also coordinate outreach in Fedora, so if you're willing to do internships in Fedora, that's one of the programs I coordinate. With COVID that has not been very successful, but we keep having more and more projects. So if you're someone who has projects, initiatives, or very small bite-sized tasks you want to accomplish, reach out to me; I coordinate internships with GSoC (Google Summer of Code) and Outreachy, so I'd be able to help out. I usually hang out on Libera.Chat in the couple of channels mentioned there, and that's mostly it. So let's get started. First up: a long time back, when we started doing Linux distros, everything was based on a specific packaging format. You have the debs and the RPMs, and you basically package everything in a deb/RPM style.
The problem is that everybody's machine becomes a snowflake. If you have a machine running Fedora-something or Ubuntu-something, with your own package set, it's very hard for me to debug what's happening on your machine. If something crashes, it's nearly impossible unless you give me a full traceback of what broke, and it's very hard for normal users to provide that much log data every time they file a bug. Hence most of the bugs that people file are marked as invalid, and in short they don't contribute much to solving the problem. As a result, it becomes very hard for us to test what these problems are and how they started. The other thing with RPMs and debs is that these updates are not really fault tolerant. If you try to upgrade your machine and you run out of space, or your power goes out, or your battery fails, well, that's a whole different issue after that point. So historically we had exactly these problems. Another problem is that we had almost no separation between the apps and the operating system itself. Think about it this way: if you wanted the latest version of Firefox or the latest version of darktable on your machine, you'd probably need the latest version of the operating system. If you expect Fedora 39's version of Firefox while you're on F38, then until that version is packaged for F38 specifically, you cannot really have it. That becomes challenging: with no separation between apps and OS, you might have to upgrade your entire production system just to support your apps. And that's a problem that has been there for a really long time. One advancement we had a few years back was OSTree. For those of you who don't know what OSTree is: think of it like Git, but for your operating system.
So when a maintainer commits something, the entire thing is packaged under one hash, and every time you upgrade, you upgrade to that particular hash. So if something is failing, imagine a particular hash that we're all sitting on: there's a very high chance that if my Firefox crashes, your Firefox will crash as well, because we're sitting on the same hash. Which means that if I fix something and deliver you a fix, it's very easy for me to tell you: okay, that hash works, just upgrade to it, and that solves your problem. That's a very simple way to detect problems and fix them, or to find a root cause, without digging into which packages changed in the two days before the failure started happening on your system. So that started happening with OSTree, and on OSTree systems the apps were very sandboxed: since OSTree is immutable, everything came as either Flatpaks or Podman containers, so all the client applications were Flatpaks and all the server applications were Podman containers, and that made life easier for most of the people who used, and still use, OSTree systems. The only problem with OSTree is that there's no DNF; it's immutable, so you cannot really modify /usr. I mean, you can do a lot of things, but that's not really how OSTree is meant to be used. As a result, one of the things we started talking about was: how can we solve this? For example, if you're a C++ developer or a Go developer, you still need binaries, and those binaries are not Flatpaks; they're still RPMs, and you'd need to install those RPMs, but without DNF that's not really possible. The other way you could go was installing them into a container and then pulling them out from there.
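Coming back to the "everyone is on the same hash" idea for a second: the core of it is plain content addressing, which can be illustrated with a toy sketch. This uses sha256sum on a directory; it illustrates the concept only, and is not real ostree commands:

```shell
# Two "deployments" with identical file content...
mkdir -p tree-a tree-b
echo "firefox-117" > tree-a/firefox
echo "firefox-117" > tree-b/firefox

# ...hash to the same identifier, like an OSTree commit:
hash_tree() { cat "$1"/* | sha256sum | cut -d' ' -f1; }
a=$(hash_tree tree-a)
b=$(hash_tree tree-b)
[ "$a" = "$b" ] && echo "same hash: everyone runs the same content"

# Change one file and the hash changes, so a fix ships as a new
# hash that everyone can upgrade (or roll back) to.
echo "firefox-118" > tree-b/firefox
[ "$a" != "$(hash_tree tree-b)" ] && echo "new commit, new hash"
```

If two machines report the same hash, they are byte-for-byte on the same tree, which is why "my Firefox crashes, yours will too" holds.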
The problem is that when you keep using Podman for multiple things, your commands start getting a sentence or a paragraph long, because it sandboxes itself; and when you try to SSH between container and container and container, that gives you another level of problems. As a developer building apps, you really don't want such things. So the simple idea we came up with was to ensure that all these compilers, debuggers, and SDKs were set up in one single place, so you wouldn't have to spin up container after container after container to do the same thing. These were the questions a lot of people had. The last one is actually a hack: package layering works, but every time you layer, you reboot, and every time you reboot, well, you lose things. You really don't want to install npm dependencies and keep rebooting over and over. So, moving on, we started a new thing called Toolbox, and that was the point when we decided Toolbox would help developers run all of these things inside a toolbox and use it system-wide however they want. That's when we started making Toolbox more developer- and debugger-friendly. So if you're debugging code and you don't have a particular system, for example, I want to build something for CentOS but I'm running Fedora, my original option is to run a VM. Or a container, or anything for that matter; but every time I keep running Podman and giving it commands and more commands and more commands, that's a problem. Also, a lot of integrations with systemd, Avahi, and network drivers need to be explicitly set up most of the time. With Toolbox you don't need to do that; it's done implicitly for you. It increases the quality of life for a developer, and that's exactly how. So as part of Toolbox we have all these images we're talking about,
hosted on registry.fedoraproject.org (registry.fp.o). Some images come from registry.access.redhat.com, specifically the RHEL 9 images. So if you're on Fedora and you want to run RHEL for testing your code or building something, you can have a RHEL image; if you're on CentOS you can have a Fedora image, and vice versa. Toolbox makes it really easy to have a container, or a bunch of containers, running very specific things, and because it supports Wayland you can run your graphical applications as well as your CLI applications, wherever you feel like. This is one of those things we've talked about for a long time from a developer perspective very precisely. But recently we've also learned it helps elsewhere. One of my use cases back in the day: whenever I committed something to a repo, let's say a Ruby repo, I needed a bunch of dependencies to run a lint, and I don't have 1.9 GB to give my system for it to download things I'd probably never use after that point. In that case I'd run the entire thing in a toolbox and then run whatever commit I wanted. That creates a very simple separation for things I only want once, and, because it uses Podman on the back end, I can actually snapshot all of that, keep a tar of it, import it every time I want to use it, and reuse it again with Toolbox. That makes a developer's life easy. So, a couple of things we've decided with Toolbox, or rather are trying to do with it. One is we're trying to give you a very simple command-line environment for OSTree and OSTree-based distributions, namely IoT, CoreOS, GNOME OS, Endless OS, and so on.
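The lint-in-a-toolbox flow described above might look roughly like this. It is wrapped in a function so nothing runs by itself; the `toolbox` and `podman` subcommands are real, but the container name and the gem installed are illustrative, not from the talk:

```shell
# Sketch of the create/work/snapshot/reuse flow; call snapshot_toolbox
# on a machine that actually has toolbox and podman installed.
snapshot_toolbox() {
  # Create a RHEL 9 toolbox on a Fedora host (image pulled from
  # the Red Hat registry, as mentioned in the talk):
  toolbox create --distro rhel --release 9 ruby-lint

  # Do the heavyweight one-time setup inside it, not on the host:
  toolbox run --container ruby-lint gem install rubocop   # example dep

  # Snapshot the container as an image and keep a tar of it:
  podman container commit ruby-lint ruby-lint-snapshot
  podman save -o ruby-lint.tar ruby-lint-snapshot

  # Later, re-import and reuse instead of re-downloading everything:
  podman load -i ruby-lint.tar
}
```

The host stays clean; deleting the toolbox (or just the tar) removes the 1.9 GB of dependencies in one go.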
This is actually very important for the adoption of these OSTree-based operating systems, because without DNF a lot of the people who are developing apps wouldn't get the libraries they want, and without that support the adoption curve gets really hard for folks using OSTree. Also, there's a major thing a lot of our developers look forward to: when they run applications in, let's say, a toolbox, it's very seamless; your life doesn't revolve around multiple flags and parameters, and I'll come to how. The other thing is we've grown our code base, or rather the test code base, which is currently BATS, which expands to Bash Automated Testing System: around 250 tests, and growing. It mostly covers CentOS Stream 9, Fedora, Fedora Rawhide, and Ubuntu. Rawhide is the latest addition, because Toolbox used to just get Fedora images for the last three releases. In the recent release we've added it to Workstation by default, which means the RPM is still shipped in Workstation, the image isn't, but the image you can pull with Toolbox this time is the Fedora 39 image, the latest one. That's exactly what a lot of people in releng have been working on; Mikal is here, so thanks for helping us build that. All of this will also be moving to quay.io at some point, so it won't be registry.fp.o like it is today; it will be moving out, and that's one of the primary goals we have with Toolbox. Now, that's mostly what I wanted to cover. We're looking at four specific areas where we want help from contributors, the first being test coverage. When I say test coverage, this is exactly what I want to point out: we have the basic... yeah, that's a fire alarm. Should we go out? Yeah, what? I think we're not burning. I mean, we're alive, we won't be burning.
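For a feel of what those BATS-style checks look like, here is a tiny plain-bash imitation of one. The real suite uses BATS proper and exercises the toolbox binary; the command under test here is just printf, so the sketch runs anywhere, and the version string is made up:

```shell
# Minimal run/assert helpers in the spirit of BATS's `run` command and
# its [ "$status" -eq 0 ] / output assertions.
run() { output="$("$@" 2>&1)"; status=$?; }

# A hypothetical case: "command succeeds and prints a version string".
run printf 'toolbox version 0.0.99\n'   # stand-in for: run toolbox --version
[ "$status" -eq 0 ] || { echo "FAIL: non-zero exit"; exit 1; }
case "$output" in
  *"toolbox version"*) echo "PASS: version string present" ;;
  *) echo "FAIL: unexpected output"; exit 1 ;;
esac
```

Each real BATS case follows the same shape: run one toolbox invocation, then assert on its exit status and captured output, which is why the results come back in the binary pass/fail form mentioned later.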
So yeah, we're looking at test coverage; a couple of things on that. We already have the basic test coverage running currently, which is fine: it's not extensive, it just works and it's enough. But as we increase the number of platforms, Arch, and if we move beyond Arch, Kali and the rest, we don't have BATS testing enabled for all of those, and we'd want contributors to come and help us there. The other thing is, as we add more features to Toolbox: because it still uses Podman on the back end, we can actually do a lot of things with Toolbox that aren't being done yet, mostly because we want to enhance what exists today and make it stable quality before moving on to exposing the multiple options that Podman can still offer. So essentially we want those features, but for that we need contributors who can help us write test cases and manage them, both on the manual side and on the automated side. OpenQA is one of the good places where we're trying to put most of these test cases, which would automatically run for Rawhide, because Toolbox now gates Rawhide, so it would be much easier for us to catch a change that then breaks Toolbox. That's one of the places. We're also looking for anybody willing to speak at their local conference, because Toolbox is a very small piece of software: if you have a conference around you and you want to speak about Toolbox, get in touch with me or Rishi. We'll help you with a slide deck you can use to go talk about Toolbox in your local developer communities. The other thing is we have a bunch of documentation, mostly upstream; we don't have things in Fedora Docs yet, but if you're willing to write documentation for us, reach out to me. I'd be all game to help you write something. Now the last part, and this is where it gets a little tricky.
Toolbox relies on a lot of dependencies, and by that I mean that if something changes in systemd, in Podman, or in whatever Podman depends on, it usually breaks Toolbox; Gens here has filed multiple bugs for things that have broken Toolbox previously. Some of these issues are easy to debug and some are not, and this is where we'd want you to test Toolbox in multiple OS environments and let us know if something is breaking, because that way we'd be able to fix it in time. It's just me and Rishi working on it, so we still need more people to come help when things get broken. One more thing: Toolbox is release-blocking, which means that if Toolbox is broken by some chance, Fedora Workstation would block on it; in other words, we would block the Fedora release until that issue is solved, which shows how critical this can be. Having said that, that's mostly what I wanted to cover on the state of Toolbox, and I'll leave the rest of the time for questions. Any and every question you have around Toolbox, I'd be more than happy to take.

I'm just wondering how these test cases are working for you; are they giving any interesting results?

So the Bash automated testing, BATS, which is upstream: it gives us some results, and those results are in a very binary format, either one or zero, saying this particular parameter did not work at all, or this worked. All of this is done using Fedora CI, which is fine right now, but with time, as we extend more and more features, I think we'll want to rewrite some of these test cases, refactor them, and make sure something interesting comes up. But my concern with bugs is not coming from the test cases we have today.
It comes from the concern that if something changes in, let's say, Podman, it can break us, which effectively means that before changesets are filed, Rishi and I probably have to look through the entire changeset, see what new is coming, and judge whether it has a possibility of breaking Toolbox in some fashion, whether there can be a regression. We don't have a blocking policy for that currently, but very soon, given that Workstation will be blocking on Toolbox, we'll have to look well in advance at bugs that might come from a changeset that hasn't yet been deployed to Fedora, but that, when it lands, will break Toolbox for sure.

A thing with Toolbox is that sometimes very long-lived toolboxes can have problems, and I think this is quite difficult to detect in CI, so I'm wondering if it would be useful to have some kind of archive of older toolboxes to test on. It's a slightly tricky problem.

I couldn't get your question.

The problem is that sometimes, over time, toolboxes stop working because of changes, and you can't detect that in CI easily, so I'm suggesting having an archive of all the older images and containers to test these things.

Yes, that's actually a very nice observation. In the one CI that we run, we explicitly pin to Fedora 34: we test Fedora 34 and see if 34 works, and then on 34 we test, let's say, the last N+1, so the recent F37s and F38s and CentOS 9s and RHEL 9s. That way we have one point of reference for understanding: if Toolbox works with F34, then everything else works. If that test case fails, that's usually a direct blocker for us. That's one thing that's done by the CI right now. Let me show you something.
See that line? That's the one that got added very recently, and that's how we make sure. There's an older version of RHEL as well, just to ensure that things are not broken: we still have the RHEL 9.1 base image, UBI 9.1, and we still maintain, or rather test, it in the CI.

Okay, my question is: why Fedora 34? Because it's no longer supported; it's end of life already.

Yes; 34 is not actually static, we can make it anything, but we want something that's outside the supported release tree, so that we know this used to work: if something has changed between that and the latest image, then we know exactly what changed. Remember that a toolbox only comes with a very specific set of packages that are one-to-one compatible with the operating system that you use. It probably won't have binaries like uptime in it, but it will definitely have things like DNF, matching whatever the host uses: if that's DNF 3 you'll find DNF 3, DNF 4 then DNF 4, DNF 5 then DNF 5, one-to-one compatible. But we wanted to keep it at 34, outside the currently supported series, to ensure we have that reference point. Plus, this was done when we did not have Rawhide testing, so this still doesn't cover how we'd test Rawhide stuff; the moment the Rawhide work lands and we actually have something showing up on Quay, maybe we can add that as an image and see if it runs on Rawhide, pulling the base as Rawhide and then trying to deploy whatever we want. Yes, thank you. So that's mostly what it is; if you have any questions, reach out to me. Thanks for attending my talk.

I'm one of the managers with the CoreOS team, and this is a workshop called Hands-on with CoreOS Assembler. I'll start with a very short presentation about what Fedora CoreOS is, give a bit more context about what CoreOS Assembler is and what we use it for, and then we can step into the lab. We have a lab
and a set of instructions that you can run to get a better idea, use CoreOS Assembler by yourself, and start to build Fedora CoreOS. So, quickly: Fedora CoreOS is a Fedora edition since Fedora 37. It's focused on running container workloads, either on single nodes or in clusters. In clusters it's used with OKD, which is the open source project for OpenShift, but also with other flavors of Kubernetes. It was the result of the merge between Atomic Host and Container Linux from the CoreOS company, and it's really a merger of different technologies from those two projects. In the philosophy there are three main points. Automatic updates: we really want users to consume the latest security updates and the latest versions of their software, and we want them to not have to care about it; it's provided automatically for them by the operating system. Automated provisioning: we want it to be very easy to provision one or a thousand nodes; they should all start the same and be the same. And immutable infrastructure: we want a solid operating system that you can control, where you provide updates that don't break, or that you can roll back from; a more robust update model. To do that, the release process of Fedora CoreOS is a bit different from other Fedora variants. We have three streams, next, testing, and stable, and the idea is that while you run your workloads on the stable stream, you also test in advance the updates that are coming to Fedora CoreOS. We encourage our users to also run testing and next, so they can provide feedback early, which gives a better chance of a very stable stable stream. All right, just a few words about CoreOS Assembler. CoreOS Assembler was built specifically to build Fedora CoreOS and also RHEL CoreOS, and it's pretty much a Swiss army knife for building a distribution: it contains all
the tools and all the applications needed to build a CoreOS system. There are a few key design decisions that were taken when the tool was developed. First, it needed to run on your laptop: you can just use the tool and you don't need anything else. For that, it runs in a container, and it runs on anyone's laptop; you don't need any specific setup. Second, it's the same tool that you use for the development process, to develop, build, and test new features on your laptop, and it's the same thing that actually runs in the build pipeline to produce the production builds. And as you will hopefully see in the workshop, this is a great way to iterate quickly: because it's so easy to use the tool to build Fedora CoreOS on your laptop, you can quickly make changes and investigate problems; it's a very iterative way of developing. Any questions on that before we start the lab? I just gave a five-minute overview, but if there are more questions, I'm happy to answer. Okay. So for the lab we prepared some lab machines in the cloud that you can access. I'll give you the details, if I can find where my cursor is. Okay, so I'm going to give you a number that you can use here: lab user and the X, where you just replace the X with the number I give you; we'll start with numbers one, two, three, and four. To connect, you can just run on the command line: ssh, lab user plus your number, at the IP address, and the password to log in on the machine is "Flock23 Kozalab". If you have any trouble logging in, just raise your hand and I'll come and see if I can help. Once you're logged in, you should be able to follow; we have the tutorial steps at this address, github.com, slash Kozalab, slash Kozalab tutorials, and then you can just look for the CoreOS Assembler one. If you find any
issue, or with all the typing, maybe next year we'll do a QR code or something to make it easier. Hopefully the tutorial should be relatively self-service; you should be able to go on your own, but if there's any question or anything that is not clear or not working, don't hesitate, and I'll come to you.

That's a good question; I don't think it matters too much for the build. Sorry, say again? It doesn't matter, so you can just copy the function directly. It matters for some specific tests, but for what the lab does it shouldn't really be significant. You can use user 5.

Okay, one thing that will matter with multiple users (when we tried this, we had only one user in the lab): when you copy-paste that COSA function to use later, you need to put your user number after the COSA name, so cosa1, cosa2, cosa3, because it's running containers in Podman and you cannot have the same name for multiple containers. You should be okay; if you're already done, let me know if you get a weird error. The first pull, the first time you run COSA, can take a bit of time, because it's pulling the container and the container is quite big. The machines are on AWS, so usually it's fast, but it can vary.

Has anyone got a build already? Did you manage to get a Fedora CoreOS build, after cosa fetch and cosa build? Not yet? Okay. No, you can't really do that; you need to go inside COSA, because the COSA container image has all the tools installed to run the virtual machines, not your lab machine. If you look, there's the cosa run dev-shell console; it's a bit of an inception, but it works, because pretty much all the virtualization is in the container; then you
will have all the libvirt commands in there. That was really one of the design decisions of COSA: you have everything inside the same container image, so it does a lot of the magic for you. So yeah, you have to exit first.

You can open the tickets, so it would be 1.3 there, and then you would use that one; give it a try while I get through customer service. Technical support: have you tried rebooting? Did you plug and unplug? What's your user number? Lab user number 2? You have the same error? Also, which user are you? This is the first time we do this workshop, so you're guinea pigs; thank you for your contribution. setenforce 0, if you want to keep going, and then it's the same. And you see it's all owned by user 90; all the files are owned by the lab user, which is the user we used to test. It worked before, but now that we do it with multiple users... one minute. It's like a queue at a service desk: you have to take a ticket, and one of the team members will look at it. It's just the first bash command; the lab machine is set up so everything should work.

Okay, so we should have a fix; it's iterative development. If you refresh the tutorial, the Markdown file, then in the first bash function where you define the COSA alias, just remove the /var volume mount; you can just re-copy-paste the alias. You have one minute left, but we'll leave the lab machines on for today anyway, so we can continue the actual tutorial. I'll put it in the session in the schedule, and if you ever have any questions or anything related to this, there's the Fedora CoreOS channel on Matrix and IRC; you can just pop in and say: I tried to run this and it
doesn't work, help! We'd be happy to help. What time is the bus? Half past? Yeah, half past. Ah, yeah. Thanks, Anka. Thanks, everyone.
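For reference, the lab's moving parts can be sketched in one place: the per-user cosa wrapper (so two attendees' containers don't collide in Podman) and the basic build loop. The image path and flags follow the usual containerized coreos-assembler pattern, but treat them as an approximation of the lab's alias rather than a verbatim copy; `cosa init`, `fetch`, `build`, and `run` are real COSA subcommands:

```shell
# Per-user cosa wrapper: the container name carries the lab-user
# number (cosa1, cosa2, ...) so names don't collide. For safety in
# this sketch it only prints the podman command it would run.
cosa_cmd() {
  local n="$1"; shift
  echo podman run --rm -ti --name "cosa${n}" \
    -v "${PWD}:/srv" --workdir /srv \
    quay.io/coreos-assembler/coreos-assembler "$@"
}

# The basic build loop from the tutorial, wrapped in a function so
# nothing runs here by itself; inside the lab you run these for real.
build_fcos() {
  cosa init https://github.com/coreos/fedora-coreos-config  # build configs
  cosa fetch   # download the RPMs the build needs
  cosa build   # produce a new build
  cosa run     # boot the result in a local VM
}

cosa_cmd 2 build   # prints the podman command lab user 2 would run
```

Removing `--rm`'s neighbors or the volume mount, as came up during the lab, only changes this one wrapper; the build loop stays the same.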