Hello everyone, I'm Renata Ravanelli, I work for Red Hat, and Saki will also be talking today. Hey folks, I'm an intern on the CoreOS team, and we can just hop right in. OK, we're here today to talk about how we build Fedora CoreOS, so let's get started. Today we're going to talk a little bit about the history of Fedora CoreOS, what Fedora CoreOS is, some differences between Red Hat CoreOS and Fedora CoreOS, how we have multiple streams with regular releases, and why you should learn how we build Fedora CoreOS: our build process, the components of the configs, CoreOS Assembler, how we do overrides and add new packages, a little bit about our testing and how we deliver the OS. Then the demos, a couple of challenges for you, and how to get involved with the community. So Fedora CoreOS basically came from the merging of two communities: CoreOS Container Linux and Project Atomic Host. It incorporates the philosophy of CoreOS Container Linux: the automatic updates, the provisioning stack, the immutable infrastructure, and the cloud expertise. And from Project Atomic Host it incorporates the Fedora foundations, the basic OS structure such as packages and the kernel, and also the update stack and SELinux enhanced security. But what is Fedora CoreOS? We usually say that Fedora CoreOS is an automatically updating, minimal operating system for running containerized workloads securely and at scale, and it's currently available on multiple platforms, with more coming soon. Here I'm going to talk a little bit about some differences people usually ask us about between Fedora CoreOS and Red Hat CoreOS. You could say there are two major differences. Red Hat CoreOS is an OpenShift component: it's not an OS that you can use standalone, it must be used with OpenShift itself. And the second big difference is the way it updates itself, and its configuration as well.
So as it is part of an OpenShift cluster, all the configuration and updates are controlled by the cluster operators. Fedora CoreOS, on the other hand, can be used as a standalone operating system, and it also provides automatic updates in a reliable way; we will see soon how that happens. It can also be part of an OKD cluster, but it's not required to be. Both work with the rpm-ostree technology, as well as provisioning via Ignition. So we are able to provide three streams for Fedora CoreOS, and we do releases every two weeks. Of those three, we will start with next. That's basically where development happens; it focuses on experimental features and on managing Fedora major rebases. For example, when we move from Fedora 35 to Fedora 36, it lands first in next, and then after two weeks it is promoted to testing. So we have a couple of weeks to test and validate that everything is OK, and after that we also have time to fix issues. Then it is promoted to stable. With this cadence we make sure that once a release reaches stable, it should be a reliable operating system, because we have had time to test and validate everything as it moves through those three streams. And we support a couple of architectures across all three streams, on many platforms. We have, for example, GCP, Azure, AWS, basically all the major cloud providers, and you can also use it on bare metal and other virtualization platforms. So I will now pass over to Saki. Thank you, Renata, for that introduction. As we transition to talking about how FCOS is built, let's take a brief look at the why. So why learn how to build FCOS?
The first reason, which a lot of people are probably thinking of, is building FCOS yourself. You can swap in the kernel that you need or add a new package to the base image, for example. And just a heads up, we'll be doing demos of these later. The second reason is that you could also try building an FCOS-like or FCOS-derived OS. When I say FCOS-like, I'm specifically talking about Ignition plus rpm-ostree based systems, Ignition and rpm-ostree being important parts of Fedora CoreOS. And lastly, you can learn about the components that make up FCOS. By that I don't mean the Ignition and rpm-ostree side of things, but rather the parts of the build config: what goes into the Fedora CoreOS config. And yeah, let's go to the next slide, please. Oh, I think, did we skip a slide? Yeah, we skipped that one. OK, so we'll talk a little bit about our build tooling, CoreOS Assembler, or just COSA, as we like to call it on the CoreOS team. It's the bread and butter of our build processes. It's a containerized collection of tools that are used to build FCOS-like systems. I use the phrase FCOS-like systems here on purpose: if you were to create your own FCOS-derived OS, you should be able to take full advantage of CoreOS Assembler in your build processes as well. So that's pretty cool. Another cool thing about COSA is that it serves both local development use cases and production-level build systems. The tooling itself is very flexible, and since the tool is containerized, it's also easy to get it up and running on different systems. And here's a link at the bottom to where you can find the build images: quay.io/coreos-assembler/coreos-assembler. Relatively easy to remember. Next slide, please. OK, so we've had an introduction to COSA, and now we can look at how COSA fits into our build process. We have this nice diagram here from the coreos-assembler documentation.
Definitely do go check out the docs if you're interested in this stuff. We'll start with the coreos-assembler repo at the bottom left, one of those blue rectangles. As a simplification, we can think of this repo as a build script and a Dockerfile. We use quay.io to create regular container builds, so we have them available whenever we need them. So now that we've, let's say, pulled a container image from quay.io and used that image to set up a COSA container, what's next? Well, if we go back to that simplification from before, we have the build script ready. What we need now is the build configuration, and that's exactly what fedora-coreos-config is. It's the build configuration. It tells COSA what RPMs should be present in the build, which systemd units need to be launched as part of the first boot process, and so on. I'll talk more about the config in the coming slides, but for now we'll leave it there. Using this config, COSA knows how to set up the base OS. Lastly, we'll look at that purple rectangle on the right: the outputs of our build process. There are two main outputs, disk images and OSTree commits. OSTree commits contain all the information about the file system, so we can generate the root file system from an OSTree commit. Functionally, the disk images are your typical images that you would use to provision your systems. But if we take a deeper look, the disk images are just wrappers around the OSTree commit. This means that if you have an OSTree commit, you can generate various disk images from that commit, which avoids having to redo a lot of the work involved in a fresh build. Next slide, please. So in the explanation of the build process, I mentioned fedora-coreos-config. This is an example of a build config that COSA accepts, and COSA expects all configs to have certain main components. The FCOS config also abides by these components.
So there are three main components. There's the manifest.yaml file, which tells COSA, and by extension rpm-ostree (since COSA uses rpm-ostree), how to generate those OSTree commits. There's the overlay.d directory, which contains additional content that we want to add to our OSTree commits. And lastly, we have the image.yaml file, which contains the final configuration of the disk images. So if you wanted to make an FCOS-derived OS, you would need to declare your new config with these three components in mind, and that would allow you to make use of CoreOS Assembler as part of your build tooling. Next slide, please. So we looked at an overview of the FCOS config, and now we'll talk a little bit about those components in detail. First up is the manifest.yaml file. This is the file responsible for generating OSTree commits. We can use it to define a set of packages and the corresponding RPM repositories they come from. Another cool feature of this file is the postprocess key, which takes a list of strings representing inline scripts. These scripts are processed by rpm-ostree to make arbitrary changes to the root file system. There are actually a bunch of other keys as well that you can use to customize your OSTree commit. In an rpm-ostree context, this file is referred to as a treefile, so if you would like to learn more about it and about the features I didn't mention, check out the rpm-ostree documentation; specifically, look for the treefile spec. Next slide, please. OK, and then the second component was the overlay.d directory, which provides a convenient method of overlaying additional content onto OSTree commits. So we have the overlay.d directory, with subdirectories within it, and the paths within those subdirectories are added to the OSTree commit.
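To make the manifest.yaml idea concrete, here is a minimal sketch of a treefile-style manifest. The ref, repo names, and package list are illustrative placeholders, not copied from the real fedora-coreos-config; see the rpm-ostree treefile spec for the full set of keys.

```shell
# Write a minimal, illustrative manifest.yaml (hypothetical names;
# the real fedora-coreos-config is much larger and split across files).
cat > manifest.yaml <<'EOF'
# rpm-ostree treefile: tells COSA/rpm-ostree how to compose the OSTree commit
ref: example/x86_64/myos/testing-devel
repos:
  - fedora
  - fedora-updates
packages:
  - kernel
  - systemd
  - rpm-ostree
  - ignition
postprocess:
  # inline scripts run by rpm-ostree against the target root
  - |
    #!/usr/bin/env bash
    echo "composed by COSA" > /usr/share/build-note.txt
EOF
echo "wrote manifest.yaml"
```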
So in the example on the slide, we have disabled SSH passwords by inserting a file under /etc/ssh; I won't say the whole path there, but you get the idea. And this file has been inserted at the same path in the OSTree commit. One important thing to note here is that this isn't some sort of irreversible change: if we wanted to, we could use our provisioning tool, Ignition, and write a config to turn SSH passwords back on. However, this sets a default, meaning it's what you'd see with an empty Ignition config. That allows us to opinionate our OS, and having mechanisms in place to do that is important when we're creating such an opinionated OS. Next slide, please. So lastly, we have the image.yaml file, and this is what deals with the other set of build artifacts, the disk images. We can set disk image configuration through this file. Some of the changes here, like in the other files, can also be made via Ignition, like setting kernel arguments. But similar to the other files in the config, this provides a default that allows us to opinionate the OS. Next slide, please. We also have some mechanisms in place that help speed up our development cycle, and I'm going to quote the rationale for overrides straight from our documentation: development speed is closely tied to the edit, compile, and debug cycle. So overrides speed up development by allowing us to easily test out local changes. We have two ways of running overrides: you can either pop an RPM into the overrides/rpm folder, or you can directly run make install on a project and install it into the overrides/rootfs directory. So if you find packaging RPMs to be a hassle, the rootfs option comes in handy. Otherwise, I've personally found the RPM approach more robust, since you don't have to worry about dependencies and that sort of thing. There are also lock files, like manifest-lock.overrides.yaml, that come in handy.
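Going back to the overlay example for a moment, the layout looks roughly like this. The subdirectory name and the sshd config file name are hypothetical; check the real overlay.d in fedora-coreos-config for the exact paths.

```shell
# Hypothetical overlay: anything under overlay.d/<NN><name>/ is copied
# into the OSTree commit at the same relative path.
mkdir -p overlay.d/05core/etc/ssh/sshd_config.d
cat > overlay.d/05core/etc/ssh/sshd_config.d/10-disable-passwords.conf <<'EOF'
# Distro default; a user's Ignition config can turn this back on.
PasswordAuthentication no
EOF
find overlay.d -type f
```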
They allow us to pin older versions of packages, and there are mechanisms for adding new packages as well, which we'll see in the demos. But I won't go into too much detail about the lock files, since Renata will talk about them in a bit. Next slide, please. OK, so we've almost completed the full lap here. We talked about building Fedora CoreOS, but what about testing? That's important too, so let's take a moment to visit that aspect of the build process as well. We have multiple ways to write our tests. There are tests that are compiled into kola, which is our test framework, included under the umbrella of coreos-assembler. And then there are external tests, which are also run by kola, but written in bash, and they live alongside the config. The tests that are written in Go and compiled into kola support more complex operations, like interactions between two Fedora CoreOS instances. But the disadvantage there is that those tests live in kola, and by extension coreos-assembler. That's inconvenient if you're working on a downstream project and want to add tests: you would have to go to coreos-assembler to add a test that's compiled into kola. The external tests are simpler, they can be easily written in bash, and they have a more favorable model for adding tests to downstream projects. So if you were to make your own custom config for your FCOS-based OS, you could put the tests alongside your own config instead of having to worry about adding them to coreos-assembler. Next slide, please. So in the last slide I mentioned kola; I'll explain a little more about it in this slide. Kola is the testing framework that we use to test Fedora CoreOS. Like COSA in general, it supports local testing with QEMU, but it also has options for testing on different cloud providers.
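As a sketch of what an external test could look like: it's a plain bash script dropped under tests/kola/ in the config repo, which kola runs inside a booted VM, with a non-zero exit meaning failure. The metadata comment format and the specific check below are illustrative assumptions; see the kola external tests documentation for the exact conventions.

```shell
# Create a minimal external-test sketch (we only write it here; kola
# would copy it into the VM and execute it there).
mkdir -p tests/kola
cat > tests/kola/basic-os-release <<'EOF'
#!/bin/bash
# kola: { "exclusive": false }
# External test sketch: fail if the OS identity file is missing.
set -euo pipefail
test -e /etc/os-release
echo "ok"
EOF
chmod +x tests/kola/basic-os-release
ls -l tests/kola/
```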
And it's a very common theme here that the local hacking use cases are supported alongside the production-level systems. So kola has some nifty features, as shown here. We support reboots, so your VM can reboot during a test and we pick up from where we left off. Reruns and timeouts come in handy when we run into infrastructure flakes. Normally each test runs in its own VM, which helps us avoid conflicts, but there are ways of putting multiple tests in one VM to save resources. And lastly, it's easy to use: it's included in the coreos-assembler suite of tools, so you don't have to worry about setting up yet another tool. Next slide, please. OK, so I've shed some light on the local building aspect of Fedora CoreOS, but we really haven't talked about how we deliver FCOS and get it into the hands of users. For that, I will be passing it on to Renata. Thanks, Saki. So as I mentioned before, we have three different streams and three different architectures for every release. How is that even possible? Because you need to test everything that you make available, and that's where the investment we make in our CI comes in. As Saki explained, we have a really powerful tool inside coreos-assembler, which is kola. Kola gives us the ability to launch FCOS instances on the cloud providers we provide images for. We can go there, test and validate those images, run a couple of other tests, and make sure that the update stack is working, the kernel is OK, and everything else we need to validate runs on those providers. And every time we find an issue, we also try to write more tests for it, to make sure that if the bug shows up again, the tests will catch it.
That makes our distribution more reliable over time, because every time we see a failure or a bug, we try to capture it in a kola test and get that upstream. And as I told you, COSA is a really powerful tool that gives us the ability to build everything we need and also test it locally, with the exact same environment for production and for local testing. So if a developer wants to run kola themselves, it's the same kola that we run to release all those streams. It's a really powerful tool, and it also gives us the ability to launch tests on the major cloud providers. That's the trick that lets us deliver all those streams every two weeks for those three architectures. Now, we'll talk a little bit about lock files. The way we use lock files is kind of unique. We have a Jenkins job that creates a lockfile bump for us with every single version of the packages we ship. And it's kind of nice, because if something changes in Fedora, we have the ability to see the difference between the versions that changed. We have had cases where a package changed in the Fedora repositories and broke something; it could be a test, it could be the build. With this diff of the packages that changed, you're able to track things down and identify, for example: I need to look at those three packages that changed, because maybe one of them broke the thing I'm working on. The lock files also give us the ability to do overrides. Say some package at some version has broken my tests. I can pin an older version until that package gets a bug fix in the Fedora repositories. So I can wait for the fix to land, and it won't affect Fedora CoreOS, because I can go back to the older version that still works. I won't introduce any issues if I lock to the old version that's still working.
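As a sketch, a pin in the overrides lock file looks roughly like this. The package version and the tracker URL are made up for illustration; check the lock files in fedora-coreos-config for the exact schema.

```shell
# Illustrative overrides lock file: pins a package to an older version
# until a fix lands in the Fedora repositories.
cat > manifest-lock.overrides.yaml <<'EOF'
packages:
  kernel:
    evr: 5.17.11-300.fc36            # hypothetical pinned version
    metadata:
      type: pin
      reason: https://github.com/coreos/fedora-coreos-tracker/issues/0000
EOF
echo "wrote overrides lock"
```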
Lock files also give us the ability to add packages that are not in the Fedora repositories yet, so you can ship new packages while waiting for them to reach the Fedora repositories. In short, they give us control over the packages that go into the operating system, and that's nice, because that way you can avoid issues or even prevent new ones from appearing. OK, and now I will talk a little bit about the infrastructure we have to build the different architectures we support. We basically have one Jenkins instance, instead of having one instance per platform. This Jenkins instance is x86_64, and it starts the build process. Once each stage passes, it automatically triggers the multi-arch jobs for us, creating one Jenkins job for each supported architecture, and everything runs Fedora CoreOS in the back end as well. So this Jenkins instance triggers the first job, and the other builders are reached via podman remote to the server itself, which could be a VM, bare metal, or even a server in a cloud provider. What podman essentially does in the back end is use SSH: it accesses the server, runs the exact same process we did for x86_64, and brings back the results for those architectures to the instance that called it. This way we minimize the time spent, because we can run all those builds in parallel once the multi-arch jobs are triggered. So for now, besides x86_64, we have aarch64 and s390x, and we're hoping to add ppc64le in the future. OK, now a small note: since the build process and the fetch process take a long time, and we don't want to waste anybody's time, we pre-recorded those parts, so everyone can at least follow what we have said. So let me just start it.
So I'm in an OpenShift cluster in this case, to make things faster, and I'm in a pod that is using coreos-assembler. CoreOS Assembler right now is based on Fedora 36, I'm just showing that, and it has all the tooling necessary for building. I created a directory here for the demonstration. When you use CoreOS Assembler, you first need to run cosa init and pass the repo and the branch that you want to work with; in this case I will work with the testing-devel branch. cosa init creates a tree of directories for us that will contain the builds: the overrides directory, which gives you the ability to override things by putting in a rootfs as well as RPMs, and the src directory, which basically has the checkout of the config repo with the configuration that will be used for the CoreOS build. Then I run cosa fetch, which essentially downloads the packages from the manifest and everything else that is needed for the build. After this command, you can build the OS itself by running cosa build. It generates the OSTree commit for us, and from the OSTree commit we generate the images. In this case, we also have a QEMU image built after the whole process; you can see it being created inside the builds directory. And you can have multiple builds in this builds directory, so if I run cosa build again with some difference between those files, it will generate another build for me, with another version as well. You can see everything that gets generated: the manifest lock, which tells us the versions of the packages inside; the commit metadata, which also has information about packages and other things; and the OSTree commit and the QEMU image. So in this case, I will do an override for the kernel packages, because many people have a case where they want to override the kernel packages and don't know how.
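The commands from this part of the demo, in order, look like the sketch below. It assumes the coreos-assembler container is available as a `cosa` command (per the docs); the guard at the end means nothing runs on a machine without it.

```shell
# The demo workflow, end to end (requires the coreos-assembler
# container wrapped as `cosa`; guarded so this is safe to paste anywhere).
fcos_build_demo() {
  mkdir -p fcos && cd fcos
  cosa init --branch testing-devel https://github.com/coreos/fedora-coreos-config
  cosa fetch   # download the RPMs named in the manifests
  cosa build   # compose the OSTree commit and the QEMU image
  ls builds    # each build gets its own versioned directory
}
if command -v cosa >/dev/null 2>&1; then fcos_build_demo; fi
```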
You can do that on Fedora CoreOS itself via rpm-ostree, but then you need to access the machine and run it there, which is fine in most cases if you're just hacking around. But if you want to have an image with exactly the packages you want, you can override them in the build process. So in this case, we are using this version of the kernel, and I will put the kernel packages I need inside overrides/rpm. OK, the downside of overriding via RPMs is the dependencies: if you have a package that depends on other packages, you need to put all the dependencies inside this directory too. So if a package has a lot of dependencies, it may be better to put it in the manifest overrides, because then you don't need to take care of the dependencies yourself; you can add those packages in the fedora-coreos-base.yaml that Saki will show after. But for hacking around and doing some small changes, this is a nice way to work while you're testing. So I will run cosa fetch and cosa build again, so you can see the result of this override. OK, and as you can see, we also get the diff of what was upgraded: you can see here that it was upgraded from one kernel version to the other. That's something that is done with the help of the lock files, which always show you the difference between the previous build and the new build. And you can see that it created a new version for us. In the commit metadata, you can also check the package versions that are inside this image. Another thing that is cool about COSA is that it also gives you the ability to start VMs via QEMU, so you don't need to care about the commands and everything else that QEMU requires for you to start a VM; you just need to run cosa run. In this case, for example, I'm passing the -c parameter, which stands for console, so you can see the serial console in this case.
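Schematically, the kernel override from the demo is just: drop the RPMs (and their tightly coupled friends like kernel-core and kernel-modules) into overrides/rpm and rebuild. The file names below are placeholders standing in for real RPMs you would download, for example from Koji.

```shell
# Placeholder RPMs standing in for a real downloaded kernel set.
mkdir -p overrides/rpm
touch overrides/rpm/kernel-5.17.11-300.fc36.x86_64.rpm \
      overrides/rpm/kernel-core-5.17.11-300.fc36.x86_64.rpm \
      overrides/rpm/kernel-modules-5.17.11-300.fc36.x86_64.rpm
ls overrides/rpm
# Then rebuild so the override takes effect:
#   cosa fetch && cosa build
```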
And this is just a demo here for you to see that I can access this image after it is built. I don't need to tell cosa run where the build is located; it knows by default. So for example, I'm just double-checking here the version that I did the override for, for the kernel, and the OS version. OK, another way to override packages is via the manifest lock. It also gives you the ability to lock a version that you may want to keep, for example while waiting for a bug to be fixed in a newer version, and so on. So I will add here, for example, the kernel packages. You just need to describe the packages that you want, and it's usual to provide some kind of information about why each package is locked, so people can understand. Then it's the same process as before: you need to run cosa build again. And one other thing: you always need to run the cosa commands in the main directory, because it needs the tree to understand what it's doing. And I just opened the commit metadata here to make sure that the version I added for the kernel is the same one I put in the lock file. It's the same, so it worked. Another thing you can do for testing is pass your own Ignition config. In this example, I will pass my own Ignition config. So if you want to pass some configuration via Ignition, you can also do that manually when running cosa. My Ignition config in this example is very simple: I just add an SSH key, just to show you how that works. So Ignition will inject my key for me, and it matches what I passed before in the Ignition configuration. That's it for this demo, and Saki will show how to add new packages in the next demo. You're muted, if you want to say something. Sorry, I didn't realize I was muted. Right. So here I've printed a snippet from my bashrc file to show the cosa wrapper that I'm using.
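For reference, a simplified version of that wrapper looks like this. It is trimmed from the shape shown in the coreos-assembler docs (the full version adds image-freshness checks and extra optional mounts), so treat it as a sketch rather than the canonical copy.

```shell
# Simplified `cosa` wrapper: runs the containerized tooling against the
# current directory. See the coreos-assembler docs for the full version.
cosa() {
  podman run --rm -ti --security-opt label=disable --privileged \
    --uidmap=1000:0:1 --uidmap=0:1:1000 --uidmap=1001:1001:64536 \
    -v "${PWD}":/srv/ --device /dev/kvm --device /dev/fuse \
    --tmpfs /tmp -v /var/tmp:/var/tmp \
    quay.io/coreos-assembler/coreos-assembler:latest "$@"
}
type cosa >/dev/null && echo "cosa wrapper defined"
```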
It looks intimidating, but it's something you can find in the coreos-assembler documentation; it's a standard copy-paste from there. And yeah, we can continue. Oh, I think it's already going. Right. OK, I gave myself some time to talk. All right, so now we want to initialize our build directory, so we're running cosa init here. I'm passing the branch flag to use the testing-devel branch. testing-devel is the default branch, so I don't really need to specify it; I'm just doing it here to show off the branch flag. This will take a second to complete. Most of the workflow, if you notice, is the same; we're just changing around a few things. So now that we have our config ready, we're going to go and edit one of the manifest files, fedora-coreos-base.yaml. In fedora-coreos-config we have multiple manifest files, because they're chained together with include tags. This just helps in our workflow, but in a regular config you don't have to have multiple files. We're going to find the packages key here, and we're going to go in and add the calc package; this is just something we decided to use for the sake of the demo. Now that we've added that, we can go ahead and run cosa fetch and cosa build. Renata did a good job of explaining what those are, so I won't go into detail here: we're fetching the packages, and then we're using that information to make our new build. I've sped this up a lot, so it shouldn't take too long; maybe 10 more seconds, I think. OK, so I've got that going. Now that the build is complete, I'm just going to run cosa run. Unlike Renata, I haven't passed the -c flag, so I'm just running sort of the lightweight version of cosa run, and I'm going to bring up a VM that's running the latest build we just made.
Otherwise, if you don't want to run the latest build, you can specify which build you want to run through a build flag. This should take like 30 seconds, I think. OK, so now we're in our VM, and what we'll do is just test whether the calc package is there. So I'll just run it with the help option, and yeah, it's there. That's what we wanted to see; things are working as they should. So we were successful in adding a package to the base image. But we really haven't tested whether this somehow indirectly broke something else. What we could do is also go ahead and run our test suite, and doing that locally is as simple as typing cosa kola run. And yeah, that launches our test suite; it automatically picks up the latest build and launches it. We don't have to wait for the suite to finish. I think I accidentally left in a little bit at the end of the demo. Yeah, that's it for the demo; I think we can move on. OK, and if someone would like to get involved, I think the first thing would be to try out Fedora CoreOS and join the community. We are always available on Libera.Chat, so if you hit some problem when you try to build Fedora CoreOS, you can reach us, and someone inside the channel will be there to help you, or at least point you to someone or some place where you can find answers. And I think now we are going to the questions. Does anybody have questions? Let's see, there's a Q&A question coming in. So there's a question about snippets. Dusty has addressed the question, and I was just going to say the same thing: you don't have to write an Ignition config directly. You can use our transpiler, Butane, which I think is a bit more readable, and there are examples in the documentation for that. And Michael has also pointed out that we have good tutorials as well. So, FCOS-derived operating systems: let me pull up an example here.
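To illustrate the Butane point from a moment ago: you write readable YAML like the sketch below (the SSH key is a placeholder) and transpile it to Ignition JSON, rather than hand-writing the JSON.

```shell
# A minimal Butane config for FCOS: add an SSH key for the core user.
cat > config.bu <<'EOF'
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3...placeholder-key
EOF
# If butane is installed, transpile it to Ignition JSON:
#   butane --pretty --strict config.bu > config.ign
# and then pass config.ign to cosa run for a local test boot.
grep -c "ssh-ed25519" config.bu
```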
One example that comes to mind is RHCOS, and that's the config that we have. Maybe it's not the ideal example, in the sense that it uses a different package set (it uses RHEL), but it's set up in the same way that we set up Fedora CoreOS. So you can check out the config for that if you would like an example of how you might set up your own configuration that extends the Fedora CoreOS config. Any other questions? I can look at this next question two different ways, and I'm not sure exactly which they wanted. One is: can I specify an OSTree repository to pull from to do a build? And the other is: can I, in the image that I'm building, bake in a reference to a separate OSTree repository to then do updates from? For the build itself, the input to the build is not an OSTree repository; it's RPMs plus config, essentially. And then the output is an OSTree commit that we then push to an OSTree repository. So if that's the question, I think the answer is no, because we don't use an OSTree repository as input to this process. Although there's some nuance there, because we actually do have a local OSTree repository that we use for previous commit history information: when a new commit is built, that commit knows about its parent, so there's a history. As far as baking in your own OSTree repository remote, you can do that. You could just do a cosa build with your remote in /etc/ostree/remotes.d (I think that's the directory). So if you want to build your derivative and point your derivative users to your own OSTree repository, you can do that. Thanks everyone for coming. Thanks everyone for joining.