I'll mention some community and release management stuff in the beginning, and then talk a bit about various features. I tried to build some coherent story for the middle part, but actually it's, you know, a fully connected graph where everything matters for something else.

We keep producing releases. It's not a steady pace: we try to make releases every two or three months, and we fail at this every once in a while. When I was preparing those slides we were still planning to make a release in June or July, but we still hadn't produced our RC1 for version 254, and we already had about 2500 patches. Usually we merge another one to two thousand between RC1 and the final release, so it will be a huge release; I still hope for an RC1 in early July.

The number of contributors varies, but it's not going down visibly. An interesting development is that we have more and more stable releases. So we release, for example, 253, and then backport some patches to 253.1, 253.2; we are currently at 253.5, and at, I think, 251.16 or something like that on the older branch. It's usually hundreds of patches backported to those stable releases, and more and more distros are using those point releases to build their packages, so there's quite a bit of demand for them. If we keep up the current pace, this year we will have 44 point releases across the various branches.

If you have a patch that you think should land in a stable release, you can either mark it as needs-stable-backport in the upstream GitHub repository, or you can file a pull request against the systemd-stable repo with the backport. Just make sure to use git cherry-pick -x, to record the hash of the original patch. Sometimes, if you cannot do the other things, it's enough to leave a comment on a pull request.

We have about 2000 open issues, and the number is growing, but in a manageable way.
I think we actually close about 80 percent of the issues that are opened, so this growth is just the diff. There's a split, roughly half and half, between bugs and RFEs. We would like to make this number go down, but it's pretty tough; at least it's not bad news.

In Fedora we made an effort over the last year (those growth numbers cover a period of approximately the last year) to reduce the number of open bugs. In particular David Targon and I have been working on going through the bugs and closing some stuff, and I think a reduction of 50 percent is pretty nice; we don't see that often.

And now let me talk about features: things being removed first, and things being added later. This has been going on for the last 10 years approximately. There are two related ideas. You have something called unmerged-usr (the naming is terrible), where you have separate directories /bin and /usr/bin, /lib and /usr/lib, and so on for every other possible directory, and you look in both places. And if you have that, you have the other thing, called split-usr, where you delay the mounting of the second half until late in boot. Most distributions have stopped doing that; Fedora did it maybe 12 years ago, other distributions maybe a bit later. The point is that we have been maintaining support for this split, the unmerged-usr thing, in systemd code all this time. And if you don't have split-usr, then unmerged-usr doesn't make any sense, because you have parallel directories but you always use both.
So both things are going away. A patch is ready in systemd that removes all this code; we will probably merge it a bit later, after 254 has been released, just to make the release of 254 faster.

Another thing that is going away is support for cgroups v1. This is a more complex problem. The way systemd works is that you specify settings in the language of cgroup v2, and systemd will translate those settings, as much as possible, to v1 if you're using v1. But this translation is not always possible in any meaningful way. And v1 is not hierarchical, so you cannot do unprivileged delegation: if you delegate stuff on v1, the delegatee can take more than the parent has, so it's not very useful. And some features are simply missing on v1. So we want to get rid of this and simplify our code base quite a bit, sometime next year.

And this is a bit of a related note: if you are using Fedora, there was a time when this warning popped up in various places. systemd tools are unhappy when they are called on a system without /proc mounted. The code continues, but it doesn't like the situation. Why are we making this warning?
We need access to the /proc/self/fd/ symlinks to work with file descriptors, and without /proc we don't have this kernel API and we cannot do various things with file descriptors that we would like to do.

Okay, and now about some positive stuff. We had this issue that users were reporting: when they use user units, they specify some settings, and those settings have no effect. This is because systemd in general works this way: if you have settings for system units, and those settings cannot be applied, because the system doesn't have the right architecture, or something was compiled without the right capabilities, the settings are ignored. You can, for example, specify both an SELinux and an AppArmor policy for a unit, and the one that can be applied on a given system, or maybe neither of them, would be applied; the other one would just be ignored. This meant that you could specify some settings, like, I don't know, ProtectHome= for a system unit, and it would work; you would specify it for a user unit, and it would be silently ignored, because the user manager does not have enough privileges to apply the setting. It could have the privileges if it used a user namespace for the unit, but this wasn't on by default, and you had to enable it explicitly. So this is something of a compat break: we are enabling it because there were many reports of people being negatively surprised, and we'll just enable PrivateUsers= in many more cases now. So there will be more sandboxing for user units, and in general the number of various sandboxing options is growing all the time; it would take a separate talk to cover them.
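As a sketch of what this looks like in practice (the unit name and binary here are hypothetical), a user unit can combine sandboxing options like these, with the user manager setting up a user namespace where needed:

```ini
# ~/.config/systemd/user/example.service (hypothetical user unit)
[Service]
ExecStart=/usr/bin/example-daemon
# Needs a user namespace when applied by an unprivileged user manager
ProtectHome=read-only
# Previously had to be requested explicitly for user units
PrivateUsers=yes
PrivateTmp=yes
NoNewPrivileges=yes
```

The directives themselves are standard systemd.exec(5) settings; the point of the change is that options like ProtectHome= no longer get silently ignored in user units.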
There is a very nice tool called systemd-analyze. Actually, there are two relevant commands: systemd-analyze security, which will give you hints about security settings, and systemd-analyze verify, which will check a unit file for correctness and give warnings if something is wrong. They're both useful for hints and for checking.

A small thing, but useful, is the OpenFile= setting. Quite often units would use a shell just to redirect some file descriptor to or from some file. This can now be done natively by the manager. The advantages are, of course, that things are simplified, and also that the unit can sometimes run with fewer privileges, because it can get access to a file as a file descriptor without being able to open the file otherwise.

We have a new unit type. Type=notify is an old thing: when units are started, systemd likes to make this operation synchronous, so it wants to know when the unit is actually ready, and with Type=notify the unit runs and sends a notification message using the sd_notify protocol when it's ready. For reloads, the easiest option is just to let systemd send a signal to the unit, but this is asynchronous: a signal, by definition, comes without any communication in the other direction. With the new Type=notify-reload, the unit is supposed to send an sd_notify notification after the reload has finished, so we get a convenient implementation of reloads with the same synchronicity as readiness notification.

Another feature, which gets a separate slide because I think it's pretty nice, is soft-reboot. It's a new way to restart the machine: it brings down all the user space, unmounts things, and instead of actually rebooting it re-executes systemd and brings up the user space again.
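Going back to Type=notify and Type=notify-reload for a moment: the protocol itself is tiny, just plain-text datagrams sent to the socket named by $NOTIFY_SOCKET. Here is a minimal sketch in Python, with the manager side simulated locally so it runs anywhere; a real service would contain only the sd_notify part:

```python
import os
import socket
import tempfile
import time

def sd_notify(state: str) -> None:
    """Send one sd_notify datagram to the socket named in $NOTIFY_SOCKET."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not running under a manager that listens
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.connect(addr)
        s.send(state.encode())

# Simulate the manager end so the example is self-contained:
sock_path = os.path.join(tempfile.mkdtemp(), "notify")
manager = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
manager.bind(sock_path)
os.environ["NOTIFY_SOCKET"] = sock_path

sd_notify("READY=1")  # Type=notify: startup has finished
# Type=notify-reload: announce the reload, then signal readiness again.
sd_notify(f"RELOADING=1\nMONOTONIC_USEC={time.monotonic_ns() // 1000}")
sd_notify("READY=1")

messages = [manager.recv(4096).decode() for _ in range(3)]
print(messages[0])
```

The MONOTONIC_USEC= field is what Type=notify-reload expects alongside RELOADING=1, so the manager can tell notifications sent after the reload request from older ones.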
So, on reboots: we have a normal reboot, which goes through the bootloader; then we have kexec, which skips the bootloader and goes directly to a new kernel; and now we have soft-reboot, which skips both of those steps and goes directly to a new systemd and user space.

Another feature, and this one even gets an icon, is the io.cost controller. Actually this is mostly a kernel feature, and this is quite common in systemd: we are just providing a thin layer around the kernel. When you have a block device, you get some number saying it has some bandwidth, I don't know, so many megabytes per second read speed or write speed, and we know this is not true in general, because the actual bandwidth depends on what the device is doing and how much writing you are doing at the same time. Initially you write some blocks and it is very quick, because the data actually goes to some internal buffer, and then things slow down quite a bit; and depending on whether you are reading at the same time, the write speed will vary, and so on. So the idea is to do fairly extensive benchmarking of a specific model of drive, come up with a simplified model that describes how the drive behaves under various conditions, and then you can set a meaningful policy to divide the bandwidth between services and groups and so on. Where systemd comes in is that this is implemented in the kernel, but somebody needs to provide the policy, and systemd now has udev rules to figure out what policy to set for a drive, based on the model, firmware revision and so on. In the hardware database we are starting to grow a set of rules for different drive models.
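The kernel side of this, for reference, lives in two files on the root cgroup (cgroup v2 only); what the udev rules ultimately do is feed a per-device model and QoS policy into them:

```
# Kernel interface for the io.cost controller (root cgroup, cgroup v2):
cat /sys/fs/cgroup/io.cost.qos
cat /sys/fs/cgroup/io.cost.model
```

This is a sketch of where the knobs are, not of systemd's rule syntax; the exact parameters for a given drive come from the benchmarking described above.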
If you have a drive model that is missing, you can do the benchmarking and submit a pull request, and then the policy will be applied to all such drives wherever people are using this. This was actually contributed by Facebook folks.

And now some sub features, where sub stands for subterranean: those are large features, they're just not very visible. The first one relates to a kernel thing. When we use process numbers, PIDs, to refer to processes, it is well known that a process can die, and if we wait a bit, another process gets born and gets the same number. So PIDs are not a reliable way to refer to processes. The kernel has gained APIs to refer to processes using file descriptors, pidfds, so this ambiguity goes away. Various systemd tools have been converted to use pidfds internally, and also stuff like libsystemd and the D-Bus APIs are getting extended with a second set of calls that allow pidfds to be used instead of PIDs. This is quite a bit of work, but it's generally not visible if it works.

Kind of in a similar vein, internally we are converting a lot of our code to use file descriptors to refer to inodes instead of path names. The most obvious benefit is that this removes the possibility of a time-of-check/time-of-use race between, I don't know, checking a file and executing the file. But it also makes it easier to write code which operates on subtrees of the file system hierarchy: chroots, and also disk images that you mount temporarily somewhere and then do an operation on.

I also wanted to mention kernel-install. It is now written in C; it used to be a bash script, and this meant that all the logic, for example finding where the ESP is, which is actually a very complex game of guessing, had to be duplicated, and this was very annoying.
So in the end it was rewritten in C, and it also means that we can use FDs in kernel-install, and kernel-install will operate very nicely on images: you can say kernel-install --image=<image name> and do the installation inside of a disk image. I think not yet, but soon.

And this brings me to disk images. The idea has been around for a while, but the name is new: a discoverable disk image, a DDI, is an image of an actual disk that follows the Discoverable Partitions Specification. It has a GPT table, and in the GPT table the role that the different partitions should be used for is specified by their partition type identifier, so you don't need other configuration.

systemd-dissect has been around for a while, but it has grown new capabilities. For example, you give it an image and it can recursively mount the image on some mount point, and the mounting is done in the same way as if you booted the image: it auto-discovers that, I don't know, a specific partition is /var and another partition is /home, based on the DPS. People talk about supply chain issues, so, a bit on that topic, the --mtree command makes a recursive report of the contents of an image.

This is a systemd-dissect example; I tried to make it fit on a slide, so this is like, you know, after surgery. It opens the image and extracts a few files, so it knows what the image says about itself, and it prints some metadata and the partition list. But it also has this idea of using the same image for different things, and this is where the power of DDIs lies. So this particular image can be booted on UEFI.
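Returning to the partition type identifiers for a moment: here are a few of the well-known type UUIDs from the Discoverable Partitions Specification, just to give the flavor (this is a small excerpt; the spec defines many more, per architecture):

```python
import uuid

# Partition type UUID -> role, per the Discoverable Partitions Specification
DPS_ROLES = {
    uuid.UUID("c12a7328-f81f-11d2-ba4b-00a0c93ec93b"): "EFI System Partition",
    uuid.UUID("4f68bce3-e8cd-4db1-96e7-fbcaf984b709"): "root (x86-64)",
    uuid.UUID("933ac7e1-2eb4-4f13-b844-0e14e2aef915"): "/home",
    uuid.UUID("0657fd6d-a4ab-43c4-84e5-0933c84b4f4f"): "swap",
}

def role_for(type_uuid: str) -> str:
    """Map a GPT partition type UUID to its DPS-defined role."""
    return DPS_ROLES.get(uuid.UUID(type_uuid), "unknown")

print(role_for("4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709"))
```

This is exactly the kind of lookup that makes extra configuration unnecessary: the GPT type field alone says what each partition is for.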
It can be booted as a real system or in qemu; it can be used as a container; but maybe it's not suitable for the other things. And so we have portable services: a service that is like a normal system service, but comes with its own file system. And a DDI can also be an extension for each of those things above. So we have the same format, but depending on what partitions are inside, and on some metadata, it is used in different ways. There has been a lot of development in systemd around capabilities for this, in particular how to apply the extensions in various places and how to check that the extensions are signed properly. I have a second talk in the afternoon where I talk more about this, because there's just not enough time here. But briefly: we have extensions that allow us to add stuff to an image. For example, we have an immutable image that is signed, we boot it, and we want to extend it, and we have those extensions that can also be immutable and signed.

But there are also other mechanisms. A lot of work has been going into credentials. Credentials are this systemd idea that you have a blob of data, for example some configuration snippet or a certificate file. The manager takes this blob and stores it somewhere, and when it is starting a service, the service specifies that it wants a specific credential; the manager finds this credential and creates a file before the service is started, and then the service can load the file. This doesn't sound useful, but the thing is that this storage can vary a lot. It can be a file on disk, and the credential can also be encrypted, and then systemd will decrypt the credential before passing it to the service. It can come from a pipe or a socket.
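In unit-file terms, consuming credentials looks roughly like this (the service name and paths here are hypothetical); the service then reads the files under the $CREDENTIALS_DIRECTORY path that the manager sets up:

```ini
# /etc/systemd/system/example.service (hypothetical)
[Service]
ExecStart=/usr/bin/example-server
# Plain file on disk, exposed as $CREDENTIALS_DIRECTORY/server.crt
LoadCredential=server.crt:/etc/pki/example/server.crt
# Encrypted blob, decrypted by the manager before the service starts
LoadCredentialEncrypted=db-password:/etc/credstore.encrypted/db-password
```

The encrypted blob can be produced with systemd-creds encrypt, which can also bind it to the TPM of the machine.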
Because the source can be a pipe or a socket, credentials can be generated dynamically when requested. They can be stored in SMBIOS fields and some other similar places, which means that you can pass a credential to a virtual machine. They can be specified on the kernel command line. systemd-boot will load them from the ESP partition. And credentials are hierarchical, in the sense that we can have a situation where a credential is passed to the virtual machine manager, which passes it to the virtual machine; systemd in the virtual machine loads it and passes it to a service, and so on and so on.

An example of how we make use of this: there's a specific credential called vmm.notify_socket. We pass the credential to qemu, and inside the image, inside the machine, systemd boots and sees that it has a credential. It looks for a credential by this name and sends notifications to that socket: for example, it sends READY=1 when it has finished booting. This allows us to cross the boundary between the host and the machine, and it's done in a fairly nice way; there is no network involved, and it's a very generic mechanism. It can also send an exit status notification, so basically you can have a situation where you make a virtual machine and a container behave in the same way: you can specify an exit status and have the machine fail, for example. Very nice for unit tests, where you do some tests in a virtual machine or in a container, or both, and you want this to be uniform.

Another tool that has seen significant work is systemd-measure.
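For the VM case, qemu's SMBIOS type 11 strings are one of those transports; a sketch (the credential name here is made up):

```
# Host: pass a credential into the guest as an SMBIOS type 11 string
qemu-system-x86_64 ... \
    -smbios type=11,value=io.systemd.credential:mycred=some-value

# Guest: a service can then pull it from the system credentials, e.g.
#   LoadCredential=mycred
```

The io.systemd.credential: prefix is what systemd in the guest scans for; no network or guest agent is involved.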
The idea behind systemd-measure is that you build a new kernel, well, a kernel and an initrd, and figure out some command line options and stuff like that, and with all of that, before you boot it, you calculate what the PCR values will be after you have booted this combination of things. This means that you can predict those numbers, and that means you can sign policies for them. So this is all geared towards pre-calculating PCR values, signing policies, and encrypting secrets in a way that binds them to certificates rather than to specific PCR values.

A related topic is that we have a new idea of boot phases. systemd will extend a PCR with certain strings at various points during the boot sequence, which means that the PCR values change at defined points during boot. So, for example, you can have a key for the root LUKS volume that is bound to the TPM and can only be decrypted in the initrd, because after you have exited the initrd the PCRs change and you cannot access the same secret anymore. And there are also a number of services that write information about the machine into various PCRs, so that we can build more useful policies: for example the machine ID, and information about the disks that are mounted in various places. Again, this is about building PCR policies that are actually useful.

Something that I have been working on is a helper to create unified kernel images.
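On the consumer side, binding a LUKS volume to a signed PCR policy, rather than to literal PCR values, looks roughly like this (the device path and key location are illustrative; the public key corresponds to the private key used when signing the systemd-measure output):

```
systemd-cryptenroll /dev/nvme0n1p3 \
    --tpm2-device=auto \
    --tpm2-public-key=/path/to/tpm2-pcr-public-key.pem
```

Because the enrollment references the certificate and not fixed PCR values, the volume stays unlockable across kernel updates as long as the new measurements are signed with the same key.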
The helper is called ukify, and it uses systemd-measure. You have a kernel, an initrd, and some command line settings; you call systemd-measure to figure out what the PCR values will be; you build a PCR policy; you sign it; you put it, possibly multiple of those policies, into the unified kernel image; and then you sign this whole combined thing for Secure Boot with yet another key. So it's quite a bit of messiness, and ukify makes this easier. It has also been reworked so that systemd-measure doesn't require root privileges, because it doesn't access the TPM anymore, and ukify also doesn't require privileges, which is just nicer and faster.

Another tool that has seen a lot of work is systemd-repart. You specify a set of partitions that you expect to see on the machine, and when repart is executed, it matches those definitions against the partitions that are on disk and creates any that are missing, and maybe, for example, grows the ones that are too small, and so on. If everything matches, then it's just an idempotent no-op.

repart is nice because it works atomically: it first opens the device, without adding a partition to the partition table, goes to a specific offset, writes the contents, and after this has been done, syncs the disk and then creates the partition entry at the beginning of the disk. So the partition appears with contents, with a file system and files in the file system if you specified that, all at once. We used to do this by using a loopback device to mount the temporary partition somewhere, but now the code has been reworked to use file system tools to write the file system contents, including files, directly at a specific offset, and this doesn't require root privileges.
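A repart definition is a small ini file; a minimal sketch (the file name, tree path and size are made up):

```ini
# /usr/lib/repart.d/50-root.conf (hypothetical)
[Partition]
Type=root
Format=ext4
# Populate the file system from this tree while it is being written
CopyFiles=/some/tree:/
SizeMinBytes=2G
```

Pointing systemd-repart at an empty file with --empty=create --size= turns a directory of such definitions into a disk image, without needing root.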
So you can build file system images inside of a container, and also as an unprivileged user, and it's also faster, which is nice.

There has also been quite a bit of work on the systemd-boot boot loader and on systemd-stub. systemd-boot is the boot loader for UEFI, and systemd-stub is the thing that is prepended to the kernel to create a unified kernel image. We used to have a dependency on gnu-efi; it was quite a lot of code and it was annoying to have it, so we wrote a bunch of stuff to get rid of the dependency. They are now smaller, and we also like our code better.

You can use sd-boot for direct kernel boots under QEMU: you call qemu -kernel with sd-boot, and then sd-boot works as the "kernel" and will actually load another kernel from inside of the image. This has been around for a while, but it's becoming more interesting with the whole work on unified kernel images. sd-boot will also do enrollment of Secure Boot keys if the machine is booted in setup mode.

I also wanted to mention that there are some improvements to passing a random seed to the kernel, so when the kernel gets started it already has the random pool populated and we don't need to delay waiting for randomness. This used to be a source of delays in the past.

Anaconda, the installer used for Fedora and RHEL systems, is getting support for systemd-boot. The pull request has been merged; it's not complete yet, but hopefully you will be able to install systems using sd-boot fairly soon.

And I wanted to mention that we don't bite: there are issues and stuff to work on, we merge pull requests quite often, and I will be happy to see more contributors.

Yeah, so, four minutes for questions. Sorry, I didn't catch this... So the question was whether soft-reboot will work with OSTree systems.
I don't think there's any reason why not. What it does is, after it has brought the user space down, it calls the equivalent of a switch-root operation, and you specify a new systemd binary and potentially some options, so we can figure out a way to boot the OSTree system, the same one or a different one. Yes, the same kernel. And actually, processes can survive: you can mark processes not to be killed. So it's not a complete replacement.