I'm going to wear my glasses on the end of my nose like my grandpa, because I can't read my notes without them and I can't see you with them. Aging sucks. Hi everybody, welcome to CentOS Connect at Flock to Fedora. This is the second time we've had a CentOS Connect; it's a rebranding. Previously we ran CentOS Dojos, and we went through kind of a rebranding process: we've got a new logo and a new event name, and with the name we've tried to really focus on connecting people from different projects within the whole Enterprise Linux ecosystem and across all of our SIGs. So it's really good in that respect to be here at Flock this time, to connect with Fedora, which is obviously a very important sister project for CentOS in general. We wouldn't really be able to exist without the work of Fedora, so we're really happy to be at Flock this year.

We had been looking at trying to do three of these per year, with the big one starting off the year in Europe at FOSDEM and then two more, but events this year got weird. So this is our second one of the year and our last one of 2023. We will, well, I haven't booked a room yet so I can't say we will, but we are planning to once again be back at FOSDEM as a day-before-FOSDEM event. So if you're planning on FOSDEM, and if you're not planning on FOSDEM maybe you should, pencil in joining us the Friday before: Connect on Friday, FOSDEM Saturday and Sunday. That's what we're planning to put together.

Anyway, I just want to remind everybody, as it was in Justin's opening, of the Code of Conduct that we're all bound by here. We're also at a Fedora conference, and Friends is one of the four foundations of Fedora, so let's all be friendly to each other and have an excellent event, and in the immortal words of Bill and Ted: be excellent to each other.
We have a lot of good talks today. We've got some talks about CentOS Stream, some talks about the work in our SIGs, about the work that happens in EPEL, which is technically a Fedora project but we love them anyway, and also some of the other projects in the EL ecosystem; Rocky's talking later. So a wide variety of talks I'm excited to hear today. And also there is the panel tomorrow, which is listed as being in this track, but we frankly ran out of time slots to put it in today, so I begged Justin for a slot tomorrow. Make sure, if you're interested, to attend that tomorrow; I'm not sure which room that's in, whether it's here or another, actually it's in the main, the big room, I think.

And let's see: if you're doing the social media stuff, the X or the Mastodon or TikTok or, I don't know, MySpace, make sure you use the social media tags. There are the two on your badge, Flock to Fedora and Flock Ireland, but also, if you're here in this room, please use the CentOS Connect hashtag so that we can see all of your beautiful toots, I guess. And we have Fedora badges for anyone who does badges: there's a badge for this Flock event, and there's a QR code for it somewhere around here as well as throughout the entire venue, and then we have one that is for CentOS Connect. So if you are here in this room you can have a CentOS Connect badge. It is the same badge that we used for the one at FOSDEM; we'll do one Connect badge per year, and if you attend any CentOS Connect you get it. So if you already got it at FOSDEM it's not a new badge, but if you didn't, enjoy your totally uncopyable pixels, I guess.

And then just a note to the speakers, whoever's here: we have 25-minute time slots, and this includes time for questions. You know your talk and how much time it's going to take, so if you want to have time for questions, please fit that into your 25 minutes so that we can have our five minutes to change over; sometimes it can be tricky with laptop changes and stuff to do it in five minutes.

And with that, actually, I have a few minutes, so I'll kill a little time first. Who do we have here that's speaking today? We can do really quick pitches: just a really quick what's your talk and why should people stay in this room, not give up their seat, and hear it. So where's our mic, let's pass it around.

"Hey, my name is Adam, I work on the CentOS Stream engineering team, and I'll be talking about how CentOS Stream works and how it relates to the CentOS project and those things. I'll show you everything in the build systems and how things flow, so that'll be interesting, maybe, to learn about. At 11:30."

"Hi everyone, my name is Brian and I'm coming from the keyword upstream project, and I'll be telling you how we built our CI system on top of CentOS Stream, how we use it and what we think of it."

"Okay: Alternative Images SIG, get your alternative images here, that's right after this. Then KDE and EPEL: what are we currently at, what's the future for that. And State of EPEL with Carl, so you don't get tired of my voice; he has this voice, you don't want me to talk much, even hungover he has a better voice than me. State of EPEL is, I believe, the last, the closing thing, so stay around for the whole day. And it's always good: our graphs look better than the rest,
because they're correct."

"Hi, I'm David, and I'll be talking about the work we've been doing in the CentOS Hyperscale SIG to enable CentOS Stream deployments in large-scale environments."

"Hi, I'm Eric Curtin. I'm going to give an update on everything going on in our Automotive SIG. We develop a CentOS Stream based distribution that's designed to run in running vehicles, basically, so: all the goings-on in that SIG, I guess."

"Hi, I'm Sherif, from Rocky Linux, and my talk will be about Secure Boot and how a distribution can achieve being more secure, a little bit, slightly more secure, using Secure Boot."

Awesome, thanks. I think that's all of the speakers that are in the room; at least it's most of our speakers. So with that, I suppose we'll turn it over. I'll have the pleasure here of introducing, for the first of three times, Troy: Troy Dawson, who this time is going to talk about the Alternative Images SIG. Thank you very much.

Thank you, thank you. The Alternative Images SIG, let's talk about it. The goal of the Alternative Images SIG is to create alternative images. CentOS Stream by default comes with a netboot image and this 10 gig DVD, and there's no in-between, and we want variations; that's why the SIG was set up. We invite all groups to come; the Hyperscale SIG is one of our main partners. Whenever we do these things we have lots of goals; we're not going to go through all of them. It's been one year since I announced this, anyway, it's been a year since we announced it, and this is a SIG that takes a long time to set up. We had to get the infrastructure up, of course, we had to set up the SIG, and we had to investigate what kinds of images we are doing. We looked at live images, install plus live, just plain installs, containers, immutable. Out of all those we tossed out containers, because that's what Podman is for; people can make their own containers. So we investigated how to make them: the live CD tools, Kiwi, and Image Builder, and to be honest the only one that ended up working right now is Kiwi.

So let's go into some of that. This was meant to be a lightning talk, so maybe I'll slow down a little bit, unless you guys are going to have questions after. But this is getting to the good stuff, what we actually started doing. We got Kiwi set up in the SIG infrastructure, the CBS, the CentOS build system. Neal has made some test images; unfortunately they are having problems booting, so they're not really what you want yet, but the infrastructure is almost completely set up, and we should be able to push images up to the mirrors just like RPMs are. The live CD tools and Image Builder are not happening: the live CD tools just because we don't have Pungi, I believe that's the reason; Image Builder, that's a whole other story.

We also promised we were going to do documentation. Documentation has started; it's still in rough shape, but there is some. One of the things that I did when we found out we couldn't do Image Builder was to say, hey, here's how to make your own images using the API. I have since found out at this conference that there might already be a command line tool, so the documentation might get cleaner. And we're going to tell you how to do Kiwi things in the build system, but that documentation hasn't happened yet.

So where's our future timeline? Kiwi: we're close, we're always like one month away from the Kiwi images. That'll be a live image that you can install from, very much like the KDE spin in Fedora or the Workstation live in Fedora. We are going to do both KDE and GNOME.
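To make the Kiwi work concrete, a minimal local build might look roughly like the sketch below; the description repository URL and the profile name are made up for illustration, since the SIG's real image definitions live in its own infrastructure.

```bash
# A rough local equivalent of what the SIG's CBS builds do with Kiwi.
# The description repo and the "KDE" profile are hypothetical examples.
sudo dnf install python3-kiwi
git clone https://gitlab.com/example/centos-live-descriptions.git desc
sudo kiwi-ng --profile KDE system build \
    --description ./desc \
    --target-dir /var/tmp/kiwi-build
# The resulting live ISO ends up under /var/tmp/kiwi-build/.
```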
Even though I love KDE, we're not going to be selfish: we're going to do a GNOME one too. And if others want to join once we get the documentation, we can do more. I really want to do my ultimate desktop image that has KDE, GNOME, Xfce, all the fun stuff, but that's still many months down the road. Once we get images published, we're going to have some sort-of-supported ones, the ones that go on the mirrors; we will have those refreshed at least quarterly so that we don't have old images just sitting around, and as people come to us and get things going, we'll figure out the consistent things for them. Documentation: I mentioned one how-to, but we're going to have more how-tos. Wow, I'm not talking slowly enough; I was supposed to start just two minutes ago. Okay, we're going to have more how-tos and better build-your-own-images documentation. Once I started working with Image Builder, I found that it's sort of cool; I hate the web interface, well, it's not bad, but you can do so much more going through the APIs and stuff, so we're going to have more templates for Image Builder. That's it. I meant this to be a lightning talk, I didn't mean it to be 15 minutes. Do we have questions, or I can talk really slowly? Good, we have one.

"As the person who also does a lot of the images for the Rocky project: first, I'd love to talk about how we can collaborate and work on the different goals that we have, and also the similar goals, because the kickstarts that we use to build our images are based on the CentOS ones, so there's a natural evolution there, given the history. The other side of it is: what do you think of the current landscape? As you were talking, there are all the different options for building images; a lot of tools have been developed by Red Hat and others, and they are in interesting states, I guess is the best way to put it. I'm thinking of Image Factory, which is what Rocky is currently using and building off of; you mentioned Kiwi; there are a bunch of other solutions as well. So what are your thoughts on the image building landscape as it exists right now?"

I would love to have more than Kiwi, not that Kiwi's bad. The one thing that's limiting us is the CBS, the CentOS SIG build infrastructure. We would really like our images to be built in that, or possibly Fedora, but Fedora sort of doesn't want us, which is why we're in this room. But if we can get a tool to work on the SIG infrastructure, we would love it. I believe Image Builder did come up; I lumped live CD in with that, but I believe Image Builder did come up. And, sorry, somebody tells me every once in a while: don't just say whatever comes to your mind, but openSUSE has some really good tools too. So yeah, if we can get them in there, I would love to. Automotive is also trying to do things; theirs is going to be a little different, and we haven't quite figured out how we would get that in there too, which is probably why they haven't really been with us: we don't know how to build their images on the SIG infrastructure. So anyway, that's the thing: we'd love more than one way; Kiwi is the only one that currently fits. Any other questions? Brian, back there.

"I've got sort of a philosophical question for you, Troy. When you have a large release engineering operation and you're building a bunch of images for people, you're kind of making decisions on behalf of what the users might want.
What do you think would make the user folks more comfortable generating their own images, or dealing with their own blueprints, rather than having them pre-generated?"

That's sort of why I was documenting the Image Builder things; oh, and I don't have to repeat the question, because you said it into the mic. One of the ways I think users would be more comfortable is by doing their own configs, maybe starting with a kickstart, but doing their own configs, if we can make it easy for them. Like I said, the people who are currently active in the SIG are going to set up some initial kickstarts and things like that, but we would love for other users to come, and if it is something that we can auto-generate, we're going to get scripts going so that, like I said, once a quarter we just automatically redo their image. We don't want somebody to just drop it in our lap; we'd like them to at least show up every once in a while and say, yeah, this is the right image. But we'd love for the users to make their own kickstarts and their own templates, so hopefully that makes them comfortable. And if our instructions aren't very good, tell us they're not very good and we'll work to make them better, because I do realize that sometimes what goes on in my head doesn't answer somebody's questions, so my documentation isn't always the best. I definitely would love, if someone says it's not good, for them to work with me so that it is good. Any other questions? Thank you for the questions, by the way. And there's Neal, in case you're wondering; he helps with the Kiwi stuff, so any Kiwi questions can go right to him. All right, thank you.

Fingers crossed; this has been tried before, so it should be fine: just this, and play. So I'm just going to say our next speaker up is Adam Šamalík, who is going to talk about RHEL development in public.

Thank you for the non-introduction, because my actual bio would be embarrassing. Hey, my name is Adam, I work for Red Hat, on the CentOS Stream engineering team. Just one thing I want to say: I don't represent Red Hat's view; this is my personal view. I have an engineering role, so this talk is mostly on that theme. I actually gave a similar talk last time, and last time I showed images like this, which showed the situation: we have Fedora, where all the innovation and work happens and where the next major Enterprise Linux version basically gets developed, then there was a big jump to RHEL, and then CentOS was rebuilt from it. These days we have two more steps: Fedora is still the next major version; we now have Fedora ELN, which is a smaller subset built with specific build flags to be more representative of what's going to happen; and then we have CentOS Stream, which is, well, development in public, and that's where the next minor version gets developed.

I also had this image with more details about where the sources are, about the size, about the build flags and so on, and this is actually the first change I want to make: I feel we really need to think about the CentOS Stream and RHEL sources as one thing, because CentOS Stream is where RHEL gets developed in the open, and that affects what contributions happen in both places and what the goals are. Fedora, as you know, is a community distro, and the goal is
to create a community distribution; the Fedora community decides about everything that goes into it. RHEL is an enterprise product, and CentOS Stream is the bits, the space, where it gets developed, so that might influence what gets in there. The takeaway is basically that they have different contribution models.

But anyway, this is a community conference; why am I talking about an enterprise product? I'm here to set the stage for where CentOS Stream is. It's part of the CentOS project, and it's sort of a weird part, because it's basically RHEL; but there are many other things in the CentOS project that create a community, like the SIGs. We have, for example, the Hyperscale SIG that makes it work in big-scale environments, and they're actually nice people to talk to; the kmods SIG, the Alternative Images SIG, the Automotive SIG; there are many SIGs, and this is where the community is, this is the CentOS project. They usually build above the operating system or make changes within it, but they don't have to reproduce the whole thing. And if you actually want to do work in the OS itself, do some innovation and integration, the Fedora project is the great place for that; that's where bigger changes can be done. Also, you may have heard we're talking about agency: that's what the CentOS SIGs are, so join a SIG or create one, make your own things.

But this talk is about CentOS Stream, so let's have a look at the flow of how things get done, and because my focus is engineering, you'll get a flow chart. This is basically how every change happens, and I'll walk you through it. First, with every change there's a Bugzilla bug (there's a project coming to switch this to Jira, but that's for later); that's where bugs get reported or features get requested, and basically an agreement is made to proceed. A merge request happens, that's in GitLab, and that's where the code lands in the repo. That gets mirrored to the internal infrastructure, and then a build gets submitted in the build system. We have three stages that you can follow the changes through: when the build succeeds, it lands in the gate tag, and I'll show you in the build system how that looks; then there's gating, which is a bunch of internal automated tests, and it gets to the candidate tag; then there's a verification step, which is really Red Hat internal paperwork, and it gets to pending and can go into a compose.

What's a compose? A compose is a collection of the RPMs, a snapshot at a certain time, which is made into repos, images, OSTrees, and anything that you can actually consume. And one new thing that's not on the slides: we are building some release testing, so we take the production compose, test it, and then that gets released to the mirrors. We push to the mirrors, we push to Quay for container images, Vagrant, AWS and all those things. Some of that testing is something I want to talk about with you later; we'll be looking for the community to contribute tests and define how this works, but we can talk about that.

Okay, so let's see how it works in practice. This is a Bugzilla bug; it's an old one, but it illustrates how it works. That's how it started: there was one particular change, and you can see that in the URL. Then a change landed in GitLab, that's the merge request, and by the way, that's where all the sources are: if you want to browse basically all the RHEL, the CentOS Stream, sources, go to
gitlab.com/redhat/centos-stream/rpms, and everything is there, all the commits as they land during development, and they land as merge requests like this. Then a build gets submitted; this is Koji, the build system, and you can see all the build logs and everything in there, and if you scroll down a little bit there are the tags: candidate and pending, so it passed all the steps and it's ready for a compose. Actually, one thing that we just added last week is a pending-signed tag; that's just an implementation detail, where all the signed packages land, to make releasing smoother. And when it gets composed, this is composes.stream.centos.org; we have the production composes there, so you can get the latest, and there are the repos and some basic images. After that it proceeds to the release to the mirrors, next to the SIG content, and that's where you can get the CentOS Stream artifacts. So that was the flow: we saw it going from bug, contribution, build, test, verification, compose, to release.

And with that, let's talk about contributions and how Stream relates to RHEL. You got a flow chart before; now you get a lifecycle diagram. This is what the RHEL lifecycle looks like, this is RHEL 9; I think it's from the product pages or somewhere, you can find it online as well. Basically every minor release, I think it's every half a year, a new one gets released, and it gets supported in various ways through various stages: there's the minor release itself, Extended Update Support for some of them, and then Update Services for SAP Solutions does an even longer lifecycle. So you can see there's a lot happening, but for CentOS Stream we don't really need to worry about it that much. There are arrows that point to where CentOS Stream sort of is: this whole thing gets developed in CentOS Stream, so it sort of follows along; you don't really get the sense of the minor releases, that's not there.

What's important to know is that the vast majority of packages in RHEL, and therefore in CentOS Stream, never change in ABI throughout the whole lifecycle. There's a document, the RHEL Application Compatibility Guide, that you can Google and read for more details, and Red Hat takes this very seriously, because customers build applications on top of RHEL and they sometimes just want to run them and never change them, so Red Hat doesn't want to break them.

Which takes us to the contributions. What you can contribute: bug fixes. They're very easy to do and valuable; if you have a bug fix, feel free to submit it. Of course, please don't break what customers have guarantees on, and Red Hat, or the maintainer, needs to decide what goes in, so the maintainer needs to test it and merge it. Another type of contribution is stable updates from upstream; that's usually fine. "Stable" is a flexible word, right; here it means in terms of changes in the code base. As RHEL gets older, because it's ABI-stable, the upstream gets farther and farther away from it, and packages usually don't rebase; so if there's a stable update from upstream, that's good, but RHEL doesn't tend to pull in drastic changes that much. Which takes us to backported features, an interesting type where you take one feature from a newer version and do a lot of work to make it work in the old version without breaking it; that's actually how many updates in RHEL are done, and you're very welcome to do those too.
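To tie the pipeline walkthrough above together, here is roughly how you can follow a single change through the public pieces; the package name is only an example.

```bash
# Browse or clone the sources as they land during development:
git clone https://gitlab.com/redhat/centos-stream/rpms/bash.git

# Builds, logs and the gate/candidate/pending tags are visible in Koji:
#   https://kojihub.stream.centos.org/koji/
# Production composes (repos and basic images):
#   https://composes.stream.centos.org/production/
# Released artifacts on the mirrors, next to the SIG content:
#   https://mirror.stream.centos.org/
```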
What we can't do in CentOS Stream is ABI-incompatible rebases; that's what I mentioned, there's the RHEL Application Compatibility Guide, so please don't send those, because Red Hat, or the maintainer, would need to reject them. And then there's a special, interesting type: docs and typos in man pages. As I mentioned, there's a lot of process involved with every change, so even if you change one line somewhere, it goes through a lot of process; these might need to be stacked together and bundled as one thing, so they might take longer. Otherwise, if you're a customer, there's a thing in the Customer Portal where you can report an issue against the documentation, so there might be a different channel for that.

Okay, so that was about the types of contributions. I know that contributing to CentOS Stream might not be as easy as contributing to Fedora, because of all these constraints, but that's actually why Fedora is so valuable: if you can, please do your contributions in Fedora, and they flow into Stream, rather than making changes in CentOS Stream. Or if you want a change in CentOS Stream, join a SIG, or start a SIG, and do your change there; that's also very welcome.

Okay, so how to do changes. Open bugs in Bugzilla; this is very welcome. If you make it easy for the maintainer to reply, or even just to read, that will definitely help. You've seen the lifecycle diagram: they have a lot of things to do, and we have something like three major releases in flight, so there are a lot of active releases they need to take care of. So: opening bugs, and merge requests. But first, please talk to the maintainers in a bug and agree with them that they actually have capacity to merge it. If that lands, then you can follow it through the pipeline as we've seen, watch the change land, and when it makes it through completely, it'll be on composes.stream.centos.org for you to use.

Okay, let's talk about using CentOS Stream; this will be just two slides. Use it for development: report bugs, submit patches before things go to RHEL; that's what it's meant for, it's a preview of the next minor version of RHEL. That's one use case. Or, if you build on top of RHEL, you can use CentOS Stream as CI, maybe we'll hear about that in a bit as well, and make sure that your bits will work on RHEL in the future. Or use it in the community build system along with the SIGs: build on top of CentOS Stream that way, and if something breaks you can always submit a bug, or maybe a merge request, and get it fixed, which you couldn't do with CentOS Linux before.

And that was it about how it works. Now I want to talk about CentOS Stream 10 a little bit, which is coming very soon, and this is a horrible diagram I made; it's supposed to represent code changes, like a git graph. We have the Fedora Rawhide branch in Fedora; we have Fedora ELN, which is basically the same, just a subset; and when Fedora gets branched, that's where CentOS Stream 10 will start appearing. Actually, we already have some CentOS Stream 10 sources in GitLab; don't take this as a final list or anything, packages will be appearing and disappearing, it's in a rough state, but you can see them there. That will still be syncing changes from Fedora for a bit; then we cut it off and have only CentOS Stream from that point. ELN will then be like RHEL 10, RHEL 11, sorry, and further down the line we'll start building RHEL, and it'll just become one of the normal releases.
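A minimal sketch of the bug-plus-merge-request flow described above; the package, branch and bug number here are illustrative.

```bash
git clone https://gitlab.com/redhat/centos-stream/rpms/bash.git
cd bash
git checkout -b fix-rhbz-0000000 origin/c9s   # c9s is the CentOS Stream 9 branch
# ...apply the fix agreed on with the maintainer in the Bugzilla bug...
git commit -a -m "Fix crash on startup (RHBZ#0000000)"
# Push to a personal fork and open a merge request against c9s in GitLab;
# after it merges you can watch it move through gate/candidate/pending.
```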
Currently, RHEL 10 and CentOS Stream 10 are in the Fedora state, so if you want to land changes, go to Fedora Rawhide and land them there; there's still time to get those in. Otherwise, that was mostly it: that was CentOS Stream, RHEL development in public. Remember that CentOS Stream is just one part of the CentOS project, on this invisible slide, and it's mostly the SIGs that create the community in CentOS, and of course Fedora, which is an entire community project. These are the spaces that you can join and contribute to. Here are some of the links, and that's it for me, thank you. Any questions?

"Yeah, so you put question marks for the Fedora version in that little graph showing CentOS Stream 10, but we're already supposed to know what version of Fedora it branches from, so why the question marks?" Do we know the number, actually? It's 40, right? There you go. For the mic: yeah, it's Fedora 40, but there might be some changes, it might be slightly rough. Okay, thanks for the question.

"Hi, one thing I wanted to clarify, since you were showing Rawhide and ELN and the branching and all of that process: for ELN, most packages are just rebuilt with RHEL flags, and there's very little difference; we kind of discourage diverging in Fedora, we try not to do that too often. The kernel, though: the ELN kernel is the RHEL 10, CentOS Stream 10, kernel. Even though it's the same dist-git, it's built completely differently, it's built with RHEL configs, it is a RHEL kernel. So if there's something you want to see, you don't have to wait for RHEL to branch and then suddenly make changes; it can go into ELN now, and you have a working kernel that is a RHEL kernel."

If anybody needs a moment to step out: we're going to stick to the schedule, because there are people in other rooms, so we'll start the next talk in about ten minutes.

So, a small disclaimer: this whole process is still a learning process. For people who know Secure Boot: the more you know, the more you know that you know nothing, basically. This will be the agenda, and I promise I'll try to keep to time so I won't hold people from lunch. I work as a senior cybersecurity officer in the Irish Centre for High-End Computing here in Ireland; we are a research centre that provides high-performance computing services for all the research institutes here, we are funded by the government, and we are part of the university. I started my Linux and open source journey around the year 2002, I've been a professional Linux sysadmin for over 18 years now, and I volunteer as part of the Rocky Linux release engineering team. I'm mainly focusing at the moment on Secure Boot, SIG/HPC, SIG/AltArch and SIG/Kernel, and I'm a member of the RESF and the Rocky Linux project. Also, I'm a little bit into ham radio, or tofu radio as I call it, since I don't eat ham.

So let's take a step back and talk a little bit about how Secure Boot started. The UEFI Forum released the specification, what used to be called EFI back then, version 2, around 2006, and I thank Vincent Zimmer for correcting my information regarding the dates, because there is not much documentation about how this whole thing started. Then Microsoft started pushing what used to be called the Windows Logo Program, which required hardware vendors, to be able to run Windows 8, to have Secure Boot enabled. There was some information,
that's not 100% clear to me: there were discussions that for specific architectures, like ARM, you couldn't disable Secure Boot, and then they went back and dropped that, so now the user can enable or disable Secure Boot, but there was a lot of back and forth here. There was lots of panic, until maybe 2015; lots of people in the open source and Linux community were saying, okay, basically Microsoft is trying to lock down running Linux on any hardware that has Secure Boot. Fedora 18, CentOS 7 and Ubuntu 12.04 were among the first that started releasing with Secure Boot and shim. There was a lot of frustration when you had a third-party driver, like any of the graphics drivers outside of the kernel tree, that wouldn't run because you have Secure Boot, and your options were either disable Secure Boot, or build your own kernel, or sign your own driver and load your certificates; we will come to that later. But the current state of how the Secure Boot community works is very good: there is lots of collaboration between people from Microsoft, Oracle, openSUSE, Red Hat, Debian, Canonical, and there is lots of coordination now between the shim and GRUB developers and Microsoft, so there is no longer a lack of visibility into what's coming next in firmware development with the hardware.

So what is Secure Boot? Secure Boot is basically a layer that tries to protect you from running malicious code in, or before, your operating system, like bootkits and so on, which is great, and we will see now how it works. You have a set of keys that usually ship with your firmware. There is usually one master key, called the Platform Key (PK), which is usually related more to the hardware vendor. Then you have something called the KEK, which might be one key or multiple keys: the Key Exchange Keys. Then you have two types of databases, db and dbx: db holds what the firmware is allowed to load, and dbx holds what the firmware must not load. That's the theory; most hardware will be shipped with those sets of keys preconfigured and installed.

Secure Boot and the firmware then need to start the chain of trust. The whole chain of trust is based on public key infrastructure within the firmware: the Platform Key signs the KEKs; the Key Exchange Keys sign the db, which includes the specific certificates that will then sign your first-stage boot loader, and the dbx, which prevents specific hashes and certificates from loading in your firmware. In practice, at the moment, the KEK database within most laptops that will run Windows has a Microsoft certificate, and the Linux community needs to submit a specific first-stage boot loader to Microsoft to get it signed and loadable on your machine, because the Microsoft key exists in the db of the firmware. We'll get into the details in a little bit, but don't mix up the firmware db and dbx with the shim db and dbx. Shim is what the Linux and open source community wrote to be able to boot a first-stage boot loader on a machine that only has Microsoft keys, for one reason: GRUB 2 is under GPLv3 and it can't be signed unless you would release your signing keys, and Microsoft and others will not release the private part of a key pair to the public, because that would defeat the whole idea of the Secure Boot concept.
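As a quick aside, you can inspect this key hierarchy on any running UEFI system; a minimal sketch, assuming the mokutil and efitools packages are installed:

```bash
mokutil --sb-state    # is Secure Boot enabled?
efi-readvar -v PK     # Platform Key
efi-readvar -v KEK    # Key Exchange Key(s)
efi-readvar -v db     # signatures allowed to load
efi-readvar -v dbx    # signatures and hashes forbidden from loading
```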
Then another mechanism was added, called SBAT, Secure Boot Advanced Targeting, which is metadata that you put into your EFI loaders. It carries information about the software you are running, like the version, some URL or email, but most importantly the second column, which is a global generation number, and that allows UEFI and Secure Boot to disable specific loaders, or specific versions, based on their global generation. As you can see here, we have shim generation 2, for example, which means we cannot run anything that says shim generation 1, and this is used to revoke malicious or vulnerable boot loaders that have already been released, so they can't run anymore on your operating system. The db and dbx within shim were introduced as a reaction to the GRUB BootHole bug, where there were, I think, around eight security bugs in GRUB at a certain point, and that required all GRUB hashes to be added to the dbx within the firmware, which basically took half of the NVRAM assigned for that in the firmware; so shim now has its own db and dbx.

Okay, so why should you use Secure Boot? Security is not a luxury, it's a demand; as long as there is security you can use, you should use it. For end users, it will protect you from bootkits, and it gives you control over which EFI binaries are allowed to load; that happens when organizations basically wipe the firmware keys and load their own sets of keys into the firmware, to make sure you can run only the specific EFI binaries they wrote; most militaries will do that, they will not leave the stock keys in. Also, as a developer, you may not want any kernel or any GRUB to load unless you built and compiled it, so you can build your own small Secure Boot infrastructure, and whatever your distribution is, you can load your kernel and your keys and make sure this laptop will only load your stuff. For distros, we try to make end users' lives much easier, and it allows us to actually prevent vulnerable EFI binaries and boot loaders from loading on the operating system. For example, if we find out there was a bug in a GRUB that has been released, we can use Secure Boot to disable this GRUB, and tell the users in advance: you need to upgrade your GRUB, because at a certain point this GRUB will not boot anymore.

Okay, so let's talk now about the process of how a distro can achieve Secure Boot, and that's a bit of a mix between technical and non-technical; most of the obstacles in Secure Boot are actually non-technical, they are more about process, governance, and your procedures for actually being able to sign and build for Secure Boot. We'll talk about the technical part first. At a high level, you need to have a FIPS 140-2 level 2 operational environment, and there is a bit of discussion regarding the operational environment, because some will tell you it's enough to have the keys stored in an HSM that is FIPS 140-2 level 2 certified, but it's preferred that the whole environment is FIPS certified. Then you need to generate your own distro keys; as we said, some distros prefer to go with extended validation certificates rather than building their own CA and having a self-signed CA. Then you take shim and you put your CA within that shim; then you sign your kernel, GRUB, fwupd, shim again, and whatever other EFI binaries you are going to load, with certificates from that CA, and compile and ship your packages.
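To illustrate the SBAT metadata described a moment ago: it lives as a CSV-style `.sbat` section inside the EFI binary, one row per component, with the generation number in the second field. A sketch of dumping it; the entries shown are made up for illustration.

```bash
# Dump the .sbat section of a shim binary:
objcopy -O binary --only-section=.sbat shimx64.efi /dev/stdout
# Typical contents look something like:
#   sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
#   shim,2,UEFI shim,shim,1,https://github.com/rhboot/shim
#   shim.distro,1,Some Distro,shim,15.6,https://distro.example.com
# A loader carrying "shim,1" would be refused once the global shim
# generation is bumped to 2.
```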
Now you need to get this shim signed by Microsoft; otherwise it will only boot on your machine if you actually load your shim CA into your firmware. Microsoft has some requirements for this: you need to be registered with their hardware partner portal, you need to get your organization vetted and have an EV certificate, you need to provide your security contacts, and you need to fill in a shim review form, which is basically a GitHub issue, and go through the shim review process. Shim reviews are done by a community of Linux developers, mainly shim and GRUB developers who volunteer their time; they are from most distros: Red Hat, Oracle, openSUSE, and, I don't want to forget anyone, Debian, Canonical. They actually check all your code regarding shim (they only sign shims that load GRUB at the moment), they make sure your GRUB is hardened correctly, they require you to have specific fixes for certain CVEs in GRUB, they make sure your kernel has specific configuration, like lockdown, and that your kernel has specific patches for CVE fixes. Once your shim review is accepted, you get your shim, and of course it needs to be reproducible: the shim reviewers will try to rebuild your shim and they need to get the same hashes, so usually you provide a container file and maybe a side repo with the packages and the toolset that you used to compile the shim.

Once your shim is accepted, you go back to the Microsoft portal, you combine your multiple shims, if you are signing for multiple architectures, into a single cab file, sign that cab file with the EV certificate that the Microsoft portal has, submit your shim to Microsoft, and pray and hope that everything goes well, because sometimes things get stuck in the pipeline at the Microsoft portal. Microsoft will do lots of processing on the binaries and compare to make sure everything is fine; once everything is good, they may send you a questionnaire, some questions to answer about why you are submitting a shim and what it is, beyond the shim review. After maybe a couple of weeks you will get a signed binary from Microsoft, which is basically your shim, including your distro CA, signed by the Microsoft key whose public certificate exists in the firmware of the laptop or PC or whatever. You take this shim binary, you drop it into your RPM, you start running a lot of tests to make sure you are not going to break everybody's laptops and leave nobody able to boot, and once you are happy, you can release.

So the chain will be: the firmware verifies your shim; shim has your certificate, your CA; you have a few certificates from this CA that actually signed your kernel, GRUB, fwupd and whatever other EFI binaries you have; everything boots. So far so good. But then come the kernel modules. Kernel modules are not considered part of Secure Boot, so if you have your kernel in lockdown mode, you won't be able to load any kernel modules unless they are signed with a key the kernel trusts. And the only part of your kernel that's signed for Secure Boot is the kernel image, not the initramfs, or initrd, which is why there is now discussion within the systemd community about the UKI, the Unified Kernel Image, where you combine the kernel command line with the initramfs and the kernel image into one binary with a stub, and have that signed for Secure Boot as a single unit.
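For reference, building such a combined image can be done with systemd's ukify; a sketch, assuming a recent systemd that ships the `ukify build` verb, with illustrative paths:

```bash
ukify build \
    --linux=/boot/vmlinuz \
    --initrd=/boot/initramfs.img \
    --cmdline="root=/dev/vda2 ro" \
    --output=linux.efi
# The single resulting EFI binary (stub + kernel + initramfs + cmdline)
# can then be signed as one unit for Secure Boot.
```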
About the keys: most distros, like CentOS and Red Hat and others, use ephemeral keys while they are building the kernels. While your kernel modules are built, your spec file basically generates a key, the modules are signed with it, and then those keys are gone; but your kernel knows these keys, and there's something called the MOK on your machine, the Machine Owner Key database, which shim manages, including some of the kernel keys from when your distro was built. Then you release; so far so good.

So, the approach, how we did it for Rocky. I sadly kind of volunteered myself to do this; I said, okay, I'm going to volunteer to look into Secure Boot, which was a great journey, and I'm still learning a lot of things. Then came the most complicated part, which is actually finding a FIPS 140-2 level 2 environment. Most of the major cloud providers have an HSM service that is FIPS 140-2 level 2, or even level 3, certified; however, there is no operational environment that can connect to those HSMs that is itself FIPS 140-2 level 2 certified, and that caused lots of issues. We ended up finding a data centre that is FIPS certified, and we had hardware there, and this is how it is. But again, as I was chatting with some people who actually wrote some of these requirements, they were saying: mostly we just require the keys to be in the FIPS environment, not the whole build environment. There is no confirmation of this, though; based on the documentation issued by Microsoft, the whole environment needs to be FIPS certified. And we needed to start releasing those packages with Secure Boot, so we went with a single certificate for kernel, GRUB and fwupd; everything went fine, and we submitted a shim review for 8.5. I mean, we submitted before 8.5, but by the time we got the whole process done, 8.5 had been released, so we finally released 8.5 with a signed shim from Microsoft. Then we had to go through another iteration and release a newer shim, because our shim was based on 15.4 plus some cherry-picked git patches that the shim review committee required us to have before the review was approved. Now we have shim 15.6 signed; shim 15.7 is already released, and I think within a few weeks we'll have a new shim release soon.

So we went through two phases. In phase one we went with a single certificate, as I was mentioning; we were using a YubiKey FIPS 140-2 level 2 module, and we were using pesign in server and client mode to sign the rest of the binaries, the kernel and GRUB and so on: pesign talks to the YubiKey to get the keys and sign the binaries. Now we've moved to phase two, which is what we actually have at the moment: we still have the self-signed CA for shim, but we separated the kernel, GRUB and fwupd certificates, so now we have one certificate per package, and we moved to the YubiHSM, FIPS 140-2 level 2, which allows us to use even more certificates, or key slots, within the HSM module, and we moved to using sbsigntools. So far that works fine and gives us less headache, and now we have a more modular, kind of networked, HSM that we can use even for signing other packages within our Secure Boot ecosystem.
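Roughly what signing an individual EFI binary looks like in the two phases described above; the certificate nickname and file names are made up for illustration.

```bash
# Phase one style: pesign, with the key held behind an NSS token:
pesign -s -i grubx64.efi -o grubx64.efi.signed -c "Rocky SB Signing Key"

# Phase two style: sbsigntools, one certificate per component:
sbsign --key grub.key --cert grub.crt \
       --output grubx64.efi.signed grubx64.efi
sbverify --cert grub.crt grubx64.efi.signed   # check the signature
```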
Okay, so what's next? At the moment, in the RESF, we are talking about joining the UEFI Forum as contributors. We are thinking and talking about signing other kernels from Rocky's SIG/Kernel, which includes kernel-mainline and kernel-lts with some configuration patches. We are looking into POWER and s390x Secure Boot, because those are a little bit more complex; they don't do the traditional UEFI kind of concept of loading binaries. Also, we're looking at arm64: we never submitted an arm64 shim to Microsoft, because Microsoft has different requirements for that, but we are going to look into submitting one, and the aim is to help SIG/AltArch have Secure Boot running on most system-on-chip type hardware, where applicable. We are also looking at the UKI from systemd, which is, as I mentioned, the combined kernel with initramfs and command line, and at systemd-boot, which is a UEFI boot manager. And we need to automate: at the moment there is not much automation happening when things get signed, which is good and bad, because you kind of need to confirm that you are doing things right, but that doesn't mean it can't be automated, with more monitoring, in the Secure Boot ecosystem. There is SBAT support being pushed as kernel patches at the moment, and NX support kernel patches. The NX support is a little bit tricky, because Microsoft, since last November, requires any shim submitted to have NX support, and there is no NX support in the kernel at the moment, and they won't sign any shim unless it supports NX. As far as I know, Debian is the only one that, in the last cycle of releasing shim, has it NX supported, and I know they will backport the NX patches to their Debian kernels. The SBAT patches are still under review by the kernel developers. So, that's it. Any questions?

[Audience question, partly inaudible, about SBAT revocation.] As far as I know, there are still patches that need to go in to be able to actually do the revocation correctly; I'm not 100% sure, but I know that at the moment you can still run things below your SBAT generation, and that should be fixed, I hope, in the next release of shim. Which is very interesting, because I don't exactly know the history behind having SBAT when you can actually use the shim db and dbx, because with dbx you can just forbid all your hashes. Part of the idea, also, is the governance we were talking about: you need to keep logs of everything you are doing, everything needs to be logged, all your binary hashes need to be stored, so you can make it easier for yourself to revoke. But what I know is that SBAT, and the shim db and dbx, were a reaction to BootHole. Any more questions?

"This may be a dumb question, but you were talking about the fact that kernel modules need to be signed if your kernel is signed, obviously, if you want to use them. With a Secure Boot Rocky Linux release, if you have out-of-tree kernel modules, is there a way to sign your own and load them, without having to go through all of this?" Yes, there is a way. That's why the kernel module part is a little bit confusing: kernel module signing is not actually a Secure Boot feature, but if you have the kernel lockdown patches, it becomes tied to Secure Boot, and kernel modules will not load unsigned if you have Secure Boot enabled; and, as far as I remember, there were some things needed so the kernel can check that the boot loader actually honors Secure Boot.
However, there is a way: you can load another certificate, so you can sign out-of-tree modules if you want, and then enroll that certificate into your MOK database. The way this works is that some distros will have a specific certificate loaded that isn't used unless it's needed, for signing out-of-tree kernel modules that they have really, really audited and want to run in their distro. Otherwise it's up to the user: the user can just generate their own key, sign their modules, load the certificate into the MOK database on the machine, and then the modules will load. I hope this answers your question.

"I guess this is kind of a me-catching-up question. From what I understand, the shim was originally developed so that you could actually use Secure Boot with Linux, regardless of the fact that the whole thing wasn't signed at the time. Is that correct?" Well, I guess that's why it's called a shim: you submit to Microsoft not the whole thing, just the shim. "And going through that process, it must be that all the open source operating systems are just doing shims; what does Microsoft do, is there any feedback from them about that, or are they just like, yeah, whatever, they don't care?" It's not that they don't care. The idea of shim, as Neal and I think David mentioned, is that you don't need to push your kernels and so on. You could have just pushed your GRUB to be signed by Microsoft, but that can't happen, because the GRUB 2 license would force Microsoft to release the keys, and you don't want that; that's why shim existed in the first place, to have something to submit to Microsoft that they can sign, and then a Linux distro can load from there. At the moment, as far as I know, shim is hardcoded to only load GRUB 2, and most of the shim reviews will advise you that, at the moment, they only review your submission if you are only loading GRUB 2. So that's the main idea of shim: something simple enough to submit to Microsoft, very easily audited, so Microsoft can sign it, and then from the shim onward you load your own certificate and your own key. Also, in the same way as when you put your CA within your shim, you can utilize the db feature of shim and have multiple certificates there, and there was a discussion in the community: okay, why don't we just have one shim that has all the distros' keys? There was that discussion. Of course, there are also side tools around shim now; I think they just changed the name recently, it used to be called certmule, which basically means: if we are Rocky, and you, another distro, want to load only kernel modules or kernel images signed by you, you can give us your certificate, we sign this certmule containing it, and then it can be loaded. It's kind of similar to db and dbx, but the issue with db and dbx is that every time we need to update them, we need to send the shim back to Microsoft to be signed; that's why this certmule tool exists, however it's not used much yet. Any more questions? Are there any more questions, going once, going twice. Okay, thank you so much.
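To make the out-of-tree module answer above concrete, signing and enrolling your own key typically looks something like this sketch; file names are illustrative, and `sign-file` ships with kernel-devel on RHEL-like systems.

```bash
# Generate a key pair for module signing:
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local module signing key/" \
    -keyout MOK.priv -outform DER -out MOK.der
# Sign the out-of-tree module:
/usr/src/kernels/$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der mymodule.ko
# Enroll the certificate; shim's MokManager asks for confirmation on next boot:
sudo mokutil --import MOK.der
```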
So, lunch is served, or will be served in three minutes; the lunch room is on this floor, just down the hallway, and we will return here, or to whatever track you want to be in, in an hour, at one thirty. If you come back here, we're going to get a Hyperscale SIG update, followed by an Automotive SIG update. So thank you, and I'll see you at lunch.

There are three talk slots followed by the break, and we only have, are these working? We have a bonus break; we can go to one of the other tracks, or in our case we're going to the other talks, I guess. We're on whenever you guys are ready. I believe we are. Cool.

Hi, hello everyone, my name is David. "And I'm Neal." We'll be talking about what the Hyperscale SIG has been doing lately. I've given various versions of this talk over the years, so hopefully this is not too boring if you've been to one of the previous ones. "Hey, we have a logo now." We do, we do have a logo, yes; these are slightly less boring than they usually are. So here's the agenda for today: we will do a quick recap of what the SIG is, we will talk about deliverables and what recent work we've been doing, and we'll close with a few notes about what's coming down the pipe. Do you want to do the intro?

"Yeah, sure. So, the CentOS Hyperscale SIG is primarily focused around CentOS Stream. Our goal is to enable people to collaborate, to bring what they're doing to support large-scale deployments of CentOS to the community, and to help everyone benefit from those kinds of things. Key to that is having this community-centric, cross-company, cross-stakeholder collaboration on packages and testing, because in any real organization that's going to run any kind of Linux at scale, whether it's for desktops, servers, cloud, whatever, you're going to make your own packages, and you're probably going to be replacing some stuff, or shipping some stuff, or updating some stuff, and in a lot of places they wind up just doing it internally and never sharing it with anyone. We want to bring that more into the open, to help people build a good community of practice around this kind of stuff, and maybe even upstream some of it to make it better for everyone else. And it's open to anybody who really wants to do this kind of thing, whether it's desktop, server, cloud, you know, weird thingamajig, whatever; we're open to it."

Yeah, and to be clear, even though this primarily targets large-scale environments, you don't have to work at a company to contribute to the SIG; it is really open to anyone. So if you find this kind of work interesting, or if you think what you would like to do would be a good fit for the SIG, by all means feel free to reach out and get involved.

The SIG was established in January 2021. We started with six founding members from various companies; we've now grown to 30 members, which has been quite fun. To be clear, this isn't necessarily 30 members actively doing things all the time, because people have various interests; some people join just to work on a specific thing, and they keep working on that. Generally speaking, I would say we have a core group of people that is fairly active, and then a wider group of people who come in, or join, for specific activities. We hang out on IRC in #centos-hyperscale; the room is also bridged on Matrix. You're welcome to join at any time; most of us are in the US, so you might get
more interactive replies if you hit us up during US time, but people generally keep an eye on the channel. There are formal meetings every two weeks, also on IRC, in the #centos-meeting channel, which you're also welcome to join if you would like. And for the last year or so we've also done monthly video hangouts. This was something that started during the pandemic, and it proved very nice to actually have some kind of face-to-face connection, especially when we couldn't meet each other at conferences. These end up being a mix of social time and occasionally working through problems, or just shitposting; it's fun. It's an open Zoom meeting, so anybody is welcome to join. We've also started doing actual in-person meetups now that we can travel again: we did our first meetup in Boston at DevConf.US last year, and we did our second meetup at Connect earlier this year. I wanted to do a meetup at this event, but we couldn't get it scheduled in time; hopefully we will do another one later this year, at some point, somewhere, maybe, we will see.

So, this is really just more detail about the things that David and I just said. We try to document all the things that we're doing, and we have a back catalog of talks and things like that, as well as our ticket tracker that shows what kind of stuff we're working on and what we're thinking about, things of that nature; feel free to check all of that out in more detail. We also publish activity reports every three months, I believe; whenever Shaun emails me that I'm due for an activity report, which he did just before this, so we are due for another one that will come out after this conference at some point. But you can find the previous ones linked there; they're all published on the CentOS blog as well.

Okay, let's talk about scope. Generally speaking, the SIG is focused, as I said, on enabling large-scale deployments. This happens at various layers, but there are a few main things that we try to do. First of all, faster-moving package backports: there are sets of packages that we would like to maintain at a faster pace compared to what's in stock CentOS Stream, and because these are packages that are part of core CentOS Stream, they aren't the kind of thing we could maintain in EPEL, so it makes sense to do this kind of work in a SIG; one example here is systemd, which we will talk about in a little bit. The other large bucket of changes we maintain in the SIG is policy and configuration alternatives and changes: packages shipped in the distribution that have opinionated default settings that don't necessarily match what would be a good fit for these kinds of environments. So we ship packages that have alternative settings, but with the idea that they are still drop-in compatible with the stock distribution; they just provide extended capabilities. The SIG is also a good space for doing large-scale testing of features: in some cases you are able to test changes just by touching one package, but some things involve the distribution overall, so it is useful to have a place where a change can be applied and tested in a production environment, and we will have an example of this later as well. Finally, we produce a kernel build as part of the SIG, and we also produce live DVD images.

Okay, let's talk about package backports. Package backports are delivered in the hyperscale repo that we maintain:
On a stock CentOS Stream system you can dnf install centos-release-hyperscale, and that will give you access to the repo. As I just said, these are meant as drop-in replacements for stock CentOS packages: if you're running a stock system and you install this, it should behave the same as your previous stock system, and if it doesn't, that's a bug and you should let us know. These packages are built against EPEL, and they require EPEL, because frankly a modern system isn't terribly useful without EPEL. We only target x86 and Arm, because those are the only architectures we can actually test on, but if somebody's really passionate about, say, POWER or s390x, you are welcome to join us and maintain them.

There is a long list of packages that I am not going to put up here, because it would be very boring, but those are the latest ones that came up recently, from looking at the build tag. In general these tend to be either packages where the version in Stream is too old to be useful for a specific use case, so it's a backport from Fedora; or there's a specific patch missing that will take a while to get into Stream properly, so we maintain it in the SIG until it's updated in Stream (and same for the updates, by the way; oftentimes they get rebased in Stream and we can drop them); or packages where we follow the upstream very closely and want to maintain them.

Two major items that we are working on lately are OpenSSH and QEMU. OpenSSH is driven primarily by Meta, where we have an internal build of OpenSSH with a fairly extensive patch set that we've been slowly working with the OpenSSH upstream community to get open sourced and ideally merged into OpenSSH proper. So we're going to try and get this maintained as part of Hyperscale. There are builds of these already out, but they are not in the repo, because frankly I don't trust them to be usable yet; but if you want to play with them, the sources are out there and the patch set is there. Likewise for QEMU: we have a few members in the SIG that are interested in having an up-to-date QEMU, ideally an up-to-date QEMU in EPEL, because QEMU would actually be a good fit for EPEL, but I also think there's room for doing SIG work here, so we're hopeful we'll be able to provide QEMU, and potentially an extended virtualization stack, as part of the SIG in the future.

Right, so as part of what we do, we try to make sure that whatever packages we're backporting, modifying, forking, whatever, we can keep track of what's going on. If you're involved in Fedora you're probably familiar with release-monitoring and the automated notifications for when new versions of software are out. We implemented a variation of this that actually takes two feeds: it takes release-monitoring for upstream stuff, but it also takes information from CentOS Stream core to see when stuff gets updated there, so that if we have a package that is actually forked from CentOS Stream, we can do rebases and stuff like that. It also lets us track the things we take from mainline or from Fedora, for example, as we're going to mention later with systemd, where we track further along, so we can make sure that we are caught up and keeping up to date. We don't yet do the rebuild automation; that's something we're trying to figure out how to do. It's a complicated mess of interactions with all the infrastructure to do it properly, but it's a goal that we want to have, because with the package set that we have and a bunch of different things, a lot of it is very mechanical once the initial package is made, and we'd like to reduce the busy work and make it easier for us to do more high-value stuff.
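To make that concrete, here is a minimal sketch of getting onto the SIG's backports as just described, assuming a stock CentOS Stream 9 box with network access:

```bash
# A minimal sketch of the flow described above, on a stock
# CentOS Stream 9 system; the SIG packages expect EPEL to be present.
sudo dnf install -y epel-release
sudo dnf install -y centos-release-hyperscale
# The backports are drop-in replacements, so a plain upgrade pulls them in.
sudo dnf upgrade -y
```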
Yeah, and if you're in a SIG that maintains packages, you can run the same tooling if you would like. It's published on that repo, and it's relatively straightforward to deploy on OpenShift, or really any other environment where you can deploy containers.

So let's talk about systemd. As I mentioned, we have a branch of systemd that we track as part of Hyperscale. The release version is currently 252; I think we had 253 in progress, but we'll probably just do 254 at this point, because it just came out upstream. We have builds for Stream 8 and Stream 9. This generally tracks the latest-ish systemd, and it's built with the defaults of the latest-ish systemd in Fedora. Among other things, this means that it ships with cgroup v2 by default, and I expect at some point we will actually drop support for cgroup v1, when that ends up happening upstream in systemd. It also ships with the plethora of extra daemons that systemd provides that aren't shipped in CentOS Stream by default: it ships with oomd, it ships with networkd and resolved. For oomd, if you're running the stock kernel from Stream, you will need to run it with PSI, which I believe is not enabled by default in 8, so you have to boot with psi=1 or something like that; I think we have PSI enabled in 9. And oomd is actually provided in CentOS Stream core now for 9 as well, it's just not installed by default. And I don't think we have the weights in the stock one; we do have the weights now, yeah, we added that upstream. Excellent. networkd and resolved are also upstream now, also turned off by default, but networkd is the specific one we add on top. And then there's a bunch of stuff from Meta and the systemd community backported in to make it better for our use cases. An example recently was the journal work, where there was a lot of effort coming from Meta, with folks improving the journal upstream; these were changes we were able to test in production by leveraging the SIG, and then we were able to get them landed into systemd upstream, and then they became available as release builds.

This build also supports SELinux, although as far as I know none of us actually runs SELinux in production in the environments running this. Well, you do, that's right. If you're passionate about SELinux, we could really use more people to keep an eye on the SELinux stuff here, because it's hard. Generally these are just backports of the SELinux policy from Fedora, as a module, just to make it work. And if you're interested in what features are in systemd specifically, you can reference the NEWS file from upstream, which is very comprehensive.

The way we actually do this is that we have a repo on Pagure that tracks the upstream systemd repo, which we use to stage patches; that's the one we make releases from. This usually tracks systemd-stable, sorry, not systemd. We then have another repository for releng; we used to have this on Pagure too, but we moved it to GitHub so we could leverage the GitHub CI pipelines. This has both the scripts that developers use to do the general work, and it's also where the CI is maintained.
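Circling back to the PSI note above, here is a hedged sketch, assuming a Stream 8 box on the stock kernel where pressure stall information is off by default:

```bash
# A hedged sketch of enabling systemd-oomd from this build on Stream 8.
# Exact defaults vary by kernel build; psi=1 is the boot flag mentioned.
sudo grubby --update-kernel=ALL --args="psi=1"
sudo reboot
# After reboot, check PSI is exposed, then turn on the extra daemon:
cat /proc/pressure/memory
sudo systemctl enable --now systemd-oomd
```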
The way the CI works is that we do daily rebuilds of our packages against the latest systemd GitHub HEAD. The idea is that every day we get a signal on whether there has been any change in systemd upstream that would break on Stream in our environment, so we can do something about it. Sometimes it's a bug on our side, sometimes it's something that needs to be fixed in systemd, sometimes a little bit of both. This has been very useful at Meta internally specifically (oh, I work for Meta, by the way, in case that wasn't clear; Neal does not work for Meta, Neal works for Reddit), but it's also useful in general. Long term, we would actually like to have a bit more extensive testing here, so we could do things like spin up a VM or cloud instance and run a battery of tests, potentially the whole test suite; we're not quite there yet. How to work with systemd here is documented fairly well, so you can reference the contributor guide if you're interested in doing work there.

Do you want to do this one? Yeah, sure. Right, so some time ago some lovely folks from Intel came by and said, hey, we want to try to do cool things with our CPUs and there's nowhere to do it. We were okay with doing it, so let's go for it. We worked with Intel to create a space for them that included things like a zlib backport with some enhancements they've been working on, and some glibc stuff that's ABI- and API-compatible. Last I heard, some of that is going to move to the ISA SIG, some of that is going to move into CentOS Stream core finally, and some of that is going to be completely thrown out and replaced with a Fedora change at some point, because there are some things targeted there that could go into Fedora to then bring into the next RHEL, or something like that. So there's some stuff going on there. You can reference the blog post for more information. This is actually a pretty good example, I think, of the kind of change that can be staged in the SIG, which then provides a place to prove that it's actually useful and valuable; then it can get upstreamed in all the proper places, and long term we can sunset the changes in the SIG itself. It's also one of those examples of where something can start out in our SIG and then fork out into its own SIG, if that makes sense and it has some kind of sustaining energy behind it.

Oh yeah, as I mentioned, we target 8 and 9 concurrently. Most of the new development, I would say, happens on 9 and is then backported to 8. I think most of our production environments these days are on 9. We will definitely sunset 8 by the EOL, but I expect that as we get closer to the EOL, the work that goes into 8 will start dwindling quite a bit.

As part of the SIG we actually made quite a few contributions to Stream itself. I'm not going to read off all of these, and this is not a comprehensive list, but these are some examples of things we worked on in Stream itself. This generally led to either contributions on GitLab to CentOS Stream proper, or working with the developers in Stream to land the corresponding change in the distribution. And I did not list all the rebases, because that would be a very long list; it's far too boring.

On the testing side, as I mentioned before, the SIG also provides an avenue for doing large-scale testing.
The example we have here is the copy-on-write change that we've been working on for quite a few years now. The idea behind this change (you can read all of it on the Fedora wiki) is that it's a change to the packaging stack, so RPM, DNF, and that ecosystem, to leverage Btrfs and copy-on-write in a way that can make package installs more efficient. This isn't just patching RPM; it is a fairly extensive change that involves multiple components, so it's kind of difficult to test by just installing a few packages. What we did in the SIG is provide a repo called experimental that has all of the packages needed for this change: you can install the release package there, then do a dnf upgrade, and that will give you the whole stack and let you leverage this feature. This is what we run in production at Meta, by the way; we actually run this stuff, and it works fairly well. It both provides a way for people to play with it and test it, and it's also a nice playground that allows the change to evolve. There's been quite a lot of discussion upstream in RPM on the best way to integrate this change, and the form of it has changed quite a bit; I've linked the various PRs, so I think it's a fairly interesting thing to look at if you're into the history of the change. Having it out there made it very easy to point people here: it's a place where this actually works, that you can test, that you can play with.

Yeah, and for the kernel: on top of the userspace stuff, the SIG actually makes a kernel that enables other features. Primarily, right now, I think we have simpledrm enabled across the board for basic graphics, and we have Btrfs support. We build it for CentOS Stream 9 and 8, based on the CentOS Stream 9 kernel tree, and it's available in the experimental repo. It's really a complement to the other stuff we're doing in userspace. As part of doing this particular work we actually did a bunch of contribution work to CentOS Stream 9 upstream, and I wound up being the guinea pig for figuring out how to handle upstream contributions to the CentOS Stream 9 kernel. As part of that I chronicled my experiences and wrote, for our SIG, a cheat sheet on how to correctly contribute fixes to the CentOS Stream 9 kernel. In addition to that work, the SIG helps with kmod-btrfs for the kmods SIG, to make sure that it stays working and that they can ship it for RHEL as a separate kernel module.

I can do this one. On the userspace side, there are a few changes that take care of the userspace side of Btrfs. We have a backport of btrfs-progs, because stock RHEL doesn't ship Btrfs, so they don't ship the userspace for it either, and so CentOS Stream doesn't either; we ship btrfs-progs and we ship compsize. We also ship the storage stack for installing the distribution with Btrfs support restored, so this can be used for rebuilding Anaconda, effectively, and for building images with Btrfs. We also ship other kernel userspace tools in the same vein; that list is not comprehensive.
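For the copy-on-write stack mentioned above, here is a hedged sketch of opting in, assuming a system that already runs the main Hyperscale repo on a Btrfs root, and assuming the experimental release package carries the name published by the SIG:

```bash
# A hedged sketch of trying the RPM copy-on-write stack described above.
sudo dnf install -y centos-release-hyperscale-experimental
# Upgrading swaps in the patched rpm/dnf stack from the experimental repo.
sudo dnf upgrade -y
```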
Finally, we ship a modified build of kpatch. kpatch is shipped in the distribution, but the version that is shipped in RHEL and in CentOS Stream doesn't include the actually useful bit of kpatch, which is the part for making kernel patches, because I believe it's meant so that you get kernel patches from Red Hat and just apply them. We ship kpatch with the build side included, so you can make your own kernel patches if you want. We also ship support for Clang PGO optimization, which is something Meta has been working on for the past couple of years, because we use Clang extensively for kernel builds internally. I am not going to talk about PGO here, because it's a complicated topic, but if you're interested in this, there was a long talk at Plumbers last year, I believe, that covered it, so you might find that interesting.

In addition to packages, we also have container images. Container images are great because they give you a way to quickly play with and test the system. These are built from scratch on OpenShift; you can find the repo there. They aren't based on the existing Stream container images; they are bare images built from nothing. The reason for this is that the CentOS Stream container images were originally based on UBI, and that made it very complicated to layer changes on top. These are hosted on Quay, and you can use them with that Podman one-liner if you want to play with them. They are minimal container images, so they're fairly suitable for doing work on top of, if you would like to do so. I like using them for CI/CD stuff, because EPEL is also pre-enabled and activated in them.

Yep. So on top of this, the container stuff, along with the kernel and the RPM CoW work, kind of builds up to the live media spins that we've been working on. Currently we have two CentOS Stream 8 spins, of GNOME and KDE Plasma, that were built with kickstarts using livecd-tools. These include basically our whole suite of stuff, except for the Intel repo, and they allow us to give people a complete experience of what our entire stack looks like and how it performs; you can download them and check them out. For the past year and a half or so we've been plugging away at getting the CentOS Stream 9 spins done using Kiwi descriptions, and getting a whole bunch of infrastructural work done to support that within CBS, because for CentOS Stream 8 I was doing it on my laptop, uploading everything by hand, and it sucked: it was very manual, and sometimes I screwed up, and that was not good. So for CentOS Stream 9 we're trying to automate and simplify the process so anyone can do this anywhere. And because of the work we're doing for this, we actually helped spin up the Alternative Images SIG, and we've been using our expertise to help them get started so they can start providing images as well. Yeah, the idea is for all of the image-building work to eventually converge within the Alt Images SIG, so that it becomes a service available to other SIGs that would like to build images, and we don't have to reinvent the wheel every time.

Coming up in the future: as I mentioned, there is more work to do around image builds; there's the work we talked about on QEMU; we have a long-standing goal of putting together a way to do transactional updates using Btrfs; and we would also like to have cloud images at some point for Hyperscale. These are all things that we would like to have in the future but aren't quite there yet. Here are a few links you can reference. I will upload these and attach them to Sched, so you can get a copy of the slides afterwards and you don't have to remember all this.
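For reference, a sketch of the Podman one-liner mentioned above; the exact image path on Quay is an assumption based on the SIG's published images:

```bash
# A hedged sketch: drop into one of the Hyperscale minimal container
# images, which ship with EPEL pre-enabled.
podman run --rm -it quay.io/centoshyperscale/centos:stream9 /bin/bash
```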
And that's all we had. Do we have time for questions? Okay, are there any questions?

So this is not a question for Neal, it's a question for you. You mentioned that you contribute stuff to Stream, and there's this issue, it was even mentioned today during the keynote, that Red Hat only wants stuff that is useful for RHEL. Does this create a problem in practice for you?

Yes. This has been a long-standing source of tension, I would say. I think in general things are trending in the right direction and getting better, but there's a lot of variance, depending on who maintains specific components within Stream and within RHEL, on whether your contribution will be accepted or reviewed in a timely way. There are areas of the distribution where it's very easy to get changes in, where it's very easy to have a conversation of "hey, this component should really be rebased" and we can get that done quickly. There are other areas where this can take a while; it can take several conversations, it can take getting somebody offline in a meeting to actually discuss it. And sometimes it's not a good fit, and sometimes you do end up having to do it downstream, because for whatever reason it just doesn't work out. So I think that's something we're trying to figure out overall. I would say that, compared to trying to do this back in the CentOS Linux 7 days, this is far better than it ever used to be. Just the fact of having everything in a GitLab repo where you can submit MRs directly, rather than relying on attaching patches to Bugzilla and hoping for somebody to see them, has already been a big improvement. But yes, I agree that this is something we, overall, need to do better at. Any other questions?

Did we steal some of your slot? This is yours, yes. Oh, did we steal a slot? No; we probably should have swapped them, based on the other ones. But am I good to start early? Yeah, I think we started early; the other rooms are in the middle of one-hour talks, so nobody's going to be coming over. Cool.

Um, yeah, so this is a general update on all sorts of things going on in the CentOS Automotive SIG. For those of you who don't know me, I'm Eric Curtin; I'm a software engineer at Red Hat, and I work mainly on automotive stuff. Oh, and I wanted to give special thanks to Sandro, because I found this beautiful slide deck lying around and asked if I could use it, extend it, and embellish it, and he said yeah. So thanks for helping me create the slide deck.

So yeah, our SIG is still relatively young; I think it's about two years old now. Just to give a description of what it's all about: this is an example of an open hardware vehicle from a company called Open Motors. It's a very bare-bones framework; it comes with no software, no electronics, it just has the minimal requirements for building a vehicle, so you can make your bespoke custom changes on top to make your vehicle more individual. I kind of think of CentOS Automotive Stream Distribution, and the derived Red Hat In-Vehicle Operating System, as the software version of this: building up your software in a vehicle starts from the operating system, and CentOS Automotive Stream Distribution (AutoSD for short, sometimes) enables you to do this.
So, just before I describe this slide specifically: our model is very simple. It's very similar to the CentOS Stream / RHEL model, where CentOS Automotive Stream Distribution is based on CentOS Stream, and we just have additional automotive repos that generally add extra automotive-specific packages, except in the case of the kernel, which we actually replace. The same goes for Red Hat In-Vehicle Operating System: that's based on RHEL, and we add additional packages for automotive. Because we're based on CentOS Stream and RHEL, the usual workflows for contributing to Fedora, EPEL, and CentOS Stream all apply, of course; but for an automotive-specific change we also have one other route for contributing, which is what's called in this slide the CentOS Automotive SIG repos. If you contribute your change there, it'll make its way into CentOS Automotive Stream Distribution, and in time make its way into Red Hat In-Vehicle Operating System. And then we have this kind of continuous certification and testing framework, so if anything pops up, the change is made in the repo and it propagates through the pipeline again. We receive all kinds of contributions: from hardware vendors doing hardware enablement, from automotive vendors like General Motors with feature requests and whatnot; sometimes software developers just want to add certain libraries that maybe aren't in CentOS Stream as the base. We also work with other partners like SOAFEE and Eclipse SDV, so sometimes we include reference implementations for the standards that come out of those groups.

Oh yeah, so this is one of the few packages we actually replace: we have our own automotive kernel. This is like 99 percent the same kernel that goes into CentOS Stream; the key difference is that we use the real-time Linux patch set. Basically, the scheduler for this type of kernel focuses on determinism and low latency. I sometimes use this analogy to describe the whole real-time Linux thing: say you're the driver of a vehicle and you hit the brake pedal; you would probably like the brakes to react quickly, and within a certain time frame. That's kind of what real-time Linux is all about, and it's often used in these kinds of safety-critical domains.

So yeah, there's a focus on functional safety. There are specific certifications we aspire to achieve for the operating system, and this is all about reducing the risk factor and ensuring every function is carried out as prescribed. I've heard this analogy from a couple of people in automotive in the past: obviously we can't ensure that bad things never happen, but, as the analogy goes, if I give you a gun and you shoot yourself in the foot, the gun actually worked as designed; you just used it in a poor way. A lot of this comes down to things like reading the man pages: do the man pages say what actually happens when you call this function and execute the code? It's all that sort of thing. And of course there are performance standards. We actually aim to be the first continuously certified Linux platform for road vehicles, and this effort is, to be honest, kind of essential to the safety of automotive Linux distributions; we actually pump a lot of effort into this area.
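As a quick aside on the real-time kernel just described, here is a hedged way to spot it on a running system; both checks are standard PREEMPT_RT conventions rather than anything AutoSD-specific:

```bash
# RT kernels typically advertise PREEMPT_RT in the version banner
uname -v | grep -i preempt
# and expose a realtime flag in sysfs (prints 1 on a PREEMPT_RT kernel)
cat /sys/kernel/realtime 2>/dev/null
```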
Oh yeah, another difference from, say, vanilla CentOS Stream is that we use OSTree as our immutability solution. A bunch of Red Hat-based operating systems use this; just to name three: RHEL for Edge, RHEL CoreOS, and CentOS Stream CoreOS, which, if I remember correctly, was announced at the last CentOS Connect. So what does OSTree provide? It has a lot of desirable qualities that are suitable for automotive. OSTree gives you atomic upgrades: say you get a power cut in the middle of an upgrade or whatever, there's no such thing as a partially applied upgrade. And there's also a tool that complements OSTree called greenboot: once you do an update or install an extra package, when you reboot there'll be health checks, and you can mark a boot as successful and healthy, or bad and failed, in which case you would obviously roll back. So the atomic upgrades feature is desirable. I've started calling what OSTree does "directory-based A/B updates", because I've talked to partners in the past who've often said it's not A/B, it doesn't do A/B switching, because you don't require A and B partitions; and that's not actually true, we just do A/B updates in a different way, more directory-based rather than partition-based. And yeah, we apply delta upgrades, because OSTree is basically like a git repo for your filesystem, so you just apply the diff on top every time you do an upgrade. It also provides immutability, so the filesystem is read-only and you can't really change it that easily; although I will say, if you're a developer, there are some developer tools that can give you write access.

There was an interesting talk earlier about Secure Boot; a somewhat related project in the OSTree ecosystem at the moment is something Alexander Larsson and Giuseppe Scrivano are working on, called composefs. What composefs does is extend the chain of trust further down to the filesystem, so you can detect malicious changes, or bugs that cause data corruption, or whatnot. And something that we don't include in Automotive Stream Distribution, but that I think is an interesting advancement in the whole OSTree ecosystem at the moment, is a tool being developed called bootc. What bootc does is use OSTree containers as a transport and delivery mechanism; so you're booting into a container, but it's not a container per se, you're just using the protocol, basically. That's kind of cool because it consolidates our transport and delivery mechanisms, but yeah, this is kind of a future thing.

Oh yeah, to go into more detail about composefs: I left a link to a talk Alexander did, and to the GitHub repo. There's ongoing upstreaming work in the Linux kernel, and we intend on backporting that; I suppose I should say to CentOS Stream, because that's where it actually lands, but it eventually propagates to RHEL, and since we're based on those operating systems, we will be one of the users of this. As I said, it verifies the filesystem is the way it's supposed to be; in case your hard disk failed or something and there was corruption, it detects that kind of thing.

What else do we have? Oh, this was brought up in an earlier talk as well: how we build images. We do something slightly different from the other SIGs.
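Sketching the OSTree flow described above with the standard ostree tooling; AutoSD images may wrap this differently:

```bash
# Shows current and previous deployments (the directory-based A/B "sides")
ostree admin status
# Pulls the delta and stages it; it only takes effect atomically at the
# next reboot, and greenboot health checks can trigger a rollback to the
# previous deployment if the new boot is marked as failed.
ostree admin upgrade
```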
You may have heard of Image Builder before. The core framework that Image Builder uses to compose operating systems is called osbuild, and we just use that directly; we're not so interested in the UIs on top, so we kind of wrap some Makefiles and scripts around osbuild, and basically that's how we build our images. We don't have Anaconda installers or anything like that either, because, well, 99 percent of people don't install their own automotive operating system. Yes, I'm sure there's always the one percent.

Oh yeah, this is just another diagram; I finally remembered why we put this slide in. One of the differences between CentOS Automotive Stream Distribution and Red Hat In-Vehicle Operating System is that although we provide sample images for you to play around with and develop on, it's actually generally the end customer that does the final build. They'll take all the CentOS stuff, add their own packages and whatever on top, and actually do the final image build themselves, so they get their own custom in-vehicle operating system.

So yeah, these are some of the various communities we're involved with; I'm more familiar with some than others, to be honest, the scope of this talk is a little large. But just to give you an idea of some of our partners, we work together on automotive standards and that kind of thing.

Actually, yeah, I wanted to talk a little bit about the difference between CentOS Automotive Stream Distribution and Red Hat In-Vehicle Operating System, and I'm actually going to go back a bunch of slides, because there's something I forgot to say on this slide: there are some packages that only make it as far as CentOS Automotive Stream Distribution and don't go further, because there's a distinction between the project and the product. I'll give you a simple example of some of those packages. We allow you to install CentOS Automotive Stream Distribution on a Raspberry Pi, to have a generic environment to play around with things, and we have a bunch of packages that enhance the experience on the Raspberry Pi; but those packages will probably not propagate into Red Hat In-Vehicle Operating System, for example, because the Raspberry Pi is not an automotive board. So sometimes there's a difference between the CentOS Automotive Stream project and community, and the actual final product. That's something I forgot to talk about: not every package makes it all the way. And the next group of slides shows some slight differences between CentOS Automotive Stream Distribution and the final product as well.

So, what is CentOS Automotive Stream Distribution made up of? We have a kernel. We have a container runtime and all sorts of container-based tools. We have an OTA framework that delivers security updates and that kind of thing. We provide logging and mechanisms for monitoring and diagnostics, the usual systemd stuff, syslog and that kind of thing. We have various tools for development, and a lot of health-checking tools for various types of failure detection. We have mixed-criticality support: this whole software-defined-vehicle approach has gotten very popular, and most of the industry is actually trying to reduce the number of ECUs in a car, so boards now have to satisfy many different roles.
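As a rough illustration of the osbuild wrapping mentioned above; the paths and the manifest name here are illustrative, not the SIG's actual files:

```bash
# The SIG's Makefiles and scripts ultimately boil down to feeding osbuild
# a manifest describing the image to compose.
sudo osbuild --store /var/tmp/osbuild-store \
             --output-directory ./out \
             autosd-image-manifest.json
```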
This mixed-criticality support is what that's all about, and I'm actually going to briefly describe it in a slide later on. There's also prioritized boot-up, where we've been putting in a lot of effort trying to optimize our boot, because there's an expectation in automotive to boot really quickly; people often talk about the two or three or four second boot, or whatnot, for different services. And then there's Red Hat In-Vehicle Operating System, which obviously has all these neat features but then has certain additional things on top: if you want the certified version, that's Red Hat In-Vehicle Operating System, and you also get the safety manuals and documentation and that kind of thing.

Here's another project that started in the automotive org: it's called Hirte, with an asterisk, because it's going through a renaming process; I think it's on its third name now, so hopefully we'll have the final name next time. It's a multi-node service controller. It's kind of like a new container orchestrator, but it has qualities that are different from traditional container orchestrators: it aims to be deterministic rather than aiming for eventual consistency, it's lightweight, there's a focus on performance, and it's also very simple; the lines-of-code count isn't exactly huge, which makes it easier to certify. I left a link to a blog post, and there was also a recent talk about it at DevConf that goes into more detail.

QM is kind of a related tool to Hirte. This is something else we're working on; it's all about namespacing containers to make sure they don't interfere with other workloads, because some workloads in a vehicle are a lot more critical than others; like, I don't know, the brakes might be more important than Netflix, that kind of thing. It achieves that using various tools like Hirte, Podman, cgroups, namespaces, and SELinux, and actually an interesting thing about this QM containerized environment is that each container even launches its own instance of systemd.

Yeah, so you'll hear a lot about Arm enablement in automotive, because 90 percent of the SoCs deployed are Arm-based. That's part of the reason I've worked on some of the Asahi stuff with Davide and Neal; their talk is soon. But while that's our focus, we're also open to alternative architectures in the future as the industry evolves; we have our eyes on architectures like RISC-V and that kind of thing, so I wouldn't rule those out. We also build everything on x86_64, which brings us to our next slide (that wasn't supposed to be the next slide): developer tools. Yeah, we build and publish everything for x86 as well, so people can develop on their laptops, ThinkPads or whatnot. We have VMs, so you can actually spin up an instance now in AWS; we have all the various container tools, and we even have a container on Quay now, so you can use those for development, as I said, on x86 and Arm. We even have emulators if you need an Arm environment on an Intel-based machine. We actually even produce a Raspberry Pi bare-metal image, if you want a physical board to work on that doesn't break the bank. A lot of these artifacts are downloadable at this link; there are actually further artifacts that you have to build yourself.
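A tiny sketch of the QM idea above, a container booting its own systemd as PID 1; this is illustrative only, since QM itself wires this up with much stricter isolation, and the base image may need the systemd package present:

```bash
# Podman can run a container with systemd as the init process.
podman run --rm -it --systemd=always \
    quay.io/centos/centos:stream9 /sbin/init
```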
So now, this slide: these are some of our various publicly announced partnerships, GM, Qualcomm, Luxoft, and ETAS. Some of them are more related than others, but yeah, it's all part of the wider Automotive SIG. What else? So yeah, I think I've said everything I was supposed to say. These are just some things as an aside: one of the guys I know on the team asked me to spread awareness that the Fedora Robotics SIG is kind of being brought back to life, and there are a lot of similarities with the stuff we're working on in the Automotive SIG, so that's a SIG that's getting some more attention now. And as we said, a huge focus in the Automotive SIG is on Arm enablement with various automotive boards, and Davide and Neal are actually doing a talk after this around the enablement work we've been doing for Apple silicon. I actually found that really useful, because we talked about the Raspberry Pi in the past; the Raspberry Pi is a 100-euro board, so it can be a bit slow, and I find the Apple silicon hardware very useful if you're looking for a generic Arm development environment for AutoSD or other Arm-based operating systems.

These are some of our community contacts; you may have met some of them before, people in the room. We have Jefro, who's our acting chair; he runs the monthly meetings, which are quite nice if you want to learn more about the community. They're quite informal, and we normally have a decent attendance of 20-plus people, so that's a great way to just jump in, chat, and speak informally about things you're curious about. We have Pingou, who works with Jefro, and we have Leo Rossetti, who is often our point of contact for the SOAFEE community and Eclipse Software Defined Vehicle. Here are some links to those monthly meetings I was talking about; Jefro posts most of them on YouTube in the end. We have a GitLab where you'll see all our automotive artifacts. And if you can't, or don't want to, attend those monthly meetings and just want to informally ask questions, I find Matrix is very good, as well as the mailing list; they're all very responsive ways of communicating with the community. Yeah, and that's kind of it, unless there are any questions. Looks like I'm good. Thanks very much, guys, you made my job very easy.

Okay, I'll let the cameraman tell me when to start. Oh, I'm good? Okay. Hi everybody, I'm Troy Dawson. I'm a Fedora KDE SIG member; more specifically, I build KDE in EPEL, so I build it for RHEL and RHEL compatibles. Let's start with a bit of history. With RHEL 7, KDE was in RHEL 7, along with Qt, both 3, 4, and 5, but that was KDE 4. With RHEL 8, they ripped it out; and by "they" I mean I actually ripped it out, they told me to. I've been a KDE user since, well, since I've been using Linux, so that's '99. So when I was asked to take it out of RHEL 8, it was mixed feelings: I was sad, but I was excited, because I knew then I could take it over in EPEL and do what I wanted with it.

So where are we now? RHEL 8: those of you that are still RHEL 8 users (actually, there are a lot of RHEL 8 users, so I shouldn't say "those of you"), we are at Qt 5.15. For those that don't know, Qt 5.15 is going to be the last of the Qt 5 releases; from then on it's just going to be patch releases, I don't know for how long.
Right now we're at 5.15.9. RHEL 8 isn't expecting any other Qt updates unless they're critical security updates. And for KDE Plasma, RHEL 8 is at version 5.24. We're at 5.24 because I basically can't upgrade any further; the libraries are so old, you know, GPGME and about five others, we just couldn't get it any further. So right now that's where we're at, and that's where 8 is going to stay.

For 9, that's the more exciting part. Currently we're in this transition, with RHEL 9.2 that has Qt 5.15.3 and CentOS Stream that has 5.15.9. In RHEL 9.3, Qt is going to get updated; it's not going to get updated in RHEL 8, again, that's probably just going to be critical updates for RHEL 8. But with 9 we keep pace with the latest stable Fedora, so we're not grabbing things from Rawhide; right now we're grabbing things from Fedora 38, and what happens is we upgrade them via CentOS Stream, in this case epel9-next. You'll see that they're both at Plasma 5.27, because Qt is not going to get updated any more, and Plasma, from what we know (and I wish Neal was here), is staying with 5.27; I don't think it's going to go to 5.28, but just like Qt they're going to keep bumping that patch number up. Actually, starting next week I'll be updating from Fedora 38 into epel9-next, so that we can stage everything: when RHEL 9.3 comes out, everything's staged, we've got the build dependencies, everything's tested, and then we can just sync the dist-git repos over to epel9 and build them. It's nice and quick and easy, and there are fewer surprises. There's always a surprise (I'm amazed, there's always a surprise), but it's a lot better than if we didn't do it this way.

And this slide says what I just said: 8 is staying at the current release. Oh, that's right: if there is a major security release, we will try our best to backport it to 8; if it's unbackportable, we'll deal with that when it happens. Also, for 8, we said it was at 5.24, and 5.24 is an LTS release, so about once a year I check whether there's another release and I'll do that; but that's not a major upgrade, it's usually just some bug fixes. RHEL 9, as I said, we're syncing twice a year, with each release. The reason we're syncing twice a year instead of once a year: those of you who remember last year, or maybe a year and a half ago, we said we were doing it once a year, and that just wasn't practical. KDE was moving too fast, and for some other packages, as you know, we need the previous release to build. So we're doing it twice a year, and we test the updates; it's actually pretty stable and pretty good. That's one of the reasons why I like KDE.

Now, this is probably what y'all came for: what are we going to do next? For 9 it's not going to be too exciting; it's not in this slide. For 9, basically, we're continuing what we're doing until we can't update anymore: we're going to follow the current Qt 5 versions in Fedora 38. At some point Fedora is going to go to Qt 6, and 9 is going to stay on 5; that is the current plan. Oh, and if we have time, which we probably will, since even if everybody asked a question we'd still have time, I have other thoughts.
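A hedged sketch of what that staging looks like from the consumer side, assuming the epel-next repo id used by the release package:

```bash
# epel9-next is where the KDE stack is rebuilt against CentOS Stream 9
# ahead of the next RHEL minor release.
sudo dnf install -y epel-release epel-next-release
dnf --enablerepo=epel-next info plasma-desktop   # see the staged Plasma version
```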
But the current plan is that we will stay with Qt 5, although Qt 6 is in there as well, and we will continue to update it.

So, 10: what are we doing for RHEL 10? RHEL 10 is in our sights. RHEL 10 will have Qt 6, and it will have KDE Plasma 6; we will not be doing KDE Plasma 5. We don't know how well they would match together anyway. But we will have Qt 5.15.x: basically we're going to be grabbing that from, well, RHEL 9. Anyway, we're going to keep the latest one, because RHEL 9 has to keep being supported, so RHEL 10 will keep that Qt 5 at least as long as RHEL 9 does. But the main desktop is supposed to be KDE Plasma 6, with Qt 5 alongside. You'll notice, and this is actually from a discussion we had in the KDE SIG, that most of these say "should" or "can" and things like that; no firm commitments, okay, so just keep that in mind. So: most Qt 5 packages will go into EPEL 10. I can tell you one that's not, and that's qt5-qtwebengine. That is a pain in the rear, and I ain't putting it over there. You shouldn't need it: qt6-qtwebengine, yes, that's great; just don't expect the Qt 5 one. But the other basic libraries, as long as they aren't a pain in the rear, will be going into EPEL 10. If a Qt 5 package is no longer supported in Fedora, that package will be updated in EPEL 10 on a best-effort basis, which means it's possible we might drop some Qt 5 packages down the road if there are some bad security issues. I'm a great packager, and the KDE SIG is great with things, but I am not the best security-update person for Qt 5. For packages based on Qt 5 (there are still quite a few that are Qt 5 only): if they can be built with both Qt 6 and Qt 5, here's the "should": they should only be built on Qt 6. If you want to, we're not going to force you to not build the Qt 5 version, but we're recommending that you build Qt 6 only, or have some plan for phasing over to Qt 6. If it can only be built on Qt 5, go ahead, build it on Qt 5, but I'll try to work with upstream to get them to do Qt 6. I mean, I know we're planning on working on EPEL 10 soon-ish, but by the time it's actually released (what is that, are we still two years out? okay, it's still two years out), Qt 6 will have been out for a good long while, so try to get them all built on Qt 6.

Oh wow, okay. Oh, we do have questions, good.

Okay, are you the one doing the live image? Uh, no. Oh, you're the one; I was trying at that time to get it to work, okay, and I was talking to you at the time extensively. I tried to install it, but it was too early, it wasn't operational, and I couldn't even update an operational one later. Yeah, you're doing a great job with KDE, but this sounds like a one-person project. What happens to KDE in EPEL if you aren't here?

It sounds like a one-person project, but really there's the KDE SIG; I am the rebuilder for the KDE SIG. And if Neal was here: he does a lot of things; if he gets hit by a bus... me and him shouldn't be on the same bus that gets hit. But the KDE SIG actually does a lot of work to make sure things build in EPEL, so that I only have to do a rebuild. And there are other people that can do it: I have my scripts, people have my scripts.
Nowadays, okay, in the early days of RHEL 8 I was doing a lot of stuff, but then the SIG said, hey, we're going to make sure things build on both RHEL and Fedora, and they've been very good at putting in those if-statements and everything else. This last one, getting ready for epel9-next, I literally had to do one package: I did the test builds on Copr, went "oh, okay", and there was like one package where I did the pull request; and to be honest it didn't build on Fedora either, so anyway, I fixed it and built it there. Okay, yes?

Why doesn't that fit? You just said twice a year, and then you said half a year; that's six months. Well, first off, we follow the stable Fedora, so right now it's Fedora 38. What Fedora does is put things in Rawhide, and they won't even put an update into the next stable release until it's at least a point-one, and I won't bring it into EPEL until it's at least a point-two, because we want the bigger bugs to get shaken out first. So it's a good point. And yeah, we originally said one year, and like you said, KDE is releasing every six months, and it just wasn't working, so we shifted to every six months. That's what we stuck 8 on; we tried to do the year, but when we tried to jump, it was just easier to do it every six months. Thank you for the questions, those are very good. Yes?

Okay, the question was: when are you going to put in a change proposal to have Fedora KDE shipped as the workstation instead of GNOME? No, no, those are not fighting words. Are any GNOME people watching this? No? Yeah, well, this is not the KDE SIG meeting, this is the EPEL one. I don't mind it being a spin. I did push, though; I am so grateful for the website redo, because before, it was: here's Workstation, here are some alternate things that are coming up, here's documentation, here's something else, and oh, at the bottom of the page, you know, below testing and stuff, is the KDE spin. It's like, yeah, that's sort of an insult.

Oh yeah, we've got the mic. How does the KDE spin stack up against Workstation, just in terms of usage? Have you looked at the countme data? I can't answer that standing at this podium, not because I don't want to, but because for my EPEL talk I use Matt's graphs, and it has that. Oh okay. He doesn't ever show that one; I have it on my laptop. Yeah, I know the data's there, I just haven't looked at it. I'm curious. I'll tell you in about 20 minutes. I never looked at that because it's on the Fedora graphs, and I could be wrong, maybe it isn't. Anything else? Oh, I actually almost used up my time. I'm hoping other people show up, because I'm supposed to give the next talk with somebody else and they're not in the room. Anything else? We can end a little early if we want. Okay, sorry about that, but I think people will come back in for the EPEL one. Yep.

Okay, good. Hi everyone, so my name is Brian Carey, and I'm pretty new here. I work for Red Hat; I'm the upstream KubeVirt CI maintainer, and I'm here today to talk to you about how we build KubeVirt CI with CentOS Stream. So, just for some background, for anybody that doesn't know: what is KubeVirt? KubeVirt is a virtual machine management add-on for Kubernetes. It allows users to create, run, and manage VMs in Kubernetes clusters. It's considered a production-grade hypervisor.
And as far as I know, it is the leading virtualization add-on for Kubernetes, so it's the leading way to run VMs in Kubernetes. And last month we had a very big milestone, in that KubeVirt finally reached version 1.0, so that was a big release for us.

Testing a project like KubeVirt comes with its challenges, as it integrates a large number of different projects and sub-projects, but in kubevirt/kubevirt proper our testing can be broken down into two main categories: unit tests and our larger e2e tests. The end-to-end tests actually require running against valid virtual test clusters that we spin up. One of the other aims of the project is that we try to stay as close as possible to the testing methodology used in the upstream Kubernetes project, which means we end up using their ecosystem of tools and services, like Prow, which is basically like a Jenkins for orchestrating CI jobs against Kubernetes clusters.

So, I can hear you thinking: okay, that all sounds great, but where does CentOS Stream come in? CentOS Stream is the solid base that we use for our virtual test clusters, which are then scheduled onto larger workload clusters; these clusters are spun up for our end-to-end testing. For the test cluster node images, there's a specific sub-project within the KubeVirt organization called kubevirtci, and this project is responsible just for building these "cluster providers", as we call them. Overall, these cluster providers, the virtual ones, are based off of CentOS Stream Vagrant images; we basically take the latest CentOS Stream Vagrant image, continually, as much as we can. The building process is all done through automation, but the general outline is: we have a Fedora container, and this Fedora container has all the tooling required to start up the CentOS Stream Vagrant image. Then we have a tool for orchestrating the provisioning of these images; it's written in Go, it's called gocli, and it basically orchestrates which scripts are run where, as some scripts are run during provisioning and some scripts are run at cluster runtime. The provisioning scripts mainly focus on setting up network and storage requirements, and the dependencies that we have for installing Kubernetes with kubeadm; these scripts are all run against the VM at provisioning time. Once the scripts complete successfully, the VM inside the container is shut down, and we then commit that container image, so the image is committed with the updated VM image inside.

So, okay, that's great, we have a node image; but how do we know what we built is actually a valid Kubernetes cluster? Within the kubevirtci repo we have a number of test lanes that run against any changes pushed into the kubevirtci repo as pull requests. These run a subset of the KubeVirt end-to-end tests, and they also run the suite of Kubernetes conformance tests, so that we know what we're building is an actual valid Kubernetes cluster based on CentOS Stream. Any merged change to the kubevirtci repo leads to new cluster provider images being published to Quay, so they're always available there. When our automation picks up that there's a new image in Quay, the image is picked up and a PR is created against kubevirt/kubevirt proper, so that we can run the full suite of end-to-end tests against that image as well.
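A hedged sketch of how one of these cluster providers is typically consumed from a kubevirt/kubevirt checkout; the provider version here is illustrative:

```bash
git clone https://github.com/kubevirt/kubevirt.git && cd kubevirt
export KUBEVIRT_PROVIDER=k8s-1.27   # a CentOS Stream based provider image
make cluster-up                     # gocli boots the VM inside a container
./cluster-up/kubectl.sh get nodes   # talk to the freshly provisioned cluster
```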
So we do have some protections there against just running on the latest CentOS Stream, for the rare occasions when we hit issues.

This is a picture I drew up beforehand; part of the reason I wanted to do this talk was to spend some time drawing this up from my own mind. So this is how it looks when it all comes together. These end-to-end test pods are basically scheduled onto our worker nodes; the ones below just mirror the top one, it's the same thing throughout. Basically, within the end-to-end test pod we have a Podman instance, which spins up the node container image that we've just published; this node container image in turn starts up the CentOS Stream VM, and then the Kubernetes cluster gets to a running and healthy state following some runtime scripts. Then we install KubeVirt against it and run our test suite. KubeVirt uses this virt-launcher pod concept, spinning up VMs inside these virt-launcher pods, so the test VMs that we spin up inside that test suite are running in nested virtualization. The bare-metal nodes, up until recently, were also running CentOS Stream in production.

Some of the benefits we saw from using CentOS Stream: first of all, it's an extremely stable base; 99.9 percent of the time we don't hit any issues, so it's just smooth sailing. It allows us to catch potential issues earlier than we would have in the old CentOS Linux 8 model; previously the providers were built with CentOS Linux 8, but we've moved to Stream 8 and Stream 9. Over the last year we've hit a couple of issues, really only a handful; a couple of examples here: we hit a couple of kernel bugs, we hit an issue with NetworkManager and some DHCP clients, and SELinux policy changes can trip us up every so often. There are components in KubeVirt that require certain privileges, and CentOS Stream can't be aware of that, so then we talk between the projects to see what the best way forward is; sometimes we make changes on our end, sometimes CentOS Stream makes changes. So overall, CentOS Stream is a very good target for us for testing, as it really reflects our main downstream product, which is OpenShift container-native virtualization; it's very good at reflecting that environment, and it also allows us to catch issues very early.

And problems we've hit: I really struggled to come up with any major problems here, but I had it in my overview for the talk, so I said I'd better include it. Issues are very rare, as I said on the previous slide. Sometimes we get blocked by CentOS Stream issues (well, they're not CentOS Stream issues, but issues that we hit), and that blocks us from delivering new providers; so if a new version of Kubernetes comes out, sometimes we can be blocked from testing against the newest version of Kubernetes because we have some issue back here in CentOS Stream. The second point is just a pet peeve of mine: because we're always testing against latest, sometimes we have the latest kernel in CentOS Stream, we run our automation, and the automation fails because the matching kernel modules aren't there yet. That's the thing we hit every so often; then I have community members coming to me going "why are these lanes failing?", and I'm like, just give it 20 minutes and they'll be there.
But this always starts up the conversation within the KubeVirt community about whether we want to pin to a certain version of CentOS Stream or not. In my opinion, we get way too much value from running against the latest to pin it, and we do not want to be managing uplifts of CentOS Stream ourselves, because we would fall behind; so I generally prefer to just go with latest. Sometimes we'll pin an individual package for a certain period of time, just to get around a problem, but that's always a temporary measure. Running off the latest just gives us too many benefits.

So yeah, as a bonus content slide, I said I'd show you where to get CentOS Stream VM images for KubeVirt. If you have a Kubernetes cluster and you have KubeVirt installed somehow, and you want to run a CentOS Stream VM, we have containerDisk images that we build and publish on Quay as well. These containerDisk images are normally used for ephemeral VMs; they're very handy for CI or any kind of testing you might be doing. They're basically based off the CentOS Stream cloud images, loosely wrapped around those, so basically any of your configuration will be the cloud-init config you would need for that. And the image registry also includes a handy YAML example to get a Stream VM up and running very quickly. I was going to do a demo of it, but I chickened out towards the end, so here's just a screenshot of the landing page instead. As you can see, the YAML is quite brief (it's probably a bit small on the screen there), but you have your cloud-init config down here; you just add your cloud-init config and then you can do whatever you want. That basic example will get something running, but it won't be much use to you: you won't be able to sign in, you won't be able to get into the VM or anything like that. So you add your SSH keys, your authorized keys or whatever, to that cloud-init config.

So yeah, just to conclude: KubeVirt loves CentOS Stream, really. As I alluded to previously, it's not just KubeVirt CI; it's used all over KubeVirt. All of our KubeVirt artifacts are built in a CentOS Stream workspace. KubeVirt actually relies on the CentOS Stream virtualization stack: we take libvirt and QEMU from the CentOS Stream 9 repos and build those into the virt-launcher pods that I mentioned earlier, so the components actually starting up the VMs in the Kubernetes cluster are all Stream 9 virtualization stack. And our production CI workload cluster was deployed on CentOS Stream until very recently, but unfortunately the burden of maintaining OS updates and Kubernetes updates on that cluster was too much, so we moved to OpenShift just recently; the updates were on me to do, so it was too much. Yeah, so then just to finish up, just to say thank you to the CentOS community. And, if there are any questions?

I'm curious a little bit about the Vagrant layer: was that a historical decision, or is that something that you chose? Yeah, it was a decision made a long time ago, and it's been carried forward. The Vagrant image gives us a handy user setup and login details into the image, and we just use those throughout our automation, so changing it would involve a bit of work; but the Vagrant images have been working quite well for us. Awesome, yeah, it's good to hear from Vagrant users, for sure.
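A hedged sketch of running one of those containerDisk images on a cluster that already has KubeVirt installed; the image path follows the containerdisks convention, and all names here are illustrative:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: stream9-demo
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: root
            disk: {bus: virtio}
          - name: cloudinit
            disk: {bus: virtio}
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: root
        containerDisk:
          image: quay.io/containerdisks/centos-stream:9
      - name: cloudinit
        cloudInitNoCloud:
          userData: |
            #cloud-config
            ssh_authorized_keys:
              - ssh-ed25519 AAAA... user@example
EOF
```

As the talk notes, without the cloud-init keys you'd get a running VM you can't sign into; the userData block is where they go.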
This might be a basic question; I missed the beginning, unfortunately, because I was between rooms. But can you give me the elevator pitch for KubeVirt? You're virtualizing inside virtualization?

No, no. The nested virtualization is only how we run our end-to-end tests; it only happens within our test suite. KubeVirt itself is for anybody with large Kubernetes clusters who wants to run VMs alongside pods. For large organizations that might be migrating to a Kubernetes workflow and have existing VMs, they can easily take that image and deploy it straight onto their Kubernetes or OpenShift cluster, and it basically helps the transition to the Kubernetes way of working. The VMs are treated as, well, not the word "container", but a place that apps go. It's not that containers are running on top of that VM; you could go all the way down that path, but generally the VMs will just be running inside pods, more or less as monolithic services.

Okay. And the other question I had: when you're doing nested virtualization in your CI, is the environmental difference a limitation? Do you miss certain things because it's not quite the same environment?

Not really. There is a slight performance impact: when we run our test suite on bare metal it runs a lot faster, and under nested virtualization some of the tests may take longer. So we have a flaky-test process to identify those tests and make sure those kinds of flakes get fixed.

And is it running on openQA in Fedora?

No, no, this is all running on a big bare-metal workload cluster that we have.

Okay, thanks. Because one of our co-presenters is not here, and hopefully we'll pick up people after that aside, here's our last talk of the day. Do you guys just want to start? Like, we're on time, so...

Oh, really? Welcome, everybody. It's great that this, the last talk of the day, has such a full room; that's great compared to my last talk, wonderful as it was for those two of you who were there. Anyway, I'm Troy Dawson. I am the EPEL Steering Committee chair. Howdy, y'all, I'm Carl George. I'm the EPEL team lead in the EPEL sub-team of CPE, the Community Platform Engineering group inside Red Hat. Yep, and we're going to give our annual State of EPEL at the Fedora conference.

As usual, I like starting off with graphs. For those of you who've heard Matt's talks, he says there are two types of these things. This one is his thing called Velociraptor, and it's basically the old method: it's just counting the IP addresses of machines coming in, and we don't know anything about them. And it's sort of cool; we've actually broken the five million point on this one. You'll see that EPEL 7 is by far the biggest, at three and a half million. We have all these various points; I'm not going to point them all out, but you can see EPEL 9 there. I'll point that one out: it's at about the hundred-thousand point. (People coming in late, ha, it's okay.) So anyway, EPEL 7 is still by far our leader.
Now again, these are Velociraptor numbers. For those of you who saw Matt's talk, you know why it's dinosaurs: he calls them dinosaurs because we don't really know what's happening. These are all done by IP address, and we know that, for example, Facebook has several million machines behind one IP address, and that counts as one. (Last I heard they've changed it to plural millions. Millions? Okay. With an M or a B?)

Now this is Brontosaurus. Brontosaurus is with the new, hey, I should have that in the notes, the new DNF countme. These are only for 8 and 9, because 7 didn't have countme, but these are more realistic numbers. As you can see, EPEL 8 is up here at two million, which is pretty good, and nine, well, yeah. I never show these alongside the Fedora numbers, because the Fedora folks get sad. Yes, Matthew has said several times that the EPEL artifacts are some of the most downloaded artifacts from the Fedora project. But for nine we're only at basically a quarter of a million.

So let's break those down a little bit more. Again, if you saw Matt's talk, he likes to break them down into the inferior things that don't last very long: one week, two to four weeks, five to 24 weeks, and 25-plus weeks. These are actually trends that we expect. I used to go over these a little more, but for, let's call them enterprise loads, these big 25-plus-week numbers are what you expect, because people don't usually run these systems in short cycles except for testing. So we expect testing, and different one-month-cycle people; some people do that, they completely wipe their machine and reload it every month. And then the majority of people are long-lived.

Okay, so here come the funner things. Last time we showed these, we did not put names on them, and people were sad, so I'm putting names here. This is EL9. You'll notice Rocky is in the lead, but it's at a hundred K, so it's not really that big of a lead; they still are ahead, though. CentOS Stream and Alma are basically tied, fighting for second, then RHEL, and there's Oracle. But this is nine, and nine is still growing. RHEL is going to grow, because a lot of people don't really deploy RHEL until the .3 or .4 release. Actually, we'll see; it's right there.

Okay, now this is eight, and this has some really fun things that I thought were interesting, and I didn't even tell Carl, so he can be surprised. The biggest one I find interesting is CentOS Linux, right here. This is where CentOS Linux 8 went end of life, and it was really fun, because it went down, then it went right back up to almost a million, and then it went down again. But if you look at the past year or so, it's actually leveled off at six hundred thousand.

Yeah, some clarity on that. My theory is that this wasn't systems going away and people deploying new ones. The way countme works is this: when the end of life happened, the repo content got moved to the archive, or the vault, sorry. At that point, because of the order of the repo files, EPEL stopped getting hit, because DNF would abort on that first repo it couldn't reach. So a lot of people just switched to the vault repos, to keep using the same systems without updates, and then those systems started contacting EPEL again. So I think that dip, and then the climb right back, is just people switching from the live repos that got retired to the vault repos that still existed but were just unmaintained.
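To make that swap concrete, here is a hedged illustration of what those users likely did: repoint the retired CentOS Linux 8 repo definitions at vault.centos.org. The 8.5.2111 path is the final CentOS Linux 8 compose as best I recall; verify the exact paths and the repo file name against the vault before relying on them.

```python
# A hedged sketch of repointing a retired CentOS Linux 8 repo at the vault.
# Run as root; paths and file name are from memory and worth double-checking.
VAULT_REPO = """\
[baseos]
name=CentOS Linux 8 - BaseOS (vault, unmaintained)
baseurl=https://vault.centos.org/8.5.2111/BaseOS/$basearch/os/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
enabled=1
"""

with open("/etc/yum.repos.d/CentOS-Linux-BaseOS.repo", "w") as repo_file:
    repo_file.write(VAULT_REPO)
```

With the vault reachable again, dnf no longer aborts before it gets to the EPEL repo, which is consistent with those machines reappearing in the EPEL counts.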
Yep, and I totally agree with Carl. But it looks like at some point at least some people realized that maybe they should move to something else; at least some people realized it was a short-term solution. Not everybody, though. So it's still in first place, but not by as much. And at the very end it dips down again; it dips down again, and we'll see how it goes. It's dipped down before. In second place, RHEL and Rocky keep trading back and forth, over and over. And this last one: Alma finally passed CentOS Stream. I'm on the CentOS Stream engineering team, so this makes me sad. Oracle Linux is there, and then CloudLinux; Matt actually attributes this one to CloudLinux.

Now here's the next slide. I went and found all the distributions that have been checking in, and this is where, for me, it's fun. I'm not going to read this out; it gets weird, all 101 of them. So CloudLinux, which was at the bottom of the last chart, is at the top of this one; this chart covers 100,000 down to 100 users. There's CloudLinux, and then looking at this, I'd actually heard of all of these except Anolis. I don't even know how to pronounce that; Anolis is my guess. That's a Chinese distribution, so I've looked at it. Is that the one that's backed by Alibaba? It was on the Alibaba website, but it might be, like, running on Amazon, you know, I don't know. Like I said, I'm not even sure it's Chinese; I think it's Chinese. Anyway, for that first hundred-some, that was the one that stood out to me.

These next ones are the 100 down to 5 users. My favorite one here is this one: TencentOS, or "ten cent OS". I'll note that anyone can just edit your /etc/os-release file and put any name you want into this data; it's not hard to come up with fun things, and you'll see that on the next slide with the smaller, single-digit stuff. Yep, they're so easy.

Oh yeah, there is Fedora. Wait, where did, oh, yep. To be clear, we do not recommend running EPEL on Fedora, because it doesn't make sense: the packages in EPEL should already be in Fedora. So it's a little strange. Now, there is a Conflicts in the epel-release package, but if people just configure the repo file directly, they can still hit the repos and get counted in these numbers. On another slide you had Amazon Linux. As I understand it, the last Amazon Linux that wasn't based on Fedora was Amazon Linux 2, which was mostly RHEL-7-ish, and people are adding EPEL 8 to that release even though it's not even close to targeting it, hoping things work, you know, for your Python stuff. Sometimes, anyway, it's fun.

And to show how really weird this gets: now, where did it go? Foobar, where's Foobar? Foobar Linux has one user, and I thought that was weird, because I'd heard of it, and the maintainer actually contacted me this last week. They rebuild EPEL; they rebuild all of EPEL, very similar to how ELN rebuilds Rawhide, and this one count is a mistake. They actually have a lot of users; they rebuild EPEL as each package gets built, and they just don't want to be counted. I don't know why; all I know is that's how it is. Which one? Oh, no, never heard of that one before. Oh, I remember that one from back in my Scientific Linux days; that's from China. I mean, there's an IBM Linux up there too, and who knows what that is. Like I said, anyone can edit your os-release file to say anything, and it ends up in here. Oh, there, I actually took a couple out; one of them, somebody had accidentally piped something into their os-release file, so it was all these weird strings.
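Since the distro names in these charts come straight from each machine's /etc/os-release, here is a simplified sketch of reading the fields involved. As I understand it, these are the values that end up in the countme reporting; real parsers, like the one used by DNF, also handle quoting and escaping rules that this sketch skips.

```python
# A simplified sketch of reading the /etc/os-release fields that distro
# identification is keyed on. Real parsers also handle quoting and escapes.
def read_os_release(path="/etc/os-release"):
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

release = read_os_release()
# e.g. NAME="CentOS Stream", VERSION_ID="9", or anything a user writes there.
print(release.get("NAME"), release.get("VERSION_ID"))
```

Which is exactly why "anyone can edit it" holds: change NAME in that file and your machine shows up in these charts under whatever label you invented.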
Anyway, I took a couple of them out. You don't want a fork bomb in your slide. This was just for fun; don't expect it every year, because it was a pain in the rear to make.

This next one is more serious: our maintainers. We are very grateful for our maintainers, those of you here, those of you online, and those of you watching later. Thank you very much. Over the years we've gotten more and more. It looks like the number is going down, but maintainers fluctuate; EPEL is a volunteer-based community, even most of us. Granted, Carl is currently getting paid for it, but he was a volunteer long before that. So we want to thank you all.

For EPEL 9 we did have a downward trend going. The trend is to have roughly ten packages per maintainer, so that no one person gets overwhelmed. Then Rust came in, and, thanks, Fabio, if you're watching this, EPEL 9 got down to 11 and then, all of a sudden, boom. And more sad face: I'm no longer number one, which makes me sad. Oh, it's not Fabio; no, he's like third. The other Facebook person, Michel. Michel is number one; I'm like number three or four. I'm sad, but not that sad; actually, I'm excited. I'm glad, yes; obviously, maintaining a ton of packages yourself is not great. But because I was number one for so long, I never wanted to show the top ten; now I can. So anyway, we are very grateful for our maintainers in EPEL, and on behalf of not just the EPEL committee but the users: thank you very much.

EPEL 7. I'm going to start with this one and then turn it over to you. EPEL 7 will go end of life June 30th, 2024; that's less than a year away. Red Hat recently, due to the large number of, as you saw on the second slide, RHEL 7 and EPEL 7 people, made this new extended-life offering. I don't remember the three-or-four-letter acronym. ELS: Extended Life Cycle Support, I believe. Okay, they made that separate thing, but the basic end of RHEL 7, the end of the Maintenance Support 2 phase, is happening on the 30th of June, 2024. As the EPEL committee we did discuss this, and EPEL 7 is going to end at that same time; we will not be trying to extend it. So on June 30th, 2024, EPEL 7 will go into the archive. It will no longer be maintained, with all the things that go along with an end-of-life EPEL release. Anything else for EPEL 7? I know I'm hogging the slides.

There's one point I want to bring up. As y'all saw in the other charts, EPEL 7 is still the most popular set of artifacts, but it's also getting the least attention from maintainers. A lot of maintainers tend to like the new shiny; they're focused on "hey, I can get all my new stuff added into EPEL 9." They like deploying it and getting all those new features. So just naturally, over time, EPEL 7 gets less and less attention, which is partially by design: similar to RHEL 7, it's in the last part of its lifecycle and it's slowing down, getting fewer and fewer changes, and only the most critical things get fixed now. But because it's still so popular, if you have EPEL 7 packages you depend on, consider getting involved and helping take care of them, because we have a lot of open bugs. We look at what open CVE bugs there still are, from time to time, when we can, but there are no guarantees.
It's all volunteer; we just kind of hope that people come along. We would like help, if you're interested. Yep, thank you very much. Let's go to eight. Sure. That's it; you do that, let me talk a little, huh?

So with EPEL 8 we did some interesting stuff. This was the first release where we had CentOS Stream 8, and things got shaken up a little bit. We knew how important EPEL is for the people who were using it, and we proposed a new thing called EPEL Next, because we were starting to see some library changes. If you saw Adam's talk earlier, he talked about the application compatibility guide. While most things in RHEL don't change very much at all, ABI-wise, there are some things that are a lot lower priority on that list and are allowed to change their library sonames and the like; think LLVM, Qt, and a few others. When those changes happen, they now happen in CentOS Stream about four to six months before they happen in RHEL. It's the change that's coming to RHEL, planned for and approved for RHEL, not some wild, unexpected thing; we see it happening and we know it's coming. But sometimes, occasionally, that can cause EPEL packages to not install correctly. We saw that as a problem with EPEL 8 and thought about how we could fix it, so we came up with EPEL Next, which allows maintainers to optionally build against CentOS Stream 8 to keep their packages compatible. It's not a whole duplication of EPEL; it's just a rebuild of the affected packages. The times I've measured it, it was usually less than one percent of packages that had trouble due to those changes, so it's select rebuilds in an extra repo. RHEL 8 users would use just EPEL 8; CentOS Stream 8 users would use EPEL 8 and EPEL 8 Next together (a sketch of that split follows below). And that's worked pretty well. It worked; it was good for a first attempt. But it's kind of a bolt-on solution to the problem, given the existing way we did EPEL. That starts a thread of what we're going to talk about on the EPEL 10 slide, so I'm going to back-burner it for a minute.

The other thing I'll point out is the end-of-life stuff. I know you're about to jump in with that; I saw you. Yeah, so talking about end of life: CentOS Stream 8 goes end of life at basically the same time as 7, basically June 1st of next year. With that, the EPEL 8 Next repo is going away. It really shouldn't affect any non-CentOS-Stream people, but we just want you to know that it goes away when CentOS Stream 8 goes away. At that point it won't be necessary anymore anyway, because that's when RHEL 8 enters its maintenance phase: a lot fewer things will be changing, there won't be any more minor releases, and there definitely shouldn't be any more library changes for the lower-priority, ACG level 4 packages I mentioned. Ready for nine?

So, EPEL 9. We had the RHEL 9 launch last year, in 2022, and at one point we thought that, for the first time ever, we launched EPEL 9 before the RHEL launch. That's not exactly correct: we found some old mail about how EPEL 7 actually launched with a beta period, so it made things a little confusing.
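Here is that sketch of the usage split, hedged. On CentOS Stream the release packages come from the distro's own repos; on RHEL you would instead install the epel-release RPM from the EPEL infrastructure, which this sketch glosses over. Treat it as an illustration of the split, not a verbatim copy of the EPEL docs.

```python
# A hedged sketch of the EPEL / EPEL Next split: RHEL 8 hosts enable just
# epel-release, while CentOS Stream 8 hosts pair it with epel-next-release.
# Run as root; on RHEL the epel-release RPM comes from EPEL itself rather
# than the distro repos, a detail omitted here.
import subprocess

def dnf_install(*packages):
    subprocess.run(["dnf", "install", "-y", *packages], check=True)

ON_CENTOS_STREAM = True  # flip to False on a RHEL host

if ON_CENTOS_STREAM:
    dnf_install("epel-release", "epel-next-release")
else:
    dnf_install("epel-release")
```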
Yeah, it was actually a post from Kevin there. So we had been describing it somewhat incorrectly, depending on how you viewed it. EPEL 7 left its beta period after the RHEL 7 launch. So it was sort of true, but there was a catch: there were packages in EPEL 7, just under the beta label, at the RHEL 7 launch.

We did some lines on this chart. At the time of the RHEL 9 launch, there were 2,617 source packages in EPEL 9, which was great, just having those available on launch day. For RHEL 7 it was a little bit behind that; I think you said it took about a month for it to catch up to that arbitrary cutoff. We made it to that point in July, and it was about a month after that that we removed the beta label. We didn't put that on the chart, because it was just going to make it messy, but it's an interesting little artifact of history. The big thing to take away is the growth pattern for EPEL 9: you can see it's much, much steeper than EPEL 8. Another thing I'll point out here is that EPEL 8 actually launched, or rather, RHEL 8 launched, with zero packages in EPEL 8. It got a late start. There are a lot of reasons for that, and if this weren't a 25-minute talk I would probably get into them; if you want to know more, come find me and ask, because it's a long story. But we obviously knew that was a problem. We actually got a lot of customer feedback along the lines of "I'm not upgrading from RHEL 7 until the packages I need from EPEL are there; I know they're not supported, but I still need them." That actually helped lead to the creation of the team I'm now the team lead of, the EPEL team, with CPE staffing that role.

I wonder, now that all those packages are in 8 and 9 and those users are still on 7: they're big enterprises, they take a long time to change things, so hopefully this is a much more attractive target. That way, when they're looking at their upgrades, they may just skip 8 entirely, go all the way to 9, and get current. A lot of them won't; they'll still go to 8 first. But we can't control that.

Going to the next slide: these are a few logos of notable packages we had in EPEL 9. This was at launch time; we were saying, look, we have all these at launch. And how many do we have now? A lot more. We basically ran out of space for logos and had to resize a few of them. There's a lot of cool software there, and it's growing a lot. If your favorite piece of software isn't up there, or didn't get highlighted (I couldn't fit everything on one slide), and you have something you want to see in EPEL, get involved and help get it added.

Oh, we're at EPEL 10. Here's the good stuff; this is what y'all came here to hear about. I mentioned the EPEL Next stuff on the EPEL 8 slide, and we also had EPEL Next for 9. It worked as a bolt-on solution to get the problem solved, but we noticed some problems with it. It was unintuitive for users, and a little bit unintuitive for maintainers as well. The branches weren't created automatically in dist-git; maintainers had to request them, and some maintainers would be confused about when it was appropriate to request the branch.
We also saw a pattern of maintainers building in both EPEL and EPEL Next just by default. They thought they needed both, even when the dependencies of the package involved weren't different between RHEL and CentOS Stream, so there was no need for it; it was unnecessary. So we thought about how we could make this more intuitive, and it's actually an idea that's been thrown around before; I found that out while I was digging up the EPEL 7 beta stuff and reading about it: having minor releases for EPEL branches. That's what we're thinking about doing for EPEL 10.

Yeah, so here's a little bit of the history. With EPEL 7, in its current state, it's built against RHEL 7.9, it has a dist tag in the Release field of .el7, and it goes in the repo path of epel/7. And for eight, you can see that, in its current state, EPEL 8 follows the same pattern, while EPEL 8 Next is built against CentOS Stream 8, gets a dist tag of .el8.next, and goes in a different repo path. Same thing for nine. (The whole scheme is laid out in the sketch after this passage.) And then, for ten, we thought about how this should work. We talk to Fedora maintainers a lot, because EPEL maintainers are Fedora maintainers; if you're a Fedora maintainer and you're not involved in EPEL, all you have to do is make the branch, and then you're an EPEL maintainer. Congratulations; super easy, barely an inconvenience.

So here's how it works in Fedora. Fedora doesn't have minor versions, but with each release we have the leading branch, Rawhide, which has the .fc39 dist tag, and that reflects what the content is going to be. And there's a very similar pattern with the minor versions in CentOS Stream: what you see in CentOS Stream 9 right now reflects the content you can expect in RHEL 9.3, which is going to be released in the fall. I noticed that similarity; it stuck in my head, and I kept thinking about it and wondering how we could build on it to make something more intuitive. When things branch from Rawhide to F38, or rather, in the future, whenever F39 branches, the f39 branches will be created, the Rawhide branch will get switched to the .fc40 dist tag, and they each get their own repo paths.

Go ahead and switch to the next one. This is what we're thinking about for EPEL 10. We'll have a leading branch, epel10, built against CentOS Stream 10, and it'll use a dist tag reflecting the content. At that point in time, which will basically be the RHEL 10 launch, CentOS Stream 10 will already be reflecting 10.1 content, so it'll get the corresponding dist tag. But, and we're still going back and forth on this a little, we're thinking about just putting it in a bare epel/10 repo path, because the minor version of the content it's reflecting isn't programmatically determinable inside CentOS Stream itself. So the easiest way is to treat it as not having a minor version and make the repo path work the same way, but we can still put it in the dist tag. For the migration, whenever we actually create a 10.0 repo, RHEL 10.0 will be out by that point in time, and we'll have epel10.0 branches with the corresponding dist tag and repo path.
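Here is the scheme from the slides written out as data, as I understood it. The epel10 rows reflect the proposal being discussed, not shipped behavior, and the exact dist-tag spellings and repo paths for 10 are my reading of the talk rather than settled facts.

```python
# The EPEL branching scheme as described in the talk, written out as a table.
# The epel10 rows are the proposal under discussion; dist tags and repo paths
# for 10 are my interpretation, not a published specification.
SCHEME = [
    # (branch,      built against,      dist tag,     repo path)
    ("epel7",      "RHEL 7.9",         ".el7",       "epel/7"),
    ("epel8",      "RHEL 8",           ".el8",       "epel/8"),
    ("epel8-next", "CentOS Stream 8",  ".el8.next",  "epel/next/8"),
    ("epel9",      "RHEL 9",           ".el9",       "epel/9"),
    ("epel9-next", "CentOS Stream 9",  ".el9.next",  "epel/next/9"),
    ("epel10",     "CentOS Stream 10", ".el10_1",    "epel/10"),    # leading branch
    ("epel10.0",   "RHEL 10.0",        ".el10_0",    "epel/10.0"),  # minor-release branch
]

for branch, target, dist_tag, path in SCHEME:
    print(f"{branch:<11} builds against {target:<16} dist tag {dist_tag:<10} in {path}")
```

The key design shift is visible in the last two rows: instead of a parallel "-next" variant bolted on beside each major version, the leading branch itself tracks CentOS Stream and minor-release branches peel off at each RHEL minor release.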
Oh, that's one thing I forgot to bring up that we did differently with EPEL 9; you don't need to change the slide for it. With EPEL 9, we actually launched it early, by building EPEL 9 itself, not EPEL 9 Next, against CentOS Stream 9, early, for about six months. That's what let us get all those packages in there early, and it was a huge boon to the project and to getting packages out. We thought about how we could repeat that success: we did it for 9.0, but we're not doing it for any other minor release of 9. This brings the same concept to every minor release.

And then there's the other part of EPEL Next that's painful, especially for Troy with all his KDE packages. Whenever he does have to rebuild against, say, a new Qt library, he has to do it in EPEL Next, do all of those updates, and then a few months later do the same exact thing in EPEL all over again. There's no way to inherit the builds, because of the way we designed it as a bolt-on thing. With this, we explicitly want to be able to inherit the builds, so that things you build against CentOS Stream will just populate the next repo path a few months down the road, bringing the same benefit we had with the launch of EPEL 9 to every minor version. So, for example, for the KDE users: when RHEL 10.2 comes out, you will get your KDE packages that day, as soon as you update to the release, whether it's a 9.2 or a 10.2, because they were already built in the CentOS Stream that corresponded to 10.2. And I don't have to do anything that week, and you don't have to wait around with things broken. I feel bad, but I can only build them so fast.

So anyway, go ahead. No, somebody's calling me, that's okay. Next slide. We're still talking through all the finer points of this EPEL 10 proposal, and if you want to join that discussion, the short URL red.ht/epel10 will take you to the Fedora Discourse page discussing it, and there are a few breakout threads from there talking about the finer details of how we're going to implement it. That's all in flight and in progress, and if it's interesting to you, come help us build it. This is all the plan; well, it's not just the plan, it's approved by the EPEL Steering Committee. It's the way we're doing it; we just haven't done it yet, and there are some finer details still to work out.

Yep. I just saw the time, so we're going to go to the next one. There is an EPEL survey. We're trying to do annual EPEL surveys; we didn't quite get it out in time for this one, but it's coming soon. Watch epel-announce and epel-devel; we'll post about it there, probably in the next week, I would say, and we'll keep it open throughout the month of August, I think. And it'll be less complicated than last year's. Yes, we trimmed down the number of questions significantly and reworded them; I like this year's.

Question and answer time. In the past, I maintained some packages for Fedora, and back then there was an EPEL branch and you could just push your changes. Right now I can't find it, and I've been trying, I've been looking for information on how to put these packages into EPEL, and I just, I might be bad at...

No, it's okay, it's okay. It is actually one of our most common questions. Basically, you have to branch like you do for the other releases; they are not currently branched automatically. The command is fedpkg: when you're in the checked-out repo, you can do fedpkg request-branch epel8 or epel9, whichever you need.
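Here is a hedged sketch of that flow end to end, with a placeholder package name. The request-branch step files a request that a human approves, so there is a wait in the middle; the rest is the ordinary dist-git routine.

```python
# A sketch of the EPEL branch-request flow just described; "some-package" is
# a placeholder. fedpkg request-branch files the request, and once it is
# approved and the branch exists, you switch to it and build.
import subprocess

PACKAGE = "some-package"  # placeholder name

def run(cmd, cwd=None):
    subprocess.run(cmd, check=True, cwd=cwd)

run(["fedpkg", "clone", PACKAGE])                        # check out dist-git
run(["fedpkg", "request-branch", "epel9"], cwd=PACKAGE)  # file the request
# ...wait for the branch to be created, then:
run(["fedpkg", "switch-branch", "epel9"], cwd=PACKAGE)
run(["fedpkg", "build"], cwd=PACKAGE)                    # build for EPEL 9 in Koji
```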
Yep. Actually, oh, maybe we do have it up; there it is. If you go to that URL, it has instructions for Fedora packages, non-Fedora packages, and just end users. And instead of typing the whole URL, you can go to docs.fedoraproject.org, where there's a rectangle that says EPEL; click on that, and in the sidebar there's a package request link. That'll be a little easier to navigate to than typing the whole thing. But yeah, that's our most common question, so don't think it's a silly question.

He mentioned that we don't do it automatically. That's mainly because we don't always know that it's appropriate for an EPEL package to go into the next branch of EPEL. Sometimes it's because that package got added to RHEL, and by EPEL's rules it can't be in EPEL if it's in RHEL. Other times the software might just no longer be maintained upstream, so it would be a bad idea to add it to another EPEL branch that's going to hypothetically exist for another ten years. So we leave that up to maintainers. We are thinking about a few things with EPEL 10, obviously changing the branching style to have minor versions, but there's also a thing called ELN Extras. If you're familiar with how ELN is sort of like future RHEL, ELN Extras is sort of like future EPEL. ELN itself is built specifically from the content set that's going into RHEL 10 right now, and then whatever future major version comes after. With ELN Extras, anyone can create their own workload, that's the term, and that's just the packages you care about, to make sure they keep building correctly against ELN. In theory it will help reduce the work and make sure things work correctly when you build them for EPEL 10 in the future. We're talking about ways we could actually look at those workloads and automatically create epel10 branches for the people who have already expressed the interest: "I want to have this working on future RHEL." We've also talked about just creating branches and then, if people don't need them, whatever, letting them retire them when they want, or just letting them exist and be ignored. I think that would be good; not everyone agrees with me, so I don't think we're going to go that route. I don't think having branches around is that big a deal, but whatever; I don't make decisions unilaterally.

Any other questions? We're a little low on time, but you've got one more. Oh, it's the end of the day; yeah, you're not going to get anything after this. Okay, good.

This might be a slightly tangential question. Go for it, we have an hour. Oh, we have an hour? Cool, I can go for an hour. What was the concept behind the EPEL logo?

Oh, I've got to rewind my brain back. Okay, Mo, if Mo Duffy is here, she helped; we were spitballing all sorts of pictures, and she came up with sort of this one. Oh wait, where's, oh, there's our logo, sorry. Do you want me to explain it? The red part is RHEL, the blue handle is Fedora, and the middle is like a ratchet, a socket wrench. We figured we'd make the handle the same blue as Fedora, because EPEL is in Fedora. That's the closest thing we did, but everybody liked it, so that's what it became. Yeah, the centaur, that one never worked out; that was sort of cool, though. I think Mo put out, I could show you the old one and try to explain what it was, but everybody that looked at it had a different definition of what it was.
Any other questions? Yeah? Okay, well, oh, we do have one. Oh, okay. Well, thank you all again, and thank you to our contributors; we really appreciate it. It's a lot of volunteer work: a lot of the people are Red Hatters volunteering, and a lot of people are volunteers from other companies. So thank you very much, and we'll talk to you next year.