Hello, we are starting. Hi, my name is Luca Bruno. I used to work at CoreOS; now I'm working at Red Hat. What we are going to talk about is Fedora CoreOS, which is kind of like putting all our ideas, all our minds, all our developments, projects and visions (and I'm not pitching you anything, but it really is a vision) into a Fedora-based operating system. That basically means taking ideas from existing OSes and existing projects and trying to get the best out of them. (You're spoiling the presentation.) And this is Jonathan, please.

I'm Jonathan. I also work at Red Hat, on Fedora CoreOS and Red Hat CoreOS. Let's dig in, we've got a lot of stuff to cover.

If you have zero context on what Fedora CoreOS is, a simple definition is: it's a minimal, opinionated, container-focused, automatically updating operating system designed for the cluster, but great as a standalone node as well. There's a lot going on in that sentence, so hopefully by the end of the presentation you'll be able to make sense of what each of those parts means.

Your first reaction might be: wait a minute, this sounds a lot like some other distributions I've heard of in the past. And you'd be right. There were at least two: Atomic Host and Container Linux, and on the Atomic Host side really Fedora Atomic Host, RHEL Atomic Host and CentOS Atomic Host. They both had the same goals, but there were still quite a lot of differences in how they went about fulfilling those goals. And really, as Luca said at the beginning (which is why I said he was spoiling the presentation), Fedora CoreOS is about taking the best of both worlds and coming up with something best-in-class.

Some more context, if you've really been living under a rock: about a year ago, almost to the day, Red Hat acquired CoreOS, and that really changed the future for those two OSes. That's what's giving us this opportunity, one year later, to tell you how we're combining the ideas from these two operating systems.

Okay, so what is Fedora CoreOS? Maybe we can start with what's on the host itself. It really only has the minimum to boot the system, run containers and update the system; its goal in life is to run containers. There are no developer tools, and it's not meant to be something you hack on actively. We're hoping to not have any cloud agents on it; maybe cloud agents in a container, but not directly on the host. Hopefully no Python at all. That might not work out, but we're trying our best.

One consequence of that is that there's no atomic command, since atomic was written in Python. That's a really telling point, because on Fedora Atomic Host and RHEL Atomic Host the atomic command was sort of the entry point to the OS: that's how you would manage your nodes, you would do atomic host upgrade and so on. The idea here is that you're not going to do that anymore, because the host will manage itself. You shouldn't have much reason to even log into the system, because we take that load off of you. I'll talk about that a little more later.

So Fedora CoreOS is the upstream of Red Hat CoreOS.
For those of you who were in this room for the previous talk: we're trying to set up the patterns that get inherited by Red Hat CoreOS. There are a lot of things that are similar: the way the OS is built is very similar, same tooling, a lot of the same technologies like Ignition, and in general similar design decisions. But the major difference is that Red Hat CoreOS's sole purpose in life is to be a platform for OpenShift, whereas Fedora CoreOS has a wider view of what it's meant for. You could run it as a standalone thing without OpenShift or Kubernetes, or you could use it for OpenShift or Kubernetes; we're still discussing what's going on there. Actually, one of our secondary use cases for Fedora CoreOS is to be a platform for other container orchestrators, so it doesn't even have to be OpenShift or Kubernetes.

Okay, platforms: where is this thing going to run? All the typical popular clouds (AWS, Azure, OpenStack, etc.), plus VMware, VirtualBox, QEMU and of course bare metal. I don't think there are any surprises there; this is mostly what Container Linux supports today. Architectures: we're going to start with x86_64, and then we're hoping to add aarch64 and ppc64le.

And provisioning. So far this was kind of standard distribution stuff; this part, the next few slides, covers things that we are bringing over, that we are carrying from Container Linux. We do a bunch of things in Container Linux; a few of them we're happy with and want to bring into Fedora CoreOS, some others not, but I'm not going to touch on those.

The first interesting one that we are carrying over is Ignition. I don't want to spoil my own talk later, where I'll go into the details, but the idea is that we need something which is like cloud-init, except we don't actually need cloud-init: we need a smaller subset of it. So we have this component called Ignition, which configures a node when it is booting up, in the cloud or on bare metal, by running some provisioning logic in the initramfs. It is strictly a subset of cloud-init because it only runs on first boot: it only takes care of what we call first-boot provisioning, not the full lifecycle of configuration management on the node. By trimming it down to this subset we get some nice properties and can offer users something like atomic provisioning: either the provisioning succeeds and the node boots, or it fails. That makes sense for first-boot provisioning, but not for a configuration manager.

The last point on this: it is a machine interface. The input to this subsystem is JSON. I've been through operations, you have been through operations, and you know that writing JSON by hand is terrible: it doesn't have comments and it has a lot of gotchas. That's not what you want if you are writing something manually; you want a user-friendly interface, and that user-friendly interface is YAML. So there is a gap between the user writing YAML and the machine taking JSON, and that's where we actually build tools.
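To make that gap concrete, here is a minimal sketch of the flow, assuming the Container Linux Config Transpiler (ct) that Container Linux ships today and a throwaway file name; the exact tooling and schema for Fedora CoreOS are still being settled:

    # example.yaml: the human-friendly config (SSH keys, groups, and so on)
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-rsa AAAA... user@example.com"

    # convert it into the machine-oriented Ignition JSON
    ct < example.yaml > example.ign

The resulting example.ign is the JSON document that Ignition actually consumes on first boot.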
So we have additional tooling which converts between these two worlds, the higher-level user interface and the lower-level machine interface; Ignition only takes care of the JSON part. The actual user flow looks like this: on one side you provide us something that you can humanly manage, a YAML file; in the middle there is some process that collapses a lot of logic; and at the end of the day you get out the Ignition configuration, the JSON file, which is much easier to handle and process on the machine side. That is what is actually used on the provisioning side. I insist on this because people are often confused between YAML, JSON, Ignition, cloud-init, whatever: Ignition is only concerned with the JSON, but you as a user are touching a lot of other things, most likely a higher-level interface.

The next thing that we are carrying over from Container Linux is the way you install, the way you deploy, this operating system. In a traditional infrastructure you have some kind of installation step: you start from an ISO or from some kind of image, and with that image, which is not the final OS, you install something onto the disk, you customize it, and that's your installed OS. There are a few steps in the middle, a few artifacts, and a bit of mixed logic and data along the way. In the Fedora and Red Hat world (something I'm still learning) this is Anaconda, Kickstart, and a few other RPM-based and user-configuration steps.

In Container Linux, and this is what we want to do for Fedora CoreOS as well, it's much closer to the way provisioning works in the cloud. You have an image, your virtual machine image, which is kind of a golden image and is already the operating system itself, and when it is booted for the first time you provision it in the specific way you want that node to operate. But the base image is the same everywhere.
In order to do that, we are trying to drift a bit away from Anaconda and Kickstart and to use instead what we already have: Ignition and these qcow images. So what we're going to provide for Fedora CoreOS are disk images that you directly dd, or that you already have in your cloud as an image (an AMI or whatever it is called in your cloud environment), and those images are provisioned via Ignition on first boot. Whether you are on bare metal or on whatever cloud provider, the booting step for a new node is always the same: you get an image and you provision it on first boot.

That's kind of easy in a cloud environment, because you have cloud APIs to manage these kinds of images. On bare metal it's usually a bit harder, and there are two strategies; we are going to provide both of them because they cover different use cases. At the end of the day there will be artifacts like a bootable ISO that you can boot as a live OS and use to kickstart the installation process, and at the same time we're going to provide artifacts so that if you are directly PXE-booting your node, directly into the OS without any installation step in the middle, you can still do that. There is a big "eventually" here, because for all the things I'm mentioning we know exactly what we want to do (we are merging technologies and projects), but only a few of these already exist; a few others are just a matter of doing the work; and some other things, which we'll touch on at the end, are ones where we know we have a problem and we are still discussing the solution.

Which brings us to the next point: if we provide you an image instead of an installer, it means we already mandate some kind of partitioning for you. That's both true and false. It is true because we have to pick some defaults. In particular, we are picking for you how the partition layout looks, how many partitions you have in the system and how they are allocated, and we are also picking the default filesystem. This again is not very controversial and is a bit of a merge between the two worlds: we are splitting the main root filesystem, /, and the place where you store your data, /var, into two different partitions, and we are defaulting to XFS, which is modern and good for containerized workloads.

That is our default. You may want a different, customized partitioning or volume-formatting approach, and in that case, back to the original discussion, everything is done on first boot: you customize the Ignition config. Ignition knows how to repartition a disk and how to format it, even if it is the root filesystem or the /var filesystem. The catch is that Ignition came from a different world, from the Container Linux approach, and we are still trying to make it a bit more of a generic upstream project rather than something tightly coupled to Container Linux.
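As an illustration of that kind of first-boot storage customization, here is a sketch in the Container Linux Config style; the device name and labels are made up, and the Fedora CoreOS schema may well end up looking different:

    storage:
      disks:
        - device: /dev/vdb        # hypothetical secondary disk
          wipe_table: true
          partitions:
            - label: data
      filesystems:
        - name: data
          mount:
            device: /dev/disk/by-partlabel/data
            format: xfs           # same filesystem we default to for / and /var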
Some of these features don't work right now, but the idea is that we already know what we want, we already have the knobs to configure this, and we just need to plug it all together into the pipeline.

A couple more slides. Another thing, which is way more controversial, is container runtimes. I could speak about them for hours; at some point everybody has their own preferred one. If you're a developer you actually want to write your own, because it's cool, you learn a lot of stuff, and then you try to convince other people to ship it in their distribution. If you are on the distribution side, instead, what you're trying to do is say no to people and keep the set the same.

The set that we're going to ship is this. CRI-O, because, to be honest, nowadays most containerized workloads run on top of Kubernetes, and Kubernetes has this interface to container runtimes called the CRI; CRI-O is basically the component you want if you want that. At the same time there are workloads and use cases that either are not on Kubernetes or have partially different semantics: say I want to run the Kubernetes components themselves in some container runtime, but without configuring the whole Kubernetes environment; how do I do that? There are specific runtimes for that, and Podman, from my point of view, fits into that category. So they kind of complement each other.

The last item in this set is Moby, what used to be called Docker. Again, the reality is that there are a lot of people coming to the container world because they know Docker. At some point in the future maybe they will also know how to handle their whole infrastructure and cluster orchestration with Kubernetes, but they are probably not there yet, or they have use cases where it doesn't fit. So we're also going to ship Moby for those use cases, in a slightly different way than it used to be shipped in Fedora (I'm not very familiar with this technology, again coming from another world). The idea is that Moby is there if you need it: it is going to be socket-activated, the usual systemd local socket activation. It's not going to do a lot of magic, it's not going to pre-prepare stuff, it's not going to have defaults for reformatting or repartitioning or whatever; so no, not the docker-storage-setup that I heard a lot of people love. It's basically as plain a container runtime as possible, so that you can customize your infrastructure as you want.
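As a rough sketch of what that socket activation means in practice, assuming the moby-engine package keeps shipping the usual docker.service and docker.socket systemd units:

    # the daemon is not running by default; enable the socket instead
    sudo systemctl enable --now docker.socket

    # the first client call activates the daemon on demand
    sudo docker version

    # nothing else is pre-configured: no docker-storage-setup, no default
    # repartitioning; storage and networking layout stay in your hands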
And this is the last part that we are carrying over from Container Linux: networking and firewalling. The idea here as well is that we, as distribution developers, try to get out of your way as much as possible, and we push for a world where we only take care of the first-boot provisioning: whatever you need to bootstrap this node and bring it up to its initial configuration. After that, any kind of configuration change, which also means network configuration changes or firewall configuration changes, is done via some other specific component, possibly a containerized one.

For the firewall, for example: yes, we support a firewall. We have iptables and nftables (there is still that split in the Linux world, and we support both), but the idea is that we support a static configuration that you set up on first boot and never change. If you need to change something after that, then you need a containerized component that owns those mutable rules for the lifetime of the node.

Same idea for the network part. Here it is a bit more controversial, let's say, because even in the Fedora and Red Hat world itself there are two main projects: NetworkManager, which is pretty much everywhere, and systemd-networkd, which is not as pervasive as NetworkManager but actually fits pretty well into what I'm describing here. Given that we are pushing for something new, it's a good time to have a plain discussion about where we want to go. networkd is there, and it was working pretty well in Container Linux; at the same time, NetworkManager could use a bit of a push and a direction for what we would like to have. So we are actively working with the NetworkManager developers to get feature parity, or at least to head in the same direction, so that we can use it the same way we were using networkd in Container Linux.

That's pretty much it for my part.

So, the next big item: the update system. Just to clear one thing up right away: we're still using RPMs, we're not using Gentoo. We're using RPMs from Fedora, and of course we'll still be using rpm-ostree, just like in Atomic Host. This is a departure from Container Linux, which was using update_engine, which comes from Chrome OS. rpm-ostree (I'm one of the maintainers) is well maintained and we can easily modify it as we need for Fedora CoreOS.

The big item here, of course, is fully automated updates. This is really the most important difference between Fedora Atomic Host and Fedora CoreOS, and it's really what set Container Linux apart from Fedora Atomic Host before, because when you have automatic updates you don't have to care about your nodes; the relationship between you and us is completely different. What we were doing with Atomic Host was telling people: okay, we send out these updates every two weeks, but you have to log in and run rpm-ostree upgrade. Of course you could script that, but automated updates weren't part of the model, whereas with Container Linux it was automatic updates from day one, even for single nodes.

This has huge ramifications, because if you have automatic updates you basically cannot break the OS; you cannot break backwards compatibility, ever. And that's a tall order. With automatic updates you also have to have automatic rollbacks, which is something we never quite got to in Atomic Host. Container Linux, if I understand correctly, had an automatic rollback, but it didn't have all the niceties; user-defined health checks, for example, weren't there, and that's something we're looking to add.
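For context, this is roughly what a rollback looks like with rpm-ostree today when you do it by hand; the goal in Fedora CoreOS is to drive this automatically:

    # show the booted deployment and the previous one still kept on disk
    rpm-ostree status

    # switch back to the previous deployment and reboot into it
    sudo rpm-ostree rollback --reboot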
So, for example, you could say: reboot into the new update, but if I can't reach this server, or if I can't see this device, then something is wrong and I need you to roll back. We'll have that kind of functionality.

For the cluster case, and again this is also something from Container Linux, we want to make sure that the nodes are cluster-aware, meaning that when you do an upgrade you're not taking down all the nodes at the same time, but you have a controlled rollout throughout your cluster, so you don't have any downtime.

On the release engineering side, another really cool thing from Container Linux that we want to adopt for Fedora CoreOS is the concept of controlled, or rate-limited, rollouts. What that means is that when we have a new update staged and ready to go, instead of making it available to everyone at the same time we let it trickle out little by little. That gives us time to notice if there's a higher failure rate than usual in the nodes that are updating, or gives users time to report that an update is completely broken, so we can stop the rollout and look at what's going on.

Okay, so: streams. Now we're going outside of the OS itself and talking about the environment in which the OS will run. With Fedora CoreOS we want to have something similar to the Container Linux model, and we sort of had that for Atomic Host too. We're going to have three production refs. Refs are like branches: with OSTree being git for your OS, a ref is a different git branch. You have the testing ref, which is basically whatever is in the Fedora repo plus what is in the updates repo; we let that bake for two weeks, and if it goes well we promote it to stable. Then we also have the next ref. next is basically: if Rawhide hasn't branched yet (for example right now we're on Fedora 29 and Fedora 30 hasn't branched), then next is just tracking the same thing as testing; once Fedora 30 branches, next tracks Fedora 30. So next is either the next release of Fedora or the same as the testing ref.

We will be recommending that people run most of their nodes on stable, but some nodes on testing and some on next, just so they can get a heads-up if something is going to break their systems and we can catch it before it goes all the way to stable.

As with Container Linux, and actually Atomic Host as well, we're looking at two-week releases. It's not contractual; we might change it.
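To make the refs a bit more concrete: on an OSTree-based system, pointing a node at a different ref is a rebase. A sketch, assuming a configured fedora-coreos OSTree remote and illustrative ref names (the final naming was still under discussion at the time):

    # move this node from the stable ref to the testing ref
    sudo rpm-ostree rebase fedora-coreos:fedora/x86_64/coreos/testing
    sudo systemctl reboot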
On top of that cadence there will be out-of-cycle patches for security updates, or if there's a really terrible bug fix that needs to get in. Of course we'll also have a bunch of development refs that are not necessarily meant for consumers, but more for ourselves and for other Fedora CoreOS developers: one for tracking Rawhide, a bodhi-updates ref that will basically be a nightly snapshot of testing, and a bodhi-updates-testing ref, which is that plus the updates-testing repo.

I should probably have put this point higher up, but a really important point here is that the refs are unversioned. When you look at Fedora Atomic Host, and Red Hat CoreOS as well, the refs had the version of the operating system in them: for Fedora 29 you have fedora/29/x86_64 and so on. What that means is that when you get to Fedora 30 you have to do a rebase from the Fedora 29 ref to the Fedora 30 ref. So it breaks that illusion (I guess we never really tried to make it an illusion) of a single stream. Whereas with Fedora CoreOS these refs are unversioned, meaning that stable, which right now is tracking Fedora 29, will at some point after Fedora 30 releases jump from 29 to 30 within that same single stream of updates. That is a big difference in the model of how we approach updates, because before, rebasing from one major version to the next was an explicit step that you would take, whereas now it's something where we have to make sure nothing is going to break, and really, really test it. I think I covered everything there.

Schedule: we're targeting Fedora 30 for the initial release. We still have a lot of items left to do, and some things are likely not going to make it into the initial release, for example multi-arch: we'll probably start with just x86_64, and right now we're trying to at least get it to build on ppc64le. Automatic updates, at least, are something we really want to have from the get-go, because that's part of the value add.

I'll just show you a couple of links here, probably the three most important repos right now. fedora-coreos-config is where we're hosting the actual definition of the OS. For example, if you go into fedora-coreos-base: if you're familiar with Atomic Host, you know this is the exact same thing that you would feed to rpm-ostree compose, the only difference being that it's in YAML instead of JSON, which is really nice. We've got the repos defined here, but the interesting bit I want to show you is at the bottom: this is where we're actually defining all the packages that we're shipping in Fedora CoreOS. For example, as Luca was saying, we have nftables here. And this is likely to change.
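To give a feel for those definition files, here is an illustrative snippet in the treefile style that rpm-ostree compose consumes; the real manifests in fedora-coreos-config are more involved and the package set is still in flux:

    # manifest.yaml (illustrative, not the actual file)
    ref: fedora/x86_64/coreos/testing
    include: fedora-coreos-base.yaml
    packages:
      - nftables
      - podman
      - moby-engine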
So for example, like Luca was talking we have enough tables here And this is likely to change I'll show you a demo of but another repo probably worth showing is So another massive difference between Atomic hosts and for our core West and read our chorus is the way that the OS is built So there's a whole other talk about this after so I won't go too much into details I think the talk is right after mine actually, but We have this tool called core rest assembler Chorus assembler makes it super easy for you to build your own federal core OS locally and test it So this this really we're hoping this will You know make making it much easier for you to contribute to the project and see exactly what effect it's having so You know you can use it's just a bunch of scripts, I mean it's in bash right now But it's working out pretty well so far But the idea is that repo I just show you the federal chorus config you literally just feed it To core as a standard you do course under in it and then the repo and I used to do a fetch and then you fetch will basically Pull all those RPMs that I showed you in that list and you do a course center build and it'll build the OS for you before you and then it'll build Q-cow images for you and then you can do Core assemble run and it'll even run it run that latest artifact You just build in QMU and you can test it out. So We took that course assembler. We put it in a Jenkins pipeline. So this is the current repo holding the definition files for federal core OS So I won't go through the Jenkins fast too much, but it's basically Doing the exact same thing. I just showed you right with Container assembler in it fetch and then building it and then we have the output going this is running in sent off CI in the OpenShift instance and then we have the artifacts So we have the artifacts here Okay, so we have the artifacts here and so you can test it out. I mean, this is still really early stuff We're still defining a lot of Critical parts of the OS, but if you want to give it a spin, you can and I can do that right now I'm sorry. Just one second. There we go. Okay So here I have the Q-cow 2 that I just that's from that pipeline. I just showed you So this is the CT config that Luca was talking about. This is a YAML file that sort of replaces cloud in it So there you can write, you know, the SSH keys you want to Give access to and the groups, etc. And you can feed that to CT is what will convert it to JSON basically, but it doesn't just convert from YAML to JSON It does a bunch of other things and now we get our Ignition config, right? And then we can feed that to Fira core OS so you can use QMU directly, but I use vert install The really key part here is that dash dash QMU command line This is basically like an escape hash for libvert for whatever command you want to pass to QMU directly that libvert doesn't understand or doesn't wrap And then here I'm passing it the JSON file that we just created So now we've got for our core OS up and running Okay, and now oh, I should have showed you the yeah So as it says all aspects subject to change highly experimental. One thing I want to show you is Well, two things I guess No doctor, but there is Moby engine That was a trick In OS release, so, you know, we identify ourselves as Fedora So ID is Fedora, but if you look at the if you look at the variant ID at the bottom were core OS and You know in continue Linux the ID was Continue Linux, right? Yeah, so we didn't want to Make Ernoi was core OS. 
A couple of things I want to show you now that it's booted. First: there's no docker command, but there is moby-engine; that was a trick. And in os-release we identify ourselves as Fedora, so ID is fedora, but if you look at the variant ID at the bottom, we're coreos. In Container Linux the ID was coreos, and we didn't want to make the ID coreos here, to make clear that we're much closer to Fedora than we are to Gentoo.

I guess that's it; I think we're ready to take questions. Let me just quickly go back to the slides. I'll go through this quickly: a lot of things are still being discussed. Kubernetes versus OKD is a hot topic; it's basically going to depend on what's easier to maintain in Fedora. Collecting metrics: in Container Linux this was tied to the update system; for Fedora CoreOS we're trying to see if we can decouple it from the update system. Host extensions are a huge topic: whether we should recommend package layering for some options, and how to handle out-of-tree kernel modules. Container services, like torcx on Container Linux or system containers on Atomic Host, are things that we're trying to move away from. There are a lot more discussions going on at the fedora-coreos-tracker repo.

You can get involved through the #fedora-coreos Freenode channel and the CoreOS mailing list, we have weekly community meetings on Freenode, and you can join the discussions on the GitHub tracker repo. There is a lot to do, so it's a great opportunity if you want to get involved in open source.

All right, are there any questions?

Can you repeat the question? The question was: what are some of the challenges we are hitting with multi-arch right now? The way we're building the OS is with this coreos-assembler thing, and we're using coreos-assembler as a container, and a lot of the things that go into that container aren't currently being built for other architectures. Sinny, I think you were doing a lot of work on that for ppc64le. And we just have a lot of assumptions in our code base: as we were bootstrapping and getting started we were only really testing this on x86_64, and now we're going back and backtracking, okay, we've got to generalize this and generalize that. I don't have anything more specific for you right now, but you can definitely go to the coreos-assembler repo; there are at least two separate issues about exactly that.

Do you want to take it? No? Yeah, the goal is that we ship one image. It's a little nuanced, but we're shipping one image that's going to be used everywhere; that's why we don't have any cloud agents, and why we're shipping all the container runtimes that you would want. There's a little trick there in the way Ignition works: it needs to know what platform it's running on, because for Ignition to know where to get its metadata from, it has to know the platform. So right now we're shipping slightly different variations of the same golden image. But to your question: it's basically the same image everywhere. We're only going to have qcow2s, and then depending on the platform we'll have variations of that, but the disk image itself will be the same. Fedora CoreOS knows what platform it's running on, so it knows how to behave; for example on Azure you have to go through this check-in process to tell Azure: I'm healthy.
No, don't kill me so We have we have code in for our core OS that can see okay We're running Azure tell the hypervisor or whatever that or the metadata server that we're okay We booted successfully the short answer is each image for each specific platform as his own platform ID That you can introspect around time so that you can conditionalize execution of something based on that So the since the service itself can detect whether it's running or it should be enabled or not and run or not So the question yeah the question was what about this dilemma okd or kubernetes the issue right now is that neither kubernetes or okd are very Actively maintained in fedora and of course where for distribution we got a derived from fedora. So it's mostly a question of Can we get maintainers for these packages and You know actually start if we if we can get it get maintainers and get people to Or have the maintainers receptive to bug reports from users of the report was then it'll make it much more appealing to ship There was one year in front This is I would say that is completely out of scope for this So we are as in container Linux It was like a base OS for your infrastructure and then you can run whatever you wanted of that as long as it is a container It's pretty much the same scope. So we don't provide a container marketplace It could be part of Fedora whatever other project that is not that say I will mention We're We're working on this other This one container called thorough toolbox. I mean, it's not a marketplace doesn't ask you a question in any way But we have we have it's just related because we do have some containers that are like Sort of made to run on Fedora corvus Like we're actively making sure that it works well with the corporate or corvus workflow So toolbox is one of those examples to allow you to sort of Easily debug on the platform Oh, of course you mean QEMU Is not the main goal I wouldn't say we don't but we didn't even say is the primary goal of that Yeah, it's kind of like it depends as usual. I think right now. It's not So the question The question was we plan to put like QEMU Yes, yeah, yeah, good point. Yeah So I actually have this script that does That I used to create VMs really quickly, but I didn't want to use my scripts for this demo to not obfuscate what was happening So I was just Yeah, got you Mm-hmm So I can take this because I found the pain So before tradition in container Linux Sorry the person was like what's do we actually have to ship our pms for open shift? Okay, do you whatever given if most of the stuff It's like containers The answer is before in containers. We were not shipping any of these like any Kubernetes Components at all we're using like Quasi container rise cubelet, which is kind of like not how it's meant to be run It was giving us problems So we are kind of like exploring a new path here where you're building an OS and we didn't have the cubelet before So we have to put the cubelet now into the US and it's must be coming from San Fedora or PM So that's the that's the short answer. Everything else like the control plane of Kubernetes or any other operator Extension or whatever. 
Yes, it will be run as a container But there must be something on the host itself and that's some that's something is the cubit and the cubit Right now should be in the host if somebody fix it upstream like cubit container eyes, but I don't think it's happening right now So so the follow-up comment was that even the artifact produced by open shift upstream They're mostly focused on some specific platform and use case and doesn't cover the whole Fedora ecosystem architecture and what else I Yeah, that's that's actually one. Oh, sorry the comment was if we're making a single stream where you seem to see go from for a 29 for 30 with backwards incompatibilities and The answer is we're gonna try our best to Sort of catch that before it happens and work with the maintainers to not break compatibility Yeah, that's that's gonna challenge you right because again before the atomic host You're the one rebasing so it's sort of your fault if you break the system Whereas automatic updates is our fault if we break the system Yeah Would it not be fairly hard Yeah, I mean I'm just saying that it depends on where the incompatibilities coming from if it's something with the way the package is Packaged then yeah, we're looking there, but if it's something with upstream itself I mean, we'll just have to look at it on a case-by-case basis It's more follow-ups We have been doing container in its releases for four years a little bit more With like auto-adapting to the latest kernel system the docker or whatever So we know that it's a pain. We also know that it's what we want to do And the goal is kind of like working with the whole ecosystem Whenever you have to introduce some break and change There is some way for us to out update into the new version and in general to try to convince you not to do that Unless you have a case for it So it's kind of that it's a global ecosystem problem. We should all work together But we have been doing that in the past it works pretty well, but it was like a smaller world now It's like it's a bigger challenge. That's it. Thank you very much