Can you hear me, Sean? I can, yeah. Guess we're the only ones here so far. There we go, you can hear me now? I can hear you. Excellent. There's Andre. It's a Talos-only crowd, our group. I hope we got the right day. They said September 3rd, first Thursday of each month, 8 a.m. US Pacific, which is 11 a.m. here, one minute from now. Doesn't mean that anyone else will come to this meeting. I suppose they could have heard who was going to be talking and said, oh no, I'm not showing up. Or everyone might just be very punctual. We'll do our thing in the echo chamber if we have to, man. Well, somebody's recording it anyway. Oh, that's a great point. There's no hotkey for mute in the web version of Zoom; the web version of Zoom is severely limited. No gallery view either. Hello, Marga. Hi, can you hear me? Yes, good. I had never signed up for Zoom before; I had always been using it as an anonymous user, so I just signed up. Yeah, same here; this one is only for signed-up people. Gotcha. I got worried, but eventually I just tried to sign up and it was fine. Is this your first time here, Marga? I don't know if it's usual to wait, or whether we got the right day and time. Yeah, people are showing up. I have no idea who the host is. We don't know either. Okay. But we're being recorded, so someone did something. Okay, so while we wait I'm going to try to share my screen, just to check that it works, because I use Zoom inside a container and I don't know how it behaves. Okay. I see a black screen. Black screen? Yeah. Ah, there we go. Okay, the aspect ratio is a little off, it's a little tall for what it is, but it's readable. That may just be me. Yeah, I think you're good, it looks good. Okay, I can stop sharing until we get whoever is supposed to be the host here. I might as well do the same thing here myself.
Hey guys, I'm the host. Is that displaying properly, or is it still off center? No, it seems fine. Okay. This is Ricardo, can you hear me? Yes, hello. Okay, welcome everyone, glad to have you here. We have about two items on the agenda today, so I'm going to put the meeting notes in the chat so you can all add yourselves as attendees. There you go. We have two items today: one is Talos, and the other is Flatcar. Two projects that are in a similar space, or are kind of similar, and we're glad to have you here to talk about how different, or how similar, they are. So take it away. This is Talos, right? Sure, let me share my screen again here.

All right, so I am Seán McCord from Cycore Systems. We're a boutique consultancy in a number of different specialized areas, including the topic of this group. To get started, let's talk a little about one of the dreams of Kubernetes: to abstract away the machine. A lot is abstracted in Kubernetes, but at least for the purposes of Talos we're looking at this: whenever we run Kubernetes, all of our operations, and our attention as users, should be focused at the cluster level and not the machine level. We shouldn't care about the discrete resources on which the cluster is built. That's where Talos comes in. So Talos is written in Go.
In fact, about 96% in Go. It's designed to run autonomously as part of a dynamic cluster, adapting and updating over time using coordinated API calls between cluster members, monitoring endpoints, and admins, both human and machine, and to be secure by default and come batteries-included. Overall, Talos represents many years of accumulated knowledge across a wide variety of backgrounds, from contributors who have been running Kubernetes in production since the 1.0 release, and perhaps a little before. It is of course suitable for cloud use, but it's also built with keen targeting of the bare metal user. A number of design features have guided the development of Talos based on our experiences, and we continue to refine these in order to create the easiest, most manageable container operating system in the container ecosystem.

Sean? Yes? Do you mind putting the slides in presentation mode? Ah, we tried that before and it ends up cutting off the slides. It has something to do, I think, with the fact that I'm running Zoom out of a web browser, which unfortunately I have to do because I'm on Wayland, and Wayland has the screen-sharing problem. There may be a way to maximize it within the window; that might help a little. Maybe closing the left sidebar with the slide list, or shrinking it, would make the slide bigger. I think I just got rid of the menu somehow. There we go. Nope, that's the ruler. Anybody know? No, but don't worry too much about it. All right, I don't want to take up the time. Sure.
Okay, so Talos is highly focused, specifically to run Kubernetes and container-oriented workloads. Therefore we want to avoid being a generic, general-purpose OS. This allows us to limit the attack surface: we have a tiny set of tools and almost no listening services, we have no common packages, and we install no administrative tools to exploit. We keep a small footprint; the entire OS fits in less than 100 megabytes, including the kernel. It's a highly focused role, which allows us to secure the kernel: we conform to the KSPP (Kernel Self Protection Project) guidelines, we are always on the latest stable kernel version, and by default we allow no loadable modules in the system. All our code is easily auditable, in one place and under a single license, the Mozilla Public License version 2. Everything in the OS is fully tested with a battery of unit and integration tests, covering everything from internal components to the Kubernetes deployment and lifecycle management. We have a true read-only image, based on a read-only SquashFS image with a predefined allow list of specific writable segments within the running system, and even then those writable segments are only to RAM; you can never modify the base image itself. Talos is designed to offer no unstructured access. Everything comes through the API, even for internal components; they talk to each other through the API. There's no cheating in the system at any level. We have no shell.
We have no SSH. We include a cluster-wide PKI with automatically rotating certs, and short-lived ephemeral certs wherever possible. We have a common RBAC system for various levels of access control, and we have observability built throughout the system. We are designed to minimize the maintenance overhead at every level. Obviously, the API-based lifecycle controls help that a lot, offering such things as deployment, reboot control, reset (wiping the node), and upgrades, all via the API. But we also offer metrics, monitoring, debugging, and a number of common Unix-like utilities, to be able to diagnose and work with any problems that might arise. Kubernetes deployment in Talos is handled by a small manifest, requiring only a bare minimum of customizations to get bootstrapped. Talos is also a certified Kubernetes installer; in fact, the OS is the installer.

Talos is built with aggressive automation of Kubernetes in mind. We have structured customization for all the common Kubernetes control components, and we manage upgrades for both the kubelet and the control plane. We have managed recovery in the unlikely event that we lose control plane pieces: there's an API by which you can recover any piece of the control plane, including the API server. The deployment templates we use are robust high-availability templates with best practices as defaults. We've tried to eliminate the arcane in Talos. We use a simplified API built on gRPC, with every interface defined clearly in protobuf. This allows us to maximize the visibility and portability of our APIs, as well as maintaining constant interface contracts. We've tried to abstract away the Unix primitives: for instance, instead of ps we have a process list; instead of ls we have list files; instead of cat we have read. We still have aliases in the CLI tool for the Unix-style commands, but in general we've tried to make this accessible across the board, to as many people as possible, with no Unix background required. We have tried to maintain secure and sane defaults, with a minimal requirement for user-supplied and user-generated data.

We strive to have no barriers. We have a layered API which allows us to compose and assemble higher-order controllers at the internal component level, the external node level, the cluster-oriented control level, and the Kubernetes-oriented API level. At all levels we offer APIs for interfacing, command, and control for higher-level applications. Since all of this is built on gRPC, these controls are language agnostic, and that allows users, SREs, and DevOps people to build their own business-logic controllers as workloads within the Kubernetes cluster, in whatever programming environment they are most familiar with. So these are the Talos design defaults: no general-purpose OS, no unstructured access, no maintenance overhead, no hands-on Kubernetes, no arcana, no barriers. You could say Talos says no, so that you can say yes.

So what's next? We're working on a whole lot of other projects; this is just a small list of interesting ones. The newest of these is COSI, the Common Operating System Interface. It's designed to be a standard, OS-agnostic interface to provide structure and security between the kernel and the user space. [Inaudible question and crosstalk.] Sorry, I couldn't quite tell whether those were questions or someone else talking. Okay, at any rate, do feel free to stop me. I don't have much more here, but stop me if you have any questions.
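As a sketch of what those API-first primitives look like from the CLI (node address is a placeholder; command names here follow talosctl usage and may differ between Talos versions):

```shell
# Query a node over the Talos gRPC API; no shell or SSH is involved.
# The --nodes flag selects which machine answers the request.
talosctl --nodes 10.0.0.2 processes              # analogous to ps
talosctl --nodes 10.0.0.2 list /etc              # analogous to ls
talosctl --nodes 10.0.0.2 read /etc/os-release   # analogous to cat
talosctl --nodes 10.0.0.2 services               # system service health
```

The Unix-style aliases mentioned in the talk (ps, ls, cat) map onto these same API calls.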
We'll have time at the end. So, we're building an event-based system, which will allow Talos to be an event-driven OS, based on reactive changes to system state. When we add a network link, or add or remove a block device, and so on, we can quickly and atomically react to these changes at the Talos level. We're building CAPI (Cluster API) providers for a number of cloud providers, and for bare metal. In fact, for bare metal we have an entire management system being built, somewhat along the lines of Matchbox, but a lot more, and a lot more specifically catered to Kubernetes. Our bare metal management system, Arges, will allow us to handle the complete lifecycle of nodes, from inventory management and node provisioning to PXE booting and node classes, and it'll quickly allow you to build an entire cluster and help you manage it over time, all of course via APIs. Sidero is even a CAPI provider for bare metal, which ultimately, in the best case, will allow you to bootstrap a cluster from a laptop or any other computer.

So, enough talk; let's see what Talos actually looks like in action. Let me switch this over. How am I doing on time, by the way? Looks like you're all right; you've got another 10 or 15 minutes. Okay. Here we are, share screen, give me my tmux. All right, so we're starting with an empty DigitalOcean account; the only thing I've provided at this point is a generic load balancer and a space in which we can build our system. The first thing we're going to do is generate our Talos config. In this tmux session, what we're doing here is generating all of the PKI infrastructure and the configuration for Talos itself. Can everyone see this, is it large enough? Yep, looks good to me. Okay, good. Among these files, we are really only going to use the control plane and join configs. The init config is something of a legacy; it allows you to automatically bootstrap the system, but I'd like to break that out and do it manually via the API.

So we have a config now; we just need to create some droplets, VMs on which to run this. We'll start with the control plane nodes, and for the sake of the demo, and my wallet, we will start just one worker node. After a little while we should be able to get the IP addresses of the control plane nodes and add those to our Talos config. Now that we have our Talos config set up, I should explain what we're doing: we're setting the endpoints for the Talos API, pointing each of those at the control plane nodes for reasonable defaults, and setting up the default node list. With talosctl we should now be able to see the service list of the first node of our set. So we see Talos, at least on this control plane node, up and running. We can look at any other node simply by telling it which node to look at. Copy and paste again; let's try this again. The same is running on the other nodes. So the first thing we want to do is go ahead and bootstrap the Kubernetes cluster; we're going to use purely the defaults. We can then watch the logs, and this will take a little while as it bootstraps the cluster, so in the meantime we'll look at some of the other things we have. We have list for ls; we can look across the system, find files, and read any arbitrary file of note. A question? Yes: management can happen anywhere, right? It can be on your laptop? Yes, in this case I'm doing the management from my workstation here at home, and the machines we're working on are up in DigitalOcean. And I guess the bootstrap is already automated, right?
Correct; what we're doing now is waiting for that to finish. Does it allow you to manage a fleet of servers, or just individual ones? Yes. The bootstrap only occurs on a single server, but it will bootstrap a high-availability control plane from that: it starts with one, and then it builds the rest of the control plane after that one is up and running. Talos handles that automatically. The way it determines which nodes to use is based on the configurations we applied to those VMs. For instance, we had the control plane config and the join config: any node created with the control plane config will be used as a control plane node after the bootstrap, and any that were created with a join config will become worker nodes. Got it. So that goes back to the droplet create, where we specified the user data as one of those two files, to say which type of node is created. Does it work with any cloud provider, or just DigitalOcean? It should work with all the major cloud providers, including Packet, AWS, GCP, Azure, and DigitalOcean; let me know if I'm missing any. Cool.

All right, any other questions before we see if we're up yet? Okay, we are up; bootkube, as we see, has finished. So we should be able to grab our kubeconfig, which has now been pulled into the local directory, and then, since I use kconf here locally, I'm going to load it into my kconf database. And we should be able to get nodes, finally. We see our control plane nodes, though not all of them have quite finished coming up.
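The demo flow up to this point can be reconstructed as a hedged sketch (the cluster name, load balancer URL, image ID, droplet sizes, and region are placeholders, and doctl/talosctl flags may differ between versions):

```shell
# Generate PKI and machine configs for a cluster behind a load balancer.
# Produces init.yaml, controlplane.yaml, join.yaml, and a talosconfig.
talosctl gen config demo-cluster https://lb.example.com:6443

# Create control plane droplets from a Talos image, passing the
# machine config as user data (three nodes for high availability).
doctl compute droplet create cp-0 cp-1 cp-2 \
  --image <talos-image-id> --size s-2vcpu-4gb --region nyc3 \
  --user-data-file controlplane.yaml

# One worker node, created with the join config.
doctl compute droplet create worker-0 \
  --image <talos-image-id> --size s-2vcpu-4gb --region nyc3 \
  --user-data-file join.yaml

# Point the Talos client at the control plane IPs, check services,
# then bootstrap etcd/Kubernetes on exactly one node.
talosctl --talosconfig talosconfig config endpoint <cp0-ip> <cp1-ip> <cp2-ip>
talosctl --talosconfig talosconfig config node <cp0-ip>
talosctl services
talosctl bootstrap

# Once bootkube finishes, fetch the admin kubeconfig over the Talos API
# and inspect the cluster with ordinary kubectl.
talosctl kubeconfig .
KUBECONFIG=$PWD/kubeconfig kubectl get nodes
```

This mirrors the two-config scheme described above: the user-data file chosen at droplet creation decides whether a node joins as control plane or worker.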
We just have two control plane nodes at the moment, but cp-0 will eventually become a master as well, and then we have the one worker node. So at this point we can run any workloads we wish, and we have a fully functional, highly available control plane. All right, that is pretty much the demo; I'm happy to open it up for questions.

Yeah, I have another question: does it manage the lifecycle of Kubernetes, for Kubernetes upgrades? Yes, great question. So the kubelet is presently bound to a default for the Talos version that's installed, so when we upgrade the Talos nodes, it will also upgrade the kubelet that's bound to that version. That is also independently controllable via the Talos config: if you want to pin a particular version, you can, by modifying the config of the system. Likewise, each of the control plane elements is controllable by the config. Well, I shouldn't say each; we currently just have... actually, I shouldn't say currently either, because there are different levels and we're in a little bit of flux. Presently, if you install 0.6, which is our current stable release, the control plane is actually self-hosted and self-managed, so there's no automatic updating of the control plane components.
You can edit those manually. In our next versions that will be managed: we'll actually be backing off from bootkube entirely and running all of the control plane components from Talos directly. At that point we'll be able to structurally control the versions of each of the control plane components. Awesome. And, because some people might have applications running in Kubernetes, is there any plan to do some sort of health check on those things when you want to upgrade? Yeah, so I should also mention that we currently have a pretty early-stage, but fully functional, upgrade controller for Talos. Ultimately we're building hooks into that, so that we can have clean exits, clean drains, for all of the nodes. As it happens right now, the Talos upgrade controller for Kubernetes will properly drain the node, wait for any hooks that are already set up by the pods to confirm draining of that node, and then upgrade the nodes one at a time. So that is automated; it hooks into the existing drain hooks and upgrades one node at a time. We're looking at building some more advanced controls into that: we'd like, for instance, to have a plug-in system and some add-ons available to Talos itself, and for those we'll be using event hooks internal to Talos to signal when we can perform the upgrades on any individual node. Cool, thank you, that answer was very comprehensive. So, does anybody else have any questions? I don't want to be the one asking all the questions. Yeah, please. This is Eric.
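The node-upgrade path just described can be sketched from the CLI (a hedged sketch; the installer image reference is a placeholder, and the automated upgrade controller drives this same API):

```shell
# Upgrade a single node to a new Talos installer image.
# The new image is written to the alternate partition and the
# machine reboots into it; a failed boot falls back.
talosctl --nodes <node-ip> upgrade --image <installer-image>:<version>
```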
I just had a question about, I guess, the networking side; sorry if I missed the background. I was just wondering: in this example you're using Flannel, but if you wanted to use something else, like Cilium, would that be possible? Yep, absolutely. As a matter of fact... sorry, Andrew, did you want to answer that? Okay, sorry, we talked over each other. Yes, we absolutely support Cilium; in fact, many of us use Cilium by default. It's just a line in the config. Let me see if I can show that to you; back to sharing the tmux session. It may not be in the default config. Yeah, I don't think it is; it's an extra manifest, a manifest field. It's documented on the website, but it is a key you can add in the cluster config which allows you to specify any URL for additional manifests. So you set the CNI to none, and then you can use whatever you want, whether it be Cilium or Calico or DANM, any of the available CNIs. And to be clear, that'll probably change a little bit in the long term; the way it works now was essentially forced upon us by bootkube. As we move away from bootkube, we'll probably actually move towards Cilium as our default, I would imagine. I mean, I think we all pretty much never use Flannel anyway, not for our normal work. Especially not me, since Flannel does not support IPv6 at all. Which reminds me of a good feature to mention: Talos is IPv6-clean; you can run a pure IPv6 system, for any of those who care, and dual stack for that matter. Any more questions? I think we're running a little over on time at this point, but I'm happy to answer more. One last question: are there any plans to make this a CNCF project, to apply to be part of the CNCF ecosystem of projects? I could try to answer that one, Sean.
So I think early on we did reach out, and there were some questions about whether or not to accept an operating system; accepting an operating system into the CNCF was something entirely new at the time. As you can tell, we're a little bit unique, so the line is fuzzy. I think in the long run we would like to, but we don't have any immediate plans. It's about figuring out what we can contribute to the CNCF, and one of the places I think we could contribute is with this COSI project, which is something I'd love to work on with other, similar operating systems, and figure out how we can get it into the CNCF as a project. Similar to how containerd provides a container runtime interface, we want to provide an interface for interacting with the operating system. But yeah, to be determined.

Yeah, I think the TOC is pretty open about different kinds of projects, especially now that we have the sandbox, and some projects may not necessarily fit the usual criteria. For example, the CNCF just accepted K3s, a Kubernetes distribution, or a Kubernetes flavor, into the sandbox. So I think the CNCF wants the sandbox to be kind of an incubation place where projects can develop. But I see that Talos is pretty advanced already, so that may not be the best fit; it's up to you, the Talos team, to decide what the best route is. Yeah, it's a great suggestion; we certainly will look into it. Thank you. All right, thank you very much, that was a very complete and thorough presentation.

So the next item on the agenda is Flatcar. Sorry, didn't mean to cut you off, but as I understand it, it's also another operating system, also based on CoreOS, but it has moved beyond that. So go ahead. Okay. Yes, is that working, can you see my slides? Yes. All right, awesome.
So, yeah, I'll give a brief introduction to Flatcar. We don't have a demo prepared, but I guess since it's based on CoreOS and isn't such a special thing, maybe a demo isn't necessary. So: Flatcar is developed by Kinvolk. First, who is Kinvolk? Kinvolk is a company that has existed since 2015, so we've been around for five years. We do a lot of things related to Linux, containers, and security, and we do open source engineering with many different clients. We have a couple of products: Flatcar, which is the one we're here to talk about today, a Linux distro derived from CoreOS, and a Kubernetes distro called Lokomotive. On top of these two products we also do other consulting and development; for example, rkt and gobpf were developed by Kinvolk engineers.

So what is a container Linux? This is pretty basic, and I guess many of you know this already; these are the bases that CoreOS was built upon. A container Linux is a minimal distribution that only has what's needed for running containers on top of it. It doesn't have everything that a general-purpose distribution has, only the minimum base, so there's less software to manage and a reduced attack surface. Of course, it's not... this is some silly bug my Zoom has; I'll try again, and if this doesn't work, maybe Thilo, you can share the slides for me. Sorry, I don't know why this happened; I think my Zoom setup, which is super paranoid, sometimes stops redrawing the screen or something. Let's see if it shows. What is a container Linux... now it's starting to show... no, it's a blank screen. I'll give it a shot, let me try it.
Maybe I should stop being so paranoid. Whatever works for you guys. How is it that this is a CNCF thing and we're not using a free product? Yeah, okay, Zoom is not a cloud native product; we could be using Jitsi. All right, folks, do you see it? Yeah. I was on slide four already. Here we go, that's container Linux.

Okay, so container Linux means we have a reduced attack surface. Of course, it's not as small as Talos, but it's much smaller than a typical general-purpose distribution. The file system is immutable; in particular, the /usr part of the file system is immutable, so there's also less attack surface there. /etc is still mutable, so configuration is still possible, but this reduces the number of bugs and security threats. And it has automated updates: whenever there's a new version, it just gets applied, and it rolls back automatically if it fails to boot. So if the machine tries to boot and fails, it gets automatically rolled back. Next.

All right, we mentioned this already, but just so that it's super clear: Flatcar is based on CoreOS, which itself is based on Chrome OS, which is based on Gentoo. So we have all of this history, but right now we are going on our own. CoreOS doesn't exist anymore; it reached its end of life. Whereas before we were tracking CoreOS and everything it did, now we are on our own, developing our own new features and so on. Next. If you don't know where the name Flatcar comes from, it comes from a train metaphor, or maybe not a metaphor, a simile: a flatcar is the kind of train car that carries containers. That's why it's Flatcar Container Linux. Next. All right, so how is Flatcar structured?
It has four channels. CoreOS had three channels, alpha, beta, and stable; we keep those, and we also have an experimental channel, an edge or labs channel, where we try things out and then decide whether they are a good idea or not. For the alpha, beta, and stable channels: we introduce new changes in alpha; after we are happy with those changes they get promoted to beta; and once they've been in beta long enough that we think they're stable, they get promoted to stable. Currently the version in stable is still based on the last CoreOS version, but alpha and beta are completely new and include a lot of things that were not available in that last CoreOS version. We have images available on all major cloud providers, and some minor cloud providers as well; they are also publicly available to download, and we have images for a lot of different types of machines. And we have a public update server that anybody running Flatcar can use to get updates to the latest version. There's more on that on the next slide.

So, the Kinvolk update server is a completely open source project, code-named Nebraska. We run one community instance that can be used by anybody running Flatcar, and people who want more control can run their own instance, on-prem or hosted by us. This update service allows operators to decide when and how they update their machines: if you don't want all of them to update at the same time, you can set a lot of different knobs for how you want this to happen. It also gives you monitoring and visibility into what's happening inside the cluster, for example whether an update failed on many machines or not. Next slide. Okay, so what's our current status? We have a team at Kinvolk that is dedicated to this project, and we are keeping it alive.
We have build and test infrastructure, with a lot of integration tests that make sure Flatcar runs correctly on many different cloud providers, and a lot of that is thanks to Packet: we have infrastructure sponsored by Packet, which helps us a lot. We have all channels maintained completely independently from CoreOS, and we have support infrastructure. We already have a bunch of large enterprise customers with a few thousand hosts, so it has been growing; there's a graph coming up in a couple of slides, and we are happy that a lot of people are adopting Flatcar. And, as I mentioned, it's integrated into lots of cloud platforms and Kubernetes distros. On the next slide we have a bunch of logos of supported platforms. We are working on the Cluster API integration; it should be ready in the near future. We are also working on integration with Rancher. So we care not only about being integrated as a base OS on the different platforms, but also about being integrated into things like Rancher or Kubermatic, integrating with the whole ecosystem.

On the next slide we have a pretty graph, although without numbers. Basically, it shows that when CoreOS reached end of life, or a little bit before, a lot of people decided to migrate to Flatcar, and this has kept going up. It's nice that the people who migrated to Flatcar didn't then migrate away from it; they were still happy with the results, and we see this constant increase in adoption as time passes.

All right, so, plans for the future. We are working on publishing a public roadmap that will be maintained in the open. A lot of the work we have been doing in the past few months has been updating from the last CoreOS version to newer versions, so we have updates to the kernel, to systemd, and to a lot more packages coming up. In the current beta we have kernel 5.4 and systemd 245, and once all of that is in stable, we plan to keep updating. This is a lot of work, because we were working from the basis of CoreOS, and CoreOS had kind of stopped doing updates for a long time, so there are a lot of packages to update; but the goal is to reach a point where everything is up to date with the latest versions.

Then there's the last point on this slide, the LTS release. Some people realized that they actually liked the fact that, for the past year or so, CoreOS had been making very few updates: they like an OS that changes less more than they like being on the bleeding edge. So we are working on releasing an LTS version which will not change so much; it will be supported for 18 months, and after that people can migrate to the next LTS, while the stable version keeps changing, as it should. And yeah, that's basically it; we have one more slide, but basically: we are proud to be continuing the legacy of CoreOS. CoreOS doesn't exist, but the spirit lives on in Flatcar, and our Lokomotive Kubernetes distribution is kind of the legacy of Tectonic, which was the CoreOS Kubernetes distribution. That's it for the presentation; hopefully you have some questions.

Yeah, I have questions. I'm not super familiar with how CoreOS was managed, but do you plan to have an API-based type of management, or do you have that already? Depends on how you look at it. Hi, by the way, I'm Thilo, one of the directors of engineering at Kinvolk, and my team owns that part. We're not quite taking the direction that Talos is taking, right?
So one of the approaches of CoreOS was that the operating system layer of your cluster, which either runs containers directly or runs the Kubernetes components that run containers, is kind of just a piece of boring infrastructure. You don't want to handle it too much; you want it to self-update and maybe make a noise if something's wrong, but it shouldn't be too exciting for you. That's the philosophy that CoreOS grew, and that we follow: you do most of your configuration at provisioning time. Just like CoreOS, we use Ignition for that, and there is a certain amount of compatibility with cloud-init as well, so you can do either. And after that you have minimal changes, at reboot time, and that's about it. Most of the operating system remains immutable, so if you want to change anything, you re-provision; that's for major changes. The operating system doesn't have any package management; you're not installing software on Flatcar, because you run containers. So in order to upgrade, you just do this A/B partition-flipping thing: the update service on Flatcar polls the update server, which is either the public instance that we host or your own Nebraska instance, writes the update to the second partition, and then either does the reboot, depending on its configuration, or signals an upper layer that it's now time to reboot. So it tries to be unexciting in the whole lifecycle sense, and that's the CoreOS approach that we're continuing.

Yeah, I got it. I was kind of curious; I mean, some operating systems are basically moving towards this API-based approach. The other project I was sort of familiar with is Bottlerocket, which is also doing something with an API. But I think it ultimately comes down to what the philosophy of the project is, and what its users will want, right?
And their profile, mm-hmm. The closest thing to an API we have is the configuration at provisioning time. Other than that, we just don't want to be in the way; you would basically solve everything else with containers.

Does anybody else have any other questions?

This is Eric. I did have one question, and sorry, it may have already been answered. But I believe there's still Fedora CoreOS, and I remember seeing the announcement that with Flatcar it was going to be a seamless transition, whereas with Fedora there might have been a little bit more involved. I was just wondering if you could shed a little bit more light on what the differences are between Fedora CoreOS and Flatcar.

Sure, yeah. Actually, I had a slide about this and then I removed it, because I thought, okay, CoreOS has now been end-of-life for so many months that talking about the update is not relevant anymore, but I guess I was wrong. So updating from CoreOS to Flatcar is a very, very simple thing: you just change the server that you are using for updates, and then you get the update to Flatcar. The update is handled by the OS like just another update. You could have your machine running CoreOS, and then it gets the update payload from the Flatcar server, and when it reboots, it's running Flatcar. That's it. You basically just need to change the server that you get the updates from.

So I guess the main difference between Fedora CoreOS and Flatcar is that we are a drop-in replacement: it works exactly the same way CoreOS worked. Whereas with Fedora CoreOS, they went a different way, and you basically would need to completely redeploy, and a lot of things have changed, so you would need to adapt your setup to run on Fedora CoreOS. The principles of being a container Linux are the same, but it's not the same setup, so things change.

Gotcha, thank you.
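[Editor's note: to make "just change the server" concrete, this is a sketch of the migration as the Flatcar documentation described it at the time; verify the group name and endpoint against the current docs before relying on them.]

```ini
# /etc/coreos/update.conf on the running CoreOS Container Linux machine
# (endpoint as documented at the time; check current Flatcar docs)
GROUP=stable
SERVER=https://public.update.flatcar-linux.net/v1/update/
```

After restarting the update engine and triggering an update check, the next payload written to the second partition is Flatcar rather than CoreOS, and the machine boots into Flatcar on the following reboot, exactly like any other A/B update.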
And then I did see a recent announcement about eBPF support. Could you elaborate a little bit more on that?

Sure. We use eBPF a lot, so we have a bunch of tools that we developed, and we ensure that it's possible and easy to use eBPF on Flatcar. It's not that we're making very special changes; we are basically just enabling things in the kernel that are there to be used, and we are providing container images with eBPF tools that you can run on top of Flatcar. So if you run a Kubernetes distribution on Flatcar, you can use the eBPF tools that we wrote, or kubectl-trace that other people wrote, directly, without having to do anything special, because it just works out of the box.

So there's this set of eBPF-based tools that we've been developing, and it's actually Alban, who was in this meeting earlier and left, who is the director driving that team. The tool set is called Inspektor Gadget, and it allows you, at the Kubernetes level, to gain insight into what your server is doing, and it uses eBPF for that. Flatcar will need some integration work and some testing work, and that's what Marga was mentioning. This is mainly about making sure that everything works from scratch and seamlessly, and users can just go and use tools like Inspektor Gadget or the BCC tools that are around, without jumping through any funny hoops.

Any more questions?

So yeah, I think my last question is the same one that I asked the Talos team: are there any plans for making this a CNCF project, or applying for one of the stages in the CNCF? Thilo, I think this question is for you; I don't know the answer to this.

I'm thinking hard what to say. So, our CEO, Chris Kühl...

Sorry, I don't mean to put you on the spot, right? It's a very good question.
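[Editor's note: a hedged illustration, not from the talk, of the "works out of the box" claim using kubectl-trace, which the speaker names above. The node name is a placeholder, and the command assumes the kubectl-trace plugin is installed and the cluster nodes run a kernel with bpftrace support, as Flatcar's does.]

```shell
# Attach a bpftrace probe to a (hypothetical) Flatcar node and print
# the command name of every process that calls execve() on it.
kubectl trace run node/flatcar-worker-1 \
  -e 'tracepoint:syscalls:sys_enter_execve { printf("%s\n", comm); }'
```

The point being made in the talk is that nothing Flatcar-specific appears in this invocation: because the kernel options the tools need are already enabled, generic eBPF tooling runs unmodified.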
It just deserves a very good answer. So, we didn't fully investigate options yet. It sure is an interesting direction to explore. If he were present, I'd actually forward this question to our CEO, Chris Kühl; he's, I think, one of the first CNCF ambassadors. So there is very strong interest in working with the CNCF, but we don't have any concrete plans or anything to announce at this point in time.

Yeah, got it. Yeah, so I think the CNCF is always open for new projects. So whenever the project maintainers decide to apply, they'll be happy to have the project, you know, whenever they feel like they're ready, right?

So, any other last questions?

Well, I want to thank the Talos team and I want to thank the Flatcar team for both of your presentations. And we also have a SIG Runtime Slack channel and mailing list, and any other topics that you want to discuss, or any questions about these projects or any other projects that are related to SIG Runtime, feel free to send them that way. So yeah, thank you very much. Bye. Bye, guys. Bye. Bye. Bye.