Okay, so hey everybody, we are here today to do a hands-on workshop for Fedora CoreOS, and I'll quickly go into who we are. My name is Dusty Mabe. I'm a software engineer for Red Hat and I work on Fedora CoreOS and Fedora Cloud, and also Red Hat CoreOS, which feeds into the OpenShift product here at Red Hat. I'll hand it off to Timothée to introduce himself. So I'm Timothée Ravier, you can call me Tim, and I'm also a Fedora CoreOS engineer working at Red Hat. Thank you. Thank you, Tim. I'm Nasir Hussain. I mostly work with the Fedora Join SIG to help newcomers get started and to improve that experience. If you are a newcomer, talk to us at the Fedora Join SIG. Dusty, back to you. Okay, thanks, guys. I'll do a brief introduction to Fedora CoreOS. If you happened to catch my talk yesterday, this might be a little repetitive, but I'll try to keep it under 10 minutes, and then I'll introduce the workshop and we'll have Tim and Nasir actually go through it here, or you can execute it on your own computer at home. So real quick, what is Fedora CoreOS? It's an emerging Fedora edition. Fedora CoreOS came from the merging of two communities: one was CoreOS Inc. with the company's offering called Container Linux, and the other was Project Atomic's Atomic Host. Fedora CoreOS incorporates the Container Linux philosophy, provisioning stack, and cloud-native expertise, and it also incorporates the Fedora foundation and the update stack from Atomic Host, along with enhanced security via SELinux. So what are some of the features of Fedora CoreOS? One feature that's a little different from traditional Fedora is that it has automatic updates on by default.
This means that by default things tend to be a little more secure, because CVEs and security fixes hit systems faster than when somebody has to react to them manually. But in order to have automatic updates, they must be reliable. So how do we make them reliable? We have extensive tests in our CI pipeline, and we also have several update streams, which let people run future content before it reaches stable, so they can report issues, know if something is going to break, and let us know so we can try to fix it before it affects a lot of people. We also have managed upgrade rollouts over several days, which means that if we start an upgrade and the first 10% of people who get it have failures, we can stop it so the other 90% don't have to deal with that. It's a mechanism for us to control the upgrades and get more information on how successful they are before continuing. And then, when things do go wrong, we have an escape plan. If things aren't working for your application after an upgrade, you can run rpm-ostree rollback, and that takes you back to what you had before, which obviously was working because you were happy with it. In the future, we want to be able to do this in an automated fashion: based on specific health checks that a user defines, we want to run those checks on boot, and if a check the user defined, say, verifying their application is actually up and running, fails, and the user has said they'd like to go back in that case, we'd automatically roll back. I want to go a little more into detail on update streams. We offer three update streams. One is next, which carries experimental features or Fedora major rebases.
So for example, once the Fedora 33 beta comes out, we'll probably switch our next stream over to F33 content. We have our testing stream, which is essentially a preview of what's coming to stable. So if you want a window into what's landing in stable two weeks down the line, you run the testing stream, and that will let you know if your stable nodes are going to break. Stable is the most reliable stream we offer, and it's essentially a promotion of testing content after some time. The goals with these streams are to publish new releases every two weeks and to find issues as early as possible, before they hit stable. The next feature I want to talk about is automated provisioning. Fedora CoreOS uses Ignition to automate provisioning, and in theory all of the configuration for the machine is baked into the Ignition config. That means it's very easy to automatically reprovision a node. So if you lose a node, no sweat, unless it happened to have data that wasn't backed up somewhere; obviously we can't save your data, but in general the configuration is very easy to reproduce, and hopefully you didn't lose data as part of that. And because we use Ignition, it's also the same starting point whether you're on bare metal or in the cloud. We use an image-based approach, so wherever you run, you start from approximately the same image, and since we use Ignition everywhere, you get a more unified experience across cloud and bare metal. A little more detail about Ignition: it's a JSON document, basically, usually provided via some sort of user-data mechanism. It runs exactly once, during the initramfs stage on first boot, which means that if provisioning fails, the boot fails.
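To make that concrete, here is a minimal sketch of what an Ignition config looks like. This is not taken from the tutorial itself; the file and its contents are illustrative, using the 3.1.0 spec version that the workshop's FCC 1.1.0 configs translate to:

```json
{
  "ignition": { "version": "3.1.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "overwrite": true,
        "contents": { "source": "data:,tutorial" }
      }
    ]
  }
}
```

Note that file contents are delivered as a data URL and the mode is decimal (420 is 0644 in octal), which is part of why this format is machine-friendly but not pleasant to write by hand.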
There are no half-provisioned systems, which can otherwise lead to confusion and possibly worse side effects down the road if one little part of your configuration didn't actually apply. Ignition configs are machine-friendly but not very user-friendly, which means we needed to make something better for our users to interact with. So we created a tool called the Fedora CoreOS Config Transpiler (FCCT), which translates human-friendly YAML into the Ignition spec. And FCCT doesn't only translate; it's not just a YAML-to-JSON conversion. We also have some distro-specific helpers in there, things that make the user's experience better. One of those: if you add a new filesystem and you want it mounted, instead of having to create the systemd mount unit yourself, you can just tell FCCT you want a mount unit and it will automatically generate one for you. The next feature is being cloud-native and container-focused. In general, applications run in containers, and we ship the Podman and Moby Engine container runtimes for that. And because of Ignition, we can easily deploy new nodes and have them join a cluster, so you can spin up 100 nodes or spin them down depending on your needs; a little cloud-burst functionality is supported natively. We also try to be ubiquitous: we're trying to offer Fedora CoreOS wherever you want to run your workloads, and right now we have eight or nine different platforms you can run on, and we're trying to add more all the time. The next feature is OS versioning and security. Fedora CoreOS uses OSTree technology, which I like to describe as git for your operating system.
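As a sketch of that filesystem helper (the device name and mount path here are illustrative, not from the tutorial), the FCC YAML looks roughly like this:

```yaml
variant: fcos
version: 1.1.0
storage:
  filesystems:
    - device: /dev/vdb      # placeholder: a second disk attached to the VM
      format: xfs
      path: /var/data
      # the distro-specific helper: FCCT generates the systemd
      # mount unit for /var/data so you don't have to write it
      with_mount_unit: true
```

Without with_mount_unit you would have to hand-write a systemd .mount unit with matching What=/Where= fields yourself; the helper derives it from the entry above.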
So imagine a single content hash, a single identifier that tells you all the software in a particular release. That's pretty powerful, especially when somebody's creating a bug report and trying to relay information. We don't have to ask them what version of systemd they had with which kernel; they give us a version number or a hash and we can reproduce the exact problem they're having, unless it's something specifically environmental that we don't have access to. OSTree also uses a read-only filesystem mount, which prevents accidental OS corruption, so accidentally rm -rf'ing something you shouldn't have, or unsophisticated attacks modifying the system. Obviously more sophisticated attacks would be able to do more, but that's where SELinux comes in. If your application inside a container gets compromised somehow, SELinux will hopefully keep the compromised application limited to itself, not accessing the host or other applications on the system. Okay, so what's next for Fedora CoreOS? We want to add more cloud platforms; we're working on that all the time. We want to add support for the other architectures that exist in Fedora. We have a proof of concept for ARM64 right now, and hopefully that proves out the concept and we'll be able to add the other architectures. We want more human-friendly helper functions in FCCT. We want better host-extension-like ability for things that just can't run in containers for whatever reason, and more reliable package layering: right now package layering isn't always reliable because of drift between the remote yum repositories and the base layer on the host. Improved documentation, tighter integration with upstream projects, all kinds of things. Okay, so for the workshop today, we have five different tutorials.
Well, the first one is really just setting your system up, but we have a few different tutorials that we'll have you run through. The initial setup is just setup instructions: download this file, set up a few aliases, and so on. The second one is enabling autologin and a custom hostname, so we'll basically show you how to write your first Fedora CoreOS config, translate it into Ignition, and then start an instance. We'll have you start a service on first boot, learn how to SSH into a machine, and also automatically start a container. And then we'll explore the system a little and show you how updates work. As for the workshop itself, we have a few different options. One of them is just executing it on your own, as you would have done if we had an in-person workshop. With an in-person workshop, we usually go through this introduction at the beginning, just like I'm doing right now, and then everybody goes and follows the instructions and raises their hand when they have questions. For this one, that's still an option: you can run it at home and then come back to us and ask questions. But since it's virtual, we decided that for the people who want it, the live stream here will just be us running through the steps of the tutorial ourselves, and people can pop up and ask questions as we go. We really do want to encourage questions, because we know this content and we want others to learn it, so you asking questions as we go is really the best way for us to get the information out there. There's a link to the tutorials on our docs website; it's in the slide.
There's a HackMD document, which I pasted earlier and which Tim or Nasir will paste into the chat again soon, that gives a little more detail on what you need for this workshop and where you can go with questions during and after the workshop. So head over to that HackMD; it should give you everything you need. And let's see, okay, I have a slide for getting involved. If you want to get involved in Fedora CoreOS after this, maybe you execute this workshop and you really like it, we have a website, an issue tracker, a forum, a mailing list, and IRC. If you search for Fedora CoreOS on Google, you should be able to find how to engage with us. If all else fails and you have an IRC account on Freenode, join us in #fedora-coreos. And I'm going to hand it over to, I think, Nasir first. I'll stop sharing my screen. Actually, real quick, are there questions in the chat that we should address first on the video session? So there's one, about another container-optimized OS. We're mostly not related to that; we are two different projects. They have their own operating system, their own system for Kubernetes, but we are a distinct project. There are similarities, but they're different systems. Yep, we have a question: can I use Silverblue to follow this tutorial? If Silverblue is your host OS and you're able to launch virtual machines, for example with libvirt, you should be just fine. A quick comparison between Fedora CoreOS and Container-Optimized OS from Google: first, Container-Optimized OS you can only get on Google Cloud, so if you're not running on Google Cloud, you're out of luck, while Fedora CoreOS runs on a lot of platforms. Second, Fedora CoreOS is based on Fedora, so all the Fedora packages, everything that's in Fedora, you can get on Fedora CoreOS. You get exactly the same thing.
So if somebody works on Fedora to fix something or to add a new package, you can get it on Fedora CoreOS; it's the same. And we're less narrowly focused than Container-Optimized OS from Google, in the sense that Fedora CoreOS can be a Kubernetes node, but it can also be something else; you can make anything you like from it. It's not specifically targeted at Kubernetes. It's a great fit for being a Kubernetes node, but that's not forced on you. Nasir, do you want to go through the setup for the users? Great, thank you very much, Dusty and Tim, for the question answers and the introduction. So what I'm going to do now is share my screen. Am I audible? That's great. So let me share my screen now. Is my screen visible? Yep, we can see it, it's good. So in the Fedora CoreOS official documentation, we have a section called tutorials, where you can get started with Fedora CoreOS and how things work in the ecosystem. We are going to start with the first tutorial, the setup, in which we'll be using the Fedora CoreOS QCOW2 image and virtualizing it with libvirt. To follow this tutorial, you need to be on a Linux host with libvirt enabled, so you can virtualize using KVM; if you need some help with enabling hardware virtualization support, there are docs for how to do that. And you'll need some CoreOS tools to get started. The tutorial asks for creating a directory, which I've created here for this demo, where I have the aliases set. There are some tools that you need to work with, as Dusty mentioned in his introduction, like FCCT, the Fedora CoreOS Config Transpiler. What it does is transpile human-readable YAML into the not-very-human-readable Ignition config: you write the config in readable YAML and the transpiler turns it into an Ignition config.
And there's another tool, ignition-validate, which you use to validate the Ignition files before provisioning your Fedora CoreOS instance, to ensure the Ignition file doesn't have any syntax issues. And the other one is coreos-installer. What it does is pull the image from the Fedora CoreOS stream you'd like. So let's start. I have these aliases set up, and you can pull these container images. Let me see if I can do that. So I have these images pulled locally; as you can see, they are already pulled on my side with Podman. You can set your coreos-installer alias to run a Podman container, which will save the Fedora CoreOS archive locally. This one is ignition-validate, which will be validating our Ignition configs, and the last one is the Fedora CoreOS Config Transpiler, so I'm going to set that as well. The next thing is to get the QCOW2 archive that we'll be using to virtualize the Fedora CoreOS instance. When you switch over and copy and paste, can you make the font size on your terminal window just a little larger? Sorry, let me fix that. Let's see... one second. Yeah, you just copy and pasted the alias commands into that one window, right? If you make the font size a little larger, we should be able to see things. For that, let me stop sharing my screen for a second and fix the config of my terminal. Yeah, no worries. Ctrl-plus would do the trick, but I'm not using the GNOME terminal; it's st, so I just need to change the font size in the st config. 15 should be fine. If we can't change it, that's fine too. That looks a lot better, yeah. So I'm going back to st, and that's it.
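The aliases being described can be sketched as shell functions. This is a sketch, not the exact aliases from the session: the image names and ":release" tags are what the Fedora CoreOS docs used around this time, so verify them against the current docs, and functions here use underscores where the docs define hyphenated aliases.

```shell
#!/usr/bin/env bash
# Run the three workshop tools from containers via podman.
# Image names/tags are assumptions based on the FCOS docs of the era.
fcct() {
  podman run --rm -i quay.io/coreos/fcct:release "$@"
}
ignition_validate() {
  # reads the Ignition config on stdin:  ignition_validate < config.ign
  podman run --rm -i quay.io/coreos/ignition-validate:release -
}
coreos_installer() {
  # mount the current directory so downloaded images land here
  podman run --rm -v "${PWD}:/data" -w /data \
    quay.io/coreos/coreos-installer:release "$@"
}
```

With these defined, the rest of the session's commands (fcct, validation, image download) work the same way whether the tools come from containers, RPMs, or GitHub binaries.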
So now, using the coreos-installer alias, you can download the QCOW2 archive, and this is how... oops, I'm not in bash. Sorry for the inconvenience. So, cd into the demo directory under Documents. Now we're going to set the aliases again, which will run those containers for us. And now we can pull the QCOW2 image with coreos-installer: we pass the download command, and I want the QEMU image, which I'd also like to decompress. So this is going to pull it, and it starts the download here, but for convenience I have this image downloaded pre-session, in QCOW2 format, and I moved it into place as fedora-coreos.qcow2. So this is how you can do that. And if you are on Fedora, we have all of these tools packaged in the official Fedora repositories, so you don't need to run them with Podman; but it's also recommended to run the tools from containers, because that ensures you have the latest versions, while packages need to get approved and verified and so on. If you'd like to download the images manually rather than using coreos-installer, you can use curl or wget, whichever you prefer. You do that by creating a RELEASE variable and then pulling the image from the Fedora CoreOS builds web server. Now let's ensure that the archive we got is signed by Fedora. I'm going to import the Fedora GPG keys, and then verify... oops. Okay, I don't have the RELEASE variable set up, and it turns out I don't have that file either. My bad, sorry everyone. So let's move on; rather than verifying live, I have those images verified pre-session.
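For the manual route, the download URL is built from a RELEASE variable. Here is a sketch: the release number is a placeholder, and the URL layout is an assumption matching the Fedora CoreOS builds server of that era, so check it against the official download page before using it.

```shell
#!/usr/bin/env bash
# Build the download URL for a given stable release (placeholder version).
RELEASE="32.20200715.3.0"
BASE="https://builds.coreos.fedoraproject.org/prod/streams/stable/builds"
URL="${BASE}/${RELEASE}/x86_64/fedora-coreos-${RELEASE}-qemu.x86_64.qcow2.xz"
echo "${URL}"
# With network access you would then fetch the image plus its signature,
# verify with the imported Fedora GPG keys, and decompress:
#   curl -LO "${URL}" && curl -LO "${URL}.sig"
#   unxz "fedora-coreos-${RELEASE}-qemu.x86_64.qcow2.xz"
```

coreos-installer does all three steps (download, GPG verification, decompression) in one go, which is why the tutorial prefers it.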
So I don't have the sig files here. And you can set up FCCT; we have these set already, so as you can see I have FCCT available as well, and I can check fcct, ignition-validate, and coreos-installer versions. We don't need coreos-installer anymore, since we've got the image and I've moved it into place as fedora-coreos.qcow2. So that was the initial setup, which I had some issues with because I had set things up before the session so I wouldn't break things during it, but things turned out not the way I expected. So let's get started with the initial provisioning scenario. Real quick, just a recap of that setup session. Basically, what we've done at this point: for the workshop there are three different tools that we use. One of them is FCCT, one is ignition-validate, and the other is coreos-installer. The reason we use coreos-installer is just to handle downloading the image for us and verifying that it is the actual image that was delivered by release engineering. There are a few different options for that; you don't have to use coreos-installer to download the image. It just happens to be a tool that downloads the image, uncompresses it, and also verifies that it was signed with the Fedora release engineering key, so it's a nice tool to use for that. As far as these tools go, there are three different ways you can use them. One is in a container, as Nasir showed earlier. Another is with RPMs: if you happen to be on Fedora, you should be able to dnf install each one of them. And the third is that you can pull the actual binaries from the GitHub releases for each project if you want to. So that's all this tutorial did: it got us set up, downloaded an image, and now our tools are ready for the other tutorials. Thank you, Dusty, for the recap and explanation. So this is how you set up your environment to get started for the workshop.
Now we're moving on to a basic provisioning scenario, in which we're going to add a systemd drop-in unit that gets us automatically logged in on the serial console. Whenever you provision a Fedora CoreOS instance, you have to give it an Ignition file, which Dusty introduced earlier, because as the images are distributed you don't actually modify the image; you provision it using what we call Ignition config files, which we can create using fcct. In this first provisioning scenario, we're going to do the very basic things: add a systemd drop-in unit to override the default serial getty service so that it automatically logs in the core user on the serial console; set the system hostname to tutorial; add a profile snippet that tells systemd not to use a pager for output, because otherwise you'll see some warning output there as well; and raise the kernel console logging level to hide audit messages, because those get written to the console too. So we're going to create the FCC autologin YAML file. Let me see what we have here; I have it here as the fcct autologin file. What it's going to do is use the variant fcos with version 1.1.0, and we're going to create a systemd drop-in unit in which we override the serial getty service, give the drop-in the name autologin-core.conf, and add the contents.
In the service section we override the ExecStart from the main unit and add a new ExecStart with a minus prefix, which ignores failure. So we prevent getty from doing its default ExecStart and instead run /usr/sbin/agetty with autologin for the core user on that terminal. In the storage section, we modify the file at /etc/hostname: we set the mode to 0644 and set the contents to tutorial, in order to set our hostname to tutorial. With the systemd pager snippet, we tell systemd not to use a pager when printing information, and to silence audit messages we create another file, a sysctl config, which sets kernel.printk = 4. That raises the kernel console logging level from debug to warning, in order to hide the audit messages from the console we'll be using. As you can see, this is a really human-readable format; you can easily go through this YAML file and understand what it's doing. This is what Dusty meant about using a human-friendly format. So now that we have the YAML file locally, we're going to use fcct, the Fedora CoreOS Config Transpiler, to turn it into an Ignition file. I'm going to use the pretty flag along with the strict flag, and provide the YAML file, which outputs the Ignition file. So it converted that into an Ignition config file, and as you can see, it's not as readable as you'd like it to be, it's not that human-friendly, so that's why we use fcct to transpile the data. So let's provision a basic instance. Before that, we're going to validate that the Ignition file we just generated with the transpiler is valid.
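For reference, the FCC file being walked through above looks roughly like this. It is reconstructed from the description and the published tutorial, so the exact snippet file names (the profile.d and sysctl.d paths in particular) are assumptions that may differ slightly from what's on screen:

```yaml
variant: fcos
version: 1.1.0
systemd:
  units:
    - name: serial-getty@ttyS0.service
      dropins:
        - name: autologin-core.conf
          contents: |
            [Service]
            # Override the default ExecStart; the '-' prefix ignores failures
            ExecStart=
            ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: tutorial
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd tools not to invoke a pager
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          kernel.printk=4
```

Running this through fcct with the strict flag produces the autologin Ignition config that the rest of the session provisions with.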
So we use ignition-validate, the tool we installed or downloaded earlier, to make sure our Ignition file is fine, and it succeeds: our Ignition file can be used for provisioning. Now we're going to set the correct SELinux label so that virt-install can access the autologin Ignition file. I'll do that real quick, and now we're going to provision a Fedora CoreOS virtual machine named fcos, with two virtual CPUs, 2 GB of RAM, the OS variant set to fedora32, and the network bridge set to virbr0. To make sure things work on your side, check the status of libvirtd. The virtualization daemon is loaded and currently inactive, but the socket is listening, and whenever we talk to the socket it will activate the service. So let's provision with this basic virt-install command: on the QEMU command line we pass the Ignition file, the disk size is 20 GB, and we use the QCOW2 image archive as a backing store, because we don't want to touch the base image itself; instead we attach another 20 GB disk backed by it. I'm going to copy that command here, and since in our systemd unit we asked to attach to the serial console, it's going to do that, and as you can see it starts provisioning. One question that we had in chat: at least one user has a system where virt-install doesn't recognize the --os-variant option. If you get an error when you pass --os-variant=fedora32, you can just drop that argument completely.
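Putting those pieces together, the virt-install invocation being described looks roughly like this. It's printed as a dry run here (remove the echo to actually launch); the file names match the ones used in this session, and the exact fw_cfg plumbing is an assumption to check against the tutorial:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the virt-install command from the session.
VM_NAME="fcos"
IGNITION_CONFIG="${PWD}/autologin.ign"
IMAGE="${PWD}/fedora-coreos.qcow2"
echo virt-install --name="${VM_NAME}" --vcpus=2 --ram=2048 \
  --os-variant=fedora32 --import --graphics=none \
  --network=bridge=virbr0 \
  --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_CONFIG}" \
  --disk="size=20,backing_store=${IMAGE}"
# Drop --os-variant=fedora32 if your virt-install doesn't recognize it.
```

The backing_store option is what keeps the downloaded QCOW2 pristine: the VM writes to a fresh 20 GB overlay disk instead of the base image.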
So if you're having the same problem, just drop that argument; it'll still work, the flag just makes the VM boot a little faster. Thank you, Dusty, for answering the question. The OS flag is totally optional; it just lets virt-install know that we are using Fedora 32. So as you can see, it has provisioned using the Ignition config, and to verify that the config worked: the tutorial login prompt says automatic login, so let's make sure things are working. We've got a console here, so let's inspect the serial getty service with systemctl. Here you can see that the autologin-core.conf drop-in we provided via the Ignition config file is in place; it overrides the ExecStart from the main unit and sets autologin to core, which is what gives us the terminal interface here. So the drop-in was applied and the Ignition config seems to work fine. Let's verify the other thing we did, setting the hostname to tutorial. You can do that with hostnamectl; here you can see the static hostname is tutorial, and you can also check it by looking at /etc/hostname. So that works. Now, as you can see, Fedora CoreOS is working and our Ignition config provisioned the instance the right way, so we're going to verify the RPMs that are packaged into Fedora CoreOS: the first is Ignition, which we used for provisioning; the second is the kernel.
The third is the moby-engine RPM, which provides the Docker engine and the docker commands. Along with that we have podman, systemd, and rpm-ostree, which, as Dusty introduced, is like git for your operating system. And there's the Zincati service, which is the auto-update agent inside Fedora CoreOS. So let's make sure we have the right versions here. If something seems to be failing, you can share with us, for example, that Ignition is not working and tell us the exact version that seems to be failing. Along with that, we use rpm-ostree... oops, that's not a flag, it's fine. As you can see in the rpm-ostree status, we have the version, which starts with the Fedora major version and includes the stream, and we have the commit hash here. So if something seems to be failing, you can give us the commit hash, and we can see how to fix it and provide you fixes; you can get that from the version and commit shown there. And to make sure the systemd service for Zincati is working fine, we're going to check that as well.
As you can see, auto-updates are enabled by default. I'm not sure about this error; Dusty, would you like to explain what it is? Yeah, it just looks like Zincati started before the network was completely up, but that's safe to ignore. Zincati will periodically poll the update server, and if it has an error on startup it will resolve itself later. Okay, so it just logged that information. You can systemctl restart zincati and it should not show that error; you'll have to put sudo on the front. Oops. Let's try it now: systemctl status zincati.service. As you can see, it initialized, so it seems to be working fine now. Why is that piped through a pager? The systemd pager workaround we did was only for the core user, so when you sudo, that setting isn't applied. Got it. So now it's working: Zincati was initialized before my network was configured, which caused that error, but it has an internal loop in which it polls for updates, so it will auto-update when it gets one. Another thing we can do, in order to investigate the Ignition logs, is use journalctl to view them, and I'm pretty sure the output is going to be long, so I'm going to pipe it into more. As you can see, Ignition started, it tells us about the files it created and all the steps we specified in the Ignition config file, and then it finished successfully. The next thing we can do is make sure the Podman container runtime is working. podman version tells us Podman is currently 1.9.3, built with Go 1.14.2, and you can get more details with podman info, sorry, it's info, not log. So here's the info about the Podman we'll be running. Another thing: the docker service can be initiated by using the docker command with sudo, or without it if you're in the docker group. Now, the issue with running Docker and Podman together is that it can result in unexpected behavior in both; you can check the frequently asked questions entry about that. If you are running both at the same time it can cause some issues and unexpected behavior, so it's highly recommended not to use them at the same time. In Fedora CoreOS we do have docker.service disabled by default, but if you'd like to start it, you can run any docker command; the Docker socket is listening, and that will activate the service. So let's continue; the docs mention the same thing in a note. What I'm going to do now is detach from the console, which I can do with Ctrl and the closing bracket. So it's detached now, and I'm going to destroy that VM with virsh, and we're also going to remove all the storage that we created for the session, and it does that as well. So it's now removed. Dusty, would you like to add anything about the first provisioning scenario? I think we're pretty good. Pretty much all we did in this lab was create our first FCCT config, run fcct to create Ignition out of it, and then explore what it did to the system on boot. Looks good. That's great. So now we can move on to a second tutorial, which is focused on running a bash script on boot. So what we are
What we are going to do is use icanhazip.com, together with console-login-helper-messages, to update the issue file so it shows the node's public IP address. That's really helpful when you are working in a cloud environment where you have different public and private addresses. We are going to store the script in /usr/local/bin as public-ipv4.sh when we provision the machine using Ignition. In order for it to run on boot, we are also going to create a systemd service, which runs before console-login-helper-messages-issuegen.service (so the issue service picks it up) and after network-online.target, with a condition on the existence of the issuegen-public-ipv4 stamp file. In the service we set the Type to oneshot; in ExecStart we run the public-ipv4 script that we'll be creating, and after executing that we use touch to create the issuegen-public-ipv4 file; RemainAfterExit is set to yes. In the Install section, the unit is WantedBy console-login-helper-messages-issuegen.service. We will embed this unit into the Fedora CoreOS config in YAML format, so let's see that file and focus on it. We use the same variant and set the version to 1.1.0. In the systemd units we do the same thing as before: a drop-in that overrides ExecStart in the getty unit to set autologin to true for the core user. Then we create another unit, the one just described, which runs before the console-login-helper-messages issuegen service and after network-online so we have our IP address available, and which executes the public-ipv4.sh file, which we haven't created yet.
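Put together, the unit being described looks roughly like this (names follow the upstream Fedora CoreOS tutorial; treat the exact stamp-file path as an assumption):

```ini
# issuegen-public-ipv4.service (sketch)
[Unit]
Before=console-login-helper-messages-issuegen.service
After=network-online.target
# Only run once: skip if the stamp file already exists
ConditionPathExists=!/var/lib/issuegen-public-ipv4

[Service]
Type=oneshot
ExecStart=/usr/local/bin/public-ipv4.sh
ExecStartPost=/usr/bin/touch /var/lib/issuegen-public-ipv4
RemainAfterExit=yes

[Install]
WantedBy=console-login-helper-messages-issuegen.service
```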
We do that in the storage files section; here it is. It creates a file at /usr/local/bin/public-ipv4.sh with the mode set to executable, and this is what the file looks like: it detects our public IPv4 address and prints it out through console-login-helper-messages, so it's shown when we provision the machine. Other than that, we set the hostname to "tutorial" like we did earlier, set SYSTEMD_PAGER to cat to tell systemd not to use a pager when printing information, and lower the kernel's console log level from debug (7) to warning (4) to hide the audit messages. This seems pretty simple; would someone like to add anything? Okay, sorry, I thought I heard something. So, as readable as this is, this is where fcct comes in: we are going to convert it into an Ignition config, with pretty output, the strictness set to strict, and the output set to services.ign. You can verify that the file was created: services.ign. As you can see, it isn't as readable as the YAML file; this is what we will provide to virt-install in order to provision the Fedora CoreOS instance. Now let's validate that our Ignition file is fine: ignition-validate services.ign && echo success. It succeeded. If the Ignition file has any problems, ignition-validate will show them to you in the console. Let's set the correct SELinux label again (it changes the security context so that virt-install can access the file), and now we'll use virt-install to provision our Fedora CoreOS instance the same way as before, but providing it a different Ignition config file.
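The convert/validate/relabel steps just described look roughly like this (flag names varied slightly across fcct versions; this matches the 0.x CLI used at the time):

```shell
# Transpile the Fedora CoreOS Config (YAML) to an Ignition config (JSON)
fcct --pretty --strict --input services.fcc --output services.ign

# Validate the result; silent exit 0 means the config is well-formed
ignition-validate services.ign && echo success

# Relabel so libvirt/virt-install is allowed to read the file
chcon --verbose --type svirt_home_t services.ign
```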
Here we use the same specs: we still provide two virtual CPUs, the RAM is 2048, and the os-variant is fedora32. If that gives an error, it's optional; on Fedora I believe it won't have any issues. It just makes the boot-up process a little faster by letting virt-install know that we have a Fedora instance. We import the image, use a network bridge, have no graphics, and on the QEMU command line we provide the services Ignition file, set the disk size to 20 GB, and use the qcow2 image as a backing store. So let's get started. It's connected to the domain and seems to be provisioning: this is where Ignition is working, this is from the SELinux policy, and it's gone now. Let's see: the systemd units are coming up, network-online has been reached, and the issuegen service seems to work fine. As you can see, with console-login-helper-messages we did get our public IP address here, and we can verify that the newly created service is working. What it did was run our script, which makes a request to icanhazip.com and sends the output to console-login-helper-messages; that is all we did to get the public IPv4 address, along with attaching the tty console with autologin. So this seems to be working fine. The file will also have been created; if you'd like to make sure, you can look at /usr/local/bin/public-ipv4.sh, and here you can see the file that we created using Ignition. So that's it. Now I'm going to detach my console session and destroy the running instance.
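The virt-install invocation being described is roughly the following (the VM name and file paths are placeholders; the fw_cfg mechanism for passing Ignition is the one from the Fedora CoreOS docs):

```shell
virt-install --name=fcos --vcpus=2 --ram=2048 --os-variant=fedora32 \
    --import --network=bridge=virbr0 --graphics=none \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/services.ign" \
    --disk=size=20,backing_store=${PWD}/fedora-coreos.qcow2
```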
Then I'll undefine it and remove all its storage; it's detached and gone. That's all from my side: I ran through the basic Ignition config files and demonstrated the basic provisioning scenarios, and now I'm passing over to Timothée to continue where I left off with the next two provisioning scenarios. Over to you, Tim. Thanks, Nasser. Awesome; do you have any questions so far? Okay, somebody just finished it as well, that's good. All right, let's move on to the next part. I'm going to do the setup; let me know when you can see it. "Yeah, we can see your screen." (A bit of back and forth about making the terminal fonts bigger in both windows.) All right. For the next two steps we are going to do SSH access and starting containers, and then we will do some updates. First, let's work on SSH. We are basically going to reproduce the same steps that we did before, but with a different Fedora CoreOS config and Ignition config. By default you only get one user, which is the core user, but no specific authentication or SSH configuration is set; you have to tell the system everything if you want to get access to it. So far we've mostly been setting autologin on the console, which is great for development and testing but maybe not so great for production. What you usually want is to set up an SSH key so you can access the system remotely when you deploy it as a server somewhere. So that's what we are going to do: we are going to add an SSH
key for the core user, and at the same time, to make it a bit more spicy, we will create a specific systemd service that will fail, and we'll see how the system handles that. And finally we'll add another systemd unit that will bring up a container, for running an actual service on the system: we don't just want to run Fedora CoreOS for the sake of running it, we want it to actually do something. So let's write the Fedora CoreOS config, and I will go over it with you here. First the basics: we have added a new users section saying we have one user named core, which is the default, and we add a specific authorized key to get access remotely. That's the SSH key that will be added to the authorized_keys file in the user's home directory. We still have our specific systemd drop-in for the tty unit to set up autologin; we'll keep that for now, it's simpler for debugging. Then we have the failing unit, the unit that will fail: it simply calls /bin/false, so it will fail, and we'll see how that goes. And then we have something a little bit more complex: a new unit that will run a container via Podman. This unit will pull the etcd container and run it with a set of parameters; it's basically a podman command. Writing this unit by hand is maybe not that easy, so what you can do is use podman generate systemd, call it with the container name, and you get the same unit written back to you. So that's our unit that will run etcd, and we have a couple of other things: we still have the hostname setup, the pager setup, and the audit silencing. All right, let's take this Fedora CoreOS config and copy it into the file. There we go.
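The podman generate systemd trick mentioned above works along these lines (the container name and image path are assumptions based on the era's quay.io etcd image):

```shell
# Start the container once by hand...
sudo podman run -d --name etcd quay.io/coreos/etcd

# ...then have podman emit a matching systemd unit to stdout,
# ready to paste into the Fedora CoreOS config
sudo podman generate systemd etcd
```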
Let me clear the screen first; all right, that looks better. Okay, so I've got my config here, I've got my image, and I will run fcct on my config and put the output in an .ign file. All right, I've converted it to Ignition format, so this one is all good, and I will provision a system with this config: some SELinux foo, then the virt-install, copying everything again. There we go. So our system has booted successfully and it has applied the config. What can we see here? First, let me scroll back on the screen: we can see that we have one failed unit. That's all right, that's the failure unit that we wrote, and we get a little snippet here on the console to tell us which unit has failed. If I go and check with systemctl status, the failure unit, of course, failed, so that's good. Then we had SSH access set up, so let's go over SSH. We'll disconnect from this console... and I made a mistake: I didn't set my real SSH key in the config, so I'm not going to be able to access this system. I will have to fix that right now. Let's destroy my system and reprovision it. I'm back up here, and we can see that my key has been added to the authorized_keys file on the system, so I can disconnect and directly SSH, with the core user and the IP which is here. I am authenticated with no password, as I didn't set any, and we disable password authentication by default. So all good: we got SSH access, and we got a failed unit, as expected. Now let's have a look at our new service: we added a specific systemd unit that runs an etcd container via Podman. If I take a look at the status of this service, it looks good: it's active, it's running, and we have the logs, so the unit should be running. I can also look at it using Podman itself, but I have to run this as root
because systemd is running this service as root, so it's Podman as root. So yeah, we've got the container running here; it's been up half a minute, that's good. And let's try some etcd commands. If you don't know about etcd, to make it quick: it's basically a distributed database where you can store key/value pairs. Here we will issue a first curl command and ask etcd to add a key named "fedora" with the value "fun". So if we copy this and run it, it calls our running instance of etcd, and etcd answered our query and said: okay, I've created this value, with this specific index. Very good. Let's ask etcd for the value again: we query all the keys in the database and pipe the output to jq to make it look a bit pretty. Here we get all the keys: the key "fedora" with the value "fun", the one we set. So all good: our etcd instance is properly up and running, and that is because Fedora is fun, right? Maybe we should change its value to "Fedora CoreOS", because it is Fedora CoreOS. I don't know offhand whether overwriting existing etcd keys like that is allowed, so we'd have to try it out. Okay, is there anything else, any questions in the chat for this section? "I didn't see any questions in chat. As tutorials always do, they provoke people to think a little bit more critically about what's going on, and we have some suggestions for features, but other than that we don't have any questions about the tutorial." Cool, all good, so let's move on to the last one. Oh, let me just destroy my system first, my running instance, so I can log out and destroy it. All good. The last tutorial is about running updates. We've said that one of the things that is great about Fedora CoreOS is automatic updates: you don't have to worry about them.
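The two curl calls in this demo look roughly like this (assuming etcd's v2 HTTP API listening on the default client port 2379):

```shell
# Create key "fedora" with value "fun"
curl -s http://localhost:2379/v2/keys/fedora -XPUT -d value="fun"

# List every key in the store, pretty-printed with jq
curl -s http://localhost:2379/v2/keys/ | jq
```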
They will happen automatically, you can manage how they happen, and you always get the latest features and the latest security fixes. But here we have a chicken-and-egg issue: we downloaded the latest version of Fedora CoreOS for this tutorial, so to see an update in action we have to go back, download an older release, and do everything again: start it and let it auto-update itself. I think a release happens approximately every two weeks, and there hasn't been one since this image, so maybe next week there will be one. So that's what I did here. We don't have anything automatic to download an older release, because that's not something you usually do. If you want to do that, you go to the Fedora CoreOS release page, look at the release notes, and you see the latest release there; you just pick the one before it and download that one. I don't think there's an easier way to go and download it: you just have to replace the version in all the URL strings. That's what I did in the instructions here, so you just replace the release number and you can go and grab those two files, which I'll do right now. Let's download the image, which should be quick, and let's download the GPG signature. Once you've done that, don't forget to verify what you've downloaded, then go and decompress it. There we go; I can just remove the one I had before, and I'll simply rename the older image to something with a recognizable name. Okay, so now I have my older image; let's do this again. We are going to write an Ignition config again, well, a Fedora CoreOS config, and convert it to Ignition. We'll do the autologin setup to make it easier, we'll keep our SSH key, that's easy to keep, we'll keep the console log level, and we'll remove all the units because we don't need them for this setup. So that's the config here.
Nothing specific: we have the authorized key, the getty config, the pager, and the silenced audit. I'll just copy the one from the containers tutorial and adapt it for updates; that's basically what it is: remove the failure unit and the etcd member unit, since we don't need them, and keep the pager and the silenced audit. So it's quite a simple config. Run fcct on it and call the output something about updates. Okay, we've got our new config, so let's start a system with that: some SELinux foo, and let's boot up the system. Here, depending on the connection and on how fast the system boots up, we'll try to be quick and see the update happening live. To do that we will watch two services. The first one is the status from rpm-ostree, so rpm-ostree status; the update agent has already started. And the Zincati status: here Zincati is running, and it said, okay, I'm initialized, and oh, there's an update, so I'm going to update the system right now. If we poll rpm-ostree status again, look at this: it says it's busy, we are deploying the update, it's working on making it available. That's the currently booted version, and Zincati is still working on it. Here we just have to wait for it to happen, which should be quite quick, and there we go: as you've noticed, my system just rebooted by itself, and we now have two versions. Okay, so let's have a look at the status again. Now we have two versions: the old version, the one we booted the first time, from June, and the one from July, the latest, the update. And we have a little star here, which marks the one that we are currently running. That's one of the basics of rpm-ostree: you can have multiple versions installed at the same time on a system, but only one running, and that's the one running here. We still have the other version available, and we can go back to it if we need to, which is what we will do in a second.
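Watching an automatic update land amounts to polling these two services (a sketch; timing depends on the rollout):

```shell
# The deployment side: shows booted/available versions, and reports
# "busy" while an update is being deployed
rpm-ostree status

# The update agent side: shows Zincati detecting and applying the update
systemctl status zincati.service
```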
Let me just take stock. Yeah, so that's the status. If you've got a full UTF-8 terminal, and not just a basic console, you will see a nice dot here instead of the star. Now, you might be wondering what changed between the old version and the new version; you can get this information by calling rpm-ostree and asking for the changes. If I do that here and ask for the changes, there are a lot of them; let me just clear the screen and do it again. Okay: from the previous deployment to the new one, that's a good diff, and we see all the packages that have been updated for the new version. You get the exact difference between the previous version and the new version, and these are classic Fedora packages, the same ones you would get on Fedora Workstation or Fedora Server. All right, but say something happened during the update: the latest version of some package that we've got inside has a bug and you need to go back, because you still have to make the system do useful stuff, and sometimes updates get in the way and you just want to have it working right now while you take time to fix things. Then you can ask rpm-ostree to roll back to the previous version. That's what we are going to do now. We have to be root here, so sudo rpm-ostree rollback, and go ahead and reboot. It said it moved the previous version from the back to the top, and it's rebooting. So here we go; if we have a quick look, hey, we are running the older version now. If I take a look at the status, I still have two versions available on my system, but the new version is not the one that has been booted, because we've booted back into the old version. And so what happens with Zincati? Zincati is smart enough to figure out that we made an update, something happened, and we had to roll back.
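The inspection and rollback steps just described, as commands (a sketch against a live system):

```shell
# Package-level diff between the previous deployment and the new one
rpm-ostree db diff

# Move the previous deployment back to the top of the boot order
# and reboot into it
sudo rpm-ostree rollback --reboot
```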
So now Zincati says: okay, auto-updates are enabled, I'm good; we have the new version available on the system, but there has been a rollback, so let's just not re-update right now. Let's just wait for the next update, and when the next update happens we will skip this one and go to the next one. "Also interesting in this output right here: you can see 'starting update agent zincati' and then the version of Zincati, and if you paid attention in the rpm-ostree db diff output earlier, you saw that Zincati is actually one of the things that got upgraded, and the new version was 0.0.12. So this is kind of proof that the rollback actually is using the older version of the software." Okay, I think I've done everything here; we've covered all the topics, so if there are any questions we can go through them. Micah says: I wonder if Zincati can log a message about detecting a rollback. Detecting one and choosing not to go to an update? I think it should; that would be nice. It probably does if you enable the debug output, but I think an info-level message would be useful, in my opinion. As far as specific questions, like I said, it looks like we don't have any technical issues outstanding; mostly people are just asking questions about Fedora CoreOS features and saying that they're interested in using it. Let's see what time it is: 11:23. We'll just hang out here for a little bit and see if anybody has any questions. If somebody wants to hop in and discuss things, maybe discuss features or just ask other questions, ask to join the session and we can have a talk back and forth with audio and video. So feel free to ask to join and we'll just pop you into the session; we can only have nine presenters at a time, so I don't think we'll have that problem, but maybe. So I have a question: if anyone wants to get involved with Fedora CoreOS, how can one get involved? Timothée, do you want to take it? So I would say there are several things to do right
now, and there's a bit for everybody. If you're more into writing code, Fedora CoreOS is made of a lot of programs, so you can have a look at the code that we have; it's all up there on GitHub under the coreos organization, where there are a lot of projects. The project that we use to track the changes, the things that are happening right now, is the fedora-coreos-tracker: if you go there you'll see all the issues we have and, potentially, the things that need to be worked on; there's a lot to do. If code isn't your thing specifically, you can help us with the docs: either try out what we just did, the tutorials, or just try out the docs and see if they work for you. If you go to the Fedora docs and then to Fedora CoreOS, you've got all the docs we have; it's available on a lot of platforms, so you can grab the one you like, try it, and configure the system to your liking. "I have a very practical recommendation if you want to get started with Fedora CoreOS: if you happen to have a server that is doing something simple, try using Fedora CoreOS to do that. Take your simple application, whatever you're running on that particular server (for me it was an IRC client), and just convert that server to use Fedora CoreOS. Because you are using it, and it's something that you actually utilize periodically if not daily, you will learn, you will become more familiar, and you'll become part of the community." It's always this way to get involved: work on the thing that you want to do, the thing that will pull you in, and the bugs that you hit, of course, if you use it; that's the easy recipe. And yeah, docs, translation... Sure, go ahead; I was going to say we have a question in chat about what types of platforms or
hardware Fedora CoreOS is tested on in CI. So right now we automatically test on AWS and GCP, and we test VMs pretty heavily: we use QEMU to test VMs, but we also use QEMU to test the bare-metal install workflow with VMs, so we don't just start the QEMU image, we also launch the coreos-installer bare-metal install process to install the bare-metal image into a VM. We don't currently test on bare-metal hardware directly; that's something we might look into doing soon, by adding, for example, Packet or something like that. I guess theoretically we could change the AWS instances that we use for testing to cover the bare-metal instances they offer as well, but we don't currently do that. And we don't automatically test on VMware right now. If you know somewhere, something, that is able to give us access to a VMware environment that we don't have to pay for, where we could automatically run our image, then we would be happy to add that to CI. I don't know of a cloud that specifically offers VMware, although maybe that exists; I haven't sought it out directly. Typically, at least with the two cloud platforms we test on automatically (and we have more coming), they give us accounts that are unbilled so that we can make sure our project works well there, and in return our OS works on their platform, so that when users want it, it exists, it's tested, and we know it works. So we have AWS and GCP right now, and we also have a pure OpenStack provider coming in the future, VEXXHOST, that we'll use to test our OpenStack images. But yeah, we would love to test VMware in the same fashion. David says AWS offers VMware; that's nice, David. Does that mean we can use the VMware offering under our current agreement? Is there anybody here that's
already using Fedora CoreOS and wants to give a five-minute spiel on how they're using it and what they like about it? Maybe Joe, do you want to come into the room? I saw something pop up on my moderation panel. Okay, I see four of nine; I can stop sharing. Maybe it takes him a moment... oh wait, now it says three of nine. Oh, there he is. "Hey, can you guys hear me?" Yep, we can. All right, cool; sorry for the kids in the background. I use Fedora CoreOS at my work, Forem (forem.com). We're providing community software, and the idea is that we want to hand the software stack over to clients, and we're using Fedora CoreOS as the base OS to run all the containers with Podman. We use Ignition to build the VM stack quickly, and the best part about the things we talked about during this session is iterating quickly on your workstation: I use a bash script that I wrote that just launches a VM on my workstation with Ignition, so I can test quickly; it takes about a minute for things to come alive. Then we move to managing our infrastructure with Terraform and all that jazz, on AWS right now, and that uses Fedora CoreOS too. Once you get used to the Ignition config and launching the software that you want to spin up on Fedora CoreOS, it becomes actually really nice, because everything is declared and it's very easy to use. What kind of scale are you working at? Well, it's just launching right now, so not much, to be honest. We're hoping we'll be able to provide individuals with a free open-source version of Forem, so you could run it on a server in your basement and launch our forum software stack quickly, or run it on AWS or GCP or DigitalOcean or whatever. That's one of our principal
goals, and the reason why we chose Fedora CoreOS is that we want to make sure our users have an updatable stack: we want to be able to dictate to them when they update, not only the software that our developers are making, but also to urge them to update Fedora CoreOS as well. Thanks. Joe also authored a documentation entry, which we are in the process of reviewing, on setting up a WireGuard VPN pretty easily with Fedora CoreOS, so I look forward to digging into that one. Yeah, one of the cool things that I would love to see is that you'd be able to launch a WireGuard VPN with Fedora CoreOS quickly: you could spin up a DigitalOcean droplet and have your VPN configuration ready to go, enabling people to securely access the internet through WireGuard, which is kind of the motivation for getting WireGuard support, the wg binary, into Fedora CoreOS. Yep, sounds good. Go ahead; I see a question from Matthew Miller: "As a production user, would you have any problem if there was a count-me process that just told us the same thing DNF count-me does: that a system running Fedora CoreOS exists?" Matthew, feel free to ask to join and we can talk with audio too. In general, we have plans for what we call the pinger service. ("I'm at my treadmill desk.") Nice. We have plans for a pinger service which will give us some more information, kind of similar to what Container Linux had in the past; I think it will probably be a little bit more comprehensive than just the count-me stuff. One thing that we want to do is be able to tell whether systems are successfully upgrading, and that will help us judge our upgrade window: if in the first hour of the upgrade window 20% of nodes are failing to do the upgrade, we would like to be able to stop the rollout. So yeah, I think we have plans for a pinger
but we don't have it quite implemented yet. But yeah, I would like for you to be involved in that. Let me read your comment... okay, the question is: Joe, does something like a pinger service raise any concerns for you? For the free open-source version of Forem, we're going to disable any telemetry by default; we believe that if you want to be tracked, you click a button, or here's how you turn it on. Make it opt-in versus opt-out. For the SaaS stuff that I'm working on for forem.com, we'll probably leave it on, just because we would want Fedora to know what our status is; I'm okay with providing the information back to Fedora. But I think by default you need to make it opt-in; that's a huge thing. You don't want to go the route other Linux distributions have taken, which is forcing things onto the consumers: ads in your motd, or forced tracking. Yeah, I think I had a similar opinion to begin with about opt-in. There are definitely counterarguments against opt-in, like "well, nobody will do it", and I think it depends on how it's done. If I understand correctly, I don't think the DNF count-me is opt-in; I think it's maybe opt-out, and Matt can probably confirm that; I'm not super familiar with it. In general, I think the goal is to try to collect as little information as possible while still being useful. So, for example, if you look at the request that is made from a system today to the update server, I think it actually tells the update server what version I'm currently on, and I think that is at least one piece of information that we want to collect; we just want to see how up to
date systems are. I'm not sure what else, but I don't think we plan for it to be anything that would be considered bad. Obviously that's a tricky route to go down, and one that we should definitely have ongoing discussions about to make sure it's done the right way. I do have a question, Dusty, that maybe you could expand on, specifically around auto-updates: do you folks have any plans for fine-tuning auto-updates? Right now, I think one of the biggest gotchas that got me as a new Fedora CoreOS user was that, as I was iterating on my setup, Fedora CoreOS updated and Zincati rebooted me; it took me a second to realize why I was rebooting in the middle of my initial boot. Are there any plans to make that a little more user-friendly for people who aren't consuming Fedora CoreOS in a clustered setup, for individuals who are really using it as, say, a VM that they're running on DigitalOcean or something? Yeah, I think there are two things that we've at least talked about, and one of them I know is implemented, although I don't know how it affects the very first run; I think it should just apply to the very first run as well. One of the things we've done is implement what's called a periodic scheduling strategy for Zincati updates, which basically says you can define a period of time during the week where updates are allowed. So if it's not, say, 1 am on a Saturday morning, when nobody's using our system, then don't allow updates, and if it is 1 am to 2 am on a Saturday morning, allow updates. So basically the user can control in what period of time the updates will happen. As part of that, you might not be part of the original rollout window; you'll get the update later, whenever the period of time you've designated for updates is open.
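The periodic strategy Dusty describes is configured through a Zincati drop-in; a sketch for the "1 am to 2 am on Saturday" example (the file name is conventional, not required, and the windows are interpreted in UTC by default):

```toml
# /etc/zincati/config.d/55-updates-strategy.toml
[updates]
strategy = "periodic"

# Allow update reboots only on Saturdays, 01:00-02:00
[[updates.periodic.window]]
days = [ "Sat" ]
start_time = "01:00"
length_minutes = 60
```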
designated update window is open. And the other one, I think, is specific to the problem you were having: were you logged into the system, or were you just watching remotely somehow?

The first time, I didn't catch it; I didn't realize it was updating because I didn't log in right away. But when I was iterating on the configuration I was working on, I would log in right away, and then all of a sudden it would just reboot, and I was like, what the heck is this? And then I was like, oh yeah, auto-updates.

Yeah, I think there might be a feature request to basically say: if anybody is logged into the system, either over SSH or on the serial console, disallow the reboot and print a wall message to the screen saying we've staged an update and it's going to reboot when you log out, or something like that. I'll have to track down that feature request; I think it exists. Does one of those two things help?

Yeah, I think it definitely does help. And I also think, as we see more individuals consuming Fedora CoreOS for individual servers that aren't clustered, it would be worth having some more user-friendly method of consuming the update, where you give them control. I don't know exactly what that would look like; I don't really know the point I'm trying to get to. Anything that lets individuals say, hey, this is when I want the update to happen. Obviously, if I disable automatic updates, or automatic reboots, it will stage the update but then I have to reboot myself. Oh really? Okay. So I think that's just one of those little gotchas of learning a new system. And for the record, I think forced updates are great in a lot of ways, because it's always easy to defer: "I'll update later," that's always the instinct, "well, I'll schedule this
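For the "disable automatic reboots and update on my own schedule" case mentioned above, one approach is to turn off Zincati-driven updates entirely and apply updates by hand. This is a sketch assuming Zincati's documented `updates.enabled` key:

```toml
# /etc/zincati/config.d/90-disable-auto-updates.toml
# Disable Zincati's automatic update driving; updates are then
# applied manually with rpm-ostree and a reboot of your choosing.
[updates]
enabled = false
```

With this in place, you would stage a new deployment yourself with `sudo rpm-ostree upgrade` and boot into it with `sudo systemctl reboot` whenever convenient, which matches the gotcha described here: staging and rebooting become two separate, operator-controlled steps.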
for later." So I think it's a good mentality to be in.

Yeah, it's kind of interesting; the philosophy behind it is one that's easy to reject, and I'll go through an anecdote here. When I first started at Red Hat, I worked as a consultant for one company for a little while, and they did everything in Amazon AWS. They were doing everything in AWS back around 2008; I didn't start working there until later, but they were a very early adopter. One thing they did: the systems in the office were all just dumb terminals, be it Windows or Linux or whatever, and everything interesting happened on development hosts running in AWS. People would log into them, and their home partition was essentially mounted as an EBS volume. There were, I guess, something like 12 different development hosts that everybody shared (it was a small shop), and every week about four of them would get rotated: they'd get removed and new ones would come up. Basically, that means you don't store anything on those systems, because it's going to get blown away after a period of time. If you run some long-running process that you need, you'd better store its state in your home partition somehow and automate bringing it up again on the new host; state does not accumulate over time. Every two weeks or so your development host is going away, so your default philosophy has to be: all right, I need to save this state, whatever I'm doing, if it's going to be a long-running thing. Originally you're like, ah, every time? I don't want to. And then after a little while it's just, okay, the way I build things now means this doesn't matter anymore. It's just a different mindset to get into. So once you get to that
point, it's like, all right, no biggie, a new update just came through, I'm good.

Yeah, it's hard to move away from treating your servers as pets rather than cattle, and I think Fedora CoreOS helps nudge you closer to the cattle side of the world for running infrastructure, which I think is really great.

David says: don't starve all the other systems by doing all updates at the same time. One thing that should help with that, David, is that each system gets a unique position in the rollout window. If you don't explicitly define what your risk tolerance level for updates is, a random one is chosen. So one machine might pick up an update at 1% of the rollout window, another at 60%, another at 70%; all of your machines won't update at the same time unless you've explicitly set the same risk factor on each of them. And if they're part of a cluster, the cluster itself can help control updates. We have a tool called FleetLock... sorry, Airlock, you're right Micah, Airlock (a reboot coordinator implementing the FleetLock protocol); I don't know why I called it FleetLock. I haven't used it myself, and it probably needs a bit more thought and love, but the concept is that your cluster can control how many nodes are down for an update at any particular time.

Nasser, are you talking to us? You're muted, we can't hear you; you might want to use the chat. I had my mic disabled. We were saying we have just one minute left, if I'm right. Yep. So hopefully everybody enjoyed the tutorials, and if you would, please do share them with other people who are interested, or on social media. The tutorials are in our documentation; in the past we've made a blog post or something like that, but I feel the work we've done this time to get them into our actual documentation will go a long way toward people discovering them and making
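That per-machine risk tolerance can be pinned explicitly through Zincati's rollout wariness setting. A minimal sketch, assuming Zincati's documented `identity.rollout_wariness` key:

```toml
# /etc/zincati/config.d/51-rollout-wariness.toml
# 0.0 means "give me updates as early in the rollout as possible",
# 1.0 means "as late as possible". When unset, Zincati derives a
# stable per-node value, which is what naturally spreads updates
# across the rollout window.
[identity]
rollout_wariness = 0.5
```

Setting the same explicit value on every machine in a fleet is exactly the case described above where they could all update at once, so it's usually left unset or varied per node.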
them more easily accessible. So please share those, and if you find issues, open pull requests; we welcome contributions. Thank you so much!