Hi everyone, super excited to be here with you today for this hands-on tutorial about Flatcar. We are not so many today, so if you have any questions or any use case you'd like to explore during the tutorial, we can discuss it. Before starting, let's see the requirements. It would be nice if you at least have QEMU installed on your laptop; this is the one hard requirement for following this tutorial, because we are going to spawn instances using QEMU. We'll also play a bit with Terraform after that, but QEMU is really the main dependency, so please raise your hand if you have QEMU on your laptop, just to see how many folks... okay, no one has QEMU. That's not too bad, because I have some recorded screencast demonstrations, so I will play them and you can follow the tutorial and see things working. That's it for the requirements; you can still install QEMU while I'm starting the talk. My name is Mathieu, I work as a software engineer at Microsoft, mainly involved in the Flatcar ecosystem, so I work on related tools like Ignition or the Cluster API project's OpenStack provider. I'm also a DevOps teacher at my former engineering school, where I teach students to use Ansible, Terraform, and similar tools to deploy things in the cloud. And in France I'm involved in a couple of associations that promote DevOps tools and organize events and meetups around DevOps. So that's it for the introduction.
The goals of this tutorial are to learn a bit about Flatcar — how it works, how to operate it, how to deploy it on bare metal or in the cloud — and to see provisioning with Ignition, because unlike other operating systems, you won't provision it the regular way with Ansible or cloud-init. The main goal, to me, is to give you the key elements to later be able to work on Flatcar on your own and start building great infrastructure. How is this tutorial structured? Basically we have four hands-ons, and each of them has a GitHub repository. So there is a bit of theory with the slides, then the practice with the GitHub repository. The idea is for you to be able to reproduce at home what I'm going to show you, but also to follow along right now. Does anyone have any questions at this point? Okay. If you have a question, now or later, you can join the Flatcar Matrix channel. This is where we discuss everything around Flatcar, with the maintainers but also with the community, because Flatcar is community-driven. You can ask your questions during this tutorial, or later, or whenever you want: how to get started with Flatcar, how to use it with GCP, or whatever you have in mind. There is also a Flatcar channel on the Kubernetes Slack if you want to join it, to discuss with people, share your ideas, and get feedback on them.
Let's get back to the presentation. Is anyone already familiar with Flatcar? Please raise your hand if you've heard about it. So briefly: Flatcar is a fork of CoreOS Container Linux, from back before CoreOS became Fedora CoreOS; it was forked, and Flatcar is now the way to go if you want to deploy containers in the cloud. Flatcar is based on Gentoo Linux, and it's an operating system you can use to deploy container workloads. A container workload can be a simple Docker container, but it can also be a massive infrastructure of Kubernetes nodes. Most of the time Flatcar is used to run Kubernetes clusters, but using it to deploy a single nginx container is perfectly fine too. There are a few key concepts in Flatcar that are interesting to understand. First of all, auto-update: once you deploy a Flatcar instance, it is updated automatically. So if you have a big infrastructure with many nodes, you don't need to think about updating them; they will be updated automatically. This is pretty interesting if you're concerned about getting CVE patches, security fixes, or new features: you can just forget about the Flatcar instances in your infrastructure, because they auto-update by themselves. Flatcar is a community-driven, open source operating system, which means decisions don't come from the top; they come from the community. Any time someone from the community wants a new feature, or wants a package to be added to Flatcar, they just raise an issue on GitHub or discuss it on the Slack or Matrix channel, and if there is some traction on the request, we can implement it in Flatcar.
Flatcar has some key security elements. For example, /usr is mounted read-only, so you can't write anything into /usr, and there is a dm-verity check when the instance starts, so you can be reasonably sure /usr has not been modified before it is mounted. There is also A/B partitioning: when you have a Flatcar instance deployed and an update arrives, the update is written to the B partition, and when the instance reboots, the B partition becomes the active one. So you always have two partitions carrying a Flatcar image, and you can switch from one to the other; this gives you automatic rollback if there is an issue with the update, but we'll see that later in the hands-on. And there is no package manager in Flatcar. This is something quite disturbing for an operating system: once you SSH into the instance, you can't just run apt update, or emerge something, or dnf, or whatever — there is no package manager at all. The idea is to move the responsibility of installing packages to the Flatcar maintainers rather than to you; thanks to this, you avoid drifting from the Flatcar release by installing your own packages. If you really do want to install your own packages, there are mechanisms like systemd-sysext, a newer systemd feature that lets you merge extra content into /usr or /opt, and you can also extend the image with the Flatcar SDK, since there is a way to build Flatcar yourself. So yes, no package manager; this is the most confusing thing for folks starting with Flatcar or Container Linux. Now let's see the first hands-on. It's a discovery one: the idea is to get familiar with Flatcar, spawn a single instance with QEMU on your laptop, and run an nginx Docker container on it.
Flatcar is designed to run containers, so Docker and everything it needs are already installed on the instance; there is really no need to configure anything. You just start the instance, connect to it, and run your Docker commands. If you have any questions at this point... okay. So I'm going to run it for you. Everything is available in the GitHub repository if you want to do this later. The idea is to first download the Flatcar image with wget from the release server: you get the QEMU image, and everything you need is in the repo. Wait a minute, I have an issue with my... So, you download the QEMU image from the release server; it takes a couple of minutes if you have a good network provider, and once it's done you just decompress the image and you are ready to create your QEMU instance. Now the instance is starting, so you can see Flatcar booting live on your computer, and you can see there is no package manager at all: I just connected to the Flatcar instance, a regular Linux system, but no package manager, which is pretty confusing. And now I can just run an nginx container as a proof of concept, and that's it — nothing to do, everything is already installed, nothing to provision, unlike with Ansible on Ubuntu or Debian, where you need to install Docker and all the required tooling. I have nginx working fine on Flatcar. This is really the simplest way to get started with the operating system and to experiment on your own: you get the QEMU image from the release server, there is a helper script that runs it for you, and you can play with it, install stuff, and break stuff. If I get back to the slides: let's talk about provisioning now. So that was the discovery hands-on: we started a QEMU instance of Flatcar and ran a Docker container on it.
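For reference, the download-and-boot flow from the Flatcar quickstart looks roughly like this. Treat it as a sketch: exact URLs, compression, and helper-script options can change between releases, and the port number used for SSH forwarding here is just the script's usual default.

```
# Fetch the QEMU helper script and image from the stable release server
wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh
wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img
chmod +x flatcar_production_qemu.sh

# Boot the instance; -nographic keeps the serial console in this terminal
./flatcar_production_qemu.sh -nographic

# In another terminal, SSH in (the helper forwards SSH on localhost:2222)
ssh -p 2222 core@localhost

# On the instance: Docker is preinstalled, so this just works
docker run --rm -d -p 80:80 nginx
```

Nothing else needs to be installed on the guest; that is the whole point of the discovery hands-on.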
Now I want to bring some automation around it, because if I deploy this in the cloud with many thousands of Flatcar instances, I don't want to SSH into each of them to run nginx containers. So how can I provision the instance? You could use Ansible, yes, but by default there is no Python environment in Flatcar. The idea instead is to use Ignition. Ignition was developed by CoreOS / Red Hat; it's open source provisioning software, a binary written in Go, and there are a couple of things about it that are interesting to know as a provisioning tool. It runs from the initramfs, so really early in the boot of the instance, and it runs once: if the provisioning has been done correctly, it won't run again. This is not like cloud-init — cloud-init runs at almost every boot of the instance, whereas Ignition, if everything went fine, just doesn't run anymore. And with Ignition it's all or nothing: if there is an issue with your provisioning, the instance won't boot; it simply fails, and you land in the emergency target of the boot phase. You won't get a half-provisioned instance; the idea is all or nothing. But if the instance does boot, it means your provisioning was done correctly and you can trust it. Ignition is declarative: you define a state you want to reach, in a JSON configuration, and that's it. You can't have dynamic conditions like "if I'm running this Flatcar version, do something else." The idea is to reach a state: you define the files you want to exist on the system, the systemd units you want created during provisioning and enabled — the final state of the instance — and you express all of it with Ignition.
And finally, the configuration is generated: the idea is not to write the JSON by hand, because no one likes writing JSON; YAML is a bit nicer. With Ignition, you generate the configuration: you write it in Butane, for example, a YAML format that is transpiled into the Ignition configuration. That's a bunch of new terms, but you'll get familiar with Ignition quickly. Now let's move on to hands-on number two: how to do exactly the same thing we did in hands-on one, except that now we're going to use Ignition. In the hands-on two folder there is a README, a config.yaml, and the demo, so let me zoom in. This is a configuration file in YAML, not the JSON configuration we mentioned with Ignition: as I said, we define the state we want to reach in that Butane configuration, then we transpile it to generate the Ignition configuration, and when we start the Flatcar instance, we pass it that Ignition configuration. But first, let's define the state we want to reach. For example, I want an nginx service up and running: the nginx service simply runs Docker with the nginx image, whatever is needed to get nginx running, and it's enabled — you can see this `enabled` key here. It basically means "create a symlink," so when the Flatcar instance boots, it enables the nginx service and everything comes up running. Now I also want to display "Hello World," so I'm going to create a file; you can check the documentation, but I'm going to try to do it without it.
So the idea is to define `files`, and we want that file to be at /var/www/index.html — a really static file with some content, and that should be it. This is our beautiful website; the idea is just to show "Hello, Open Source Summit." If we look at what's going on: we want a file at /var/www/index.html, and we can see that this folder is shared with the nginx container here. So if everything works fine, we should get nginx serving this index.html page. I save it and transpile it: as I said, I need to transpile my YAML file into an Ignition configuration. If I check the Ignition configuration now, I have almost the same content, except it's rendered as JSON, and you can see some pretty nice things, like the data URL formatting of file contents, which avoids issues with escaping characters and so on, and the whole systemd unit has been stringified; it's really convenient, because I don't want to write that by hand. Now I have everything to start my Flatcar instance, so let me play the demo. This is the transpiling step, exactly what I just showed you, and now I start the Flatcar instance: you can see there is a flag passing config.json as the Ignition file. Flatcar is up and booting, and if we check, the nginx service has been created and is running, because we specifically asked to enable it at boot. We can check that the content is the one we defined in the config: there is my content in /var/www/index.html. And finally, we can check the logs of the unit: you can see it downloading the nginx image, and there it is — nginx up and running and serving the web page we just created.
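The Butane configuration walked through above would look roughly like the sketch below. The unit contents, paths, and message are illustrative, not the exact contents of the hands-on repository:

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: nginx.service
      enabled: true            # creates the symlink so the unit starts at boot
      contents: |
        [Unit]
        Description=Run an nginx container
        After=docker.service
        Requires=docker.service
        [Service]
        ExecStart=/usr/bin/docker run --rm --name nginx \
          -p 80:80 -v /var/www:/usr/share/nginx/html:ro nginx
        ExecStop=/usr/bin/docker stop nginx
        Restart=always
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /var/www/index.html
      contents:
        inline: Hello, Open Source Summit
```

It is then transpiled with the Butane binary, for example `butane --pretty --strict config.yaml > config.json`, and the resulting JSON is what the instance receives.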
So this is just a simple way to bring some automation around Flatcar: we pass an Ignition file, and with Ignition we define the state we want to reach. Any questions at this point about Ignition? Yeah? Oh, okay, I see what you mean. Since you're going to use Docker in the end, you can define your registry configuration in a file under /etc/docker, and you can write that file through Ignition. Each time a Flatcar instance boots, it gets that configuration — "I have to write a file under /etc/docker to define that specific registry" — and that's it: each time it pulls nginx, it will pull from your registry and not from the public one. Sorry, for people on the stream, the question was: is there any way to specify a registry through Ignition? Another question? So, this is nice, it works locally with QEMU and so on — and we'll take a five-minute break after this, that should be all right — but what about the cloud? We want to run all of this on Google or AWS or wherever. Well, Flatcar provides images for a whole bunch of platforms: bare metal, but also virtualization platforms like VirtualBox, and the public clouds — AWS, Azure, GCP, Equinix Metal, OpenStack, everything you can think of — plus some community-supported platforms such as Vultr, Rackspace, or Exoscale. Community-supported means we don't run CI testing against those platforms, but we know someone from the community has tested them and written documentation; it's available, but we don't test them before shipping a new Flatcar release. AWS, Azure, GCP, Equinix Metal, and OpenStack are covered in the CI, so each time there is a new Flatcar release — which means every two weeks — we ensure all tests pass on those platforms.
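As a sketch of the registry answer above: one common approach is to have Ignition write Docker's daemon.json with a registry mirror. The mirror URL is a placeholder, and whether you configure a mirror here or credentials elsewhere depends on your setup:

```yaml
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/docker/daemon.json
      mode: 0644
      contents:
        inline: |
          {
            "registry-mirrors": ["https://registry.example.internal"]
          }
```

Because this is just another file in the declared state, every Flatcar instance provisioned with this config pulls images through the private registry from its very first boot.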
So if you want to do exactly what we did with the nginx proof of concept on Azure or GCP or wherever, it will work as long as the cloud provider supports user data. This is a screenshot from Azure, and you can see that you can specify user data when you create an instance: in this specific field, you pass the Ignition configuration. Ignition has a mechanism to detect which cloud provider it's running on, and it fetches the metadata from the IMDS — Azure, GCP, AWS, all these cloud providers have this IMDS metadata service. The Ignition configuration is provided by the user in the user data field of the cloud provider, and Ignition fetches it when the instance boots. So it's exactly the same thing as Flatcar with QEMU, except in the cloud. Same thing with GCP, for example: you provide the user data and pass your Ignition configuration in it. It means that once you have defined your Ignition configuration, you don't need to rewrite it for each cloud provider; you just hand it to the provider through the metadata service. Now for the third hands-on, the idea is to use Terraform — I hope everyone is familiar with it. Terraform is a way to define and provision infrastructure as code: you define which instance you want to create, using which image, on which cloud provider. For this example, I'm going to use OpenStack, because it was the simplest way to run things. If someone wants to try right now with Terraform and OpenStack, I can provide an IP and credentials; otherwise, I'm just going to quickly show how it works. Moving on to hands-on three: in this folder, there is a Terraform configuration, and the idea is to give you something really plug and play.
If you want to try at home, you can just use this configuration, and everything should work fine. Let's have a look at the content — I'm going back here. This is the Terraform configuration for Flatcar on OpenStack, and the most important part is where we define the Flatcar instance. I think you've already seen some Terraform code before, so I won't go through each of these key/value pairs, but the most important one is `user_data`, because this is where we provide the Ignition configuration. Ignition can be defined with Terraform too, so you have everything in the same code base: your Ignition configuration and your deployment, and this is where you link that famous user data to the instance created by Terraform. If we look at this user data, it's a `ct_config` data resource for the machine's Ignition, and it's actually rendered from a template file; if we check the content of the template, it's what we saw earlier — the nginx service. So when you run Terraform, it takes that configuration, generates the associated Ignition configuration, passes it to the user data of the Flatcar instance, and then deploys everything on OpenStack. One particular thing that's interesting to see, which was not present earlier: with Ignition, you can also provide kernel arguments. The instance boots, checks the /proc/cmdline value, and if an argument that should be there is missing — like this one — it appends it to the Flatcar kernel command line and then reboots the instance, so the parameter takes effect. This particular parameter is just for auto-login: when the Flatcar instance comes up, you don't need to go through authentication; you just land in a shell and do your stuff.
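The wiring described above can be sketched with the community `poseidon/ct` Terraform provider, which renders Butane into Ignition. Resource names, the template path, and the flavor/image names are illustrative, not the exact hands-on code:

```hcl
terraform {
  required_providers {
    openstack = { source = "terraform-provider-openstack/openstack" }
    ct        = { source = "poseidon/ct" }
  }
}

# Render the Butane template into an Ignition JSON document.
data "ct_config" "machine_ignition" {
  content = templatefile("${path.module}/config.yaml.tmpl", {
    message = "Hello, Open Source Summit"
  })
  strict = true
}

# The Flatcar instance receives the rendered Ignition via user_data.
resource "openstack_compute_instance_v2" "flatcar" {
  name        = "flatcar-demo"
  image_name  = "flatcar"
  flavor_name = "m1.small"
  user_data   = data.ct_config.machine_ignition.rendered
}
```

The key link is the last line: `rendered` is the transpiled Ignition JSON, passed straight into the instance's user data, which is exactly what the cloud's metadata service hands to Ignition at first boot.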
This is just for testing purposes, so that on OpenStack I don't need to set up SSH access or anything like that; I can just reach the instance console. Let's see how it works: we go to the Terraform folder. This is what I showed before, I'll skip it — we've seen it too. I'm adding back the index.html, just for the proof of concept, and that should be good. Terraform is always three steps: init, plan, apply. `terraform init` is where the plugins are imported, then I run `terraform plan`. If we check, it's going to create this Ignition configuration — you can see it clearly — plus a bunch of things we don't really care about, like the SSH key for provisioning, and everything needed to create an instance. It's quite easy: basically just an instance and an image to create. I was actually going to deploy it live to show you... okay, I don't have Wi-Fi, but fortunately I have the demo, so I'll play that one. Then I apply the Terraform configuration: it compares what is deployed on the OpenStack side with what should be deployed, and then it creates the resources. This is where it creates them, and that's it: I've created an instance on OpenStack with this IP and this user data. You can also check the outputs any time you want, so if you have CI behind it, you can just grab the IP. And finally, the last topic on Flatcar, which is the update. This is one of the most important parts of Flatcar: as I mentioned earlier, you don't need to provide any extra mechanism to update Flatcar; you don't need to think about it at all. So how does the update work? When you deploy a Flatcar instance, there are two systems that run for the update: the update engine and Locksmith. The update engine is a simple binary that polls a public update server.
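The three-step flow mentioned above, as it would run from the hands-on folder (the directory name is assumed):

```
cd hands-on-3/
terraform init      # download the openstack and ct provider plugins
terraform plan      # preview: ignition config, SSH keypair, instance
terraform apply     # create the resources on OpenStack
terraform output    # e.g. print the instance IP for use in CI
```

The plan/apply split is what makes the demo safe to follow along with: plan only shows what would change, and nothing touches OpenStack until apply.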
The timing is somewhat randomized, but roughly every 20 or 25 minutes it pings the update server: "Hey, is there a new Flatcar release, yes or no?" If there is one, it downloads the release — a new image — writes it to the B partition, as I mentioned in the introduction, and sends a signal: "I've downloaded a new version of Flatcar, written it to partition B, and I'm ready to reboot now." Locksmith is the daemon that takes care of the rebooting. You can define a strategy through Ignition: reboot as soon as the update is there; or please reboot each Wednesday between 2 a.m. and 3 a.m.; or don't reboot at all, let me handle things myself. You can even use etcd: if you have a big cluster with hundreds of instances, you can say "try to get a lock from the etcd cluster, and only if you hold the lock may you reboot; once rebooted, release the lock." That way you can spread the reboots across the cluster. So the idea is that Flatcar provides an automatic way to update: once it's deployed, you have nothing to do but wait for the updates. I won't run this hands-on, since it basically just shows in a terminal what I've described. The interesting thing about Flatcar is that there are three release channels: Alpha, Beta, and Stable. We've seen the examples with Stable, and we release Stable every two weeks: each Stable brings new patches, security patches, and CVE fixes for the software installed in the operating system. You have, for example, Docker and other software that gets security issues essentially every day, so every two weeks we ship a Stable. Alpha and Beta are where new features land in the operating system.
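As a sketch, the reboot strategies described above can be set by having Ignition write Flatcar's update configuration file; the strategy and window values below are illustrative examples, not the hands-on repo's contents:

```yaml
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/flatcar/update.conf
      mode: 0644
      contents:
        inline: |
          REBOOT_STRATEGY=etcd-lock
          LOCKSMITHD_REBOOT_WINDOW_START=Wed 02:00
          LOCKSMITHD_REBOOT_WINDOW_LENGTH=1h
```

`REBOOT_STRATEGY` can also be set to `reboot` (reboot immediately after an update) or `off` (never reboot automatically), matching the three strategies described in the talk.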
When an Alpha with brand-new features is ready, it gets promoted to Beta, and after roughly six months the Beta is promoted to Stable. So when you get a Stable release, you can be pretty sure nothing will break with the update, because it spent several months in Beta beforehand, and users run Beta in their clusters — so you can trust the update. By the way, speaking of updates, you can watch the release coordination channel — it's all public — where we announced that we're starting the release today for next week. You can track that if you're interested in seeing how it works. And if you have questions, as I said, you can try things out and join the Matrix or Slack channel. It's too bad I don't have network anymore; I had it earlier, but well, that's life. If you have any questions, please feel free to reach out on the Flatcar Matrix channel or the Flatcar channel on the Kubernetes Slack. I'm here for the next two days, so if you want to discuss the operating system, Ignition, Cluster API, or whatever, please feel free to ping me. Thank you.