Hi everyone! Thank you for joining us in this CNCF webinar. My name is Nick, I'm heading the DevRel team at Spectro Cloud, and today we're going to be talking about Kairos, an open source project that helps you build immutable Kubernetes infrastructure. Today I'm joined by Ettore. Hi, I'm Ettore, head of open source at Spectro Cloud. The topic for today is a Kairos deep dive with some demos, but first we're going to go through a couple of questions with Ettore so that we can introduce the project better. So first of all, Ettore, can you tell us a little bit about your background? Yeah, I've been working in open source for more than 15 years. I've been a contributor to several open source projects, and I've also been working on Golang and operating systems in general. I was also a Gentoo developer and a Sabayon developer; those are Linux distributions that are very well known. So I have quite a mixed background. I also have experience with Cloud Foundry; I was developing in the Cloud Foundry community for SUSE. Okay, cool. Sounds good. And so can you tell us a little bit about what Kairos is and what motivated you to create the project? Yeah, Kairos actually started from a need, I would say, because in one of the open source projects I contribute to, which was Sabayon and is now Mocaccino, we really needed an immutable and distributed infrastructure. It came out of a need, because we have a lot of build systems run by the contributors, so we needed a system which was immutable, distributed, and scalable. On top of that, I also have a passion for immutable systems. I was a long-time k3OS user myself, and I've also worked at SUSE and on Rancher, so I have a deep love for the design of immutable systems. That's where Kairos comes from. Okay, that's cool.
So we said that Kairos is a meta-distribution. What do we mean by that? What is a meta-distribution? Yeah, that's a good question. Kairos is a meta-distribution because it can be overlaid on top of other Linux distributions. I don't want to say it's like Gentoo, because Gentoo is also a meta-distribution, but Kairos doesn't have a base system: you can use whatever OS you want as a base. Indeed, in the Kairos releases you can find images based on Alpine, openSUSE, and Ubuntu; there is literally no limitation in that aspect. Okay, sounds good. So can you tell us a little bit more about the foundations of Kairos, walk us through an introduction to the project, and also what makes Kairos different from other solutions? Yeah, I would be happy to do that. All right. So Kairos tries to solve one of the issues which is crucial at the edge, when deploying Kubernetes on bare metal. First of all, when deploying Kubernetes at the edge, we're trying to move computation closer to the data and to the consumers of that data. We want our cloud native applications to be closer, not only for increased throughput and better latency over the network, but also for better analytics, local AI data processing, and everything that is user-interactive. That brings a lot of challenges. One of those, for example, is what you do with the machine: what's the management lifecycle of a machine, both the initial deployment and how you handle updates at the OS level? And what about your Kubernetes distribution? And what about security? There are a lot of open questions when we think about Kubernetes applied at the edge.
And one of the most important pieces is also how you customize the OS, because in an ideal world, if we think about immutability, we think of a system that doesn't change. But we would still like to introduce some changes to the machine, like additional kernel modules, depending on the problem you're facing where you are deploying Kubernetes. So the mechanism needs to stay a little bit flexible to take that into account. Kairos, basically, is not only a meta Linux distribution by itself; it's very tied to Kubernetes. Indeed, the whole lifecycle management goes through Kubernetes. That means upgrades are also managed via Kubernetes, and they can follow a deployment rollout strategy, like the ones you are used to for applications, so you can apply the same logic, but to the OS instead. And Kairos itself is just a single container image. That's probably the aspect I would like to underline here. The image itself contains all the requirements for it to be bootable, and that includes the kernel and the initrd, for example. That makes it a little bit different from other distros, including other immutable distros where, during upgrades, there are more moving pieces. With Kairos the upgrade is very atomic, because it's one single image that gets pulled. Okay, so let me take a step back here. What we are saying is that, basically, Kairos helps you build an immutable operating system from whatever container image you have, and on top of that it can also help you deploy and distribute Kubernetes. Is that correct? Correct. Correct. Exactly. So it is focused on running Kubernetes, but it can also run other workloads, right? The design by itself is very distro-agnostic, and it doesn't have any strings attached to a specific implementation. There are a bunch of requirements in the layout that we adopt, and those we can carry over across all the distros.
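Since a Kairos derivative is just another container build, a minimal sketch of such a Dockerfile might look like the one below, assuming an openSUSE-flavored Kairos base. The image tag and the package being added are illustrative; check the Kairos releases for the actual image names.

```dockerfile
# Start from a published Kairos image — the tag here is a placeholder,
# not a reference to a specific release.
FROM quay.io/kairos/kairos-opensuse:latest

# Layer in extra packages the stock image lacks (hypothetical example).
RUN zypper --non-interactive install htop && \
    zypper clean --all
```

Building and pushing this image with your usual container tooling yields an artifact that nodes can boot from, or upgrade to, exactly as described above.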
And when an OS is converted, let's say, to a Kairos one, it inherits all the features of Kairos. One of the most important ones is the installation process of Kairos itself. If you boot the Kairos ISO, by default you are shown a QR code, and you can use that QR code to complete the installation. In this way, at first boot the node is waiting for a configuration, and you can provide that configuration by logging into the machine. You can do an interactive install, where an installer guides you step by step to set up a Kubernetes cluster with just a couple of questions. Otherwise, you use the QR code with the Kairos CLI, which connects to the machine and pushes the configuration. This is one example of the key features that whatever meta Linux distribution you build on top of Kairos will inherit. In the same way, you get aspects like being cloud native and container-image-based, and that means you can use... Okay, so let me interrupt you. Does that mean that when you want to build and deploy your operating system, it's as easy as creating a Dockerfile? And you're also saying that you can use Kubernetes itself, and Kubernetes CRDs, to manage the same thing, the automated installation and all that? Yes, exactly, exactly. What Kairos does behind the scenes when doing an upgrade is create an image file on the state partition of your disk. During installation, Kairos sets up a fairly static partitioning schema that you can customize, but there is a strong separation between the OS data and the user data. This is a design choice that has something in common with Android, if you want to think of it that way: you have a section of the system which is reserved for the OS, and when we do an upgrade, we just pull a container image and swap the image.
So you can use the container image as a single source of truth in the whole Kairos lifecycle management. Everything comes from a container image: from that image you can create an ISO for booting, or you can use the container image for upgrades, and when nodes have to upgrade, they just point to a container registry to consume the new image. Okay, and when you upgrade, is there any possibility to roll back in case of failure? Yes, in case of failure there is a boot assessment strategy built into Kairos. Let's say you upgrade and the upgrade fails: the node automatically boots into the fallback system. The fallback system is the former image that was used to boot before the upgrade happened. I want to underline that the upgrade is an atomic action. You don't, for example, reboot the node and perform the upgrade during boot; the upgrade runs in the live system, and the next time the node reboots, it is already in the upgraded system. The strategy we apply there is an assessment of the boot, and if it fails, you get back to the system that last worked. Okay, sounds good. And I also wanted to ask you: this is an operating system in the end, so knowing that it's immutable, how can you customize it? Meaning, how can you add specific users or configure your own settings, those kinds of things? That's a great question. As input configuration from the user, Kairos adopts cloud-init, and we stick to this format for everything. This covers user configuration and running generic commands on the OS before booting: every customization has to happen in a cloud-init configuration file. Kairos itself supports being handed a cloud config file during installation, so it can be served via HTTP, for example, or it can also be provided manually.
So you can copy the file and perform the installation via the QR code; what you send to the machine is always a cloud-init configuration file. For example, if you run Kairos on a cloud provider, it will try to get the cloud-init configuration from the data sources of the cloud provider, so you can specify the cloud-init config file directly in the management console of the cloud provider as well. Okay, that's very handy. And I would say the follow-up question is: because it's still an operating system, one important thing is how do you manage packages in that operating system? Yes, that's a great question again, because as we said, Kairos itself is a container image, so we have to see it as a pipeline. If you want to customize the OS, you must rebuild the OS. That's a key strategy of an immutable system: you're not tweaking the system while it's running; instead, you rebuild a new image and you push that image as an upgrade for your cluster. There are instructions you can leverage in Kairos to handle some customization to some degree, but the streamlined use case is to rebuild the OS from scratch. Okay, Ettore, thanks very much for all this information. Now what I propose is to walk through a couple of quick demos. This first demo shows how to deploy a Kairos configuration at the edge by simply booting up an ISO image available from the releases and using the generated QR code. Here is a virtual machine we have just created from VMware vCenter. We've mounted the ISO image and are now waiting for the QR code to be displayed on screen. Now let's assume we have the QR code as a PNG file. I have that file on my machine here, where I'll use the Kairos command to deploy the configuration to the Kubernetes edge location. That's one parameter I need; the second is the YAML file containing the Kairos configuration.
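A cloud-init-style configuration of the kind sent to the node might look like the sketch below. The field names follow Kairos's cloud-config format as documented at the time; the password and SSH key are placeholders.

```yaml
#cloud-config

users:
  - name: kairos
    passwd: kairos                        # placeholder password
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... nick@laptop   # placeholder public key

# Enable the bundled K3s distribution on this node.
k3s:
  enabled: true
```

The same file works whether it is served over HTTP, provided by a cloud provider's data source, or pushed to the node through the QR-code flow.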
Here you have an example where a couple of customization options have been defined, like the SSH key, a kairos user I want to create with the password kairos, and also a DNS customization. On top of that, we are adding the Kubernetes layer by enabling K3s. Now we are ready to send the configuration. We have a couple of parameters here: first the PNG file, then the YAML configuration file, and we also specify on which drive to install the system, here /dev/sda. Finally, we also want the system to reboot, so we add that option too. Okay, the payload has been sent. Let's take a look at the edge server. The payload is being received, and we see that the installation is starting. After a couple of minutes, the system reboots and we can now access the machine via SSH. Here I'm logging in without a password thanks to my SSH key. Let's check that Kubernetes has been installed and that we have a basic K3s environment up and running. Okay, everything looks good. Let's move to the next demo, where we're going to automate the Kairos installation directly from Kubernetes by using custom resource definitions. This time, we need a YAML file that contains the configuration of the Kairos OSArtifact. This is not a Kubernetes native object, but we have added a custom resource definition to the Kubernetes API, which means the OSArtifact configuration can be understood by Kubernetes. A custom controller monitors CRUD operations on that object and takes the appropriate actions. In our case, we're going to create a new object which will start the build of the ISO image. This time, the image will directly include the custom cloud-init configuration, so we won't need an interactive installation like in the previous demo with the QR code. New Kubernetes objects are created to build and serve the ISO to the end user: a pod is created to run the process that builds the ISO, and a service is also created so the ISO can be downloaded directly over the network.
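An OSArtifact object of this kind, as exposed by the Kairos osbuilder operator, might look roughly like the sketch below. The API group, version, and field names here are assumptions recalled from early versions of the osbuilder project, so verify them against the CRD actually installed in your cluster.

```yaml
apiVersion: build.kairos.io/v1alpha1   # assumed API group/version
kind: OSArtifact
metadata:
  name: hello-kairos
spec:
  imageName: quay.io/kairos/kairos-opensuse:latest  # placeholder base image
  iso: true                   # ask the controller to build an ISO
  cloudConfig: |              # baked-in config, so no interactive install
    #cloud-config
    users:
      - name: kairos
        passwd: kairos
```

Applying an object like this with kubectl is what triggers the builder pod and the service that exposes the resulting ISO.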
Here we are monitoring the build process, which takes a couple of minutes. When it's finished, we use curl to download the custom ISO directly from the Kubernetes service. The next step is to mount the ISO into the virtual machine and see how it boots up, and whether we can log in with the kairos user from the console. Here we are at the stage where the installation process has finished and the system has rebooted. We're ready to log in as the kairos user to see if the cloud-init configuration has been applied. Okay, we can log in with our user. All good. Now let's jump to our last demo. We'll show you how to orchestrate a Kairos upgrade from your Kubernetes cluster right at the edge. We're going to use a similar approach, in the sense that we're going to add a custom resource definition to create a new object type in Kubernetes. This time, it's going to be a Plan custom resource, available from the system-upgrade-controller project, which provides a general-purpose, Kubernetes-native upgrade controller for nodes. This Plan contains the information to perform the Kairos upgrade. Here are a couple of parameters to highlight: the target image version, which includes both the latest version of Kairos and of K3s, and the upgrade image, which is the Kairos openSUSE one in our case. We deploy the Plan object by using kubectl on the Kubernetes cluster at the edge, which triggers the orchestrated upgrade of the cluster. The process is executed from a pod that is automatically created as the Plan gets deployed. Here we're monitoring the logs from that pod. If you look closely, you will see that the current active image is changed to passive and that the new image is now replacing the active one. Then the system reboots. As we look back at Kubernetes, you will see that the pod is now marked as completed and that both the Kubernetes version and the Kairos version have been updated. That concludes our demos for today. Okay, those were very nice demos.
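The Plan described in this demo, using the system-upgrade-controller's `upgrade.cattle.io` API, might look like the sketch below. The version string and image tag are placeholders, and the `suc-upgrade` entrypoint is how the Kairos documentation wires the upgrade image into the controller; check the current docs for exact values.

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: os-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1                      # upgrade one node at a time
  serviceAccountName: system-upgrade
  cordon: true                        # cordon each node while it upgrades
  nodeSelector:
    matchExpressions:
      - key: kubernetes.io/os
        operator: In
        values: ["linux"]
  version: v1.x.x-k3sv1.x.x           # placeholder Kairos + K3s version
  upgrade:
    image: quay.io/kairos/kairos-opensuse  # placeholder upgrade image
    command: ["/usr/sbin/suc-upgrade"]
```

Deploying this with kubectl is what creates the upgrade pod whose logs show the active image being demoted to passive before the reboot.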
Now, Ettore, can you walk us through some of the big items coming up on the roadmap? Yes, sure, I would be very happy to. We actually have very exciting items on our roadmap. Right now we are looking at integration, which is the area we are focusing on: making it possible to create these derivatives more easily. For derivative creation with Kairos, we are working on a controller that lets you create immutable, Kairos-based distributions directly from Kubernetes itself, so it will be completely driven by the API. There are many other topics we are going to touch. For example, secure supply chain with Cosign: as we said, everything is a container image in Kairos, including the OS itself, and it's published to container registries, so we can apply all the tools of the container ecosystem at the OS level. That means being able to verify the OS with Cosign-verified images, and also being able to generate SBOMs (software bill of materials reports) directly from the images themselves. We also have a track for security hardening. We are also planning an integration with Cluster API (CAPI) for lifecycle management. On top of that, we already have partial support for what we call peer-to-peer support in Kairos, which lets you create clusters on top of libp2p. Basically, that means you can already stretch a Kubernetes cluster up to 1,000 km: you can create clusters whose nodes automatically connect to each other, regardless of the network, thanks to libp2p. But yes, this is still experimental. That's more or less all. We have other exciting items, but those are the ones I would like to underline, I think. Okay, sounds good. Can you tell us where people can find you? Do you hold any office hours, if they want to contribute or learn a little bit more about Kairos? Yes, that's actually a very good question.
We have a Matrix channel, so you can join Kairos IO on Matrix. We are on Twitter as well, as Kairos OS, and of course we are on GitHub; everything is open source, so you can find us there. We use GitHub Discussions to communicate with the community, and we have office hours: there is an event calendar on our website that you can use to join us, and we have a weekly appointment. We hope you find Kairos as exciting as we do. We also hope this video has been interesting and has made you keen to join our community. Thanks for watching, and we'll see you in the next one.