Hello, my name is Jens Freimann. I work on the virtualization team at Red Hat, and today I want to talk about an operator we developed that lets you install and maintain the Kata Containers runtime in a Kubernetes or OpenShift cluster.

What are Kata Containers? During this talk, I will assume that you already know. If you don't, there are other talks at DevConf that go into more detail. One of them is called "Kata Containers will impact you, yes, you", and I highly recommend watching it. But in one sentence: the Kata Containers runtime basically runs your containers inside of a virtual machine.

Now, one of the questions that often comes up is how that compares to KubeVirt and CNV. One way to look at it is that Kata Containers is KubeVirt turned inside out. With KubeVirt, you run your virtual machines in a container. With Kata Containers, you run your containers inside a virtual machine.

And how do you use Kata Containers in a Kubernetes cluster? There is already a way to do that, provided by the Kata upstream community: a DaemonSet called kata-deploy that allows you to install and uninstall the runtime and run pods with Kata Containers. So why do you need an operator on top of that? Well, operators are the native way in Kubernetes to manage your applications, and that's a good way to manage the Kata Containers runtime as well. So that's what we did: we started developing an operator that allows you to install and manage Kata Containers as a second or third or whatever runtime in your cluster.

The main part of this talk I want to be a demonstration. If you stay with me for ten more minutes, you will know how to use the operator yourself, and you can play with it on your own cluster.

So let's get started with the demo. I have a freshly provisioned cluster, and there are no operators installed yet. As a first step, we go to OperatorHub in the web console and search for "kata". Then you can just click Install here, and it will bring you to the next page. Here, in the future, you'll be able to choose between different update channels, so that you could choose between an unstable line or a more stable set, but that's not really defined yet. As a next step, you can choose the namespace where you want your operator to run. We do that here: I need to create a new one, kata-operator-system, and just hit Create. We leave the approval strategy at the automatic default; it doesn't really matter for this demonstration.

This will take a short while, and then we can go on, because before we can actually use the Kata Containers runtime, we have to create an instance of the custom resource. What is deployed now is just the operator; nothing is installed yet on the worker nodes where we need it. So just give it a short time. Here it's already finished, and now we can take a look at it and click on it.

The next step, really, towards the full installation is creating an instance of the custom resource. The custom resource is called KataConfig. Click Create Instance. At this step, we could select a subset of worker nodes in our cluster where we want Kata to be installed. And how would you do that? You would apply a label to those nodes where you want it to be installed, and then enter that label here in a match expression. If you did this, the operator would install the Kata runtime only on the set of nodes that have the label, roughly as in the sketch below.
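To make that concrete, here is a rough sketch of what such a KataConfig could look like. The API group, version, and selector field names are assumptions based on the operator around the time of this talk; check our GitHub page for the exact CRD:

```yaml
# Hypothetical KataConfig limiting installation to labeled worker nodes.
# Field names and API version are assumptions; verify against the CRD.
apiVersion: kataconfiguration.openshift.io/v1alpha1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:            # only nodes matching this selector
    matchExpressions:
      - key: kata-runtime            # illustrative label key
        operator: In
        values:
          - "true"
```

You would first apply the label to the chosen nodes, for example with `oc label node <node-name> kata-runtime=true`, and then create the KataConfig.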
We don't do that in this demo. We just leave it at the default, and that means it will be installed on all of the worker nodes in our cluster. Just hit Create here. Now the installation has started, and it creates a DaemonSet that runs an installer pod on each of the worker nodes. Let's take a look at this here. You see there are three installer pods running, one per worker node, and they basically pull down another container image that contains the Kata binaries, which are then installed on the host file system of the worker nodes. That's step one of the installation: just deploying the binaries to the worker nodes. But that alone is not enough to run pods with Kata Containers; we need two other things. This step here takes too long for us to wait for it to complete in the demo, so I have prepared another cluster where the installation has already gone through, and we'll go and inspect what that cluster looks like now.

So this is the second cluster, where the installation already went through. We have already seen the first step: the DaemonSet that was deployed and put the binaries on the worker nodes. The second step is placing a configuration file for CRI-O on those nodes as well. In OpenShift, we let the Machine Config Operator do that: the operator creates a MachineConfig object, and from then on, the MCO takes over and places the configuration file on those worker nodes for us. This is what the object looks like; it contains the configuration file in an encoded format. It also has a systemd one-shot unit definition that basically takes the initial ramdisk and kernel and puts them in the right place, so that the virtual machines that get created can use them.

Once this step is finished and we have the CRI-O configuration file and the systemd unit in place, the final step the operator does is create a RuntimeClass. With that RuntimeClass in place, we can create a pod, and the only difference from an ordinary pod is that we add one line, called runtimeClassName, and in our case the name is just kata.

Let's take a look. There is already a pod started up here, called example-fedora. Take a look at the definition of this one: that's a normal pod, everything looks as in every other pod, but there's one place here in the spec. Here it is: runtimeClassName: kata. And that's what turns your pod into a pod using the Kata runtime. So this is important: all you have to do in your definitions, in your deployment files, in your pod specifications, is add this one line, runtimeClassName: kata.

It's already running here, so let's open a terminal and take a look at the kernel command line. You see it has an option here, agent.use_vsock=true, and that shows that it's using a kernel command line that's meant for Kata. So we are inside the Kata virtual machine. I also added a Service and a Route for it, and it runs a simple web server, and it's reachable, so everything works as usual.

That's it for the demo; let me go back to my slides and just repeat the steps. You install the operator through the web console, or in a manual way that's described on our GitHub page. You create an instance of the KataConfig custom resource. And then you add runtimeClassName: kata to your pod specifications, deployments, whatever, and there you go: you're using Kata Containers (see the sketch below).
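To make those last two steps concrete, here is a sketch of a RuntimeClass like the one the operator creates, together with a minimal pod that uses it. The handler name and the pod details are illustrative assumptions; the runtimeClassName field itself is standard Kubernetes:

```yaml
# RuntimeClass as the operator would create it; "kata" as the handler
# registered by the CRI-O configuration from the previous step (assumed).
apiVersion: node.k8s.io/v1beta1      # node.k8s.io/v1 on newer clusters
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# An ordinary pod, except for the one added runtimeClassName line.
apiVersion: v1
kind: Pod
metadata:
  name: example-fedora
spec:
  runtimeClassName: kata             # the single line that selects Kata
  containers:
    - name: web
      image: registry.fedoraproject.org/fedora:latest   # illustrative image
      command: ["python3", "-m", "http.server", "8080"]  # simple web server
      ports:
        - containerPort: 8080
```

Everything else, such as Services, Routes, and Deployments, works exactly as it does for ordinary pods.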
What's the current status of this project? It's a project with ongoing development, not a polished product yet. Especially the part that adds real value, upgrades, is something we are working on right now. Upgrading the operator itself is working fine; upgrading the operands, the Kata runtime, is something that I'm working on and will finish over the next couple of weeks. All of this will also be available as a tech preview in one of the upcoming releases of OpenShift.

And with that, I end my talk. Please make sure that you check out our GitHub page. Give it a try, play with it, and most importantly, give us feedback by creating issues or even pull requests; those are very welcome as well. Thank you very much, and please ask me any questions that you might have. Goodbye.