Hello, today I'm going to show you how to install the OpenShift sandboxed containers Operator on top of OpenShift Container Platform version 4.10. I'm logged in as an administrator in the OpenShift console, and I'm going to click on OperatorHub. Let me type "sandbox" in the filter, and there is the OpenShift sandboxed containers Operator. I'm going to click on it, and I see a description of the operator. It's based on the Kata Containers open-source project, and it provides an OCI-compliant runtime to run your workloads inside lightweight virtual machines. So they have their own isolated kernel, and that adds an additional layer of isolation. There is additional documentation at the bottom of the page that describes the various concepts. In particular, a sandbox in a sandboxed containers environment corresponds to a pod in OpenShift.

So let's install the operator. We have to select the update channel. The installation always happens in a namespace, and here that namespace has to be openshift-sandboxed-containers-operator, and the update approval should be Automatic. So we just click Install, and as you can see the installation is relatively quick. Our operator is now installed and ready for use.

In order to trigger the operator, we need to create an instance of a KataConfig. So we click Create KataConfig, and I'm going to use the name demo-kataconfig. There is a kataConfigPoolSelector that you can use if you want to specify the machines, that is the nodes, that should host your sandboxed containers. In my case, I'm going to leave it as is, which selects all the nodes on my cluster. Now I click Create, and my KataConfig is created and applied. We can watch our operator doing its work by clicking on the YAML tab. We're going to see that the operator is in progress, along with a list of the nodes where it is currently installing the required binaries. Let's switch to a terminal to be able to see what is happening.
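The KataConfig created in the console above can also be written as a manifest and applied with `oc apply -f`. This is a minimal sketch, assuming the `kataconfiguration.openshift.io/v1` schema installed by the operator; the pool-selector label shown in the comment is a hypothetical example, not from the video:

```yaml
# Minimal KataConfig; with no pool selector it targets all worker nodes,
# matching the default behavior described in the console.
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: demo-kataconfig
spec: {}
# To restrict installation to specific labeled nodes instead, set:
# spec:
#   kataConfigPoolSelector:
#     matchLabels:
#       custom-kata-pool: "true"   # hypothetical label
```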
In this other terminal window, I'm going to watch the output of my demo KataConfig. We see that it's still in progress. Since the installation is going to take a while, we are going to fast-forward through time. Right now it's installing on worker 2, and that's the bottom window in that other terminal. We are going to watch that installation progress, and then the worker node is going to reboot. You can check the time it really takes by looking at the clock at the top right.

So now our worker node is rebooting, and that's the usual Linux boot. The reason we need to reboot the node is that we are installing a version of QEMU that is optimized for this use case, called qemu-kiwi, and that's what lets us run our virtual machines. Doing this software installation on Red Hat Enterprise Linux CoreOS requires a reboot. This is also the reason why we update one node at a time: to preserve cluster availability. You can do things in parallel, but then you're going to make your cluster less available for other workloads while this happens.

We are now done with worker 2, so the operator adds it to the list of completed worker nodes and starts the installation on worker 0. Installing on worker 0 works exactly like for worker 2, so we are going to fast-forward until it is marked as completed and the operator shifts to the last worker in my cluster, worker 1. We see it appear in the in-progress list, and again let's fast-forward. Now the last node is rebooting. Once the binaries have been installed on all workers and they are all marked as completed, the operator installs a runtime class called kata. This is done now, and my cluster is ready to run sandboxed containers. I can now describe the runtime class itself, and that will tell me information such as the pod overhead that is incurred by this class.
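Describing the runtime class (for example with `oc describe runtimeclass kata`) reveals an object along these lines. This is an illustrative sketch: the handler name comes from the operator, but the overhead numbers below are assumptions that vary by release, so check your own cluster:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
# The handler tells the container runtime (CRI-O) to launch pods
# using the Kata runtime, i.e. inside a lightweight VM.
handler: kata
# Fixed per-pod overhead, accounted on top of the containers' own
# resource requests; values here are illustrative only.
overhead:
  podFixed:
    cpu: 250m
    memory: 350Mi
```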
So you have a fixed amount of CPU and memory that is overhead for running inside a virtual machine. The CPU is used for things like I/O operations, and the memory is used to run an additional kernel and to have things like file buffers and so on. So that's basically it. In the next video, we are going to see how to run workloads inside such a sandboxed containers configuration. Thank you, and see you in the next video.
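As a preview of running a workload on the new runtime class, a pod opts in simply by referencing it through `runtimeClassName`. This is a minimal hypothetical sketch; the pod name and image are placeholders, not from the video:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example          # hypothetical name
spec:
  # Selecting the runtime class installed by the operator makes this
  # pod run inside its own lightweight virtual machine.
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
    command: ["sleep", "infinity"]
```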