Welcome again to yet another OpenShift Commons briefing, as we like to do on Mondays. We like to hear about new things and new ideas, and we've been going through a number of things around the latest release, OpenShift 4.8. We have with us today Adel Zaalouk, one of our many OpenShift product managers, and he's going to talk about the dawn of OpenShift sandboxed containers. I know that sounds a little mysterious, but by the end of the hour we should know what he's been thinking and how it's going to impact the latest release of OpenShift. So Adel, take it away.

Sure, thank you, Dan. Hi everyone, my name is Adel Zaalouk and I am the product manager for the new Tech Preview product called OpenShift sandboxed containers.

So let's talk about the introduction and use cases. What I'm going to cover in this section is mostly how we do sandboxing, what the trade-offs of sandboxing are, what I call "Vegas mode," and when and where you would use sandboxed containers versus other products in OpenShift.

So what is sandboxing? There are a lot of references here — you can see the numbers; these are linked at the back of the slides — but one definition, for example, is that a sandbox is a tightly controlled environment where programs run. Is it any program? No — these could be programs that might leak resources, so we would want to isolate them, or they could be programs running untrusted code. In general, sandboxing is a means of isolation for any workloads you run, using some of the technologies available to you in the kernel. There are a lot of use cases for sandboxing and a lot of types, and I'm going to walk you through the types.

So what are the different types of sandboxing? I'd like to categorize them as software and hardware. On the software side, we already know about these: Linux namespaces are one form of sandboxing.
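As a quick illustration of namespace-based sandboxing — a minimal sketch, assuming a Linux host with unprivileged user namespaces enabled and `unshare` from util-linux available:

```shell
# User namespaces map root *inside* the namespace to an ordinary,
# unprivileged user *outside* it, shrinking the blast radius of an escape.
unshare --user --map-root-user sh -c 'echo "uid inside namespace: $(id -u)"'
echo "uid outside namespace: $(id -u)"
```

Inside the namespace the process believes it is uid 0, while the kernel still treats it as the unprivileged caller on the host.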
We know about the mount, UTS, PID, and network namespaces. There's also a prominent kind of namespace called user namespaces, which lets you map root inside the container to a non-root user outside, reducing the blast radius as well. Then there's mandatory access control via Linux Security Modules, of which SELinux is the most prominent. It protects from a lot of things: basically, it uses labels to label the processes inside the container and the files they're allowed to access, and then prevents misbehavior — if a process breaks out, it can't access files it isn't mapped to. There's also seccomp, which requires a bit more knowledge about the internals of your application and which system calls it uses, but you can still use it: you decide which system calls you want to block and which to allow through.

Then there's the other type of sandboxing, more on the hardware side of things. The first one is pretty easy: you sandbox using your machine — you run nothing else, just one application, and that's it. One application, nothing else shares the host, you're good to go.
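On Kubernetes and OpenShift, these software layers surface directly in the pod spec. A minimal sketch — the image and the SELinux/seccomp values here are illustrative, not defaults from the product:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: software-sandboxed
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # illustrative image
    securityContext:
      seccompProfile:
        type: RuntimeDefault        # block syscalls outside the runtime's default set
      seLinuxOptions:
        level: "s0:c123,c456"       # illustrative MCS labels for SELinux isolation
      allowPrivilegeEscalation: false
```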
The other form, however, lets you bin-pack workloads on the host, and that is virtualization. You may hear the terms VT-x or VMX — that's the instruction set in the case of Intel — and SVM in the case of AMD. And there are different types. The first is where you run the hypervisor directly at layer zero, on hardware, without an operating system underneath. Things that do that exist — for example Xen, which is a free hypervisor running directly on hardware, and Microsoft's Hyper-V. But then there's another type, which is the one we're interested in: type-2 hypervisors, which let you virtualize on top of an OS. You have an OS already, and what you do is enable Linux kernel modules — KVM is one; kvm would be the core module — and then, based on the architecture, you enable different modules for Intel or AMD and so on. That gives you the ability to create virtual machines and make use of the hardware virtualization features and the instruction set to divide your host up into different virtual machines.

So what are we interested in in this talk? We're going to be talking more about that type-2, KVM bit as the base technology that OpenShift sandboxed containers builds on top of, in different layers. But again, there are software forms of sandboxing and hardware forms, and the mixture of the two is what we're looking for — not one or the other.

All right, so let's talk a bit about some trade-offs between the software side and the hardware — the virtualization — side. One trade-off is efficiency: when I run my workloads as containers rather than VMs, I can usually run more workloads, because I have tighter control over how resources are metered. That's containers versus VMs.
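To check whether a host exposes the instruction-set extensions just mentioned (VT-x/VMX on Intel, SVM on AMD) and whether the KVM modules are loaded, a quick sketch — Linux only, and the flag names assume x86 `/proc/cpuinfo`:

```shell
# Look for hardware virtualization flags in the CPU feature list.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "hardware virtualization: supported"
else
  echo "hardware virtualization: not exposed"
fi
# Check for the KVM core and vendor modules (kvm, kvm_intel / kvm_amd).
lsmod | grep -E '^kvm' || echo "kvm modules: not loaded"
```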
VMs are a bit heavier in weight, but we're still targeting lightweight VMs with Kata Containers, which I'll talk about later. In terms of performance, there's additional virtualization overhead — that layer does not come for free, so you should expect some performance overhead when you pick the virtualization path. And then there's isolation: with normal, vanilla containers — I'll talk later about how it's done in OpenShift — you have the host, and on the host kernel you create Linux namespaces and run your workloads there. With virtualization there's an additional circle, if you saw the diagram, and that is kernel isolation: each workload gets a separate kernel to run on, and that gives it an edge on the isolation front. So it's a trade-off — depending on your use case, you should make sure you're choosing the right option.

Now I'm going to talk about Vegas mode. Some background: Dan Walsh wrote a very interesting blog post on SELinux where he said, you know, what happens in Vegas stays in Vegas — after you enable SELinux. Now we're going to walk through the kinds of configuration that can be enabled to really get you into Vegas mode. The first is the lazy mode, where you don't want to do anything and just rely on a vanilla cluster — and don't worry, we don't let you do that most of the time; we either have very good documentation, or we enable and automate all the necessary bits to get you into Vegas mode with operators, as you know. With namespaces you get minimal configuration, but you also get limited isolation — remember that triangle diagram I showed. With VMs you get that kernel isolation, so by default you have a separate kernel. But that does not mean you're immediately safe; there are other steps you want to take to reach
good security or isolation levels, and that is by enabling seccomp and SELinux, and optionally running workloads in VMs alongside containers. So it's not one or the other; it's the collective of these things. What we're trying to do with OpenShift sandboxed containers is give our users the choice between these options when they need it. For example, for compliance or audit reasons you may want to be on the safe side and say: I'm going to run these particular workloads in VMs, because I have absolutely no control over them. I don't control these workloads or where they come from — they're not pulled from OpenShift registries or Red Hat registries, they come from some foreign registry — but I still want to run them on OpenShift, for example.

So the question you might have now is: you already have something that uses VMs in the OpenShift landscape, right? Why would I want another technology that also uses VMs in OpenShift? This diagram walks through the decision tree. You start frustrated, not sure where to go, and the question becomes: have you already gone through your cloud-native journey? Did you containerize your applications and build images? If you answered yes, then for probably 95 percent of the cases you want to go with normal containers, because this is how we test our code, and we've already published repositories that allow people to pull from them securely. On the other hand, if you're in the 5 percent of cases and you say, listen, I have no control over what's running in my container — I'm pulling images I don't trust, but I still want to ensure some level of isolation — then that could take you to OpenShift sandboxed containers. This is where you have an OCI-compliant runtime: OpenShift sandboxed containers is based on Kata — I'm going to be deep-diving into Kata soon — and it's
an OCI-compliant runtime, like runc, that allows you to run containers. The main use case is kernel isolation, as I said: you get a separate kernel for each workload you run, and then you can run, for example, third-party or untrusted code that you don't have control over — where your decision matrix doesn't involve you deciding whether to run that workload or not. So you pick sandboxed containers to run it in a VM, a lightweight VM.

Then there's the third approach, where you're not yet containerized, for example, but you want to use VMs for general purposes — think about just starting a VM on Kubernetes. This is where you can lift and shift your existing VM images, running on different hypervisors, into the OpenShift and Kubernetes world. These could be traditional VMs with no existing container image — no image has been built for them — and for that we have a solution, OpenShift Virtualization, which is general-purpose virtualization on top of Kubernetes that allows you to do much more than running containerized images.

So each has its own use case: 95 percent of use cases would use normal, lightweight containers; if you're security-conscious or want to ensure compliance for certain cases, you go with OpenShift sandboxed containers; and for general-purpose virtualization and migrating existing VM workloads, you'd use OpenShift Virtualization.

All right, now I'm going to talk about the bits in OpenShift that our product uses. The first one is OLM, the Operator Lifecycle Manager. We make use of OLM — we are an operator like any other. Think of OLM as your Red Hat package manager: you define source RPMs, you write them, you define how you want to install your software, you push them to a repo, and then they're available for you to install with RPM. This is a lifecycle process; it goes through many
iterations — security and everything else goes into that process — and we want to do the same with Kubernetes. That's what OLM is about: it allows us to package our Kubernetes manifests and artifacts the same way we package our Linux components.

OLM defines a lot of resources, but I'm concerned here with three; these are the prerequisites for understanding how to enable and use the product. The first is the OperatorGroup. This is about multi-tenancy: we want to make sure each cluster admin has control over which namespaces a given operator gets RBAC permissions in. Here we define our operator, we define the namespace the operator should exist in, and based on that we define the namespace mapping. For this mapping to happen, we need a ClusterServiceVersion, or CSV, and that CSV needs to be in the same namespace as the OperatorGroup; then the mapping happens, and your operator gets the permissions to do what it needs to do. I'll pause on the CSV for now and cover the Subscription.

The Subscription is a resource you create on the cluster. When you create it, you define a channel, and that channel is basically where you get your updates from — it pulls in your operator, and whenever there's an update, you have control over whether it is applied automatically or manually. That's configurable through the Subscription. You need to create that Subscription resource yourself if you're working via the CLI; if you're doing the whole installation from the console, the resource gets created automatically. In our case the install plan approval is automatic, for example, so any update on that channel gets pulled and applied to your cluster.

Now, back to the CSV. The CSV is your RPM —
this is where you define the components of your package, the components of the operator. This is where you say: I want a deployment, these are my resources, and these are the CRDs I want to enable in the cluster. It's the most granular bit that every operator needs in order to get into a cluster and be installed by OLM.

Another important piece of this equation is the Machine Config Operator. If you remember the history: with OpenShift 3, an admin usually had to create and install all the packages in advance, before a node could join the cluster, and the OS had its own separate lifecycle. With OpenShift 4, the MCO was introduced to lifecycle the operating system together with the OpenShift process. It takes care of packaging and installing all the components, up to the point where you can actually join a node to a cluster.

There are multiple pieces to it. The Machine Config Operator is divided into three main components: the machine config controller, the machine config server, and the machine config daemon. The machine config controller in turn consists of four sub-controllers, and out of those four there's one very important one that concerns us today: the render controller. Whenever you create a MachineConfig — which is how you tell the Machine Config Operator "I want to modify the existing configuration of the OS, I want to install something" — the render controller takes the base configuration and the additional configuration you specified in the MachineConfig, merges them together, hashes them, and creates a new rendered config for you. This rendered config is what ends up being installed on the nodes. That's where the machine config daemon comes in: it watches the node, looks at the status, and finds out — oh, there's a new config that I should
apply — and it starts applying it to the nodes defined in the MachineConfigPool. The MachineConfigPool defines the set of nodes the update should be applied to, and the machine config daemon takes the MachineConfig rendered by the render controller and applies it to those nodes. This will be important for how the operator works.

Finally, we rely on extensions. This is a concept where, as I said, you define a MachineConfig to say you want to install additional things. OpenShift sandboxed containers installs an additional runtime, Kata Containers. It's not the main runtime of your cluster — you already have a main runtime running by default. The runtime we want to install, Kata Containers, is a day-2 runtime: you install it after the fact, when you already have a cluster with everything running, and then you want to run sandboxed workloads in VMs with Kata Containers. So you create a MachineConfig and specify extensions. Extensions are Red Hat CoreOS's way of saying: listen, there are a few things that I will also take care of and lifecycle myself, and you have to pick one of those. Sandboxed containers is one of the components that the MCO decided to lifecycle itself. It's basically a list of required packages — for example kata-containers, QEMU, and others. The MCO watches for that, starts the process, creates a desired config, follows the path, and the node controller picks it up and starts installing it on the nodes in the cluster.

So with this in mind — we've covered OLM, the MCO, and extensions — let's now get to the operator. What is the operator? This is actually the thing that you as a user consume in the end. The operator will be available, when 4.8 is released, in the Red Hat operator catalog, and it's installed like any other OLM operator. It exposes a CRD
called KataConfig. This is the CRD you use to configure your Kata installation, and it installs Kata Containers on your cluster. Kata Containers, as I said, is an OCI-compliant runtime that lets you run lightweight virtual machines with the same Kube-native experience you'd have with normal containers. The operator also configures CRI-O — because CRI-O is the high-level runtime — adding the runtime handler, and it creates the RuntimeClass. A RuntimeClass is a way of declaring that you want an extra runtime in addition to the default one; as I said, we're not the default runtime on the cluster, we're a secondary runtime that gets enabled day 2. So the operator creates the RuntimeClass for you, and finally it also installs QEMU — our virtual machine monitor, how we create VMs — as an OS extension. As I mentioned, Kata Containers and QEMU are OS extensions that rely on the Machine Config Operator for installation and lifecycle.

Now, how would you use it? The operator gets all these things onto your cluster. There's one simple resource that we expose at the moment, called KataConfig. KataConfig exposes one optional parameter in the first release, a kataConfigPoolSelector, which lets you choose which nodes in your cluster you want to install the Kata Containers runtime on. If you don't specify any, it gets installed on all your nodes. When you create that KataConfig resource, that's how you trigger the installation: it creates the RuntimeClass, and then you can finally create workloads — deployments, stateful sets, pods — and all you need to do is specify the runtime class name. That's basically how you use it as a cluster admin or a developer.
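The flow just described can be sketched roughly as follows. The KataConfig shape follows the 4.8 tech-preview documentation as I understand it; the RuntimeClass is what the operator creates for you (shown here only to illustrate the mapping, with illustrative overhead values), and the workload simply references it:

```yaml
# 1. Trigger the installation (cluster admin). Omitting the selector
#    installs the Kata runtime on all nodes.
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      custom-kata: "true"          # optional; label on the nodes to target
---
# 2. The operator creates a RuntimeClass like this; the handler name maps
#    to the CRI-O runtime handler, and the overhead accounts for the VM
#    and the kata agent (values illustrative).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
overhead:
  podFixed:
    memory: "350Mi"
    cpu: "250m"
---
# 3. A workload opts in simply by naming the runtime class.
apiVersion: v1
kind: Pod
metadata:
  name: hello-kata
spec:
  runtimeClassName: kata
  containers:
  - name: hello
    image: registry.access.redhat.com/ubi8/ubi-minimal   # illustrative image
```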
There will be a lot of integrations in the future with the workloads that already exist for developers, but that's where we are now from the operator perspective.

All right, with that, I'm going to share my second screen. Let's double-check that the font sizes are big enough for my eyeballs too. — Yeah, a little bit bigger would be great. And if anybody has questions, type them in the chat and we'll get there. — That works. The upper two windows I'm not going to use now; I'll use the lower one at the moment, and there's another demo along the way where I'll use them.

Okay, cool. We've covered the OpenShift bits, so in this demo I'm just going to show the OpenShift bits — basically OLM, the MCO, and the operator — and I'll go into Kata Containers deeply in the other demo. So what we want to look at is the OpenShift marketplace; this is where all our operator catalogs reside. You're going to see the normal ones you're used to seeing — Red Hat operators, community operators, certified operators — in your cluster, but since we're not released yet, we've added an additional catalog that we pull from. And there are the three resources I told you about. If we look at the OLM resources, there are many resources exposed by OLM; we're interested in Subscriptions, OperatorGroups, and ClusterServiceVersions. You see that Subscriptions are namespaced, OperatorGroups are namespaced, and ClusterServiceVersions are namespaced — that means we need to go to the namespace of the operator. We can also check PackageManifests; this tells you: here's the list of operators I can see from all these catalogs. These are all the operators available, and if we search for "sandboxed containers" we're going
to find that there is indeed an operator available for installation. This is more of a read-only view — I'm showing the CLI first, and then I'll also show you the console.

All right. We can go to the OpenShift sandboxed containers operator's namespace, and we can see the operator is running. If we look at the OperatorGroup — that's one resource that needs to exist — this is, as I showed you, where you specify the target namespace; that's all you need to do. What also needs to exist is a CSV that maps to the target namespace of that OperatorGroup. So if I get the CSVs and look at the sandboxed containers operator's CSV — let me get rid of some clutter and pipe it to less — yes, the namespace here is the same. And as I said, this is where you specify your entire package, even the icon of the product; you specify the permissions, what you're deploying, what resources you request, and so on.

And finally the Subscription. The Subscription is basically how you tell OLM where to get the operator from — we need a catalog that can serve us the CSV. In the spec, what matters is: we have a channel called preview-1.0, and the install plan approval is automatic, meaning any update to the operator will be applied automatically, and the source is that catalog I showed you. The reason we're using that catalog is that we're not yet released; when it is released, it will be available in the Red Hat operators catalog.

Cool, so that's that. Okay, the second part: now you have the operator running on the cluster, and the next thing to look at is the Machine Config Operator. But before we do that, let's have a look at the KataConfig. We saw the operator running already, and to have a Kata installation I've already created one, to save time,
because I want to dig into the details first. Let's look at the spec. Here I haven't specified any nodes, so Kata will be installed on all the nodes in the cluster. And indeed, the status says the installation is done and that the nodes the Kata runtime has been installed on are these three — the three nodes in my cluster. What I should also see is the RuntimeClass. So now I could simply create a pod — called anything — and specify the runtime class called kata.

Here you see something called pod overhead. Because we're a VM after all, there are additional things we need to bring up — that's the trade-off I mentioned before — and the resources required for those things, in terms of memory and CPU, have to be specified, as you see in the RuntimeClass. Compared to runc you don't need to do that, because runc runs directly on the host; when we run QEMU, we also run a component called the kata agent, which I'll talk about, and these resources have to be pre-specified. They're mostly fixed, except for some that change based on the workload.

All right, now let's have a look at the Machine Config Operator. What's important for us are the MachineConfigs. You're going to find the 00- and 01-prefixed ones — these are the defaults you get when you install a cluster. What the operator does is create an additional one, a 50-prefixed one, and this is the one that carries the extension we enable, which allows the MCO to take control and lifecycle the stack. If we get that MachineConfig and look at the spec, all it contains is the extension — extensions: sandboxed-containers. The MCO sees that, understands what it needs to do, lifecycles the RPMs and all the packages, and gets to work. And we can see on our node that the desired config
is the same as the current config. What does this mean? I forgot to mention "rendered": the rendered configs here are the result of that calculation of hashes. If you remember the picture I showed earlier, this is where the MachineConfigs get rendered — the render controller takes care of that, and then the node controller applies it. So this is the final config that gets deployed.

So that's OLM, the MCO, and the operator on the cluster. Let's look at the console now, so you can see the same experience there. This is a normal console: you log in, check the operators, search for "sandboxed", and you'll find one operator — since I've already installed it, it's there in the cluster. You follow the usual path, you install it, and once it's installed you'll see it has succeeded, and you'll find that one KataConfig resource has been created. This is the status and the spec, as I showed you in the CLI: the spec says install on all nodes, because I didn't specify the kataConfigPoolSelector. Had I specified it, I would have gotten a selection of nodes that only run Kata Containers.

So those are the OpenShift bits. We'll come back, but first let's go and understand more about the stack of Kata Containers and what we choose from that stack. All right, back to business. What we're going to do is have a look at a high-level stack of Kata Containers, what the end-to-end flow looks like, and which components of that flow we're interested in, from a high-level perspective. As I said, with runc — with normal containers — you have a host, you have a shared kernel, and containers are isolated, or sandboxed, by Linux namespaces, seccomp, and all the goodies in the stack. And the high
level runtime you're using is CRI-O, the low-level runtime is runc, and you're using, as I said, Linux namespaces and cgroups. Now, if we move to the isolated-kernel bit, we're using the same high-level runtime, but we're changing the low-level runtime — that would be Kata Containers. What this does is allow us to run VMs in a lightweight way, the same way we do containers. If you run two workloads, there will be two isolated kernels; each workload gets its own separate kernel. One important bit to notice is that we don't share the kernel with the host, but we use the same type of kernel as the host. So you get the same benefits as RHCOS — patched, with known CVEs fixed — and we use that in the VM as well.

This is the entire stack of Kata Containers — the upstream Kata Containers project. You start here as a user, then you have multiple options for which high-level runtime to use, then there's the shim, and then there's your host, your VM kernel (which is a separate kernel), your Linux namespaces. We're not interested in that entire picture; we're only interested in the highlighted pieces: CRI-O and QEMU — this is what we're using downstream in our stack — and the places where you configure things.

So as a user, you start by creating a normal pod and specifying the runtime class name as kata. The kubelet watches and calls out to the runtime — in our case CRI-O. CRI-O implements shim v2, so it calls out to the shim. A shim is simply an intermediary process that knows how to drive the low-level runtime: runc has its own shim, conmon, but in our case it's containerd-shim-kata-v2, and that calls out to the runtime and the agent, and creates the VMs and the low-level bits. And there are certain places where you're interested to configure
stuff — what we configure is the CRI-O bit, the QEMU bit, and the Kata Containers bit. So basically the stack we're interested in is QEMU, CRI-O, and Kata Containers. There's another version of this figure that you can look at later, which has annotated descriptions of all these components if you're interested in the details — it describes each component in that stack, including the networking stack and so on. That's the overview from deeper down, and as I said, if you're more interested in the nitty-gritty, you can have a look at that figure, and I'll also show a demo.

All right, so now let's look at the demo, to understand what's happening underneath. We stopped at the OpenShift bits last time — we discussed the MCO, OLM, and the operator. Now we want to switch to the workloads. I've installed the operator via the console, and I want to look at workloads — I'm going to show them from the CLI this time, like last time. Okay, I want to move to the default namespace. By the way, something I forgot to say: k is an alias for kubectl, and kns is a kubectl plugin I use that helps me switch between namespaces. So I move to the default namespace, and "k pods" is likewise an alias for getting pods. Here I have two pods. How did I create them? Let's have a look. The normal pod: I created a deployment, which created the pod; I used a UBI 8 image, and I did not specify a runtime class name for this container. If I look at the Kata pod, I have specified the runtime class name — because the operator did all the work for me, all the nitty-gritty: it installed Kata, the RPMs, QEMU, the agent; everything is on the nodes. Now, as a user, I can just go create the pod and specify the runtime class name on it, and off I go. So this is the configuration. What's actually
there: if I grep for "image", this is my image, but if I grep for "runtime" I won't find anything, because I didn't specify a runtime — I don't need to specify a runtime class for a default pod. Now if I look at the Kata pod and grep for "runtime", you're going to find that I did specify the runtime class name as kata. That's how I can differentiate between these two things from the outside.

Now we can go inside and have a deeper look — this is where I'll need the upper windows. Here I'm going to use oc. If you noticed, earlier I didn't use oc, because OpenShift is Kubernetes and builds on top of Kubernetes, and this is one of the things it adds. What we want to do is find the processes on the host side corresponding to each pod. Let me chroot into the host — it's not full screen — and search for "hello-kata". These are the containers I ran in the default namespace: hello and hello-kata. With crictl ps I can see the container ID; I just use that as a filter in a simple ps, and here I find that it's running a QEMU process — the container runs inside the QEMU process. And if I do the same with the other container and grep, this time you'll find it's running conmon. As I said, conmon is the shim that talks to runc and starts processes in the back end — that's the normal container. So that's the difference you can see here: a QEMU process running my container, versus a conmon process as the shim that CRI-O uses to call out to runc and start my normal container,
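The handler wiring behind this — CRI-O dispatching to either conmon/runc or the Kata shim — comes down to a drop-in CRI-O config that the operator's RPMs lay down; roughly like the following, where the path and exact keys are illustrative and may differ by release:

```toml
# /etc/crio/crio.conf.d/50-kata.conf (illustrative path)
[crio.runtime.runtimes.kata]
# Use the Kata shim v2 instead of conmon+runc for pods whose
# RuntimeClass handler is "kata".
runtime_path = "/usr/bin/containerd-shim-kata-v2"
runtime_type = "vm"
runtime_root = "/run/vc"
```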
And that's mainly the difference from the outside. From the inside we can also have a look: if I go into the normal container and into the Kata container — this lower bit is the node — and I look at the kernel command-line parameters, doing the same in each place, you're going to see a difference. This is a normal container, that's a Kata container pod, and that's the host, and you're going to find that the kernel parameters are different. We're using the same kernel version — when I say kernel version, I mean I'm using 4.18 here and the same kernel version there — but it's a different, separate kernel; it doesn't share the kernel with the host. That's another thing you can use to differentiate between a Kata container and a normal runc container. All right, let's see what else we can look at. Yeah, as I mentioned earlier — I don't need these upper screens now — we can also have a look at the RPMs that got installed on the cluster as a result of the operator applying the extensions specified in the KataConfig resource. If I list my RPMs and grep for kata, these are the RPMs that got installed, and you can see it installed all the binaries I need, plus the configuration for CRI-O. It's important to have a look at that: here I'm just configuring CRI-O, telling it, listen, you have a handler with the name kata. And if you remember, if I exit that and get the runtime class and look at the handler, the name kata simply maps back to that handler in the runtime configuration. It tells CRI-O: you have a new handler, and if any pod specifies this runtime class name, this is what you should call. CRI-O calls the shim, and the shim takes care of bootstrapping and starting the agent, the VM, all the things underneath. Another
interesting thing that gets installed there is, yeah, the agent, of course. And as I said earlier, we are using the same OS image as the host, but we're not sharing the kernel — we're just using the same OS image. For that to happen we have a builder script that does exactly that: a script, run by systemd, that generates the root filesystem and maps in the kernel of the host. Let me go back to the list — what else is interesting? So basically this is the RPM, but the RPM gets installed for us; you don't need to do any of these things yourself, I'm just deep diving so you understand the pieces. I think that's it from a high level on the CLI; now we can go back to the console to see how the workloads look. Going back to Home, I'm going to Workloads — I want to change the namespace — and these are the two workloads. This is the normal experience you get, and when I press here I don't see any runtime class; there's nothing here, which means, as I told you, we don't need to specify a runtime class for a default pod. If I go back to Deployments and look at the Kata one, I will find a runtime class of kata. That's also one indication that you're referring to a runtime class — one that has a CRI-O handler named kata as the runtime. Now you also have metrics, which is interesting: as with normal containers, you can look at memory usage and CPU, and the reason you see somewhat higher memory here is because of the pod overhead that I mentioned — that's the trade-off you get by running the container in a VM. If I look at the normal deployment, the normal pod running the same image, I'd find that to be much less. But there's also an interesting fact: this is only visible if I look, for example, at the outer scope and
calculate the pod overhead — I might not see that if I actually go on the node and run stats for my containers. Let's first find the ID — yes, kata, pick that one — and you're going to see the memory usage is the same, because this is from the inside. But we're making sure that you also see the overhead included, just so you understand the trade-off and can account for it better when sizing your workloads. All right, that's it for the second demo; going back to the slides. Now the pipeline — how we're planning to progress from here. As I said, we have not released yet, so we're very much looking for feedback: try things out and let us know what you want. Things coming your way: viewable metrics — this was actually the effort to get that right — and console awareness, so you see the runtime class. The first version of the operator will only run on bare-metal environments, and that's something important to know — it's not going to run on nested virtualization in the first early releases of the product. Going through the releases, we're working toward making the product more ready: enabling Kata-specific metrics — Kata containers consist of QEMU, the agent, and all the things you saw in the deep dive, and we want to see those — enabling more logs for better supportability, exposing more dashboards, and exposing more configuration options in the future, but again, that's based on your feedback. So what we're looking for is for you to try out the product, let us know what you think, and please reach out — we're happy to help. With that, yeah, I'm finished; thank you for listening, and the references are here — lots of references. I'm wondering if you could also link that interactive diagram — if you could just cut and
paste and throw that link into the chat, people are in for that. And there were a couple of questions — and they were good ones — but I didn't want to stop you in your tracks and interrupt the flow, because you definitely had a flow there, and with the deep dive I think people are going to have to watch this a couple of times and read some of these references. But if you could actually spell out that link — yeah, I can tidy it up later, but I just added it here now before I forget. Okay, cool. And Preston's asking: are there any plans to make the Kata operator available during the initial cluster install, or is it expected to stay a Day-2 operation? Good question. For now it's expected to stay a Day-2 operation, and the reason, to me, is that it's not the main runtime. There are limitations when using VMs: for example, runc allows you to share host network namespaces; Kata is a VM by design, it's isolated, so you can't do that. So certain workloads that require privileges — which Kata containers hopefully helps with — are not allowed by design. That's why it's good to stay at Day 2, and I think it will stay at Day 2. I'm also not sure I 100% understood the question, but I'm answering based on the assumption that you want it to be the main runtime, not the secondary runtime — correct me if I'm wrong. Yeah, Preston, if you want to clarify that, go for it. Give me a second — a slight clarification is coming in the chat. No worries. Great talk, by the way — now I'm going to have to watch it again, because quite a bit of it went right over my head, and it really makes me understand the Machine Config Operator — how much effort that engineering team and the folks working on it have been putting in, kind of overtime, and why they're so busy. Those folks did a great job. And I think — so in the early designs of this thing we were not relying on the Machine
Config Operator, but seeing how great it works, and that we can delegate that lifecycling to it, made our code base very simple. The team spent a lot of effort to make sure we're in compliance with that operator — kudos to all the team involved; it has been really good. So, Preston, if you want to turn your camera on and ask that question — I'll read out what his clarification was; I think he must work in the federal sector or something, because it sounds like a security issue. The idea was for secure environments where a portion of the default containers would need to be isolated as part of Day 1 — SPAWAR, for example, would make use of something like this. I don't know what SPAWAR is, but it sounds like Star Wars, so there you go. Okay, I guess — so what I understood was that you want to replace the runtime with Kata, but maybe I misunderstood. There is always the option — I think it's possible to get the operator installed as part of the install, or trigger it to install automatically. Right now we're triggering the installation only when you create a KataConfig resource, but we could also create that resource automatically, or you could choose to opt in to creating that resource automatically; in that case you don't need to trigger it manually, and the runtime will then be available in the cluster. But still, after all that, you as a developer have to opt in to Kata as a runtime, even if Kata is available on your cluster through OpenShift sandboxed containers. So there is still a manual effort of you declaring that, at least for now — that might change when we integrate more with developer workflows in the future, but for now there's this manual step you need to take to specify the runtime for a workload. Gotcha — and hopefully you can hear me okay? Yep. Okay, awesome. Yeah, that actually answered the question. When I was referring to SPAWAR as an example, I was
identifying certain use cases where they would still need OpenShift as the base platform in their disconnected cluster, but there are certain highly sensitive, classified workloads they would run that would need to be sandboxed and isolated as part of a Day-1 operation. So I was trying to describe that type of situation, to see if it was something on the roadmap, or something that may come later down the line if we get clients requesting it. Yeah, as I said, this is more of an optional thing. For now we're not enabling automatic installation of the operator, but if you have automation outside the operator's scope that does that, it also works. The idea is that Kata is not the main runtime, and this is the one thing I want people to take home: it does not replace runc. You enable it as an operator — you could do that at Day 1 if you automate it — but it's still a secondary runtime in the cluster. You will not replace the existing one; it will be a secondary runtime, and then at Day 1, if you automate it, you can specify kata on your workloads, and all that sensitive data will run in containers that are isolated in VMs — in Kata containers. Yep, makes perfect sense — thanks for that clarification. And Preston, you asked one other question about CodeReady Containers: will Kata containers be accessible to CodeReady Containers for development work? I'll clarify — I think I understood the question, but go ahead. Yeah — ultimately it's the same market space: we're looking at clients that want to do development work within their Kata containers, but some of these clients will be using CodeReady as the primary base for their development teams, and I just wanted to ensure that Kata containers are interoperable with CodeReady Containers, or find out whether there's some kind of special sauce we need to look at underneath the covers. At the
moment, no — not with 4.8 — and we're looking into how to integrate, what the best developer experience is, because you also get tooling: when you go to the developer console you get tooling with devfiles and so on that could integrate all sorts of things, like CodeReady. But as a first stance we're not doing that, for simplicity reasons. There are efforts to run OpenShift in different flavors — such as single node, similar to CodeReady Containers — which we will be looking at maybe in the 4.11 time frame, but not as a first goal. Right now we're looking for people to give us feedback on whether the whole thing is feasible and likable, or usable. But definitely, yeah, it's something we're considering — just not in the near term. Gotcha. Yeah, I don't want to take up all your time here, and I'll shut up after this, I promise, but I just wanted to say that I can definitely see a lot of good use cases for this, both in the security sector and in the financial sector — both places would definitely have a high use case for this. I'm primarily commercial, but I do on occasion cover some NAPS clients, and all of the clients I've been working with for the last two to three years have been asking for something similar to what you just described. So this is awesome; I'm really looking forward to getting my hands dirty with it. And I'm looking forward to you trying it — thanks! Awesome. Well, thanks, Preston. And Chris Short, if you unmute yourself — I might have to; you had a baby crying in the background before — you had a comment about runc. Yeah, runc is still the underlying container runtime. But Preston, if you want to dive a little deeper into some of the product parts for this, let me know — just ping me, short at redhat.com — I'd be curious to hear your use cases, and I'll get that forwarded along to our product management team. Careful what you ask for —
I'm dealing with two large banks that are looking for this today. Well, yeah — send me a message and maybe we'll figure it out, who knows. Awesome, will do. Nothing's better than actually having a customer that wants something — that would be cool. All right, folks. As Adel said at the beginning of this, he thought he could fill a full hour, and he did indeed — it was incredible, to be quite honest, because I think this is a very important use case, and I can see a lot of people taking advantage of this new technology when 4.8 comes out. So we will put together a short blog post, and I'll include, Adel, all of your resource links there shortly, and we will definitely have Adel back again to talk about this some more. And Preston, if you or anybody else who's listening has an end user who's looking for something like this, definitely reach out to any of us at Red Hat, but especially Adel, whose contact information we will post in the chat and in the YouTube description as well, so we can get some more feedback on this. But again, yet another wonderful new set of features coming with 4.8. Adel, we thank you for your time, and Chris, for backing us up with great production services. Awesome sauce, and we'll all be back soon. Thank you for hosting.