All right, let's go ahead and get started. Hi, everyone, and thank you for joining us today. I am Chris Short, Principal Technical Marketing Manager at Red Hat, and a Cloud Native Computing Foundation Ambassador. I'll be moderating today's webinar, "KubeVirt: Beyond Containers, Coming Full Circle Back to VMs." Before I introduce today's guests, a few housekeeping items. During the webinar you're not able to talk as an attendee, but there is a Q&A box that pretty much everyone on the call will be monitoring. Feel free to type your questions in, and if I can't answer them on the fly, I will make sure they're fielded and ranked and stacked accordingly. Drop your questions in there and we will get to them. If you have questions or just want to share information in the chat, feel free. And without further ado, let me introduce Rupak Parikh, co-founder and CTO, and Joshua Hurt, senior software engineer, at Platform9. Please take it away.

Thanks, Chris. I'm pretty excited to talk about KubeVirt as we go not only with Kubernetes and containers, but beyond that, with virtual machines. I have Josh Hurt with me here from Platform9; he's one of the engineers on our Kubernetes team. Today we will talk about KubeVirt. I will do a short introduction and talk about a few use cases, Josh will then talk about how to really use it and how you create virtual machines with it, and then I will talk about the architecture, followed by a demo and Q&A.

So what is KubeVirt? Well, KubeVirt is an operator.
It's a set of CRDs to create virtual machines alongside containers. It uses the same orchestration engine as Kubernetes for scheduling, the same storage, and the same networking: whatever your CNI is, it can reuse that. If you have monitoring with Prometheus, you should be able to use that, or anything else you're using with Kubernetes, and obviously all the tooling, including your favorite kubectl. It started in 2016 at Red Hat — kudos to them, a fantastic project — and was open-sourced in 2017. It's applying to become part of the CNCF Sandbox. It has 1,800-plus stars, and you can join the #virtualization channel on Slack to see what's going on; there is a weekly meeting there. And these are a few of the companies shown on the slide who have contributed to KubeVirt along with Red Hat.

I was looking at a PR recently where KubeVirt was making its case to become a CNCF Sandbox project, and I stumbled upon a lot of comments from users who are really using it. The three companies I have posted on the slide are using it for different use cases, but all of them either want to be running virtual machines because they have legacy workloads, or workloads that cannot move out of virtual machines, or they are using it in very interesting ways to run Kubernetes on Kubernetes, and I will talk about that in a moment.

So let's go through some of the use cases I have seen; I'm pretty sure there are many, many more. Really, you have Kubernetes as one orchestration platform to manage both virtual machines and containers, which means you can use your CI/CD pipeline, your processes, your RBAC rules, or whatever you have done to integrate your Kubernetes with other systems in-house, including monitoring and logging systems.

Can I just mention something? Sure. So, elaborating a bit more on the one orchestration platform:
I think this is perhaps one of the more underrated benefits of using KubeVirt to manage your VMs. You do have solutions such as Kubernetes-native API gateways, which allow VMs to talk to containerized workloads, but that means your VMs are still running outside of Kubernetes. So although the connectivity is there, the management of the VMs themselves is still outside of Kubernetes, requiring some other orchestration tool to do it, and that's just more difficult operations-wise. Yeah, so I wanted to mention that. Yep, thanks, Josh.

So that means you have two systems to maintain, two different APIs, and two different sets of integrations.

The next one in line is pretty obvious: you have applications which are moving from being monolithic, or perhaps running in virtual machines, and you are trying to create microservices out of some of them, and you want them to interoperate; you want to run them under one orchestration platform. Maybe there are virtual machines running network function virtualization for the telecom industry, and for a lot of intensive workloads you may be using things like SR-IOV, which gives you performance benefits, or custom kernel modules as well. You may be running virtual machines where the workload will stay in virtual machines for a really long time before it moves to containers, so you can keep using network functions for that particular kind of workload; and in the application stack, if you have services that you're trying to turn into microservices that can run in containers, they can run on the container side, side by side. Especially with the NFV stack, with a lot of carrier use, there is a strong desire to move to microservices, but there are technical limitations on why containers may not be usable for the network functions.
So virtual machines are still relevant there.

The next one is a really interesting use case that I did not really think about, but when I was browsing through the list I spoke about, I stumbled upon it, and Josh here did exactly the same thing. We call it "turtles all the way down." What people are doing is they take a set of physical servers, run KubeVirt on them, create virtual machines, and those virtual machines are given to users in a self-service way so that they can create more user-level or workload Kubernetes clusters on those virtual machines. It's self-service Kubernetes cluster creation on top of Kubernetes running in virtual machines. The Cluster API has a KubeVirt cloud provider that you can easily use to do that, and it's becoming very popular. Josh, do you want to add on that?

Yeah, so there's a lot of hype around bare-metal Kubernetes, and I think that maybe makes sense for, say, a production cluster when you need the absolute most performance out of your machines for your workloads. But the flexibility of having this type of setup with KubeVirt, to take full advantage of your bare-metal machines, is really just excellent. I've long thought that having some sort of IaaS or virtualization software on top of your on-prem servers, to run Kubernetes on top of, is the most efficient use of your on-prem hardware. So this just works super well together. It is immediately what I, as a developer, gravitated towards using, and it just works great.

Which brings us to the last point in the use cases: we predominantly see KubeVirt being used in dev/test clouds. So you have virtual machines which are immutable.
So if you have builders which are still based on virtual machines, you would like to start them and then throw them away; you want to give self-service, like we just spoke about with Kubernetes on Kubernetes. You can give your developers the ability to run virtual machines, increasing velocity and productivity inside your CI/CD pipeline, or for the self-service use case. With that, let me hand it over to Josh, who will talk about the different ways in which you can create virtual machines and a few other things that are available in KubeVirt.

Yeah, thanks, Rupak. Okay, so we've got a little bit of overlap here in these concepts; maybe we can delete some of these — or just restart it. Yeah, let's go ahead, Josh. Okay, there we go. Excellent, thanks. You know, I just restarted — microservices, so...

Yeah, I just want to run through some of the concepts. The very first thing that most people will want to create is the virtual machine. The virtual machine itself is nothing but an object, and the virtual machine instance spec is actually embedded in the VirtualMachine CRD itself. This allows one level of abstraction above your actual running virtual machine, so that you can give it directives such as running or not, additional labels, and so on and so forth, including things like data volume integration, but that's not right here. And then the actual VM that is running is the virtual machine instance; from here on out I'm really just going to refer to it as a VMI, because it's a big mouthful.

So these are the two building blocks. You also have things like the virtual machine instance replica set, and because it is a CRD, you get a lot of Kubernetes-native ways of defining these resources and handling them as well.

One thing that I really like about KubeVirt is this other resource they have called a VMI preset. This is most commonly referred to as a flavor on a lot of other VM-based virtualization platforms — EC2, OpenStack, and so forth. The nice thing about the way a VMI preset works, which addresses possible issues with flavors on other platforms, is that it has a fallback mechanism. Because KubeVirt is CRD-based, you get to use things in a Kubernetes-native way, and that means using labels. When you create a VMI preset, you specify a label, and as long as your VMI has a label which matches, the VMI will automatically use all of the specifications in your VMI preset; you can set everything in there that you could set in a VMI itself. The cool part is that your VMI can override anything, and it will just default to the VMI preset for whatever is not overridden. That's a really powerful thing, because it's not as rigid as a true flavor.

Josh, what can you do with a preset? Can you set memory and CPU, like you can with flavor sizes, or can you do more? Right, you can do more: you can actually set networking as well. Okay, which is pretty nice, because then you start to get into territory that is maybe Neutron-ish in some ways, right?

Yeah. Moving on, you have these two primary VM booting options. Basically it comes down to whether or not you want changes to persist after your VM is gone. A lot of ephemeral workloads just spin up to accomplish some task and then spin down, so you don't really need persistence, and for that there's the ephemeral disk. KubeVirt has made it really easy to have images shipped around, because they have a way to keep the images in Docker images, which is really nice: it's simply a matter of writing the disk image to the /disk path inside the container image. And then you have persistent disks; basically, that's a persistent volume claim with the image on it.

I also wanted to mention, because it solves the hard problem of "how do I load in compatible images for VMs": under the KubeVirt organization there's a second repo called CDI, the Containerized Data Importer, and it makes the uploading and cloning of images a little bit simpler. It's a CRD which sits basically on top of a PVC, and that's the data volume. There's a nice diagram here that explains pretty much the most common use case, which is having a golden image that your VMs can reference.

Then, for storage, you get cloud-init — config drive or NoCloud. You can actually set up cloud-init and have all of your cloud-init data in a Secret in Kubernetes, which is nice if your cloud-init contains sensitive information. There are empty disks for additional extra storage; it all just gets mounted as a device at the end of the day. Host disks are nice, especially if you just have an image on the node itself that you don't want anywhere else. And data volumes sit on top of a lot of these available options. You also get Kubernetes primitives such as ConfigMaps, Secrets, and service accounts, which are just mounted as disks inside the VM, so it's very straightforward.
There is a limitation today where any updates to the ConfigMap, Secret, or service account will not actually be reflected in the VM, so you would need to restart the VM — really, recreate the VMI object — in order for that to take effect. Otherwise, nothing special. There's also nice stuff if you choose the correct PVC access modes: if you configure storage and networking just right, you get live migration.

So I just wanted to mention, for people who are not aware of cloud-init: if you have used user data in AWS, where you can inject startup scripts or just plain data into the virtual machine, cloud-init will let you do that, and Josh was referring to how interoperable it is with the Kubernetes ecosystem of Secrets and ConfigMaps.

Yeah, and out of the box, the networking is nice: it's just native pod networking. By default the bridge option is chosen, so your VM will actually have the pod IP of the virt-launcher pod, which represents the actual VM. This integrates with things out of the box — for example, having sidecar containers and whatnot, which is nice. I mean, there are other options as well.
You can have pod masquerading, so there's a defined CIDR, which you can choose yourself, from which VMs are assigned a private IP, but they're NATed to the pod IP, if you so choose to do that. Then there are more advanced networking use cases which start to get into more traditional VM networking setups, and some of the extra CNIs — Multus especially — provide the ability to have things like SR-IOV networking, so your VMs talk directly down to the NIC, and multiple interfaces for your pods. That may be really important for virtual machines acting as network functions — say, acting as virtual routers in your setup. And it's really nice that KubeVirt, even though it is still an early-ish project, has decided to really focus on these extreme, hardcore use cases, because VM users now are advanced, and unless you have advanced features and advanced integrations, it's really hard to say, "yes, we should use KubeVirt for VMs," because you'd be losing out on performance, right? And so I'm going to give it back to Rupak; he's going to go over the architecture of KubeVirt.

So Josh just explained all the different concepts: how you can create a custom resource for virtual machines — when you create the custom resource, you can create a machine, start it, and stop it — and then he spoke about storage, and how the machine comes into existence with the image options, persistent and non-persistent. So let me go behind the scenes, under the hood, into how exactly it works. Let's start with: how will you actually run a virtual machine inside Kubernetes?
The very natural place is a pod, and this is where the virtual machine will run. So when you see a virtual machine instance created, what you will see is a corresponding pod start up, and that pod's name is virt-launcher-<something>. If you look at it internally, there are multiple containers; depending on how you're starting the VM, there can be one or many. If you are using container images, there are actually two of them. The first one is called the volume container: it looks at the Docker image, and the virtual machine image embedded inside that Docker image — it takes it, extracts it, and gives it to the other container, the compute container, which will actually run the virtual machine. That compute container launches virt-launcher, which is responsible for interacting with libvirt. For those of you who do not know what libvirt is: libvirt is the standard library used to spawn virtual machines on Linux. KubeVirt uses libvirt extensively to launch VMs and provide the features the user has asked for, which includes your memory limits, your CPU limits, and the devices that are connected to the virtual machine.

Now, that libvirt can also be controlled — and we will see this in the later slides — by other components in the system, to start and stop the virtual machine. Unlike containers, which start by default, a virtual machine can be dormant and be started later on; libvirt provides that capability. It also has even more capabilities that are not in KubeVirt yet, but they're slowly making their way in — for example, hot-plugging or unplugging devices, or even memory or disk. Some of these you can do, others you may not be able to do yet, but those can or will be added in the future. So virt-launcher launches libvirt as a child process, and libvirt internally runs the virtual machine using either plain QEMU or QEMU with KVM.

Now, if you look at the diagram here, I'm showing a kind of storage on the host.
This is where the disks of the virtual machines live — either that, or they will live on a PVC, like Josh explained earlier. I'm just talking about two containers in that pod, but there can be many more, depending on what features you're using: you can have sidecars for liveness checks, or sidecars like Josh mentioned earlier. The other thing to note is that, since it uses Linux, QEMU, and KVM, by default you need to run it on a bare-metal server to run the virtual machine natively. But you can also run it inside a virtual machine, in a nested virtualization mode, or by using binary translation. It's a little slower, but good enough for a PoC, a demo, or your test purposes, and I will show later on how I am actually running this whole setup inside a virtual machine on AWS using just QEMU. There are some configuration options that you need to set to use the emulation mode, but once you do, you'll be able to run the virtual machine.

So let's go to the next slide. Now you have the virtual machine running inside a container; how do you connect it to the outer world, how do you do networking? Well, on the bottom of the screen you have the CNI. When a pod is created — I'm just showing the compute container here — we get a veth pair from the CNI into the container, which appears as eth0. So if you log into the container and do `ip a`, you will see eth0. virt-launcher creates a bridge, creates a tap device, takes the IP address that was assigned to the container, and assigns it to the virtual machine running inside the container. Which means that if you exec into this container, you won't be able to get out of it; the pod itself does not have networking, by the way. This is just the default option.
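The bridge binding just described — and the masquerade alternative — are chosen per interface in the VMI spec. A minimal sketch, using the same era's kubevirt.io API (field names worth double-checking against your KubeVirt version):

```yaml
# Fragment of a VirtualMachineInstance spec
spec:
  domain:
    devices:
      interfaces:
        - name: default
          bridge: {}         # default binding: the VM takes over the pod's IP
          # masquerade: {}   # alternative: VM gets a private IP, NATed to the pod IP
  networks:
    - name: default
      pod: {}                # the pod network delivered by the cluster's CNI
```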
There are many different options available, so it's possible to do NATing or masquerading — Josh spoke about that briefly — but this is the primary way by which the IP address of the pod is actually given to the virtual machine. Then the virtual machine, which is the workload, communicates with other containers in the system as if it were just another pod. The communication to the actual container and the pod happens through — let me go back — the shared directories or the Unix sockets that are exposed through the file system.

You know, thinking about this now, one possible use case or benefit of this could be something like — I've seen projects like "overkube": basically, if you have bare metal running KubeVirt, and then you create a bunch of VMs on that Kubernetes cluster, and those VMs actually comprise another cluster themselves. Yeah — you get to use the "underkube," the underlying Kubernetes constructs, for things like load balancing and whatnot; you could potentially install MetalLB and use that to provide your VIP for high availability. Yeah, good point — because it is a virtual IP when you think about it, especially in the masquerade mode. Yep, that's what it is.

Okay, so now we have created the virtual machine sitting inside the container and established the networking. Let's see how we actually create the virtual machines. Now, if you look in the source code, if you're interested, there are many controllers in the system. I have listed a few here.
These are the major ones. The virtual machine controller corresponds to the VirtualMachine CRD, the virtual machine instance controller corresponds to the VirtualMachineInstance CRD, and we spoke about replica sets — just like replica sets of containers, you can create virtual machine replica sets. So you will see there is a controller for each, and some of these are actually very simple, but some of them are quite complicated. For example, when you create a virtual machine, the call goes through kubectl to the API server and into etcd; the controller actually doesn't do much, and the object just sits there. But when a user says, "okay, I want to start that virtual machine" — which means you're changing a property of that virtual machine — as soon as you start it, the virtual machine instance controller kicks in, and it goes and starts the virtual machine. I will talk about that in a second.

So the product is still evolving; there are many more objects coming up. VM groups is one — something to note, it's work in progress right now.

I'd just like to mention — go ahead and do so — due to the fact that the project was started in 2016 and open-sourced in 2017: for anyone who's interested in going and looking at the source code, and who may be familiar with some of the newer directory layouts or structures for developing CRDs and controllers, such as Kubebuilder or the Operator SDK, this repo is going to look a little bit different, just because those tools weren't available at the time. However, I think the KubeVirt team did a good job of organizing things and using those primitives.
However, I think that the cubeverteam did a good job of Organizing things and using those those, you know primitives in Thanks, and the last one in the picture is a demon set called as word handler So unlike the controller that we just spoke about which which are single turns mostly the word handler is there's a demon set works on every node and What it does is it communicates with the word launcher With the Unix domain sockets of the next domain sockets that I spoke about earlier to do Future operations on version machines, which means you can stop it update it restart it I check the status how it's doing and and Even in future Part plug it unplug it that can be done through word handler. So these three Is is the code system behind behind Q word? So let me put this all together So this is the complete picture obviously much simplified So you have APS servers controllers are acting on the objects then communicating with the word handler to Launch the pod Which are controlled primarily by a word launcher Livered as the underlying subsystem which creates virtual machines and you have other parts living in the system Which are obviously controlled by Cubelet along with the scheduler and the APS server one thing to note because this is just a pod the scheduler is The same there's no changes. There are no changes to the scheduler It just looks at the resource constraints and schedule and schedule set up accordingly. So if you want to schedule With specific policies where you want to have and that affinity or affinity between version machines for performance or Isolation reasons you can do all of that So with that Let me stop and go for a demo So I hope Let me increase the font size a little bit more so here I'm going to be I will create a actually I've already created a Windows version machine. 
So this is an example of the version machine Instance object It is a win 2k 12 machine that That I have and what I have done is I just spoke about how there are many ways of creating the virtual machine and in this case What I have done is Because of the size of the image here the win 2k 12 images about I think nine nine gigs or so I directly copied it on to the onto the host. That's what I am so that I don't have to embed it inside a Docker container and and push it to a registry and then download it again So I have taken a shortcut, but you don't have to this is just for demo purposes and I'm also mounting that disc as a starter disc on On that domain by the way, if you're not familiar with live word live word calls all the machine as domain and This particular section in the specification There are lots of knobs that you can use really translated to the XML or the object specification that live word is is Is familiar with okay, so with that I've already created the virtual machine. So let me Do the cube cube kato? Let's say get virtual machines So there is only one version machines because remember there are two types of object. This is a version machine instance object So we let me look at all the version machine instances So now for the key plane worker one that is corresponding instance, which is running if I stop it If I stop this instance the virtual machine object will still be there, but running would be false on the other hand since I created Virtual machine instance for win-tiquet 12. It's already running and I started last night. 
So hopefully it's still working the Along with the cube kato There are there is another utility available called as word CTL that I will be using To show you console access as well as the VNC access to these virtual machines So let me switch into another tab and this is my Mac So let me clear the screen and what I'm going to be doing is Let me first make sure that I have all the setup correctly here so keep config that gets the VMI's And I hope to see vintica 12 and K plane worker one as the two. Okay, so we have it here I'm going to use the word CTL come on and Let me first show you the console access to K plane worker one and I think it's worth mentioning that in an effort to keep with standardizing on this Same tooling right and the standards of standardization of the platform Vert CTL is Actually available via a crew plug-in. So it plugs in to cube control to extend the functionality of cube control So once you have it installed Vert control could be Executed via cube control Vert Since you spoke about that charge Even the console worked here. I'm going to switch and show that Okay I will go back to my Another machine so on one machine. I have I have installed Word CTL. That's what I use here on another one. I have the installed CTL word Let's hit console and let me get out of this one so that we don't have multiple consoles open I'm going to this okay, go back to my machine where the Actually that we bring it up top of the okay the K plane worker worker one and It takes a second Here it goes so the console is connected we should be able to log in and Do what the work we want, but I'm going to get out of here Go back to my Mac because what I really want to show you is the VNC So that we can look at the windows desktop. 
So I'm going to clear this and use virtctl to really look at the VNC console: virtctl vnc win2k12. Okay, so now it's going to use my Chicken VNC client, and here we are: the Windows virtual machine is running. Let me send it the Ctrl-Alt-Delete, and here you are — the machine is running. I would be able to log into the system after I change my password; I did not change the password. For people who have used Windows over VNC, you're familiar with this problem of your cursor getting stuck — that's where I am. So let me not log in — I'm having trouble with my mouse — but as you can see, the Windows virtual machine is running.

So let me switch to something different now. We have seen how you can run your Windows virtual machine inside a container; while we're on this topic, let me make sure that I have another — okay, so now I have another setup, and I'm going to show you that this is running on AWS. I have another cluster running on AWS, as you can make out from the node DNS names: they're all in EC2, somewhere in us-east. I have a few pods running — an NGINX pod, as well as my test virtual machine. So let me see if I can show you the test virtual machine.

So this is my virtual machine object; it's already running. It is based on the KubeVirt cirros registry-disk demo image that's available on Docker Hub, so you can take this example, install KubeVirt, and you should be able to run this. So I have the virtual machine running, as we saw. Let me make sure it is there — yes, there is a test VM running, and it has been running for a day. And I have the other pods — I have an NGINX pod running — so let me grab the IP addresses of both of them. Okay, so what I want to do: since the NGINX pod is running an NGINX server,
I am hoping that, going into the virt-launcher test VM — the virtual machine — I will be able to access that particular IP address, to show how containers and virtual machines can work together. So: virtctl console testvm. Okay, I'm into the test VM. You saw that it did not give me a prompt to log in: since it's a console, and I logged in last night and forgot to log out, it's still active. Okay, so now I can see if I have nc, and connect in verbose mode to this IP address — the IP address of the NGINX server — on port 80. Open — and then I should be able to say GET / HTTP/1.1. And — okay, NGINX replied back. I did not type the commands in correctly, but hey, the connection is open; the containers and virtual machines are talking. So let me stop there and open it up for questions.

All right, so there are lots of questions around the premise of the architecture of KubeVirt, so I'm going to try and explain it, and maybe you can correct me if I'm wrong. Let's go about that. KubeVirt exists within Kubernetes; it natively works within Kubernetes. It doesn't reach out to other hypervisors for any kind of services or accessibility, and its networking, security, and orchestration are all container- and Kubernetes-native functionality, correct? So there are questions like: if I have a K8s cluster running on ESXi, can KubeVirt deploy a VM to ESXi? I'm assuming that's a no.

So — I'm assuming you are running the cluster using virtual machines. Mm-hmm. The virtual machines would internally run another virtual machine, so this would be a case very similar to running virtual machines in AWS. Now, with ESX it's possible, and even with a VM it's possible, to do nested virtualization, so that will work, and the emulation would also work.
So the short answer is yes. If you are expecting it to run virtual machines on ESX directly — no, it does not do that. Okay.

So, just as an example, because this is sort of kind of like this: I created an OpenStack cloud provider cluster wherein the cluster was made up of OpenStack VMs. Mm-hmm. Those OpenStack VMs were actually running on VMware ESX. Yeah. And then I deployed and created KubeVirt VMs on the OpenStack VMs, which were running on VMware. Yep, so you can go as deep as you want; it should still work. So many abstraction layers, so little time.

Yeah. So the premise here is: if you're running Kubernetes and you need to run virtual machines as well, you can do both without having to have another project or product to run the hypervisor. Absolutely, you're absolutely right, Chris, and that's the motivation. You have one orchestrator and one set of integrations to control both of them together, and a lot of us are transitioning applications — existing ones, new ones — that need to work together, and it's better to have one platform. Correct.

So there are a couple of questions that I want to just knock out here very quickly. How do you address security concerns — containers communicating with VMs in the same pod, isolation, multi-tenancy? That's all handled by Kubernetes, correct? Yes. So if you're using CNIs with network policies — Calico and others — you can use them as if the VMs are containers, or just pods. So that's one level of isolation. And you're running virtual machines — they are actual virtual machines — so you get the virtual machine isolation as well, in some ways. So the virtual machine is isolated within the pod itself, correct? Yes, it's a regular machine, and it's in the namespace of the pod. Got it.
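Because every VM runs inside an ordinary virt-launcher pod, a standard NetworkPolicy gives one concrete form of the isolation being discussed. A sketch, assuming the VMI template carries a kubevirt.io/vm: demo-vm label (labels on the VMI are propagated to its pod):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vm-ssh-only-from-frontend
spec:
  podSelector:
    matchLabels:
      kubevirt.io/vm: demo-vm    # selects the VM's virt-launcher pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend      # only these pods may reach the VM
      ports:
        - protocol: TCP
          port: 22
```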
Okay, so that answers that. Do you support any other orchestration platforms, like Swarm, Rancher Cattle, or Mesosphere, anything like that?

No, this is very specific to Kubernetes. And I do not know of anything else similar.

So, a question about performance: how does the VM inside the pod launch compared to a standard pod launch time? What are we talking about, minutes? Is it dependent on the OS underneath to a greater extent? Can you talk to that?

So it depends. It depends a lot on, for example, whether you are using an ephemeral disk or ephemeral image, like a containerDisk, or whether you have the image already present on a PVC so that it doesn't need to be uploaded or retrieved. There are ways of booting a VM where you've specified the source image via an HTTP URL; in that case it's going to automatically pull that image down and convert it into a format such that the VM can boot, so you pay for that process on top of the actual VM boot. If you just mean the VM booting, it's pretty much instant: if you start the VMI and go to its console, you can watch everything run, and assuming you don't have a cloud-init step that takes a very long time, you should be up and running in 10 or 15 seconds.

So let me elaborate on that. When you run virtual machines today, the OS boot takes whatever time it takes, and you really don't have control over it; it depends on the operating system. Modern operating systems, especially the cloud images, are pretty slim, and they boot up very fast.
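The "source image via an HTTP URL" path Josh describes goes through the CDI (Containerized Data Importer). A DataVolume sketch along these lines, where the URL, name, and storage size are placeholders and the API version may differ between CDI releases:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: centos-dv
spec:
  source:
    http:
      url: "https://example.com/images/centos.qcow2"  # placeholder image URL
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi   # leave room for the converted image
```

The CDI imports and converts the qcow2 into the PVC once, up front; after that, VM starts pay only the guest OS boot time.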
Yeah. The actual KubeVirt processes, virt-launcher and so on, are just like any other container or pod that you're running. So really you're talking about the time it takes to boot the machine itself. That's one part. And two, as Josh was explaining, there's the time it takes to extract the image. Virtual machine images are going to be bigger: the Ubuntu images that we played with are at least 300 to 400 MB, which is on the smaller side, and the Windows qcow2 image that I got was about six gigs, probably nine gigs when extracted. So there is some time spent on just dealing with that. But assuming that your images are primed, as in they are already there on all your nodes, then that time goes away and it's just the VM boot time.

Cool. So is there a more user-friendly interface outside of the CLI? We have quite a few tools there, but is there any kind of dashboarding that you know of that's been created around KubeVirt?

Yeah, there is a web UI project that I wasn't able to show, but you should be able to use it. It's a graphical interface that provides validation on top of what I just showed today.

Can you say the name again? Say the name of the tool again.

It's called the web UI, and if you go to the kubevirt organization, there's an operator that they created that just deploys the web UI.

Cool. All right. Lots of people are asking for examples of YAML: literally the whole process of going from nothing, to I have a disk, to I have KubeVirt running. Do you have a repo that you could possibly share with everybody here?
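For the "primed image" case, a VM booting from a prebuilt containerDisk looks roughly like this; the API version and resource sizes reflect KubeVirt releases of that era and may differ in yours:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true            # start the VMI immediately
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo  # demo image from the kubevirt org
```

Because the disk is an ordinary container image, pre-pulling it on every node makes subsequent VM starts nearly as fast as the guest OS can boot.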
I have a detailed set of instructions that I've used to gather together some select YAML, meaning: apply this, apply this, apply this, and you should be running. I can put that in a repo for people to look at. There is also a kubevirt demo repo that was made; they suggested using Minikube for that, and I personally wasn't able to get things to run correctly with it. It's not terribly old, so maybe it was just something with my cluster or with me using Minikube. I'm unsure.

So, Chris and others, the KubeVirt documentation I thought was really good, to be honest. It's been a while since we did it, but I remember that once we had the Kubernetes cluster running, it was very easy to set up: the operators are really just a couple of YAML files I needed to apply, and they started up fine. On the docs: if you go to the GitHub kubevirt repo, there is a docs directory with extremely well-written, very crisp documentation, and there is an examples directory that I highly recommend everyone look at; it has tons of examples. I wish it had some READMEs, or we will contribute some documentation to make it easier to consume, but it's actually very straightforward.

Okay. Yeah, one thing right now, especially when it comes to not using one of the stock container disks: a lot of the examples use Fedora or CirrOS and things like that, but let's say you really need CentOS or Ubuntu or something else, say a Windows image. Then you really need to use the CDI in order to upload that image; it's just really difficult otherwise, which is kind of the whole reason they made it. But you need to be careful; this is one of those gotchas when going through it. So the documentation is good.
However, when it comes to figuring out what version of the CDI works with what version of the KubeVirt operator, that can be challenging, and I definitely ran into some compatibility issues there. But once I found some versions that worked well together, I was off to the races.

Yeah. So on the screen I'm showing a quick example of how you can create a Docker image out of a virtual machine image, and as you can see, it's actually very straightforward.

Yeah, this is what Rupak did to create it. This is a Fedora example, but it can be any qcow2 image: you put it under the disk directory inside the image, build the image, and then reference it as the containerDisk. That's what's happening here.

Interesting. And it's really as simple as that. This is fascinating; this is just blowing my mind right now, because I have this old tool that I want to keep using, and it's a VM, and it's just sitting out somewhere, and now I can just say: yep, into the cluster you go.

Yeah, this is exciting stuff. All right, let's see. Okay, this is a good question: licensing. Some of these OSes, as you mentioned, have licenses, like RHEL and Windows. How is that managed? Where do you see that happening right now?

Yeah, so that is a problem, in fact. Linux is a lot easier; we were able to play with it very easily. Windows took us time, and we come from a virtualization background. With Windows, what they have done is make evaluation images available that you can use; subsequently you will have to purchase licenses, but if you are just looking to experiment, we can add the links to the PDF that we are going to share, from which you can download the image and start using it. So yes, it's difficult, but it's not as difficult as it used to be. For Ubuntu, there are images readily available.
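The on-screen example being described is along these lines; a minimal sketch, assuming a qcow2 file exported from your existing VM (older KubeVirt releases used a dedicated base image instead of scratch, so check the version you run):

```dockerfile
# Package a VM disk image as a KubeVirt containerDisk.
# KubeVirt expects the disk file under /disk/ inside the container.
FROM scratch
ADD fedora.qcow2 /disk/
```

Build and push it like any other image, for example docker build -t myregistry/fedora-disk:latest . followed by docker push, then reference it from a VM spec as the containerDisk image.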
You should be able to create a very similar Dockerfile; just make sure that you purchase licenses afterwards. Windows doesn't let you run for a really long time, and it will keep nagging you that it's not activated.

Right. Okay, makes sense. A few more questions here. How would you patch the VMs in this case? You would connect to them via your normal communication channels, you define all that networking, and off you go?

Yeah, a machine is a machine. Especially if you're using a persistent disk with the virtual machine, where your OS is on a disk that survives reboots because it's on a persistent volume, you can patch it as if it were a regular physical machine. That's one option. What I would recommend, if you can do it, is to use immutable boot disks, which can be your containerDisks, so that you can throw them away. If there is a new version of Red Hat available, instead of patching the old one you just bring up a new one and replace it, keep the data on a different PVC or a different volume, and use them together. That's what I would suggest, but you have choices.

Yeah, and also, because KubeVirt is able to leverage Kubernetes-native concepts, you're able to create things like a VirtualMachineInstanceReplicaSet, for example. So if you wanted to update, say, the image that all your VMs are running, and you have five of them, then before reaching for the likes of hot patching, you as a Kubernetes operator can apply that change to the replica set and let the rolling upgrade take care of it.

Nice. All right, it is the top of the hour, and I want to respect everyone's time here. So thank you so much for the wonderful presentation. We really appreciate your help today in providing us with this information. The webinar recording and the slides will be online later. Go to the link that was shared in the chat.
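The VirtualMachineInstanceReplicaSet Josh mentions mirrors a pod ReplicaSet; a sketch with five replicas, where the labels, image, and sizes are illustrative and the API version matches KubeVirt releases of that era:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: web-vms
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web-vm
  template:
    metadata:
      labels:
        app: web-vm
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 512Mi
      volumes:
      - name: rootdisk
        containerDisk:
          image: kubevirt/cirros-container-disk-demo   # small demo image
```

Pairing an immutable containerDisk with data on separate PVCs, as Rupak suggests, is what makes replace-rather-than-patch workflows like this practical.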
I will repeat it as soon as I find it; I am just filling dead air. It's cncf.io/webinars. Go there, and sometime in the near future this webinar will be available. Look forward to seeing you all at the next CNCF webinar. Talk soon. Thanks. Thank you.