All right, so welcome to our talk on streamlining KubeVirt VM creation. I hope you had a great morning so far and that you can learn something from our presentation. My name is Felix Matuschek. I'm a software engineer at Red Hat working on KubeVirt, specifically on its instance types, preferences and its command line tooling. With me I brought Andre Pokorny. He's an intern at Red Hat, and he's also working on KubeVirt and helping me with the command line work.

So let's have a quick look at today's agenda. First, I will give a short introduction to KubeVirt. Then we'll have a look at the joys of a declarative API. Next, we'll have a look at how to make VM creation in KubeVirt simpler. Then we'll show a demo of the things we were able to achieve, and lastly we will point out some next steps for instance types in KubeVirt.

So what is KubeVirt? It is an add-on for Kubernetes which allows you to run and manage your VMs on Kubernetes. KubeVirt might be an opportunity for companies who already have virtual machines and want to give Kubernetes a try. Everything you can do with traditional VMs you can do with KubeVirt VMs too, but KubeVirt brings virtual machines into the container world, so now you can do container stuff with them too. For example, you can use the Services and the load balancing that Kubernetes has to offer on top of traditional virtual machines, and you can also combine your virtualized and your containerized workloads.
For example, let's say you have a virtualized application and you want to build a new microservice on top of that. Then you can run your virtualized application in your cluster and build your new application in a container, and they both can work together, for example using the same network.

To do all of this, KubeVirt extends Kubernetes with certain virtual-machine-related custom resource definitions, and therefore allows you to use the Kubernetes API to manage virtual machines. And because CRDs alone are not enough to actually run virtual machines, we provide additional controllers and agents. Under the hood, KubeVirt is using QEMU and libvirt to run virtual machines, with one QEMU process per container. As you might know, QEMU and libvirt both provide a vast set of capabilities, and using those effectively will be the main aspect of our talk today. If you want to learn more about KubeVirt, please visit its home page or the user guide; you can find the links at the bottom of the slide. We also have a booth outside, so please visit us.

So, what are the joys of a declarative API? Well, as I just told you, KubeVirt is providing the vast set of capabilities of QEMU and libvirt, and it does that with a declarative API, as all Kubernetes objects do. For that we have the VirtualMachine custom resource definition, and this is very rich, but it can also be very overwhelming, especially for users intending to create virtual machines in the simplest possible way. Also, maintainability of virtual machines can become quite hard, because let's say you have a fleet of virtual machines and you want them all to use the same settings. Then you would have to keep track of all the settings separately for each VM, and this is not very maintenance-friendly.

So let me show you an example. This is the manifest of a very simple Windows virtual machine. There's already quite a lot going on, and when I first saw this, I couldn't believe my eyes.
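The slide itself isn't reproduced in this transcript, but a "simple" Windows VM definition of the sort being described looks roughly like the following. This is a condensed sketch based on KubeVirt's documented Windows examples, not the exact slide; the disk and PVC names are placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: windows-vm
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 4
        memory:
          guest: 8Gi
        clock:
          utc: {}
          timer:
            hpet:
              present: false
            hyperv: {}
        features:
          acpi: {}
          apic: {}
          hyperv:
            relaxed: {}
            spinlocks:
              spinlocks: 8191
            vapic: {}
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: sata
          interfaces:
            - name: default
              model: e1000e
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: windows-disk
```

Every one of those clock, feature, and device settings has to be known and repeated per VM, which is the pain point the talk is about.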
I was like, there must be an easier way. And I also had some questions, like: how do you even know which settings are appropriate for your guest? How do you know all the settings? How do you apply the same settings to different virtual machines in a repeatable, reproducible way? And how do you even share a common set of options?

So again, my question was: how can we do better, and how can we simplify virtual machine creation in KubeVirt? Let's say, for example, we have all the settings required to run a Windows-based virtual machine. We should group them into one building block. So, commonly used settings should be abstracted into blocks, and these blocks should also be reusable between virtual machines to avoid duplication. Ideally, KubeVirt would already provide ready building blocks, so you could start right out of the box and you wouldn't need to search for the appropriate settings for your guest.

And please note, this is our approach to this. The traditional manifests still have their use case, because if you want very precise customization of your virtual machine settings, they can still be very useful. But our use case here is creating virtual machines in the simplest possible way, so this is a different approach. But before we look at our solution,
Let's have a quick look at the previous solution attempts. There were two. First, the VirtualMachineInstancePresets. An issue with them is that they are deprecated starting with the 0.57 release, and they will be removed in the future. They are based on the PodPreset API of Kubernetes, and this API never graduated from the alpha stage and was already removed in Kubernetes 1.20, so the presets will be removed from KubeVirt as well in the future.

There was also a lesson learned from the VirtualMachineInstancePresets: there was no differentiation between resource sizing and runtime preferences. Let's say, for example, you have a Linux-based guest and a Windows-based guest, and both should be using four gigabytes of RAM and four cores. There was no differentiation between the cores and memory on the one hand and the other hardware-related settings on the other, for example which disk bus your guest prefers. So there was some duplication again, because we had all of this in one object. Ideally this would be split into separate objects, so we would avoid the duplication of the resource sizing and the runtime preferences.

The second solution attempt we had were the templates, and the issue with them is that they are a downstream concept by Red Hat, so you can only use them on OKD or OpenShift; they're not usable on plain Kubernetes. And they have another issue: when you create a virtual machine from a template, you create a copy of the whole definition inside the template, and if you create another VM, you create another copy, and so on. Let's say you wanted to improve your template and change some setting. Then the only way to apply the setting to all your existing virtual machines would be to drop them and recreate them completely, and this is also not very maintenance-friendly.

Let's also have a look at the other hyperscalers. What do others do?
Well, whether it's GCP, AWS, Azure or OpenStack, if you look at their command lines, they're all pretty similar. All you need to create a running virtual machine is just an image, and everything else is derived from this image. We thought this was quite a nice user experience, and that KubeVirt should have something similar.

So let's have a look at our goals again. We wanted to take away the complexity when creating virtual machines. We wanted to group settings into resource sizing and runtime preferences, and we wanted to improve the maintainability of virtual machines by making those grouped settings reusable.

And how did we achieve all of this? Introducing instance types and preferences. Those are new custom resource definitions covering resource sizing and runtime preferences. They are available starting with the 0.57 release, and there are namespaced and cluster-wide variants available. I told you about ready building blocks: those could be shipped as the cluster-wide variants, while you could still have your own custom instance types and preferences as namespaced objects. One of each of them can be referenced in a virtual machine, so if you start your virtual machine, a virtual machine instance will be created, and the settings of your instance type and your preference will get applied to the virtual machine instance.

To understand this a bit better, we have a quick visual overview. So we have two different APIs here.
First, we have our instancetype API, and this defines the instance types and preferences; and then we have our core API in KubeVirt. If you have a look at the VirtualMachine, you can see that you can specify an instance type and a preference in the spec of the VirtualMachine. And if we go one level below, to the VirtualMachineInstance, you can see that there is no longer a concept of instance types or preferences. When a VirtualMachineInstance is created, the settings of the instance type and the preference get expanded and applied to the spec of the VirtualMachineInstance. So if you compared a VirtualMachineInstance created with an instance type and a preference to one created without, you wouldn't notice any difference.

And speaking of ready building blocks: we have the kubevirt/common-instancetypes repository. This repository holds a set of predefined instance types and preferences, and the goal is to ship them with KubeVirt by default in the future. So right now you still have to deploy them manually, but we want to change this. And if you want to have a look at those predefined building blocks, just go to the repository; you can find the link at the bottom of the slide.

So what does it look like using instance types and preferences? On the left, this is the same definition I showed you before of the simple Windows virtual machine, and on the right, this is basically the same Windows virtual machine, but this time using an instance type and a preference. You can see that the manifest has become quite a bit shorter, and you can see that the instance type and the preference are referenced at about the middle. But I won't bore you with the details of the manifest here.
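In outline, though, the building blocks and the shortened VM look roughly like this. This is a sketch using the v1beta1 API version mentioned later in the talk; all names are placeholders, and omitting `kind` makes the reference default to the cluster-wide variants:

```yaml
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype    # namespaced; a cluster-wide variant also exists
metadata:
  name: my-instancetype
spec:
  cpu:
    guest: 2
  memory:
    guest: 256Mi
---
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachinePreference
metadata:
  name: my-preference
spec:
  cpu:
    preferredCPUTopology: preferCores
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  instancetype:
    kind: VirtualMachineInstancetype
    name: my-instancetype
  preference:
    kind: VirtualMachinePreference
    name: my-preference
  running: true
  template:
    spec:
      domain:
        devices: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: my-image
```

Note how all sizing and guest-tuning details have moved out of the VirtualMachine into the two reusable objects.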
So please have a look at it yourself later. And the last feature was creating a virtual machine with just an image; we also implemented this in KubeVirt. For that we are leveraging labels. Images are Kubernetes objects too, and you can label them, so images can recommend a suitable instance type and preference, and this is all that is required to create a running virtual machine. If you specify this volume or this image for your virtual machine, then the instance type and preference will be inferred. So in the end this works similarly to the other hyperscalers, and we also provide appropriate tooling to use this feature on the command line.

Yeah, so in the end, if you want to, you have no more need to work with YAML manifests at all; an image is enough to create a running virtual machine.

And talking about the command line: we also improved the command line of KubeVirt. The command line utility is called virtctl, and it was enhanced to match the user experience of the other hyperscalers. For that, several new subcommands were added. So now you can create an instance type, a preference and a virtual machine with virtctl, which was not possible before. Now I'm handing it over to Andre to introduce you to some of the subcommands.

So, as Felix said already, we were working on adding the virtctl create instancetype and create preference commands, and these commands should help you to create the manifests; they will generate them for you. For virtctl create instancetype we have support for most of the parameters that you can specify on your own manually in the manifest YAML; you can also do it on the command line with our command. For the preference command we don't cover all of these options yet, but this command should serve more as a starting point, so you will avoid writing the whole YAML manifest from scratch.
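To go back to the label-based inference for a moment: as a sketch, a boot image (here a PVC) recommending its instance type and preference looks like this. The label keys follow the KubeVirt user guide; the values are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-image
  labels:
    instancetype.kubevirt.io/default-instancetype: my-instancetype
    instancetype.kubevirt.io/default-preference: my-preference
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A VirtualMachine that boots from this PVC can then have its instancetype and preference fields filled in automatically.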
Basically, you will generate the YAML, save it somewhere in a file, and then modify it based on your needs later. The outputs of these commands, as I have said, can be saved to a file, or you can even pipe them into the kubectl or oc client and then create the objects in your Kubernetes or OpenShift cluster directly.

So the last new subcommand is virtctl create vm, and as its name says, it allows you to create virtual machines. It's available starting with the 0.59 release, and it provides you with a fixed set of CLI flags to adjust virtual machine parameters. So, for example, you can specify a name and a boot volume, and you can also specify the instance type and the preference to be used. And again, this command outputs manifests, and I think that's a really great approach, because we didn't want to reinvent the wheel here and create another Kubernetes client. All it does is output manifests, and if you're familiar with Kubernetes, then you already know how to work with manifests. And I think you can also use it, for example, in a script and just pipe the output into oc or kubectl. So I think that's quite an elegant way.

So now we will present a short demo of what we were able to achieve. We wanted to make this demo focused on people who already have a running Kubernetes cluster and who want to give KubeVirt a try, so you should already be a bit familiar with Kubernetes. So the first step will be how to deploy KubeVirt. The second step will be creating an instance type and a preference; as I told you before, you could use predefined instance types and preferences, but in this demo we will be creating our own. And the third step will finally be to create a virtual machine with an inferred instance type and preference.

So first, how do you deploy KubeVirt? Well, there's one prerequisite, and that is that you need a Kubernetes cluster which has virtualization enabled.
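The deployment steps referred to here condense to three commands. This is a sketch following the KubeVirt quickstart; the pinned version is my assumption, chosen to match the release candidate mentioned in the talk:

```shell
export VERSION=v1.0.0-rc.0
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
kubectl wait -n kubevirt kv/kubevirt --for=condition=Available --timeout=10m
```

The first command installs the operator, the second creates the KubeVirt custom resource that triggers the actual installation, and the third waits for the components to come up.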
So the first step is to deploy the KubeVirt operator, and you can do this with kubectl apply. Here we are just applying the manifest of the KubeVirt operator from the KubeVirt releases page. The second step is to create the KubeVirt custom resource. This will trigger the actual installation of KubeVirt, and again we're using kubectl apply here, applying the KubeVirt custom resource manifest. And then the third step is to wait until all components are up. We can do this with the wait command of kubectl, and we can just wait for the Available condition of the custom resource. One more note: this is still using the release candidate 0 manifests, but version 1.0.0 will be released in July, so please give it a try when it's released.

And now, next we will be creating our instance type and preference. So now that we have a running cluster with KubeVirt deployed, we can start creating the manifests that we will use with our virtual machine. First we will create the manifest of the VirtualMachineInstancetype, and to do so we will use the command that you can see here. In this example we are specifying the CPU and memory flags, which are required, but there are even more flags that you can specify, for example the IO threads policy. Or, if you would like to use a GPU with your virtual machine, you can use the GPU flag and it will pass the GPU through to your VM. Below the command there is an example of the output, which you would not normally see here because we are piping it into kubectl to create the object in our cluster. The next step will be the creation of the VirtualMachinePreference manifest.
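The instancetype command just described might look like this. This is a sketch; the flag spellings are my assumption and should be checked against `virtctl create instancetype --help`, and my-instancetype is a placeholder name:

```shell
virtctl create instancetype --name my-instancetype \
  --cpu 2 --memory 256Mi | kubectl create -f -
```

Dropping the pipe into kubectl prints the generated VirtualMachineInstancetype manifest instead, so you can save and edit it first.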
We are again using virtctl create preference. Here we are specifying the CPU topology, and we are setting the value to preferCores; other options for this flag would be, for example, preferSockets or preferThreads. And of course, this command has even more flags you can specify, but as I have said already before, it should serve more as a starting point to avoid writing the whole YAML from scratch; it does not cover all of the options yet.

Yeah. So now that we have created our instance type and preference, we can finally create our virtual machine with the inferred instance type and preference. And for that, two steps are required. First, we need to upload a bootable image and label it accordingly. For that we can use the image-upload command of virtctl. In this case, we are uploading our image into a PVC (persistent volume claim) called my-image, and then we're using the default-instancetype and default-preference flags to specify the instance type and preference we just created. Then we're using a size of one gigabyte. We're using the force-bind option to avoid waiting on the upload when we have storage with WaitForFirstConsumer binding. And then finally we're uploading a Cirros image, which is just a simple testing distribution that is quite convenient to use.

And finally we can create a virtual machine with the inference enabled. And for that we are going to use the create vm command of virtctl. We're giving our VM the name my-vm, and then we have the infer-instancetype and infer-preference flags. Those are still required, but we plan to drop them in the future; so imagine they weren't there, then the command line would be even shorter and simpler. And lastly, we need to specify our boot volume, and we do that with the volume-import flag. This will create a clone of the image we just created, so our VM will also have persistent storage. We're giving it the type of our image, that's a PVC.
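Assembled, this part of the demo might look roughly like the following sketch. I have reconstructed the flag syntax, in particular the comma-separated parameters of --volume-import, from the flags named in the talk, so verify everything against the virtctl help before use:

```shell
# Generate and create the preference
virtctl create preference --name my-preference \
  --cpu-topology preferCores | kubectl create -f -

# Upload a Cirros image into a PVC labeled with the defaults just created
virtctl image-upload pvc my-image --size 1Gi --force-bind \
  --default-instancetype my-instancetype --default-preference my-preference \
  --image-path ./cirros.img

# Create the VM, inferring instance type and preference from the boot volume
virtctl create vm --name my-vm --infer-instancetype --infer-preference \
  --volume-import type:pvc,name:my-image,namespace:default,size:1Gi | kubectl create -f -
```

As in the talk, the last command only emits a manifest; piping it into kubectl is what actually creates the running VM.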
We're giving it the name my-image. Then we're using the namespace default, because we didn't specify any other, and again we're using the size of one gigabyte. And all of this is piped into kubectl create, and this is enough to create a running virtual machine.

So let's have a look at the results. We have the instancetype and preference fields of our virtual machine here, and you can see that the virtual machine is using the instance type and the preference we just created. So the inference was successful: we didn't specify these on the virtual machine; they were inferred from the boot volume of the virtual machine. And then there's a third field called revisionName. This is used to create ControllerRevisions for our instance types and preferences. Those are point-in-time versions of our instance type and preference. So let's say, for example, you want to stop or start your virtual machine: with the ControllerRevision it will always use the same settings which it got when it was created first, so there won't be any changes without clearing the revisionName. When you clear the revisionName and start your VM, a new ControllerRevision is created and the new changes are picked up.

And if we have a look at our VirtualMachineInstance, specifically its CPU and memory, you can see it's using two cores. So our instance type said it should use two CPUs, and our preference said it should prefer cores, and we're doing just that: two cores, one socket, one thread. And our instance type also specified that we should be using 256 megabytes of memory, and we're doing just that. So that's all for the demo.

And lastly, we have some next steps. So currently we're shipping the version v1beta1 of our API, and we want to ship version v1 with KubeVirt version 1.1 or greater. For that we still need to improve our virtctl flags. So as I told you, we want to enable the inference by default, and we also want to deploy the common instance types with the virt-operator, so we can use them right out of the
box. And then we still need to make various improvements to the controller revisions, because right now a new controller revision is created for every virtual machine, and so we get a lot of duplication; there's no de-duplication between them. So let's say you have an instance type with the same settings and you create two virtual machines: you get two controller revisions with the same contents, and we want to improve on that. And of course, lastly, we want to fix all the bugs, because who wants to ship any bugs with their version 1.0? So that's it. Do you have any questions?

Okay, so the question was if we plan to release a tool for migrating from oVirt to KubeVirt, and to be honest, I don't know. I know that there are tools available, but I can't tell you right now, sorry. Ah, the answer from the audience was that there's a tool called MTV; you should have a look at that. Any other questions?

The question was, if I understood it right, whether we have the possibility to create custom instance types and preferences ourselves. Yes, of course you can create them yourself too. So you can either create them as cluster-wide objects or as namespaced objects. Other questions?

The question was how well KubeVirt is integrated into Kubernetes, and if you can use the same tools to manage your virtual machines and containers. And I would say yes, you can; it's quite the same. So virtual machines are just another object in the Kubernetes API, and so naturally all the tools you can use with containers, you can use with virtual machines too. So, for example, you could use Argo CD to roll out your virtual machines; there's no issue with that, nothing different from a Deployment or a Pod. So I think there are not too many differences there. Does that answer the question?

The question was if there's also a UI for KubeVirt, not just the command line tooling. So the answer is: sort of. If you go to OKD and look at its console, or OpenShift.
That's the same console, and there's a UI available that you can use with KubeVirt. So partially yes, but not on Kubernetes itself.

The question was if the UI is also making use of the features presented today. Yes, it is, or at least that's planned; it will make use of them in the future. Any more questions?

The question was if there are security-relevant settings in a virtual machine, right? Sort of: let's say you wanted to break out to your host; that was the question. To be honest, I can't answer that right now. I think there are, but sorry, I can't answer this question. But please come to the booth; maybe someone else can answer. So, any more questions? Okay, that's it, and thank you.