Okay, so in preparation for this talk, I was doing a little research about KubeVirt's history and its timeline, and I got a little sidetracked: I ended up stumbling on some history about virtualization in general. I thought it was interesting, so I figured I'd share it. Virtualization as a concept has been around a long time. What I was reading is that virtualization in some form has been around since about the 1960s; I'll take the internet's word for it. So when you think about virtualization as a technology and you put KubeVirt on that timeline of different use cases and ways to use virtual machines, it's interesting to think about. KubeVirt is the concept of running virtual machines on top of a cloud-native environment, running virtual machines on top of Kubernetes, so the virtual machine actually runs inside a pod. It's a really interesting concept when you think about it that way.

Going back to KubeVirt: the project started around 2016, began as a concept, and eventually became a project. Over the last seven years or so, it has been on a long journey to eventually reach v1.0, which is a really exciting milestone and accomplishment by the community. And that's what we're going to talk about today: what v1.0 being released means for you as an end user, as someone in the community, and as a developer. Then we'll talk a little about what's next: if v1.0 is released, what can you expect to see next from KubeVirt?

When we were cutting the v1.0 release, we wanted to come up with a theme, something to focus on, and the theme we came up with was: align with Kubernetes. So what does aligning with Kubernetes mean? It means a lot of different things, and the first thing we focused on was the release cadence. KubeVirt as a project had been releasing on a monthly cadence for almost six years, and that made sense for a long time. KubeVirt had a lot of features to develop, a lot of maintenance, a lot of feature velocity going into the project, and we needed to get those features out to end users and into the hands of developers, so there were monthly releases. This went on for a long time, but in the last year or so, as we were coming up to v1.0, the community was really focused on stabilizing the APIs and slowing the feature velocity a little. And so what we came to was: Kubernetes releases three times a year, so we can align with that and reduce the number of releases we do. This is really nice for a lot of reasons, especially stability. I know as an end user we really appreciated this; we can't really keep up with monthly releases from KubeVirt. We'd rather consume it maybe once or twice a year in larger chunks instead of constantly upgrading and ending up behind. So this was a really nice change for end users.
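To make "running a virtual machine inside a pod" concrete, here is a minimal VirtualMachine manifest in the shape the KubeVirt docs use; treat the exact fields as a sketch, with the demo container disk image published by the KubeVirt project. When the VM is started, KubeVirt creates a virt-launcher pod that hosts the guest.

```yaml
# A minimal KubeVirt VirtualMachine. Starting it causes KubeVirt to
# create a virt-launcher pod that hosts the QEMU process for the guest.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false            # start later, e.g. with `virtctl start testvm`
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: containerdisk
          containerDisk:
            # Demo image published by the KubeVirt project
            image: quay.io/kubevirt/cirros-container-disk-demo
```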
And even Kubernetes, if you think about it, went through this. Kubernetes started in 2014, and its release cadence was a lot faster in the beginning; as recently as three or four years ago it was four times a year, and now it's three times a year. So KubeVirt has gone through a similar progression. The important thing now is that there will be three releases a year, and they're going to be aligned with Kubernetes: Kubernetes will release, then there will be a period of about eight weeks that KubeVirt spends on stabilization, and then we cut a KubeVirt release.

In the same vein of aligning with Kubernetes, the other thing we wanted to do is create more SIGs. KubeVirt had some SIGs for a while; we wanted to expand the concept for various reasons, like specialization and more ownership over different areas of the code base. So we looked to SIGs as another way to mature the project, and we're going to talk a little bit about them.

Starting with SIG scale: scale is obviously an important concept in the world of virtualization and computing. We need to be able to scale, and we need to be performant. This SIG's charter is to focus on how KubeVirt can scale and be performant, to provide guidance across the project, and to influence pull requests toward making things more performant and more scalable. The really critical point is that in order to do this, we have to measure, and measure over time. So in SIG scale we measure across releases. Think of it this way: for a release like v1.0, we have a fixed set of PRs, and we run a job with a fixed set of configurations that tests performance. We measure that consistently over time and get ourselves a trend line. Then we compare the trend line over time and get a sense of how well we're performing given these changes. Ultimately, the output is a really nice way to communicate to end users: here's what you can expect from KubeVirt in this release, and whether there's a performance change that could affect you in some way, so you can be aware of it. A good example: we measure virtual machine overhead across releases, and whenever there's an increase, we catch it and document it in the release notes, so it's really clear what could happen if you're going to consume that release. All of these things are signs of a mature project, and that's been our focus for 1.0.

What's interesting here, if you look at the diagram, is that there are two pairs of dotted lines, one on the left and one on the right. In the far-left pair, the dotted gray line is when a KubeVirt release started, and the dotted blue line is when we moved to a different Kubernetes release for our testing. So you can see how these things can change: even at the far left of the diagram, when we started measuring, you can see some pull requests that caused changes. And it gets interesting if you go to the far right of the diagram, to the red dotted line: that's Kubernetes 1.27.
And you can see there's a clear correlation here: something changed so that our data points now have tight groupings, whereas before they were a bit more sparse. We can clearly correlate that with Kubernetes 1.27. So it's interesting, we see this stuff happen when we measure over time, and in this case it was a really nice performance improvement that we observed.

Scalability is the other half of SIG scale: measuring scalability across releases. What's cool here is how we measure it: we look at scalability as the number of HTTP requests we make to the API server. The API server is a resource shared by all the different APIs, and KubeVirt is just another user consuming it, so we want to be as good a citizen as we possibly can and really cut down on the number of requests we have to make. But you can actually see in this diagram that there's an increase. On the far left, this job is creating 100 virtual machines, and there's a pretty strong correlation: 100 virtual machines yields roughly one-to-one for the number of PATCH requests that have to be made to the API server. But you can see that jumped to about 200, and that had to do with a pull request we had to incorporate. It was an important pull request, so we decided we needed it and made some trade-offs to bring it in. What's important for you as an end user is that you can see the trend line: here's what's changing, here's why it changed, and here's what you can expect. You can see the same thing on the far right, to the right of the red dotted line: with Kubernetes 1.27 our dataset changed again, and now we've got sparse data points for our PATCH requests. This is kind of cool because it's something we observed with Kubernetes 1.27 that we don't really have an explanation for yet; it's one we want to take to the Kubernetes SIG Scalability folks to get an explanation, because it's interesting to see.

Then SIG API, a fairly new SIG. The plan for this SIG is to review the API evolution of KubeVirt. Think about it: as new changes come in, as the API changes and new features come in, we want to make sure there's backwards compatibility. We're not going to interrupt end users; you can continue to use the stable APIs, and you're not going to experience major issues that would break you.

So for SIG compute: the basic responsibility of SIG compute is to take care of all the compute features, traditionally the core features of KubeVirt. When we reached v1.0, and yesterday we had the release of v1.1, we already had a lot of features, and what's common between all of them is that they're mature virtualization features. Some of these features were on our backlog for a very long time and we tried to get to them, but a lot of things stopped us from implementing them.
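The talk doesn't say how SIG scale collects these request counts, but as one hedged illustration, a Kubernetes audit policy can log every PATCH against KubeVirt resources at the API server, and the entries can then be counted per test run. A minimal sketch, assuming audit logging is enabled on the cluster:

```yaml
# Audit policy sketch: record metadata for every PATCH sent to KubeVirt
# resources, so requests per run can be counted from the audit log.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    verbs: ["patch"]
    resources:
      - group: "kubevirt.io"
        resources: ["virtualmachines", "virtualmachineinstances"]
  # Drop everything else to keep the audit log small.
  - level: None
```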
Some of these features bridge the gap between traditional data-center virtualization and what's now available in KubeVirt, and for others the stopgap for us was waiting until Kubernetes supported them. We never wanted to implement features that were dedicated only to KubeVirt, KubeVirt-specific things; we wanted to leverage Kubernetes and its flows and principles as much as possible. For example, hotplug of CPU and memory. For a long time, Kubernetes had the philosophy that pods are immutable and there's no way to change the specification of a pod. Recently, as Kubernetes grew, a way forward appeared with the vertical pod scaling feature, which allows pods to be mutable and change their resource spec. We came to implement CPU and memory hotplug in a KubeVirt-specific way, but now we have a way forward to move it to a more cloud-native solution, so there will be a native implementation for this (there are sketches of both sides after this section).

Going forward, the networking SIG is responsible, obviously, for networking. They had a lot of interesting features, and I won't list all of them, but the notable ones are the network hotplug API, which lets you add more NICs to a virtual machine, and then, recently, again as a sign of maturity, a pluggable binding component. When we just started KubeVirt, we had several bindings, like a bridge binding and a masquerade binding. As we grew, the number of these bindings kind of exploded, and every other user wanted to implement a different binding for their own solution. So the recent change is that the networking team created a pluggable binding mechanism and moved most of the bindings out to an external repo, which provides a reference architecture for anyone to develop their own plugin to use with virtual machines (see the bindings sketch below).

Storage also had a lot of interesting features, and I won't go over all of them, but one notable example is data volumes. At the beginning, when we just started KubeVirt, there was no seamless solution for managing images: how would you upload these images into volumes, which need to be pre-populated before you start the virtual machine? Kubernetes didn't have such a solution, so we had to develop it ourselves. But this inspired Kubernetes to design volume populators, which are useful for us and provide a solution for others in the ecosystem. Now we have a way forward to eventually deprecate data volumes and use the cloud-native solution that Kubernetes provides with volume populators (a data-volume sketch is below). Again, a sign of the maturity of the project.

So what's next for KubeVirt, what's in the future? Besides some features, which I'll mention, the focus is really on graduation. KubeVirt joined CNCF incubation in 2022, and since then a lot has changed. As we've alluded to, there have been a lot more features; v1.1 was just released recently with even more, and there's a blog post out from the CNCF that goes through some of the features in it. There's more stability and a lot of exciting things for end users. So one of our main focuses is that we want to reach graduation, and what we're doing in the community is gathering the requirements and putting together proposals for it. But what's really important actually comes from you, from everyone in the room: adopters.
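On the KubeVirt-specific side, this is roughly the shape CPU hotplug took around v1.1. The field names and the feature gate are assumptions on my part and have moved around between releases, so verify them against the release you run; this is a sketch, not a definitive API:

```yaml
# Hedged sketch of KubeVirt-specific CPU hotplug around v1.1. The
# liveUpdateFeatures field and its placement are assumptions that may
# differ by release. The idea: declare headroom with maxSockets, then
# patch `sockets` upward while the VM is running.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-cpu-hotplug
spec:
  liveUpdateFeatures:          # assumption: v1.1-era field name
    cpu:
      maxSockets: 8            # headroom for hotplugged vCPUs
  template:
    spec:
      domain:
        cpu:
          sockets: 2           # patch upward, up to maxSockets
          cores: 1
          threads: 1
        resources:
          requests:
            memory: 2Gi
```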
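And on the Kubernetes side, the cloud-native path Vladik refers to is in-place pod vertical scaling, alpha behind the InPlacePodVerticalScaling feature gate as of Kubernetes 1.27. A minimal sketch of that mechanism on a plain pod, not of KubeVirt's actual integration:

```yaml
# Plain-pod sketch of in-place resize (alpha, InPlacePodVerticalScaling
# feature gate, Kubernetes 1.27). With these policies, patching the
# resource requests resizes the running container instead of recreating it.
apiVersion: v1
kind: Pod
metadata:
  name: resizable
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired      # resize CPU without a restart
        - resourceName: memory
          restartPolicy: NotRequired
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
```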
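To ground the bindings discussion, here is a fragment of a VM template spec using the long-standing built-in bindings. The newer pluggable form is shown as a comment because its exact shape should be checked against the KubeVirt release you run; the plugin name and the network name are illustrative:

```yaml
# VirtualMachineInstance template fragment: the built-in network bindings
# that predate the pluggable mechanism.
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}      # built-in binding on the pod network
        - name: secondary
          bridge: {}          # built-in bridge binding on a secondary network
        # Newer pluggable form (verify the shape against your release):
        # - name: default
        #   binding:
        #     name: passt     # illustrative plugin name
  networks:
    - name: default
      pod: {}
    - name: secondary
      multus:
        networkName: my-secondary-net   # hypothetical NetworkAttachmentDefinition
```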
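And here is what the data-volume flow looks like in practice: a CDI DataVolume imports an image into a PVC so the volume is pre-populated before the VM boots. The source URL is a placeholder; the volume-populator direction would cover the same job with a core Kubernetes mechanism:

```yaml
# CDI DataVolume sketch: import a disk image over HTTP into a fresh PVC
# so the volume is ready before the VM starts. (Placeholder source URL.)
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-root
spec:
  source:
    http:
      url: https://example.com/images/fedora.qcow2   # placeholder
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```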
There are a lot of adopters out there. We hear about them all the time; people come up to us and say, oh yeah, we use KubeVirt, we like KubeVirt. But we don't always see the public endorsement of it, and KubeVirt has a way to do this: there's an adopters file, just a markdown file that the community shares, and if you use KubeVirt, we encourage you as an end user to list yourself as an adopter. All it really means is that it helps the project: it helps the visibility of the project and shows that a lot of people are using it. Like I said, we hear about it all the time, but we don't always see people willing to add themselves as adopters. So if you do want to do that, please reach out to Andrew or Fabian, and they can help you with adding and mentioning yourself as an adopter of KubeVirt. And that will help with graduation: one of the requirements of graduation is having a lot of end users, and public endorsements are the kinds of things that help get the project there.

So what else? We're looking at expanding SIGs. I mentioned this important topic of SIGs and ownership, and one of the areas where we've seen a lot of success is getting these specializations together, groups of people who care about a specific topic, and really defining ownership for the group. The project really wants to expand on that concept, find some additional SIGs, and continue to drive that home in the community.

And then features: what are the features we consider really important for taking the next step toward graduation? Security is a huge one. We want the project to be secure, and one of the big recent features was non-root by default: all the testing lanes in KubeVirt now run as non-root. That's really exciting and really important for reaching graduation. Hotplug is another one. Vladik mentioned CPU and memory hotplug, and to reiterate, it's a sign of maturity; for the project to reach this level as a virtualization platform is important, and showing that we can do a feature like this, especially in a cloud-native environment, is really cool to see. And that's something that exists today.

So what are the things we want to do before going for graduation? Multi-architecture support, and I say full support because right now you can actually run on ARM, but it's considered experimental. All that really means is that the KubeVirt community has a bunch of test lanes for x86 that run on every pull request, and for ARM there's been a lot of ongoing work to create parity, not only feature-wise but also in the test lanes. That's what's really important: we want all the features we have for x86 to also be tested on ARM for every pull request, and that's one of the important criteria before it's fully supported. Feature-parity-wise it's very close, something like 90 to 95 percent from what we've observed, and we want to finish out that last bit of feature testing before we say it's fully supported. And then finally, performance.
I alluded to some of the measurements we do and the trend lines we observe. Those are the kinds of things we're going to continue to track across future KubeVirt releases and publish. Specifically, we want to do a few things. We want to improve our reporting: right now we report whenever we come across things that could affect performance, positively or negatively, and we want to improve the visibility of these things in the release notes. Performance is something we really want to be clear about to the end user. You want to open the release notes and know exactly what's coming when you adopt a release, and performance is a good example of something people want to see called out. Even more specifically, feature-wise, reducing the VM memory footprint is a big one. We've had a lot of focus on this recently, in different ways. We really want to reduce the memory footprint of virt-launcher, the process that actually manages the guest inside the pod. If we can reduce that footprint in any way, we become leaner, so as you get to larger scales and try to cram as many VMs onto a host as possible, there's not a whole lot of overhead. That will only help our scale even more.

Okay, and that's all we had. Thank you very much for attending, and we'd like to take questions if anyone has any. You can shout them out, or there's a microphone right here.

Thank you, great talk. I have a question. What do you think about reconciling the idea of having VMs, sometimes big VMs, on top of Kubernetes, with the philosophy of microservices? Microservices versus big VMs: what do you think about that?

Let's see if I understand your question. You have a VM, your traditional app, and maybe you're thinking about running it in a pod, and you're trying to justify whether to run it in a pod or continue running it in a virtual machine. Is that what you're asking?

Yes, but usually when people think of VMs, they're bigger than what we'd normally have as a pod. They're not small things, usually.

I think one of the use cases for KubeVirt in general is to provide a way for traditional, data-center virtualization users to move toward microservices and into the world of cloud native. One way to do that is to take some of the applications that are easily containerized and run them alongside their monolith VMs, so they can migrate into this environment quickly and then continue containerizing, without breaking their production environment or whatever they were doing before. Also, these VMs don't have to be big; sometimes you run very small-footprint VMs that emulate, I don't know, an antenna or something like that. And there are other use cases out there, like backup and restore. So I think there are lots of interesting reasons VMs are still needed.

Thank you.

Thank you. Do you mind going to the mic? Just for the recording.

One of you gentlemen is from Red Hat, is that correct?

Yes.
So the OpenShift Virtualization (OSV) product, I believe, uses KubeVirt heavily under the hood, right? So I guess two parts to this: are they big advocates of the feature promotions we're going to see and of what's next for KubeVirt? And how is that tied to OSV in the future? How is that relationship going to work, with KubeVirt's growth alongside OSV, which brings KubeVirt more into the enterprise? Does that make sense?

I'll just say that Red Hat traditionally did all the work upstream first. We contribute upstream and we grow the community, and the community is important to us. As with all the other products, we first develop upstream and then bring it into productization. I don't want to speak for the Red Hat side of things; I'm here to speak about the community. But Peter, in the back of the room, can answer questions about OpenShift Virtualization. And what was your second question? I'm sorry, I didn't get that.

I'm Peter Lauterbach, the product manager for OpenShift Virtualization at Red Hat. I think your question was basically: what is the relationship between KubeVirt and OpenShift Virtualization, and how does that impact everything? I'll keep this short. Basically, as Vladik said, everything we contribute actually goes upstream, into KubeVirt first. There are actually features upstream that are not quite ready for a product, and they will stay up there until they are. And by the way, we're not the only people using KubeVirt in production products. This is where I'll take my Red Hat hat off: if you go see the folks over at SUSE, there's a version of Rancher, I think it's called Harvester, that uses KubeVirt. The folks at Platform9 use it, Google uses it as part of Anthos. Did I leave anyone out? I think that's all of them. Oh, and it's used in GeForce NOW.

Yeah, there are others, like Civo and...

Yes, and actually some smaller service providers use it as well. So we are not the only ones, and as Vladik said, the more folks who join the community, the better for KubeVirt in general. So, Red Hat hat off again: please help KubeVirt out as either a contributor or an adopter.

Yeah. So I have one, because this story of KubeVirt, of containers then going back to VMs, feels very similar to the story years ago when OpenStack was emerging. We had physical workloads, and people were moving to VMs. At the same time, a sub-project of OpenStack, for example Ironic, was supposed to give people using VMs access back to physical hardware. Now we're going to the next layer: we're giving people containers, but we're giving them access back to VMs. We're probably all aware that people run Kubernetes on top of OpenStack, on top of whatever else, so you can stack it all together. Can we somehow combine it all, so that a user who was doing OpenStack with Ironic to get physical servers, and is now running Kubernetes on top of that OpenStack with KubeVirt because he wants VMs, can still use OpenStack Ironic to get to the layer below? Can it all work together so you get a coherent ecosystem?
You know, physical stuff plus OpenStack or whatever you use, plus KubeVirt, so that you get it all on one platform? Or is the philosophy, at some point, just: you're doing containers, now you're doing KubeVirt to get VMs, but forget about physical because it's too much? Is anything like this being considered, or maybe there's no use case, or maybe people just aren't showing this use case? What's your view on that?

Well, if I understand correctly, there is in the ecosystem the Metal3 ("metal kubed") project, which I believe uses Ironic and does bare-metal provisioning. The way I look at it, KubeVirt has a very defined use case: you like Kubernetes, you like pods, you like containers, you want to run pods and containers, you like the APIs, whatever; but you also like virtual machines. That's a realistic thing. Maybe you just want a kernel to sit between you and your hardware because you've got untrusted users. There are a lot of good reasons to think about it. And then you throw in Metal3, which is bare-metal provisioning with the same concept: you can put it behind Kubernetes APIs. So what I would say is that there are projects out there that, when you combine these things, can probably get you some of what you're looking for, taking these three use cases together.

Thank you.

I think the OpenStack Foundation has announced that they will make the control plane more like the Kubernetes way of doing things. Is there any plan for making the compute side use KubeVirt instead of the traditional KVM way?

Well, KubeVirt uses KVM; we are based on KVM.

What I meant is: you deploy a physical machine and put containers on top of it, but on the compute node, Nova and libvirt still act like a separate structure compared to the Kubernetes way. So is there any way, are you thinking about making the computes work the Kubernetes way?

We don't have a defined interaction with OpenStack, and I'm not sure what OpenStack does in that regard. But what I would say is that the way KVM works, you have the user-space part and you have the kernel-based virtual machine implementation in the kernel. The user-space part is QEMU, which emulates the virtual machine and interacts with the kernel part. So when we run QEMU in a container, it's still kind of the native way; the container is just how we deliver it into the Kubernetes world. It doesn't really make a difference whether that QEMU runs as a user-space application launched by Nova or as a user-space application inside a container. It's just a way of delivering it.

And also, do you run any databases on top of it? Any use cases like that?

We have community users that are running databases on KubeVirt.

Thank you.

Thank you. So in terms of stability and production readiness: since it's based on Linux KVM technology, do you feel it is as production-ready as using KVM today with Kubernetes just wrapped around it, or is there a risk there? Changing essentially your hypervisor layer can be scary for an enterprise that's already running. I just wanted to hear your comments on that.
Sure, yeah, I think it's an interesting question, because think about what KubeVirt technically is: we launch a pod, we've got some user namespaces, we've got some processes in that pod, we've got a libvirt daemon, a little helper process that KubeVirt runs, and then we've got the QEMU process. A lot of what you'll see when you interact with KubeVirt is the same stuff you'd see if you just used virtualization on your laptop. Where you'd have a harder time, and this is what we've been talking about as we've grown as a project and as Kubernetes has grown, is with the features that aren't as cloud-native. Like Vladik mentioned, pods being immutable was an assumption in Kubernetes for a long time. So how could you possibly change the container spec? How could you edit the domain? You couldn't, so we couldn't really vertically scale these things, until eventually Kubernetes changed. My point is that where you'll run into this kind of thing is when Kubernetes doesn't natively support a traditional virtualization feature, or doesn't have a way for us to implement it; then you can run into some challenges. But for the most part, the large majority of features, the ones we're alluding to here, are very mature features, the kind you see in very mature virtualization platforms, things we struggled over many years to get, and we've gotten to this point. There might be a few here and there left that you're accustomed to, or expect to work in a certain way, that might be a little different in this environment.

I would just add that KubeVirt is already being used by large companies that run it in production. On one hand there are the traditional virtualization features, which are very stable and very well known. In addition, it opens the door to combining those features with the flexibility of Kubernetes and all the cloud-native stuff that wasn't present in regular traditional virtualization. So in terms of the stability of these features, I think we're in the right place.

Yeah, the short answer is it's stable; you've just got to commit to the use case. If you really like Kubernetes and you want to run pods alongside virtual machines, this is exactly what you want, this is where you want to go. There might just be some interfaces that aren't there yet. There might be a few; I can't even think of them offhand. There are all these challenges, but nothing we're not working on. Because the thing is, it's no longer just KubeVirt, or just KVM, owning everything and doing whatever it wants. Now we've got our API in front of that: we've got Kubernetes, then we've got libvirt, and we have to deal with the translation. For the most part it's a really good translation, and over time it's gotten even better, though in some cases we kind of have to make it work across different APIs. But the point is, it is stable, and the large majority of the features you run today on your hypervisor, you'll see and get with KubeVirt.
Thank you. You're welcome. Okay, we're at time. Thank you, everybody. Thank you very much.