Well, hello everybody! Welcome to KBE Insider, with our fancy new graphic. You know, it's always nice to have a new intro clip just to make our lives more entertaining. We also got some cool new swag; I've got the sweatshirt going, so we're always happy about swag. So let's see, I have a couple of new things today. First and foremost, I'd like to introduce my co-host for today, who is Josh Wood. We have a rule that the co-hosts along with me must be named Josh. Even if it's somebody else, we're just going to call them Josh anyway; that way it's easier for me.

Yeah, as I said when we were leading up to this, Langdon, any Josh will do.

Exactly, exactly. So, a little bit about the show. Actually, Josh, do you want to introduce yourself real quick, and then I'll talk about the show a little bit?

Sure. Hi, everybody. I'm Josh Wood. I'm a developer advocate at Red Hat, principally focusing on OpenShift, and especially on Operators as an extension to the Kubernetes API, and as a way of delivering the features that we add atop the Kubernetes core in OpenShift.

Awesome, cool. So the reason we do the show is to try to give you an inside look into what's going on in Kubernetes land. In particular, the idea is that if you talk to, let's say, the lead engineers, the engineers who are actually doing the work, you'll get much better insight into where Kubernetes is going in the future than from the hopes and prayers of a product manager or a press release. Because it's open source, right?
There is a lot of contribution that's done based on the engineers' recommendations, and sometimes that's well reflected in the press releases and sometimes it's not. So we think it's a good idea to talk to the people who are actually doing the work, and we hope you do too. Feel free to put questions in the chat, or if you have any comments or whatever, we are always happy to have more engagement, as it were. So definitely let us know.

Today's show is going to focus a little bit on virtualization, and so we have two guests. The reason is that virtualization seems like something that isn't normally part of Kubernetes, it being a container orchestration platform. I think what we're starting to see is that it's more than that, right? It's an orchestration platform for lots and lots of different things. If you go back to our first episode, you can see us talking to Clayton Coleman about using it in general as a control plane, and it's actually come up a number of times throughout our interviews. So we think there's a lot more going on there.

So we invited two guests. One is David Vossel, I hope I said that right, who's from the KubeVirt team, and the other is Bandan Das, who works on KVM, the Kernel-based Virtual Machine. Is that right? I couldn't remember what the expansion is. And hopefully we can talk a little bit about virtualization and how it relates to Kubernetes.

So let's start with David. Do you want to introduce yourself? I always say it's very difficult to remember, or discover, or find any consistency in what titles and roles are within Red Hat, so I find it's much safer to let people introduce themselves.

Yeah, sure. I'm David Vossel. I'm an engineer at Red Hat, contributing to the KubeVirt open source project, and really it's kind of evolved into an ecosystem.
So I'm contributing to the KubeVirt ecosystem at this point. I was involved with the KubeVirt project early on and got the opportunity to design a lot of the way it operates today, so I'm coming at this from a KubeVirt perspective.

Gotcha, cool. All right, Bandan, how about you?

Hi, my name is Bandan. I work in virtualization. As Langdon said, I work mostly on KVM, which is the kernel-based virtualization module in the Linux kernel. That's an ecosystem on the virtualization side as well: it's QEMU, KVM, and libvirt, which are the things my team usually takes care of, both upstream and downstream. I was always interested in systems, and I think the best way to deal with different kinds of systems issues is to work on virtualization, because it has all the things you can think of, be it devices, be it CPUs, be it interrupts. That got me stuck with virtualization for a long time; it has always kept me busy. Besides my work at Red Hat, I also teach at BU, which I skipped this semester, but hopefully next semester I'm going to teach again. And I also work with Red Hat Research; we have a project going on fuzzing QEMU with Boston University, and that's going well.

Cool. So, one of the things we always like to run on this show is... sorry, Josh, did you have a question?

No, I didn't actually. Go ahead.

Okay, sorry. I was just going to say, we always like to ask what brought you into the open source world. So David, I was wondering if maybe you could tell us: how did you end up here?
Yeah. So I've been on the periphery of open source for a really long time, back in the 90s. My dad's office would have old computers, and I would inherit these old computers and need something to run on them, so I would use Linux, if that's all I could find. And at the time, I think you could find Linux in office stores, in boxes and whatnot.

Yeah, so I might have gotten a few of those, like Mandrake Linux or something like that, but I think I downloaded some from the back of a book. It would probably take me like a week to download an ISO, but I would download it and run it on these old computers. I had no idea what I was doing, but that was fun. I never really contributed to open source as a result of that, but I was on the periphery. After university, and I was using Linux throughout my studies in computer science, I needed a job. I had lots of opportunities, but one opportunity stood out: a company called Digium that maintained the Asterisk project, which is a telephony project. It's a PBX.

Oh yeah, telephony. I can't think of what they actually call those boxes. My phone runs on Asterisk.

Oh really? Yeah. So I did that, and I got to contribute to the Asterisk open source project and more Linux stuff, and it kind of just took off from there. I've been lucky to contribute to open source for the majority of my career at this point. So that's how I got started.

I still remember, I had a project a million years ago where we were doing an Asterisk implementation as part of a banking system, and discovering that you couldn't actually run it in EC2 for a long, long time. So that was a challenge, but it was actually related to virtualization, which was the timing.
Yeah, I couldn't quite remember why exactly, but that was an interesting experience. I still remember also, speaking of old Linux distros, a stack of three-and-a-half-inch floppies of Slackware that was literally this high, and installing it on a computer and being actually concerned that the monitor was going to light on fire if we had the sync rate wrong, and actually destroy the hardware.

Yeah, a similar memory I have is watching folks download those stacks of floppy disks to acquire Linux, on what I think were Macintosh LCs in the computer lab at the time, loading disk after disk after disk to copy them off, so they could take them back to their rooms and do that Linux install.

Nice. All right, so Bandan, on to you. What got you into the open source world?

Yeah, I mean, isn't it amazing, all these open source stories? My story correlates completely with David's, because I can kind of feel like, okay, that's my story as well, except that it started a little bit later. I didn't even know what open source was in the 90s.
I think the first time, during my undergrad days, was when I had to make a choice between buying software that was available from a shady place for a much reduced price, versus going for a version that was available to download online, even though the speeds really sucked. That's when I started realizing the importance of open source, without realizing that it was free or open source software. I remember the first time I was browsing in this marketplace back in India, where you could get software for a much reduced price, and I saw CDs for Red Hat Linux 9. Not Red Hat Enterprise Linux 9, but Red Hat Linux 9. And I bought them. I didn't know that I could download them for free, but I just said, I have to try these out. So that was the first time I actually tried something that was open source, and I installed it on my desktop only to find that I could run either my network card or my sound card, but not both. And that was how I got interested. I was able to look at the sources for Red Hat Linux 9; they were on the CDs, and I did not understand a bit of it, but I felt empowered to know that, oh, this is the code that runs on the system. That's really cool.

But I think the real contribution I made was, believe it or not, on MINIX, when I was doing my master's. We had a project on MINIX 3, and I found a bug in the network stack in MINIX 3, so I sent an email to Andrew Tanenbaum with a patch, saying, okay, this is the change that is required. And he acknowledged it. I mean, I don't know if it got into a release.
I didn't know; there was no easy way to find that out. But the very fact that you could find an issue, fix it, and get the fix accepted, I think that was what got me interested in the whole process. And the rest is history for me. I had an internship at Red Hat where I was able to work on kernel build tools, followed by working at a hardware company in Massachusetts where we were working on device driver hardening. That was my first real experience contributing to open source software, Linux-based; I was able to submit patches to drivers. And later on, virtualization. So yeah, here am I.

Nice, that's pretty cool. I like both those stories. I agree, though, Bandan, a lot of the origin stories are often the same, or at the very least it's the "oh, I had an itch and I scratched it," right?

For sure. And listening to Bandan tell that story, I was thinking that a lot of the reason I know the command line pretty well is because I had this janky laptop where, for a long time, I couldn't configure the video drivers, right?

That'll do it. Yeah.

All right. So, talking a little bit more about what's going on in the Kubernetes world, moving on: why do you think virtualization is still important in a containerized world? Feel free, whoever wants to answer that first.

Yeah, I'll jump in. Containers have to run on something. So what are they going to run on?
I mean, of course you can run on bare metal, but I think that virtualization is going to maintain itself as the underlying substrate that containers and applications run on, because virtual machines are easy to manage, you can update them easily, and they're more flexible than bare metal in that respect. And they can do interesting things: you can overcommit virtual machines on hardware, where you can't really overcommit bare metal. So, as the substrate that applications live on, I think virtualization is going to have a really long life within this new ecosystem that's emerging with containers, and even with, what do we call it, serverless and things like that. Those have to run somewhere as well. So where are they going to run?

Oh, I thought it was just in the cloud.

Yeah, exactly, and serverless. Well, it's going to move to the background. I think that's what we're seeing with virtual machines: they're moving to the background. When virtual machines originally started, even when infrastructure as a service originally started, like EC2, we saw people packaging up their applications inside of virtual machines. We saw Netflix do that: they used image creators to get all their applications into a virtual machine, and then they just scaled out that way. And then we saw that transition to containers. I don't think that use case is going to make a whole lot of sense for very long. We are seeing a move away from applications being packaged inside of virtual machines, but they're still being run on top of virtual machines, just in a different way. It's just kind of a reshuffling.

So, David, it's interesting.
I noticed you used the word "flexible" right in the center of your answer there: VMs are more flexible than real hardware. In an odd way, and I wonder what your thoughts on this would be, I agree with that flexibility point, but on the other side, if I'm a provider, I often answer that question by thinking about how VMs are actually a little bit less flexible for the user. The security parameters, the resource allocations, the rights I'm granting on that VM are more configurable, more flexible, for me as the provider, but I can lock them down in the form I deliver them to users. In a world where we're mostly renting computers from a cloud provider to run our stuff, do you think that's an important dimension of what the VM surface needs to provide, the ability to lock VMs down, to lock down their resource allocations? When you talk about overcommitment, that kind of touches on it.

Certainly, yeah. So isolation is one of those features that virtual machines provide, and that containers also provide, but it's a stronger form of isolation. We're probably going to talk about KubeVirt a little bit; I'll just say that in KubeVirt we actually have multiple layers, lots of dimensions here. One of those is the hypervisor itself, which is a really strong form of isolation that we can give people. Within that, we're also running the hypervisor in namespaces, kernel namespaces. And then you also have SELinux, or, I forget, the Ubuntu equivalent of that. So it's isolation and security through depth that you're providing when you're using shared resources. And I think the hypervisor is an important part of that, especially when you're renting out shared resources where multiple companies are running on the same hardware. You have no idea what you're running next to.
You have no idea what you're running next to Yeah Yeah, I think is it app armor. Is that the that the yes Yeah, yeah Yeah, so yeah, so I was kind of curious about bondons Um, sorry our video seemed to have switched on me Um, I was kind of curious about bondon like where are you seeing the growth in you know, kind of kvm as a reaction to Uh, you know like kubernetes and containerization and that kind of is like what's the what seems to be more of the focus Um, so I yeah in reaction to containers. Um That's a good question. So So kvm is um Is it a pretty low level in the stack? So, uh, that kind of makes it immune to Uh, I mean the good thing is we don't have to think about these things because we know, uh, that the api on top of us Is going to take care of it. So but that said When you know, if you when you talk to me about virtualization, I don't think about only kvm. So, you know, I also talk think about qmu and livevert and so With respect to, you know, how, you know, uh kvm is going to work with containers I think there's a lot of kind of emphasis that's been put now that I don't think was in the past is You know, how can we expose, uh, you know, livevert apis that kind that can help with these kind of things And that that was something I don't think, uh used to happen Previously, so that's one aspect. Um, when it comes to the real kind of Feature that kvm is exposing which is the hardware isolation that david talked about You know the the the mechanism itself is pretty transparent to containers because, uh, you know, They don't really have to talk to kvm directly and that kind of makes it very very simple for us Not to not to worry about those kind of things. Um Yeah, I think that pretty much sums it up. Yeah So just kind of for the audience, um, what what is the difference between qmu and kvm and livevert? 
Yeah, my question was going to be along very similar lines. Can you sketch for us, just to give a foundation for the folks who are listening, where's the line where KVM stops and libvirt starts being the sort of user interface to managing those features? How do they fit together? And likewise with QEMU as well.

So, yeah, I think about it like this. There's this really popular Linux Device Drivers book; I don't know if they still publish and release new versions, but they talk about mechanism and policy. They say that the kernel implements mechanisms, and user space uses those mechanisms to implement policy. I kind of think of KVM and QEMU in those terms. KVM is the code in the kernel that enables hardware virtualization. Whether it's x86, Arm, or a few other architectures that the Linux kernel supports, KVM is going to enable hardware virtualization and give us an interface to use it from user space.

And that's where QEMU comes into the picture. QEMU is the user space. QEMU talks to the kernel via a device file, through a set of ioctls, to request services from KVM, which in this case means all things hardware virtualization. A major part of QEMU is also emulated devices. Even though you can pass devices through to the guest, that's not really the norm.
A lot of the devices that guests use are actually just emulated devices that QEMU implements. For that matter, in some clouds it could be some other user space that is not QEMU. That's another good thing about this mechanism-and-policy separation: you don't have to use QEMU with KVM. You could use your own user space that talks to KVM and gets those services.

And then, coming to libvirt: that's where we're saying that applications will find it really cumbersome to talk to QEMU directly, so let's build an API on top of it which will be easier for applications to use. That's what libvirt does, basically. It's a layer on top of QEMU that helps applications talk to QEMU. If you're a human user, you would probably find it easy enough to talk to QEMU directly, but for an application, it's libvirt. That's how I make the distinction between the three.

And actually, that kind of gets to... oh, go ahead. Sorry.

Just a quick clarifying point on that, Bandan, because you mentioned a couple of times "wherever there is hardware virtualization support," like with VT-x on Intel. Is there, or was there historically, any support in KVM for virtualization on architectures without that hardware support? I remember some of the techniques in Xen for plain old x86 chips that didn't have VT-x. They're elaborate, and you wouldn't want to have to use them, but they're fascinating from an implementation point of view. Was there ever any non-hardware virtualization KVM could do?

That's a good question. So, when it comes to the real hardware extensions that implement virtualization, no, the answer is no. That said, KVM is really a hybrid model, and it takes ideas from Xen as well.
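The QEMU-to-KVM handshake Bandan describes really is just a device file plus a handful of ioctls. A minimal sketch of that first step, assuming a Linux host; the ioctl number is `_IO(0xAE, 0x00)` from `linux/kvm.h`, and the function simply returns `None` when `/dev/kvm` is missing or inaccessible:

```python
import fcntl
import os

# KVM_GET_API_VERSION = _IO(0xAE, 0x00), from <linux/kvm.h>
KVM_GET_API_VERSION = 0xAE00


def kvm_api_version(path="/dev/kvm"):
    """Return the KVM API version (12 on all modern kernels), or None when
    KVM is unavailable (module not loaded, no device node, or no permission)."""
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        return None
    try:
        # Every KVM request is an ioctl on this fd; a real user space like
        # QEMU would follow up with KVM_CREATE_VM to get a VM file descriptor.
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)


print("KVM API version:", kvm_api_version())
```

This is only the doorstep of the interface: creating a VM, adding vCPUs, and mapping guest memory are further ioctls on the file descriptors KVM hands back, which is exactly the "mechanism" side of the mechanism-and-policy split described above.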
So KVM actually came into being after the hardware virtualization extensions were introduced by Intel and AMD. That's the first part. That said, we do have a lot of paravirtualized interfaces, which is an idea obviously brought over from Xen. For example, timekeeping is one of those things. One of the things KVM does is share this space between the guest and KVM, through which the guest is able to use timekeeping features and get accurate time without the severe performance penalty you would have with an emulated device, right? The other thing is that in the initial days of virtualization, the hardware extensions had a lot of limitations on what kind of instructions they could execute in which processor mode. For example, in real mode, the processor was not able to execute certain instructions when you were running in a guest, so KVM would emulate them. So it's a mix of all these things together, and my answer to you would be: it's both yes and no. It's no, because we never had KVM when hardware virtualization did not exist; but even though there is hardware virtualization now, KVM also does a little bit of the emulation that you would expect from a traditional, older hypervisor.

Right on. That's crazy. All right, so moving on to talking about KubeVirt: where does that relationship live? Does KubeVirt use QEMU? Does it use KVM? Does it use libvirt? All of the above? Where is the line between KubeVirt and those tools drawn? And does that line stay still? Does it move around? Does it matter?

Yeah. So KubeVirt is really just a fancy wrapper around all of this. We are using libvirt, we're using QEMU, we're using KVM. We didn't necessarily need to use libvirt; we chose to because it gave us some advantages.
We chose to because it gave us some It allows us to rapidly iterate on this stuff quickly because it liver gives us a really nice interface like a user interface Let's see a lie interface But an interface to to manage the lifecycle of virtual machines that we've had to create ourselves But at the end of the day, we're using qmu kvm really And we are a wrapper around this we're launching What's essentially just a qmu kvm process in a container well in a pod in kubernetes pod and qvert itself It's just a set of controllers to manage the lifecycle Of that pod and also to provide that pod the kinds of resources cluster resources that it needs So if we're talking about like cpu and memory, we're reusing the kubernetes scheduler Provide those resources to pod and then we have the kind of a small glue layer that's passing that on to kvm process Same thing with storage and network. We're using the regular pod network getting an ip address and we're Have the glue in that pod to give that ip to the virtual machine. Same thing with persistent storage. You're giving it a persistent store pvc um To that pod you're attaching a pvc to the pod then all of a sudden you have a boot image for your virtual machine so qvert is just Just layers and controllers around this kind of underlying technology And actually I mean so for me that brings up a big question which is like When I think about a container or by extension of pod right because a pod in a lot of ways acts a lot like a container Like a virtualized machine expects a lot of Things you know like like it expects all the ports to work right expects, uh, you know certain kinds of stories it expects You know, um, I mostly I guess I'm thinking about networking But so how does qvert or or wherever right like something toolchain how does it kind of You know tell the virtual the virtualized os right that Oh, no, you're not really you're not running in a pod. 
you're running just like you normally would? Because I presume you kind of masquerade to it, so that you don't have to change the inside of the virtualized machine. Is that right?

Sure, yeah, I get what you're getting at. So when we launch the KVM virtual machine, the environment that it's in looks very natural to it. QEMU is going to see the KVM device; it's going to see things like the IP address and the persistent storage, and they all just look like mounts or interfaces within that environment. It looks like you're just running in a normal, non-containerized environment. We've done a lot to make it appear that way, and that's where the glue I mentioned comes in. We actually reach into that pod with a privileged DaemonSet to set things up for us in a way that mimics what the virtual machine would expect. So when we actually launch the virtual machine, ultimately using libvirt to invoke qemu-kvm, it just looks normal to it; it doesn't think anything is different. We've recreated that environment for the virtual machine.

Yeah, that was kind of what I was expecting, I guess. So are there trade-offs, are there negatives to that? I go back to networking a lot, but part of why pod networking works the way it does is to minimize resource consumption and provide some level, or different kinds, of security. So are there negative sides to pretending that the virtualized OS is running in a normal virtualization environment? Or are you dealing with those negatives on the outside? So, like, you set it all up
so it feels like a normal environment, but when it tries to get out again, that's where you bring back the density or security or whatever those trade-offs might be?

There aren't necessarily negatives; it's just different. If you had complete control over a bare metal machine and you're running libvirt on it, you can create your own bridge interface and your own network and do whatever you want. Here, you could say we're limited by the types of networks that we provide to the virtual machine, but really we're not; it's just a different layer. At the cluster layer, we can create multiple interfaces and pass multiple interfaces to the pods today using Multus, and we can provide SR-IOV devices and things like that. So we can do a lot of the same things we could do if we were in complete control of a bare metal machine, but we're doing it at the cluster level, using Kubernetes-like APIs to manipulate those sorts of things and assign them to the virtual machine pods. So it's really just a shift in how we look at these things. It's still flexible, but in a different way. We still have that kind of control, but it looks different.

I got you, I got you.

So, David, I'm curious, at an implementation-detail level, to maybe help me understand that interface: where does the KVM infrastructure end and the Kubernetes wrapper begin? Where is that line, and what does that interface look like? You talked about a set of custom controllers managing resources for these pods that we're going to launch VMs into. Do those custom controllers manage a set of CRDs, and is there a custom set of API endpoints represented in those CRDs for managing, communicating with, and monitoring the set of VMs that we're running in that cluster? What does the implementation of that actually look like?

Yeah, exactly.
So we have our own API, and if you look at our API, you wouldn't necessarily know that it's KVM or QEMU or libvirt behind the scenes. Maybe, if you're really knowledgeable about what's going on behind the scenes, you might see certain values that make sense only for KVM and things like that, but our API is unique to KubeVirt. You have a VirtualMachine API that describes how to create a virtual machine and things like that, and we have two layers of controllers. We have a control plane living at the cluster level, managing the lifecycle of virtual machines at the cluster level. When you post your virtual machine to a Kubernetes cluster, these cluster-level controllers are going to say, hey, I see a new virtual machine; I'll create a pod for it to live in. That pod is going to get scheduled onto a node somewhere with the correct resources assigned to it, say, the CPU and memory that you've requested. And then we have a DaemonSet, the privileged DaemonSet from before, that lives on every single one of our nodes. It's going to see when that virtual machine's pod gets scheduled there; it's going to see that pod come up, reach into that pod, and manipulate some things.
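That cluster-level flow, watch for new VirtualMachine objects, create a launcher pod carrying the VM's resource requests, and let the ordinary scheduler place it, is the standard Kubernetes reconcile pattern. A toy illustration of the idea, using made-up in-memory records rather than the real KubeVirt types or client libraries:

```python
from dataclasses import dataclass, field


@dataclass
class VirtualMachine:
    name: str
    cpus: int
    memory_mb: int


@dataclass
class Cluster:
    vms: list = field(default_factory=list)   # desired state: VirtualMachine objects
    pods: dict = field(default_factory=dict)  # actual state: launcher pods by VM name


def reconcile(cluster: Cluster) -> list:
    """One pass of a cluster-level controller: any VirtualMachine without a
    launcher pod gets one, with the VM's CPU/memory copied into the pod's
    resource requests so the normal scheduler can place it on a node."""
    created = []
    for vm in cluster.vms:
        if vm.name not in cluster.pods:
            cluster.pods[vm.name] = {
                "name": f"virt-launcher-{vm.name}",
                "requests": {"cpu": vm.cpus, "memory_mb": vm.memory_mb},
            }
            created.append(vm.name)
    return created
```

In the real system a node-level agent (the privileged DaemonSet in David's description) then notices the scheduled pod and does the wiring; the key property sketched here is that repeated reconcile passes are idempotent, so the controller can simply re-run until desired and actual state match.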
So it's the thing that's helping facilitate networking and some of the other things, around persistent storage and so on. And then within that pod itself, we have a really small daemon that's gluing all those things together. The things that the privileged DaemonSet went and set up for us, our shim process inside the pod itself is going to see, and it will construct the libvirt domain XML, ultimately pass that domain XML to libvirt, and actually start the QEMU process. The result of this whole chain of operations and controllers is that you have a virtual machine that starts in a pod. It behaves like a pod in some ways, too, because the virtual machine can talk to all the other pods and all the other virtual machines. You just have a virtual machine all of a sudden appearing within this cluster, and it looks like a native application in there. So hopefully that answers your question. There's a lot involved, multiple layers, to get to the point of actually starting the virtual machine, and Kubernetes is coordinating the lifecycle, the management, and the resource placement of all that.

So, a follow-up to that, and this is also for you as well, Bandan: if I'm working with Kubernetes, am I working with nested virtualization a lot, because somebody's running the cluster on a VM in the first place and then trying to do Kubernetes things? Is that a use case that's ruled out prima facie, and we don't ever do that? How often does that come up? It just immediately popped into my head: every cluster I ever touch is running in a VM in the first place. So what does it look like if I want to run KubeVirt on those clusters?

Yeah, so from a community standpoint we definitely support nested virtualization, and I use it every single day.
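Whether nested virtualization is switched on for a given host can be read straight from sysfs. A small sketch, assuming a Linux host where the `kvm_intel` or `kvm_amd` module may be loaded; the kernel reports the `nested` parameter as `Y`/`N` on some versions and `1`/`0` on others, and the function returns `None` when neither module is present:

```python
from typing import Optional


def nested_virt_enabled() -> Optional[bool]:
    """True/False according to the loaded KVM hardware module's 'nested'
    parameter, or None when neither kvm_intel nor kvm_amd is loaded."""
    for module in ("kvm_intel", "kvm_amd"):
        path = f"/sys/module/{module}/parameters/nested"
        try:
            with open(path) as f:
                # Older kernels print Y/N, newer ones 1/0.
                return f.read().strip() in ("1", "Y", "y")
        except OSError:
            continue  # module not loaded; try the other vendor
    return None


print("nested virtualization:", nested_virt_enabled())
```

On hosts where this reports `False`, the parameter can typically be flipped by reloading the module with `nested=1`, which is the usual prerequisite for the kind of dev setup described here.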
That's what my dev environment is. We see people using it in production on infrastructure as a service, because they can manage their virtual machines in really unique ways and provide levels of uptime guarantees that are difficult even with something like EC2. With KubeVirt you can live-migrate your virtual machines, so you don't lose them, and things like that. And that's on nested virtualization. There are limitations to what you can do with it, it's complicated, and you can hit some really crazy issues with it. But it's definitely something the community uses. It's not something Red Hat supports right now in the OpenShift Virtualization product, but it's certainly a really strong use case within the community.

Yeah. So I had a question for David. You mentioned a certain level of generalization, if I understood correctly, in the API. So is it true that people use something else other than libvirt? Or do you know of cases where libvirt is not in the picture at all?

Not for KubeVirt today. KubeVirt does have libvirt in the picture. It was designed in a way that isolates the usage of libvirt, and really even KVM, to that container, the pod that's actually
So, the pod that's actually, um, running the virtual machine itself. And the API — we were trying to design it agnostic of what that underlying technology was. Um, at this point it's so ingrained into our design that I'd be really surprised if libvirt, QEMU, KVM, whatever, were replaced or swapped out, but we have the potential to do something like that in the future, if we needed to. Yeah. Yeah, and on the nested virtualization aspect, I wanted to add that, um, yeah, for a long time after nested virtualization came into being, I mean, the best use case that we could think of was, uh, testing — which is, um, you know, running a guest inside a guest and making sure we are able to expose features. Or, for example, as you said, we don't have a bare metal system at hand, so we want to set one up. Even though I would say that nested virt and first-level virtualization are not really the same, in terms of — so, a bug that reproduces on nested virt does not necessarily mean that that bug is gonna, uh, you know, reproduce on a bare metal system with virtualization. But this whole new, uh, you know, thing, with newer use cases that are coming up — they're very interesting, because they are stressing nested virtualization in unique ways. And, uh, I mean, one of the things that I have personal experience with is that we are able to find new and interesting bugs that we had never found before, and they are not reproducible on bare metal or just through regular virtualization. In fact, recently we found a hardware bug which was only reproducible because of the nested virtualization setup. And so I think that's kind of the good thing of, uh, the whole nested virtualization getting introduced into the picture. Um, and yeah, it's interesting, right, because, I mean, in a sense — or at least in a subdivided sense — you are exercising different hardware when running nested virt, right?
You've got the, uh, second-level address translation, and, like, some of that's implemented in the MMU, correct? So if I never run nested virt, I've never touched that bit of hardware inside my CPU, right? So, yeah. Yeah, uh — I always love new and interesting bugs. Um, so, kind of moving on a little bit: what do you think is the future of, uh, virtualization within Kubernetes? You know, when we were thinking about the setup for the show, right, like, the obvious answer for virtualization in Kubernetes is, uh, lift and shift, right? So I have a running application, I want to be able to manage it on the same control plane as I'm running everything else. Okay, let me, you know, let me copy that virtual machine over, and, you know, I'm done. Uh, eventually that'll end, or it may not be the best choice for all applications. Uh, so what's the plan? Where are we going to be in 10 years? I don't know — five years, two years? So there's a story arc here. Um, it starts with lift and shift, and it tends towards unikernels, right? Traditional virt is what we're talking about with lift and shift. And so you would take an application that's, uh — maybe it's a legacy application — and you want to transition to, um, a containerized or cloud native — I don't know whatever the buzzword is — like, uh, infrastructure. So you want to move to Kubernetes, um, but you want to bring your virtual machine with you. Great. So KubeVirt allows that, and I think that was the thing that kind of justified us building this to begin with. Um, and we needed that, but that's not the full vision.
So that's the thing that lets us do the thing we want to do, uh, and that's where we're seeing a lot of traction, at least initially. And I think we're still kind of at maybe the first part of the adoption curve for Kubernetes, and that's what we're using it for. Um, the next step, I think, is, uh, using Kubernetes as infrastructure as a service — so, having the same sorts of patterns that you would see in something like EC2 or Azure or GCP. Um, and we're getting there on the development side, where we kind of have that at this point. It's not necessarily adopted by users quite yet, but that's where we — Yeah, just to interrupt briefly, to help me understand kind of what that means: that's sort of, uh, I want to do Terraform kind of things, but I want to write Kubernetes-style YAML, because I've got my investment in the tooling, and I know the terms and the way the API works. Is that kind of the idea — to think of it like that? There's a set of operational patterns that we see in infrastructure as a service. So if you look at, like, EC2, we see auto scaling groups, and, uh, what people are doing with these auto scaling groups: they have lots and lots of virtual machines, they're scaling horizontally, and — it goes back to the whole pets-versus-cattle analogy — when something goes wrong with a virtual machine in their auto scaling group, they don't care, like, what actually happened. Nobody's going to look at it. Somebody or something — automation — is just going to kill that virtual machine, and a new one is going to spin up to replace it. So that's what I mean by infrastructure-as-a-service-like patterns.
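The pets-versus-cattle pattern David describes — automation deleting failed VMs and replacing them without anyone investigating — boils down to a reconcile loop: compare desired state with actual state and converge. A toy sketch, with invented names (this is not KubeVirt or Kubernetes code):

```python
import random

def reconcile(desired: int, vms: dict) -> dict:
    """One pass of a cattle-style reconcile loop.

    vms maps a VM name to its health ("healthy" / "unhealthy").
    Unhealthy VMs are deleted without inspection, and replacements are
    created until the actual count matches the desired replica count.
    """
    # Kill anything unhealthy -- nobody is going to log in and debug it.
    vms = {name: state for name, state in vms.items() if state == "healthy"}
    # Scale up to the desired count with freshly named replacements.
    while len(vms) < desired:
        vms[f"vm-{random.randrange(10**6):06d}"] = "healthy"
    # Scale down if we somehow have too many.
    while len(vms) > desired:
        vms.pop(next(iter(vms)))
    return vms
```

In a real controller this loop would run continuously against the cluster's API, but the shape is the same: the operator never repairs an individual VM, it only converges counts.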
We're introducing those into Kubernetes. We have that today. And then, once we have that — which we do — the next thing we are working on is for, ultimately, KubeVirt to be the substrate to run more Kubernetes clusters on. So now we have an ecosystem where, traditionally, you would have used, like, VMware on bare metal and run Kubernetes clusters on top of that. Now we have pure Kubernetes: you have Kubernetes at the bare metal level running KubeVirt to launch your tenant clusters on KubeVirt virtual machines. And I'm working on that right now — that's kind of the next step. And then the future step, the one that I don't really quite understand, that's on the horizon, is, ultimately, I think, that KubeVirt is going to be the heart of multicluster in a certain way, because it's, again, the substrate that you can actually run multicluster on. So, if you're looking at a new project like kcp — go look it up if you're not familiar with it — it has the potential to, like, launch clusters on demand for your workloads. So if you have a workload and the cluster doesn't exist for it, the infrastructure doesn't exist for it yet, then perhaps we can have smart controllers that can spin up infrastructure on the fly for this application to live in, and things like that. And I think Kubernetes — or KubeVirt — has the potential to be the thing that's powering that for bare metal. Um, it doesn't necessarily make sense for public cloud infrastructure — maybe it does, I don't know — but when we're trying to replicate those sorts of things on bare metal, that's where KubeVirt would be. So there's a long tail here, I think, for KubeVirt. And then the thing that kind of justified us making KubeVirt — that traditional virtual machine use case — that's the thing that got our foot in the door. And I think, uh, the potential is much greater than that, but it'll be behind the scenes: people won't know that they're running it, but it's there. Yeah. So, uh, can you repeat — what project was that? Did you say acp? kcp. Oh, oh, okay.
That's, uh, that's literally the project we talked about with Clayton, I think, yes. Um, so turtles all the way down is what you're telling me? That's exactly what I'm saying. Uh, yep. So that's really interesting, particularly the part where, uh, where you get — um, if you have really sophisticated live migration, right, uh, then you can actually kind of realize that dream, right, where you can kind of say: oh, you know what, I need this Kubernetes cluster to be over there, uh, you know, for performance reasons or whatever — and you can actually live migrate the entire cluster to wherever it's 9 a.m., so that you can handle all those logins, or whatever. You could do really crazy stuff. So certainly live migration is interesting, uh, for lots of different reasons. Live migration allows us to do, uh, like, an update of the underlying cluster without impacting all the clusters that live on top of it, uh, which is great. So you're shuffling around virtual machines — running Kubernetes clusters — as you're updating the nodes that are underneath them. That's great. But another thing that's interesting is we can start thinking about the suspend and resume of clusters. So you have, like, uh, clusters that you spin up, um — and maybe you've already created a quorum and everything within them — and then, when you don't need them, rather than tearing down the, like, persistent part of that, you just kind of suspend it and give all those resources back. Then, when you need it again, it just kind of starts up again, right where it was. Uh, so there's lots of really advanced, future-looking things we can do here. So — that was actually — I kind of had a related question in the back of my mind for a little bit there, and that brings it up.
Which is that, um, you know, virtualization in general is slow to start, at least compared to a container. Um, is that an issue? Or is it more of a — I mean, the way I've dealt with it in systems that I've built, right, is that I just account for, you know, hey, I need a minute to start this thing over here. So that means if I don't ever want to be cold, you know, don't ever want to be out or whatever, I just start the new thing, you know, make sure there's a minute in there, and then bring the other one down. Whereas with containers, I can kind of do it more instantly. Um, which — on paper, it's kind of like, you know, there's a lot of features that get written up by the tech rags that, um, a lot of the time I'm not really sure I care about as much as the tech rags give themselves reason to write about something. Um, and that's one of the things that I always wonder about. But is there a performance issue there, where you need to be able to start the, uh, virtualized environment more quickly? Is that something that needs to be addressed, so that you can do some of this magic that you're describing, over time? Um, we like to limit — so there's latency, I can say, like, latency: the amount of time between when you ask for something and when it actually becomes available. And when we look at auto scaling — so, if you're trying to auto scale your application, and under the scenes we're trying to provision infrastructure to meet that demand — then the quicker we can do that, the better. So the idea of, like, pre-warming could play a role. Um, and then just things within the guest itself. So we saw a huge improvement back in — I forget when it was — whenever, like, systemd came around, when you started having the boot order go in parallel rather than sequential. That improved boot times quite a bit.
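The pre-warming idea David mentions is straightforward to picture: keep a small pool of already-booted VMs, hand one out the moment a request arrives, and refill the pool behind the scenes, so the boot latency is paid ahead of time rather than on the critical path. A toy sketch, with invented names (a real system would refill asynchronously and cap pool size):

```python
import time
from collections import deque

BOOT_SECONDS = 0.0  # stand-in for a real boot that might take ~60 s

def boot_vm(name: str) -> str:
    """Placeholder for the slow part: provisioning and booting a VM."""
    time.sleep(BOOT_SECONDS)
    return name

class WarmPool:
    """Keep `size` pre-booted VMs so acquire() is (nearly) instant."""

    def __init__(self, size: int):
        # Pay the boot cost up front, before any demand arrives.
        self._pool = deque(boot_vm(f"warm-{i}") for i in range(size))
        self._next = size

    def acquire(self) -> str:
        if self._pool:
            vm = self._pool.popleft()   # instant: already booted
        else:
            vm = boot_vm(f"cold-{self._next}")  # pool empty: cold boot
        # Refill; a real implementation would do this in the background.
        self._pool.append(boot_vm(f"warm-{self._next}"))
        self._next += 1
        return vm
```

The trade-off is the one Langdon raises: you pay for idle capacity (the warm VMs) in exchange for never serving a request cold.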
I think, ideally, anything under a minute is probably good enough for most people, when we're talking about standing up infrastructure. When it gets to, like, five minutes and things like that, uh, then you probably want to optimize. I mean, infrastructure as a service — when we look at, like, EC2, uh, they take a while to start up EC2 instances. Try CloudFormation. Yeah, right — 40 minutes later. I think we're faster than that. Hopefully. It all depends on the environment. Yeah, "it matters" is the short answer, right. But it's — all right, I guess it matters, but it's kind of like there's a threshold where it matters, right? It's almost like a step function, right? You know, if it's under a minute or something, maybe it's fine. But then there's planning around it, right? So, like, you have to build into the tool some level of planning around it, so that, you know, we can kind of tolerate that outage. I guess for me, in my development experience, um, I need to tolerate when something goes away anyway, and I'm not sure how spin-up time is different than going away. So I feel like that's one of those things that I'm building into my system anyway — but I was just kind of curious on your take. Josh, did you have a follow-up to that? Well, I was just kind of wondering — it is a follow-up to that question, like — so, like, we were just asking David and Bandan about, uh, a number of concerns specific to this use case, and this kind of driving infrastructure as a service with Kubernetes, when you're working on KVM and QEMU. QEMU — having a really hard time saying that this morning, for some reason.
Um, nevertheless — like, are these two teams, and you two folks, communicative about these issues? Or does that orientation you described before — of mechanism rather than policy at the KVM layer — mean that you necessarily stay loosely coupled from, sort of, what concerns this project that's using KVM has versus other ones? Like, are KVM folks specifically thinking about these Kubernetes use cases, and in communication with that team? Like, does that interface exist at the team level, or do we restrict it to software, precisely to make sure the coupling isn't too tight and that one is concerned with policy and the other concerned with mechanism? Yeah — yeah, I don't think it's very tight at this point, and whether it's going to be any advantage, I don't know. Uh, we'll probably see. But that said, yeah, I mean, there are people who, um, you know, focus on these aspects even in the virtualization group. Um, so there are people who, you know, spend a lot of time understanding containers, working on them. Um, it's just the nature of, um, you know, KVM work — probably it's not conventional to think about, uh, containers. But that said, when you guys were talking about this interesting question that Langdon brought up — Langdon asked me something similar a while back, and I said, yeah, we don't think about that. But yeah, it really depends. Uh, sometimes we do. Because, um, for example, um, uh, you know, boot-up times — that's an interesting topic that comes up once in a while, and there have been changes in QEMU to actually improve boot-up times. Um, like, for example: let's not emulate a traditional chipset; rather, boot up something that takes the shortest amount of time possible, and not initialize a lot of emulated devices. So we do get into those occasionally.
But I don't think it's part of the, like, process as such right now. It's kind of increasing, though — you know, it's becoming more and more frequent and common. Right on. And then my next question is, uh — it is not so much a follow-up on that, uh, but it's more, uh, a free skate, let's call it. It's for both of you. Um, uh, ARM64 — I mean, what that really means is: can I use any of this stuff outside of the Intel architecture? What does that look like? Are there teams working on ARM servers? I know KVM runs there. So what does the KubeVirt picture look like for the future? I can speak for KubeVirt. Uh, yeah, we, um, we have community members that already run on ARM, so we support it from a community standpoint. Uh, there's multiple layers here. So we have ARM at, like, the controller layer — so if we're going to run on ARM infrastructure, we probably need to build our controllers, even the cluster-level controllers, to run on ARM. And then there's the underlying layer — the nodes that the virtual machines run on — where we need to have all of our controllers that are at the node level built for ARM as well. But then there's this complex use case where we have containers, uh — we might need x86 for the, um, control plane, and we might have different types of nodes within our cluster, some x86, some ARM, and that's where things get really complicated. But Kubernetes does that really quite well — being able to schedule things accurately to the right architecture, and then have the right, um, components on there that are built for the right architecture. But, uh, we're looking at things like that as well. Um, yeah. So, related to that, um — Bandan, David — do either of you see yourselves — you know, maybe not you, but, like, your teams, right — making direct requests of CPU manufacturers? Um, where — you know, Bandan's comment made me think of this — like, there's a lot of junk on that chip that you don't care about.
Um, you know, and so if there's a way where, you know, ARM64 knocked out, you know, half of its, uh, uh — blanking on the right word — like, would that be better? Um, and is that something that you see going forward — that there are actually specialized chips for this kind of scenario? So — uh, yeah, I don't know the answer to that. Um, all I can say is I have experience with, uh, x86, and the turnaround time to hear back from hardware manufacturers is really, really long. So by the time, you know, you hear back from them, the ship has sailed. So I don't know how advantageous that would be. That said, I think ARM has traditionally been working more in sync with software providers, you know, listening to their requests, just because of the nature of how ARM works. And maybe it's a possibility. Yeah, but honestly, I don't have any insight or good experience, you know, that I can share with you right now. Oh, yeah — that would be a good group to talk to. Well, so then — unless you have something to add to that — that kind of makes me come to what I see as the analogous question on the software side, which is: are we going to run other systems than Linux? Are we going to run unikernels above Linux virtualization? How much call for that do you hear? Are people doing that in the community? Is this a really popular idea? Is this stupid and crazy? We've got a good networking stack in Linux — why do I want to run a, uh, kernel that's built just around my application? Like, is that a use case that's important for Kubernetes? It comes up. Um, I don't have a lot of experience with it. So I know — I think some people are even doing this already with KubeVirt.
Um, I don't know their use case and why. Um, my instinct is that it's kind of a niche use case — but I could be totally wrong, and it's something that I don't understand well enough to really speak to. But I know that it's something that exists in our ecosystem today. Yeah, it's one of those — like, unikernels — I'm kind of on both sides of the fence there. It makes a lot of sense, right, to use, like, an x86 chip, because it's standardized and it's got a general use case and all this other stuff, right? Um, but then there's also, you know, hey, I built this chip specifically for this particular scenario. Um, and so you kind of have the same idea, right, with Linux and, like, a unikernel, right? I have this kind of general-purpose thing, uh, and it does quite a good job at a large number of things, but then there's also a small piece, you know, that maybe, you know, it could be focused on. So if you're running a golden-image kind of world, right — you know, so we're building a virtual machine and then we're putting, you know, golden copies of it out, or taking a golden copy and putting out instances — which is kind of a containerization idea as well — maybe something like unikernels makes sense. The problem I see is that the unikernel doesn't get exercised anywhere near as well as the general-purpose solution. So at least for me — there's a lot of good research in the unikernel space, but I have yet to see in the unikernel space a good reason that offsets the fact that it's not getting exercised as well, and that you have to kind of go through these extra steps. Um, but that's just my two cents. So that was a — that's a nice, easy closing question. I did want to ask quickly, though, to David: I noticed your website redirects to a music album. Um, and I just wanted to ask what that was — because I didn't actually listen to it. How interesting. Um, yeah, I guess that's still my GitHub.
So — it's kind of funny — that used to be my resume, and now it is me playing goofy banjo music, and that is about all there is to that story. Yeah. Uh, that's pretty hilarious. Um, so, yeah — I was looking for your Twitter handle for a tweet, and, uh, it does not exist. There is no Twitter handle. There's no social media — GitHub is about as far as I get. Well, now, you know, if you're doing the music thing, you can just start a TikTok, and you can start there. I'm scared of TikTok somehow. I don't have TikTok, but I feel like they already know everything about me. All right — I feel the exact same way. That is actually the thing about TikTok: I'm not even on there, but I'm afraid of their surveillance somehow. Right, right — my nephews are on there, and somehow they know that they're my nephews, and that somehow correlates to me. Well, David, you found the leak. The question now is how to plug the leak. So, I will say we actually had a pretty good question in the chat, but we are out of time, and so, uh, in the interest of saving time, we will not answer it. But basically it is: should Kube slash OpenShift be following the Linux model of, you know, do one thing well, rather than do all the things? And what I will propose is that we will try to bring that up on the next episode with our next guests. And, uh, we will let you off the hook, because we know our guests already have plans to do other things today. Excuse me. Um, and, uh, so we'll wrap up there. But thank you so much, uh, David and Bandan, for joining us on the show. Thank you to the guests and, uh, to the audience. And, uh, we really hope to see you next time.
We are on the last Tuesday of the month, and, uh, we have a whole bunch of great, uh, guests lined up. Uh, you know, check out our website at kubebyexample.com, uh, if you want to see more about the show, watch past episodes, or see, uh, any of our kind of upcoming guest list. And, uh, we'll see you next time. Thanks to both of you. Thanks for having me. Bye. Bye.