Hi, and welcome to the KVM Forum 2020 panel discussion. My name is Stefan Hajnoczi, and I'm going to be the moderator for this discussion today. We have with us panelists from organizations that contribute to QEMU and KVM, and we have a list of questions from the community that have been suggested as topics for today's discussion. Before we dive into the questions, we're going to go around and let the panelists introduce themselves. So let's begin. Would you like to introduce yourself, Susie?

Hi, everyone. This is Susie Lee from Intel, managing the Intel open source virtualization team.

Thank you. Richard, would you like to introduce yourself?

Hi, I'm Richard Jones from Red Hat, and I work on virt-v2v, some virtualization things, some NBD things, and other topics in the virt space.

Thank you. David Kaplan?

Hi, David Kaplan from AMD. I'm a security architect. I focus mostly on confidential computing technologies like encrypted virtualization.

Thank you. Peter Maydell?

Hi, I'm Peter. I work for ARM. I've been seconded into Linaro for about 10 years now working on QEMU, mostly dealing with ARM-related emulation work. And we also do a bunch of the admin and build type stuff as well; didn't duck fast enough to avoid that.

Thank you. And Hubertus Franke?

Yeah, I'm Hubertus Franke from IBM Research. I mostly work on architecture and operating systems and their interfaces, and how they surface up in cloud environments.

Thank you. Okay, well, let's begin. We have an Etherpad of questions submitted by the community in a bunch of different areas, and we can just keep going until we run out of time. This year, one of the trends seems to be encrypted VMs, confidential cloud, and so on. So let's start with a question from there, because I know that several of you have been looking at this area and are involved in it. For the first question: what other use cases for encrypted VMs have you looked at besides improving privacy in the cloud? Feel free to just jump in.

Yeah, I can start on this one. I think it's an interesting question. Certainly there are a lot of use cases in the cloud, and I know at AMD that is probably the primary place we focus on when we talk about confidential computing, whether it's traditional virtualization, containers, lightweight virtualization, things like that. But the question was about scenarios beyond the cloud, and maybe there's one that I'll offer that I think could be interesting in the future, and that would be a sort of bring-your-own-device scenario. You can imagine a corporation that uses employee devices, but it has sensitive data and special programs it wants to use, and it doesn't know what malware or other programs might be installed on the employee device. So in that sense you have a similar trust model to what you might find in the cloud, in that you have an employer that wants to run a secure workload on an otherwise untrusted system. I think that could be an interesting scenario in the future, although, as I say, at least today AMD is primarily focused on the cloud.

So I would say that anytime you have some form of service provider, right, whether it's in the cloud or not, such as edge computing, these are interesting use cases, right?
It's not clear to me yet at this point, as applications are busily moving towards the edge, whether we're still going to include that in the cloud computing scenario or not, but certainly, similar to what David just said, the moment you're running in an effectively untrusted environment, applications are certainly going to have to look at encrypted VMs.

Yeah, one thing I want to add is: certainly a lot of the usage for encrypted VMs is in the cloud space, but we are also now seeing a lot of development of virtualization use cases in the client space and in the IoT edge space. For example, there is more and more workload consolidation happening there, and also virtualization-based security, right? For example, you want to run a virtualization-based TEE environment. And all of this is based on VT, so there could be a lot of architectural options for how to do this. So I think having encrypted VMs there brings a very interesting architectural option and has profound architectural impacts.

Okay, thank you. We have a follow-up question that was posted about memory isolation, so I guess this is a more technical one. The question is: how is memory isolation being done? And you can interpret that how you want. I'm not sure whether they're thinking about caches and avoiding side channel attacks and so on. So please go ahead.

Well, in general, this is done through the memory controller, right? When an encrypted VM issues a load or store operation, it effectively goes through your caches and is tagged with the address space ID. At the memory controller level, it then goes through the various encryption mechanisms that are provided: data is encrypted on the way out and decrypted on the way back into the cache. I guess Susie and David can speak more to it, but you effectively have to tag the caches, right, to make sure that nobody can snoop on your cache lines outside your address space.

Yeah, I would just add that that kind of gets into the implementation details, and I think different vendors have chosen to implement this in different ways, but everyone has some sort of solution for it. There are a few different general techniques for isolation. There's cryptographic isolation, as you've already talked about, where if you don't have the correct encryption key, then you're not able to access the data. There's also so-called logical isolation, which is where you use a mechanism, whether it's page tables or something of that sort, to actually block access to data that you're not supposed to have. And I know at AMD we've used both in different cases, and I'm sure that other vendors are similar.

That's actually a good point, David. I mean, you either do access isolation, where you guarantee that the data cannot be accessed at all, or, if you do allow access, the data doesn't make any sense, right? That's basically content isolation, so to speak.

Yeah, and to take it a step higher, there are kind of four general types of isolation. You could have physical isolation, which would be running on two different machines; obviously that's not what we're doing, otherwise we wouldn't have virtualization. There's temporal isolation, running one workload, then getting rid of all the traces of it, and then running a second workload. And then you have the logical isolation and the cryptographic.
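As an editor's aside, here is a deliberately toy Python sketch of the two software-visible isolation flavors just described; it is not any vendor's actual hardware design. Logical isolation is modeled as an ownership check that refuses the access outright, while cryptographic isolation is modeled as the "memory controller" always decrypting with the reader's own per-ASID key, so a reader with the wrong key only ever sees garbage. The class, the single-byte XOR "cipher", and all names are illustrative assumptions only.

```python
import os


class ToyMemoryController:
    """Toy model of per-ASID memory isolation; purely illustrative."""

    def __init__(self):
        self.keys = {}    # ASID -> per-VM "encryption key" (cryptographic isolation)
        self.owners = {}  # physical address -> owning ASID (logical isolation)
        self.dram = {}    # physical address -> stored ciphertext byte

    def launch_vm(self, asid):
        # Each encrypted VM gets its own key, generated inside the "hardware".
        self.keys[asid] = os.urandom(1)[0]

    def write(self, asid, addr, value):
        self.owners[addr] = asid
        # Data is encrypted with the writer's key on its way out to DRAM.
        self.dram[addr] = value ^ self.keys[asid]

    def read_logical(self, asid, addr):
        # Logical isolation: block the access if this ASID does not own the page.
        if self.owners.get(addr) != asid:
            raise PermissionError("access blocked")
        return self.dram[addr] ^ self.keys[asid]

    def read_cryptographic(self, asid, addr):
        # Cryptographic isolation: the access goes through, but decryption uses
        # the *reader's* key, so the wrong ASID just sees meaningless bytes.
        return self.dram[addr] ^ self.keys[asid]


mc = ToyMemoryController()
mc.launch_vm(asid=1)
mc.launch_vm(asid=2)
mc.write(asid=1, addr=0x1000, value=0x42)
print(hex(mc.read_cryptographic(asid=1, addr=0x1000)))  # 0x42: owner sees plaintext
print(hex(mc.read_cryptographic(asid=2, addr=0x1000)))  # almost certainly not 0x42
```

On real hardware the analogue of the first path is page-table or ownership-table access blocking, and of the second is per-VM memory encryption keyed by ASID, as described above.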
And I think that for confidential computing, the logical isolation and the cryptographic isolation make the most sense, but all of them have trade-offs.

Okay, thank you. And this leads us on to the final encrypted VMs question, which is that this mechanism is designed to provide confidentiality, but what does the software and the hardware do to mitigate issues that might be discovered later on in these designs? What should users do to protect themselves and not put all their eggs in one basket by relying on this mechanism?

Yeah, it's a good question. I can take a stab at it, but I don't want to dominate the conversation on this. Certainly it is prudent to think about that scenario, and I know at AMD we've done some work, especially with our newer technologies, to try to provide stronger guarantees around mutable components in the architecture. It's very common in these kinds of setups to have some firmware or trusted components that can be upgraded in the field, which is great for fixing bugs, but then you do have to deal with the issue of how you prove that you're actually running the version that you need to be running. And so we've taken some steps recently to create more of an architecture around that, where there's actually cryptographic proof of what version you're running. That can help ensure that we are able to deploy patches when needed and that you can be assured you're running with them. The other thing, and I'll just give a call-out to the Red Hat folks here, is that Red Hat has a very interesting project called Enarx. As I understand their vision, one of the goals is that you write your application and then it can run on multiple different back ends, whether it's AMD SEV or Intel SGX or even ARM. So for the question of how you avoid putting all your eggs in one basket: if you do have an infrastructure like that and you wake up one day and discover there's a zero-day in one vendor's technology, then it becomes very easy to just switch your target to a different one. So I think that's a very interesting approach.

Yeah, and I was going to mention Enarx as well. Of course, Enarx is based on WebAssembly, which, as I understand it, is how they're going to do this portability between completely different platforms.

Okay, thank you. We have a slightly related topic about cloud and about new hardware innovations. The question is about hyperscale clouds building their own silicon. I guess it's talking about tier-one cloud providers who are able to optimize everything to the last few percent and are able to deploy custom hardware for their cloud. How do we keep users of commodity hardware happy? This is an open-ended question, so I don't know if you have any thoughts in this area.

Yeah, I can take a first stab at this one. I think it's really about the level and the pace of innovation you can drive, right, to make sure your customers' workloads can run well on your platform. I think that involves a lot of deep engagement with your customers to really understand their workloads and their characteristics, and how we can design the hardware in a way that is able to support all of this software workload.
And also, I think another thing is, for this commodity hardware, the terminology here, we actually have a large deployment base and we're talking about a wide range of segments, right? For example, the client devices, the IoT edge, and all the way to the data center. So I think this gives us a very unique end-to-end advantage, allowing us to have better workload compatibility across the full stack, and it brings us more opportunity to optimize for the end-to-end usage and the end-to-end stack as well. Another thing is that beyond this commodity hardware, we are also offering various accelerators for customer-specific and segment-specific usages. So you can tune the hardware and use accelerators to optimize your software.

Thank you.

So, Stefan, this actually also goes back to the previous question: how do you not put all your eggs in one basket, right? It basically means you have to move up the chain; you have to use portable libraries, right? For instance, take machine learning as an example: people have effectively joined the TensorFlow community, right? That becomes ultimately my portability platform, so to speak, and underneath you essentially build devices that cater exactly to that interface. So you step away from specific hardware where you can, number one; at least then the end-to-end usage is somewhat isolated from hardware changes, right? But we also see, particularly in the cloud space, that more and more things are being driven down out of the OS space per se and into the hardware devices themselves. Network cards are becoming significantly more capable, with virtualization capabilities, right? And ultimately they don't even shine through to the end user, say to a VM. It's only the very high-end VMs, such as those used for HPC and so on, that are really interested in getting access to a much lower-level interface, for instance around DPDK or something of that nature.

Thank you. I think that maybe Peter has an interesting perspective to share here, because an area where the ARM architecture was extremely successful is in allowing the integration of custom systems-on-chip and custom boards, and yet ARM ended up also providing a standard server platform. So I don't know, Peter, if you have any thoughts on this, because it's kind of interesting that ARM has evolved into offering a standard server platform, a more commodity hardware environment you could say, versus the custom designs that ARM is also extremely popular in.

Well, I'll start this off with the disclaimer that I'm not an expert in this area of how ARM does stuff, but my view of what has gone on with ARM is that there's basically a balancing act here. Different companies want to use bits of ARM hardware and build their systems around it, and they have things that they want to do that are the differentiation they bring to it. But you also want to have a common ground, which is what ARM provides in the architecture itself.
So the idea is that things differ where the difference really is significant and useful, and where it doesn't matter so much you try to avoid those differences and you can gradually standardize things. You can definitely see that in the server space, which is much less tolerant of random weird stuff, and so there are a bunch of things like the Server Base System Architecture specs that standardize that, so that if you've got a distro, you can run that distro on whatever server hardware you like. But there are also people in the server space who, while yes, they're doing standard systems, are still putting some of their own magic sauce in there, because that's the point; that's why they don't want to just build something completely off the shelf. So you've got to maintain a balance there, I think.

Thank you. Okay, so one of the topics we have is CPU architectures, and that's always an interesting one for KVM because KVM supports multiple architectures that implement virtualization in different ways. They have different instruction set extensions and approaches to the virtualization hardware features. So where do you see potential for cross-architectural collaboration, for example with encrypted and isolated guests all doing their own things? Are things going to converge? Or what areas do you see where maybe in KVM we can have common infrastructure?

So in terms of KVM, when we look, for instance, at encrypted VMs, there's AMD SEV, and Intel TDX, for instance, is going to be coming out, as announced last month, as an example. There's the whole level of key management, right? In confidential computing, you cannot have any of your data or any of your keys in the open. That means, from a service provider's side as well as from a customer's side, I need to have an infrastructure in place that allows me to shovel keys around in a secure fashion without exposing them to the service provider, okay? There, I believe, is an ability to have common ground. As David knows, there are features like VM migration, for instance, which essentially has to happen at a lower level and shovel encrypted data around; there seems to be commonality among architectures there, for instance when they're using encryption keys as discussed earlier on. Again, that could be provided as a more generic feature. Going further into architectures, there's the whole area of encrypted IO, right? I strongly believe, particularly in the cloud, that IO devices will largely become enabled with offload functions, and that these virtual functions, so to speak, can be passed through to the virtual interfaces of the guests, so VMs and containers would directly access the IO devices and bypass the operating system, right? Again, along that path, the whole setup of having a clearly secure and encrypted channel will be important. Many of these things have to be set up by QEMU at the end of the day, right? And there seems to be commonality across different architectures.

I think that's a good point. I think it's unlikely that we'll see much commonality when it comes to hardware implementation, just because of the business reality of that, but providing a uniform software interface, kind of like what KVM already does for virtualization, I think is very reasonable.
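To make that "uniform software interface" point slightly more concrete, here is a minimal, hedged Python sketch of how a management layer talks to QEMU through its QMP management protocol. It assumes a QEMU instance started with something like `-qmp unix:/tmp/qmp.sock,server,nowait` (the socket path is just an example) and issues the SEV-specific `query-sev` command; on a non-SEV host or build that command simply returns an error. The broader point the panel is making is that vendor-specific features end up behind one JSON management interface, and the open question is how far the commands themselves can converge across vendors.

```python
import json
import socket

QMP_SOCKET = "/tmp/qmp.sock"  # assumption: QEMU started with -qmp unix:/tmp/qmp.sock,server,nowait


def qmp_command(stream, cmd):
    """Send one QMP command as line-delimited JSON and return the reply."""
    stream.write(json.dumps(cmd) + "\r\n")
    stream.flush()
    while True:
        reply = json.loads(stream.readline())
        # Skip asynchronous events until we get a command return or error.
        if "return" in reply or "error" in reply:
            return reply


def main():
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(QMP_SOCKET)
        stream = sock.makefile("rw")
        greeting = json.loads(stream.readline())  # QMP greeting banner
        print("QEMU version:", greeting["QMP"]["version"]["qemu"])
        qmp_command(stream, {"execute": "qmp_capabilities"})  # leave negotiation mode
        # SEV-specific query; other technologies would expose their own commands here.
        print(qmp_command(stream, {"execute": "query-sev"}))


if __name__ == "__main__":
    main()
```

The same socket and capability negotiation work for any QMP command, which is what lets higher layers such as libvirt drive encrypted and ordinary guests through the same channel.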
I think that IBM Power also has some confidential computing technologies in it, and I believe there has already been some effort to merge some of that with the work that AMD has done, just to share some of the code there. So I think that's going to be a good area, especially now that more vendors have announced technologies; from an end-user standpoint, I think they really appreciate a common interface.

Yeah, I very much agree with that. I think a common interface is definitely the place where we can have a lot of collaboration. For example, in the encrypted VM space we're already seeing some good collaboration on, for instance, how to expose key management and key IDs through a cgroup, right? How to abstract it in a way that can support both Intel TDX and AMD SEV. And I'm sure there are many other areas as well. For example, for the encrypted VM technologies today, we are required to modify the guest OS to collaborate with the hypervisor, and currently the interfaces are different across vendors, right? So, are we able to have a unified interface that can help our customers deploy much more easily? I think that's definitely one area we can look into.

Thank you. So on this theme of common software interfaces and so on, we have a question about management stacks. The question is: what management stack do you have in place today? Is it libvirt based? Is it a custom QEMU management tool? Or maybe even a custom virtual machine monitor that you're using instead of QEMU? And how has this changed over the past few years?

This is an easy one for Richard Jones from Red Hat, right?

I mean, I would turn this question around and say: how has libvirt itself changed, and how is libvirt changing? Libvirt was this monolithic single-node management daemon, and then we tried to fit that into the Kubernetes model where you're running everything in a pod. And then we came across the problem of whether each pod should run its own copy of libvirt, and so on. And from that, we started to look at how we can make libvirt not so monolithic, more separable, and move things like the creation of the QEMU command line out into separate libraries. It's not something that I'm a huge expert on, because I'm not really directly involved in it, although I am using some of the fruits of it. But this question is, in a sense, backwards, I guess, because if you thought that libvirt worked in a particular way, like this huge monolithic daemon, well, take another look at it now. It may be different from how you expect it to work. And I think the libvirt team is extremely aware of the traditional model and the problems with it. And there's a huge push within Red Hat, it's certainly no secret, to move everything towards Kubernetes and OpenShift, and so making libvirt run on OpenShift is like the number one priority at the moment. I hope that's a sufficient answer, Stefan.

Thank you.

So, since I'm with IBM Cloud in a way, in a research organization, I do think we use libvirt basically as a conduit, right? It does give you additional lifecycle management, so you don't have to deal with the insane QEMU command line interfaces that everybody knows about. But let me turn the question around a little bit, which is essentially: when you ask about VMMs, right?
We actually see that there are quite a bunch of activities like Firecracker and Intel's Cloud Hypervisor, VMMs that are springing up and trying to bring in different technologies. For instance, many of them are Rust based, under the premise that Rust is a better and more secure programming language. I'm not going to go into detail about that here, but that question is out there, right? Our focus has been more on the QEMU side. There's a huge investment that has been made in QEMU over the many, many years, and we're trying to leverage that and address some of the concerns that people have raised with QEMU, which, similar to libvirt, is that it's a pretty large body of code, right? It has also gone through quite a life cycle. It's much more configurable these days, and new techniques can be integrated into that particular code base. So, for instance, on our end we're looking at control flow integrity. We now have hardware features coming with various architectures that allow us to do control flow integrity in hardware, right? You need support for that in the compiler toolchains; that's number one. Number two, as you know, Stefan, there are also the proposals, I think largely driven by you, that essentially say: okay, can I take the somewhat configurable architecture that exists today in QEMU and do a piecewise migration to new technologies, like, for instance, introducing Rust for IO emulation, right? You don't want this monolithic code where, once it has been broken into, maybe because of a device driver bug or something of that nature, your whole VMM is at risk, right? So compartmentalizing it is actually a very good idea, and splitting out the IO emulation gives you exactly that ability. That's number one.

On the topic of next generations: with what used to be the more legacy VM technologies, like "okay, here's your machine model, emulated", we have now realized that the interaction between the VMM and the VM itself is very chatty, right? That's precisely where devices like virtio then sprang up, right? So you can maybe provide a more streamlined implementation, and I believe more work can be done in that area. At the end of the day, and I think that's where some lessons can be learned from Firecracker and Intel's Cloud Hypervisor, in a cloud environment you really want to think about what machine model you actually need. Do I really need PCI bus emulation, right? It's not clear to me we actually need that. If your machine model truly is just a bunch of virtio devices, do they need to go through a PCI bus, an architectural feature that then has to be emulated? So I think there's quite some research that can be done in that area.

Thank you. Okay, so we have a bigger list of questions around the developer community. These range from how to get started with the QEMU project itself, to the process, and how organizations are able to get features upstream and collaborate. So we can take a look at some of them. The first one is about specialization versus generalization: should developers work across the full virtualization stack, or should they specialize on a particular component like the KVM kernel module or QEMU? And how does this work today in your organization?

I can take a crack at that if you like.

Oh, sure, go ahead.
Yeah, so I think some of that depends on what you want to do; it depends on your own preferences. Some people really like being able to put together a complete feature by doing a little bit of work at every layer in the stack. I tend towards the other end of things: I like to look at one component, get quite deeply involved and knowledgeable about it, and then just work on that one component. So some of that is just personal preference. In terms of how ARM works with this, we tend to be a bit more split up: I'm mostly doing the QEMU stuff, and there are other people who work for ARM who are doing only the kernel stuff. But that's for organizational reasons rather than because it's necessarily the most efficient way of doing it.

I mean, I think at Red Hat we have people who work both ways. Some people work across all the layers of the stack and some concentrate on a single layer, so it's very much down to the programmers. And if this question is really about how we get new developers on board and whether those developers should go one way or the other, I think it probably doesn't matter; getting new developers is the thing, rather than which particular way they work. There certainly are features which we develop, I should think almost all of the features that go into QEMU, that end up having a libvirt component and then perhaps even a virt-manager or a KubeVirt component on top of that. So there may be two or three different places, languages, styles, and communities that you have to interact with to get a single feature added. Whether or not this is a good thing, it has its ups and downs, speaking diplomatically.

What about at Intel, how does that work, contributing to, say, the Linux kernel and the KVM kernel module versus QEMU versus higher-level projects?

Yeah, I agree with what Richard and Peter said. We need expertise in both areas: people who are concentrating on one component, and people who are able to look at the whole stack and drive the system-level optimization. So I think both are very, very important. It really depends on the engineer, what the engineer's passion is, which way they like better, and also where their talent is.

Thank you. David, did you want to add something, or should we move to the next question?

I don't have much to add. I think what the other folks said is true. I will just point out that at least when it comes to implementing new hardware features, especially things like all this confidential computing stuff, that really requires a full-stack approach. So it does require the expertise of knowing enough about all the components to actually fit things together, and I certainly think that's a valuable skill.

Excellent. So the next question we have is: how would you describe your process for developing new hardware features and enabling them in the Linux and KVM software stack? And how can it be improved? Maybe there are some frustrations there.

Maybe I can take a first stab at this one. I think designing our hardware features starts with defining the problem: what problem are you trying to solve?
Is it supporting a new emerging use case, or do you want to make an existing use case more efficient or more secure, right? During this problem-definition phase, our software team works very closely with our hardware design team to give them input on the pain points we see on the software side, and I'm sure other vendors do the same thing. We also get input from our ecosystem partners on the pain points they are observing. So that is the define-the-problem phase. Then we start the technology readiness phase: during the pathfinding and the POCs we make sure that we have a sound hardware design and that the hardware implementation is software friendly. Then, after the technology readiness phase, as soon as it's approved, we go to the POR phase, the plan-of-record phase, and that's when execution starts. Our engineering team does the pre-silicon enabling: before hardware is available, we implement this in, for example, a software simulation environment, to make sure we have code as early as possible. Then we submit this to the open source community to get the community's feedback on the architecture and on the implementation, and after many, many rounds of discussion the code gets merged into the open source community. So that is the upstream part. Then on the downstream part, we work a lot with our downstream partners like Red Hat, SUSE, and others, and our CSP partners, to make sure all these technologies can be productized in their distros and in their deployments. So that's the high-level flow of how we do our feature enabling.

In terms of improvement, as I said, our software team is already involved a lot in the hardware feature definition; we do a lot of hardware/software co-design together. But given the growing importance of software, I'm looking for us to do even more on the software/hardware co-design side. I think there's a lot of opportunity we can dig into in that space.

Yeah, I'll just add that it's definitely a challenge, especially because the hardware design cycles can be so long that by the time feedback, especially from the open source community, arrives, it's sort of too late to change things. And that is a challenge that I'm not quite sure what the solution is. Certainly when a company like AMD is developing new hardware features, we will have conversations with our software partners about those and get feedback, but we don't typically have those conversations on public mailing lists, for kind of obvious reasons, and I'm assuming that Intel works similarly. So the downside of that is that by the time there is a discussion on the public mailing lists, things are probably pretty well-baked and there are fewer opportunities to incorporate feedback.
So I think that's something it would be interesting to improve. I'm not quite sure what the solution would look like, but that is sort of a gap, I think, in our design process.

I think it will be interesting to see what the RISC-V folks do with this and whether they manage to make a better job of the whole interaction, given that, unlike the rest of us, they're not working under that same set of restrictions. So, are they going to be able to make better designs as a result? That will be interesting to watch.

Yeah, so this whole thing seems like an area where AMD or Intel and IBM and ARM need to come together and invent a time machine, so that you can go back and not do the project that didn't land. If you do that, that would be great. I think that would be a good solution.

Well, one way of dealing with this is to get early engineering into public hands, so to speak, either in a designated open source lab or something like that, right? And obviously the problem, as David pointed out, is that you often don't want to let your new ideas surface too early, for competitive reasons, right? I understand that. But at the same time, getting some engineers onto maybe even emulation software, right? I mean, that would already help, because once the hardware is baked, there's very little you can change anymore; then you have to work around it.

Okay, great. Thanks. So, up next we have some questions about getting into open source, about new contributors joining the projects, and so on. The first one is: virtualization and systems programming are in general considered a low-level software field that new developers may find inaccessible. How do you recommend getting started in open source virtualization as a developer?

Start submitting patches is really the key here. I mean, I think some people don't see the low-level aspect as being a barrier, but see it as something that's really interesting and exciting. Anyone can develop web applications, but developing low-level bit-banging hardware stuff is a rare skill, and exciting and interesting for many people. So I don't see that being a barrier particularly.

Yes, I mean, in some ways working at a low level is almost easier, because if I'm working on emulation of some feature, somebody has hopefully produced a several-thousand-page specification that says exactly what it needs to do. I don't have to guess, and I don't have to deal with UI aspects very much, which is just as well because I'm terrible at them. So in some ways it's quite easy: the spec says this is what you've got to do, you've just got to translate it all into code, and hopefully it will work. I think the barrier to entry with some of this is that software components like QEMU are now so huge, we have millions of lines of code, that it can be hard to get a grasp on where you should start. So my advice for that is not to try and grasp the whole thing in your brain at once, because nobody on the project has a view of how the whole thing works. Maybe you have a small outline of where roughly all the pieces are, but mostly it's: ignore all the stuff that is not relevant to whatever feature you're trying to implement, and just go ahead and deal with that bit. Look at the code that you need to look at and don't look at the other 900,000 lines, basically.
One choice is to basically look at the open issues list and pick one that looks interesting. That's number one. Number two, with RPMs and things, in many cases you can stand the system up to a running state, and my favorite tool then is to just run it under the debugger, hit Ctrl-C, and see where you end up, right? Often just walking the stack gives you a lot of insight into any software system, and at least in the past that's how I learned systems. So really, just hit Ctrl-C at runtime and see where the code currently is and what you can learn along the way.

Cool, thanks for sharing that. Any other suggestions on how to get started in open source virtualization?

I think the other thing I would say is that it helps a lot to come and talk to us, because we know the code base, we know what kinds of features seem relatively tractable for somebody who's new, and some parts of QEMU are, to be honest, just not very well maintained. So if you're coming along and your idea is "I'm going to contribute to this part of QEMU", and it turns out that there's actually nobody else in the upstream community really working on that at the moment, it's going to be much harder to find somebody to review your code or give you suggestions or whatever. If that's the thing you really, really want to do, then go ahead and do it. But if you're just interested in generally getting started, then picking an area where there are other people working who can give you a helping hand, I think, is important.

Thank you. So as a follow-up: I think you mentioned subsystems that are maintained to various degrees. One of the interesting things that sometimes comes up in upstream contributions is this difference in the quality or the amount of time that's been invested in different parts of QEMU. And we have a question here that says: is QEMU still accessible for hobbyists with limited time? I guess that may be referring to the fact that we have a lot of infrastructure in QEMU that someone who's new, and maybe only focused on one particular new feature, would have to learn and might not know. So what are your thoughts on that? How can we make QEMU not just a good corporate open source project, but also good for hobbyists?

There's always been a dilemma with very large open source projects: you have to put these kinds of standards and contribution rules and style guides in place because you hope to increase the overall quality of the code by doing that, but by doing that you also make it harder to contribute. I don't know if there's a really good answer to that, except probably to make it more automated, so that even if people aren't necessarily fully aware of how to format their patch or something, they can submit something and they will get an automated response saying "format it this way", and then they can proceed in steps that way. But I don't think that any project has really solved this very well.

Yeah, so I think it's got to be harder for hobbyists these days, just because QEMU is bigger and we have gradually raised standards, as you have to as QEMU has morphed from being "here's a nice emulator toy" to "here's something that's actually going into people's servers and has a security boundary" and all the issues associated with that. So it's harder. I think it is still possible, absolutely.
But also, the direction of the project is going to be influenced by who is putting in more hours, and inevitably the corporate contributors are putting in the bulk of the hours, and that's just the way it is. I do agree with Richard that we could definitely do better about making our process easier. We have a fairly old-school process that's mostly borrowed from the way the Linux kernel tends to work, and that is not very 21st-century friendly to new contributors, but changing process is very hard to face.

Okay, thank you. We have some questions about containers and VMs. The combination of containers and VMs, or the choice between them, or using both, has been an interesting thing to see in the past few years. So the question we have is: how do you see the future usage of full machine virtualization evolving, given the ongoing competition from container-based deployment models?

Well, I think we've seen for a really long time, particularly since Intel's Clear Containers project that sort of morphed over time into Kata Containers, that there is a space for the two to coexist, with the virtualization technologies, certainly in the hardware and the low levels of the stack, used to harden containers. Because at the end of the day, containers aren't actually very secure, and people are using them in the expectation that they are as secure as virtualization, which I'm afraid they aren't really. But with virtualization technologies at the bottom end, you can actually give people that promise and deliver on that promise. So I think that's where it's going.

Yeah, I also think there's a spectrum, right? I mean, when you go with containers, you have exactly those kernel exposures, right? There are various projects that we, for instance, are pursuing, trying to figure out whether we can take some of the emerging virtualization techniques and isolation features that hardware provides and drive them deeper into the kernel, right? So you actually have memory management techniques to isolate parts of the kernel, because the kernel doesn't have to touch many of the data structures that the user uses, right? At the same time, you have a resource problem, right? Containers are rather thin, and people love the way containers are being managed, right? And slipping a VM underneath will cost you dearly in terms of memory overhead and things like this. So one direction, and I think QEMU is already going after that, is how you can get to thinner machines. How can you make your distribution smaller, so that the overhead you're paying for a virtual machine backing a container is removed, or at least reduced, right? So there are, across the stack, many things that can be done to give the customer choice, where you can still get the same way of managing your applications, maybe through a container image, but at the same time provide increased security or even isolation. I mean, there are various projects where, for instance, the security features of SEV or TDX can basically be brought in with a VM backing a Kata container, as an example.

Yeah, my personal opinion is that this is not a binary choice between a VM and a container, right? To me, this will be a kind of blended technology. People may have different usage requirements, and they will pick the kind of technology that best suits them, right?
So, for example, in the container space today, as Richard and Hubertus just mentioned, people are already using VT technology, the lightweight VM technology, to improve the security and isolation for containers, right? And also, I think 80% of the containers today are running in VMs, right? Not one container per VM, but pods in one VM. So I think it's not a binary choice to me; it's going to be a blended technology.

Yeah, I think I agree with that. I think the use cases of lightweight virtualization with containers are really interesting. I also wonder what the economic impact is going to be if it becomes just so much cheaper to run container-type workloads, especially in public clouds; that could incentivize people to go even further down that path, potentially.

Okay, well, thanks a lot. We are reaching the end of our time, so I want to thank you all for being part of this panel, and I hope this was a good discussion that everyone enjoyed. Thank you very much.

Definitely, thank you.

Yeah, thank you.