Okay, welcome to the KVM Forum panel discussion. We have a whole bunch of panelists here, and Susie will join remotely — unfortunately she couldn't make it in person, so she is joining us via Zoom. I hope you're able to hear us, Susie. Yeah, I can hear you well. Excellent.

Okay, so my name is Kashyap Chamarthy, I work in Red Hat's cloud engineering team, and I'll let the panelists introduce themselves. Susie, you want to go first? Oh, sure. I'm from Intel, where I'm an engineering manager; my team works on Linux enabling for both bare metal and virtualization. Sorry that I cannot attend in person, so I'll do my best remotely. My name is Paolo Bonzini, I work at Red Hat and I'm the maintainer for KVM. I am Christoffer Dall, I do system architecture at Arm. I'm Sean Christopherson, I'm at Google Cloud, on KVM x86. I'm Will Deacon, I'm working at Google, mainly on protected KVM, and I maintain the arm64 architecture, but not the KVM bits — those are Marc's, wherever he is. There.

Okay, I'll kick off with something that has partly been the theme of this KVM Forum. Traditionally, virtualization has been used for isolation, partitioning, scaling, and so on, but one of the new things is using virtualization for security. Microsoft has its virtualization-based security, VBS, and Samsung has its Knox. What are Linux and KVM up to in this space?

Should I start? Anybody can — not everybody has to answer everything. Yeah, I mean, one of the both good and bad things about Linux is that the maintainer community is very varied. A lot of people have different opinions on virtualization-based security and on how to implement these kinds of isolation features, and I think this is one case in which it's really hard to get people on the same page. But I certainly believe that there is value in using virtualization for security. Even without something like VBS in Windows, the pKVM work, for example, is built on the same idea of having an isolating hypervisor, and not just a hypervisor providing a service to user space. So it was very interesting to see pKVM being developed and actually put to use in Android, and my hope is that it becomes a trailblazer for other architectures to do the same, so that we can have an isolating hypervisor — I don't know if that's an established term, but that's what I would call it.

There's quite an interesting thing with pKVM, which is that, okay, we have a stage-2 translation for the host, and its primary purpose — you saw Quentin's talk earlier on — is so that the host can no longer access the guest's memory. However, even if you don't have any guests, with this stage 2 we could potentially offer hypercalls. So, for example, Linux could just say: I want this piece of what it thinks is physical memory to be read-only, or non-executable. And you knock out all the aliases at that point, which is quite nice. We did play around with, for example, trying to make the kernel text read-only. Sounds like a really good idea until you hit static keys — the first thing the kernel tries to do is patch its own text, and you're like, oh no. So it's a nice idea, but making these protections practical for Linux to use is somewhat challenging.
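To make Will's static-key point concrete: the jump-label machinery really does rewrite kernel text at runtime, which is exactly the store that a hypervisor-enforced read-only text mapping would forbid. A minimal sketch using the real jump-label API — the key and function names here are invented for illustration:

```c
#include <linux/jump_label.h>
#include <linux/printk.h>

/* The branch below compiles to a NOP until the key is flipped. */
static DEFINE_STATIC_KEY_FALSE(demo_key);

static void slow_feature_work(void)
{
    pr_info("feature enabled\n");
}

void hot_path(void)
{
    /* No load, no compare: just a patchable instruction in the text. */
    if (static_branch_unlikely(&demo_key))
        slow_feature_work();
}

void enable_feature(void)
{
    /*
     * This rewrites the NOP in hot_path() into a jump -- a write to
     * the kernel's own text. With the text made read-only at stage 2,
     * this is the write that faults.
     */
    static_branch_enable(&demo_key);
}
```

Static keys are only one of several self-patching mechanisms (alternatives and ftrace are others), which is why simply revoking write access to kernel text isn't enough on its own.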
Yeah, I guess we've been a bit spoiled in Linux. Microsoft always had problems with programs that patched the heck out of kernel space, and they really wanted to prohibit that, while in Linux it's the opposite: we patch things as we see fit, for performance. Microsoft had already been doing this badly in software — observing whether there were any changes and mercilessly crashing the system if so — and with virtualization they could do it well. In Linux, the trade-offs have traditionally been different. It's a very different history for the two systems, so you see very different trade-offs, and it's difficult to change direction and steer towards something like what Microsoft has been doing with virtualization-based security. And I don't want this panel to be about praising Microsoft, but still, you have to give credit where credit is due. I think pKVM is one of the first steps towards potentially applying virtualization for security, and it's going to be interesting to observe over the long term. Is there room for standardization there, or will it effectively be completely locked in — you run Windows on Hyper-V and you run Linux on KVM and they don't work across each other, you can't migrate between them unless you introduce nested virtualization — or will it get standardized around common principles? I think there's room for doing that standardization, but not necessarily the right bodies for doing it. So that's one of the things that I think will be interesting to watch.

Okay, next, somewhat related — thank you. KVM has over time gained support for a lot of virtualization extensions on x86 and Arm, and we have Susie, plus Arm and x86 people here. What has been your experience, and what are the challenges? What works well and goes unnoticed, and what are still the pain points? I know it's a bit of a broad question; Sean, in his keynote earlier this morning, outlined some of it, but maybe not everybody attended that.

Are we limiting pain points to technical stuff, or non-technical stuff? Both — you tell us. I mean, I think we already beat the dead horse on the non-technical stuff. I can add something to that. Go for it.

So, I used to co-maintain KVM/arm64, and eventually — I didn't actually burn out, but I'd just had enough of that work. And it wasn't actually the maintainer work you described in your talk this morning; it was simply the amount of reviews. Had I had more time to structure CI loops, or set things up, or build testing infrastructure, I would still have found it interesting and might have stayed. But I think we're conflating having to review all patches and being the maintainer at the same time, and if maintainers had the option of saying, "I'd like to take your patches, but unfortunately they're not reviewed, nor do I have time to review them," that might be a way of actually making maintainers' lives a little bit happier. But again, those rules are not written down anywhere.

I think with reviews specifically, even if you have them written down and even if you have the manpower to throw at it, it's a balance, because if you just have a bunch of reviewers, and they have different voices and different opinions, then inevitably you're going to get a case where a series comes in and the maintainer goes: what is this? This is terrible.
And it's like, well, this reviewer told me to do it this way. So if you have too many cooks in the kitchen, you're going to have a mess. There's a balance there, but I agree: it's not feasible to scale with one person being the bottleneck.

Well, having too many reviewers was never our problem — that would be a wonderful problem to have. But you just have to look at the kvmarm list, right? There are a lot of patches — poor Marc, again, is on the receiving end of all of that — and if you look at them, people might think, okay, it's probably just bringing things up to parity with x86. Some of it is, but most of it is architecture extensions, new CPU features, errata workarounds, that kind of stuff, and they're quite hard to review. Honestly, Marc — as I keep pointing out — is the best person to review them; there are some people getting up to speed, but there's an awful lot of prerequisite knowledge needed before somebody can give a meaningful review of that code. Yeah.

Yeah, for me, one point that is maybe not surprising if you know the way KVM development works, but that Sean brought up, is the "good enough" mentality, because the good enough mentality becomes a technical-debt scenario later. As you enable more and more hardware features, at some point the hardware starts doing things that you had only done "well enough" before, and now you have to make sure you can migrate from a machine that accelerates something in hardware to a machine that doesn't, so you have different state from the old emulation and the new implementation. And if you improve the implementation, you have to be able to migrate from the old implementation to the new one — again, different state. That's one thing that probably wasn't appreciated at the time. We got sort of lucky with nested virtualization, because it didn't support migration for a long time. But as we get not just new features but also hardware acceleration of existing features, you find out that some choices you made ten years ago become an absolute pain point.
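A toy illustration of the state-compatibility trap Paolo describes — this is hypothetical code, not KVM's actual migration format: once the old implementation's state layout has shipped, every newer implementation has to keep accepting it and supply sensible defaults for the fields it didn't have.

```c
#include <stdint.h>
#include <string.h>

/* What the old, purely emulated device saved. */
struct timer_state_v1 {
    uint64_t count;
};

/* The hardware-accelerated version carries extra state. */
struct timer_state_v2 {
    uint64_t count;
    uint64_t offset;    /* new: hardware counter offset */
};

/* Load either version; absent fields get compatible defaults. */
static int load_timer_state(const void *blob, uint32_t version,
                            struct timer_state_v2 *out)
{
    switch (version) {
    case 1: {
        const struct timer_state_v1 *v1 = blob;
        out->count  = v1->count;
        out->offset = 0;    /* the emulated timer had no offset */
        return 0;
    }
    case 2:
        memcpy(out, blob, sizeof(*out));
        return 0;
    default:
        return -1;          /* unknown version: refuse to load */
    }
}
```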
So, Susie, is that what you wanted to say? Yeah — my team has been working with the KVM community for fifteen-plus years, and I think we are very happy with our experience of working with it. It's very effective; we see a lot of active contributors, not only people saying "I need to enable this micro-architectural feature," but people who have a real sense of ownership — "this is my project and I want to make it better" — for example, contributing to the infrastructure and to bug fixing in general. Also, on the maintainer side, patches get reviewed in time, and the maintainers give very specific feedback on how to make them better: not just "this is impractical," but specifically how to improve it. I hear from our engineers that this is a very beneficial project for growing their capabilities, because they learn from the contribution experience. So I think that's a very important part.

Looking forward, I think one of the challenges is that in the past a lot of the contributions were limited to the KVM space, but as features become more complex, we will need to touch many other kernel subsystems. So I think it's important to take an end-to-end view: for example, how we leverage other subsystems' infrastructure, and how, when we design a feature, we take both bare metal and virtualization into consideration, so that virtualization does not become an afterthought. I think those are some of the new challenges ahead. Thank you.

So, a related question. Confidential virtualization is a big theme — at yesterday's KVM Forum, most of the sessions touched on it. It's a red-hot topic with a lot of challenges, and each vendor and each architecture has their own. What are some of the areas where vendors can work together on this? One potential thing that I see here is that remote attestation is one of the common challenges: the concept has been around for a while, yet every vendor seems to implement their own mechanism in its own silo. How is that with Arm? That's just one example.

I want to pass on attestation. That's a good idea — there's actually a talk right after this one, about exactly this type of attestation. Yeah, I was supposed to mention that. We should all go there for the answer.

No — yeah, I mean, really, it's very hard, because you have software people that have to deal with hardware people doing software, and the interaction is complicated. There are many different approaches depending on who your customers are. Of course, s390, where one machine is your data center and you just make the machine bigger, is very different from x86, where you have more horizontal scaling. And yeah, I don't want to say that it's impossible, but you can see that it's really challenging to find an approach that fits all. At least what we can do is try to focus on similar interfaces. The implementations may not be the same, but — another point from Sean's talk today, not coincidentally — the more you make things similar to what already exists in hardware, the more you can rely on existing documentation and knowledge. This has been a design point for KVM since forever. Sometimes we maybe took it even too far, like with Secure Boot and System Management Mode, but still, it's been a design point to try not to invent things. And I personally think that sometimes hardware vendors should resist the urge to invent things a little bit, because otherwise you get into the situation where you have 14 standards, none fits your use case, so you invent the 15th.

The way I would phrase it is: I really want hardware vendors to give us building blocks, so that software people can take things and build what fits their needs, instead of companies saying, here's our solution, we've built the whole thing for you, now can you use it? That's very hard to adapt, very inflexible, when we want to change it. I'm an x86 person, but I love the pKVM thing. It is just fantastic — you get to do everything in open source. Is that a fantastic thing? Yes, it's wonderful. I wish we got to do that on the x86 side: write the whole thing in software and just have the building blocks provided by hardware. And I think that gets you a lot of things like standards: we have a way of doing this in Linux, and it works across everybody.
We hide a little bit of it in the low-level software, because the hardware is slightly different, but only in the building blocks, and what gets seen at the end is what's supported in Linux, not what's supported on Intel, AMD, Arm, and 50 other vendors.

But speaking of that, there is nothing preventing you from having pKVM on x86 as well. I guess there was a proposal, some talk — I mentioned it yesterday or the day before. So you could have it on x86 too. We could have pieces of it on x86. Yeah, there have been trials; you can do it.

I think that's interesting, though — back to attestation, which you mentioned. That has a lot of fingers in a lot of pies, and it's not just hardware, not just architecture; it's things like the way you boot the machine, which on x86 might be one true way, but on Arm there are quite a few different ways of doing it. I remember the ACPI wars not too long ago, when we introduced ACPI support on arm64. It does mean it's hard to find people to own the whole end-to-end attestation story, because it's not in the hands of one person to implement it on arm64. We can't just say to Arm, hey, you do this — they'd say, well, we only do the architecture; you need to talk to these other people. I think that's a big challenge: to get somebody to own that problem, and somebody that people actually trust.

I think it's also just not a well-enough-understood problem at this point in time. The standards are going to come out; TDX and pKVM are going to see actual use, and Arm CCA eventually is going to come out. And we'll see where we can go — whether we can share plumbing, or share principles, or standards, or wherever it gets us. But I think it's going to have to be one step at a time, unfortunately.

Yeah, I want to give an example here of how we have collaborated in the confidential computing space. In the past, the guest trusted the host and the hypervisor, but now, with confidential computing, the host OS and hypervisor are no longer in the TCB. This is a new threat model: the guest needs to be hardened against attacks from the host and the hypervisor. For example, the guest and host share some buffers for I/O virtualization — how do we make sure that malicious host input cannot be used to attack the guest's confidential data from there? So there needs to be a lot of guest-hardening work, like auditing and fuzz testing. And this is not only for the guest kernel, but for many of the layers in the guest: the guest firmware, GRUB, the shim. There's a huge amount of work needed to harden the guest software stack against this new threat model, and I think that's certainly an area where we can collaborate across the hardware vendors and with the Linux kernel community. Okay, thank you.
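As a concrete — and hypothetical, not taken from any real driver — example of the hardening discipline Susie describes: anything the host writes into shared memory is attacker-controlled, so a confidential guest must snapshot host-supplied fields once, validate them, and only then use them.

```c
#include <stdint.h>
#include <string.h>

/* A descriptor the untrusted host places in a shared ring (made-up layout). */
struct shared_desc {
    uint32_t offset;    /* host-written: offset into the shared area */
    uint32_t len;       /* host-written: payload length */
};

#define SHARED_AREA_SIZE 4096
static uint8_t shared_area[SHARED_AREA_SIZE];   /* host can write this */
static uint8_t private_buf[SHARED_AREA_SIZE];   /* guest-private memory */

/* Returns 0 on success, -1 if the host supplied bogus values. */
static int consume_desc(const volatile struct shared_desc *d)
{
    /*
     * Snapshot the host-written fields exactly once; the host may
     * change them concurrently (TOCTOU), so never re-read them from
     * shared memory. Kernel code would use READ_ONCE() here.
     */
    uint32_t off = d->offset;
    uint32_t len = d->len;

    /* Written to reject overflow as well as out-of-bounds ranges. */
    if (len > SHARED_AREA_SIZE || off > SHARED_AREA_SIZE - len)
        return -1;

    memcpy(private_buf, &shared_area[off], len);
    return 0;
}
```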
What else have I got? I forgot to mention at the beginning that I was supposed to take questions from the Etherpad, but it kept crashing for me; that's why I have an offline backup and a set of prepared questions. Okay, so the next one is a bit Arm-related. In 2020, you gave a KVM Forum talk on exposing KVM on Android, and you talked about virtualization being the wild west of fragmentation. How would you update that assessment in 2022?

I can't extend the analogy on the fly — I'm trying to think about it. We're all on the same horse together, maybe? I don't know. But I think one of the things that has changed — and it was only a couple of years ago, and it takes a long time to change this kind of stuff — is that as Android we're now engaging more with the silicon vendors and the OEMs, through the Android Virtualization Framework, which is the broader project of which pKVM is the hypervisor part; obviously there are lots of user-space bits and interfaces for talking to the virtualization services. That has provided a focal point. People want to enable this on their SoC, so we're now able to actually ask: okay, what are you doing in your hypervisor today? How much of that do we need to support in pKVM? How much of it could we move somewhere else? That kind of stuff. So I think we'll get there, but it is still quite fragmented. We're working on it, and I'll leave it at that.

Christoffer, do you have a different view? We're definitely seeing that the steps taken with pKVM are also pushing things all the way back into SoC designs: people are starting to think, okay, virtualization is a thing for security in this space, and we need to think about how we design the chip that's coming three, five, seven years down the road to cater for it. And that's amazing to see. Okay.

A related one, while we're still on Arm. In that 2020 talk on exposing KVM on Android, you mentioned that there's a lot of third-party code running at a highly privileged exception level in the architecture, and you said it needs to be deprivileged using pKVM. How is that effort coming along?

Yeah — how ambitious and naive I was in 2020. It's definitely still the goal, right? So, parking pKVM for a second: on an Arm SoC, an Android SoC, you have TrustZone, and you have the secure world, where there's an awful lot of third-party code that, as you mentioned, has an elevated level of privilege. Maybe that's not the worst thing in the world, but the problem is that it's hard to update. Do you want to give an example of this third-party code? Things like DRM run over there quite a lot of the time; key management, although maybe that's the right place for key management. Key management's bad. Just some examples of things that run over there.

So, yeah: updating that software, particularly when it gets big and complicated, is a challenge, and moving it into VMs would potentially offer a way to update it more easily in the field. Secondly, it would allow us to provide some level of portability, because we have a virtual platform, a virtual machine: if we can standardize what that VM environment looks like, then you might not have to integrate and develop this code on a per-SoC basis, or at least it would be easier to port. So I think there are some big advantages there. But the momentum is quite slow. I mean, we have shipped — we are shipping — pKVM in Android 13, this year's release of Android, and we are using it for running parts of the Android runtime compiler. But we're not currently running anything that we've moved out of TrustZone, and part of the reason is that TrustZone has been around for quite a long time and has seen a lot of adoption in hardware. You can't just move those things over, right?
If the piece of hardware you need to talk to is glued off on the secure side, well, okay, it's going to stay there. I think we'll get there, but it's going to take a little while.

That is one of the things the Arm Confidential Compute Architecture is trying to address. Again, that's a very long-term thing, but it is a separate world that doesn't trust TrustZone, nor does TrustZone have to trust it. So that is one of the potential gains over the pKVM approach. But yeah, again, adoption will take time. Okay. Any other comments? I think we'll move on, then.

Okay, what else have I got here? One last question on arm64 — x86 is getting an easy ride here, right? So, KVM's arm64 port is being used on small-form-factor devices and on larger ones: cloud servers and phones. How are the KVM and developer communities dealing with that diversity?

Do you want to take this one, or do you want me to go? No? I think it helps when 75% of the people involved are employed by one employer, Google. Seriously, it eliminates a lot of friction, because you can go, hey, do you want to hop on a chat and hash this out — compared with when people are spread out geographically and across different problem domains.

I disagree, actually. I said bad things about Linux at the beginning, so now I can say the good things. The idea of having a single operating system running from machines with 16 megabytes of RAM up to machines with 16 terabytes of RAM is kind of unheard of, and if there's one thing Linux has done very well, it is scaling across different use cases. Personally, I would be surprised if it were otherwise — if KVM needed two different hypervisors for pKVM and for data-center use. The part that is more surprising is that the maintainers could pull it off, because pKVM is a very different use case, and many times with Linux the obstacles are more of a non-technical nature. So I'm totally not surprised that the same hypervisor can work from phones to data centers, but I still think you guys did a great job, because there were hurdles to cross and you crossed them.

I mean, you do hear people say that today's phone is yesterday's server, and in terms of CPU and memory that might be true. But when you start to get down into the deep, dark areas of things like I/O topology, it's really not the case. So although it's not surprising that we manage to support this whole array of devices, the bit where it's perhaps more challenging is around, for example, getting VFIO right. If you want to do device passthrough on a server part, okay: it's PCI, it's all coherent, and it's just plumbing. To do that on a mobile device, you might not have a translating IOMMU. Yeah, but tomorrow's phones will be more likely to have an SMMU than today's, so we'll get there. I mean, at the point where you have to import half of Linux into the protected layer of pKVM to make things work, the whole idea kind of falls apart. Yeah, I think that's going to be a real challenge. Okay, thank you.

A complete change of scenery: do you see the Rust language being used in the KVM kernel module itself? If so, why, and in which areas? If not, why not?
I think Susie should start, talking about Rust use in user space, and Will about crosvm, and then we can come back to the kernel question, if you agree. Yeah — I'm not personally a Rust expert, but we do see that Rust provides a lot of security features, so there's a lot of activity using Rust for security-oriented user-space VMMs: Cloud Hypervisor, crosvm, Firecracker. There's a lot of activity in that space.

Do you want to talk about crosvm? Yeah — we're actually using Rust quite a lot in the AVF project, though not in the hypervisor part, because — reviewer shortage — that's one way to make it even worse, right? So crosvm we're using as our VMM — that's written in Rust — and we're also currently looking at using Rust for the first-stage bootloader of the guest. That's quite exciting, because we managed to get some really good performance by entering the guest with its MMU enabled. For this Rust code we built a crate for, basically, building page tables. So it's cool that you can do such low-level things: you might think you have to do the early page-table management in assembly, because it's always been done that way, but actually we have a crate for that now. We can initialize the page tables and enter the guest with the MMU on, which means we don't have to do cache maintenance, and it's blindingly fast. So I see plenty of scope for Rust there.
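The early page-table work Will describes is small enough to sketch. Here is a hedged illustration of the core of it — populating a single AArch64 stage-1 level-1 table with 1GB identity-mapped block descriptors (4KB granule; field positions per the Armv8-A VMSA, all names invented; a real bootloader would also need MAIR/TCR/TTBR setup):

```c
#include <stdint.h>

/* Level-1 block descriptor fields (AArch64, 4KB granule, simplified). */
#define PTE_VALID       (1ULL << 0)     /* bits[1:0] = 01: block descriptor */
#define PTE_AF          (1ULL << 10)    /* access flag: no fault on first use */
#define PTE_SH_INNER    (3ULL << 8)     /* inner shareable */
#define PTE_ATTR_NORMAL (0ULL << 2)     /* MAIR attr index 0, assumed normal */

/* One level-1 table: 512 entries, each mapping 1GB; must be page-aligned. */
static uint64_t l1_table[512] __attribute__((aligned(4096)));

/* Identity-map the 1GB block containing physical address pa. */
static void identity_map_1g(uint64_t pa)
{
    uint64_t block = pa & ~((1ULL << 30) - 1);  /* 1GB-aligned base */

    /* Index = address bits [38:30]; output address keeps bits [47:30]. */
    l1_table[(pa >> 30) & 0x1ff] =
        block | PTE_VALID | PTE_AF | PTE_SH_INNER | PTE_ATTR_NORMAL;
}
```

The payoff described in the transcript: because the tables are written before the guest ever runs, nothing is written through uncached mappings, so the usual cache clean/invalidate dance before enabling the MMU goes away.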
The original question, I think, was about Rust in the hypervisor itself. I never say never, but I think there's going to be some friction against that. There's friction, but part of it is also that you have a working system today. If you want to replace that with Rust, you have to take everything you've written in C, throw it away, and replace it with Rust, and that is both technically challenging and can be very politically challenging within a company — saying, hey, I want a bunch of engineers to go work in Rust on this thing we already have in C. So when you have something smaller and more contained, like the bootloader code in your guest, that's a very consumable piece: you can actually go and do it, and it doesn't take four years to deliver. Versus, say, go do KVM's x86 emulator in Rust: why, first off, and second, good luck actually doing that in any reasonable timeframe and ending up with something that matches where we are today.

You could apply a lot of other techniques in those four years to make the C code better as well. Or move it to user space: I think one possible way to use Rust for things that KVM does is to move them to user space and use Rust there. You have to write it anyway, so you might as well write it in Rust, and you get the extra advantage of doing fewer things in the kernel. I think, Christoffer, your point is actually very interesting — the idea of doing something else in that time to benefit the code. We've had a relationship with some universities looking at the pKVM hypervisor code, and they've done something kind of similar: they've been looking at the C code, trying to annotate ownership types on top of it, and then reasoning about the lifetimes of objects and that kind of stuff. I think the real question is: why didn't you write pKVM in Rust? Because we needed to do it quickly, and none of us knew it.

There was a related question on the Etherpad — you can see it there, from S. Rutherford: could we rewrite the x86 instruction emulator in something safe? I mean, even though writing an x86 decoder is probably not at the top of the list of things people want to do, I'd guess that rewriting the x86 emulator in Rust is probably the lowest-hanging fruit in all of KVM. The question is whether it's really time well spent, but for sure it's the lowest-hanging fruit — that much I can say.

I think — sorry — you could do it. But again, do you want to spend the time? If you have to assign three engineers for two years to get it done, wouldn't you rather take those three engineers and do something else? Probably not worth it. I mean, a lot of it is just tables; there's not that much code in the x86 emulator. In terms of lines, no; in terms of what it does and has to do, there's a lot. Yeah, there's a lot of complexity in the paths that can be followed, and therefore bugs that can be introduced unwittingly. And we have had bugs in the emulator — well, not logic bugs. Are we stating the obvious today? No, no, I don't mean logic bugs; I mean bugs of the kind that Rust and friends are meant to prevent. So there would be some benefit. The question, again, is whether the benefit outweighs the cost, of course.
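To give a flavor of "a lot of it is just tables": below is a toy table-driven emulator for a made-up instruction set — nothing like KVM's real x86 emulator — where decode is a table lookup and all the complexity (and the potential for the memory-safety bugs mentioned above) lives in the per-opcode handlers.

```c
#include <stdint.h>
#include <stdio.h>

/* Handler: executes one instruction, returns its length or -1 on error. */
typedef int (*op_handler)(uint8_t *regs, const uint8_t *insn);

static int op_inc(uint8_t *regs, const uint8_t *insn)
{
    regs[insn[1] & 0x3] += 1;       /* operand byte selects a register */
    return 2;
}

static int op_mov_imm(uint8_t *regs, const uint8_t *insn)
{
    regs[insn[1] & 0x3] = insn[2];  /* reg = 8-bit immediate */
    return 3;
}

/* The "it's mostly tables" part: one entry per opcode, rest undefined. */
static const op_handler optable[256] = {
    [0x40] = op_inc,
    [0xB0] = op_mov_imm,
};

static int emulate_one(uint8_t *regs, const uint8_t *insn)
{
    op_handler h = optable[insn[0]];
    return h ? h(regs, insn) : -1;  /* -1: undefined opcode */
}

int main(void)
{
    uint8_t regs[4] = {0};
    const uint8_t prog[] = {0xB0, 0x00, 0x2A,   /* mov r0, 42 */
                            0x40, 0x00};        /* inc r0     */
    int pos = 0, n;

    while (pos < (int)sizeof(prog) && (n = emulate_one(regs, &prog[pos])) > 0)
        pos += n;
    printf("r0 = %u\n", regs[0]);               /* prints: r0 = 43 */
    return 0;
}
```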
Okay, I think we've got five minutes left, and I've got one loaded question that will probably take up all five of them. This is mostly an age-old debate: standalone hypervisors versus hosted hypervisors, or, to use the inflammatory terms, type 1 versus type 2 hypervisors. How relevant do you think that distinction is today? We know KVM is neither — what do you think are the pros and cons of KVM here, and is this debate relevant at all?

I don't think it's relevant at all. The whole type 1 versus type 2 thing — I think it's pointless to differentiate. You can make type 1 hypervisors out of KVM: pKVM is, for all intents and purposes, a type 1 hypervisor, and probably looks a lot like the early days of Hyper-V or Xen. You could make x86 KVM into a type 1 hypervisor if you wanted to. It's all about what you want to do: if you want to take KVM in the direction of security and whatnot, you go pKVM; the benefit of type 2 KVM is that you get all the goodies from the Linux kernel — memory management, scheduling, and all that stuff. So differentiating type 1 versus type 2 is just pointless.

I've been asked on a few occasions, right: pKVM, is it type 1 or type 2? Oh, it's type 1.5 — you know, just like that. I usually say it's type 2 with benefits. Actually, when I had that discussion with Avi while I was submitting the KVM/ARM patches, I remember him saying that MMU notifiers were really the thing that made KVM not quite a type 2 in his view, because Linux really does things for the hypervisor. That's a big difference from — I think type 2 was intended to describe having to install something that looked like a driver in Windows and make it behave as if it were running virtual machines, like VMware Workstation in 1999 or whenever that was. And I think that is true: that they collaborate is the real difference.

And even if you look at type 2 hypervisors, there are such diverse designs — starting from VMware Workstation, where it's a completely separate driver that just happens to run in kernel space, to the Apple Hypervisor framework, which is as limited as possible, where a lot of the code that would run in KVM or in VMware Workstation is actually moved to user space. KVM gets lots more help from the kernel — for example, for context switching and so on. There are so many ways to do a hypervisor, whether type 1 or type 2, that it's not even a single variable; you can probably find three different axes along which to design one.

It does demonstrate quite nicely how useless we are at naming things, though, right? Type 1, type 2 — and then you look at what we have on arm64, and there's VHE and nVHE as well. It's just naming soup. But it's a good archaeological exercise to try to find the actual definition of type 1 and type 2. I think it's a scanned PDF that you can't search — Popek and Goldberg's thesis from '72. I recommend finding it; it's fun. I think the easiest way to find it is to search the KVM mailing list, because sooner or later somebody posted the link. It might or might not be dead by now, but it's probably a good place to start, because I think the whole type 1 / type 2 thing has been misused for basically the fifteen years of KVM's life.

Susie? Yeah — I also think this is kind of an old terminology; it's not that relevant to me. What you really want to ask is which hypervisor fits your use case. For example, does your use case require a super-light hypervisor? Do you require functional safety, real time, all these other things? Do you really need a thin hypervisor to do those kinds of things, or do you need an OS as part of the hypervisor to provide all the other benefits? So it's not really about terminology; it's really about the feature definition you need.

Okay, we've got just one minute left. Any closing thoughts, ten seconds each? KVM maintenance: now that you have two x86 co-maintainers, what are you going to do with all that extra free time? You're implying that all of Paolo's time was going into KVM x86 and that that was the problem, as it were. I don't know, but there are certainly things to fix that are not kernel code. Without being too immodest, I was overall doing pretty well at the personal relations with developers; other people were actually doing better than me at code review — I'm not very good at code review myself, I have other things I'm better at. Where I want to go — not because I have extra free time, but because it's needed — is basically to remove the hurdles to contribution and to have a better way to onboard new people. And an important thing is to do that across other kernel subsystems too. I know, for example, that the XFS developers work in a very similar way to KVM developers — they probably don't know it, because I also don't know the details very well. But if we do these things — onboarding documentation, shared practices — it should be done not only for KVM.

Okay, we're one minute over. That's all we have. Thank you for the discussion, and thanks for attending, everybody. Clap for me.