My name is Paolo Bonzini, I'm an engineer at Red Hat, and we will start the conference by giving a status update on KVM and the things that happened in the last year. So the last KVM Forum was virtual, but it was also in September, and it was about the time that Linux 5.15-rc1 was released. Now we're a bit later in the 6.0 release cycle; it's probably going to be out at the beginning of October. So we had basically almost six releases and a pretty large number of commits, roughly 2,500, of which about 200 were fixes that went into stable releases. That compares to 2,200 and 180 in the previous six releases, so the ratio has stayed roughly the same, which means the fixes are keeping pace with the churn and we are not introducing more bugs, at least relative to the amount of change going on in KVM. KVM remains a very mature hypervisor, but with a lot of exciting things going on, and in particular in 5.16 the RISC-V architecture was finally merged upstream, so I guess we can give an applause to the RISC-V maintainers. For the past few years, when I've given this status update, I've plotted the commits in every release to highlight releases with particularly high activity or with particularly interesting things being contributed, but this past year there was a hectic amount of activity, like always.
So in each group of five releases, leaving out the last one from 5.14 to 5.19 because it's not completely released yet, you can see that we went up from 1,200 commits five years ago to 2,000 this year. It's almost doubled. And you can see why: it's mostly Google sending out patches like crazy. It used to be that Red Hat was first with roughly the same amount, but now we have 1,000 more just from Google. So I guess another applause for Google, and especially for the people that reviewed the patches. And this is the amount of commits by architecture. You can see that RISC-V is particularly high this year, even though it's relatively less developed and has fewer people working on it, because there was the first set of patches that added the basic KVM support and then also support in the self-tests. So you can see that the self-tests are actually basically 20% of the commits. That's a lot. The ratio of lines in the hypervisor versus lines in the self-tests is something like three to one: for every three lines of hypervisor code, there is one line of test code. And I think that's probably the highest in Linux, or close to the highest in Linux. And there's also a bunch of changes going in in generic code, for doing stuff better across all architectures, and that's also an important thing in the evolution of KVM, so good to have. And let's see what happened in the last year. One interesting theme that I noticed is that we are using Linux library code more. KVM tends to be relatively small and tends to do things on its own without much help from Linux library code, sometimes because it's just not necessary, but still, this year we have new code for the memslot and vCPU lookups using the interval tree and xarray data structures. And also the x86 page table destruction was revamped and is done using the Linux concurrency-managed workqueue.
And I don't know if we will see more of this in the future, but it's certainly a good thing to have and an interesting change compared to the past. Going to the single architectures: on x86 we have an API to have a continuous TSC over migration. So usually the way it works in QEMU, for example, is that the TSC stops on one side of the migration and starts on the other side, so there's a moment where the TSC appears to have stopped even though a few milliseconds have passed. With this change you do the opposite: the TSC, the clock-cycle counter, continues to increase over migration, and to the guest it basically looks like a really long SMM or something like that. There were many cleanups for the APIC virtualization and many cleanups for the memory management unit, including the page table destruction that I mentioned before. The new scalable MMU is now basically at feature parity with the old one and a lot faster, so that's good. An interesting thing is the Xen event channel delivery in KVM. You may wonder why we need Xen event channel delivery in KVM, and the reason is that AWS is running legacy instance types on KVM while exposing the Xen hypercall interface. Some things have to be done in the kernel, and they were already there a couple of years ago. The one that is important for performance is event channel delivery, and it's now in, and it also prompted a bunch of changes and improvements in other parts of the KVM code. So it's nice to have new features that maybe are a bit niche, but they prompt more changes and more removal of technical debt. Another interesting one is eager splitting of huge pages on migration: instead of taking loads of page faults when dirty page tracking starts, you split the huge pages eagerly in the background, and that's good for performance as well when migration begins.
For x86 hardware features, on AMD we had a bunch of nested virtualization improvements, including support for accelerating the case where the nested guest is itself a hypervisor, which is a bit crazy, but it's good especially for debugging and testing of the nested virtualization code. In theory you can go infinite levels deep; I don't know if you find bugs before it gets too slow or not, but anyway these things help with nesting even more. APIC virtualization on AMD now supports physical APIC IDs greater than 255, which requires x2APIC, because physical APIC IDs above 255 can only be expressed in x2APIC mode. On Intel, probably the biggest feature was dynamic XSAVE states for AMX, which is an extension for matrix multiplication. This was done mostly by Thomas Gleixner, because it was done in the common Linux code, so thanks to him as well. There was also virtualization of IPIs, and something that is going on is improvements to performance monitoring, in particular precise event-based sampling. For ARM there are a lot of useful things, like for example new instructions for timed event wait, that is wait-for-interrupt with timeout and wait-for-event with timeout; asymmetric setups of the performance monitoring unit; and support for Apple M1 processors, where there are some things that are apparently within spec but only done by Apple. There is also suspend and resume using the PSCI interface, allowing user space to choose which hypercalls are available, and guard pages and stack traces for the hypervisor stack. Because, as you may know, ARM also supports a mode, which used to be the legacy one, but is now also used for protected KVM, where there is a small trusted base running at exception level 2, the hypervisor mode, and then Linux runs as a kind of privileged guest at exception level 1. So in particular EL2 can now filter which system registers are available in EL1.
Some hypercalls are only available at initialization time, and not anymore afterwards, to further protect the hypervisor from mischief; and also EL1 doesn't have to share all of its pages with the small hypervisor. It can share only a subset of pages if it wishes to, and that also helps reduce the attack surface. For s390, a lot of the changes that went in are for secure guests: for example, the restore of secure VMs, a device driver for communication with the secure guest firmware, and also virtualization of interrupts for secure guests. Storage keys, for the little that I know about s390, are something that keeps on giving, and there were still improvements and fixes after several years of s390 being supported by KVM. Another important set of changes is maintenance. Power had relatively fewer changes; it's more stable, and the changes for Power don't go through me anymore, they go through the architecture tree. On the other hand, we have new x86 maintainers who are going to send pull requests to me. The details are still to be decided, because it's the first release where this has happened, so I won't go into the details, because I don't have any, and anyway they're probably going to change in the next few releases as we sort things out. So what's next?
For x86, I will probably use a lot of the free time that I now have, from having some maintainers, to improve the CI and make it more automated. But apart from that, it's probably going to be a lot of work on finishing confidential computing support in the next year, with Intel TDX and the third step in enabling AMD SEV-SNP. For the other architectures, one thing that I've been noticing is that people have been working on having more feature parity with x86. For example, around paravirtualization features, there have been patches posted for ARM for asynchronous page faults; steal time is already there on ARM but not on RISC-V, and there are proposals being worked on in the RISC-V hypervisor interest group right now, for now only for steal time, but probably also for asynchronous page faults in the future. Another thing that I've seen for ARM is support for the dirty ring and the scalable MMU, which have both had interesting complications due to differences between ARM and x86, especially in the page table management area. And I think this is good, because with these architectures being in wider and wider use, it's kind of nasty to have some of these features, or some of these scalability optimizations, only on x86. So I guess that's all. Thank you, I will leave the stage to Alex for the QEMU update, and have a nice KVM Forum. Thank you very much.