Hello, I'm Michael Tsirkin, working at Red Hat. I'm the chair of the Virtio Technical Committee, which is the governing body that develops the Virtio specification, and now I'm going to tell you what we've been doing in the last year. First of all, the Virtio Technical Committee made some changes to its charter. One of the changes has to do with the scope of the Virtio specification, which was originally designed to cover only the interface between the driver, which is part of the guest running in a virtual machine, and the device, which is part of the hypervisor on which it runs. Nowadays, there are multiple implementations which look different. For example, there are hardware Virtio implementations, which can be used on bare-metal systems or passed through to a guest virtual machine without being part of the hypervisor. There are nested setups where the device is part of a hypervisor, but not the hypervisor on which the virtual machine runs. All of these cases are now declared in scope for the Virtio specification. The Virtio Technical Committee has been the de facto registrar of device ID and feature bit numbers, and this has now been included in our charter. We have also made official our strong commitment to compatibility. In particular, any device or driver compliant with a specific version of the specification is sure to also be compliant with any future version of the specification. We plan to release these future versions every 12 to 16 months. Finally, Cornelia Huck, who is a long-time contributor to the Virtio specification and a co-editor of the spec working at Red Hat, is now a co-chair of the technical committee. Congratulations, Cornelia! It's easy to track changes made to the Virtio specification over time, because we maintain its history on GitHub. Looking at the change history, we can see that the rate of change has been stable in most of the recent years.
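The compatibility guarantee described above rests on how Virtio negotiates feature bits: the device offers a set of bits, and the driver acknowledges only the subset it understands, so bits added in future spec versions are simply never negotiated by older drivers. Here is a minimal illustrative sketch of that idea (the feature bit numbers below are made up for illustration and are not real Virtio feature bits):

```python
# Illustrative sketch of virtio-style feature negotiation, not the
# normative algorithm from the spec: the device offers a set of
# feature bits, the driver acknowledges only the ones it understands,
# and anything the driver doesn't know about is simply not negotiated.

def negotiate(device_features: int, driver_supported: int) -> int:
    """Return the feature bits both sides agree on."""
    return device_features & driver_supported

# Hypothetical feature bit numbers, for illustration only.
F_OLD_FEATURE = 1 << 0
F_NEW_FEATURE = 1 << 40   # imagine this was added in a later spec revision

old_driver = F_OLD_FEATURE
new_device = F_OLD_FEATURE | F_NEW_FEATURE

# An old driver keeps working against a newer device: the new bit
# is simply dropped during negotiation.
assert negotiate(new_device, old_driver) == F_OLD_FEATURE
```

This intersection of offered and supported bits is what lets old drivers and new devices (and vice versa) keep interoperating across spec versions.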
The only large spike is around 2014, which is due to our work on the initial Virtio 1.0 revision, during which we imported and documented a large body of existing code. However, looking at the number of individuals contributing to the specification, we can see that this number has been growing steadily ever since Virtio 1.0 was released in 2016. More than that, looking at the number of organizations involved in Virtio specification development, we can see that this year is a record year: we have exceeded the previous record for the number of organizations working on the specification, set in 2014. What drives these new contributors to work on Virtio? It turns out that for many of them, the reason lies outside the cloud space, which is traditional for Virtio. Instead, there are use cases that involve automotive, Internet of Things, mobile and other workloads. You will see this as I discuss some of the changes that have been made to the specification in the last year. Of course, I cannot list all of them, but just to give you some examples, I would like to start with the Virtio sound device. This is a new audio device supporting audio input and output. It is very useful for automotive, and it is the largest single change made to the specification over the last year. The specification has been contributed by Anton Yakovlev, working for OpenSynergy. Good job, Anton. Thank you, OpenSynergy. The second largest change to the spec is the documentation of the new Virtio memory device. This one can be seen as a replacement for the balloon device: while the balloon device takes guest memory and passes it on to the host, effectively taking it away from the guest, the Virtio memory device takes the opposite approach, adding host memory to the guest. This reversal of roles turns out to fix multiple issues in the traditional balloon device.
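The role reversal between the two devices can be sketched as a toy model (assumed, heavily simplified semantics; the real devices operate on pages and memory blocks, not a single counter):

```python
# Toy model of the balloon vs. Virtio memory device role reversal.
# Heavily simplified for illustration: real devices work on pages
# and memory blocks, with guest/host negotiation around each step.

class Balloon:
    """Balloon: the guest gives memory back to the host by inflating."""
    def __init__(self, guest_mem_mb: int):
        self.guest_mem = guest_mem_mb

    def inflate(self, mb: int):
        # Pages are taken from the guest and handed to the host.
        self.guest_mem -= mb

class VirtioMem:
    """Virtio memory device: the host plugs memory into the guest."""
    def __init__(self, guest_mem_mb: int):
        self.guest_mem = guest_mem_mb

    def plug(self, mb: int):
        # Host memory is added to the guest.
        self.guest_mem += mb

    def unplug(self, mb: int):
        # Memory is removed again under host control.
        self.guest_mem -= mb

b = Balloon(4096)
b.inflate(1024)
assert b.guest_mem == 3072   # guest shrinks by giving memory away

m = VirtioMem(4096)
m.plug(1024)
assert m.guest_mem == 5120   # guest grows by receiving host memory
```

The point of the model is only the direction of flow: the balloon starts from the guest's memory and subtracts, while the Virtio memory device starts from the host's memory and adds.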
Lots of thanks to David Hildenbrand at Red Hat, who implemented this device and contributed the specification. GPIO is, of course, a general-purpose input/output device, which is widely used in embedded and Internet of Things configurations. Any of you who have worked on such configurations are, of course, familiar with it, which is the reason that Viresh Kumar from Linaro added support for the Virtio GPIO device to the specification. Next, the System Control and Management Interface. This is a management interface present on ARM systems. It allows managing power, system state, sensor access, and more. Thanks to Peter Hilber from OpenSynergy for adding this device to the specification. Free page reporting is not a new idea, in that it has been suggested many times in the past. The idea is that the guest can report areas of unused virtual machine memory to the host. By the guest and host cooperating in this way, they can manage the memory of the virtual machine more efficiently. This sounds kind of simple, but it took many years to implement and even more time to document. Lots of thanks to Alexander Duyck for being persistent and getting this into the Virtio specification while working for Intel. Virtio network devices are very popular. In the last year, support for UDP segmentation, which improves performance when transmitting packets from within the VM, as well as support for hash reporting, which improves performance when receiving packets in the VM, has been implemented by Yuri Benditovich at Red Hat. Thank you very much. Additionally, Vitaly Mireyno at Marvell added support for flexible driver notifications. This enabled support for the hardware Virtio offload devices developed by Marvell. Thank you, Vitaly. The I2C bus is, of course, very common in embedded and automotive systems, which is probably the reason that led Jie Deng from Intel to add support for Virtio I2C to the specification.
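The free page reporting cooperation described earlier can be sketched roughly as follows (an assumed, simplified model, not the actual spec interface): the guest coalesces its free pages into contiguous runs and reports them, and the host can then discard the backing memory for each run until the guest touches it again.

```python
# Simplified sketch of the guest side of free page reporting:
# coalesce free page numbers into (start, length) runs. The host can
# then drop the backing memory for each reported run (e.g. via
# madvise on Linux). This is illustrative, not the spec's interface.

PAGE_SIZE = 4096

def report_free_ranges(free_pages):
    """Coalesce a set of free page frame numbers into (start, length) runs."""
    runs, run = [], None
    for pfn in sorted(free_pages):
        if run and pfn == run[0] + run[1]:
            run = (run[0], run[1] + 1)      # extend the current run
        else:
            if run:
                runs.append(run)            # close off the previous run
            run = (pfn, 1)
    if run:
        runs.append(run)
    return runs

runs = report_free_ranges({1, 2, 3, 10, 11})
assert runs == [(1, 3), (10, 2)]
# The host could reclaim PAGE_SIZE * length bytes per run.
assert sum(length for _, length in runs) * PAGE_SIZE == 5 * 4096
```

Reporting runs rather than individual pages is what makes the cooperation cheap enough to be worthwhile: one message can cover a large contiguous region.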
Over the last year, the Virtio GPU gained the ability to export resources between devices, as well as the ability to pass blob resources. This has been developed by Gurchetan Singh and David Stevens while working on Chromium. The Virtio file system gained, over the last year, support for the notification queue, which allows reporting asynchronous events from the server to the client. This work has been done by Stefan Hajnoczi at Red Hat. Thank you. And last, support for lifetime metrics for devices with limited lifetime, which are common, for example, in mobile systems, has been added to the Virtio block device specification by Enrico Granata at Google. And there are more; I cannot list them all. I'm very sorry, but thank you to all who contributed to the specification, including those whose names I did list but just mispronounced. Sorry about that. Also, thanks to all the reviewers of the spec for their feedback, and of course to the developers who are actually implementing devices and drivers according to the spec. The work is driven by your experience. To those not directly working on Virtio: Virtio keeps extending to new fields and drawing new contributors, so join us. Virtio is easy to extend, and a lot is going on. You can work on performance, or you can work on new features. And with that, thank you very much. My name's Alex Bennée, and this is the QEMU status report. If you take 445 open source developers and you mix them together on a mailing list, this is what you get. But before we go into the details, I'd like to frame this by talking a little bit about me. So, I've been hacking on QEMU for around about 8 years, and that puts me in with about a quarter of the developers of the project that have between 5 and 10 years of experience. As you can see, we've got a fairly good mix. We've got a fair number of people that have only been contributing for less than a year, up until this hardcore 10% that have been working on this project for quite a long time now.
I'm very lucky that I'm a paid developer, and QEMU is my main gig. That puts me in common with the majority of the developers that work on QEMU. Only about a quarter of the contributors work in their spare time; the rest are doing this either full-time or as part of their main gig. Now, I think it's time to talk about the event. There's been one thing that's affected everyone in the world over the last couple of years, and that, of course, has been the global COVID pandemic. And I was interested to see if the upheaval in the rest of the world had any effect on how we develop QEMU. So, looking at the developer survey that I sent out a couple of months ago, I asked for people's impression, and the overwhelming majority of developers said that the pandemic had no or very little direct impact on them. There was also a small percentage of people that said that, given that they had more time to work on QEMU, it was an overall positive. Looking at the actual data itself, I plotted out the commits, although our raw commit data is fairly noisy, so I've had to average it out to make it a little bit smoother. And you can see, for the duration since the pandemic started at the tail end of 2019, it's had very little effect. In fact, you might even notice a slight upward tick in the total amount of activity in the project. However, there has definitely been one change in the way the developers have worked over the last year, and that's been the switch to working from home. We already had a fair number of people that did a degree of home-based working on the project, but that has very much become the majority over the last year as work-from-home mandates have rolled out across the world. So, now let's have a look at the releases and some of the key changes. We seem to be keeping well up with our regular three-releases-a-year cadence, and I don't see anything that indicates that that's going to change anytime soon.
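The commit-rate smoothing mentioned above is just a moving average over noisy per-period counts; a minimal sketch, with made-up numbers:

```python
# Smoothing noisy per-week commit counts with a trailing moving
# average, as described in the talk. The commit counts here are
# made up purely for illustration.

def moving_average(samples, window):
    """Average each point over up to `window` trailing samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

weekly_commits = [120, 80, 150, 90, 140, 100]  # hypothetical data
smoothed = moving_average(weekly_commits, window=3)

assert smoothed[0] == 120
assert smoothed[2] == (120 + 80 + 150) / 3
```

A trailing window like this keeps the trend visible while damping week-to-week jitter, at the cost of lagging slightly behind sudden changes.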
It seems to be working pretty well. So, I did a plot as a word cloud of all the various things that were mentioned in the subject lines of the commits. And you can see, as you'd expect, all the major architectures, so we've got PowerPC and MIPS being mentioned in commit messages, as well as subsystems such as the block subsystem, TCG, and stuff like that. It's also quite gratifying to see a lot of mention of testing. So that just gives you a broad overview of the sort of areas that are seeing changes in the code base. But let's look at some more specifics. One of the things that's improved is our support for encrypted guests. This is currently AMD's Secure Encrypted Virtualization technology, but I expect, as other architectures catch up, we'll be seeing needs for support of this sort of thing in the future. In the block subsystem, qcow2 gained subcluster allocation, and this basically enables more efficient use of qcow2 backing files. Another big change is a fully emulated NVMe controller. I believe there's another talk at KVM Forum where Klaus is going to talk about the history of that. Another section is the multi-process device emulation. This is separating the device emulation into a separate process, which has some interesting security implications. And finally, ACPI hotplug is now the default for PC-based machines. As with every other year, we have been involved in Google Summer of Code, and this year we had three contributors working on various parts of the code. So, Lara worked on improving our virtualization emulation for AMD processors, which have a slightly different virtualization architecture. Niteesh worked on a terminal user interface, a TUI, that uses QMP to talk to QEMU and control it. And Mahmoud did some cache modeling using TCG plugins. There was another bunch of students that worked on Rust VMM projects, which was done under the auspices of the QEMU project.
So, that included Bogdan's work on a mocking framework for virtio queues, Harshavardhan's work on integrating vhost-user vsock with Kata Containers, and then Galen did some work on a vhost-user SCSI backend. Of course, aside from virtualization, QEMU is quite popular as an emulator. And this year we said goodbye to a number of targets: lm32, Moxie, TILE-Gx and unicore32. As far as we could tell, not many people were using these architectures. Some of them had been deprecated from Linux, and it was getting increasingly hard to find binaries and toolchains to build tests. However, we did say hello to a new architecture, and that's the Hexagon DSP architecture from Qualcomm, which was added in the last year. Whilst we're on the subject of TCG, I thought I'd talk a bit about some lower-level TCG updates. We now have support for split writable and executable JIT buffers: a mirror view of the JIT buffer, so you can generate into the writable portion and execute out of the executable portion, and this is something that helps with running on macOS on Apple Silicon. There have been improvements to the core TCG code in the way it handles constants, again allowing us to generate better backend code. A relatively recent addition was an improvement in the way that we handle breakpoints. This was triggered by a regression we found with Windows, which for some reason generates breakpoints into itself as it's booting up. The new breakpoint handling is a lot more efficient and doesn't generate code that needs to be thrown away. And finally, the Tiny Code Interpreter, which has always been the subject of deprecation requests, got a big update. It was modernized to run all the latest TCG code that is generated by the new vector operations and is now properly included in our CI, so it shouldn't regress again. QEMU is also very useful as a development target, so there's a number of things I thought I'd mention here.
So we now have the ability to capture USB packets in PCAP format, so you can monitor all the traffic between a guest and a USB device, which gives you a good idea of what's going on between the two. The guest loader is a companion to the generic loader, and it's basically there to enable you to test hypervisors which expect to have a kernel and guest loaded into memory at the start and need some sort of pointer to it. We've continued to enhance the GDB stub, so we've got small features like adding auxv support for linux-user, and also fancy features like reverse debugging, which builds on our existing record-replay architecture to allow you to do things like reverse step and reverse debug. TCG plugins are now enabled by default anywhere you have TCG available. The runtime impact when no plugins are loaded is practically undetectable, and so it's useful to have the plugins available if you want to do further analysis on your code without having to recompile the whole of QEMU. And finally, there is the QEMU storage daemon, which is your one-stop-shop program for dealing with various block storage. On the internal side, there's been a bunch of cleanup, with things like the accelerators being abstracted and TCG-specific CPU ops being abstracted away, and this is all for the purpose of making the QEMU build more flexible, allowing you to build QEMU with only a particular set of accelerators or without TCG support added in. Another area of change is the common translator loop, embodied in TranslatorOps, which is now completely converted: all guest architectures now use the common translator loop, which makes the management of that code a lot simpler and also allows them to use the other features.
A while ago, when we were doing SVE, we did a big rewrite of the original Berkeley SoftFloat code, and the main aim of that was to get away from a bunch of magic constants and shifts to something that was easily followed, where you could see the common way that floating point was handled. Well, that transition has now been completed, with the 80-bit and 128-bit soft floats all being handled in the same core code. Finally, there's the option to build with or without default devices, another build option that allows you to specify which devices you want to include in your build. So say, for example, you're building a KVM-only binary: you may not want to include a whole bunch of devices that are only used for emulation. On the process side, there's a couple of changes. We've finally joined the 21st century and we now have a code of conduct. We've also continued with migrating more things to GitLab, which is now the canonical copy of our source code repository. The bugs have migrated across, and that includes a bunch of quality-of-life improvements; for example, bugs get automatically closed when they're referenced in commits. More and more of our testing has moved across to GitLab. We still use Travis and Cirrus, but we've done work to integrate the results from those into one place, so you don't have to look at multiple websites to see the state of a branch. As part of that, there are now also playbooks for adding our own runners. This is part of our drive to reduce the load on GitLab's shared runners, which are used for all open source projects, and it means we can add specific architectures and have our own dedicated machines. Finally, Paolo talked about Meson last year, and the Meson work continues to evolve. Probably the biggest change has been reducing the number of tests that are done in the configure shell script and moving them into native Meson tests. Finally, I'd like to talk about the documentation.
So last year, we migrated everything to Sphinx, and now we have a new unified manual. The manual includes all your system, user tools, interoperability specs, as well as developer information, in one nice searchable and browsable manual. It also seems to have had a positive effect: we've got about 10,000 new lines of documentation that have been added to the system. Right, let's quickly go through some of the developer awards and stats, because I know that's what people are interested in. For most commits, Richard Henderson took it this year, just slightly ahead of Philippe, and he also keeps that lead if you look at most lines changed. I thought I'd also look at who had the heaviest delete key. In my mind, a commit that removes more lines than it adds is also a good commit, because it's generally removing lots of cruft, and Markus has been leading the charge on that side. Finally, with reviews, there have been about 7,500 change sets, and I think we had about 7,700 reviews in total. So it's now fairly rare to find a commit that hasn't had at least one review before being merged, and Richard and Philippe were very close side by side in having the most reviews. And finally, I will just mention testers, because sometimes, although we have quite complete CI, you need to have manual testing: actually build it and do stuff on your own machine, and again, Philippe has done the most of that. Right, with that, I will say thank you. I've been Alex Bennée. You can find me on Mastodon under my handle of stsquad, which is also the same handle I use on IRC. So goodbye, and I hope you enjoy the rest of KVM Forum.