Right, there we go. I'm Alex Bennée, and this is my annual look at the last year of development in QEMU, starting with a few things from the developer survey.

I'm in the sort of second group of people that have been working on it for five to ten years; obviously next year I graduate to being a fully wizened engineer. About a third of our developers have over ten years' experience working on the code base, so we're a fairly mature set of developers, but at least 20% have been working on it for less than two years.

I asked this question last year just to gauge the impact of the pandemic. Before the pandemic, quite a lot of people were working in offices. Sorry, that's the next slide. Why are people developers? The people who are paid to work on QEMU still dominate: about a quarter of the people working on QEMU do it in their spare time, and everyone else is paid for it to one degree or another. Now, the working habits. It seems the impact of the pandemic is still holding on: most of us are still working at home, and those that aren't are mostly working in a hybrid environment.

Anyway, enough about the HR things. Let's look at the last year of code. In total, we've had about 7,500 commits processed over around 500 pull requests, with nearly 450 developers working on it. If we look at the rate of code change, it seems pretty stable. You can see a small bump over the last two years, but otherwise contributions to QEMU continue at a nice steady pace.

I wanted to have a look at the subsystems being worked on. To do this, I split up the top-level directories and looked at the commit rate in each of them; anything at the top level of the tree wasn't counted.
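To give a flavour of what that analysis looks like, here's a minimal sketch, not the actual script I used. It assumes a hypothetical `@@<hash>` commit marker in the log format, and counts each commit at most once per top-level directory it touches:

```python
from collections import Counter

def commits_per_subsystem(log_text):
    """Count commits touching each top-level directory.

    Expects text in the shape of `git log --name-only
    --pretty=format:@@%H` output: each commit starts with an
    @@<hash> marker followed by the paths it touched.  Files at
    the top of the tree (no '/' in the path) are ignored, and a
    commit is counted at most once per directory.
    """
    counts = Counter()
    dirs_this_commit = set()
    for line in log_text.splitlines():
        if line.startswith("@@"):        # start of a new commit
            counts.update(dirs_this_commit)
            dirs_this_commit = set()
        elif "/" in line:                # skip top-level files
            dirs_this_commit.add(line.split("/", 1)[0])
    counts.update(dirs_this_commit)      # flush the final commit
    return counts
```

In practice you would pipe the output of that `git log` invocation straight into the function and read off the `Counter` to see which subsystems dominate.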
Unsurprisingly, target-related code and hardware emulation make up the bulk of the code changes in QEMU. You can see the other major subsystems: the block subsystem has always been pretty active, as have the various user-mode developments. We can also see documentation and, pleasingly for me, testing is also quite high-activity.

I also wanted to have a look at what the most active targets were. To do this, I looked at the commit rate both in the target sub-directory and the hardware target-specific sub-directories. You can see the big architectures getting a lot of activity: RISC-V, PowerPC, and x86 as well, although most of that is KVM-related, and then everything else falls into the noise.

This year I also wanted to look at the rate of code churn, so Paolo pointed me at a tool called git-of-theseus. This is a survival plot of the code going into QEMU, and it shows how long code stays in QEMU before it gets replaced with something better. Roughly speaking, around 50% of the code that goes into the code base survives for about five years. This is an alternative plot that shows you more of an archaeological view of the code: you can see the very early code at the bottom slowly being squeezed out, while we progressively add more and, hopefully, better code on top.

Let's have a quick look at new features and developments. A couple of years ago we moved our build system to Meson, and now pretty much all of the Makefile activity is restricted to the tests/tcg directory; we've almost completely removed our reliance on our crufty Makefile system. There were about 600 lines of change to the Makefiles and around 2,000 lines of change to the Meson files, so Meson is definitely the way to do things now. We've also moved our cross-compiler detection from tests/tcg up into the main configure, and the main driver for doing that is so we can reuse our cross-compilers for building firmware.
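The survival statistic behind that plot is quite simple in principle. Here's a toy sketch of the idea, not what git-of-theseus actually does internally; it assumes we already know, for each line ever added, the year it went in and (if deleted) the year it went out, and it only counts lines old enough to have been observed for the full window:

```python
def survival_fraction(lines, age, now):
    """Fraction of lines that survived at least `age` years.

    `lines` is a list of (added_year, removed_year) tuples, with
    removed_year None for lines still in the tree as of `now`.
    Lines added too recently to have been observed for `age`
    years are excluded from the denominator, so recent code
    doesn't artificially inflate the survival rate.
    """
    eligible = [(a, r) for a, r in lines if now - a >= age]
    if not eligible:
        return None
    survived = sum(1 for a, r in eligible
                   if r is None or r - a >= age)
    return survived / len(eligible)
```

Sweeping `age` from 0 upwards over the whole history gives you the kind of survival curve the tool plots, where the curve crossing 0.5 at roughly five years matches the figure above.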
We're continuing our migration to using lcitool to generate the dockerfiles for building all our various targets. The main driver for that is to make flatter dockerfiles that cache better, so hopefully most developers don't have to keep rebuilding them over and over again; they can just rely on the cached versions from the cloud. We've also been removing some of our bundled subprojects. This year we dropped Capstone, because all the distributions finally caught up with a version of Capstone we could use, and I think libslirp is going to go the same way fairly soon.

Before we get on to the emulation features, I'll just mention hypervisors. In hypervisor support over the last year, HVF gained AArch64 support on these fancy new Mac M1s. This was actually merged just after KVM Forum last year, but it's within a year, so I thought I'd count it. Paolo has already mentioned that RISC-V gained support for KVM, there have been some changes for SGX on x86 (the Software Guard Extensions), and even Hyper-V gained support for synthetic debug.

In the block subsystem, the block layer in QEMU has always seen a lot of active development; you can do a lot of fancy things with it. This year the QEMU storage daemon, which exposes a bunch of those features, got support for VDUSE, so you can now export block devices both to hosts and guests.

A number of new devices were added; this isn't all of them, but a couple of the more relevant ones. Compute Express Link, which is like a very fancy version of PCI, got merged. It's interesting to note this was done while CXL is still relatively new hardware, when it's quite hard for people to play with systems that have the real thing, and this work was instrumental in helping CXL support get upstreamed into the Linux kernel. Another one to note is vfio-user support, which allows for out-of-tree emulation of PCI devices. And then there's PMBus, which is an extension of SMBus, which is an extension of I2C, that also came in this year.
On to emulation, and CPU emulation. We got a new architecture in the last year: LoongArch. We have support for both user and system emulation, and there's even a TCG backend if you actually have any hardware that can run it, so it's nice to see another architecture being added. But the other architectures haven't been standing still. All of the major architectures have seen development and additional instructions added, with quite a lot of vector work: RISC-V, PowerPC, S390x and Hexagon all added their vector extensions. I believe x86 will catch up soon, because there's work in flight to bring the slightly lagging x86 emulation up to date with AVX.

I only moved on to a separate slide because I was running out of space. We've traditionally added all our emulation to a thing called "-cpu max", which is basically the most Arm you can get. This is great if you want to try out new features, although it does expose bugs from time to time when new features get added to the kernel, so people have been asking for more concrete models. We added three this year: the Cortex-A76 and the Neoverse-N1, where the N1 is the modern server-class processor type, and the A64FX, which gives you an Armv8.2 baseline with SVE. But we have been adding more features, and if you want to play with the Scalable Matrix Extension, which is basically to AI what SVE is to vector processing, you still need to use the "-cpu max" model for that.

We've added a ton of new machine types, and an awful lot of BMCs. BMCs are Baseboard Management Controllers, the things used to control your server farms, and we've had quite a lot of them added. One of interest is the fby35, which is Facebook's BMC, I think, and this is one of our first board models with a dual SoC. Each SoC has its own address space, and they communicate with each other over an I2C interface, so this is hopefully one of our steps towards being able to emulate fully heterogeneous systems.
OpenRISC, if anyone remembers that, joins the m68k in having a virtual platform added, which allows it to run slightly more modern operating systems rather than relying on some of our older machine models. And then we've got things like the CanoKey, which is a USB security key device.

I thought I'd have a quick look at the survey again. One of the tensions within QEMU is that we were originally built as an emulation platform, and then KVM came along and there's been a lot of interest in virtualisation. So I was trying to gauge from the developers where people's interests lay: in virtualisation or emulation. It's actually still a fairly even split. I think there might be a slight increase in people interested in emulation, but I think we've still got a lot to do for both. The production-versus-development question maybe wasn't properly worded. What I was trying to get at is how many people are concerned with making sure QEMU runs fine in production environments up in the cloud, and how many are interested in making QEMU work for them as a development tool. I suspect there's a correlation between the people who use it for emulation and the people who use it for development.

Finally, QEMU would be nothing without its contributors. Every year we get involved in a number of internship programmes; the two big ones are Google Summer of Code and Outreachy, and this year there were five projects: one intern worked on NVMe emulation performance optimisation, Rishi Lu worked on snapshot/restore support for fuzzing QEMU, there was in-order support for virtio, and zoned device support for virtio-blk by Sam Li. We also partnered with the rust-vmm project, because they were relatively new to using GSoC, so they used our organisation for extending AArch64 support in their VMM reference project.

Finally, the bit I know everyone's been waiting for: the league tables, where we can call out all the contributions.
By changes, Richard Henderson has been knocking it out of the park this year. I think a lot of this has been driven by refactoring and improvements of the TCG code, which obviously touches a lot of the emulation systems, and we have Philippe, Peter, Paolo and Mark following behind. But it's not just about adding code; we also like people who delete code, because often the code is old and crufty and deserves to die, and here Thomas is definitely ahead of everyone else in having the heaviest delete key.

Obviously QEMU wouldn't get far without its maintainers, and we count that by looking at non-author sign-offs on patches as they go through the system. Although I think Peter gets a slight bonus from processing the majority of the pull requests (more about that later), we have a number of very dedicated maintainers making sure code gets upstream.

Finally, I'd like to give a shout-out to our reviewers. When people ask me how they can contribute to QEMU and are looking for projects to get involved in, I usually suggest the first thing you can do is review some code, because we always need people to review code. You don't need any special skills to do it; you just need to be able to apply patches and say what you think. This year, Richard Henderson has been saying quite a lot of what he thinks, followed by Philippe, Peter, Michael and Alistair.

Finally, just to call out the employers that pay our wages. By lines of code, we have Red Hat, followed by Linaro (my company), IBM, "none" (which covers everyone without a company affiliation that we know of), and then Qualcomm at the bottom. I will call out the Eldorado folks, because if you do this by change sets they take fifth place for their work on PowerPC. So, with that, I will say thank you and hand over to the next person.