Good morning or good afternoon. My name is Paolo Bonzini, I'm a distinguished engineer at Red Hat, and I would like to present the QEMU status report for 2020.

Let's start by looking at last year's highlights. We had deprecated Python 2 support, introduced Kconfig, developed faster boot, and started using Sphinx for documentation. This slide comes from last year's status report, and I would like to report further progress, especially on the first and the last bullets. The big change with respect to Python support is that we now only support Python 3, and we follow the Python lifecycle, so we no longer support Python releases that have been declared end-of-life by the CPython developers. We also completed the switch to Sphinx, and I will talk more about the benefits that this brought later.

Among the other highlights of 2020, I want to point out new targets and boards, because these had existed for a long time as forks of QEMU and were now merged upstream. Another new feature is virtiofsd, the virtio-fs daemon, which was already presented last year but is now merged. And finally, the improved CI and the Meson build system are both very important for QEMU developers.

With respect to CI, GitLab is now the main CI system that we rely upon. It also takes care of building containers, for developers to reproduce CI issues on their machines and for other CI systems such as Shippable. However, Shippable is being phased out in favor of GitLab itself. We still use Travis to test a wide variety of build configurations and to cover native builds on non-x86 architectures. macOS builds have moved to Cirrus CI, and Cirrus CI now covers Windows builds as well; we also use it for FreeBSD, as has been the case for a long time. Another new addition is the OSS-Fuzz project, which relies on the fuzzing support that was merged at the beginning of this year. And finally, we are now running Coverity daily rather than weekly as before.
For the future, we plan to limit further use of Travis, and we would also like to add non-x86 runners for GitLab that are specific to QEMU. This would let us integrate patch testing with GitLab CI and make sure that every contributor will be able to use those runners. We also have some configurations that are not yet covered by CI, and they are only tested by Peter Maydell before applying pull requests. This set should shrink further and further, until ultimately the CI can be used as the gate for maintainers' pull requests.

Now let's talk a bit about technical debt: how QEMU suffered from it and what we did about it in 2020. One common aspect of technical debt is that it often appears in areas that grow by accretion, without a solid design foundation that supports that growth; typically there is also limited documentation, and few people know their internal details. If those areas are then modified by many people, the changes will not be reviewed accurately, despite the best intentions of the developers. That's how technical debt emerges. Often we also speak of technical debt for areas where the tools we use are obsolete and have limited interoperability with the rest of the world. For example, this was the case for documentation, and documentation, together with QOM and the build system, was one area where QEMU suffered from technical debt.

QEMU was using texinfo as the source format for documentation. Texinfo is a perfectly fine format, but it's hard to extend, because it's even hard to just find a good parser for texinfo besides the makeinfo tool. Therefore, it was hard to integrate the documentation build with any tool other than the shell and make. For example, we have had documentation comments in the code for almost 10 years now, but they were basically unused, because the developer documentation was just a bunch of files; it wasn't properly bundled into a manual. Also, the only time we built and uploaded manuals was at release time.
By using Sphinx, we were able to extend the process with Python code, basically creating entire parts of the documentation programmatically. We reused the kernel-doc script from Linux to include documentation from the source code in the developer manual, and we used Pandoc to convert the existing texinfo sources to reStructuredText. As a result, we also now have continuous deployment of the manual, and you can find the latest QEMU manual at any given time at qemu.readthedocs.io.

The next area that I'd like to touch on is QOM. The main problem with QOM probably was that it wasn't even clear to most people why QOM existed. When QOM was introduced, it was presented as a consistent object model intended to unify the configuration of devices and backends. But this doesn't really answer the question of why QOM looks the way it does to the programmer; it doesn't explain the principles of QOM to developers. We have made some progress in that area. First of all, the QOM documentation is more accessible now that we have a proper developer manual. But also, through mailing list discussions, we arrived at a definition of QOM's design that looks like this: QOM is about objects and their properties, and it lets objects expose properties through multiple channels. These channels include QMP, the command line, and the human monitor.

There is a lot of work left to do on QOM, for example with respect to introspection. For now, what we did was improve the documentation, reduce the boilerplate that is needed to implement QOM classes, and also make the qdev APIs more similar to the rest of QOM. qdev was the pre-existing object model that was used for devices, and while it is now based on QOM, a lot of its APIs had retained the original flavor. By making these APIs more similar to the rest of QOM, we hope to make it easier for new developers to learn qdev and QOM. And finally, the Meson build system.
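To give a taste of what "extending the process with Python code" means, here is a toy sketch of the kind of thing kernel-doc does: pulling `/** ... */` documentation comments out of C source and emitting them as reStructuredText that Sphinx can then build. This is a deliberately simplified illustration, not QEMU's actual tooling; the function names and the sample C snippet are made up.

```python
import re

# Matches "/**" doc comments and captures their body up to "*/".
DOC_COMMENT = re.compile(r"/\*\*\n(.*?)\*/", re.DOTALL)

def extract_doc_comments(c_source: str) -> list[str]:
    """Return the bodies of all /** ... */ comments, with the
    leading ' * ' decoration stripped from each line."""
    comments = []
    for match in DOC_COMMENT.finditer(c_source):
        lines = match.group(1).splitlines()
        cleaned = [re.sub(r"^\s*\*\s?", "", line) for line in lines]
        comments.append("\n".join(cleaned).strip())
    return comments

def to_rst(comments: list[str]) -> str:
    """Join the extracted comments into one reStructuredText document."""
    return "\n\n".join(comments)

if __name__ == "__main__":
    sample = """
/**
 * qemu_demo_fn: frob the widget
 *
 * This text would end up in the developer manual.
 */
int qemu_demo_fn(void);
"""
    print(to_rst(extract_doc_comments(sample)))
```

Once the build runs through Sphinx, a script like this can become a Sphinx extension, and its output lands directly in the manual instead of sitting unused in the source tree.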
I will stand on a soapbox for a little bit and talk about it, because it's a very large change. I don't know who said this, but I picture two friends at a bar, which is a rare occurrence these days, of course. One of them is a little bit tipsy and says to the other: you know, the problem with programmers is that when they have a problem, they start to program. And this is true, and that's how you end up with this kind of code in your build system. And also this code. Now, I must say that this beauty was also very well documented; it probably had more lines of comments than lines of code, but that didn't make it any easier to debug. So, even though we cannot guarantee that the build system will be simple, maybe we should make sure that people need to debug the build system as little as possible.

What does it mean to keep the build system logic as simple as possible? The choice for QEMU's new build system was that each file should only be read once. So, you first gather the data, you process it, and then move on to the next phase, which operates in the same way. In the old build system, reading the same file multiple times, for example all the makefile fragments, made it slow, but also caused namespace collisions and ordering issues that were hard to debug.

The problem with doing this kind of surgery on a project as large as QEMU is that it's not really possible to convert everything at once. For QEMU 5.2, we established the foundation at the beginning of the development phase, and then converted most of the low-hanging fruit. Everything else can be done in due time, and in the process we made sure to work with Meson upstream: whenever there is something that can be improved in Meson, we have noted it already, and we have explained any workarounds that were needed. In fact, going from makefiles to Meson was a very large change, not only in terms of the sheer amount of code changes, but also in terms of paradigm and tradeoffs.
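The "read each file once" idea can be made concrete with a toy model in Python (illustrative only, nothing like Meson's real implementation): a first phase visits every build description exactly once and accumulates data, and only then does a second phase turn the accumulated data into build rules. Because no file is ever re-read, there is no opportunity for ordering surprises or namespace clashes.

```python
# Phase 1: visit each directory's build description exactly once,
# accumulating sources per target. The input mapping stands in for
# per-directory build files.
def gather(build_files: dict[str, list[str]]) -> dict[str, list[str]]:
    targets: dict[str, list[str]] = {}
    for directory, sources in build_files.items():
        # One target per directory here; a real build system would
        # evaluate much richer declarations in this pass.
        targets.setdefault(directory, []).extend(sources)
    return targets

# Phase 2: generate one ninja-style link rule per target, working only
# from the data gathered in phase 1 -- no file is read again.
def emit_rules(targets: dict[str, list[str]]) -> list[str]:
    return [
        f"build {name}: link " + " ".join(f"{src}.o" for src in sources)
        for name, sources in sorted(targets.items())
    ]
```

Contrast this with recursive make, where the same fragment may be included many times in different variable contexts, and the result depends on inclusion order.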
Shell and make, for example, are very flexible, but they are rather low-level, and they only support strings as the data type. On the other hand, Meson has high-level constructs and data types, but it operates at the level of a command, as an array of strings, rather than at the level of the shell pipeline. Another difference is that make is a declarative system, and the macros we had on top were not really declarative, but they tried to fake being declarative. The Meson DSL instead is more of the imperative kind, though it lacks aliasing and mostly has immutable objects, and that mitigates the difference. It also makes it harder to misuse Meson.

The number of lines of code is not really different, because one of the scripts we used for the transition is actually pretty large. That script, called ninjatool, will hopefully disappear before the next release, and once you discount it, the new build system is already about 1000 lines, or 10%, smaller. Most of the reduction comes from the configure script, but the makefiles themselves also became much smaller, and especially all of the complicated logic from those slides is gone. Recursive make is also gone: the build is entirely non-recursive, and the remaining makefile logic is manageable, since it's only about 400 lines of code.

Finally, here is the fun part. I decided not to include the traditional count of commits and reviews, but rather to do a little who's-who game, starting with our interns for Google Summer of Code. We also participated in Outreachy, but unfortunately we didn't get an intern for that program. In Summer of Code, however, we got three, and not only did all three students pass, their code has already been merged. César contributed emulation for U2F security keys, Filip worked on linux-user, and Ahmed established a framework for continuous benchmarking of TCG performance.

Moving on, here are a few shout-outs to some members of the community.
In many cases, their work has been mentioned already earlier in the presentation. For example, Thomas and Alex did a lot of work on CI; the QOM and qdev refactoring was completed thanks to Markus Armbruster, Daniel Berrangé and Eduardo Habkost; and Eduardo also worked on documentation together with Peter Maydell. Also, Richard Henderson kept on doing great work on TCG and on a lot of other parts of QEMU. And I would like to thank Laurent and Philippe for keeping alive the hobbyist origins of QEMU, so to speak, and of course Peter for merging everything and ensuring that QEMU development runs smoothly.

So, what's next for 2021? It's quite likely that we will use GitLab more. Here, I listed five features that QEMU could use from GitLab. Probably we won't use all of them, but still, here are some ideas. Generating and deploying QEMU's static site, qemu.org, could be done through GitLab Pages, for example. And perhaps even the primary repository for QEMU could be hosted on GitLab, instead of relying on the QEMU project's own servers. Release tarballs could also be prepared during GitLab CI, which we don't currently do, and this would make the process of cutting a release more automatic. GitLab also provides issue tracking and a wiki; currently we use Launchpad and MediaWiki respectively, but migration here is more complex, because of course we would have to move existing data.

A hot topic is going to be rethinking the QEMU API. We had a huge mailing list thread between last December and last February, and one idea that surfaced was to simplify the relation between QEMU and management tools by making the configuration of the VM more homogeneous. For example, right now we have substantial differences between how to configure the VM initially and how to later hot-plug additional hardware and backends. This means we would have to look at all of QEMU's 131 command line options and decide for which of them we would need to provide an alternative means of doing the same configuration.
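To illustrate the gap just mentioned: a device given on the command line as `-device e1000,netdev=net0` is hot-plugged at runtime through a QMP `device_add` command instead. Here is a minimal Python sketch of building and sending such commands over a QMP Unix socket, using only the standard library; the socket path and device details are illustrative, and real tooling would use a proper QMP client library rather than hand-rolled JSON.

```python
import json
import socket

def qmp_command(name, arguments=None):
    """Serialize a QMP command as a single JSON line."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd) + "\n"

def qmp_session(path):
    """Sketch of a QMP session: read the greeting, negotiate
    capabilities, then hot-plug a device. The path is whatever was
    passed to QEMU via -qmp unix:...,server."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        reader = sock.makefile("r")
        print(json.loads(reader.readline()))   # server greeting
        sock.sendall(qmp_command("qmp_capabilities").encode())
        print(json.loads(reader.readline()))   # capabilities reply
        # QMP equivalent of "-device e1000,netdev=net0" on the CLI:
        sock.sendall(qmp_command(
            "device_add",
            {"driver": "e1000", "netdev": "net0", "id": "nic1"},
        ).encode())
        print(json.loads(reader.readline()))   # result of the hot-plug

if __name__ == "__main__":
    qmp_session("/tmp/qmp.sock")  # hypothetical socket path
```

The two descriptions of the same device differ in syntax, transport, and error reporting, which is exactly the kind of inconsistency a more homogeneous configuration model would remove.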
For example, through QAPI, possibly QMP, since management tools already have to deal with it. Another thing that would help management tools would be to provide official bindings to QAPI. These should cover multiple languages, of which the most important are probably Go and Python, in order to let people focus on working with QEMU and not reinvent the QAPI wheel.

Finally, for security, we would like to be able to isolate security-sensitive parts of QEMU into multiple processes. For now, we have the vhost-user servers supported in the QEMU storage daemon. But an extension of this idea is to use different languages, not just different processes, including of course Rust. For this reason, Marc-André Lureau has looked at QAPI bindings for Rust: not so much for consuming QAPI, as was the case in the previous slide, but for exposing Rust language constructs through QAPI, roughly the same as we do in C already.

So that's it for this year's QEMU status report. Thanks, and enjoy the rest of KVM Forum.