All right, thank you very much. I assume everybody can hear me; if not, just shout. This talk will be a lot more enjoyable if folks ask questions during it instead of waiting until the very end, so I encourage you to throw things or scream or call me an idiot. It's all fine and well. I'll be covering a lot of topics in this talk, so let's go ahead and get started.

For this talk, I'm going to talk a little bit about what QEMU is, what the history of the project is, and then why it's important for cloud. This is CloudOpen, after all. I suspect that most people are familiar with QEMU, but I think the nature of the project is such that it's played a quiet role, and hopefully that's something that'll be changing.

So, QEMU. What we bill ourselves as is a fast, full-system simulator. That means you can use it to create virtual machines either using virtualization software like KVM or Xen, or using emulation. Using QEMU, you can run an ARM system on your x86 laptop. You can run a very performant Windows guest on top of Linux. You can do all sorts of variations. But beyond just that, and I'm pretty biased here, I'll make the claim that QEMU is open source hardware emulation: when people are talking about anything related to open source hardware emulation, it at least has its roots in QEMU as a project. Clearly QEMU is a big part of the KVM project, and it's a big part of the Xen project, but what's lesser known is that the Android SDK is based on a forked version of QEMU, and if there's anybody from Google in the audience, I'd love to see patches upstream for that. Likewise, VirtualBox is a long-time fork of QEMU. Again, I'd love to see patches there, although we've been pretty mean to those guys, so that's probably not going to happen. And just about every embedded SDK, whether it's from Freescale or ARM or any other embedded chipset provider, is usually QEMU-based, so that you can run your ARM code on top of your x86 system.

In terms of how the project started: it was started by probably one of the more famous hackers of our time, Fabrice Bellard, and he truly is a genius in every sense of the word, in the positive and negative senses, really. He also wrote FFmpeg, JSLinux, and a ton of other cool things. If you ever want to waste an afternoon, look at his webpage; he's got little compilers. He won the International Obfuscated C Code Contest a couple of times, which doesn't bode well for QEMU, but nonetheless, he's a really, really bright guy.

The project started really just as a portable just-in-time translator for translating between processor architectures. I won't go into the details, but basically it was a really clever hack based on disassembling GCC-generated object code and then using that to bootstrap a just-in-time translator. The thing that was really interesting about QEMU when it was first introduced was how portable it was. Typically in this space, you have handwritten translators that translate, say, from ARM to x86. They're very difficult to write because they need to be written by somebody who understands both low-level x86 and low-level ARM, and there just aren't a lot of those people out there. So even though QEMU wasn't the fastest translator out there, because it could handle all of these combinations with reasonable performance, it became pretty popular pretty quickly. Now, once you can do ISA translation between architectures, it's fairly natural to start adding hardware emulation.
And indeed, that's exactly what happened, and the initial focus was PC hardware. We grew a full PC device model very quickly, and starting from there, we grew device models for many different architectures. Today, I think we support something like 12 or 14 target architectures and well over 400 different devices, so it's really extensive at this point.

I think the thing that's most interesting about QEMU, the thing that's kept me involved in the project this long, is that it's truly a grassroots community. There was no large corporation that decided to start the project. There's no community evangelist. There's no marketing department. It really is just a collection of people who care about the project, who find it useful, and who work on it. Not that there's anything wrong with community evangelists; I don't mean to offend anybody in the room. But it's very much an organic community, and it's relatively quiet, in the sense that we don't brag much about ourselves.

So let's talk a little bit about the evolution. I mentioned this already, but I want to expand on it, mainly to explain the current state of QEMU. It started out as what we call linux-user, which really means a mechanism for running x86 Linux binaries on top of SPARC, which is where it all started. If you're familiar with the Transitive products, Rosetta from Transitive, it's the same basic idea. That's where we started, and we tacked on system emulation as sort of an afterthought; the whole notion of running a virtual machine was a second thought. The original clever trick that I mentioned is something called dyngen, and it turned out that that trick was too clever for its own good: for the longest time, we didn't support compiling with anything newer than GCC 3, which was a giant, giant, giant pain. So that got replaced with something a little less clever called TCG.

Virtualization support, again, was tacked on after the fact. When Fabrice was still involved in the project, and he hasn't been involved in probably the past five years, he sort of drew a line in the sand and said, "I'll never merge any virtualization support." And then he left, I took over, and I merged it right away. But that again was sort of an afterthought: QEMU wasn't designed to do virtualization, it was designed to do emulation.

And then a management API. Initially, the way you interacted with QEMU was through a text-based interface, what we call the human monitor. It's incredibly inconsistent, and it was never meant to be parsed by machines. Unfortunately, as is always the case with software, it doesn't matter what your intentions are; people are going to do things you don't want them to do anyway. We found that people were using this interface and writing programs against it; libvirt was, for instance. So we ended up introducing a new management API based on JSON, and it was a very difficult task to migrate people over to that.
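To give a flavor of what writing against that JSON API (it's called QMP) looks like, here's a minimal client sketch in C. It's an illustration, not QEMU code: the socket path is a hypothetical example, and a real client would use a proper JSON parser, but qmp_capabilities and query-status are real commands.

```c
/* Minimal QMP client sketch. Point it at a QEMU started with something
 * like: qemu-system-x86_64 -qmp unix:/tmp/qmp.sock,server,wait
 * The socket path here is hypothetical, chosen for illustration. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static void send_cmd(int fd, const char *json)
{
    write(fd, json, strlen(json));
}

static void read_reply(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);

    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/tmp/qmp.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    read_reply(fd);                                  /* greeting banner */
    send_cmd(fd, "{\"execute\": \"qmp_capabilities\"}");
    read_reply(fd);                                  /* {"return": {}} */
    send_cmd(fd, "{\"execute\": \"query-status\"}"); /* ask for VM run state */
    read_reply(fd);
    close(fd);
    return 0;
}
```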
And then finally, and this is one of the things I have mixed feelings about: we never started out trying to essentially replace the Linux storage layer, but over time, that's really what's happened. We implement file systems in the form of image formats, we do snapshotting, and RAID is probably coming soon. None of this was the intention to begin with. This is where users have pushed us. This is where contributors have pushed us.

Now, all that said, I think one of the things we've done that's helped the project be successful is that we've taken a very hard stance on compatibility. This has become a blessing and a curse: we started with Linux user emulation, we've tried to maintain compatibility all the way since then, and so there's a lot of compatibility work that we do. It's probably one of the biggest challenges we have in development, because we have to keep everything working the way it did to start with.

In terms of the growth of the community: the project's been around for ten-plus years, and it's been an absolute roller coaster ride. I'm going to show a very pleasant slide next that makes everything look wonderful, but the reality is we had terrible flame wars, the equivalent of the "Linus doesn't scale" discussions upstream. The thing that's gotten us by, though, the thing that's kept us going as a community, is that we've been very inclusive. No matter how obscure your architecture is (I'll pick on OpenRISC as an example; I'm sure most people haven't even heard of it, but it's a completely open architecture, down to the hardware level), as long as the patches are reasonable and don't hurt the rest of the tree, we'll take them. We've always had that kind of inclusive model, and that creates a very rich community. Of course, it also creates a very complex command line and a lot of less-maintained areas of QEMU, but that's the trade-off in creating a good community.

So, this is the happy slide. It's probably difficult to read, but I'm very proud of it. This is a comparison of Linux and QEMU development over time, measured in commits per day. Linux is about an order of magnitude bigger in terms of commits, lines of code, et cetera; QEMU is roughly one-tenth the size of Linux. What you can see in this graph is very aggressive exponential growth. And that's really the point of this slide: not only are we a rapidly growing project, but we're growing at a very, very fast rate even compared to another fast-growing project like Linux. And of course, this means we'll be better than Linux in a year, because that's how exponential growth works, right? But nonetheless, I think we're at a point where we can declare some level of success in terms of community.

Okay, forks and merges. I already mentioned this before: back in those days, things were pretty bad, and that's when we saw most of our forks happen. Some of the forks were never that extreme; the Xen and the KVM forks are good examples, and both the KVM and the Xen upstream communities have done a fantastic job working upstream. I'm happy to say that in the past two years, both projects completely merged upstream and those forks no longer exist, which was a huge effort from a lot of people. There are a few cases where what I'll call major forks almost happened, where large parts of the development community almost decided to go in a different direction. The reason I mention all of this is that I think this is part of a successful open source project. If people aren't complaining, that means they don't care. And if you don't have that strife upstream, if things aren't always on the cusp of falling apart, then you're not really achieving as much as you can. So it's okay. If your communities are having these types of discussions and arguments, I'm here to say that it's okay; it's part of the growing process.
One of the things we learned from these forks, though, and I think Linus says similar things: once a distro has shipped something for three years, it doesn't really matter if you thought the code was ugly. It's obviously useful code that people care about. So some of what it took to merge those forks back was realizing that maybe we needed to compromise a bit, not on quality, but on our standards, on what we'd be willing to tolerate from a design perspective.

In terms of how our overall development process works: we have a hierarchical maintainership model, very similar to Linux. I don't understand how the Apache model works; I'm a big fan of dictators, which might make me a terrible person. We have something like 40 sub-maintainers, and depending on how you count, around 250 contributors annually; it's probably closer to 300 or 400 if you count over a longer period of time.

One of the big things that has allowed us to grow, and I think one of the biggest contributors to our growth, is our release process. We've stabilized on a quarterly release model, with a two-month development window and then a one-month release window. Having predictable releases encourages contributors, because as a contributor, what you care about is your software being used by other people. If you don't know when your software is going to appear in the hands of a distro or somebody else, it discourages you from contributing. So it was a huge, huge change for us to switch to a predictable release model, and I think that's been one of the reasons we've had such great growth in recent times. And then the relatively new thing is that we're doing major releases every two years; in fact, the 2.0 release is coming up next February. There's no significance to major releases, they're just numbers, but it's helpful to bump the number and get a little press every once in a while. So that's the plan there. And this talk probably got accepted because 2.0 sounded exciting, when really it's just another release that isn't that big of a deal. So it works. These are the important things in life. We'll never get that high; we maxed out at 0.7, so sorry, we won't get there.

Okay. We've talked about the history of the project and how the community works; let's talk about features. The first point I want to make is that when you're talking about open source clouds, and I think Jim made this point in his State of the Union today, the vast majority of clouds are running open virtualization today. Now, in the case of Xen, there are still people using Xen PV for whatever silly reason. But when it comes to Xen HVM and KVM, QEMU is always the front line to the guest: the thing you're interacting with as the guest is QEMU, whether your hypervisor is Xen or KVM. I like to describe it like the syscall interface in Linux. Whether you're running GNOME or KDE, your application is fundamentally talking to the Linux kernel via system calls, and the same is true of QEMU, regardless of which hypervisor you're using. I think one of the things that has made Linux successful is that the kernel unites all distributions: even though there are differences between SUSE and Fedora and Debian and Gentoo, at the end of the day, 90% is consistent, because the system call interface is consistent and libc is consistent.
And the same is really true with open virtualization, because the hardware model that QEMU presents is consistent. The difference between what a KVM guest sees and what a pure QEMU guest sees is really not that great. There are some different drivers and some minor details, but for the most part, it's very consistent, and QEMU has that unifying effect.

So let's talk about specific features. I'll be mixing current features with future features, and I'll try to differentiate between the two; I apologize in advance if I make it seem like we have things that haven't happened yet, but I'll try to watch out for that.

Our paravirtual I/O framework is called virtio. It's based on lockless ring queues, which more or less all paravirtual I/O is based on, so there's nothing really innovative about that per se. What is different about virtio is that it was designed from the beginning to be hypervisor-neutral. Now, in reality, that hasn't materialized into anything useful yet; the only hypervisor that really supports virtio is QEMU. However, it made for very, very good code. The virtio code in the kernel is probably some of the best kernel code there is, and it's incredibly well documented. It was started by Rusty Russell, so I won't take credit for this part of it at least; Rusty did a really fantastic job designing it, and it shows.

One thing that's unique about virtio compared to other kinds of paravirtual I/O is that we tried very, very hard to design at least the PCI transport to look like a regular hardware device. Contrast that with how either the Xen or the Hyper-V paravirtual I/O works. It's natural to take the position that, since we're doing virtualization, why carry over all this legacy hardware stuff? Let's build everything from scratch and do a better job than hardware did. The problem with that mentality is that you're ignoring everything that was learned by the folks who made that hardware, you're throwing it all away, and you're likely to invent something that isn't as good. We found a number of interesting things by designing our virtual hardware to look like real hardware; for instance, we got PCI hotplug for free, among other things. Via virtio, we support all the typical devices you'd expect: networking, serial, a hardware random number generator (which I'll talk about in a little bit), a balloon driver.
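To make the lockless ring queue idea concrete, here's a heavily simplified sketch in C. It illustrates the concept only: the real virtio split ring uses separate descriptor, available, and used rings, and all the names here are made up.

```c
/* Sketch of a virtio-style lockless ring: one side posts buffer
 * descriptors and bumps an "avail" index; the other side consumes them
 * and bumps a "used" index. No locks, just indices and memory barriers.
 * Illustration only; not the actual layout from the virtio spec. */
#include <stdint.h>

#define RING_SIZE 256                   /* power of two */

struct desc {
    uint64_t addr;                      /* guest-physical buffer address */
    uint32_t len;
};

struct ring {
    struct desc desc[RING_SIZE];
    volatile uint16_t avail_idx;        /* written by the driver (guest) */
    volatile uint16_t used_idx;         /* written by the device (QEMU) */
};

/* Driver side: publish a buffer for the device to process. */
static int ring_add(struct ring *r, uint64_t addr, uint32_t len)
{
    uint16_t idx = r->avail_idx;

    if ((uint16_t)(idx - r->used_idx) == RING_SIZE) {
        return -1;                      /* ring full */
    }
    r->desc[idx % RING_SIZE] = (struct desc){ .addr = addr, .len = len };
    __sync_synchronize();               /* descriptor visible before index bump */
    r->avail_idx = idx + 1;
    return 0;
}

/* Device side: consume the next available buffer, if any. The caller
 * must finish with the descriptor before the slot is reused; real
 * virtio handles completion through a separate used ring. */
static struct desc *ring_get(struct ring *r)
{
    if (r->used_idx == r->avail_idx) {
        return NULL;                    /* ring empty */
    }
    __sync_synchronize();               /* index read before descriptor read */
    struct desc *d = &r->desc[r->used_idx % RING_SIZE];
    r->used_idx++;
    return d;
}
```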
The other interesting thing is that virtio is actually undergoing a standardization process today, and the expectation is that virtio will be the first standardized paravirtual I/O framework out there. I'm not sure what the schedule is, but we're working on the 1.0 version of the spec right now via the OASIS group. I'm a little concerned about this, to be honest. Go ahead. Yes, yes. So the question was: between now and when the 1.0 standard is finalized, is there a forum for people to comment on virtio? There is a mailing list that's part of OASIS, called virtio-comment. It's open to everybody, even people who aren't members of OASIS. So yes, you can go in there, post comments, and make suggestions. And joining OASIS isn't that hard either, so if you want to participate in the process, that's a good thing to do too. One of the things I was going to say is... go ahead. Right. So the question was: when you hear about virtio, you think about PCI, but the ARM community has made something called virtio-mmio, so what's the likelihood of virtio-mmio surviving, of there being convergence?

I had an argument recently with some of the ARM folks about this, because I think virtio-mmio is a terrible idea. What the QEMU ARM maintainer claimed was that the reason they had to do virtio-mmio is that the kernel people couldn't write a proper PCI driver for ARM. So yeah, that's an open area. There are lots of warts with virtio-mmio, and I don't know if it should exist in the long term. But there's also still a lot of active work on PCI for ARM, because I don't think there's a lot of ARM hardware with PCI. So there are some issues there. This is a good example of why deviating from real hardware is a bad thing. Some of the problems they're facing with virtio-mmio: how do you do hotplug? How do you add many devices? How do you do discovery? All of those problems are solved in PCI. But there are logistical problems.

The other thing I wanted to mention, just in the interest of time, is that we're still working on emulated I/O. Even though there's a lot of focus on virtio, we would always prefer to emulate a device well than to create a paravirtual device. If it's at all possible to emulate existing hardware and get the features and performance we need, that's the way we'll do it. And there's one very good reason for that, the 800-pound gorilla: Windows. I don't know if anybody here has ever written a Windows device driver, but it's an awful exercise that you never want to do unless you have to. So if some poor company has already done that work, we want to use it rather than do it ourselves. A good example: we recently added support for the VMware paravirtual network and disk devices. That's an area that's likely to get more focus in the future, since there are some active contributors in that space.
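One concrete payoff of looking like ordinary hardware: a Linux guest finds its virtio devices through completely standard PCI enumeration, because virtio-pci devices carry PCI vendor ID 0x1af4. Here's a small sketch that lists them from inside a guest by walking sysfs; the paths are the usual Linux ones, and error handling is minimal.

```c
/* List virtio PCI devices from inside a Linux guest by scanning sysfs.
 * Virtio devices use PCI vendor ID 0x1af4; everything else about them
 * (discovery, hotplug) is ordinary PCI behavior. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    struct dirent *de;

    if (!dir) {
        perror("opendir");
        return 1;
    }
    while ((de = readdir(dir)) != NULL) {
        char path[512], buf[16];
        FILE *f;

        if (de->d_name[0] == '.') {
            continue;
        }
        snprintf(path, sizeof(path), "%s/%s/vendor", base, de->d_name);
        f = fopen(path, "r");
        if (f) {
            if (fgets(buf, sizeof(buf), f) &&
                strtoul(buf, NULL, 16) == 0x1af4) {
                printf("virtio device at %s\n", de->d_name);
            }
            fclose(f);
        }
    }
    closedir(dir);
    return 0;
}
```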
A closely related topic, and one that's actually really exciting right now, is graphics. Today we support VNC, and we support a protocol called Spice. Spice was introduced by Qumranet, the company that started KVM. You can think of Spice like RDP; it's a very similar protocol. Hopefully Avi's not in the room and won't scream at me for saying that, but that's the case. There are JavaScript clients available for both VNC and Spice, and I know they're integrated into the OpenStack dashboard. One of the cool features we have, too, is native WebSockets support for VNC: noVNC, the JavaScript VNC client, uses WebSockets as its protocol, and we now have native WebSockets support in QEMU, which was a lot of fun. But the really exciting thing on the horizon, still in a research phase right now but with a lot of promise, is something called Virgil, a project by Dave Airlie. This is 3D graphics based on virtio. I think he posted a demo recently where he was running OpenArena and getting something absurd, like 240 frames per second, in a guest. There's still some work to do; I'm not terribly happy with the state of it now, because it doesn't negotiate capabilities down very well. But I think this is going to become the way we do graphics, for desktop use cases at least. So that's very, very exciting to me. For those who have gone to previous KVM Forums or other virtualization events, there's always one talk about how 3D graphics is just around the corner, going to happen in the next six months. But I really do believe in this one, so we'll see.

Storage. Storage is an interesting topic, and there will be multiple presentations on it at KVM Forum; I just want to summarize here the state of storage and where things are going in QEMU. So, much to my chagrin, we've really converged around qcow2 as the de facto standard format for QEMU. Yes? Yeah, so the question was, how is QED doing, and the answer is: not so well. This is where backwards compatibility versus features comes into play. QED was another image format that we proposed a few years ago. It solved a lot of the problems in qcow2, and I think in an elegant way, because I wrote some of the code. But there was a large group of people who were very tied to qcow2 and made a very violent argument against it in the back room over there, so let's not open up that discussion here. Anyway, a lot of the features introduced in QED have since been added as extensions to qcow2, so you get a lot of the same benefits, although with tremendous complexity.

The other big change is significantly improved support for snapshotting. The preferred way of doing snapshotting in QEMU is still what we call external snapshots: snapshots that are visible in the file system. The general problem with image formats is that they are essentially file systems, and if you know how complex it is to write a file system from scratch, making an image format from scratch is just as complicated. In fact, one of the things happening right now in qcow2 is journal support, to have a full transaction journal. So we try to leverage the host file systems as much as we can, and that's why we prefer external snapshots, but this seems to be a losing battle. I won't go into detail there.
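To illustrate the "image formats are file systems" point: a qcow2-style format maps guest clusters to host file offsets through a two-level table, very much like a page-table walk. Here's a hedged sketch of that lookup; the structures and helpers are invented for illustration and are not QEMU's actual code.

```c
/* Sketch of a qcow2-style two-level cluster lookup: a guest offset is
 * split into an L1 index, an L2 index, and an offset within the cluster,
 * much like a page table walk. Names and layout are illustrative. */
#include <stdint.h>

#define CLUSTER_BITS 16                         /* 64 KiB clusters */
#define CLUSTER_SIZE (1ULL << CLUSTER_BITS)
#define L2_ENTRIES   (CLUSTER_SIZE / sizeof(uint64_t))

struct image {
    uint64_t *l1_table;                         /* host offsets of L2 tables */
    uint64_t *(*load_l2)(struct image *img, uint64_t l2_offset);
};

/* Return the host-file offset backing a guest offset, or 0 if the
 * cluster is unallocated (reads as zeros or falls through to a backing
 * file; the first write allocates a cluster, copy-on-write style). */
static uint64_t lookup(struct image *img, uint64_t guest_offset)
{
    uint64_t cluster_nr = guest_offset >> CLUSTER_BITS;
    uint64_t l1_index = cluster_nr / L2_ENTRIES;
    uint64_t l2_index = cluster_nr % L2_ENTRIES;
    uint64_t l2_offset = img->l1_table[l1_index];

    if (!l2_offset) {
        return 0;                               /* no L2 table yet */
    }
    uint64_t *l2_table = img->load_l2(img, l2_offset);
    uint64_t cluster = l2_table[l2_index];

    if (!cluster) {
        return 0;                               /* unallocated cluster */
    }
    return cluster + (guest_offset & (CLUSTER_SIZE - 1));
}
```

Unallocated entries are what make sparse images and snapshots cheap, and they're also why an image format needs the same care a file system does: crash consistency of those tables is exactly what the qcow2 journal work is about.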
But then on the good side, and this is really the thing I'm probably the most proud of out of everything we've done in QEMU in the past five years: QEMU had something of a reputation for being slow, and the thinking had long been that we would put all of the performance-sensitive devices in the kernel, via vhost, and QEMU would just handle the slow-path things like the serial port. Of course, having code in the kernel means it's running in ring zero with full privileges on the system, so there was always a debate about whether we could get the same level of performance in user space. Thanks to the work of folks like Khoa Huynh and Stefan Hajnoczi and a number of others, we implemented a new version of virtio-blk in QEMU, and on a very, very large storage system we were able to get 95% of bare metal, which is just insane; I think it was somewhere around 1.4 million IOPS. It blew away everything else out there, including VMware. So it was a really good accomplishment, and more importantly, it showed that there's no reason high-speed devices can't be implemented in QEMU; they do not have to be kernel backends. I think this is an indicator of where we're going in the future: I'd like to see networking done in QEMU, and other high-speed devices too.

Migration. Migration is really interesting. The general idea behind migration is that it's a converging algorithm. At a very high level, what that means is that the guest is dirtying memory, we're copying that dirty memory over the network, and we're essentially trying to beat the guest: trying to copy memory over faster than it's being dirtied. When this works, it's great. The problem is that there is a race. Different hypervisors address this race in different ways. The most common approach is to simply give up: if it takes too long and the hypervisor is losing, you just stop the guest, and the guest gets a long period of downtime. That's not terribly acceptable, so we have some approaches, and this is an active area of development, to try to win the race more often, at least.

One of the most recent ones (and I got the acronym wrong, I see now) is XBZRLE. This is a compression algorithm based on observations about how memory looks when a guest is actively using it, and it is more effective than just doing gzip; that would be my first question too, so I'll go ahead and say it. The other very recent development is RDMA. You can achieve significantly greater throughput if you have the hardware, and RDMA hardware is actually a lot more capable than Ethernet at the moment; 40 gigabit is not unheard of. That's something we just recently merged, and I was actually really impressed: I did not expect it to be an easy merge, but it turned out to be. And then my personal favorite approach to winning this race with the guest is to simply cheat, which is always the best way to win anything. The way we cheat is we delay the guest. If we're not able to converge fast enough, since we're the hypervisor and we can do what we want, we just slow the guest down: we give it less run time. It's a little crude, but it's also terribly effective. So yeah, I'm always a fan of that.
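Putting the converging algorithm and the throttling cheat together, the control loop looks roughly like this. All of the helpers and numbers below are invented for illustration; this is a sketch of the idea, not QEMU's migration code, and the helper functions are declarations only.

```c
/* Sketch of pre-copy migration: keep sending dirty pages until the
 * remaining set fits in the downtime budget; if the guest dirties
 * memory faster than we send it, throttle the guest ("cheat"). */
#include <stddef.h>

struct guest;
size_t sync_dirty_bitmap(struct guest *g);       /* pages dirtied since last sync */
void   send_dirty_pages(struct guest *g);        /* stream dirty pages out */
size_t downtime_budget_pages(void);              /* pages movable in max downtime */
void   throttle_guest(struct guest *g, int pct); /* steal guest run time */
void   pause_and_send_rest(struct guest *g);     /* final stop-and-copy */

void migrate(struct guest *g)
{
    int throttle = 0;
    size_t prev_dirty = (size_t)-1;

    for (;;) {
        size_t dirty = sync_dirty_bitmap(g);

        if (dirty <= downtime_budget_pages()) {
            break;                  /* we won the race: short final pause */
        }
        if (dirty >= prev_dirty && throttle < 90) {
            throttle += 20;         /* not converging: slow the guest down */
            throttle_guest(g, throttle);
        }
        prev_dirty = dirty;
        send_dirty_pages(g);
    }
    pause_and_send_rest(g);         /* stop guest, copy remainder, resume */
}
```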
So another really interesting area, one that's still very relevant for cloud, is local storage migration. At the last KVM Forum, Mark McLoughlin got up and did a talk on OpenStack, and one of the things he said was that cloud loves simplicity. Cloud doesn't use a lot of virtualization features because they're too hard: they require too much thought from the management point of view, and there are too many restrictions. If you want to scale, you have to simplify things. To me, that's a challenge. It means: how can we make our features work more reliably, with fewer restrictions? Local storage migration is one of those things. Traditionally, to do migration you need shared storage, and that's a big inhibitor, since a lot of clouds have ephemeral storage. So this is something we merged probably a year or two ago. Obviously, if you're moving a lot of ephemeral storage, it's slow, and you can't necessarily do load balancing with a mechanism like this. However, you can use it to evacuate a node without bringing guests down, which potentially can be a good thing. My suspicion is that minimizing downtime is sort of the last mile of cloud for virtualization; it's one of the areas we'll have to put a lot of work into.

Live update. This is a feature that really excites me, and it's very much related to this topic. Upgrading sucks. Having to take guests down, especially if you're an infrastructure-as-a-service provider, having to schedule downtime windows so you can perform maintenance on a physical system: that's a big, big problem, and it's the thing customers complain about the most. So why not use live migration, right? Migrate to another system, do your upgrade, migrate back, and call it a day. Well, that requires extra hardware, it requires shared storage, et cetera, et cetera. So we came up with the idea of doing a localhost migration. Why would that matter? Well, you can install a new version of QEMU, do the localhost migration, and effectively pick up that new version, so you're essentially doing a live upgrade of QEMU without bringing down the guest or suffering any downtime.

Now, the initial problem with that is that it requires twice the memory for all your guests, because you need two copies of each guest. So we came up with a clever mechanism that uses vmsplice to do page flipping (and yes, it is that awesome) from one process to another, so you don't need double the resources. This is still an active area of development. It's going to take quite a bit of effort to get vmsplice to perform, which is ironic, because vmsplice exists for performance and it doesn't perform very well. But nonetheless, this is something I'm really excited about, and one thing I'm even more excited about is that I think this is a good model for doing live update even beyond QEMU. One of the things I've talked a lot about, and maybe one day I'll get to implement it, is doing something similar with kexec: you squirrel away the guest's memory in a well-known location, kexec your kernel, and then resume the guests. If you can kexec in a sub-second time period, then potentially you can do a full system update without downtime. This is obviously future work, but to me, this is the exciting stuff.
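The page-flipping trick rests on an existing syscall: vmsplice() with SPLICE_F_GIFT, which gifts page-aligned memory to a pipe so the kernel may move pages instead of copying them. Here's a toy sketch of the idea between two processes; the actual migration protocol is omitted, and whether the kernel really moves rather than copies is exactly the performance work mentioned above.

```c
/* Toy sketch of vmsplice-based page flipping between two processes on
 * the same host: the "old QEMU" gifts a page-aligned buffer to a pipe
 * (SPLICE_F_GIFT) and the "new QEMU" receives it. Error handling and
 * the real migration protocol are omitted. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

#define PAGE_SIZE 4096

int main(void)
{
    int pipefd[2];
    char *page;

    pipe(pipefd);
    /* Stand-in for guest RAM: one page-aligned page. */
    posix_memalign((void **)&page, PAGE_SIZE, PAGE_SIZE);
    memset(page, 0x42, PAGE_SIZE);

    if (fork() == 0) {
        /* "New QEMU": receive the page from the pipe. */
        char buf[PAGE_SIZE];
        read(pipefd[0], buf, sizeof(buf));
        _exit(buf[0] == 0x42 ? 0 : 1);
    }

    /* "Old QEMU": gift the page into the pipe. After gifting, the
     * sender must not touch the page again. */
    struct iovec iov = { .iov_base = page, .iov_len = PAGE_SIZE };
    vmsplice(pipefd[1], &iov, 1, SPLICE_F_GIFT);
    wait(NULL);
    return 0;
}
```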
Manageability. I already mentioned that we have a really tangled history here. We started with a text-based protocol and evolved to a JSON protocol, because JSON was cool at the time and we wanted to be cool. As systems developers, we don't get to play with the high-level web stuff, so maybe this was a mistake, but it's what we did. The good news is that it's fully specified now. If you're an application developer looking to write against QEMU, it's a very easy thing to do. We have support not just for RPCs but also for notifications, so QEMU can tell you when interesting events happen. And most importantly, we have an absolutely rigid compatibility model: we do not knowingly break management interfaces, period. We see this as our most critical responsibility as a project.

Security. This is always a fun topic. Recently we introduced a virtualized hardware random number generator. There's been one defined in virtio forever, but like most people, I didn't really know why it mattered until recently. When the factorable.net disclosure happened, and there was a big to-do about the number of systems vulnerable because of a lack of entropy, we prioritized getting this implemented in QEMU, and thanks to the help of folks like Peter, we did it the right way, hopefully. This provides a much better entropy source to guests. Now, I've not seen any solid evidence that entropy is actually worse in guests; we've done tremendous amounts of testing, with things like the NIST test suites, even without the entropy device, and even though it's theoretically possible, I've not seen worse entropy in guests. But nonetheless, there's a concern that things are too predictable in a guest and that this would result in poor entropy.

The very interesting thing about QEMU here is that we have a very layered security model. Because QEMU is a normal user-space process, it can obviously run unprivileged, and indeed this is exactly what libvirt does: libvirt will always run QEMU as an unprivileged user. We also support mandatory access control via SELinux, which adds yet another layer of restrictions. And the more recent advancement is that we have sandboxing support, the same sandboxing support that Google added for the Chromium browser. It turned out this was something both projects cared about, so we worked together on it. That gives us a model where the guest runs in a very rigid environment. One of the issues with sandboxing is that the more restrictive you make the sandbox, the more things break in QEMU, and unfortunately, because of the whole compatibility requirement, we can't have a very restrictive sandbox by default. We're very open to having restrictive sandboxes as options, but in the default sandbox we can't break things, because that's a compatibility issue.
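For a feel of what that sandboxing looks like, here's a minimal default-deny seccomp filter using libseccomp (link with -lseccomp). QEMU's real whitelist is far longer and option-dependent; the syscall list below is purely illustrative.

```c
/* Minimal seccomp sandbox sketch with libseccomp: kill the process on
 * any syscall that isn't explicitly whitelisted. Illustrative only;
 * a real whitelist is much longer. */
#include <seccomp.h>
#include <unistd.h>

int main(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL); /* default: kill */

    if (!ctx) {
        return 1;
    }
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);    /* allocator */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(munmap), 0); /* housekeeping */
    if (seccomp_load(ctx) < 0) {
        return 1;
    }
    seccomp_release(ctx);

    const char msg[] = "sandboxed\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);  /* allowed */
    /* An open() or socket() from here on would kill the process. */
    return 0;
}
```

The tension described above is visible even in this toy: every device and feature QEMU supports drags more syscalls into the whitelist, which is why the default filter has to stay loose.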
RAS. A very heated topic upstream, for folks who follow qemu-devel, was the recent introduction of guest panic notification. It's a very good and very simple idea. If you're running a large cloud and you're having issues with guests crashing, and you're the cloud provider and don't really know what your guests, your customers, are running, how do you know when they're crashing? If a guest is stuck on a blue screen, then as far as you can tell, it's just running. So we introduced a mechanism where, when Linux crashes, it will actually notify the hypervisor ("hey, I crashed"), and that way you can either alert your customer or take some sort of corrective action.

Along the same lines, I'm personally interested in paravirtual watchdog functionality. One of the things we've discovered over time is that, because guest timekeeping differs from normal timekeeping, a naive watchdog implementation will trigger falsely quite often, because it doesn't account for the fact that the guest doesn't always get to run. A paravirtual watchdog would solve this problem, and then we wouldn't have to deal with everybody making bad watchdogs and calling me up in the middle of the night to complain about it.

And then the other big area, which first came up a few years ago, died off, and has picked up steam again, is fault tolerance. There's the Kemari project, which some folks have picked up again, and there's a new project called COLO from Intel. I don't know much about it, actually, so I can't comment on it, but both of these approaches use high-frequency checkpointing: they're essentially doing live migration over and over again, which is different from how, say, VMware does fault tolerance. It's an interesting space, and I think we'll see more out of it in the future.

Okay, in terms of other architectures: there's a lot of development happening right now on both POWER and s390, driven out of IBM, and both are merged in some form. There's also ARM, which we've talked about quite a bit already; it's being very actively developed, and there's a lot of emphasis on it today. And MIPS virtualization is something new on the horizon. I'm not sure why you'd virtualize a MIPS chip, but people are doing it.

Then there's threading, an area that's tremendously exciting, and also where we got a lot of our benefit with virtio-blk dataplane. The history of virtual SMP in QEMU mirrors that of the Linux kernel: we started with something single-threaded, and the way we added SMP support was that we just made a big lock. We dropped the lock whenever we were running a vCPU, sort of running things in lockstep, and we've been slowly breaking that big lock up. It took Linux ten years to eliminate the big kernel lock; I'm hoping it does not take us ten years to eliminate the QEMU big lock, but we'll see. The thing I think is really exciting here is that we've learned from the kernel, and we're making the jump directly to RCU. We've been doing a lot of refactoring, a lot of effort to make things more conducive to being converted to RCU, so that we don't have to do the conversions twice, and this is going very well. I think in the next release, the 1.7 release, we'll introduce the beginnings of RCU support, and hopefully we'll have full RCU-based dispatch to devices by the 2.0 release. So this should be a really good thing. I'll mention, too, that it's not that we didn't know we needed better locking; it's that we came up with too many clever tricks to avoid solving the problem for so long. We knew we were going to have to do it; we were just able to delay it for a while.
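The RCU pattern we're converting to looks roughly like this in user space, using liburcu (link with -lurcu). The device table here is a stand-in invented for illustration: readers dispatch without taking any lock, and a writer publishes a new version and waits for pre-existing readers to drain before freeing the old one.

```c
/* Sketch of userspace RCU (liburcu) applied to a device-table lookup.
 * Readers are lock-free; the writer swaps in a new table and waits for
 * old readers before reclaiming. Illustrative, not QEMU's code. */
#include <urcu.h>
#include <stdio.h>
#include <stdlib.h>

struct dev_table {
    int ndevs;
    /* ... per-device dispatch information ... */
};

static struct dev_table *table;

/* Reader (think: a vCPU thread dispatching MMIO): no lock taken. */
static int count_devices(void)
{
    rcu_read_lock();
    struct dev_table *t = rcu_dereference(table);
    int n = t ? t->ndevs : 0;
    rcu_read_unlock();
    return n;
}

/* Writer (think: device hotplug): publish a new table, then reclaim the
 * old copy once every reader that might still see it has finished. */
static void add_device(void)
{
    struct dev_table *old = table;
    struct dev_table *fresh = malloc(sizeof(*fresh));

    fresh->ndevs = (old ? old->ndevs : 0) + 1;
    rcu_assign_pointer(table, fresh);
    synchronize_rcu();              /* wait for readers of `old` */
    free(old);
}

int main(void)
{
    rcu_register_thread();          /* each thread registers with RCU */
    add_device();
    printf("ndevs = %d\n", count_devices());
    rcu_unregister_thread();
    return 0;
}
```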
I also wanted to include some predictions. I don't know... yes? The question was: are you still going to run everything in a single I/O thread? The answer is no, we're going to split it up. In fact, we already are splitting it up. There's already a VNC thread that does a lot of the VNC work, there's a migration thread, and with the virtio-blk dataplane work there's a dedicated thread for devices. We do have to be careful, though: we don't want too many threads, because too many threads will create more problems than they solve. So the main focus is making it possible for everything to run in a thread other than the main I/O thread, and then we have to figure out the right combination of threads to maximize performance.

Okay, a couple of fun predictions that may or may not come true. These are mostly things I'm interested in, so there's some bias. A lot of people complain about our command-line interface. It's big, it's unwieldy, it's got lots of warts. What's required is for somebody to spend the better part of two weeks just completely rewriting it and coming up with something new. We'll see if it happens, but I would love to see it happen. We already have some Summer of Code students doing work on improving the GTK UI, specifically adding copy-and-paste support, and I think this will be expanded. This is an area where we need contributors, but there's no reason QEMU shouldn't have a good desktop experience. I look at something like VirtualBox, and there's no reason we shouldn't have a user interface and an experience that's as good as VirtualBox's. It's just a matter of writing a little code.

Unfortunately, I think the storage layer is going to become more and more independent of Linux. For whatever reason, things just don't move fast enough for us from a storage point of view. So it's likely we're going to do RAID; we were talking earlier about checksumming to detect storage corruption. Unfortunately, I think we're going to end up doing all these things in QEMU.

And then the big one, the one that's probably unbelievable if you're a QEMU developer: we're going to have to do something about backwards compatibility with live migration. It turns out that making sure you can migrate from an old version of QEMU to a new version, especially when you have four releases a year, becomes a very hard problem. You've got 20 or 30 different releases, and then you have the distro downstreams, which are each a little different too. Getting all of those to work together, and with each other, is very, very hard in practice. We don't have the brilliant idea yet that simplifies the problem and makes it really easy, but we'll get there, I'm sure of it.

Okay, I'm running out of time, but real quickly: why should you care about whether you're using QEMU in the cloud? I think the biggest thing is openness. Like I said, open virtualization basically means QEMU, and by using an open platform, you avoid the walled-garden problem. And portability is obviously a concern. Yeah, let's skip that. So let's go right to questions. We have two minutes left by my count. Johannes? Yes? Sure. The question was: is there overlap between the GTK UI and the libvirt UI? I believe you mean virt-manager? The answer is, at the moment, no. There are many restrictions; it's difficult to make a good UI. Well, I won't go into this. The answer is no. I don't want to say anything negative about libvirt; I like the libvirt guys a lot. But no, there's no overlap. Other questions? Yes? So that's a great question. The question is: will QEMU ever move to C++? That's what the question is, I think. This comes up every year. I'm actually a fan of C++, as Avi was. QEMU is a million lines of code, and moving a million lines of code from C to C++ is hard, especially when you're dealing with a development community that's largely C developers. And the benefits of C++ don't necessarily make up for the challenges of getting there. So "we'll see" is the answer, I think. Other questions? I think we're just at the end of time. All right, thanks everybody. Thank you.