Hey. So this is going to be, well, I've been asked by the Open Virtualization Alliance to talk to you guys about KVM, which in this audience is a little bit like talking to oceanographers about water. So I'm not sure how much you're going to get out of this, but we'll go for it anyway. I'm from Intel IT Engineering, and what we've been doing is a few private cloud installations of our own. Cloud isn't our only business in IT; obviously, we have a significant little side business making chips, so we have a design environment as well, and there's been some play in that field too, though it's not necessarily cloud related. So I'm going to talk about both, even though this is a cloud audience.

So what are we up to at Intel IT? Well, when it comes to open source and OpenStack, we want to help bring OpenStack, and open source generally, to the enterprise table. And I think there are a lot of sessions going on around here this week about enterprise readiness for OpenStack, et cetera. But one thing that is genuinely enterprise ready, and I think nobody here is going to be surprised by it, is KVM as a hypervisor. When it comes to KVM, we at Intel definitely want to leverage our competitive advantage. We think KVM runs best on Intel architecture, naturally; it exposes all of our wonderful chipset features natively. And we also want to support and encourage the development community, and a good portion of the KVM development community is actually in Intel, some of it anyway. And of course, we're doing that through things like being a member of the Open Virtualization Alliance, which is why the slides have those titles on them and not a bunch of Intel blue.

So what about KVM? Well, these numbers aren't going to look very exciting to some of you, I'm sure, especially anybody that's doing public cloud, but this is just a little bit of background on what we've done. Our initial deployment: my involvement with OpenStack goes back to Cactus, and we ended up deploying a production private cloud back in 2012 on Essex. Our version of KVM at that time was the 1.0 release. I don't know if everybody knows the difference between qemu-system-x86_64 and the one down here at the bottom, qemu-kvm 0.15.1; there was a bit of a divergence in the KVM code base, so you get one or the other depending on what distribution you're running. On that one, we had about 1,000 virtual instances for external services. We deployed that, like I say, on Essex. It felt very early and fresh and new to us. It's old hat now, of course.

Our current deployments have multiple uses. I pointed this out earlier: we've got cloud, but then we've also got our engineering space for design engineering. So when it comes to cloud, we've got a Grizzly deployment, and there's your 1.4.2, the current rev of KVM that we're on. I think that number's up to date, the 3,500 instances. None of this, like I say, is going to be very exciting for people here that are running gigantic public clouds and stuff like that. The numbers underneath represent our density of VMs: we're running about a 40-to-1 ratio of VMs per host and about 100 vCPUs, more or less. So we obviously oversubscribe, and I'll be talking about that a little bit later. In the engineering space, this was kind of interesting: about a month ago, over the course of about a week, we spun up 12,000 virtual instances to explore some issues in EDA, just to kind of shake out the environment.
And that was a really fun project, to just whip up something like that, fire it out there, and be able to test, on 12,000 virtual machines, a design environment that is actually going to run later on 12,000 physical machines, to see how it's going to function. I don't think we could have done it with anything other than KVM, frankly. KVM was the only choice for that. So that's a little bit about what we've done there.

So as far as KVM itself, some of these benefits that we've got here: when we were getting started with our Essex cloud, we conducted a study on standard cloud workloads, databases and so forth. At that time, KVM was already at par or better than the marketplace offerings, and some of the ones that were lagging behind have caught up by now. It seems like the hypervisor realm is about near stable on performance, at least from our perspective. We chose it for stability, and it's been absolutely rock solid stable. I don't think anybody here would disagree with that; I'd be surprised. If you want to talk about that, that'd be great. Obviously, there's the tight OpenStack integration: if you deploy OpenStack, you get KVM, thus the oceanographers-and-water comment. Drinking our own champagne, of course: as I mentioned, we've got KVM developers in-house, so we're very interested in integrating it and using it. And finally, for hypervisor efficiency, we get pretty good ratios; for some of our use cases, we're running up to 70 VMs and hundreds of vCPUs per host. That usually works well, but not always. I'll get into that a little bit later.

So this is the problem slide. Of course, everybody has problems when they deploy any kind of technology at all, but there isn't much that hasn't been solved at this point, or that isn't solvable. I put Windows guests on here because Windows guests seem to be the most problematic thing, and yet, in our experience, almost everything we've come up with hasn't been a problem with the virtualization itself, and it's all been solvable. The one thing I wanted to point out, and if there's one thing you take away from this, I'm not sure that there is much to take away from this, but the one thing that you can take away: we actually had a problem with Windows guests on a performance level. When we were researching it, we were using an image that we'd wrapped about six months previously, and it was getting patched up to the current release using the Microsoft updater. We were working with it and working with it. One guy decided that he was getting tired of patching these instances every time he spun up a new VM for this testing, so he just went ahead and rebuilt it, rewrapped the image, and put it back out there. And that actually got rid of the problem. So I'm not quite sure how that worked, but it turns out that Windows images are something that you want to rewrap frequently: bring them up to the most up-to-date service packs, then add the patches over the top of them, because there's obviously some kind of non-deterministic behavior going on there.

And the other thing is to check your flags. There are a lot of flags in KVM. If this were a more content-heavy presentation, I would probably describe some of them. But there's a lot of great stuff around taking advantage of CPU features. There's a -cpu flag where you can identify the Intel platform you want to target. Maybe a lot of people that are deploying smaller clouds may not know this.
You get a default value from KVM when you launch it, but you can specify a -cpu flag that will set a bare-minimum spec for the hosts that you're running on. So you can say -cpu Westmere, for example, which will give you all of the chipset features of the Westmere platform on Westmere and above. That way, you're not running into any trouble with backwards compatibility if you're deploying wide, but you can still take advantage of all of the chipset features going forward. There's a sketch of what that looks like at the end.

The other thing that I wanted to point out was that when you have big multi-vCPU instances, like, say, a great big eight-way machine or a database server or something like that, and you oversubscribe it onto a hypervisor with a bunch of other instances, you can run into trouble. And I don't know if it's true of the other hypervisors, because I haven't been running them at scale in my shop; I will say more about other hypervisors in a second. But when you are oversubscribing these large multi-vCPU instances, you can observe some performance problems, and occasionally, especially with Windows guests, which have issues with I/O delays, you'll even get corruption or crashing on the Windows guest. So watch your oversubscription. If you keep the number of vCPUs pegged to the number of physical CPUs, you'll never have a problem. If you use nothing but single-vCPU instances and oversubscribe, you can go as high as you want. But if you start oversubscribing on multi-vCPU instances, you might end up having a little bit of trouble. There's a sketch of the scheduler knobs for that at the end, too.

So for future direction, which is the more interesting piece anyway, I'm going to say this about hypervisor agnosticism. Like I said, it seems like hypervisors are really not, I mean, the big hotness here is containers. That's where everybody is; I saw the room downstairs with the crowd flowing out of it. So I don't know if hypervisors are all that exciting to people anymore. It's more like the water. So what is it that's different about KVM? Let me back up on that a little bit. We obviously run more than KVM. We have a large install base of other vendor hypervisors, and we need to understand them and comprehend them from an OpenStack perspective, and from other departments' perspectives as well. But what is it that's different about KVM versus those other hypervisors? It's really how they're managed. Because if you get a vendor solution, you're getting a whole bunch of stuff with it that takes care of launching and deploying and so forth. And I'm not talking about OpenStack here; I'm just talking about all the stuff that you use to control the thing. If you've ever tried to launch KVM from the command line, you know that's a risky proposition; there are a lot of flags. You manage it, of course, with libvirt, and then there's the universe of open-source tools that surround it. So it's a little different. It takes a little bit more understanding of the open-source universe that surrounds KVM. It is every bit as full-featured; it's just not KVM itself.

Heterogeneous cloud. So we're getting into this space now. Basically, this portion of the talk is a blatant plug for my colleagues, who are doing a talk on a single control plane for OpenStack. It's in room B312 at 5:20 PM. I encourage you all to go see it. We are going to put together a heterogeneous cloud with a single control plane, or actually, we have done this, and they're going to tell you all about that. And really, that's about it, actually. So I doubt there are many questions, but I'll take them if there are.
Cool. Thank you.
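A couple of concrete sketches to go with the points above. First, the -cpu baseline: this is roughly what it looks like on a raw QEMU/KVM command line. The image name and the memory and vCPU sizing here are just illustrative, not anything from our deployment:

    # See which CPU models this QEMU build knows about
    qemu-system-x86_64 -cpu help

    # Launch a guest with the Westmere baseline model, so it sees Westmere
    # chipset features on any Westmere-or-newer host
    qemu-system-x86_64 -enable-kvm -cpu Westmere -smp 2 -m 4096 \
        -drive file=guest.qcow2,format=qcow2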
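On the oversubscription point: in an OpenStack deployment, the knob that caps the ratio lives in the nova scheduler rather than in KVM itself. A minimal Grizzly-era nova.conf sketch; the filter list and the ratios are illustrative, not our production values:

    # nova.conf on the controller
    # CoreFilter is the filter that actually enforces the CPU ratio
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter
    # Upstream default was 16.0; keep this low if you run big multi-vCPU guests
    cpu_allocation_ratio = 4.0
    ram_allocation_ratio = 1.0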
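And when I say you manage KVM with libvirt rather than the raw command line, in practice that mostly means virsh, or something else speaking the libvirt API, doing the lifting. A few of the basics, with a made-up domain name:

    virsh capabilities       # what the host and its CPU can offer to guests
    virsh list --all         # guests defined on this host, running or not
    virsh dumpxml mydomain   # the full domain definition, including the <cpu> element
    virsh vcpuinfo mydomain  # where each vCPU is currently placed on physical CPUs

The Westmere baseline from the earlier example shows up in that domain XML as something like:

    <cpu mode='custom' match='exact'>
      <model>Westmere</model>
    </cpu>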