Well, welcome to Berlin. Last talk of the morning. So I'll give you a small piece of background first. I work in the Advanced Systems Engineering Group. We don't just work on containers; we're also responsible for a couple of projects you may have heard of: Clear Linux, which is doing really well on Phoronix at the moment, and Ciao, which is an example of an orchestrator. And there's a common theme running through those projects about reimagining what we can do in the whole container cloud stack: can we make a tenfold difference to performance?

As Jim said, we're at ContainerCon, so I will be talking about containers. And we all use them. I was thinking, as I was writing this talk: how many containers do we have active in this room right now? I reckon one each, maybe. You've got social media, email, a bit of online purchasing while you're watching the keynotes. So I'm thinking 3,000 containers in the room. But not all containers are made equal. If I'm doing an internet search, I type in my query and I reckon I've got about three seconds before I decide it's timed out: I've hit a rogue server, refresh, move on. If I'm doing my banking, and I'm told, well, give it a few more seconds and we won't leak your details, I'm going to be quite happy about that. So containers aren't made equal; we all have different needs. And as an implementer, you have pretty much two choices. We'll get to those in a moment.

When we talk about containers, I need to clarify something for this talk, because it's an overloaded word. There are two sorts of containers we talk about. There's the image, which is the thing you want to run, and then there's the runtime, which is how you want to run it. Today's talk is about the runtime: I've got this container, how do I want to execute it? That's the distinction we need to make. And today you've pretty much got two choices.
You're going to run Linux software containers with software protection, or you're going to run a VM with hardware-backed protection. That's a pretty hard choice if you're an implementer, unless you're doing that interesting setup where you run a Linux VM in a container, or a container in a VM, which is always a bit funky. A binary choice. We don't really want that. And as users, you don't really care whether it's running in a VM; you just want it to work, and work how you need it to work.

And what do we want? Well, as we've said, there's banking, there are financial transactions, there are other high-security uses. And there are things where I'm not that bothered about the data; if you want my browser history, I'll give it to you. That's fine. So we have this spectrum of things we want to cover. Some people want one end of the spectrum, really time-critical; other people want security. But we don't have that today. What we'd like is a continuous choice of features. We want to be able to say: on the spectrum between a fully featured, accelerated, secure VM, with whichever VM features you're trying to retain, all the way down to the bare-minimum lightweight container, where do I want to sit? That's something we're trying to enable.

So how do we get there? How do we move from today's binary choice to being able to pick and choose, cherry-pick, add the bits I want, and make that an easy choice? And I should probably also note this isn't just about the runtime; it's the whole stack. Predominantly today, if you're orchestrating, your orchestration is probably tied somewhat to your choice of execution unit. You can't mix and match. You can't say: this one I want on bare metal, that one I'd really like in a VM, and this one could be in a software container. You don't generally get that option; you just get "launch". Well, I want a bit more refined control.
And that's something we do in Ciao, one of our other projects. It's an example of what we feel the future should be: one orchestrator looking after your whole myriad of containers.

If I'm going to talk about how we get there, then like many things, I think we should first talk about how we got to where we are today. We can't talk about fixing the container issue, the VM issue, without talking about how we got into this situation. And it's history, it's legacy. VMs, I think, might even be older than me. They've been around a long time, and they weren't designed for containers; containers are a pretty new technology. A VM is meant to be a whole self-contained machine, so much like a machine that you can't even tell you're in a VM. Containers don't come from that world. They don't have that requirement, generally. I'm a container; pretty much all I want is a C library. I just want to run. I don't care which machine I'm on, I don't want to browse all the hardware, I've just got a small job to do. So rather than looking at the VM and improving it towards containers, maybe we start at the other end of the spectrum: look at containers and ask how we get towards a VM.

And I'd like to bust a few VM myths. Who thinks virtual machines are big? Well, they don't have to be. I've seen embedded systems running hypervisors with tiny amounts of RAM. Admittedly, that VM may not have the features you want to run a container, but it's not actually that far off; containers don't require that many features at the bottom end. And along with "big" comes the assumption of "slow". Well, it's pretty hard to be slow if you're very, very small. So VMs don't have to be this humongous behemoth that you can't use in your search-engine container space because it's just too slow. That's a legacy thing. We can move beyond it.
So what do we do about it? Well, this is where we come back to the tenfold-improvement mandate we have in a lot of our open-source work. Simple will not do. My boss basically says: if you're trying to optimize 10%, you're looking at the wrong problem. I don't want to see a 10% improvement; I want to see a 10 times improvement. A 10% optimization is nice, and we do that, but we don't do it on the big problem; once we've cracked the big problem, then we'll do the cleanup. So simple is not where we want to head; it's not how we're approaching the problem. We need something a little more radical.

For us, that meant looking at the legacy situation and realising we don't need it. We don't need a legacy VM. We don't need to look like a whole self-contained computer, an IBM PC from when I was a boy. So throw that away, start again, and pick out only the pieces we need from the VM.

How have we been doing this? First, as I said, we dropped the PC legacy. If you look in QEMU, I think the smallest PC machine you get in there is the Q35. Well, we pretty much took that and chopped bits off. About a month ago we pushed some of this upstream into QEMU (so there you go, Jim: more open source), a machine type called pc-lite. This is a tiny little PC; it's only got the pieces we really feel we need for a container. We did do the 10% optimisations here and there, but we did them at all layers of the stack: on the host and in the container itself, in the kernel and in user space. And we used a few key technologies that have come around in the last couple of years. Non-volatile DIMMs brought with them execute-in-place, and we've got direct mapping through DAX.
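As a rough sketch of that direct-mapping idea: a file on the host can be handed to the guest as an emulated NVDIMM, which the guest then mounts with DAX. Nothing below is from the talk itself; the paths, sizes and image name are illustrative, and this uses QEMU's generic NVDIMM options rather than whatever the Clear Containers tooling wires up internally.

```shell
# Illustrative only: expose a host file to the guest as an NVDIMM,
# so the guest can map it directly with DAX instead of going through
# its own buffer cache and block layer.
qemu-system-x86_64 \
    -machine pc,accel=kvm,nvdimm=on \
    -m 1G,slots=2,maxmem=2G \
    -object memory-backend-file,id=mem0,share=on,mem-path=/var/lib/ctr/rootfs.img,size=512M \
    -device nvdimm,id=nv0,memdev=mem0 \
    -kernel vmlinuz \
    -append "root=/dev/pmem0 rootflags=dax ro console=ttyS0"
```

Inside the guest, the filesystem on /dev/pmem0 mounted with the dax option reads pages straight out of the mapped host file, which is where the space and speed wins come from.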
So we map things from the host directly through into the container using these. And they save you not only space, because you're not using so many pages; they gain you speed as well. You don't go through the buffer cache, you don't go through all the file-system layers. So we get a great deal, and there's a lot of this effect: if you win on one, you tend to win on the other. You reduce the size, you reduce the time. And we're in a fixed environment. We don't have to support every PC model in the world; we don't have to support every PCI card you might scan the bus for. So we optimise at that level as well. We've still got a bunch of kernel optimisations to push upstream, and that comes down to me finding the time, but all of this is open source and going upstream.

So where are we today? This is something that's been available for probably a year or more. We're on version 2 at the moment; we've moved to a new hypervisor manager and we're doing further optimisations. We're getting sub-50-millisecond boot times on VMs, and you can come and see a demo at the stand. If a VM traditionally takes a minute to boot, well, shave 10% off and you're down to 54 seconds; that's still pretty much a minute. Do a tenfold improvement and you're down to six seconds. We've done more than an order of magnitude here; we've done more than two orders of magnitude. So we're easily in the sub-second range for VMs. At that point, VMs become viable for many container workloads.

A VM is still a VM, though. You still need a kernel, you still need a user space to launch your workload, so there is still an overhead. But traditionally you launch many containers, like we saw earlier: I'll have 10 of these, I'll have 20 of those. You can share much of that between container instances, so the overhead goes down. We're down to maybe 50 megabytes of overhead per container instance.
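The arithmetic behind those boot-time and overhead claims works out like this. A toy calculation, assuming the one-minute baseline and the 50 ms and 50 MB figures just quoted:

```shell
# Toy arithmetic for the figures above (integer maths, in ms and MB).
baseline_ms=60000                              # assume a traditional VM boots in a minute
after_ten_percent=$(( baseline_ms * 9 / 10 ))  # 54000 ms: still basically a minute
after_tenfold=$(( baseline_ms / 10 ))          # 6000 ms: the "10x" target
measured_ms=50                                 # the sub-50 ms boots quoted in the talk
speedup=$(( baseline_ms / measured_ms ))       # 1200x: well past two orders of magnitude

per_instance_mb=50                             # quoted per-VM-container overhead
fleet_mb=$(( 20 * per_instance_mb ))           # 1000 MB of overhead for 20 such containers
```

So even against a generous one-minute baseline, the measured boots are a three-figure speedup, not a 10% trim, and a fleet of twenty VM containers costs about a gigabyte of shared-kernel overhead.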
That's not going to work for every situation, not for the lightest-weight containers, but for most containers it's acceptable. So now we're in a position where you can think: maybe a VM does bring me some features I need, some acceleration, some security, some actual hardware protection.

Where are we going next? We're still working on the basic infrastructure. All the work we do is at the commonality level: we work on the common kernel, on QEMU, on the hypervisors you use. We're not working on one specific vertical stack, and all this work goes upstream, so it benefits everybody in the future. We work with all the people doing the stack APIs. Containers are maturing and we're now getting standards, so we work with those people a lot. In particular, we want to make sure those standards aren't too constrictive, that they're not defined purely around software containers; we try to identify where a VM may have a slightly different need. And we continue to optimize for space and speed. Really, we're looking for the next tenfold improvement: what's the next leap of faith, the next change of architecture?

So we are redefining what's possible. We're an open project, and we're already upstream. No live demos; I'm not that brave. We've been upstream for a while; the code has been available, and we're now up on GitHub. We have an IRC channel and a mailing list. Come to the website, where there's Clear Linux and Clear Containers, and you'll find Ciao. And we're here on the booth all week, the next three days anyway, so come and visit us. We've got some demos. I'll be on the stand, my colleague Sammy will be on the stand, and we'll be available for a technical chat if you want to get involved, or if you want us to help you adopt this technology, because you can cherry-pick parts out of the stack and put them into your runtime or your container instances. So come and have a chat. Thank you very much. I need some more coffee.
Being up here without coffee is killing me. And have a great conference.