Good morning everybody. How are we feeling on day 3 of Vancouver and the Vancouver ODS? Good parties last night. Thank you HP for the great party, and also the one at Intel. And as I said this morning to a couple of colleagues, friends don't let Stackers turn up on day 3, or leave the hotel, without their badge. My roommate nearly let me leave the hotel without mine. Okay, we have got a whole day today of news about what we're doing in the OpenStack community here at Ubuntu. And to kick us off we're going to be doing a session on LXD and KVM. But I just want to do a quick show of hands. How many people in the room have currently got OpenStack in production? That's fantastic, so almost half the room. How many people here are using Ubuntu OpenStack, or Ubuntu as the base to their OpenStack lab? Don't be shy. That's about half the room. That's about what we were expecting. The data was out on Monday: 55% of OpenStack deployments are now done on Ubuntu. I've been told to talk into the mic. 55% of OpenStack deployments are now done on Ubuntu, and critically, so many of the large clouds. And if you were there on Monday, Comcast, Walmart, Digital Film Tree, all talking about how they're using Ubuntu as the base for their OpenStack deployments. So it's great to have you here for what we think is the biggest breakthrough in virtualisation since virtualisation started. And I'm going to hand over to James and the team, who lead our OpenStack architecture at Canonical, to talk through LXD and KVM and why we think this is so exciting.

Thanks, Chris. Can everybody hear me okay? Okay, cool. Excellent. So let me just get set up and then we can go. Technology is already failing me. There we go. Right. Okay. So just a quick introduction as to who I am and who my colleague Ryan is. So my name is James Page. I'm the technical architect of the Ubuntu OpenStack team. I've been with Canonical for about four and a half years now, but I've been deploying and working with open source technologies for about 15 years in fairly large enterprises before Canonical. I lead the team who's responsible for all of the packaging that you use for Ubuntu OpenStack and for all of the charms that we deliver for deploying OpenStack using our Juju service orchestration tool. I'll hand over to Ryan to introduce himself.

Hi, I'm Ryan Harper. I've been with Canonical for about one and a half years on the Ubuntu Server team; I'm a technical lead there. And prior to that, I was at IBM's Linux Technology Center, where I'd been working on open source virtualization for about 12 years.

Okay, so before we dig into some benchmarking figures that Ryan's going to run us through, I'm going to take you through our products around containerization and the work we've done in the last six months since Paris, when we announced LXD, our new lightervisor product. So what's LXD? LXD is our lightervisor. It's designed to run full machine containers, so we're differentiating from things like Docker and Rocket, which are very much around running process-based containers. It's a full machine container, and I'll dig into that in a little bit more detail. We're very much targeting getting close to bare metal performance for those containers, so there's as little as possible between your workload and the underlying hardware you're running on. That allows us to achieve some very high density figures for the number of containers you can run on a certain specification of hardware, due to a very, very small overhead for each container.
So let's dig into what that machine container terminology means, compared to process containers, in a bit more detail. So in the slide I've just popped up, we've got a host machine, either bare metal or KVM. We'll talk bare metal for the purposes of this talk, but we've got a couple of containers deployed on it, and each of those containers feels very much like a full system. So they're running a standard Trusty or Vivid image. You've got init running, you've got cron, syslog, you can SSH to it. It feels very much like a full virtual machine that you would get under KVM. And that's the parity we're looking to achieve with LXD. We're not trying to compete with Docker, and in fact you can include Docker within that. So it's perfectly feasible to run a number of process containers within a machine container and have a level of nested containerization to give you more flexibility, if you want to do that as well.

Okay, so, kind of digging into LXD features in a bit more detail. Some of you may have seen my colleague Tycho's DOOM live migration demo from Paris, where he live migrated a running DOOM demo between machines. And we've been doing quite a lot of work on that in the last six months to leverage CRIU, the Checkpoint/Restore In Userspace toolset, to perform smooth, fast and reliable live migrations. There's still a little bit of work to go on that, which I'll touch on later in the talk. But if you haven't seen that demo, please do go down to our booth in the expo hall, where that's running all the time, and you can see the fantastic DOOM live migration running.

We've got a real strong focus on security. So by leveraging a number of security technologies, we're able to deliver unprivileged containers. So these are containers where all of the processes within that container are running not as root. They're very important in terms of the security around our container architecture. So by doing that, we're able to provide an increased level of assurance that if, God forbid, a container breakout did happen, then that's as an unprivileged user on the host. So the impact of such a possibility is much reduced. But to kind of beef up around that, we're leveraging a number of well-known technologies. AppArmor in Ubuntu is very much used to enforce what the processes within a running container can do on the host OS. And we're working with most of the major instruction set owners in terms of enabling hardware assist for container security as well. That's not something that's generally available, but as and when it becomes more consumable, that's something that we'll be delivering as part of LXD.

So, touching on another couple of things: networking and storage. So the approach we're taking with LXD is basically, if it can be presented on the host, we can plug it into a container. So we're not doing any heavy lifting in terms of integrating with something like Open vSwitch or any SDNs or specific storage solutions. We're taking the approach that if you can present it on the host OS, it can then be consumed within the containers that you're running on top of that OS. So we're avoiding the heavy lifting in LXD. Fortunately, we have quite a well-known cloud product which does a lot of that for us. So this fits nicely alongside something like OpenStack to provide the networking and storage components into a container architecture.
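On the unprivileged-container point above, here is a tiny illustrative sketch of the uid shifting that user namespaces provide; the base and range values are hypothetical, in the style of an /etc/subuid entry, and are not anything LXD-specific.

```python
# Illustrative sketch of user-namespace uid shifting for unprivileged
# containers: uid 0 inside the container maps to a high, unprivileged
# uid on the host. The base/range values below are hypothetical, in the
# style of an /etc/subuid entry such as "root:100000:65536".

SUBUID_BASE = 100000   # first host uid handed to the container (assumed)
SUBUID_RANGE = 65536   # number of uids mapped into the container (assumed)

def host_uid(container_uid: int) -> int:
    """Translate a uid as seen inside the container to the host uid."""
    if not 0 <= container_uid < SUBUID_RANGE:
        raise ValueError("uid outside the mapped range")
    return SUBUID_BASE + container_uid

# "root" inside the container is just an ordinary, unprivileged host user,
# so a container breakout lands with no special rights on the host.
print(host_uid(0))      # 100000
print(host_uid(1000))   # 101000
```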
And that leads me quite neatly on to the other project we've been working on in the last six months, which is our Nova Compute LXD driver. So this is a drop-in replacement driver for, say, the libvirt KVM driver, that allows you to manage LXD containers in an OpenStack cloud in a very similar way to KVM instances. We've got a technical preview release out with Ubuntu 15.04, which came out last month. And in that, we've got some support for fast root filesystem cloning. We run everything by default as unprivileged containers to assure a level of security. We've got a base level of Neutron integration with the ML2 OVS reference implementation. We're fully integrated with Glance, so you can store your LXD root tarballs in Glance and use those across your cloud, just in the same way as you would do for libvirt KVM. And we've enabled support for LXD in our Juju charm for Nova Compute, and it literally is just a single config option to switch between KVM and LXD as the two options. And that allows you to deploy an entire OpenStack cloud using LXD containers underneath the hood. Okay, so I'm going to hand over to Ryan now, who's going to walk you through some of the detail of containers and some of the benchmarking we've been doing.
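For illustration, the single config option switch mentioned above might look roughly like this; the exact LXD driver class path is an assumption here, so check the charm and driver documentation rather than copying it verbatim.

```python
# Sketch of the "single config option" idea: the same nova.conf, differing
# only in compute_driver. The LXD driver class path below is an assumed,
# illustrative value, not necessarily what the Juju charm writes out.
from configparser import ConfigParser
from io import StringIO

KVM_CONF = """
[DEFAULT]
compute_driver = libvirt.LibvirtDriver
"""

LXD_CONF = """
[DEFAULT]
compute_driver = lxd.LXDDriver
"""

for name, text in (("kvm", KVM_CONF), ("lxd", LXD_CONF)):
    cp = ConfigParser()
    cp.read_file(StringIO(text))
    print(name, "->", cp.get("DEFAULT", "compute_driver"))
```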
Thanks, James. So, what is in the way of the workload? Let's get started on that. So what we have here is kind of a layered picture showing how we're stacking things together. We've got our physical machine, you boot your Linux kernel, put Ubuntu on top of it, and on the left, if we start up a KVM instance, virtual machines are meant to look very much like bare metal, and so they have all the same things that your bare metal system had. It has a BIOS. It has a bootloader. It's going to load its kernel, it's going to run through its operating system, user space comes up, and then you get to the point where you can run your workload. There's lots of extra stuff that happens to simulate that machine. In the kernel, it's still probing virtual devices, emulated devices, doing all these things, and that has to happen because it wants to behave like a real machine. In LXD, we don't have any of that. The containers run directly on the host, as processes. We do go straight to init, but we don't have a BIOS, we don't have device drivers or things like that, so that makes things a lot thinner.

So if your application is running in the virtual machine, a couple of things have to happen before it starts to run. When your application is running, the virtual processor in the KVM guest needs to schedule your workload process. The host itself needs to schedule the virtual processor to run as well, and so a lot of this introduces just extra time and latency and overhead on top of actually running what you need to get done. So there are a couple of interesting things that happen on large SMP guests, as well as on overcommitted hosts. The host operating system, the Linux kernel, has to help provide some switching between all these different virtual processors. In the pure virtual machine world, we talk about a problem called lock holder preemption, where your guest OS is holding a lock while other parts of your workload may need to do some work, but it's not actually being scheduled by the host. And this is actually a big deal. The silicon vendors have introduced features to detect this particular situation and tell the hypervisor: by the way, you're not going to get any further with this particular one, why don't you schedule something else? So for LXD, this is not a problem, and without lots of extra layers of resources being occupied, that leaves LXD able to do things quite well. Let's look at that.

So, density. Just from a memory footprint perspective, we don't have all those extra stacks and layers in that picture, so we can put a lot more of these machine containers on the same system, and this means that we can do more with the hardware that you have. You can pack more in there. This increases utilization. So intuitively, we kind of knew this: containers aren't as heavyweight as virtual machines, but we really wanted to take some time and say, well, let's put some numbers next to this. And so to get a real number, we took an Intel server, 4 cores, 16 gigs, a pretty stock setup, and we set up Ubuntu on it, and we decided we'll launch KVM instances with an Ubuntu server image, and we'll do this until the hypervisor is out of resources. In this case, the resource we're most constrained by is RAM. So we'll do that, boot up as many as we can until we can't any more, record that, and then we'll do the same thing with LXD, using the exact same image, but using the LXD virtualization technology.

So let's take a look at that. On the left, with KVM, we launched about 36 instances; this is another run where we got a few more. And that was what we expected. These were 512 megabyte virtual machines. They boot up, and KSM is active and doing some work to deduplicate pages. But at some point, just all that overhead that happens, because these are all full guest OSs with that big stack that's got everything in there, it just takes up space. So we re-ran basically the exact same experiment with LXD, but with containers, where each machine container just has a dramatically smaller memory footprint. And with that, we had a dramatic number here: over 600 of the same Ubuntu images running on the same server. So that was extremely impressive.

Now, we saw this and said, this is really good, but let's take this and put it in the context of the cloud. So if I can take a single server and stack very, very many LXD machine containers, then for the cloud, that means we should see some significant density. So we built a 10-node OpenStack Kilo cloud on Intel servers, a little bit bigger -- I think there were six cores and 48 gigs of RAM on them. And we ran it in a converged architecture, so a lot of the infrastructure was housed on some of the nodes, and that gave us about six compute nodes for us to run this experiment on. And we did it with KVM, and then we did it with LXD, the Nova Compute LXD driver. So let's look at where we got. Now, some explanation for the graphs. These are Ganglia monitoring graphs that we grabbed while we were running a boot test. So we just kick off a Nova instance and keep it going, and just keep it going until the cloud kind of falls apart. The blue in the charts represents committed memory, so that is memory that, from the host perspective, has been allocated to running applications and processes. The green represents cached memory: buffer cache, page cache, things that Linux does to speed up read and write operations and things like that. And there's one more colour, purple, which on the lower right on KVM indicates swap. So that's when we started committing memory that's not being actively used and touched into swap, and things get noticeably slower when we get in there.
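The "keep booting until the cloud falls over" test described above might look roughly like the following; this is a sketch assuming python-novaclient, with placeholder credentials, image and flavor names, rather than the actual harness used for these runs.

```python
# Rough sketch of the boot test: keep creating Nova instances until the
# scheduler or quota refuses. Assumes python-novaclient; the credentials,
# image and flavor names are placeholders, and error handling is crude.
import time
from novaclient import client

nova = client.Client("2", "admin", "password", "admin",
                     "http://keystone.example.com:5000/v2.0")  # placeholder auth

image = nova.images.find(name="trusty-server")   # placeholder image name
flavor = nova.flavors.find(name="m1.tiny")       # 512 MB flavor, placeholder

count = 0
while True:
    try:
        nova.servers.create("density-%04d" % count, image, flavor)
        count += 1
        time.sleep(1)          # pace the requests a little
    except Exception as exc:   # "no valid host", quota exceeded, etc.
        print("stopped after", count, "instances:", exc)
        break
```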
So on the KVM side, at around a thousand instances or so is when we kind of hit the red line in there and we started getting into swap, and it continued to swap, and we committed another -- I think it's another 48 gigs or so -- above that to get that last remainder. And we were swapping so much that we just said, all right, we'll kill it now, we can't get any further with that. And we re-ran with LXD. And as we were getting up -- I think it was around maybe 1,200, 1,300 instances -- you know, we're steadily increasing, and it was good to see this memory footprint where we have tons of green, so we have a huge amount of headroom where we can continue making containers. And then we flat-lined for a little bit -- you may be able to see, under the 1,800, there's a bit of a flat line -- and the benchmark came back and said, I can't allocate, I've got a resource problem. And I said, well, I'm looking at the chart here, and there's plenty of RAM left. So I was talking with James and asked, what's going on here with this flat line?

What was happening? So it turned out we'd hit the scheduler limit. So the scheduler was saying, actually, the compute host has got no more memory. So the default RAM allocation limits that we ran the test with weren't sufficient to deal with the density of containers that we could get on each compute host. So we actually just twiddled those knobs, increased the over-commit a lot so that we didn't hit it again, and the cloud started creating instances again. So it just shows the difference: we didn't hit that on the KVM test -- I think we probably would have seen it at about 1,400 instances with the configuration we were running -- but we hit it relatively rapidly with the container test, and we just had to tweak that up and allow a higher level of over-commit on each host.

So once we fixed the over-commit ratios and set them rather high, from 1,000 or so it just kept on going; it continued to climb upwards. We ended up cutting it off at about 1,800 instances or so, as the converged infrastructure was just making the spawning really slow. And so I think the picture kind of shows exactly what we wanted to show in terms of how frugal LXD is with memory resources.
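To put that scheduler limit in concrete terms, here is a small worked sketch of the RAM over-commit arithmetic; the ratio values and per-host numbers are illustrative, not measurements from these runs.

```python
# Illustrative arithmetic for the scheduler limit described above.
# Nova's RAM filter roughly admits instances while
#     used_ram + requested_ram <= total_ram * ram_allocation_ratio
# so with lightweight containers you hit the ratio long before real RAM.
# All numbers here are illustrative, not figures from the talk.

def max_instances(total_ram_mb, flavor_ram_mb, ram_allocation_ratio):
    return int(total_ram_mb * ram_allocation_ratio) // flavor_ram_mb

TOTAL_RAM_MB = 48 * 1024     # one compute node, as in the test setup
FLAVOR_RAM_MB = 512          # RAM the flavor advertises per instance

for ratio in (1.5, 4.0, 16.0):
    print("ratio %5.1f -> %4d instances per host"
          % (ratio, max_instances(TOTAL_RAM_MB, FLAVOR_RAM_MB, ratio)))
# With the historical default ratio of 1.5, a 48 GB host tops out at 144
# such instances, no matter how little memory each container really uses.
```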
So another feature, something that matters, is startup time. For developers, right, if you're iterating on a project or a workload or whatever, you can go through your workflow faster and faster if you're not waiting on things to happen. I mean, how many people have rebooted a physical machine and you're waiting the five minutes for the enterprise firmware to finish probing devices just to get to the bootloader and run? And virtual machines are certainly faster, but with LXD -- remember that picture that we drew -- we don't have that virtual BIOS, we don't have another bootloader, another kernel to load and run; we're right into the workload immediately. And so this means startup time is dramatically different. So LXD launches an Ubuntu instance in 1.5 seconds and we're in, we're running, right? All the processes that you saw earlier are already going. You can SSH in, that's there. For the same KVM instance, you've got to go through all those things in that stack to get all the way up and finish init, and then we're in there. So that makes a big impact for workflows.

And so, if you recall the first density test that we did, and you look at this startup time, we went back and looked at the time it took to do the density run. So when we launched the 36 or so KVM instances, we kept track of how long it took for us to actually get that far. And then for the much higher number, you know, over 500 for the LXD containers, we looked at the time it took to get there. So the same server launched our 37 KVM instances in 943 seconds, and then with LXD it was amazing to see that we had launched more than 14 times the number of instances, but in less time -- about 20% less time than KVM. So startup time matters, and the density matters, and that really combines into some powerful technology.

I wanted to talk about another LXD feature when you think about that stack and how thin it is and what this enables: latency. So latency is important for many workloads. Telcos are very much interested in reducing the amount of time it takes to respond for message passing and things like that. You know, even in an OpenStack cluster, where we're sending messages all across the bus, latency can make a tremendous impact on the speed of your cloud and how quickly you can react. So we did a benchmark here. We used ZeroMQ as a latency benchmark, and we set this up. The remote latency setup here, on the top, is two KVM VMs running Ubuntu. They're on the same host, on the same software bridge. And the test itself sends one-byte packets, a million of them, and then averages the latency across the time it took for each one of those as well. And in this case, you know, one of the KVM guests, when the test starts, needs to send the packet. This is going to go through the guest's networking layer, it's going to go down into the host, which takes the packet, it goes to the bridge, and from the bridge it comes back up into the other guest, which has to wake up and be scheduled and picked, and then the packet is sent up into user space, and then you get a response. And so that can take a long time. Exact same setup with LXD: two machine containers on a shared bridge, running the exact same test, and we're over 50% less latent. And that's just because there's not as much in that stack that we have to go through, which means that your applications can communicate significantly faster.

The other test here, the local latency, takes out the networking part of the path. It's just the application talking to itself in two threads. And it was interesting to see: the local latency, on the bottom, looks to me like scheduling latency, right? We have two threads inside your virtual machine that need to be scheduled and context switched between, and we see a lot of the, you know, context switch overhead that you'll get in a virtual machine when we have to do VM exits, or you've got two schedulers and you've got to make sure that all happens just right. In LXD containers, we don't have that extra layer or an extra kernel or extra scheduler; it's just the same, you know, processes on the host. And so we saw the same 50-plus percent latency improvement there.
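As a rough illustration of the kind of request/reply latency test described above, here is a simplified sketch using pyzmq; the real benchmark used ZeroMQ's own latency tools across two separate guests, whereas this runs both ends in one process, so treat it purely as the shape of the measurement.

```python
# Simplified, illustrative version of a ZeroMQ-style latency test:
# bounce a 1-byte message back and forth and average the round-trip time.
# The talk used ZeroMQ's own tools between two guests; this is a sketch.
import threading
import time
import zmq

N = 100_000                     # fewer than the million used in the talk
ENDPOINT = "tcp://127.0.0.1:5555"

def echo_server():
    rep = zmq.Context.instance().socket(zmq.REP)
    rep.bind(ENDPOINT)
    for _ in range(N):
        rep.send(rep.recv())    # echo each 1-byte message straight back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                 # crude wait for the server to bind

req = zmq.Context.instance().socket(zmq.REQ)
req.connect(ENDPOINT)

start = time.monotonic()
for _ in range(N):
    req.send(b"x")
    req.recv()
elapsed = time.monotonic() - start
print("average one-way latency: %.1f us" % (elapsed / N / 2 * 1e6))
# To mimic the remote test, run the REP side in one guest or container and
# the REQ side in another, connected across the shared software bridge.
```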
So before I hand it back, I just wanted to cover some of these key benefits, right? You think about that picture, about how much we're taking out while still giving you a full machine container, and it delivers these key capabilities. Density, right? We use less memory per system than you will with a KVM machine, and that means you can stack more things together. Startup time: so your workflow, how quickly you can run through things, how you get through these processes, has a significant impact. And finally, latency, right? For folks who are interested in high-speed workloads that require very quick response times, getting all that stuff out of the way of your workload means that you'll be able to get more of what you need done, faster.

Okay, thanks, Ryan. So as I detailed before, we've got a technical preview release out with our Ubuntu 15.04 release. So if you want to try it now, you can go and try it on 15.04. We're obviously not recommending this for production use yet. We've got a number of features that we want to work on and implement before we're recommending that people start using this stuff in production. And our target there is really around our 16.04 LTS. So this time next year, we'll have our successor to Trusty out in April for 16.04, and we're aiming for a rock-solid container story with OpenStack at that point in time. So test now, give us feedback, tell us what's rubbish, what's good, and we'll definitely work on the results of that feedback.

But in terms of what we've generally got on the roadmap: we have an integrated roadmap between the LXD team and the guys writing the Nova Compute LXD driver, and we've got a few things that we're targeting over the next 12 months. So full block device support, to allow better Cinder integration, is top of the list right now. We're not able to directly present a block device into an LXD container right now. So we can't, for example, provision a Ceph RBD volume and then map that into an LXD container right now. But that's something that we feel, for kind of parity with KVM and libvirt, we absolutely need, so that the machine container experience feels the same for LXD and for KVM and allows, you know, a much more comparable feel to the solution. So that's a high target for us. Broader Neutron SDN support is part of our work in the OpenStack Interop Lab. We're working with a number of SDN partners, all of whom have expressed interest in LXD and in helping us enable their particular SDN in Neutron to work with LXD containers. So there's a series of work there to enable things like Contrail, stuff like that. Broader image format support: so LXD is currently constrained to using a root tarball format which, if you use a cloud regularly, is not what you use. You're typically using a raw image or a qcow or something like that. LXD itself is going to be growing support for a wider range of formats, so we'll get that natively via LXD. So whether you're using LXD standalone or as part of a cloud, you'll be able to use the standard qcow2 image that you're familiar with using with Ubuntu on KVM, but with LXD containers as well.

We alluded to this during the performance numbers we talked about, especially around density and the fact that we hit the scheduler limits pretty quickly with containers: resource management and usage reporting back into Nova is, again, another key focus. So figuring out how we represent the lighter footprint of a container back into Nova, and how we manage that more effectively. So, how we limit a container to, say, only being able to consume a couple of cores or a certain percentage of the memory of the host. Those are all features that are coming in LXD, to be able to constrain containers and have a container know it's only got a certain footprint -- it's only got two cores, it's only got half a gig of memory. That's a feature that isn't in the compute driver right now, but that's something that's coming over the next two releases.
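To illustrate the resource-management roadmap item above, here is a purely hypothetical sketch of the mapping a driver would need to make from a Nova flavor to per-container limits; the key names and mechanism are assumptions, since this was described as future work at the time of the talk.

```python
# Purely illustrative sketch of the resource-management roadmap item:
# translating a Nova flavor into the kind of per-container constraints a
# driver would need to apply (e.g. via cgroups). Names and the exact
# mechanism are assumptions; this feature was described as not yet done.

def flavor_to_container_limits(vcpus: int, ram_mb: int) -> dict:
    """Map a flavor's advertised size onto container-level constraints."""
    return {
        # cap the container at the flavor's core count
        "cpu.count": vcpus,
        # hard memory ceiling in bytes, so the container "knows" its size
        "memory.limit_bytes": ram_mb * 1024 * 1024,
    }

print(flavor_to_container_limits(vcpus=2, ram_mb=512))
# -> {'cpu.count': 2, 'memory.limit_bytes': 536870912}
```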
And full live migration support: so we can do live migration now, but there are some bits that don't get live migrated, mainly the security bits. So Tycho's been working on that, to make sure that when you live migrate a container, the security around it also gets live migrated as well. So that's the broad roadmap we've got between now and 16.04. I hope you can see how far we've come in the last six months, to have an initial product for our 15.04 release, and where we're going in the next 12 months. It's pretty exciting. I think it's a very compelling story, especially when you start looking at the latency figures that Ryan highlighted and how much closer you can get a workload -- whether that's a virtual network appliance or a big data workload -- you can get it right down next to the bare metal and get about bare-metal performance, but still have the usability of a cloud like OpenStack. Okay, so we've got 10 minutes or so for questions. I'm hoping there might be some. So if anybody wants to ask a question, we've got some mics. I can see...

What are the guest OS limitations?

So right now, if you want to do anything that requires interaction with the kernel from within the guest, you basically can't do that. So running something like Open vSwitch within a container is not possible today. I think we're going to look at how far we can push that limit. So there is always going to be some limit on a container compared to a full KVM guest, because you don't have a dedicated kernel, so it's sometimes not safe to permit those operations from within a container. So that's one key difference. I alluded to the fact that we can't do block devices yet. That's another key difference between containers and libvirt KVM, but we're looking to plug that gap. And I'm going to look at Tycho now, because I'm sure he can think of some more. Tycho, do you want to come up the front, and then if there are any LXD questions, you can answer them. Yeah, so it's Linux-only guests. So of course you can run a Windows image under KVM on top of an OpenStack cloud; you can't do that with containers. It has to be something that will run on a Linux kernel. So you could run a CentOS container, but obviously you're going to get the Ubuntu kernel still. You get some propagation of the host OS kernel down into the container as well. So there are some considerations when you're deploying that you need to think about.

Right, another example of an operation... Hello. Another example of an operation that won't work is mount. Right now the superblock parsers in the kernel are not considered secure, because they've never had to not trust what was on the superblock to begin with. So if I'm a bad user, I can craft a nasty superblock that will exploit some buffer overflow in the ext4 parser, and then I have kernel access and I can do bad things. So we don't trust mount right now, so you have to ask LXD: can you please mount this block device into the container, instead of actually mounting it yourself? And there's a lot of tap dancing you have to do to make sure that the user, the container, has never been able to write to the block device before it gets mounted, because if it has, then they can write a bad filesystem and bad things happen. There's a lot of kernel stuff there that's, I think, unexplored, and we're also interested in looking at that work and making sure that it is all safe and secure as well.

So, I've been looking at KVM performance and density for the last few months. I get very, very different results. I get a KVM guest starting in about 150 milliseconds. I got density similar to yours.
Have you looked at improving KVM in your OS?

So these are out-of-the-box numbers, right? I think that's the most important part. You install it and you have KVM. If you've ever seen the QEMU command line or whatever, it is sort of infinitely tunable. So there's lots of things that you could do, but there's always a trade-off in those places. So you certainly can trim a lot of that stuff and do it. The question is, do you want to do it everywhere you go? Right? Do you control the KVM or the kernel or whatever in the cloud you're using? Can you control those tunables in OpenStack if you're consuming it?

But you can get 10 times faster, right? Because you get 150 milliseconds boot time, and you get the same density, but with all the advantages of...

So there's a difference between the time it takes for the processes associated with the KVM instance to be active and the time when you actually have a usable system, and that's a significant difference.

I know what I'm talking about: full hypervisor and full guest OS boot, right? That's what you measure as the total.

Yeah, that's what we've been looking at. It's time-to-usable rather than time-to-active, if that makes sense. Yeah. You're looking at the same thing, right? All right.

Could you talk a little bit about the user space management tools, like LXC or libvirt or the Nova API?

Right. So let me talk a little bit about LXD and LXC. So LXD is a REST API web service, and the command line client, lxc, connects to that and provides a very straightforward interface to LXD for the different operations: image import, update, alias creation, profile settings. But it's all done via the REST API, so even the command line client is talking REST to it. And that means the integration in the Nova LXD driver is using the exact same API to define and create and everything.

I'm still trying to understand why I would want to use LXD versus just running either Docker or systemd-nspawn. So the graphs and images that you showed, Docker have shown them in the past, etc. So your density aspect is no different to running any other container technology, but I need to add an additional daemon in there.

So this is the difference between machine and process containers. You know, Docker is playing very much in that process container space, and that's great. And yes, the density figures will look very similar, because it's all using the same underlying kernel technology ultimately. What we're aiming for is, you know, the full machine container. So if you have workloads or deployment techniques or whatever it might be that, you know, work with KVM and full machines right now, we're aiming to make those directly transferable, without having to rearchitect to something more process-centric, but still be able to leverage all the lightweight container technology. Does that make sense?

I think I understand what you mean. So a full software stack that condenses into a single container, rather than running in a VM.

Yes. There are some more questions here. It's... yeah, you can't run a custom kernel in the container architecture, it's just not feasible, but, you know, we're very much targeting that space between a process container and a full virtual machine -- we're aiming for the full machine container here. So, yeah, okay, yeah. That's as close to a full machine as you can get without having a separate kernel. Yeah.
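Going back to the REST API answer above, here is a small sketch of what "the command line client is just talking REST" can look like in practice; it assumes the third-party requests-unixsocket package and LXD's default local socket path, and the endpoint names are based on the 1.0 API, so treat the details as assumptions.

```python
# Small sketch of talking to LXD's REST API directly, the same interface
# the lxc command-line client uses. Assumes the third-party
# requests-unixsocket package and LXD's default local socket path; both
# the path and the endpoints are assumptions based on the 1.0 API.
import requests_unixsocket

session = requests_unixsocket.Session()
SOCKET = "http+unix://%2Fvar%2Flib%2Flxd%2Funix.socket"  # percent-encoded path

# Server/API information
print(session.get(SOCKET + "/1.0").json())

# List containers, roughly what "lxc list" asks for
print(session.get(SOCKET + "/1.0/containers").json())
```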
Neutron. What's this? Sorry? What's your current state of integration with Neutron with this?

Absolutely. So if you're running the Neutron ML2 OVS reference implementation, that 100% works with LXD, with the Nova Compute LXD driver, right now.

Do you guys have any capability to guarantee -- so you've shown that you can overcommit like no other -- but can you guarantee RAM resources or something like that?

Yeah, and that kind of fits into the resource management part of our roadmap. So it's about ensuring that, you know, a container that's of a particular flavor type -- so, you know, either a single core or a certain memory configuration -- can only consume those resources on the box and no more. And, you know, how far you take your overcommit ratio is then based on what you think your workload profiles are going to look like -- so, you know, how big are people asking their containers to be compared to what their actual memory footprint is going to be. So there's a balance somewhere in there to getting your overcommit ratio right, and once we have the features in terms of limiting CPU or memory usage for a container, then that becomes a much more integrated solution, which should feel much more familiar from operating Nova with KVM.

Question on the block storage integration: what's the timeline on that?

So I think that's part of our 15.10 plan, so that's within the next six months of development, so between now and October. So hopefully by Japan we'll have a good story there.

Is there beta access? Can I get early access?

Not on the block storage yet. All of the LXD and Nova Compute LXD development is done in the open, and we have a PPA for that, so yeah, you can track it. If you're willing to build from source with the whole tree, the LXD tree is open, so you can get it as it comes in. Thank you.

Questions about using a different type of OS -- like, can I have CoreOS running in the container?

In LXD, it's... So you're limited by what you can run on top of a Linux kernel. So CoreOS is Solaris-based? Am I thinking of the right operating system? No, I'm not, I don't know. If it's Linux, right, it can run in there, right? If you're not dependent on specific kernel features or whatever, then any other Linux LXD can run.

In LXC there is a problem, right? We cannot run Fedora and other distributions on Ubuntu and stuff like that.

Sorry, I didn't catch you. Can you speak up?

In the case of LXC today, you cannot run a different version of Linux on Ubuntu. There is some filesystem incompatibility.

Sorry, you can't run a different version of Linux, meaning the Linux userspace? Like... I can run Precise on Trusty. Is that...? Yeah, he was talking about Fedora. As far as I know, those work as well. If you have an issue, go ahead and file a bug and we'll look at it. That should be supported.

Intel announced Clear Containers, I think, technology. How is that different from LXD, both technically and in the marketplace?

So they're using, I think, VMs, but calling them containers. They've just tuned down a lot of the VT-x stuff. I mean, they are the guys who do the silicon, so they really know how it works. How it's different from LXD is we're still using the traditional kernel virtualization, like namespaces and things like that. I think the biggest difference, I guess, is that they probably can't nest, because they're using VT-x and they have those sorts of restrictions. It's just KVM. Right, yeah. So it's just KVM, but super lightweight, I guess. They've done a lot of optimization.

A question: what kind of hardware assist are you looking at? Hardware assist?
It's working with the silicon vendors, so, you know, it'd be in the realm of VMX, VT-x, those kinds of protections that they're using. We have to be a bit hand-wavy there, I think. Yeah.

Any other questions?

Yeah, I mean, there's quite a bit. So some of it could be the virtual BIOS or whatever. The standard BIOS, I think, has got like two seconds that it waits no matter what. You can turn it off, but you have to tune it. You're loading the kernel. I don't have the histogram to break that down right here, but it's definitely there. The big rest of the time is all that: once we're through the kernel, we've done device probing, then we're getting to init, and init spawning, you know, lots more stuff to deal with the fact that it has hardware. Whereas in the container, we don't have a lot of that. So we go, you know, we're exec'ing init directly, right? And then systemd spawns things in parallel, and we're done. Yeah. You can tune it down, you really can. We were going for out-of-the-box numbers, right? You know, install the package, run the commands, this is what you get.

Well, so today we support things like... you can feed us basically a tar file, and it has a little metadata file in there that describes the personality and things like that that the tar file expects. And I don't know that we have any direct plans to build something to generate those tarballs for you. I think if you like Dockerfiles or whatever, you can generate that tarball however you want and feed it to us, and we'll be happy to launch it and use user namespaces and do all the security for you. Well, so we're going to distribute, you know, the pre-built Ubuntu system container images, and that's basically because we think that LXD is best suited for system containers. I don't think we're competing with Docker, and so I don't think we want to get into a, you know -- they have a really great story for how to build images and how to share images with people, and, you know, that's very useful for the app container case. For a system container, it's, you know, I don't want to have anything pre-installed; I just want an Ubuntu, and I want the Ubuntu that the Ubuntu people say is Ubuntu. So there's not a lot of image manipulation that needs to happen there. So I think for our use cases, it's not critical. However, we do have this API, and you can feed us a tarball that was generated however you want, and we'll run it in user namespaces for you. So...

We're about out of time. Sorry, we have to wrap up now. Any more questions? Yes -- feel free to come up and find us over the break. Yeah. Yeah, thank you. Great talk. Well done.