All right, we're about to get started. Just a couple of minutes while people are finding seats and a little bit of space. Please do come and enjoy some fantastic tacos. We have a taco truck just outside the convention center; drop by for breakfast tacos in the morning or lunch tacos. I'm a native of Austin, and I'm so excited that people get to experience the wonderful thing that is breakfast tacos. It's everything you want from breakfast wrapped up in a plate, and you eat the plate, and then you're done, and there's no trash. That's how ecologically friendly we are here in Austin.

You can also enter to win. We're raffling off one of these Orange Boxes. Actually, not any of these from my demo, but we have one, brand new and ready to go. If you go to this URL, you'll be asked to fill out a little survey, and you'll be entered into a drawing for an Orange Box. I had a bit of a hand in designing the original Orange Box. It's an incredible machine; it's like your data center shrunk down into a suitcase. There are 11 nodes in here: one master node and 10 more. It's great for development and for prototyping. You can take it anywhere you want, plug it into any outlet anywhere in the world, and run a little cloud.

Now, on that note, we're going to talk today about how we are creating the world's fastest OpenStack. What do all of these have in common? The underlying technology that makes all of these go fast is, fundamentally, containers. And that's exactly what we're bringing, and have brought for quite some time now, to Ubuntu OpenStack: containers, at multiple levels.

When we think about containers, we really have to take a step back and think about virtualization. There was that first generation of clouds, whether it's VMware's vCloud or something like Amazon AWS, built on a Type 1 hypervisor, Xen, where instances run inside of virtual machines or paravirtual machines, or on another hypervisor like VMware's. Then there's another generation of clouds, plenty of clouds, that use a second type of virtualization at the plumbing layer: KVM, or maybe on your desktop you're using VirtualBox or Vagrant or another layer on top of that. Well, there's actually a third type of virtualization, one that's driven by containers, and that's really an incredible thing.

Was anyone at the OpenStack Summit in Paris? Does anyone remember Paris? There I had the distinct pleasure, with my colleague Tycho, of unveiling Canonical's plans around LXD. It grew out of something we had worked on for a long time called LXC, Linux Containers. That work was originally led by IBM; Canonical took it over and has continued shepherding the Linux Containers project, linuxcontainers.org. By that point we'd been working for a few months on taking Linux Containers to the next level, and that is the concept of LXD: a hypervisor for containers. But not Docker containers, not process containers. We're talking about machine containers, okay? And that's a big difference.
It's great to put one process, just one process, inside of a container and say: hey, you are my Apache container and I want a lot of Apache containers, and you are my MySQL container and I want a couple of those in a cluster. But if you want to run Apache and MySQL in the same container, that's not a process container, that's not an application container; that's a machine container at that point. Machine containers boot an entire operating system, right? We launch /sbin/init, we run syslog, we run an SSH daemon, we run all of the things that a traditional application expects when it runs on a Linux machine. That's very different from an application container, which certainly has its place in a microservices architecture where you're going to chop up every piece of your architecture to do one thing and do it very well. But machine containers you can treat just like virtual machines. You can allocate memory and disk and network and I/O to them, and you can put limits, upper and lower bounds, on them. You can guarantee that a container gets so much CPU and memory and disk, and you can keep it from taking over the rest of the system. Those are machine containers, and that's what we build our Ubuntu OpenStack on top of, and what we put into OpenStack as another hypervisor.

So machine containers fit this really nice area right in the middle. They're like physical machines, in that you shouldn't really notice the difference between running a process inside of a virtual machine, a physical machine, or a container. But they're like virtual machines: they spin up really fast, and they share resources with the underlying hardware. And at its core, this is using the exact same primitives that we use everywhere else in the world that we do containers, Linux containers. It's cgroups under the hood. We're using seccomp and AppArmor for the security and protection profiles around it, discretionary and mandatory access controls. We launch these containers as non-root users by default, and that's extremely important. You're going to see me do a few demos here in a few minutes, and you're not going to see me use sudo anywhere. Why? Because those containers are running as unprivileged containers. Root inside of that container is not root outside of that container. Dustin inside of that container is not Dustin in another container. It really is a secured environment.

Now, very important to making all of this work is the fact that LXD is REST API driven. This architecture shows you how we deploy OpenStack across these machines. On each host, and those hosts in this case are physical machines, there's A, B, C, and we could go on and on, we run one Linux kernel. That's really the beauty of containers, and it's how we can achieve 10 to 20 times the density using machine containers compared to KVM virtual machines: one kernel is shared across all of the container instances on that system. If I want to run 60 LXD containers on one node, there's one Linux kernel for the host, there are 60 guests all sharing that same kernel, and each one is allocated its own disk space and CPU and memory. If I wanted to run 60 KVMs on that system, I'd be running 61 Linux kernels: one for the host, and then we'd boot 60 more. For Linux-on-Linux workloads, that just doesn't make sense, right?
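To make the machine-container idea concrete before we go on, here's a minimal sketch; "web" is a container name I'm making up, and ubuntu:16.04 is one way to spell the image:

    lxc launch ubuntu:16.04 web       # boots a full OS: /sbin/init, syslog, sshd, ...
    lxc exec web -- ps -ef | head     # PID 1 is init, not a lone application process
    lxc exec web -- sh -c "apt update && apt install -y apache2 mysql-server"
                                      # both services together, in one machine container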
Now, if you want to run Windows on Linux or Linux on Windows, of course you need a virtualization layer. Actually, that's not entirely true anymore. Anyone heard about that Ubuntu subsystem running in Windows? I need to update this deck now. That's a really crazy, strange example that we won't go into here. But for the Linux-on-Linux workload, machine containers fit the model extremely well.

Now, LXD itself is a single process. It's a daemon; that's what the D in LXD stands for. It's part of Linux Containers, LXC, but LXD is the daemon that watches all of the containers on the system. It's the single entry point for an administrator to talk to and say: list my containers, start a container, stop a container, back up a container, snapshot, restore, clone, live migrate, limit the resources, reconfigure. That's the one entry point we have for all of those. And for the eight-plus years of the LXC project prior to LXD, we never really had that. Those of us who knew what we were doing could type a really long command, or we'd write a script, or wrap that in an interface, but LXD gives us that. And it's not just a command-line interface; there's actually a REST API for it, which fits really well into the OpenStack model of being API driven, so that external utilities can authenticate. Everything LXD does is authenticated over TLS. We authenticate against that daemon, and then we run whatever we need to run against it. We query it, we execute actions.

And with that REST API, we can write all sorts of applications that talk to a pool of LXD machines. We've done that ourselves, of course, with the Nova-LXD project. Nova-LXD is an extension to Nova, a hypervisor plug-in that allows LXD to act as another of the hypervisors that Nova can manage. Nova, of course, has always had support for KVM, and that isn't going away; we still love KVM for full-virtualization workloads. You've got other drivers for VMware and even Hyper-V, and LXD sits alongside those. The beauty of LXD is that it can run anywhere you're running Linux. It doesn't require any hardware acceleration, it doesn't require VT, and it runs in virtual machines. You can run Linux containers inside of a virtual machine. The promise of recursive virtual machines, VMs inside of VMs, has never really come to fruition, but we can run extremely efficient Linux containers inside of VMs. So we can deploy all of OpenStack into a public cloud for testing purposes, or even for some fairly useful use cases.

The lxc command-line client uses the REST API to talk to LXD. It binds to a local socket if you're talking to your local LXD, but you can also query and communicate with any other network-addressable LXD daemon on the network.

Now, what makes LXD really go fast in Ubuntu is ZFS on Linux. It accelerates the entire experience, primarily because of the copy-on-write snapshotting and cloning. Underneath LXD in Ubuntu, you can configure it with one command: lxd init. You do run that one as sudo, because that's a decision shared across all the users using LXD on the system. sudo lxd init will prompt you with a couple of questions, and if you answer yes, I want ZFS; yes, please, I want my containers to go fast; then we use ZFS. ZFS has extremely efficient deduplication and snapshotting. So you pull down an image once, that first image from the LXD image repository. And you can run your own repository.
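In command form, that one-time setup and the image flow look roughly like this; the container names are made up:

    sudo lxd init                 # one-time setup; answer yes to the ZFS question
    lxc launch ubuntu:16.04 u1    # the first launch downloads the image into the store once
    lxc launch ubuntu:16.04 u2    # every later launch is a copy-on-write clone, near-instant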
We put up public images for a number of OSes. You can pull LXD images for CentOS, Fedora, Ubuntu of course, Debian, SLES, openSUSE, excuse me, a bunch of different OSes, and run those as LXD containers. You download an image once, you've got one copy of it inside the image store that LXD manages, stored in ZFS, and then every time you start a container, you take a copy-on-write snapshot of it. If you don't use ZFS or another copy-on-write file system, you end up extracting that entire image and storing copy after copy after copy, and your deduplication is not present; you don't have a deduplicated file system at that point. ZFS also does continuous integrity checking. You know immediately if there's a problem, and it can actually repair and recover from a number of common file system problems without you even noticing. And it has efficient compression: you can actually store all of this compressed. There's a CPU cost to that, but if CPU is cheaper for you than disk, that's a personal decision. ZFS is really what's going to make these demos go fast here in a second.

So LXD is fast, secure, and efficient. It's extremely fast due to the fact that we're sharing a kernel and using copy-on-write snapshots, so both storage and compute are very quick; it performs just as fast as running natively on the underlying hardware. It's efficient and allows you to use your underlying architecture very efficiently, to get the most out of your machines. And it's secure. We run as a non-root user by default. We have discretionary access controls that extend that non-root user into the host file system, so that if a user does escape, they can only read and write what that non-root user, usually the nobody user, is able to read and write. We use mandatory access controls: AppArmor profiles that ensure, again, that a process that escapes has no more privileges than the unprivileged process it was before. And we apply all of that by default.

So why is this the world's fastest OpenStack? I'm going to go through a couple of these. First of all, our hyper-converged architecture deploys in minutes. The conjure-up tool that Mark demoed earlier, I'm going to run that again; I'm going to kick it off. The guys have been working on these boxes for a couple of days now. How many dozens of clouds have we deployed? We've deployed OpenStack hundreds of times on these machines in the last three days. Why? Because we can do it in minutes, 10 or 20 minutes, and it's consistent. It's the same every single time. We can launch LXD instances in a matter of seconds; I'm going to do that for you here, and Mark launched a few through Horizon as well. We can snapshot the entire environment in under a second. Actually, it takes me about two seconds; I should update that. In two seconds we can snapshot the entire OpenStack environment. Then I'm going to do a really nasty thing to one of my OpenStack nodes, the node that's running Glance or Keystone or whichever one we choose here in a second. We're going to destroy it and then restore it from backup. The instances themselves that we run in OpenStack perform at bare-metal speed, equivalent to bare metal; we're going to run a benchmark that demonstrates that. And then we can migrate services in real time. We can take an instance running on one node and move it to another node as fast as we can move that data across the network. And at gigabit speeds on these little machines, that's pretty quick.
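Concretely, with the second host registered as a remote over that same TLS-authenticated REST API, the migration is roughly this; the names and address are made up, and live migration of a running container needs CRIU available on both ends:

    lxc remote add host-b 10.20.0.12           # trust the other daemon, over TLS
    lxc move local:keystone host-b:keystone    # evacuate the service off this host
    lxc move host-b:keystone local:keystone    # after maintenance, bring it back home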
In your data center, where hopefully you're running 10 gigabit or 40 gigabit or more, that's essentially real time. That's basically instant. And that's important, because it's fun to stand up those dozens of clouds that we've installed here; install it, destroy it, install it, destroy it. But as our friends at Deutsche Telekom know, it's not just about standing up a cloud and then throwing it away at the end of the day. It's about standing up that cloud and keeping it up for months, years, decades maybe. And to do that, you've got to be able to manage it operationally over time. An essential tenet of managing it is upgrades. The OpenStack we've got deployed here is OpenStack Mitaka. Who's running Mitaka already? Awesome. We're running Mitaka already; it released last week, right? But at some point we're going to upgrade from M to N to O to P, and we need to do that over time. The way we upgrade OpenStack is we upgrade Ubuntu underneath, of course, and then we upgrade OpenStack itself. And the way we can do that is by migrating services off of a system, because they're contained, doing the upgrade, and then migrating them back. That's really important to the operational efficiency of a real OpenStack in production: to be able to evacuate an entire rack of all the instances and all of the OpenStack services running on it, perhaps perform some physical maintenance on it, replace the RAM, add some CPU, and then bring it back into commission and put those services back on it. That's really why LXD is super important at that layer underneath the cloud.

So at this point, I think we're going to do a demo. And because I can't work on a command line with a stuffy sports coat on, I'm going to take that off. All right, here we go. First of all, I'm probably going to need to refresh a couple of web pages, since my authentication keys get timed out here. Let's log back into this one. We're going to see we have an OpenStack already up and running. In this OpenStack, if I go to the administrative tab and I look at the hypervisors (OpenStack Mitaka, by the way), I've got three machines that are dedicated hypervisors, and the type of each one is the LXD hypervisor. And that's really cool, because these are little machines. These are Intel NUCs. Each one has 16 gigs of memory and merely 160 gigs of SSD, nothing like a real server. I'll very quickly grant you that, but it doesn't work out well when we try to bring real servers into a room like this. But we can run on these very small machines, and we can run a lot of containers on them. I've run as many as 636 LXD containers on an Intel NUC, on a 16-gig-of-memory Intel NUC. And that's because for those 636 containers, there was one Linux kernel running. It took about 20 minutes or so to boot all 636 of those containers, and about the same amount of time for every one of those containers to draw its own addressable IP address. I'll admit I had quite a bit of networking work to do, rearranging things a bit to get an IP range big enough to handle 636 unique IP addresses. And once I'd run it a few times, you can crank through several thousand IP addresses in a matter of a few minutes, as I said. So we needed a really big network, but you can do this at home on your laptop running Ubuntu.
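If you want to try that density experiment yourself, it's nothing more than a loop like this; the names are hypothetical, and the real constraint, as I found, is a DHCP range big enough for every container:

    for i in $(seq 1 100); do
        lxc launch ubuntu:16.04 "c$i"   # each boots in seconds, all sharing one kernel
    done
    lxc list | grep -c RUNNING          # count the running machines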
The second thing to look at is what kind of OpenStack we're talking about. For this demo, it's a relatively basic OpenStack, the core of OpenStack. Jonathan, in his opening keynote this morning, mentioned compute, network, and storage. We've got Nova with the LXD driver. We've got Ceph as a storage backend, providing both object and block storage. Neutron with basic Open vSwitch. MySQL and RabbitMQ, Horizon and Keystone. And that's really a core OpenStack. There aren't a lot of bells and whistles here, but for our customers and partners we can create some really interesting OpenStacks. We can model one first and deploy it using all of the same tools I'm talking about here. This is what we're running and how the pieces interact with one another. Each of those lines is an interaction: RabbitMQ providing the message queue for everything, the Percona MySQL database providing the storage backend, and so on.

Okay, so the first thing I want to do is kick off conjure-up. conjure-up is essentially a chaperoned experience for deploying OpenStack. It's a little like DevStack from one perspective, in that it's one command you can run on one Ubuntu server to deploy all of OpenStack on that one machine. But interestingly, it can also drive deploying that same OpenStack to many machines. If you have a MAAS, it can deploy straight to bare metal, as much bare metal as you want. And you can also switch from this slick command-line interface to the much prettier Autopilot, the graphical interface through the web that we saw a little bit earlier. I'm going to deploy OpenStack with Nova-LXD, and I'm going to do it all on this machine. You'll see here that it's telling me I'm running this on the Mitaka release. This is going to take a few minutes as it bootstraps. It's going to start up one node where we instantiate a Juju controller, and then that Juju controller is going to spin up, I think, 15 or 16 other LXD containers on the same system and deploy one OpenStack service into each of those containers.

I happen to have that already running here on this system. I'll make this just a little bit bigger. I've got a checklist of all the things I want to show, and there are so many that I had to make a list for myself so that I don't forget any of the goodness. So the first thing is to look at what's actually running here. I'm going to run juju status and pipe that to less, and we'll see a command-line view of that same graphical web interface we just looked at. That was the graphical interpretation; this is the text-based interpretation. You'll see three pieces of Ceph, Glance, Keystone, MySQL, Neutron, Nova, a dashboard, Horizon, and RabbitMQ. You'll see how all of these are related; again, this is the text interpretation of all those extremely important lines, and what their status is at any given time.

Now, how does that translate to LXC? I'm going to run lxc list, and I think I'll need to pipe that to less as well. These are all machines, LXD machines. Every single one of these is a machine that has booted Ubuntu Xenial, Ubuntu 16.04, which just released last week. You'll see every one of them has its own IP address, and that's an IP address that's accessible to all of the machines on the network, at least at this level.
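For those taking notes, the whole flow so far is roughly these three commands; conjure-up's spell names and invocation have shifted between releases, so treat this as a sketch:

    conjure-up            # guided installer; I pick the OpenStack with Nova-LXD spell
    juju status | less    # the text view of every service, machine, and relation
    lxc list | less       # the same services, each one an LXD machine container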
Ignore the Snapshots column for now. Of course, now that I've pointed your attention to it, you're not going to be able to look at anything other than the Snapshots column, but we're going to look at snapshots and restore one in a second. Note to self: don't point out what you don't want people to look at.

So all of these machines are running, and I can pick any one of them at random. I don't even know what this one does, but we're about to find out. I can do an lxc exec on that machine, we'll go into bash in that machine, and if I run a ps, we can make a pretty good guess as to what this one's running. This looks like one of our Nova compute nodes. Wow, how does that work? It works like this because LXD inside of LXD just works. We can run containers inside of containers inside of containers, turtles all the way down, and every single container performs at the same level as if it were running on bare metal. Why? Because it is actually running on the bare metal. These are just processes, process groups, running on the host kernel.

I was hoping not to land on a Nova compute node, so we're going to do that again and pick another one at random; we have a 3-in-16 chance of getting another Nova compute. So let's take a look at this one and see what it's running. Ah, that's a little bit smaller. This is one of our Ceph machines; we see Ceph down here at the bottom. So this is what it looks like inside of an LXD container. ps shows what looks like a system that's booted. You see the first process, PID 1, is /sbin/init. You'll see that we're running things like logind, dbus, and rsyslog, which is important if you want to get any logging information off of it. And SSH: it's kind of nice, if you want this machine to feel like a machine, to be able to SSH into it like a machine. I'm sort of taking the fast track into the machine by using lxc exec to run bash in the system, but we could just as easily SSH in if we had our keys appropriately installed.

So let's look at the storage underneath this machine. I said ZFS earlier. If I do a zpool status (points for typing my password right on the first try), we see we have our ZFS pool. The pool's name is lxd, and it's online. I love seeing that line at the very end; that's always extremely important: no errors, right? sudo zpool list will show that this is a 200-gig zpool, of which 30 gigs is used. That's pretty amazing considering how many machines and how many snapshots of those machines I have here. I've got 16 LXD machines running, every one running its own unique copy of Ubuntu 16.04, and I had five or six snapshots of every one of those containers. So we're talking about over 100 copies of an operating system there, and that's part of what that dedupe, that copy-on-write, is really doing for me here. And then I can get a little bit of data out of this. If I do a zpool iostat -v; I thought there was more information there, but what we can see is a bit of what's going on, if we were to watch it over time.

Now, launching an instance is just as simple as this. If I do an lxc list, you can see that all the LXD machines we currently have running were generated by Juju; Juju took names that were these UUIDs and sort of enumerated them. If I just wanted my own container, I could lxc launch. I'm going to launch an instance of an image called ubuntu-xenial, and I'm going to throw time on the front of that, so we'll see how long it takes to start up. And this is about a three-second process.
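Roughly, that's just this; the alias is from my local image store, so yours may differ:

    time lxc launch ubuntu-xenial x1    # a few seconds to a booted machine
    lxc list x1                         # a moment later, it has drawn its own IP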
That's three seconds to allocate this machine, boot it, run /sbin/init, and draw an IP address. That's remarkable. We can do this over and over and over again. This machine is pretty heavily loaded at this point. From some of the stats at the bottom in yellow, we can see that the system load on this little machine is seven, and that's on a four-core system; of the 16 gigs of memory, about 78% is in use. So this is a pretty heavily loaded machine, but it's still quite responsive and I'm still able to work, and it's running all of OpenStack. It's doing quite a bit here. And now, if I lxc list, we should see at the bottom of our list those three new containers that we launched. All three of those containers have their own IP addresses, and we can get into any of them. This one's going to be a little bit quieter a machine, since there's no workload deployed on it, but it is certainly there.

So let's play with the limits. Let's take one of these systems. Actually, I'll delete those three first: lxc delete -f, that one and that one and that one, and reclaim some of those resources. And let's limit the CPU and memory for one of these systems. We're going to take this one at random. Clear the screen, lxc exec that, and cat /proc/cpuinfo and grep processor. I see I have four processors. That means this instance is able to see all four processors on the system. And let's see how much memory this system has. This is one of the ones I've already poked: this machine has access to 256 megs of memory. So it's not capped on the CPU side, but it is capped on the memory side of things.

So I'm actually going to set the limit on how much memory and CPU this machine can use. I'm going to hop over to my cheat sheet here, as I've forgotten the exact command, and Tycho's going to help me with this one. What's the limit? limits.cpu equals one on this machine? It's on that machine. limits.cpu equals one, limit.cpu; one of those is going to get me there. I thought I had a screenshot of that. Help me out with that one, Tycho, in the meantime. I do that one all the time and I've just forgotten the order.

In the meantime, what we'll do is benchmark one of these. On the host system here, I'm going to run sysbench --test=cpu run. This executes a CPU test against the physical machine that's hosting all of this. It runs 10,000 operations, and in this case it took 10.7 seconds. Now, if I exec into any one of those machines, let's pick one here, I may need to apt install sysbench, because I doubt that it's installed already. I'm going to benchmark the CPU inside of this container and see how it compares to our 10.7 seconds from a minute ago. So now we run the same sysbench --test=cpu run, and we wait our 10 seconds. That one took a little longer; let's try it again. What's important here is that you, as a user of an OpenStack who gets allocated some instances, want those instances to perform as fast as possible, as if they were on bare metal. There is a little jitter: we saw 12 seconds here, we see 10.6 there. What we're seeing is close to equivalent performance to running on the physical machine.
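For reference, the two runs are roughly this; that's sysbench 0.4 syntax, as shipped in Ubuntu 16.04, with a made-up container name:

    sysbench --test=cpu run                    # on the physical host: ~10.7s here
    lxc exec x1 -- apt install -y sysbench     # the container won't have it yet
    lxc exec x1 -- sysbench --test=cpu run     # inside: ~10.6 to 12s, same ballpark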
In fact, we see cases where customers or users want to use LXD as an alternative to allocating whole physical systems to a user for some period of time. There are certainly other ways of doing something like that; Ironic comes to mind. With Ironic, you've got a driver that plugs into Nova, Nova can allocate a physical machine that gets installed, and then that user receives access to that system. There are a lot of problems with that. A user with physical access to that system can do really nasty things, like overwriting the BIOS or injecting trojans. It really gets ugly, and it sometimes takes a long time to reboot a physical machine into a... reboot a physical machine into a... I can see everyone nodding now... into a brand-new, freshly installed system. With LXD, what you can do is say: I want to run one container, and only one LXD container, on that physical system. The startup happens in seconds, as opposed to the minutes it takes to reboot and reinstall a physical system. You can unconfine the resources for the user who's been allocated that LXD instance, so that they have access to all 64 CPUs, the whole terabyte of RAM, and all of the multiple terabytes of local disk. It feels like that user has access to a big physical system, and their processes are running on the host kernel, on the physical hardware. But with LXD we can still prevent things like writing to certain devices or overwriting the firmware or the BIOS. It's really a great model for "I want a whole physical system, but I want the API and the manageability of an OpenStack."

All right, so what we're going to do now: Tycho, check the syntax for me. It was the equals sign? limits, there we go. And I could have just read the man page, right? And then let's change the memory: limits.memory to 512, with an M, MB. Geez, that's why we need machine learning, to figure out how users actually interface with a system. All right, so we've now set the memory to 512 and the CPU to one. So now let's go back into that machine and check our processors. We now see exactly one processor, which is all the CPU this machine is going to receive, right? Let's check how much memory is available: we see we have a total of 512 megs of memory. So we've shrunk how big that system can be. We've limited... I don't even know what this system's doing, let's check. This is one of our Ceph nodes. The Ceph OSD doesn't need access to that much memory or CPU to do what it needs to do, and we've now confined it a bit.

Let's snapshot and restore. How much time do I have left? How am I doing on time? Yep, all right, so I've got five minutes or so. Let's snapshot and restore. This is pretty cool. So let's take our list of LXC machines. I'm going to get just the list of the names themselves and loop over those with xargs. Ready? I'm taking a complete snapshot of my entire OpenStack infrastructure. And in the time that it took me to say that sentence, we now have a snapshot of my entire OpenStack infrastructure. If I do an lxc list, we're going to see that column that I told you not to look at earlier. You can look at it now: it went from five to six. I have a snapshot of every one of those containers.
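So, for the record, here's the syntax I was fumbling for, plus one way to write that snapshot-everything loop; the container name is a hypothetical Juju-generated one, and the name-scraping is just one approach:

    lxc config set juju-machine-3-lxd-1 limits.cpu 1          # applies live
    lxc config set juju-machine-3-lxd-1 limits.memory 512MB   # so does this
    # snapshot every container on the host; each one is sub-second on ZFS:
    for c in $(lxc list | awk -F'|' '/^\|/ {gsub(/ /,"",$2)} $2 && $2!="NAME" {print $2}'); do
        lxc snapshot "$c" snap3
    done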
So now let's do something really nasty. Let's go to this machine. We're going to go into that machine, which is running, oh, I don't know, it's one of my Ceph units. Let's do something bad. That's not advised, right? We're going to do it anyway. And now we're going to try to go back in, and holy cow, I can't find /bin/bash. What happened to it? Well, it's a good thing I can restore from backup. So we're going to take this system and lxc restore it. I need a snapshot name, so if I do an lxc info on that container, we're going to see the whole list of snapshots that have been taken, and I want to restore snap3, which was taken just a few seconds ago. And before I could even finish the sentence, it's already restored. So now we go back into this machine, and wow, look: bash is up and running and we've got our OpenStack back. How cool is that?

I think we're out of time; I've got maybe time for one or two questions. There's so much more to say here. This is really exciting. Of course, we can launch instances in OpenStack. Let me just launch some instances while I'm taking this question, because that's always fun, right? Go ahead, sir, what's your question?

Absolutely, we can. The question is: can we limit the root disk size of each container? We can currently limit the number of CPUs, the amount of memory, the amount of disk that any container consumes, the network I/O (that's another fun demo: throttle the I/O, do a big wget, limit the I/O and watch the wget slow down, then unlimit it and watch it speed back up), and disk I/O, IOPS essentially. Anything else, Tycho? That's it, those five.

Yes, sir? The question is: what's the relationship, if any, to the Open Container Initiative? The Open Container Initiative is a project led by the Linux Foundation. It was really meant to help bring Docker and Rocket, Docker and CoreOS, together, to, I guess, kind of stop a container war that was about to take place on the West Coast. We're supportive of that. I think the Open Container Initiative has two goals: one is to establish a container runtime, an application container runtime, and a second one is to define an application container format. The containers we're talking about here are machine containers, so it's a little different. The runtime for us is boot: we run /sbin/init, right? The format is a root tarball. We didn't need a foundation to help us come to terms on that, frankly. But if something comes out of the Open Container Initiative that we should support in LXD, we're absolutely supportive of it. Good question.

Yes, sir? Great question. Everything I've shown so far here was Ubuntu on Ubuntu. The question is: can we do other OSes on Ubuntu? And the answer is yes, unequivocally yes, we can. The only limit is if you're running user-space code that depends on a kernel feature, you know, a RHEL or CentOS or Fedora kernel feature. Then no. But for doing user-spacey stuff? Absolutely. We've got customers running Scientific Linux, CentOS, Debian inside of LXD containers, no problem. It does matter sometimes, but you know, there's a lot of code out there that doesn't depend on the kernel. I've been around Linux for a long time, and there was a time when the version number of the kernel was the only thing you cared about, right? Thankfully, the Linux kernel has stabilized so much that it usually just works. You might want to tweak for low latency, or tweak here for support for a video card or a network card or something like that, and that is important. But if your workload is a database that doesn't care what kernel it's on, go for it all day long.
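Back to that limits question for a moment: the wget throttling demo I mentioned is roughly this, assuming eth0 is a device defined on the container itself (override it from the profile first if it's inherited), and x1 is made up:

    lxc config device set x1 eth0 limits.ingress 1Mbit   # the big wget slows to a crawl
    lxc config device set x1 eth0 limits.ingress ""      # cleared; right back to full speed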
Yes, sir? Why ZFS instead of Btrfs? Does anyone else have that question? You're welcome to run Btrfs under LXD; we've got great support for it. I ran Btrfs for 18 months before I started using ZFS. Btrfs is native in the Linux kernel itself; it's been around for going on 10 years. But have you ever filled up a disk with Btrfs? It's not a pretty experience. It's better now. It's better now, indeed, indeed. That's why, you know, you're welcome to use Btrfs. We've got support and guides on how to use just a basic directory underneath, Btrfs underneath, or ZFS underneath. In our testing, we found ZFS to be bulletproof, just rock solid, and it's been a great experience for us. Thank you, though; please do try Btrfs and let us know. File bugs: github.com/lxc is where you can find more information about this work; I should put that up as well. Please file issues. If Btrfs is not working well enough for you, we'll look at it from an upstream perspective.

Over here. Yes, sir? Wow. I have no answer for that question in this context right now; grab me afterward and we'll talk about it. The question's about LXD and networking, specifically VLAN networking. The beauty of LXD is that you just tell it what bridge to use, and it just uses it. So as opposed to a whole bunch of code inside of LXD trying to recreate SDNs or provide network functionality, LXD just says: give me a bridge, and I'm going to point instances that boot at DHCP off of that interface. And that bridge can be a local bridge, which is what I have here. It could be a Mellanox or a Juniper or a Contrail or, you know, anything like that. Excuse me? It does not have to be, absolutely not. It just has to be able to communicate through that bridge. By default, a container is going to DHCP and request an address, but if you've got a static route and you can get out through that bridge, go for it.

Yes, sir? In the back? Each vendor has its own distribution of... each container has its own distribution. Each container boots a root file system that is an OS: Ubuntu 16.04 or CentOS 6 or Fedora 20, whatever. Yes, it's just a root file system that makes up that container.

Yes, sir? Is it possible to run a Docker container inside of LXD? Yes, yes, yes. You can run Docker containers inside of LXD. If you check out my blog, I demonstrated that last week running on a z Systems mainframe. So we had a mainframe with an LPAR; inside of that LPAR we had KVM; inside of that KVM we had Ubuntu; inside of Ubuntu we had LXD; and inside of LXD we had Docker. And it all just somehow works, and works well. You can use constraints. How are we doing on time?

Yes, sir? Everything that is supported by LXD is supported over the REST API. The REST API exposes everything. It's not a REST API that we bolted on after the fact, where we add a feature and for a while it's not supported in the REST API and then one day someone opens it up. No. The REST API is the interface; it's what the command-line tool talks to in order to get to LXD. So the REST API has been a first-class citizen from day one.

Last question, then I'll step out and answer some more. Yes, sir? That's absolutely right. So the question is: does the snapshot and restore utilize ZFS? And yes, absolutely. LXD does have a little bit of knowledge about the underlying storage. It knows about Btrfs, it knows about ZFS, and when it needs to perform a snapshot, it asks that underlying storage system to perform the snapshot on its behalf.
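Underneath, that last point is roughly this mapping; the ZFS dataset names here are approximate:

    lxc snapshot c1 snap3    # LXD asks ZFS for: zfs snapshot lxd/containers/c1@snapshot-snap3
    lxc restore c1 snap3     # LXD asks ZFS for: zfs rollback lxd/containers/c1@snapshot-snap3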
Yes, so there's one more slide here, just talking about process containers versus machine containers, and I'll leave you with this: the LXD source code is about 28,000 lines of Go. 28K lines of Go to create, essentially, a hypervisor that turns containers into virtual machines, machine containers. By comparison, the Docker source code is over 400,000 lines of Go. They're both written in Go, so it's an apples-to-apples comparison. And part of the reason is that LXD's defined mission is to boot a machine. That's it. It just wants to boot a system. Everything else is left to the OS running inside and to the administrator on the outside. It's really trying to stay out of your hair from that perspective. So on storage, we don't have a whole lot of logic built into LXD itself; we use the underlying system. And to the networking point, the same thing goes: we didn't bake a whole bunch of network knowledge into Docker, excuse me, into LXD, when we could just leverage good networking outside of that container. Thank you so much for your time. I'll step outside here in a second.