Thank you. Good afternoon — I hope everyone's holding on in there. As I said, I'm Mark Baker. I work for Canonical; I'm a product manager there. For those of you who don't know, Canonical is the company behind Ubuntu. You can see it in the user survey here — thank you to the Foundation for running the user survey — Ubuntu is a very popular platform on which to run OpenStack. Many people, most people you could argue, run their OpenStack on Ubuntu today.

This is a selection — this is the NASCAR slide, I believe people call it, the wall-of-logos slide. These are some of the people running different types of OpenStack on top of Ubuntu today. A lot of them you've seen as Superusers or on stage at OpenStack Summits — people like Best Buy and others. And again, it's a small selection of the customers we engage with. That's the corporate advertising over, you'll be glad to know.

So, what do all of these guys have in common? Anyone want to guess? Or do I just get on with it? All of these guys run containers, right? All of these technologies are based upon containers. And who here is running containers? No one? Containers are a big opportunity, then, frankly. Good. So all of these guys are running containers — and a lot of them are running Ubuntu as well. You'll see companies like Heroku doing things with containers, various OpenStack container projects, Docker of course, the darling of the container community, and others. So: containers, containers, containers. This talk really is about how we at Canonical, with OpenStack, can use containers if we wish to deploy, manage and scale OpenStack in some interesting ways.

So, a little about hypervisors. Back in the mid-70s some smart IBM Distinguished Engineer wrote a paper on hypervisors — I can't remember the name of whoever wrote it — so the concept of hypervisors has been around for quite a long time. That paper split them into two types: type one, the full, paravirtualised hypervisor environments, a.k.a. the VMware and Xen type of environments; and type two, the hardware-accelerated, hosted hypervisor environments, things like VirtualBox or KVM and some others.

We now also have the development of something called LXD. This we're somewhat cheekily proposing as a type three hypervisor: a hypervisor designed specifically to run containers — to deploy, manage, scale and operate containers. LXD provides a machine container. Who knows what a machine container is? Good — well, I'll show you anyway, even if you tell me you already know.

So, hopefully you can see this. By the way, this session has a lot of live demo in it, a lot of screen time, so if you can't see what I'm typing, please tell me, or you're going to be really bored. First I want to show you Docker. Docker is what we'd describe as an application container. If you look at my terminal here — on, I guess, your left-hand side — I'll just go and connect to a container. I'm entering my sudo password because Docker, certainly 1.10, the version I'm running here, has to run fully privileged. I'm inside my Docker container now. If I go and have a look at the processes...
Anyone that gives live demos, by the way, knows you lose the ability to type the moment you're up here. So you see this application container — this Docker container — is only running one thing. It's just running bash, right? That's because I created a Docker container that has just bash in it, and that's all I see inside that Docker container.

If I come back out to my shell — look at the top of the screen so you can see — let's go and have a look at a system container and do the same thing. So this is LXC, a Linux container I've created, here using LXD, the hypervisor. If I go and access it, you can see I get a whole bunch of stuff if I do an ls in my home directory. But notice two things. One, I didn't need to sudo for that — this is running as an unprivileged container. And two, if I have a look at the processes in that environment — you can't see it particularly well because of the scrolling, so let's try that again — there's a whole different set of processes running. It looks much more like a full Linux environment. That's because it is much more of a full Linux environment: not just the one process of a Docker or application container. LXD gives you full system containers, so it's very analogous to a VM — it's just like a VM. Now, in here I can do, say, sudo apt update. I'm on my old Wi-Fi hotspot here, so you won't be impressed with the speed, but I can go and pull the updates. I can even ask: is there anything to upgrade? All the packages are up to date there, but I could go and update this — the tooling is all there if I want to do that kind of thing. So that's the difference between a system container and what we described as an application container. Let's get back to the slides for a second.

So LXD provides a machine container: something that looks and smells just like a VM. It can be any flavour of Linux — as you probably know, containers share the host's kernel, so it's Linux containers on a Linux platform, no Windows just yet. Application containers like Docker host a single process and its file system; the example we just showed you there was of course bash. A machine container under LXD boots a full OS on its file system. The thing it shares is the kernel — there's only one kernel on my laptop right now, as far as I'm aware — but all the user-space processes, all the libraries and so on are wrapped inside that container, and it looks much more like a VM. (I actually jumped ahead to the demo there.) So these machine containers sit somewhere between a virtual machine or physical machine and a Docker-style process container. Sitting on LXD, a container acts and behaves just like a VM except that it shares the kernel, and that gives us some benefits.

LXD is also API-driven, and that API means we can access it remotely over REST. So from a single machine I can go and interact with containers across all sorts of different hosts. There's also an integration you'll see down here, which I'll explain in a second: sitting underneath it is something called ZFS — or "zed-FS" if you're English. Who's heard of that? Two people, good. Good, good, good.
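Just to pin down that Docker-versus-LXD contrast in commands, here is a rough sketch of the kind of thing being typed in this part of the demo. The container and image names are illustrative rather than the exact ones on my laptop, and the syntax assumes Docker 1.10 / LXD 2.0-era tooling:

    # Application container: Docker (of this era) needs root, and inside you see only one process
    sudo docker run -it ubuntu:16.04 bash
    ps aux        # essentially just bash (and ps itself)
    exit

    # Machine container: LXD runs unprivileged, and inside it looks like a full OS
    lxc launch ubuntu:16.04 demo-container
    lxc exec demo-container -- bash
    ps aux        # init, cron, dbus... a whole system's worth of processes
    apt update    # you can manage it just like a VM
    exit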
So ZFS is a very well-respected file system technology. It provides a lot of very cool file system features — snapshots, backups, copy-on-write clones; you can read the details there. It's very high performance, it can do compression on the fly, it's very efficient, very well respected. People who knew and loved Solaris before they got hooked on Linux generally talk about two things that they loved in Solaris. (Excuse me — you guys, I still have 30 minutes here, is that right? Extra time for me.) So the people that worked with Solaris generally loved two things about it: one was DTrace and the other was ZFS. Back in the day when I was at Red Hat helping to replace Solaris in big data centres in the City, those were the two things Solaris had a bit of a moat around — where's your DTrace, where's your ZFS? We got over that. But anyway, this is what's sitting underneath: everything I was showing you earlier on, those are LXD containers sitting on top of ZFS, and that makes them super fast, because LXD working in conjunction with ZFS makes spawning containers, running containers and snapshotting containers very, very fast indeed.

So why do I call this the world's fastest OpenStack? Well, first up, I don't have any data that proves it — there's no benchmark that says this is the fastest. I could go and make up a benchmark, but there's really no point. But it is a hyperconverged architecture, it deploys in minutes, and it deploys in containers, as we already saw. Dozens of LXD instances, containers launched very quickly, plus the snapshot capability — I'm going to show you all of this, so let me come to that.

So in my environment here, if I exit out of that and come back onto my host OS — by the way, let's do something. If you're familiar with it, dispatch is a benchmark utility. I'm just running this natively on my laptop; it's an old one that I travel with and don't get too upset about if it gets stolen, so it's not going to be overly impressive, but it's going to run about 1,000 or 10,000 iterations, I think, of calculating something on the processor. As soon as that finishes — normally it takes about 10 seconds — there we go: 9.94 seconds. It varies a little depending on the jitter we have on the system and so on. If I go back to my container I can hopefully do the same thing, and I just want to stress this: it should be within a few percent in terms of performance. The reason I'm doing this is to show you that running in a container is akin to running on bare metal. If I run the benchmark — whenever it comes back, there we go — it's within, whatever it was, a four-hundredth of a second. So pretty close. In fact it looks faster in the container, which is probably just jitter on the system. So that's the few percent. What does that give you? Native bare-metal performance. And that's important, because there's no doubting it: KVM, or your hypervisor of choice, can add a little overhead in terms of CPU and network performance.

Now let's flip to something else. On this screen, here I have a system that is running OpenStack. If I look here, you'll see how many different things are running — containers, databases, a whole heap of stuff. If I do an lxc list, you'll see that there are about 16 or so containers. Each one of these is running an OpenStack service. This is all running.
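If you want to reproduce that host-versus-container comparison yourself, something like the following works. The tool name in the talk isn't clear from the recording, so this uses sysbench instead, which gives the same kind of CPU comparison; the container name is illustrative, and this is the older sysbench syntax shipped with Ubuntu 16.04 (newer versions use "sysbench cpu run"):

    # install the benchmark on the host and in a container
    sudo apt install sysbench
    lxc exec demo-container -- apt install -y sysbench

    # run the same CPU benchmark in both places; the results should be within a few percent
    sysbench --test=cpu --cpu-max-prime=20000 run                              # on the host
    lxc exec demo-container -- sysbench --test=cpu --cpu-max-prime=20000 run   # in the container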
Now, this is not actually running on my laptop, because I don't have enough memory — you need about 16 gig of RAM to run this on a single machine. It's running on an Intel NUC that has 16 gig of RAM and a nice SSD, sitting back in Colorado, so it's fine for this. So I've got my 16 servers. This has all been put together using something called Juju. I'm not going to talk about Juju other than to say you'll see there's pretty much a one-to-one mapping between the different services we've deployed with Juju and the number of containers we have. Let's scroll up a little so you can see what we're running with an lxc list. You'll also notice on the right-hand side a column called Snapshots — it shows zero snapshots for the time being. We'll come back to snapshots in a minute.

So all of these services are running. If I go and connect to my system, you'll see — there we go — this is the environment I have running. This is all on a single Intel NUC, running 16 or so containers. Nothing particularly exciting going on right now. Let's have a look at the instances — I've probably only got one running. Let's go and launch an instance. Let's call it demo2, very creatively. Take the defaults there, go to next, choose Xenial, next, next, and I think we can go. So go ahead and launch that — let's spin up an instance within that OpenStack environment, all running on that single machine. Off it goes, starts building it, spawning it. You'll see the image name here gives you a clue: it's xenial-lxd. So what's actually being launched is not a VM but a container — an LXD container, a full Linux container — running inside our OpenStack, which is itself deployed in LXD containers. The reason we're able to do all this on a single Intel NUC is because of that efficiency. And this is thanks to something called the Nova-LXD driver, something we've written and are working with John Garbutt, the Nova PTL, to upstream in the usual gentle way, so it should be available to everybody pretty soon.

Good. So, as I said, we've got a number of things running. Let me just flip back — I've got a cheat sheet here, and I just want to make sure I'm showing you everything we've got. It's a good job I did that, because there's a bit I forgot. I put this OpenStack cloud together using something called conjure-up. If anyone's running Ubuntu 16.04 — let me come out of here, clear the screen and pull up the history. This is a machine that I launched this morning, and these are all 29 commands that I have run on it since it was born on this other Intel NUC. Half of these are duplicates because I can't type, or I was just practising. So I've updated it, upgraded it, added a repo, installed a bunch of conjure-up and ZFS packages, run some config commands and then run something called conjure-up openstack. conjure-up openstack — fingers crossed — gives us this very simple curses-driven user interface for building OpenStack in containers underneath, either on a single machine or across multiple machines. Here I get three choices: I can build a plain OpenStack, build an OpenStack with Nova-LXD — the hypervisor that lets us deploy and manage containers within Nova — or do something with OpenStack Autopilot.
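Going back to the instance we launched through Horizon a moment ago: if you prefer the command line, the equivalent is roughly the following. This is a hypothetical sketch rather than what was typed on stage — the image and flavour names are the ones from this environment and will differ in yours, and it assumes admin credentials have already been sourced:

    openstack server create --image xenial-lxd --flavor m1.small demo2
    openstack server list    # the new "instance" is really an LXD machine container under Nova-LXD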
OpenStack Autopilot is another tool we have, which I won't discuss right now. Very, very simple: if I go ahead and select that — a little bit of lag; I'll skip out of that — what it does is give you a very simple interface to say "build me an OpenStack" and to choose which services I want built into containers. This is essentially how we arrive at the number of containers we have deployed here running our OpenStack environment.

Let's go and have a look. If we do an lxc list there and choose one of these — I should really ask the audience to choose one if I were being adventurous, but I won't do that — let's go into that one and see what it's running. Keystone. This is running Keystone, a pretty important part of our OpenStack environment, I think we'll all agree. Let's come out of that. Because it's such an important piece, let's go and take a snapshot of it. I'll call it backup2, nothing fancy. So it's taking a full snapshot of the LXC container that's running our Keystone environment — and we just saw it do that, really fast. That's thanks to the ZFS running underneath it.

Let's go back into that machine. So this is running — can you see that at the back? — it's running our Keystone environment. Let's just go and check. It still is. Now let's do this. You see that? Is that recommended? This is going to give us some problems. And if I do it again... I think you'll probably agree that's going to make life difficult for Keystone. This was our production Keystone environment a second ago; that's going to cause us some trouble. But thanks to the fact that we took a snapshot, we're able to restore a good version. Let's go and grab the right one — let me pull it from my cheat sheet, because it's just a bit too long to type in a live demo. There we go. How long did that take? A second? Two seconds maybe? So if we go back and do the same thing, back into our environment, our cloud should still be running, and I should — fingers crossed — be able to continue navigating around my environment.

So you begin to see how this gives us not only a pretty high-performance environment — both our OpenStack services running in containers and the workloads running within them are at native bare-metal speed — but also a nice means of providing some resilience. Availability: operators will know that availability is not just about how you keep systems up, it's about how quickly you can restore them when they go belly-up, which inevitably they will at some point in their life. Just to make that point again: we have all of these containers running. If I want to take a snapshot of my entire environment — if I've got the right thing in my history, yes, here we go — I just pipe that lxc list into xargs -I and lxc snapshot. That takes a snapshot of every single container in my environment, and if I do lxc list again, you should see we've now got a snapshot of every single container that's running an OpenStack service.
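For reference, that whole snapshot-and-restore dance looks roughly like this. The container name keystone-0 is illustrative rather than the real one in this cloud, and the exact lxc list flags vary a little between LXD releases:

    lxc snapshot keystone-0 backup2    # near-instant, thanks to ZFS copy-on-write
    lxc exec keystone-0 -- bash        # ...do something rude and unusual in here...
    lxc restore keystone-0 backup2     # a second or two later, Keystone is back

    # snapshot every container in the environment in one pass
    lxc list -c n --format csv | xargs -I{} lxc snapshot {}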
So I could do this all day — I won't, because I don't have enough time — but I could go into each service in turn, do something rude and unusual to it, and then just restore it and carry on with my happy, happy OpenStack environment.

The other piece to come back to and show you: you'll see that in terms of hypervisors on this cloud, we haven't got any traditional — I guess you'd call them classic — hypervisors. There's no KVM in this environment. This is designed specifically to run in a small, lightweight environment like this. You could run it in your production system if you wanted, but really it's designed for people that want to be able to build a real cloud, something a little beyond DevStack, and be able to spawn real instances onto it with services that are connected in a real way. So we've only got this one hypervisor here, which is the LXD hypervisor, and you'll see that it's running a couple of instances.

There's a whole heap of backup slides here in this deck, so I'll scroll through those really fast. On this hyperconverged architecture: there are many different architectures you can use to deploy an OpenStack environment. The way conjure-up deploys this one is all on a single system, but even if we had multiple systems it would spray, or distribute, each of those application workloads across as many machines as possible, and that's often referred to as a hyperconverged architecture. For those of you who remember Piston — if you've been around OpenStack long enough to remember them — Piston used a similar approach, spraying services across as many machines as possible, sharing compute and storage. So if you're using conjure-up to deploy an OpenStack environment, it uses that same model. We've got that exact architecture in play with many of the production customers I've talked about — the likes of Sky, telcos and others — and it's running very successfully, very fast. So please do give it a go. We get dozens of those LXD instances launching in seconds, we can snapshot everything super fast, and we run at bare-metal speed.

So, if you want to give this a go, you're going to need to do a few things. The first is to be running Ubuntu 16.04 — this is all pretty new stuff, it's all on Ubuntu 16.04. Deploy 16.04 on a machine — you can probably deploy it in a VM if you've got a chunky enough machine — and you're going to need 16 gig of RAM. Update, get all your archives up to date, and then go and install something called conjure-up: sudo apt install conjure-up. Run that. That's going to allow you to build the entire OpenStack environment on a single machine with that Nova-LXD setup underneath. Let's go back and see whether we've got as far as that. There we go. For whatever reason that wasn't quick — I'm having to run multiple SSH tunnels to get here, and it's quite likely one of them dropped out. Forgive that.

Other places you can get more information: ubuntu.com, where there's information about how you can use LXD to deploy, manage and run fleets of containers, and also about what we're doing with Nova-LXD if you want to find out more. LXD itself — the Linux containers project — sits on GitHub like most things, and also at linuxcontainers.org. So go and check those out.
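As a quick recap of those getting-started steps, on Ubuntu 16.04 with around 16 GB of RAM; the package names are as mentioned in the talk, zfsutils-linux is an assumption for the ZFS backend, and depending on your release you may also need the extra repository I mentioned adding earlier:

    sudo apt update && sudo apt upgrade -y
    sudo apt install conjure-up zfsutils-linux   # zfsutils-linux assumed for the ZFS backend
    conjure-up openstack                         # then pick the OpenStack with Nova-LXD option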
And that was it. I'm finishing with four minutes to spare. Does anybody have any questions? I guess not — thank you very much. You get four minutes back.