Alrighty, so thank you very much for coming to this session. This session is here primarily to talk about containers, and we wanted to kick off with something quite exciting and interesting, and "exciting and interesting" means a demonstration: a demonstration of the entire internet, modeled in containers, and live migration of an application. So I'd like to introduce Tycho. Tycho is part of the Ubuntu Server solutions team, and he's actually well known in the Canonical and Ubuntu community for being the tallest engineer on the team. With that power, he's going to demonstrate the internet in containers. Thank you, Tycho.

Hi. So, what I'm about to give a talk about: there's a colleague of mine, based in Montreal, who runs a small security conference there. Because there are a lot of attacks on the internet itself (we've all heard of BGP routers getting screwed up somehow), what he was interested in doing for the security conference was to build the whole internet, so that his little security conference could attack it and play with it. So what he did was build the internet using LXC, and he built it using unprivileged containers, because this was a security conference: he didn't want the attackers who might be attacking the internet to also attack the containers and take over his host; that would be bad. So everything you're seeing here is unprivileged containers, using the user namespace technology. What I have here is a picture that shows all of the, I think, 250 core BGP routers, and how they're all hooked up together. This is all running on my laptop here. And so you'll notice, first of all, that there are no links across the Pacific.
That's just because the JavaScript software doesn't support that, so he didn't put those in, and you get some interesting behavior when you're doing traceroutes and things like that inside the internet. So the first thing I'd like to show is just the memory usage. Can everybody see that? The containers here take up about 1,800 megabytes of memory. There are 250 containers, so that's about seven megabytes per container. And that's, again, unprivileged containers, and they're full containers: you can log into all of them and apt-get install whatever you want, or do anything you're interested in. So the next thing is that I'm going to go pick this router somewhere here. This is the Vancouver router, which is basically on one end, and I'd like to log into that one. Another thing I should mention is that this script is called "the internet", so you can go to GitHub and clone this script and make one of these yourself. There's a little bit of setup for unprivileged containers, but after that it's pretty straightforward. So what I just did there was attach to the particular container that is Vancouver, and now I'd like to pick some other container, say this one in Cape Town, called cpt.linuxcon.ctf. And what we see here is a traceroute between the containers: you can see it walking from the router that I was logged into all the way to this one in Cape Town. The latencies here are simulated latencies, using some kernel features in 3.16. And you can see, if you look at the bottom here, if you can read it, that the latency to Cape Town is about 370 milliseconds from Vancouver. I asked the only guy that I know from Cape Town, Mark Shuttleworth, whether that was correct, and he said yes. So there you have it. And now I'll turn it over to Dustin. All right, thank you, Tycho.
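The per-container memory figure quoted above is simple arithmetic on the numbers from the demo (1,800 MB across 250 containers); a quick sketch:

```python
# Back-of-the-envelope density math from the demo: total memory used by
# all the containers, divided by the container count, gives the
# per-container overhead.
total_memory_mb = 1800   # memory used by all containers in the demo
container_count = 250    # core BGP routers modeled as containers
per_container_mb = total_memory_mb / container_count
print(per_container_mb)  # 7.2 MB per unprivileged container
```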
Okay, so let's dive into a little bit of content about containers, and we'll start with Docker. This is the classic hockey-stick chart that every VC investor and every startup founder is looking for. You can see that up until about 2013, anyone searching for "Docker" was looking for pleated khaki pants, and then, about 2013, bam: straight into the stratosphere. And that is what's happening right now in our industry around this concept of Docker. There's just so much Docker out there in the world right now. There are meetups in your hometown, and every tech city anywhere has hundreds of people who'll fill a room just to hear what's new and hot about Docker. In particular, Ubuntu loves Docker. We love Docker within the Ubuntu community, on the development side of what we're doing in Ubuntu, and our users also love Docker. We've made Ubuntu the easiest, simplest way to install and run Docker. It's three simple commands: apt-get install, grab an Ubuntu image, docker run, and boom, you're dropped into a shell inside of a Docker container that, fundamentally, is an Ubuntu image. And the inverse is true as well: Docker users and Docker developers also love Ubuntu. There have been over a million downloads of the Ubuntu image from the Docker repository, and as Mark said in his talk just a minute ago, that's 6x, six times as many downloads as the next closest base image. Let that sink in for a minute. That's incredible; that really is powerful. And why is this happening? Well, Docker has built this incredible ethos around what it takes to build applications, what the best way to deploy applications is, and what the best way to administer
applications is. And so there's this entire mindset that is core to everything that happens around a Docker deployment: the developers are building their applications, but they're packaging them as containers, as self-contained, shippable mechanisms. It's really self-containing everything that it takes to move that application around, from cloud to cloud, from host to host, from platform to platform. From a sysadmin perspective, you can deploy those applications and have all the system dependencies bundled together, then expose a single port, and boom, that service is up and running. And then the Docker Hub, the repository, the place where we pulled these numbers from: that's how people share their work, and it's really a beautiful model. It's got this perfectly cyclic approach to building applications. The perfect Dockerfile looks something like this, and what's beautiful about this is that it's so easy. We're talking 15 lines of code here, most of which is whitespace and comments. It simply starts from a base image, in this case Ubuntu 12.04; installs whatever it takes to run that service, in this case Apache; sets up whatever environment variables and users you might need; exposes a port; and then this last line is the real magic: run a command. That command is the only thing running inside of that container; that is PID 1 inside of that container. Child processes that it might fork off might also be running, but what you don't see here is a boot process: this container didn't boot up. What you don't see here are TTYs, or an SSH daemon, or any init. There's no Upstart; there's no systemd.
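The slide itself isn't reproduced in the transcript; a minimal Dockerfile along the lines being described (base image, install Apache, environment, one exposed port, one command) might look like the following. The package name, environment variables, and command flags are illustrative assumptions, not the slide's actual contents:

```dockerfile
# Illustrative reconstruction of the Dockerfile described in the talk.
FROM ubuntu:12.04

# Install whatever it takes to run the service; here, Apache.
RUN apt-get update && apt-get install -y apache2

# Environment the service needs (names here are illustrative).
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data

# Expose a single port for the service.
EXPOSE 80

# The real magic: this one command is PID 1 inside the container.
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
```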
There's no SysV init. It's simply that Apache binary running inside of that container, and it's beautiful from a security perspective. But Docker is fundamentally not a hypervisor. Talk to anyone at Docker and they will explain to you that Docker is really an application distribution mechanism: it's a way of building apps, shipping apps, containing those apps, and then ultimately deploying those apps. And frankly, it's fantastic at doing that, and we love it for what it is. What Mark introduced in the first session is what we're calling a new hypervisor: full system containers, high performance. It feels like a VM: it boots, there are TTYs, there is an SSH daemon. It has the entire experience of everything that you'd expect from full system virtualization, like a KVM or a Xen, but it's all happening inside of a container, designed with security as the first and foremost design principle. That work is based on user namespaces, which are upstream in the Linux kernel; that's some work that we've been leading. It utilizes seccomp to confine the syscalls that a container might need or have access to, and AppArmor for mandatory access controls around the profile of that container. And we're actually working with hardware vendors on the next iteration of CPU features, for hardware-assisted containerization: similar technology to hardware-assisted virtualization. That's on the roadmap, and as soon as it's available in the silicon, it will be available in Ubuntu as well. Live migration is a core tenet of a proper container hypervisor. It has to be smooth, fast, and reliable.
It has to come up as quickly as possible and minimize the downtime between when a service migrates from one place to another. We're talking fractions of a second of downtime. In the demo that Tycho is going to show here in a second, it'll be a little bit longer than that, but you'll see why. Live migration is so important to everything that we're doing in the cloud, and containers provide a fantastic mechanism for doing that at scale. Speaking of scale: density. Density is what we are hearing about from every customer we talk to. They want to maximize the density of guests per unit of hardware, and by customers I mean service providers, typically. The service providers want as many units on a given system as realistically possible, as long as you can still meet the service level agreements, and without degrading the experience of other users. Containers mean more density: there's less overhead than the full virtualization stack, and that translates into more instances and more efficiency. Performance: we still love KVM, we love what KVM does, we love the performance of KVM, but we've seen numerous white papers at this point comparing KVM and Xen to the performance of LXC, Docker, and other container solutions, and containers give you as close to native hardware performance as theoretically possible. The overhead is essentially zero at this point. So, Mark introduced in the first session the new extension to LXC, which we're calling LXD.
It's a persistent daemon; it's a daemon that runs on a system. So you take a system, you apt-get install LXD, and now the LXD daemon is running on that system and managing the containers on that host, so it's acting as a provider. It's also providing a REST API, and it can do that over a remote network or a local socket. This is important when you want to tie LXD into a Nova compute, or some other management engine, perhaps beyond OpenStack. It's also introducing a more powerful, graceful CLI, and we're reworking the LXC CLI as well. If you're familiar with LXC right now, there are twenty-something commands, all lxc-something. We're throwing that away and taking the git-style approach, where there's one command, an action verb, and then parameters dependent upon that. We've really improved the command line experience for LXC and LXD both. Core to that is live migration: snapshot, checkpoint, migrate, restart; snapshot, checkpoint, migrate, restart. That's how you move a machine back and forth, and that's based on the upstream Linux kernel work around checkpoint/restore. You may see talks here about CRIU; that's checkpoint/restore in userspace. There's a bit more to it than just checkpoint/restore.
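The snapshot, checkpoint, migrate, restart cycle just described can be sketched as a toy state machine. This is only an illustration of the flow, not LXD's implementation, and every name in it is hypothetical:

```python
# Toy model of the checkpoint/migrate/restart cycle described above.
# All class and method names here are hypothetical illustrations,
# not LXD internals.
class Container:
    def __init__(self, name, host):
        self.name = name
        self.host = host
        self.state = "RUNNING"

    def checkpoint(self):
        # CRIU-style: freeze the processes and dump their state to disk.
        self.state = "FROZEN"
        return {"name": self.name, "dump": "<process and memory state>"}

    def restore(self, dump, target_host):
        # After the dump is copied to the target (rsync in the demo),
        # restart the processes there from the dumped state.
        self.host = target_host
        self.state = "RUNNING"

c = Container("doom", "host1")
dump = c.checkpoint()     # snapshot/checkpoint on the source host
c.restore(dump, "host2")  # migrate and restart on the destination
print(c.host, c.state)    # host2 RUNNING
```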
You've also got to migrate as well. Secure by default: that, again, is something we've heard from every customer we've talked to. Multi-tenancy is king in OpenStack environments and service provider environments, and that security has to be there. There's quite a bit of work, especially around the network stack, to make that happen, and that's all work that is currently in progress; we're making good progress on it. The other piece I mentioned is storage: dynamic and extensible. It has to be able to tie into other systems, into hardware-provided storage, software-defined storage, and software-defined networking, and being able to use that inside of containers is also important. Lastly, it's worth mentioning that this work is all currently implemented in Go. We've had a tremendous experience with Go, working in Go on Juju: the performance, the concurrency, the security of the language, the built-in network primitives. The fact that there's a web server you can start running in a line or two of Go code makes that REST API a very simple extension of everything else the daemon is providing. We've found that some of the smartest and best cloud-savvy developers in the world are working in Go, and it's really a pleasure working with that community and those people. So, the other half of this is the Nova compute aspect of it. It's currently released in OpenStack Juno, which released a couple of weeks ago now; we were trying to save the fanfare for this week. The previous codename of LXD was flex, and so it landed in OpenStack Juno, and Ubuntu OpenStack Juno, as nova-compute-flex. If you're running Ubuntu OpenStack Juno, you can actually start instances as LXD containers at this point.
We call it a tech preview. We did not get it upstream for Juno, but we're currently working on upstreaming that work for Kilo. This is the beginning of the summit, the beginning of the cycle for Kilo, and this work is absolutely going upstream into OpenStack for Kilo. So, Nova schedules the instances as full system containers. What that means is that from an end user's or a sysadmin's perspective, you're launching instances, and they're landing inside of containers instead of KVM or Xen, with all the benefits that might come with that: the density and the performance, of course, and also the ability to run on non-Intel-64 hardware. POWER, ARM, and other architectures certainly come into play when you're talking containers, because you're not dependent on hardware-provided or hardware-assisted virtualization. The images that it boots come straight from Glance; that's beautiful. You're seeing the pattern here, right? These instances work just like any other instance in OpenStack itself. The networking is provided by Neutron. This involved quite a bit of work around Neutron and LXC, so that containers can draw an IP address and a floating IP and so forth, and handle the mapping and the tunnels; that's all work we leverage from Neutron. We're currently working on utilizing the storage, Swift, Ceph, Cinder, all the various storage mechanisms from OpenStack, and funneling that into the instance which is running inside of a container.

All right, so you just heard from Dustin about lots of real hypervisor stuff, so now I would like to show you some real hypervisor stuff. One of the things that we've been talking about is container migration, so just to give you a little heads-up about what's going to happen.
I have two hosts, connected by a network, and I'm going to connect to the hosts. I have some containers on one of the hosts, and I want to migrate one of the containers from one host to the other, and then I will try to migrate it back, and then we'll see how all this works. So, in the spirit of, as Mark said, taking our life into our hands, I'm going to try this, and I hope you'll all bear with me. What I have here, on the right-hand side of the screen, on the top, is host one. It's just running the lxc info command, and it's telling you that right now the container is stopped. On the bottom, that's host two, also running the lxc info command, which is telling you that the container is not running there either. I'm logged into host one now, so I'll start this container, and it comes up, and then I'll just go ahead and try to migrate it. The tool that we're using to do this migration is a tool called CRIU, which I've been contributing to for the last six or eight months, and I've also written an LXC wrapper around that tool that migrates some other stuff that CRIU doesn't necessarily know about. So basically, what you saw there is that it went from the top to the bottom: it migrated everything, and the container is now running on the bottom host, and then I can run this and migrate it back. Sometimes it takes a little while; I have done basically no work on optimizing this process at all, so this is really just checkpoint, rsync, restore. There's no iteration or anything. So that's very cool: I can migrate a container. But what you guys are all interested in are workloads, and so I have a sort of unusual, really important workload. I know that this is a big cloud conference, and I probably should have done a cloud workload, but I'm a little bit of a rebel, so I'd like to do this.
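The checkpoint, rsync, restore sequence Tycho mentions (plus the pause-and-resync refinement Dustin describes later in the Q&A) can be modeled with a toy dictionary sync. This is a hypothetical illustration of the data flow, not CRIU or LXD code:

```python
# Toy illustration of migration data transfer: a bulk copy while the
# source keeps running, then pause and copy only what changed.
# Hypothetical model, not the actual CRIU/rsync implementation.
def sync(src, dst):
    """Copy every key whose value differs (a stand-in for rsync)."""
    copied = 0
    for k, v in src.items():
        if dst.get(k) != v:
            dst[k] = v
            copied += 1
    return copied

source_fs = {"page1": "a", "page2": "b", "page3": "c"}
dest_fs = {}

first_pass = sync(source_fs, dest_fs)   # container still running: copies all 3
source_fs["page2"] = "B"                # workload dirties data mid-copy
# ...pause (checkpoint) the container, then resync only the delta...
second_pass = sync(source_fs, dest_fs)  # copies just the 1 changed page
print(first_pass, second_pass)          # 3 1
```

The single-pass variant in the demo skips the first sync and pauses up front, which is simpler but keeps the container frozen for the whole transfer.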
I don't know if any of you guys remember this; this was a video game that came out a while ago. So this is a VNC session, and I'm running Doom inside this Linux container. Actually, I have a web server running here too, so I could wget some stuff, but I figured this might be more interesting. So what I'd like to do is migrate this container. You'll see that Doom freezes while it does its thing, and it stops on that one host, and then Doom comes back, and then it runs. That's sort of the reaction that I had when I first did it; actually, I said some other things that I probably won't say here. This is pretty cool stuff, and you can migrate it back, too. These are the sorts of things that we're going to bring to you with LXD, these kinds of technologies. You probably won't use it to play Doom, but you might use it for some other stuff that's maybe worth more money.

[Audience question] Yes, it's just what it says here. Yep. So, this is running in a framebuffer, because CRIU right now does not support dumping arbitrary devices, which is one of the reasons I can't do sound here: we don't support dumping sound devices, or video devices. Patches are, of course, welcome. There's a CRIU plugin engine, so if you have your own custom device that you want to use, you can write a plugin for CRIU to dump that device. And like I say, I've been working with upstream as well, and they're very open to patches and things. So that's the coolest thing I can do today: the internet in a box, and live migration of Doom over a VNC server. Cool, any other questions? Go ahead. Correct.
Yes, correct. [Audience question about the performance of containers in VMs versus LXC, and LXD versus bare metal.] Right, so I can refer you to two white papers that we've studied: one from IBM, published this year, comparing KVM to containers, Docker specifically; and a second one from a university in Brazil, a graduate white paper, comparing LXC to Xen. In both cases, the conclusion, the short of it, was that containers perform better than full-virtualization hypervisors, KVM and Xen. In some cases they were about the same, but certainly never any worse, and in particular workloads they performed at bare metal speed. So it depends on your workload. I/O is one that really benefits; disk access especially benefits from having direct access to the hardware, as you would in a container. I mean, you can always set up VMs to have full access to block devices, but for I/O workloads you're talking Hadoop, or big data, and that whole suite of NoSQL databases: really good use cases for that native performance. The other thing that we've seen: I mentioned several times that security and multi-tenancy are very important, and we've heard that from a number of customers, particularly our ISP-type customers, where multi-tenancy is so important to them. We've also seen a whole other class of customers where the noisy neighbor problem, or even the hostile neighbor problem, isn't nearly as big of a deal. For the most part, in their workloads, everyone running on that machine is sitting around the same cube, or in the same room, more or less, in which case the multi-tenancy thing isn't nearly as important. If someone stepped on your toes, you know, knock them over and start a new container or something like that. So getting hacked by the next IP over isn't as much of an issue. That said, we're still
desperately, keenly focused on the security aspect as well. [Question from the back.] You're absolutely right. Let's add that in, Tycho. No, you're absolutely right: cgroups and namespaces totally go together; I glossed over that one. cgroups and namespaces are really the two fundamental technologies that make LXC and LXD possible. I highlighted some of the other ones, because that tends to be our focus, but yes, cgroups are so important to everything a container does. Thank you. Yes, sir? Oh yeah, absolutely. Back to the portability story for containers: another beautiful thing about containers, as I said, is that they run well on non-Intel-64-bit architectures, and I mentioned ARM and POWER. I neglected to mention that they also run well inside of a KVM, inside of a virtual machine. There, you will pay some penalty for the full virtualization; however, being able to run multiple workloads inside of a VM is an extremely powerful and useful scenario. On every one of these Orange Boxes: Mark deployed Kubernetes on this one, and we're going to use these two in demos coming up soon. But we are packing workloads very densely on the physical servers, right?
There are 10 of them in here, each one represented by a light. When we need to pack more than 10 services, to deploy something like a Hadoop, an OpenStack, a Cloud Foundry, we're co-locating services on machines, and we're doing that inside of LXC, not inside of VMs. There's an OpenStack running on this machine, which we're going to use momentarily, and on one of the nodes we've got MySQL inside of a container, RabbitMQ inside of a separate container on the same machine, Keystone on that machine, and the dashboard on that machine: four separate containers. One piece of hardware, four different IP addresses, each one addressed independently as a unit. That's giving us fantastic performance for each of those components, but also densely packed onto one system, and that gives us nine other systems that we can put more interesting workloads on, maybe a Nova compute or something. Yes, sir? A mixed hypervisor, like a Nova compute that's hosting both KVM and LXD? Good question. I don't know; let me get back with you on that one. I'll give you a card. Yes, sir? Local storage only, in the tech preview that we released two weeks ago. It's your local disk; it's whatever you've provisioned that flavor to have from the local disk. Right, right, the disk image is part of that rsync that moves over. Yes, sir? Migrating across clouds? There's no reason you couldn't do it, I guess; it would just be slow. At the core,
there's an rsync of the base data. Then you pause the system, you do another rsync of anything that might have changed during your first rsync, and then the memory of the system is moved over, as well as processor state, and processes, and anything else: a bit of accounting. So if you're willing to pay the slowness cost of going over a WAN, then absolutely. What we were doing here was over a gigabit connection.

Alrighty, well, thank you very much, Dustin; thank you, Tycho. Probably the best demo you'll see today. So, we have finished a little early, but I know a few people were asking about the schedule, because they didn't see it. Just for your information, it is on the screen outside the door here. Coming up straight after lunch, so that'll be at the end of lunch, we're going to have Extreme OpenStack. Mark referenced some of the performance testing work that we've just completed recently: spinning up OpenStack on 500-plus nodes, some of the lessons learned there with Juno, and some of the enhancements in Neutron. Canonical server tech lead James Page is going to be running through that. So, if you're interested in learning about performance tuning in OpenStack and some of the enhancements in Juno, please come back straight after lunch. Thank you.