All right, hello everyone. I'm Tycho Andersen, an engineer at Canonical, and I'm working on LXD, "the hypervisor that isn't." That's what I'll tell you about today. We announced LXD at the OpenStack Summit in Paris, about four months ago or so, and we've been working on it ever since.

So the first question is: what is LXD? LXD is a container-based, we're calling it, hypervisor. The underlying technology is LXC, which is a container tool that we've been involved with for a long time. The way you can think about it: I'm calling it a hypervisor, but it's not really traditional virtualization in the KVM or VMware sense, because containers are different. When people talk about containers, what they're really talking about is a collection of technologies in the kernel, things like namespaces, cgroups, SELinux, AppArmor, and those sorts of things, and a way to use all of those technologies to build isolation.

So what happens is that when you start a container, you put it into its own namespaces. When you start this new process, you put it in its own process ID namespace, and it becomes the init process, process ID one, so it can do whatever it wants and it thinks it's the whole world. The difference here is that we're virtualizing the kernel at that level, so there are lots of little init processes running, as opposed to lots of whole hardware virtual machines like KVM's. That's the two-minute "what is a container" pitch.

So what is LXD? LXD is a way of managing
system containers. We're providing a daemon that exports a REST API, which I'll show you some examples of. It's also a daemon that can do hypervisory things, which I'll talk about in a little bit. And finally, it's a framework for managing container-based images, and I'll talk a little bit about that as well.

Then another question is: what isn't LXD? Canonical has been doing containers for a long time, but LXD is not a network management tool. There are lots of different ways to create and manage networks. One of the projects we have at Canonical is called the OpenStack Interoperability Lab, and we have something like more than half a dozen, I think eight or ten now, software-defined networking partners, and all of these provide nice ways to manage your network. We're not out to invent another one of those. What LXD will do is tell you about your networks, what networks are present on that machine, but there's no network configuration at all.

In the same vein, it's also not a storage management tool. There are lots of vendors selling storage tools, and there's lots of open-source stuff; again, OpenStack has an implementation of block storage and things like that, and we're not interested in getting into that business.
However, this is a little bit misleading, because we do want to be storage-aware. For example, one of the hypervisory things we're going to do is allow you to live migrate containers between two hosts, in the same vein that KVM or other hypervisors allow you to live migrate things. But what we'd like to do is be storage-intelligent. For example, if the source and the target hosts are both running btrfs, one of the things we could do is a btrfs send, in order to get a more efficient data representation across, rather than rsyncing and having to scan each individual file. We're also in talks with another major hardware vendor about even more storage-specific and container-specific ways we can improve the performance of migration.

And the last thing is, it's not an application container tool. One of the things people ask me a lot is: what is the difference between Docker and Rocket and LXD? We see these as sort of orthogonal things. When you're developing an application and you want to deploy it, Docker provides this very nice thing, the Dockerfile, where you say: start from Ubuntu or whatever, install these few things, manipulate the filesystem in this way, and then you run Apache, or whatever your application is. That's very nice, but in some cases it doesn't work or it's not sufficient. One example is at Canonical: whenever anybody publishes an update to a package, something happens as part of the Launchpad infrastructure.
We build that package and we run what are called the dep8 tests. Debian has this thing where you can indicate: here's how to run the test suite for a particular package. The problem there is that when you install the package, you install all of its dependencies, you get all these daemons running, and they all need to be up and configured correctly in order to make sure the package is working. And then when you run the tests, you're not really running the application; the test suite is not a great entry point, you don't just exec the test suite. What you really want is a whole Linux system. You want it to look like a Linux system, and then you just run the test suite and see whether or not it works. So at Canonical, every dep8 test that's run on any package, every time you upload a package, all of its downstream dependencies get their tests run, and all that kind of stuff, all of those are run inside containers.

That's the distinction: application containers are very nice if you're a developer and you're writing an app, and system containers are nice if you're trying to do systems engineering, or for whatever reason you want a full Linux system.

So now I've hopefully managed to convince you that LXD is a container-based thing, and I will use this "what is LXD" slide as the outline for the rest of my talk.
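As an aside, for readers who haven't seen dep8: a package declares its tests in a small `debian/tests/control` file. A minimal sketch might look like the following (the test name and extra dependency here are made up for illustration; the field names are from the dep8/autopkgtest format):

```
Tests: smoke
Depends: @, curl
Restrictions: allow-stderr
```

Here `Tests: smoke` points at an executable `debian/tests/smoke`, and the special `Depends: @` entry pulls in all the binary packages built from this source, which is exactly why the test environment needs to look like a whole configured system rather than a single application entry point.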
The next thing we'll talk about is the REST API. We have several endpoints, just traditional REST endpoints. The first one is containers; this is where you can start, stop, create, delete, snapshot, and do various other operations on containers. The second endpoint is images, which you can use to manipulate images using the image-based workflow that I've talked about a little bit. And the last is networks, an endpoint you could use to ask: what networks does this host have, which containers are on which bridge, that kind of thing. Then lastly there are some other administrative ones. For example, create is an asynchronous operation because it might take a while, so there's a way to ask about the state of operations. You could also ask an LXD instance: what's your underlying base filesystem, if you wanted to do some of the filesystem-specific optimizations I talked about earlier. All of this is secured by client certificates and TLS 1.2, so we're trying to do the industry-standard thing there.

I'm just going to throw up a little bit of gobbledygook here. This is a wget call that you might use to create a container. There's a lot of stuff in there about getting the certificates right, and that's really important, but really it's just the https: part you see; that's the URL you would use to create. You POST some data: the name, and then I have "..." in there, which is basically where the image comes from, what you're going to use as the base image to create this container. And then the response you get looks something like this.
It's just some JSON that says "OK, we're running it," and here's the operation key. I'm going to do this a few more times; hopefully all this code doesn't scare anyone. The real goal here is that we have been looking for input on how to design this API, from both our partners and internally, and we would like your input as well if you are a potential user of LXD. We have a spec on our GitHub page that describes how to use the API, and if you see something there that looks ugly, please tell us, because we don't want it to be ugly. We want it to be nice, and we would like you to use it.

A second endpoint: maybe now you've created a container and you do a GET, and it tells you some stuff, including the container's name. Then there's a thing here that's empty, called configuration. Configuration in this case might say something like: this container is allowed to use 25% of the RAM on this machine. The reason it's 25% and not a specific amount is that if you migrate containers between machines with different amounts of RAM, you may want to rewrite that rule. For example, if you have a big compute host, and when things spin up you migrate this container over to that big compute host and it uses lots of RAM, you don't want to stick with the one-gigabyte limit you originally had; you want to go to 25% of the big compute host's RAM. Or other things like CPU sets. There are all sorts of things you need to think about when you're migrating, and how to rewrite that kind of stuff. Those are all configuration things.

Profiles are basically just collections of configuration. For example, if you're a web host and you sell one kind of container that's a one-CPU, 512 MB of RAM thing, you can just set that in a profile, and then every person who buys one of those gets assigned that profile. And the last bit here is just some status information.
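To make the percentage idea concrete, here's a minimal sketch (my own illustration, not LXD code) of how a percentage-based memory limit could be resolved into an absolute number on whichever host the container currently lives:

```python
def resolve_memory_limit(limit: str, host_ram_bytes: int) -> int:
    """Resolve a limit like "25%" (relative) or "1073741824" (absolute
    bytes) against the RAM of the host the container is running on."""
    if limit.endswith("%"):
        percent = float(limit[:-1])
        return int(host_ram_bytes * percent / 100)
    return int(limit)

# The same "25%" rule means different absolute values on different
# hosts, so migrating the container automatically rescales the limit.
small_host = 4 * 1024**3   # 4 GiB laptop
big_host = 64 * 1024**3    # 64 GiB compute host
print(resolve_memory_limit("25%", small_host))  # 1 GiB
print(resolve_memory_limit("25%", big_host))    # 16 GiB
```

This is exactly the migration-friendly behavior described above: an absolute one-gigabyte limit would travel unchanged, while a relative rule gets rewritten against the new host.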
It tells you whether or not the container is running, and some other information about it.

So, the networks endpoint. I just ran this again on my laptop on my way here, and it tells me I have four interfaces. This is read-only; I can't write to it at all. It tells me I have my local loopback; I have the wireless card; I have lxcbr0, which is an interface that gets created when you apt-get install lxc; and then I have virbr0, which is another interface that gets created when you run libvirt. These are just all things that I run on my laptop, so those are the interfaces that I had. This is an example of what you can ask LXD: you could ask, "tell me all the containers that are on lxcbr0," and it would give you a list of those containers. But you cannot create any networks or anything like that.

OK, so hopefully that's the end of all the scary stuff, and next is some pictures. I've told you a little bit about hypervisory things, so what does that look like? The first is snapshotting, and this is kind of a cool thing. Suppose you have an application that takes a while to get into a steady state; think of Tomcat or some sort of JVM-based thing that takes a while to boot up, but once it's there, it's good to go. One of the things you can do with LXD is what's called a stateful snapshot. You can actually get the container going the way you want, start the thing, get it into a steady state, and then snapshot it, serializing the RAM state as well as the disk state. Then you can call the collection of those two things an image, and you can start 50 of those images or whatever. The idea is that it will be faster to restore from the serialized RAM state than it would be to just start Tomcat cold.
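As a rough illustration of what such a call could look like over the REST API, a client might build a request like the one below. (The path and field names here are my guesses for illustration, not the published spec; check the spec on GitHub for the real shape.)

```python
import json

def snapshot_request(container: str, name: str, stateful: bool = True):
    """Build a hypothetical 'stateful snapshot' request body. With
    stateful=True the server would serialize RAM (via CRIU) as well
    as the disk state, so a restore skips the cold boot entirely."""
    path = f"/1.0/containers/{container}/snapshots"  # assumed path
    body = json.dumps({"name": name, "stateful": stateful})
    return "POST", path, body

method, path, body = snapshot_request("tomcat1", "warmed-up")
print(method, path, body)
```

Restoring 50 containers from the "warmed-up" image would then give you 50 already-initialized JVMs, which is the scale-out trick described above.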
So if you want to scale out a bunch of those really fast, that would be a nice way to do it. And that's again using the same technology we're using to migrate, which is a tool called CRIU, by the folks at Parallels, that we've been contributing to recently.

The next thing is injection. For example, you have a container, you create it, and now you want to drop some cloud-init data in there: here's the set of packages to install, or here's how you attach to the system, or whatever cloud-init stuff you want. One of the things you expect your hypervisor to be able to do is drop those files in, or read those files out, in order to manipulate the state of the container. That's another thing LXD provides a nice REST API for: you can get information in and out of the container's filesystem, and even if it's some crazy block device, LXD knows how to mount it and do all that kind of stuff.

And the last thing is the migration itself. If you have two hosts and you want to migrate a container from one to the other, the one thing you don't want is a lot of downtime if you're spending all this time engineering migration.
That's not something you're after. The way people traditionally engineer migration engines, and the way we're looking at doing it, again with the folks at Parallels and a tool called P.Haul, is this kind of iterative thing: you transfer as much state as you possibly can, and then you say, OK, since the last time I started transferring state, what's the delta? Then you transfer a little bit less, and a little bit less, until eventually you just make the jump and you go. The idea is that the amount of time things are down is very small.

You need a daemon on both ends in order to do that negotiation, and, for example, if the restore on the far side fails for whatever reason, maybe it doesn't have enough RAM or who knows what, then you need a nice landing pad for the container, so it can land and then potentially go back if something got screwed up. With LXD we're building this in from the ground up, and all of this takes place over WebSockets. The entire migration procedure is negotiated over WebSockets, and the benefit of that is that we don't need any external connections or anything like that.
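The iterative pre-copy idea can be sketched in a few lines (a toy simulation of the loop, not actual LXD or P.Haul code): keep shipping the pages dirtied since the last pass while the container runs, and only freeze it for the final, small delta.

```python
def precopy_migrate(total_pages: int, dirty_rate: float, stop_threshold: int = 64):
    """Toy model of iterative pre-copy migration.

    Each pass transfers the pages dirtied since the previous pass;
    the workload re-dirties a fraction (dirty_rate) of what we sent.
    We 'freeze' the container only when the remaining delta is small."""
    passes = []
    delta = total_pages                   # first pass: everything
    while delta > stop_threshold:
        passes.append(delta)              # copied while still running
        delta = int(delta * dirty_rate)   # dirtied during that copy
    downtime_pages = delta                # copied while frozen
    return passes, downtime_pages

passes, downtime = precopy_migrate(total_pages=100_000, dirty_rate=0.1)
# Each pass shrinks by the dirty rate; only a tiny final delta is
# transferred while the container is actually paused.
print(passes, downtime)
```

The point of the loop is visible in the numbers: the downtime window covers only the last, smallest delta, not the whole memory image.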
We don't have to be able to open another socket; as long as the two hosts can talk HTTP to each other, they can migrate between them. Additionally, one of the features we're looking at implementing in the future is that the client can act as a relay: if you're trying to migrate from AWS to Azure or whatever, you can download the container through your client and then send it over to the other network.

And the last thing: LXD is a framework for managing container-based images. One of the things I described earlier was that you can snapshot potentially-running containers as images, and these containers are full of all the state and everything you set up. Then you can publish that to an image server, and all LXD instances function as image servers. So you can publish it to your LXD instance, and then other users can go and deploy your image if they like it, or manipulate it and publish their own, or whatever. We're also providing a really tiny version of access control, where you can publish either public or private images. If an image is private, that means everybody who doesn't know the secret code to the server can't see it, and everyone who does can. There are no per-user images right now; it's something we could look at adding in the future if people are interested.

So, the LXD roadmap. 0.1 will be released in the last week of January. This is container management only, basically what we're calling the minimum viable product. The release actually was supposed to happen today, but instead I'm here giving this talk to you, and the other developers are all busy doing LXD work right now. This will be the create, start, stop, and so on; I've actually got a demo here where I can show you a little bit of what will be possible in 0.1. LXD 0.2 will be released February
18th, and the reason I can tell you it's exactly February 18th is because that is the day before the Ubuntu feature freeze date. This will have our full images support, as well as experimental support for migration. The reason I say experimental: if you saw our announcement of LXD at OpenStack Paris, one of the things I did was I actually had a running Doom, and I migrated that Doom between containers. That works, but there are still a lot of caveats; a lot of things just don't work exactly right when you start migrating, so I probably wouldn't use it in production. But we will have at least what I demoed at OpenStack Paris available.

Then 0.3 is a full specification implementation. All the things like profiles and other configuration bits and other minor bits I talked about will be available, sometime this summer.

And then another piece I've been getting a lot of questions about: one of the other things we talked about at OpenStack Paris was hardware-hardened containers. We're talking with several large, I'm sure you know them, hardware vendors about what we can do to use what's on the silicon now, maybe VT-x or whatever extensions, to somehow make our containers more robust security-wise, and also what we can do in the future: if we could design anything, what would we want it to look like so that we could make containers more secure?

So I have a little demo here. I guess I have all of 10 minutes? OK, cool. I'm running an LXD instance here on my host, and you can see here it prompts me for the certificate fingerprint. I say yes, that's OK, because somehow I memorized the fingerprint beforehand, whatever.
Then it asks me for a password, and that's our one little access control: each LXD has a password, and if you know that password, you can get in and kind of do everything.

So now I'll create a container. The syntax you see here, images:ubuntu, is just a shorthand for whatever the latest long-term support release is, and if you look again in our documents, we have some mention of how you could get a particular release of Ubuntu if you wanted to. So now I can start this container. Another thing to note is that this client is implemented using the API; it's basically just a little REST client, and there's nothing really fancy about it. The fanciness is all happening on the server side.

If we look here, on the right-hand side you can see there's /sbin/init and then all the child processes. This is what you get in a new Ubuntu container; this is all the stuff that's running. Then on the left-hand side, this is a particularly interesting piece. You can see there's me, my user; there's root; and then there's this funny user 600000. What that is, is the user namespace at work. This init process is running as user ID 600000, so if for whatever reason somebody inside the container breaks out, and they're being very nefarious and they want to attack my system, they can still only do what user 600000 can do. And ps looked in my /etc/passwd and couldn't find a user 600000, because there isn't one. So, assuming the traditional Unix user protections hold, even if somebody breaks out of this container, they're still just jailed into that one user. Let's see.
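The user-namespace trick is just an ID mapping; a sketch of the arithmetic (the 600000 base comes from the demo above, but the helper itself and the range length are my illustration) looks like:

```python
def map_uid(container_uid: int, base: int = 600000, range_len: int = 65536):
    """Map a uid inside the container to the uid the host kernel
    actually uses, the way a /proc/<pid>/uid_map entry such as
    '0 600000 65536' would: container uids 0..65535 become host
    uids 600000..665535."""
    if 0 <= container_uid < range_len:
        return base + container_uid
    return None  # unmapped: shows up as the kernel's overflow uid

print(map_uid(0))     # container root -> unprivileged host uid 600000
print(map_uid(1000))  # first ordinary container user -> 601000
```

So "root" inside the container is, from the host's point of view, just an ordinary high-numbered uid with no special privileges, which is exactly why a breakout lands in that jail.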
What other cool stuff can I show you? So, I've got to type the right thing. Here I just ran cat /etc/issue, which tells you the version of the system that's running. We can also cat /etc/hosts; I'm trying to think if there are any other interesting files here. So I can download files, and there's also a file push command, if I had an interesting file I wanted to push.

So that is my summary of LXD. Does anybody have any questions about anything that I've covered?

Q: I'm an OpenStack Nova guy, so my questions are all about that, I suppose. I see on the LXD website that there are plans to integrate with Nova. Do you know what the timeline is for that work?

A: Yes, there's a guy named Chuck Short who is doing that work, and he has a prototype. I think the plan is for it to be in Ubuntu OpenStack next cycle, and then the ultimate goal is to try and upstream it. I'm not sure what his timeline is for that, but I know that his current prototype is in Ubuntu OpenStack, and the LXD-based one will be in the next Ubuntu OpenStack.

Q: Is that what nova-compute-flex is? Is that the prototype?

A: Yes. Flex is the code name we were using for LXD before we announced it.

Q: OK, but nova-compute-flex isn't actually using LXD yet, because LXD isn't released, right?

A: Right, exactly. That's one of the reasons we have to do a release the day before the feature freeze: nova-compute-lxd, or nova-compute-flex, will use LXD, and so we need it in our archives in order to make that happen.

Q: You mentioned this is different from Docker because Docker is mostly application-oriented. What does LXD do that Docker cannot do, and vice versa?

A: Well, I think it's more about the toolchain design.
For example, in the era of immutable infrastructure, one could argue that it doesn't really make a lot of sense for Docker to be able to do migration, because it's immutable; there's no state, so what's there to migrate? Why don't you just fire up another one and throw the one you didn't like away? There are a lot of decisions like that where we're featuring migration, and one could argue that for the Docker people, if you're doing it the right way, you aren't migrating, or you shouldn't be migrating. Fundamentally, it's all using the same Linux kernel technology, the namespaces, cgroups, all that kind of stuff, so there's nothing to stop either one from doing what the other could do; it's really about design decisions that we've made and that they made. Does that answer your question?

Q: It sounds like there's...

A: Well, actually, there are people at Google working on live migration for Docker, so that's up to the Docker community at this point. If you read their documentation and do things the way they want you to, you run just one application, you run Apache or whatever, in Docker; that's their design constraint, and ours is that you're running some sort of init, systemd or whatever. That's sort of the distinction.

Q: How do snapshots and migration work with sockets and all of that? Sorry, how are sockets maintained through snapshots?

A: Right, so basically, if you migrate, and you migrate successfully within a TCP timeout window, everything works, assuming again you migrate to the same network and things like that. There's no black magic there that nobody else has.
We're subject to the same laws of physics as everyone else, so we make a best effort and hope it goes fast.

Q: You mentioned the image server. Is there any plan to have a dedicated process that does the job of serving images, like the Docker Hub registry, as opposed to just overloading any LXD server with that use case?

A: Yes, absolutely. If you go to, I think it's registry.linuxcontainers.org today, we have a very simple registry of just the images that the LXD upstream provides itself. But there's no reason you couldn't implement the API as a separate daemon, and potentially that's what we may do: just have a separate Go binary that can also do that.

OK, how many more questions do we have left? Just the one? OK, so we have like two more.

Q: Since you mentioned migration and there was a question about sockets: are you using the CRIU infrastructure of the kernel for checkpoint and restore?

A: Yep, and we're using the userspace tool as well, to do both.

Q: I've got a question, if you don't mind. Could we expect to see containerized userspace apps like this on the mobile phone platform? Bringing this to the mobile platform?

A: Yes. Sorry, I have to think very carefully about what I say here, actually. Yeah, you can.

OK, let's take one more question, since those were pretty short, you lucky guy.

Q: We're currently using a sort of multi-node distributed app; it's got about 14 separate Docker containers. There are a couple of things, the Docker announcements for Docker Machine and Docker orchestration. Are you guys planning on doing a similar thing?
You mentioned the toolchain decisions were a key architectural difference.

A: So, I think our plan is to actually not get into the orchestration business. Canonical offers a tool called Juju that will do orchestration, and that will absolutely use LXD; for us, LXD is just a very small piece of our entire story. If you wanted to do things the Canonical way, it would be to use Juju, and a tool called MAAS if you're running it on your own hardware, and then Juju would know how to drive LXD. Or, for example, there's some mention of OpenStack: we're also doing an OpenStack driver for LXD, so if you're familiar with OpenStack, you can use that as the driver for your infrastructure, and that will work too. For us, LXD is really just the container piece. There's nothing else; no networking, no storage, we're not trying to invent anything new here. It's really just the system container piece, and then everything else will use that. Exactly, yep.

All right, thanks. I think that's all we have time for. Thank you, Tycho.