Okay, well, the next speaker is myself. So hi, I'm Stéphane Graber. I work at Canonical on containers; I'm the LXC and LXD project leader, and today I'm going to be going through a year of upstream development on LXD. It's been a busy year.

So just briefly, in case someone doesn't know what LXD is: LXD is a container manager, but it's a system container manager. We don't really run your usual Docker-type workload; we run an entire operating system, so think of it more like a virtual machine. It's designed to be very simple, both on the CLI and in our API, and we try to keep the terminology and everything reasonably clear. It's very fast: there's no VM or anything involved. It's very safe, because we use just about every security feature you can think of in the kernel, plus the ones we've implemented in the kernel. That means all of our containers use the user namespace by default, they use seccomp, they use AppArmor, they use dropped capabilities; pretty much anything you can think of, we've got it. And it's very scalable: it works on a single laptop just fine, you can run multiple systems and interact with them over the network, or if you want to go even further, you can cluster them and run tens of thousands of containers against that.

As for what LXD isn't, well, I went through that already, but it's not a virtualization technology. We only use containers and namespaces in the kernel, so it runs on every architecture, it's very fast, and it runs inside VMs; it's containers, you know what it is. It is also not a fork of LXC; we are the same project, it's just that LXC has been around for a decade.
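As a quick illustration of the "simple CLI" claim, a typical session looks something like the sketch below. The container name and image alias are just examples, and this obviously needs a machine with a running LXD daemon:

```shell
# Launch a full system container from an Ubuntu image
lxc launch ubuntu:18.04 mycontainer

# Get a shell inside it -- it's an entire OS, not a single process
lxc exec mycontainer -- bash

# List containers, take a snapshot, and clean up
lxc list
lxc snapshot mycontainer before-upgrade
lxc delete --force mycontainer
```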
LXC is written in C; it's a low-level C library to interact with the kernel, along with a set of tools. LXD is one level up: it uses LXC through a Go binding and drives it that way. That lets us provide some of the more modern interactions and use Go to make things simpler for ourselves, while still keeping liblxc in C as the low-level interaction with the Linux kernel. Because, for anyone who's been doing it in Go, interacting with namespaces from Go is not very fun; C works way better. And as I said, we don't really run application containers. If you want to run application containers, that's perfectly fine: you can run Docker inside an LXD container, we do nesting just fine.

Now, for the actual part of my talk where we look through a year of containers, well, of LXD. Here are the highlights for this year. It was the second LTS release for us, and LTS just means we tagged something, as we did two years back with 2.0 and this year with 3.0, and then we support it with bug-fix and security updates only, for five years. So LXD 3.0 was released in April; it's our second LTS release, and we've done three bug-fix releases on top of it so far. We backport most of our bug fixes onto that release, and we have a five-year commitment to bug fixes and security updates on it.

We've also done ten feature releases; we're effectively on a monthly release cadence for LXD. So we did miss two, as you can tell; that was because it took us a while to do 3.0, I mean, the LTS took us a little while. We were hoping to be able to do the LTS on the side, but that didn't work out, so we ended up not releasing in, I think it was, February and March. And we've done, as I mentioned, three LTS bug-fix releases.

Now, there's another thing that's kind of exciting that happened this year; it was a bit of a surprise, to some extent. Chromebooks: all Chromebooks ship with LXD now, so our user base grew quite a bit.
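Going back to nesting for a second: running Docker inside an LXD container typically just needs the nesting flag enabled on the container. A sketch, with placeholder names, assuming a live LXD daemon:

```shell
# security.nesting allows a container runtime to run inside the container
lxc launch ubuntu:18.04 docker-host -c security.nesting=true

# Install and use Docker inside the LXD container
lxc exec docker-host -- sh -c "apt-get update && apt-get install -y docker.io"
lxc exec docker-host -- docker run --rm hello-world
```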
Thanks, Google, for that. That happened in late September. It is advertised as Crostini, or Linux apps on Chromebooks. When you use that feature, it kind of looks like that the first time you use it: it installs Linux, which is kind of funny considering you're already on a Linux machine, but anyway. Once that's done, you get a terminal; you can play with the terminal, you can install packages inside it, but if you know how to get to it, you can also drop into LXD and see that there's one container running, which is called penguin. That's the default, the main container that runs on Chromebooks. It's got a special set of libraries and hooks and passthrough set up, so that it can reach your graphical server and render very easily.

There are some things still being worked on. Current work in progress on their side is USB passthrough, GPU passthrough and sound. Those things will be coming; they are all supported by LXD, they're just not available right now on Chromebooks. Because, out of an abundance of caution, the way those containers work on Chromebooks is actually by first running a virtual machine, and then inside that VM running LXD with all your containers. So every user's got a virtual machine, and then you can have as many containers as you want inside it. That makes things a bit trickier, because while we can share a GPU very easily on the system directly, they first need to share the GPU with the VM, which is pretty tricky, and then we can get it from inside the container. But it's coming.

Here you can see I've been spawning a bunch of containers with different distros, just to show that you can use our normal images; you can run whatever containers you want on your Chromebook. It's all persistent, and it all works well. It uses btrfs as its storage.
So if you copy stuff, it uses subvolumes, and it's reasonably quick. And as I mentioned, you even have GUI access. In this case, what I did is, in the terminal that comes up when you enable Linux apps on your Chromebook, I installed Frozen Bubble. Then you can just go into the app launcher on your Chromebook; you don't need to do it from the CLI or anything. You go into the app launcher, and alongside Chrome, Sheets, Slides, whatever you've got in there, you're going to see Frozen Bubble, because it actually pulls the list of applications and packages that are installed in your container, and they show up in your launcher. You click on it, it spawns inside your container and shows up on your graphical server. So it's a pretty neat integration. I'm really looking forward to seeing them land the more complex integration; as I mentioned, GPU, USB and sound are going to be pretty neat, because people want to run Steam inside there. So far that works, so long as the game is 2D and your CPU is beefy enough to run GL in software, but it's going to be much better once we get GPU passthrough on those Chromebooks.

All right, so next, LXD itself. 3.0 was pretty busy, so here are the highlights. Clustering: we presented that last year at FOSDEM; it wasn't merged yet.
We merged it a few weeks afterwards. So you can take a bunch of LXD servers; when you do the initial configuration through lxd init, it asks whether you want to set up a cluster. You say yes, and then it pretty much works as usual, and all the other systems have to do is say "I would like to join a cluster" and give the IP address, and now you've got a bigger and bigger cluster. The CLI, the API, everything works the same way; when you spawn your containers, they get balanced, unless you tell us where you want them. So that was a big piece of work, but it's been working pretty well, with the usual annoying bunch of bug fixes on it, obviously. We've got people running tens of thousands of containers on clusters and it works just fine.

Another new feature was also something I presented at FOSDEM last year: lxd-p2c, which is a physical-to-container import tool. It's a tool you run inside an existing system, and it sucks it up and sends it to LXD, to run inside a container. That tool was released as part of 3.0. We also did NVIDIA runtime integration: that's passing through your CUDA libraries, your NVIDIA driver and a bunch of that stuff, for containers that want to do deep learning and AI-type workloads. We added support for hotplug of Unix character and block devices. Effectively, with that you can say "I want this serial port to go into that container", and even if it's not there when you start the container, when you plug it in, LXD detects the uevent from udev and passes it to the container. I also did some more logic for storage volumes,
effectively letting you copy storage volumes across the network as well. And we added a new device type, the proxy device, which effectively lets you say that you want TCP port whatever, on one of the host's IPs, to be forwarded to whatever in the container. It supports different modes: it will do iptables, or it will do some fancy internal proxying of our own, so you can even forward between UDP and TCP, and between Unix socket and TCP. We support a lot of weird combinations.

All right, I'm going to go pretty quick; as I said, we've had ten releases, so I've got about a minute per release. The next one was a lot less busy; as it turns out, when you only spend a month on one instead of three months, there's less stuff. We added backup support, which lets you, through the API or CLI, export a container as a tarball: either a simple tarball of the root filesystem, or an optimized version which includes a binary blob from the storage driver you're using. So if you're using btrfs or ZFS, you can get that optimized blob, which makes it much faster to re-import and also saves you space, because it bundles all the snapshots and all the metadata.
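Both the proxy device and the backup support reduce to one-liners on the CLI. A sketch, with placeholder names and ports, assuming a live LXD from the 3.x series:

```shell
# Forward TCP port 80 on all host addresses to port 8080 in the container
lxc config device add mycontainer web proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:8080

# Export the container as a plain tarball, or an optimized one that
# embeds the storage driver's own stream (e.g. btrfs/zfs send)
lxc export mycontainer ./backup.tar.gz
lxc export mycontainer ./backup-optimized.tar.gz --optimized-storage
```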
I'm excited that we've added automatic Fan networking for clusters. That's effectively an overlay network that exists on Ubuntu. If you enable clustering and you don't have a physical network that's shared between your containers, it will use that, so that you get the same network everywhere, and you don't have any weird issues with one container being on one machine and another one being on another.

Now, going to 3.2: container migration between storage pools. We did custom volumes two releases before that, but we still had a bit of a gap where containers could not be moved between two storage pools on your system, so we finally fixed that. We expanded what the proxy device can do, by adding support at that point for Unix sockets, both abstract and normal file-based, for UDP, and for port ranges. And we've also made it possible, with a single REST API call, to join a node to a cluster; before, it was five or six calls.

Then, in the next release, we added a feature that lets a container pull images from its host. So if you're running LXD inside LXD, it can avoid going onto the network to download the image; it can just download it straight from the host, so it's a pretty convenient passthrough. We added a new API to query networking details from the host: that includes getting packet counts, byte counts, all that kind of stuff, for any network device. It's probably most useful for clusters, when you want to look at a remote host to see whether it's actually going to be compatible with what you want. We added a small feature that people kept requesting, which is a flag you can set on a container so that you can't delete it without first unsetting that flag, because people do typos, and it sucks when their production containers go away. So we did that. And on the proxy side, we added support for the HAProxy PROXY protocol, and we added UID, GID and mode control for Unix sockets, as well as support for privilege dropping in the proxy device, which was needed if you are using
a proxy device to forward X11 traffic. We can use that device not only for normal web traffic or whatever; you can also use it to forward your X server's socket into the container, so you can run graphical applications. And for that, you need the UID and GID to line up with your own user, so we needed the privilege dropping for that. We've also added a built-in pprof server that you can turn on, on any address you want, to then pull statistics or debug data out of LXD while it's running.

All right. After that, we improved on top of what we did for the automatic overlay network on clusters, by also adding DNS record forwarding. Effectively, we use dnsmasq on the back end, and dnsmasq will generate records for your containers; but if you're in a cluster, you want to see all of them, not just the ones that are on your machine. So we added logic to forward and replicate those records across the cluster, so that the DNS view is also consistent. We've added a new API that lets us pull all the data needed to list all your containers, in all details, in a single REST query, instead of one query per container as we had before. That makes things massively faster.
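That single-query listing is exposed through the REST API's recursion parameter, which can be queried directly over the local Unix socket. A sketch; the socket path below is the traditional non-snap default and varies by installation:

```shell
# recursion=2 returns every container with its full config and state
# in one response, instead of one round trip per container
curl --silent --unix-socket /var/lib/lxd/unix.socket \
    "http://lxd/1.0/containers?recursion=2"
```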
Obviously, that's what we were working on: getting the listing of, and interacting with, 10,000 containers to take about 30 seconds instead of five or six minutes. So we got that to be pretty reasonable. And we've added file capabilities support in a bunch of different places; that includes when remapping the container and when unpacking images. As was mentioned earlier in this room, file capabilities are getting pretty important for things like ping, and we were running into problems there. So we made sure that we use the feature that landed a few kernels ago to support unprivileged file capabilities, and that LXD itself knows how to read them, convert them and write them properly as it remaps the containers.

In the next release, we improved our external authentication support. We support something called macaroons; they're fancy cookies, that's the name. They do decentralized authentication and external authentication for LXD, so we've added support for multiple domains in there, and we've added extra checks for security. Separately, we did some work to guarantee that cluster upgrades are consistent: effectively, having the nodes that are not upgraded yet be aware that they need to upgrade now, so they can do something about it instead of waiting until someone updates them.

And I need to really start speaking fast now. The next release was a pretty big one: we added projects. That's a way of grouping containers, images and profiles, and hiding them from one another. That's very convenient when you're working on a bunch of different things on one system, or if you've got a system shared with multiple people: you can use that feature to isolate who can see what, effectively, and to group things together. We also did some more work on storage with snapshots. And we did the initial cgroup v2 support, just initial:
we don't do any of the fancy controllers yet. We also added support for encrypted client certificates, because, yes, if someone stole your LXD credentials, that would be bad; so now we support password protection there. As Christian mentioned this morning, we added uevent injection, for USB devices, and we improved some of our network handling to make it faster.

Lastly, on that release, we added incremental container copies. That lets you update an existing container, so you can effectively run a backup host and do background copies from one host to another, keeping that container up to date. We've also switched our default keys to elliptic curve, mostly because some architectures were stupidly slow with RSA; EC was significantly faster to generate, for what should actually be slightly better security.

And I think that's the last one; I hope so, yes, it is. We've added automated container snapshots recently, so you can set a cron-type pattern on your container and LXD will generate snapshots for you. We've added support for moving containers between projects. We've added support for replicating images within a cluster, to make sure you can't end up in a situation where you've got an image in the database but it's nowhere on disk, because the node that had it is gone. And we've added better support for configuring which address you want clusters to use for internal traffic and for public traffic.

And that's it. That was a pretty busy year. I mean, we're a team of three people, for now, but it's been a busy year, and we expect next year to be just as busy, really. So it's not just me.

All right, anyone got anything?

You mentioned backups for the containers: do you back up the container configuration, or the storage?
So, backup: right now, the way we do it is, when you request a backup, it creates that tarball, effectively, and dumps it into one specific directory on the host, which you can then store wherever you want. Just like with images, you can't tie backups to one particular storage pool; but you can totally mount whatever you want on the backup directory tree, and then it will store them there. We might at some point make a way of actually tying that to a storage pool; it just gets tricky then if you try to delete the storage pool, handling that case isn't much fun.

Thanks for the good work this year. Thanks.

Yes, question: what's the status of Wayland and LXD containers? So, I've never actually tried pure Wayland apps; I usually just forward X11 and let XWayland do its thing. But the Chromebooks are Wayland, and clearly they've got it working. I'm not exactly sure what socket they need to pass; I think it actually depends on the compositor and the way the API works for a particular shell, effectively. But it's clearly possible, because that's what the Chromebooks are doing.

Hi again, thank you for a nice presentation and all your work. There have been some changes with introducing the proxy device, and I'm wondering how it behaves with containers migrating across the cluster: if it's just iptables NATing, does it actually do proper NATing to wherever the container is within the cluster? No, that part it doesn't do, and that's obviously a bit of a problem sometimes. More recently, people running clusters would often also have an externally managed network, and they just manage it as part of that network, rather than expecting LXD to be the gateway. We could do some very fancy things if we wanted, because that proxy, as I mentioned, doesn't only do iptables.
It also does its own thing: it can attach to the two namespaces and then do internal proxying between the two. That's very convenient, because it means you can have a container that actually has no networking, bind a service on localhost in that container, and totally have the proxy proxy to that; that works fine. In theory, we could change that thing to also proxy internally over the network, but it seems like a lot of work, and we've not really had a strong enough use case to spend a month trying to do that.

That's it. All right, thank you.