Good morning. All right, so this is an introduction to LXD on Debian. I'm Stéphane Graber, I work for Canonical, and I'm the upstream project lead for LXC, LXD and LXCFS. So let's get started with LXD first.

So what's that LXD thing? LXD is a container manager which specializes in system containers. That's a bit different from some of the other runtimes you might hear a lot about these days, like Docker or rkt; I'll get into a bit more detail as to how it differs. LXD, especially when compared to its predecessor LXC, has been designed to be very simple. It's got a clean command line interface that we've really worked on, so that you can learn how to do things on your normal laptop and then scale that out all the way to thousands of container hosts, maintaining them with the exact same tooling and commands. It's designed to be extremely fast, thanks mostly to using containers rather than virtual machines. It's designed to feel very much like you're interacting with virtual machines, but since it's dealing with containers, you don't have any of the virtualization overhead you would usually get with virtual machines. It also means it works on all architectures, regardless of whether they have any kind of support for virtualization extensions.

It is secure by default. That means we use all the kernel security features that are available these days to protect our containers, in a way that you can give root access inside your LXD container without much concern that whoever has it is going to get access to your host. That's pretty different from what LXC was doing originally, and different from what the other container runtimes do, where even though you appear to be root in the container, UID 0 or whatever UID in the container, it's still the same UID on the host. That means that if you get hold of any kind of resource that comes from outside the container, you can use it to escape the container. With LXD, we decided to turn on all the security features you could possibly use by default, and then let you selectively turn off the ones that you don't need or don't want for a particular container you're running. That means we do use user namespaces: everything is remapped, so if you do escape an LXD container, you won't have the rights to do much. You'll be some random nobody UID on the host, even though you appeared to be running as UID 0 inside the container.

LXD is also designed to be very scalable. You can maintain multiple machines using exactly the same command line tools as you use to interact with your local LXD. It also offers a REST API: all of the interactions our tools do with the daemon itself actually go over the REST API, so there's nothing special that can only be done locally. Which means that you can then build tooling, either simple scripts or integrations with things like OpenStack or other tools, that directly interacts with LXD: creates containers, executes commands inside them, and really does whatever you want with them.

So, if you've got a, not exactly typical, but maybe somewhat mid-scale deployment of LXD, it kind of looks like this. You've got a number of Linux systems. They all run a reasonably recent kernel, and by that I mean anything from the past three years or so, so it's not that recent at all; you don't need to be on the bleeding edge as far as the kernel version goes.
On top of the Linux kernel, we're using the LXC library, liblxc, which is the same base as the LXC tools. On top of that, we have LXD, which handles all of the user interactions and exports the REST API, locally and over the network. And then, talking to the LXD REST API on all of those machines, you've got the API clients: our command line tool, which is what I'll use in the demo, or, for example, our plugin for OpenStack called nova-lxd. If you install that inside your OpenStack, you can mark whether an OpenStack image is meant for VMs or for containers, and depending on the image a user boots, they will just get either a VM or a container. But as far as they can tell, it's exactly the same thing; all the OpenStack instances behave pretty much the same way.

LXD is not a virtualization technology. It is entirely container-based. That means we don't have any of the overhead that comes with virtualization. It also means that you cannot run something that requires a special kernel, or kernel modules, or another operating system entirely, inside an LXD container. But a lot of people are actually running Linux on top of Linux, and using full virtualization for that can feel like a bit of a waste, especially considering the kind of density difference you get between containers and virtual machines. Containers, when they're idle, really don't use any resources; the kernel does a pretty good job at not wasting time scheduling things that aren't asking to be scheduled. Whereas with virtual machines, you will always get some amount of overhead from having to switch between your VMs, just because of the virtual hardware and the way all that stuff works together. That means you can run on the order of thousands of completely idle containers on a given system, whereas you might be able to run maybe a hundred or so VMs doing the same thing. Obviously, once the workloads start pegging your CPU, you're going to end up with somewhat the same capacity, because VMs are pretty good at purely CPU-bound tasks. But for I/O-heavy workloads, and for workloads that are not as active, you can get a lot more density using LXD containers.

LXD is also not a fork of LXC. It's developed by the exact same team that's behind LXC, and we're still maintaining the LXC tools. LXD is based on the LXC library, so all of our interactions with the kernel actually go through liblxc. But it was, for us, a really good way of coming up with a new, clean user experience without breaking anyone who's been using the lower-level tools we've had for years. So we keep supporting both. If you want something that's very easy to get started with and to understand, that does the network and storage setup and all that kind of stuff for you, you can use LXD. If you're working on embedded systems, or other systems where you're very resource-constrained, then you might want to use either liblxc directly or the LXC tools, which are tiny in comparison.

And lastly, on what LXD is and isn't: LXD, as I said, runs system containers. System containers means you run an entire Linux distro, typically unmodified, exactly as you would on a physical system or a virtual machine. You can actually move the physical install of a machine inside a system container quite easily; it's just a matter of getting a root filesystem across.
That's pretty different from what you get with rkt and Docker, which are usually described as application containers. Those are typically ephemeral containers that run a single process. As soon as that process exits, the container is considered to have stopped, effectively. They don't usually run an init system; you can't usually SSH into them or attach to them or that kind of stuff. They're really meant for typically stateless workloads in a microservices environment, where you just spawn a bunch of identical containers that do just one thing very well and nothing else. You usually don't have data attached to them; that usually comes from some separate service or separate container. Whereas with LXD, you can move any existing workload you've got, keep using all your configuration management tools that already exist, and just run it in a container, instead of dedicating an entire physical system and rack space to it, or wasting precious resources by running virtual machines where you're effectively running the same environment inside and outside the VM.

The main LXD components. So, LXD's first job is managing containers. Containers can be snapshotted; we can also interact with their files and configuration, attach devices, pull log files: quite a number of operations, all exposed through the API.

All of our containers are based on images. We publish daily images for a good chunk of distros. You can also build your own images if you want to, or you can publish an existing container as an image that you can then use to create more containers, including on different machines.

We also have an abstraction layer which is pretty useful, especially when dealing with resource constraints and attaching devices and that kind of stuff: profiles. Profiles can have config and devices attached to them, and profiles can then be applied to containers. So you can do your configuration once for a given class of containers and then just apply that profile to your containers.

More recently, it's been over a year now, we added support for network configuration, which means LXD has an API to create bridges, either attached to physical devices or kept purely internal and virtual, with LXD doing the bridging for you. It also supports basic tunnelling, so you can bridge across hosts, letting you create a virtual network without needing VLANs or any kind of physical configuration on your network. And, pretty recently now, we implemented storage pools and storage volumes as well. Before that, we supported only a single storage backend per host.
For storage backends, we support plain directory, LVM, btrfs and ZFS. The storage pool feature effectively lets you create multiple of those, and then you can choose: this container goes on my SSD-backed ZFS storage pool, that other one goes on my slow-hard-disk-backed LVM pool. There are volumes too, which you can attach at a given mount point inside the container. So you can have all the containers run off, say, slow spinning-disk-backed storage on btrfs, and then have a very fast SSD-backed LVM pool that you carve volumes from and attach to, say, /var/lib/postgresql inside a container.

Because of its REST API and network awareness, LXD can also pretty easily interact between different hosts. The images on one host can be marked as public, which turns your host into an image server; if you bind it to the network, you can then pull those images and create containers from them on separate hosts. The exact same thing works for containers: you can copy containers between hosts, move them between hosts, and do a bunch of other pretty fancy things. You can even do it live, so you can move a container that's running from one host to another, where its running state, all the process memory and CPU state and all of that, is serialized down to disk, transferred across, and then restored on the target, with the processes none the wiser. That's pretty cool technology, but it really depends on your workload; whether everything can be serialized by CRIU is still pretty hit-and-miss.

So now, enough slides; let me show you what this looks like. First, I'm on a Debian stretch system, a pretty clean install; I just installed it from a USB stick. I'm going to show you how you can install LXD on it. Unfortunately, we don't have native packages for Debian right now, I'll come back to that afterwards, but what we do have is a snap package. So on Debian stretch, we install snapd, and then we install LXD on top of that. Installing the lxd snap pulls the current stable release of LXD. LXD does monthly feature releases, so you're going to get new features, and possibly some behaviour changes, every month. The alternative is our LTS release, which comes out every two years and gets support for five years, during which we do bugfix and security updates in point releases. That tends to be a better fit for distros, because LXD's big tree of Go dependencies can be a bit annoying to package: using the LTS release means you sort the dependencies out once and you're done, whereas the feature release might change quite a bit from month to month, which can be difficult to keep up with in a distro package.

So, next is lxd init. That's the first thing you run with LXD: an interactive configuration tool. If you need to set up a bunch of identical machines, you can also record your settings once and feed them in non-interactively in one shot, so it won't ask you anything. And you can always talk to the API or do all the config commands by hand instead. (The only catch is that the snap adds itself to your PATH, and sometimes the shell hasn't rehashed it properly, which is always funny.) So, in lxd init: the first storage pool, let's just call it "default", and we want something nicely optimized, so let's use btrfs.
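[Editor's note: the installation and setup steps described above come down to a few commands. These are a reconstruction, not a capture of the speaker's terminal; the pool name "default" and the btrfs backend are as spoken, everything else is typical usage.]

    # On Debian stretch: get snapd, then install LXD from the stable snap
    sudo apt install snapd
    sudo snap install lxd

    # Interactive first-time setup: storage pool (named "default", backed by
    # btrfs in the talk), and whether to listen on the network
    sudo lxd init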
When I say optimized: with btrfs, and the same goes for ZFS, LXD stores each image as a btrfs subvolume or ZFS dataset, and containers are then created as cheap clones of those, which makes container creation much faster. And because my demo will include some cross-host work, I also tell LXD to bind the network; that's optional, otherwise you just talk to it over its local Unix socket.

So let me create a container. It's actually pretty quick, but it did have to download the image, unpack it into a btrfs subvolume, and create the container from that. So we've got that done. We can also go ahead and create a second one, with a different distro, why not? So long as you're on a reasonably recent kernel, you usually don't have any problem running any distro on any distro; in this case, that's a CentOS container next to a Debian stretch one, on a Debian stretch host.

Now that that's done, lxc list shows you your containers: whether they're running, their IPv4 and IPv6 addresses, and so on. What's that other column, "ephemeral"? You can pass a -e flag, which makes the container ephemeral: as soon as the container stops, it just gets wiped. We can get some information on a given container with lxc info. We can see all of its network interfaces and what IPv4 and IPv6 addresses they have, the number of processes running, the amount of CPU time used so far, current and peak memory usage for the container, and network counters for all its devices. That can be useful if you want to do monitoring from outside the container: you don't have to install any agent inside it; you can pull all that data from the API.

And to get a shell inside the container, I can just do lxc exec. The other nice thing is that, while this works over the network too, over the REST API, it does not need anything inside the container, because it just injects a process into the container. Which means I can do this even if I don't have any IP in the container; I can still interact with it just fine. That can be pretty useful to go and recover a container that's stopped responding on the network.

Now, let's play with resources a bit, in that container. If I go and look at the number of CPUs, I've got quite a few on that system, and we should have quite a bit of memory: yeah, 32 gigabytes of RAM. I'm just going to reconnect on another shell, to show you that the container itself is not going down. So I'm going to set, on c1, a limit of two CPUs. Go back here, and now, if I look, I have two CPUs. There we go. And the same thing for memory: let's give it a couple of gigabytes. Obviously, don't try to set the memory limit lower than what you're using right now, or you're going to have a pretty hard time: the kernel is going to trigger the OOM killer, and the container might not be super happy about it. But otherwise, that's fine.

The other thing that's always kind of nice: if anyone has ever played with tc, you probably know just how annoying it can be. Thankfully, we do it for you.
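[Editor's note: a rough reconstruction of the commands in this part of the demo. The container name c1 is as spoken; the image alias and the limit values are illustrative.]

    # Create a container from the public image server, then inspect it
    lxc launch images:centos/7 c1
    lxc list
    lxc info c1

    # Get a shell; nothing needs to be installed or reachable inside
    lxc exec c1 -- /bin/bash

    # Apply resource limits live, no restart needed
    lxc config set c1 limits.cpu 2
    lxc config set c1 limits.memory 2GB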
So I just start a wget in there, downloading some random big blob from a machine on the local network. That should be done pretty quickly; we see about 112 megabytes per second. But now I'm going to set a limit of 10 megabits. And if I go back here, we see it going down, down, down, until it's capped at 10 megabits. Then we can move it back up to 100 megabits, and it slowly climbs back up to 100 megabits. All right. So, like I said, tc is taken care of for you: you can just set limits on the ingress and egress of any given container, and LXD sets up the tc queuing rules and all that stuff for you, which used to be pretty painful.

The other thing that's interesting here: rather than modifying the container itself, which is what the first few commands I ran did, setting limits directly on the container, the last two were actually modifying the default profile. That means all containers using that profile on this host now have that limit applied. That can be pretty useful if you need to change the settings of a whole class of containers: you just change the profile, and it applies to all of those containers, live. All of our resource limits can be changed live; there's no need to restart anything.

The next obvious feature that's always nice to show: we support snapshotting. You can create a quick snapshot of the container; it then gets listed in lxc info. We see it as a "demo" snapshot that's marked as stateless. If you've got CRIU working, you can even store the running CPU and memory state of the container in the snapshot and restore that as well: a stateful snapshot. Now, let's go and do some damage: let's just rm -rf the thing, why not? At which point things are a bit sad: it can't even resolve a hostname any more; nothing much works in there. And we can just restore the snapshot real quick, and we're back in business, everything working exactly as you'd expect.

Now, device pass-through is also pretty interesting, because it doesn't work the same way as with VMs. With VMs, if you want to pass a device, either it needs to be an emulated device that's created by the hypervisor, or it needs to go through VFIO, passing a full PCI device. That can be a bit annoying, depending on your setup. So here, what I've got is one of those helpfully named physical network devices, enp7s0 I think. We can move it into the container: I add a NIC device, say its type is physical, tell it which device needs to be moved, it's called enp7s0, and that we want it to be called the same thing inside the container. Just run this, and if we go back into the container, we see that we've got enp7s0 in there at the top. And it's gone from the host at this point.

You can also pass any kind of Unix character device or block device into the container. In this case, I'm just going to pass /dev/kvm: add a new device of type unix-char pointing at /dev/kvm, and you've got /dev/kvm in the container. Which means you can now run QEMU and run your own virtual machines inside an unprivileged container. So if something goes wrong with QEMU's virtual devices, the damage is restricted to the inside of the container.
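[Editor's note: reconstructed commands for the network limits, snapshot and device pass-through parts of the demo; device and snapshot names follow the talk, values are illustrative.]

    # Network I/O limits, set on the default profile so they apply to all
    # containers using that profile
    lxc profile device set default eth0 limits.ingress 10Mbit
    lxc profile device set default eth0 limits.egress 10Mbit

    # Snapshot, break things, restore
    lxc snapshot c1 demo
    lxc restore c1 demo

    # Move a physical NIC into the container, keeping its name
    lxc config device add c1 enp7s0 nic nictype=physical parent=enp7s0 name=enp7s0

    # Pass /dev/kvm so the container can run QEMU virtual machines
    lxc config device add c1 kvm unix-char path=/dev/kvm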
I'll briefly show you the REST API. In this case, I'm using curl against the local Unix socket. You can do the same thing against the HTTPS socket, but then you've got to pass the client certificate; we do TLS client certificate authentication of the API. Here, I'm just listing the containers, so we see I've got c1 and c2. Now let's look at what's going on with c1. That's going to be pretty long, but... if we look at c1, we see all of its direct config, what type of container it is, what devices it's got, whether it's ephemeral or not, and what profiles are applied. And then we see the expanded config, which is the configuration it's got after all the profiles have been applied. If you want to see what's actually going on with a container, that's the one you want to look at; the normal config is just the container's local config and doesn't include anything inherited from profiles. And we see its state: it's running.

And the last cool thing: we've got access to the container's filesystem through the API. You can just hit the files endpoint with path=/etc/hosts, and there you go: you get the file content. Say I want to delete the file; I can just send a DELETE request instead. It's a success, and if I go inside the container, there's no /etc/hosts any more. That makes it pretty easy to have scripts around that push files into your containers or modify things. Our command line tool uses the same API to let you do things like "lxc file edit c1 /etc/hosts": it spawns a text editor for you, pulling the file, letting you modify it, and pushing it back over the API once you're done.

Now, let me show you how things work remotely. I'll switch to my laptop. I've got a few containers set up on my laptop, and I also have a list of remotes that we can look at. There are a bunch of default remotes: "images" is the community-run image server I mentioned; "local" is your local LXD daemon, the default remote you're talking to when you just run lxc directly. Then I've got one of my other systems, and the Ubuntu cloud image servers as well. The machine I was doing the demo on isn't in there yet, so let's go ahead and add it. It does an SSH-like thing: it shows you the fingerprint of the certificate on the server. You can go and compare it with lxc info on the target server if you want to make sure you're not being man-in-the-middled. Then it asks you for a trust password: that's the one I set during lxd init. It's only used to add your client certificate to the target server; it's not used afterwards. So you can treat it as a one-time handshake: once all your clients have been added, you can unset the password, and nobody can connect using a password any more.

And now, if I do lxc list, it lists my local containers; if I do lxc list with the remote name and a colon, it lists the remote containers. If I want to stop c2 on that host, I can stop it and operate on it just as if it were local. And now, let's say I want to move c2 from there onto the other server. My client itself has credentials on both servers; the two servers don't need credentials on each other. LXD will create a temporary token for that one container transfer and then tell the target host to go ahead and pull the container from the source. Watching what's going on right now, we see the stretch container transfer reasonably quickly, and... done. So now, if I look at that host, I've got a c2 container over there. You can also copy containers or container snapshots: in this case, I'm copying the "demo" snapshot I created earlier and making a new container called "blah" on that same remote.
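[Editor's note: the API calls and remote operations shown here look roughly like this. The remote names "dc17" and "other" are hypothetical; c1, c2, the "demo" snapshot and "blah" are as spoken.]

    # REST API over the local Unix socket
    curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers
    curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers/c1

    # Fetch, then delete, a file inside the container
    curl --unix-socket /var/lib/lxd/unix.socket "lxd/1.0/containers/c1/files?path=/etc/hosts"
    curl -X DELETE --unix-socket /var/lib/lxd/unix.socket "lxd/1.0/containers/c1/files?path=/etc/hosts"

    # Remote servers: add one, then operate on it with <remote>: prefixes
    lxc remote add dc17 <server address>
    lxc list dc17:
    lxc stop dc17:c2
    lxc move dc17:c2 other:c2      # move a container between two remotes
    lxc copy c1/demo dc17:blah     # copy a snapshot into a new container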
That's a nice way of setting something up once and then stamping out a bunch of containers from it, to scale out some production workload. And if we look on that host: there's "blah" right there.

Now for something a bit different. One thing that's kind of neat these days: we've got all those GPGPU-type workloads that you can use a GPU for. The main problem with doing that in virtual machines is that you either need a GPU that's very fancy and can split itself into a bunch of virtual GPU chunks that you can then pass to your virtual machines using VFIO, or you need to add a whole bunch of physical GPUs to your system, which is not always ideal. On a normal system, you can have multiple applications talking to your NVIDIA or AMD or whatever GPU just fine, sharing it through the kernel driver. And that's exactly what containers let you do, because there's only a single shared kernel.

So I've got a container here; let me look at it. I run nvidia-smi inside it, and that blows up because there's no GPU in there. The CUDA bandwidth test isn't happy either: it doesn't have any GPU. Let's give it a GPU. This host has two identical NVIDIA GPUs. I'll just pass the first one, so I tell it id=0; if you don't specify an id, it's going to give you all the GPUs, which might be fine, but here I just want to pass a specific one. So if I do nvidia-smi again, we see that we've got a GT 730 attached to it now. And if we run the bandwidth test, it actually sees a GPU now. If you want the second one too, just add it with id=1 as well, go back inside the container, and we see the second GT 730 as well.

That makes resource allocation pretty easy. LXD will figure out the device nodes that need to be passed: it will set up your /dev/nvidia0, /dev/nvidia1 or whatever, or, if you're using OpenCL on AMD, it's going to set up your /dev/dri nodes, figuring out which GPU you want, which device nodes are attached to it, and passing those into the container. It also lets you select by PCI bus address, or by vendor ID and product ID, so you can be pretty specific as to which GPU you actually want to allocate to the container.

Now, something a bit different that probably more people can play with: I've got another container here, in the top corner. If I list the USB devices in there, I don't see any. If I attach an Android phone to the host now, it's still not going to show me anything. That's because, if we do a config show on that container, it doesn't have any USB device configured; it's just a stock container. But now I can pass it by vendor ID; that's coming from lsusb, and I've got a Sony Xperia here, so I've got its vendor ID and product ID. I just add it, I go inside the container, and now I've got the USB device attached, and I can get a shell on my phone.

This is somewhat limited to software that speaks USB directly, because what we pass are the /dev/bus/usb device nodes. If something depends on a specific kernel driver instead, that kernel driver will usually create Unix character or block device nodes for you, at which point you can use our unix-char or unix-block pass-through and use those instead. But for anything that's purely a USB-handling type thing, this works very well too.
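[Editor's note: reconstructed GPU and USB pass-through commands. The device names ("gpu0", "phone") are illustrative; 0fce is Sony Mobile's USB vendor ID, and the product ID is a placeholder to be read off lsusb.]

    # Pass the host's first GPU, then the second one too
    lxc config device add c1 gpu0 gpu id=0
    lxc config device add c1 gpu1 gpu id=1

    # Pass a USB device by vendor/product ID (values from lsusb on the host)
    lxc config device add c1 phone usb vendorid=0fce productid=xxxx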
Okay, what do we support as far as images go? We have daily images for a whole bunch of distros: Alpine Linux, Arch Linux, CentOS, Debian, Fedora, Gentoo, openSUSE, Plamo, which is a Japanese distro, Sabayon, and Ubuntu. For most of them, we try to build daily images of all their currently supported releases. Architecture support is usually restricted to x86, unless we get special requests; then we can try to get some dedicated build time for that distro on other architectures. We do have builders for ppc64el, s390x, armhf and arm64, but we don't have anywhere near as much capacity for those as we do on x86, so it's kind of first come, first served. Ubuntu does exist for all those architectures, and as Debian does too, we've enabled s390x, ppc64el, armhf and arm64 for it. The Fedora folks wanted arm64 images added at one point, I think, but that hasn't happened yet.

Now, as far as where LXD itself is available: how to get it does vary a bit depending on the distro. For Alpine, there's a native package. For Arch Linux, you've got either the AUR package, which also requires a kernel that enables unprivileged user namespaces, or you can use the snap package, the one I used on Debian here; it works the exact same way on Arch Linux. Both need that kernel support, because otherwise you'd be stuck with privileged containers, which is quite a step down from unprivileged, user-namespaced ones. For Debian right now, it's just the snap. We do have an ITP that's been open for a while now; as far as I know, all the Go dependencies required for LXD are packaged in the archive now, they just missed stretch, and the next step is for someone to actually push LXD itself. I would strongly recommend that the LTS branch is what goes into Debian, rather than the monthly feature releases, just because we don't do long-term support on the monthly releases, and you might be a bit unhappy having to do that kind of cherry-picking. For Fedora, we've got a user repository, which I believe can also be made to work on CentOS, but it's not ideal right now. There are native packages on Gentoo, and Ubuntu has the snap as well as native packages.

For all of the snap distros we've got automated CI, so whenever we change anything, we get tests on all those distros, and we test all the storage backends as well, including Ceph support now. That was added in the latest release, and it's going to come to the snap in the next release, due to a bit of a glitch that was in there. With that, if you've got a Ceph cluster in your datacenter or wherever, you can use it as storage for container images and the containers themselves, as well as attach volumes coming from a Ceph RBD pool to your containers.

Let me just show you what the image list looks like. lxc image list lists your local images. In my case, what's there is the result of LXD's automatic caching: the first time you create a container from a given remote image, if you don't have it locally, it gets downloaded for you, and then the container is created from it. LXD will then keep that image up to date in the background, checking every six hours whether there's a new version upstream. And we've got an expiry of 10 days: if a given image hasn't been used to create any container over the past 10 days, it's automatically removed from the local image store, just to avoid clutter. All of that is configurable: you can turn off the expiry, change how quickly images expire, or turn off the automatic image updates entirely.
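[Editor's note: the caching behaviour just described is controlled by server-level config keys; a sketch of the relevant ones, shown with their default values.]

    # List locally cached images, and what a remote offers
    lxc image list
    lxc image list images: debian

    # Background update checks (hours) and cache expiry (days)
    lxc config set images.auto_update_interval 6
    lxc config set images.remote_cache_expiry 10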
And we've got the remote image server, "images", the one I mentioned earlier. If we list that, we can see all of the distro images: a bunch of Alpines, a bunch of Debians, the other distributions, and a bunch of Ubuntu images too. The image server also keeps images around for a few days, so if you need to do some regression work, figuring out why something is now broken, you can grab an older build.

So, LXD as a project. LXD is written in Go. We interact with the LXC library through the Go liblxc binding. We're a bit unusual in that a lot of Go projects don't like depending on any kind of C code; that's not the case for LXD, and we quite happily use C libraries as needed.

LXD itself is fully translatable: all the command line tool's messages can be translated. It's all on Weblate, so you can go there, translate the strings, and they'll be in the next release. We've got API client libraries for Go and Python that we maintain ourselves. There are also people who've made some for Java and a bunch of other languages, with varying degrees of completeness. LXD is Apache 2.0 licensed. We don't have any kind of CLA to contribute to LXD; the only thing we require is the Developer Certificate of Origin, so just a simple Signed-off-by line on your commits, and you can contribute to LXD.

Questions? So, LXD runs containers that are created from images; pretty much everything is image-based. You can have the exact same image on several hosts, create containers from it, and know they're going to work the exact same way. That's different from what we used to do with LXC, where you were effectively running a template script that was creating your container's root filesystem with debootstrap or similar tools for the various distros, and LXC then used that to create your container; so you weren't completely sure you would get the exact same thing everywhere. Now you can rely on the image fingerprint that you can see on the images. So if I do an lxc image info on, say, the Debian stretch image: there you go, the first line is the SHA-256 fingerprint of the image. If the image is a single file, that's the hash of that file; if it's made of two files, it's the hash of both concatenated together. So you can make sure that you've got the exact same bits everywhere, because the image's identity is its hash.

LXD containers are safe by default. We learned from what we did with LXC at the beginning, when none of those kernel features were around. LXD uses all the namespaces that are available, by default. We use cgroups for resource constraints. We use seccomp for syscall filtering. And we use AppArmor, if available, as an extra layer of protection as well.
So if you use a default LXD container, the only kind of attack you can really mount is running the system out of resources, because we don't apply any resource limits by default, but you can configure those if you want to. You can even enable an extra security feature which makes every single container not only unprivileged, but unprivileged with its own range of UIDs and GIDs. That means kernel bits that would otherwise cross a namespace boundary, say ulimits, will no longer be able to cross, because each UID is only used in a single container. It's obviously a bit wasteful, because you usually want 65,536 UIDs and GIDs per container just to be POSIX-compatible; but even at that size, the UID range leaves room for tens of thousands of isolated containers, so there are quite a number of UIDs and GIDs we can make use of.

The resource limits are very similar to what you can do with virtual machines, as I showed earlier. You can limit CPU count, CPU time, memory, network I/O, block I/O and disk space. It does depend a bit on what your kernel supports, on what cgroup controllers are enabled on your system, but you can be pretty fine-grained as far as what you allow. To the point where we actually run an online service where you can get root access to an LXD container, with a nested LXD loaded into it, for 30 minutes to play with, and we've not had anyone manage to escape the container. I mean, trying is also against the terms of service, but we know that nobody really reads those. Some people do try running things like fork bombs in there, but we have a process limit that can be applied, which effectively prevents that: you'll very successfully kill yourself, but you're not going to take the whole host down with you.

We also have the device pass-through I mentioned. We support passing Unix character and block devices, network interfaces as I showed earlier, GPUs and USB devices, as well as disks. For disks, it's either passing a given path, in which case it's effectively a bind-mount, zero overhead, host-speed access, or LXD can mount a specific block device at a given path in the container for you.

LXD is very, very low overhead. I'm not saying zero overhead, because it turns out that cgroups, when enabled, do have some amount of overhead that might affect your container. But if you don't apply any resource limits, it will usually be zero overhead, effectively the same as running natively. So it's extremely fast, without any of the emulation overhead a virtual machine would give you.

And LXD gives you extremely low-level access to any devices you want, as I mentioned: Unix character devices, Unix block devices, anything that's supported by your host's kernel can be passed into the container and used from there. That's pretty different from having to first find a VFIO-compatible motherboard, CPU and cards, and then pass physical PCI devices to your VM, which can be pretty painful to set up. The obvious caveat is that if you're running something that requires a DKMS-type kernel module, you have to install it on the host and then pass the resulting device nodes into the container; the container itself cannot load a kernel module.
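[Editor's note: a sketch of the security and limit knobs mentioned above; the key names are real LXD config keys, the values are illustrative.]

    # Give the container its own isolated UID/GID range
    lxc config set c1 security.idmap.isolated true

    # Cap the number of processes (the anti-fork-bomb limit mentioned above)
    lxc config set c1 limits.processes 200

    # Disk pass-through: bind-mount a host path at a container path
    lxc config device add c1 data disk source=/srv/data path=/mnt/data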
LXD is entirely based on the REST API. All the command line tools I showed earlier were talking to LXD over the REST API, either over its local Unix socket or, when dealing with remote servers, over HTTPS, sending the exact same thing. That means that when you want to execute a command inside the container, a normal REST API call negotiates that connection, which then gets you a number of websockets, and those websockets are connected directly to PTS devices, passing your command's input and output through. It works pretty much exactly as if it were a normal shell: you can run something that uses curses or other fancy terminal libraries and it works just fine; all the escape sequences and stuff pass through. We also catch signals and send those through as well, so if you send a signal to your local lxc process, it forwards that signal to the target process over the API. Signals are just another thing we forward over the websocket.

And lastly, LXD is very much production-ready. As mentioned, we do have that LTS branch, and we do have production users, quite a lot of them actually. We also have people who prefer the latest features and run the monthly feature releases instead; it's about 50/50, as far as I can tell, between users of one or the other. Both of them are very much supported. We are very, very reactive in dealing with bug reports of any kind, directly on GitHub, and we've got a pretty active support community as well: we've got a forum and mailing lists where you can ask questions. And that's it, all done. I've got five minutes for questions. Yes?

Q: I'm wondering whether you have an image repository that has collections of standard images to deploy.

So, the image repository we run contains clean distribution images right now; that's what I showed you. For all those distros, we build daily images on multiple architectures. We don't have an app-store-type thing where you've got pre-packaged services and stuff. It's something that someone could run: I think there was already some interest from some people in doing that, and LXD would then support it just like any other LXD image server, and you could start using those images. We don't really have an interest, upstream, in packaging all the web applications and that kind of stuff ourselves, because that's time-consuming, and other people would do it a lot better than we would.

Q: Thanks a lot, I'm very much looking forward to LXD being available in Debian. A design question: why did you choose to create a separate daemon, and why not, let's say, contribute to libvirt?

Yeah. So, libvirt does have a libvirt-lxc driver, which has had nothing to do with LXC for a long time, so we've always had some interference with that upstream, to some extent. Also, we didn't feel that libvirt was really a library or a daemon with an API people could easily interact with. The whole network layer of libvirt is a bit special: it's mostly SSH-based, with a custom protocol running over that, which is not ideal. But the main reason, really, was that we wanted to be container-focused. libvirt does support containers through LXC, but its focus is really virtual machines, and so things like pulling and pushing files into a container don't really make sense in an API that's designed around virtual machines. It didn't feel like a super good fit, plus the existing confusion around libvirt-lxc was a bit of a problem too.
It felt like we could do something much cleaner, and much faster, with a brand-new, clean API.

Q: I have two questions. The first one: can the processes running inside a container be put to swap? How does that work? And the second question: how do you go about creating the Debian LXC/LXD images that one can download?

Okay. So, for the first question, about swap: yes. As part of the memory limits, you can control swap behaviour and a swap priority. You don't have very fine-grained control, because we don't have per-container swap devices, unfortunately. But what you can do is say which containers should be swapped first, and you can turn off swapping for given containers. So for a container that you never want to swap, you can turn swap off, which adjusts the swappiness so that none of its processes will ever be swapped unless the entire system is out of memory. And for the containers you care less about, you can set the priority so that they get swapped first.

Now, for the Debian image: the way it's built right now is that we've got a Jenkins server that runs the LXC Debian template, the same shared scripts I mentioned earlier, we just run them centrally. You can find those in the LXC code tree: there's effectively a shell script that gets run, spits out a root filesystem, and that then gets packed into both the LXC and the LXD flavours of the image, which are then distributed. You can look at our Jenkins setup, and you can also run our image-building scripts yourself; they'll just spit out artifacts that you can use to create containers wherever you want.

Q: I'm just looking for a clarification about the lxc config process, because it's a very convenient way of altering your container while it's running. Is this a persistent config change, or is it just...?

Yeah, all those config changes are persistent. The way they work is that they modify the database state immediately and then get applied live as well. We have, in our documentation, the list of all configuration keys and whether they can be applied live or not. The vast majority can; the usual exceptions are things like trying to modify the seccomp policy, or wanting to make a container that's currently privileged become unprivileged, which needs a full remapping of the filesystem's UIDs and GIDs, and that can only be done over a restart. So it will let you set the key, but it's only effective after a restart.

Q: This is perhaps just an ignorant question, but is there any saving across containers, resource-wise, in process and library caching, if you have a lot of identical containers?

Yeah, so the kernel does some of that for you, but the usual answer is that it doesn't, really. If your underlying filesystem somehow manages to present the same inode for a given library to both containers, then the kernel will do the caching perfectly for you, because it knows that the library is already loaded, and magic. Otherwise, we don't. We've been looking at doing KSM-type things. It is possible: there's a KSM library that can be loaded as an LD_PRELOAD for your processes, in which case it marks all process memory with the madvise call that makes the kernel do the dedup work on it. The tests we've seen from community members so far have not been impressive: it costs a little bit of CPU usage for, usually, less than a 20% gain. It depends on the workload.
I think we've got some web hosting people that do run with that preload installed, and they do get some amount of saving, but it's usually not enormous. What we do get, as far as savings go, is obviously disk space, because we create an optimized image and clone from it: if you use btrfs, ZFS or Ceph, those get you very cheap, instant copies. For memory, it doesn't matter so much for the workloads we've seen; and for the workloads where it really does matter, you can install that library, libksm it's called, which effectively lets you do it on a per-container basis. The reason we can't do it centrally in LXD is simply that, since containers are effectively just a set of processes in namespaces, LXD does not own their memory, so LXD itself cannot mark that memory as a candidate for deduplication.

We've got a bunch of stickers here, and a bunch of stickers on the table too, if anyone wants some. Otherwise, you can go on our website, where you can try LXD online: it's a fancy JavaScript-type thing where you get root access to a random LXD server, which lets you run through something pretty similar to my demo, though not quite as in-depth, to play with it. And if you then want to use it, we've got the getting-started guides that show you how to install it, with the snap or natively, depending on what distro you're running. Thank you.