This will be interesting not just if you're running DHIS 2 — this is basic infrastructure stuff, so you might find it useful for all kinds of other reasons. Hopefully you'll get a good flavour this morning of what LXD is useful for, what it's maybe not so useful for, and how to work with it. Incidentally, everything I'm going to do in this session you can also do on your laptop; there's no requirement to do this on a cloud server. You can do it in an Oracle VM. The only thing you can't do on your laptop is the SSL part of the DHIS 2 setup. Okay, I'll go full screen for a little bit — I don't want to stay full screen, because I've got to jump into my terminal. Most of what we do today will be more of a demo; I'll show you a couple of different commands. But if you're looking for a good tutorial, that's quite a nice one there that you can work through as well — it covers some of the same stuff I'm doing here. And if you're looking for the definitive documentation, with all the reference configuration options and so on, that is the place to go. I'm going to do a little demo of installing LXD. If you install Ubuntu 20.04, it comes with a reasonably recent version of LXD already. I think the latest version of LXD is 4.1-something, but LXD follows the same pattern as Ubuntu: they have an LTS edition, which will be supported for years to come, so it's always best to work with LTS editions. The LTS edition we're using is version 4 — that's what was installed by default yesterday when I logged into my Ubuntu machine; it's already there. Depending on what distribution, cloud service, or image you're using, you may not have it installed, but it's easy enough to install. You can do it like this.
I'm going to have to do it again now, because I deliberately uninstalled it on my server so I could get back to a clean state. So that's how you install LXD. And — I don't know if you were on the call when someone was asking me some months back about Debian — if you have a Debian host, then I understand a native Debian package for LXD is not currently available, but the snap works. So on Debian you can do an apt install snapd followed by snap install lxd, just like that, and that should also work on a Debian system. Okay, so I'm going to tell you a little bit about the network and a little bit about storage. We could talk a lot about storage today, but the other big session I have planned this morning is on Postgres. Then: creating containers, starting them, stopping them, deleting them, executing commands inside them; and a little bit about configuring limits and security parameters. That last point is really the main reason we'd use containers at all — otherwise you could just run your nginx or your Apache, your Tomcat and your Postgres all on the same server. We're going to limit what they can do to prevent them interfering with one another. So configuring limits is really the fun thing about having containers; it's also a neat way to keep your functionalities separate. These are some of the useful limits we would typically put on containers — I'll show you more of these. Setting memory limits: you might have a 64 gig physical server, but you don't want your Apache to use more than four gigabytes or something like that, so you can set memory limits on it. CPU allowance is an important one — we should probably do this by default. Many of us have been in the situation where a particular application starts to run wild and uses 100% of CPU,
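The install sequence just described would look something like this — a sketch only; package names are as in the current snap packaging, and the LTS channel pin is the one used later in this session:

```shell
# Install LXD from the snap store, pinned to the 4.0 LTS track.
# On Ubuntu 20.04 LXD is normally preinstalled; on Debian, install snapd first.
sudo apt update
sudo apt install -y snapd                     # Debian hosts only
sudo snap install lxd --channel=4.0/stable    # LTS edition, not latest stable
```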
sometimes to the extent that it becomes difficult even to log into the machine, because everything is slowed down. A practice I find quite good: take all your containers and make sure none of them is allowed to use more than 95% of the CPU. What that means is that if any of them goes a bit crazy and gets into some tight CPU loop, you'll always have 5% of CPU to spare, so you'll be able to do your SSH work and so on without being disturbed. You can also set harder limits on CPUs. If you know you've got 24 CPUs on a physical server, for example, you can say: right, my Tomcat is going to use CPUs number zero, one, two, three and four, and my Postgres will use CPUs 10, 11 and 12. That's harder partitioning. Security protection — this is an interesting one. It's very easy to create containers, and it's also very easy to delete them again, and sometimes you want to make deleting them a little bit harder. Take your database container: it's a bit scary that you can just go lxc delete --force postgres and your database container will be gone. One of the configuration settings you can set is security.protection.delete. If you set that and then try to delete the container, it will refuse, and it will only let you delete it after you've set security.protection.delete to false again. That's a general risk, I guess, with virtual machines, cloud servers, containers: they're all so easy to make and so easy to destroy, and there have been cases — I know a few from people who are actually in this session — where a country's system has been deleted off a cloud server. Somebody accidentally removed the cloud server because of some miscommunication: a backup was supposed to have been made back in the country, but nobody had actually verified it.
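The CPU limits described above would be set with something like the following (the container names `tomcat` and `postgres` are just examples):

```shell
# Soft cap: never let this container use more than 95% of overall CPU time,
# leaving headroom for SSH and admin work if it runs wild.
lxc config set tomcat limits.cpu.allowance 95%

# Hard partitioning: pin containers to specific cores on a 24-core host.
lxc config set tomcat   limits.cpu 0-4     # cores 0,1,2,3,4
lxc config set postgres limits.cpu 10-12   # cores 10,11,12
```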
And so the server simply got deleted, and afterwards people found themselves missing six months or two years' worth of data. We'll talk a little about data loss later today. So if you know your containers are designed to be around for a while and you don't plan to delete them, you can set security protection on them to make sure you don't delete them accidentally. Security nesting is a cool one. As you saw yesterday, when you make a container you basically have a full Ubuntu operating system — assuming you use an Ubuntu image — running inside the container, and you can do pretty much anything you want with it, as if you were running on a real operating system. One of the things you can do is run containers from within the container: you could run an LXD daemon inside a container which is itself running on the LXD daemon on the host. There are some use cases where you want to do that, but you need to set security.nesting to true to allow the container to create nested containers. The big use case we'll look at for that is running Docker inside a container — because Docker containers are just containers in the same way that LXD containers are; they all use the same kernel features. Okay, so I'm kind of telling you here what I'm going to do, and then I'm going to do it. Up until now we've been creating full Ubuntu 20.04 containers (I've got a typo in there). I've built my installation tools around these Ubuntu containers, largely because I know quite a lot of people are familiar with Ubuntu, so it's not a big conceptual leap to work with Ubuntu running in a container. The truth is they're a bit big — you don't need such massive Ubuntu containers just to do simple things. But there are other images you could load. If you do lxc image list ubuntu:, it'll give you a list of all the Ubuntu images that are available.
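Enabling nesting for a container that will run Docker (or a nested LXD) looks like this — the container name `docker-host` is illustrative:

```shell
# Allow the container to create containers of its own (e.g. run Docker inside).
lxc config set docker-host security.nesting true
lxc restart docker-host    # restart so the setting takes effect
```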
One of the nice things about that is it's a very easy way of trying out new versions and old versions. Want to see if your software still works on 18.04? You can create an 18.04 container and try it out. Want to try the latest Ubuntu 20.10? You can similarly launch a 20.10 edition. And they're not just Ubuntu images: if you go to the images remote — lxc image list images: — it'll give you the same list you see from that URL there. You've also got access to Debian, Alpine, CentOS, various odds and ends. Alpine is an interesting one in the sense that it's really tiny; Alpine is the distribution most Docker images are built on. I'll show you how we can make little Alpine containers as well. That's pretty much what we'll run through with LXD. It's a good idea, I think, to familiarize yourself as much as possible with the base environment — play around with making new containers, play around with the network — so that when you start installing your DHIS stuff on top of it, you're familiar enough with the environment. Let me go backwards and forwards between presentation and terminal and escape from full screen. I'll try to do this without reading the documents as I go. So, my server here: I've actually deleted my LXD, because I need to install it again. If I just do snap install lxd, it's going to install the latest stable version, 4.11. I've not tested anything on 4.11 for production purposes, and I usually like to be a bit conservative anyway, so let's stick with the LTS edition. I've got the string back here: it's --channel=4.0/stable. If you're working with Ubuntu 20.04, you probably never need to do this — you'll probably find that LXD is already installed. If it's not, this is the way you'd install it. Okay, now my LXD is installed. LXD is just a hypervisor daemon that you'll find running on the system now.
Let's just see what's running. It should be running — maybe it's because it's not initialized yet. Once you've got LXD installed, you can execute these lxc commands. If you go lxc --help, you'll see there's a whole lot of options. You'll recognize some of them — lxc move, for example, for moving a container from one place to another. Two important things we want to concern ourselves with in our LXD environment: the network — and you can see the only network we're seeing at the moment is my physical network interface, which is not managed by LXD; we haven't configured any network yet — and the storage, which is basically where the containers go when you make them. Currently we've got no storage defined. All right, so in order to start building up an environment, creating containers and allowing them to interact with one another, we need to make some storage and we need to make a network. Sometimes it might make sense to do all of this in advance — if you're putting together a very customized system, you can start here, creating storage areas and networks. But generally the easiest way to get started is exactly what it suggests there: if this is your first time running LXD, you should run lxd init. What lxd init will do is set up some defaults for you: it'll create a basic network and a default storage pool. I think when Stephen was taking you through the install yesterday, he was using the lxd-setup script. Now, what lxd-setup does — okay, it installs LXD as well, but it shouldn't do that; I'd better fix it to make sure that lxd-setup on a blank machine doesn't install the latest version of LXD. Mental note: come back to that. The other thing it does is run lxd init with an option called --preseed. A preseed is just some pre-configured options,
so you don't have to answer the questions. The way I'm going to do it, I'm going to actually answer the questions — we'll go through them one at a time. The other thing that happens in this script is a couple of kernel tweaks. I can't remember the URL off the top of my head, but if you Google "LXD production settings", it'll show you a couple of kernel tweaks that you should make, and I'll show you those here. For the kernel folks among you — we've got sysadmins here — these are a few kernel configuration settings you should set, particularly if you're running a lot of containers. We found early on last year, when we did the training-of-trainers sessions, that people would go away and start making containers, and everything would work fine until they'd made 10 or 11 or 12 of them. Somewhere around there, depending on your system, it starts complaining — usually about too many open files. So certain kernel limits need to be raised if you want to make lots of containers. And by lots, I mean you can quite conceivably have 100 containers running on a reasonable machine, as long as they're not all using vast amounts of CPU and RAM. So we're going to run lxd init to set up LXD for the first time. There are a few gotchas in here. Right, it's going to start asking me all kinds of questions. Would you like to use LXD clustering? Well, you know, I would really like to use LXD clustering, but I'm not going to. This is a bit like what I was referring to with Rwanda yesterday: in Rwanda they have three different servers at the moment, each of them running LXD. But because it was never planned that way — it just sort of grew — they're not actually arranged as a cluster; they're just three independent machines running LXD.
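The kernel tweaks being referred to look roughly like this. These values are my recollection of the LXD "production setup" documentation — treat them as an illustration and check the official page before relying on them:

```shell
# Raise kernel limits that containers exhaust first (inotify watches,
# memory maps, kernel keys). Written to a sysctl drop-in file.
cat <<'EOF' | sudo tee /etc/sysctl.d/99-lxd.conf
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
kernel.keys.maxkeys = 2000
EOF
sudo sysctl --system   # re-read all sysctl files without rebooting
```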
The advantage of clustering them is that you can treat all three machines as if they were one big LXD machine: create containers, move them transparently from one to the other, et cetera. We're going to keep our lives a little bit simple and not cluster anything. We do need to make a new storage pool, because as we saw when I listed, we've got no storage defined. The name of the pool we'll make, we'll call it default — default is the default. Now here is the tricky bit: we have to decide what kind of filesystem backend to use. This is a difficult decision for me to make here, because ZFS is really the best way to do it. But I don't want to encourage people to go off creating systems based on ZFS unless they've done a whole lot of background reading and practice with ZFS filesystems — we'd need a whole separate session on ZFS itself before going there. So the most inefficient way of doing it, but also the simplest, is to use the plain directory backend (dir). What this means is that it's going to take a directory on your existing filesystem and put all your containers there. That's fine for our kind of setup, where we're not doing a lot of fancy things — particularly snapshots. Snapshots are a great feature of containers, but snapshots are really slow on the dir backend, because a snapshot there literally means taking a full physical copy, so it takes a really long time. If you're doing snapshots on ZFS, those snapshots are nearly instantaneous, because ZFS uses copy-on-write, so it's very cheap to make snapshots. I've got a link somewhere, I think in my next presentation, about running Postgres on ZFS — one of the nice things about it is the ability to snapshot. LVM: a lot of people in VMware environments (and VMware is sadly very common) will get their storage assigned to them as LVM volumes.
It's quite cool to take an LVM volume and tell LXD to use that as a backend, because LVM also has some quite nice features around snapshots and the like. But for today we're going to keep our lives simple and stick with dir — and if you've got a simple enough installation, keeping it simple won't do you any harm. Okay: we don't want to connect to a MAAS server. We do want to create a local network bridge — as you see, we don't have any network defined — so we say yes. Now, this is the bit where it's really important that you take the default, and the same for the next question. There's a limitation in my scripts at the moment: even though they give the impression that you can define whatever network you like, in fact I've been a bit lazy and some of them have hard-coded assumptions about the network. If you're not using my setup scripts, but just doing LXD for some other reason, it doesn't matter. But if you're going to use my setup scripts, make sure you stick with this default: the name of the bridge is lxdbr0, and the IPv4 address is that one, with a 24-bit mask on it, like that. Unfortunately, at the moment you have to use that address; if you use any other address, some of the scripts will fail, because, as I say, I'm not introspecting the network properly — I've made a few hard-coded assumptions. We'll fix that over time; for the moment, use this as the name of the bridge and that as the network. We do want to NAT IPv4 traffic on the bridge. We don't have any need for IPv6 within our bridge, so I'm going to turn it off — it generally does no harm to have it there, but we don't need it. Do you want your LXD server to be available over the network? The default for this is no. I guess that's a security thing.
There are some quite cool things you can do if your LXD server is available over the network, including publishing your own images, and you can access your LXD server and run these lxc commands from a Windows host or from a Mac host. But from a security perspective, generally speaking, unless you've got a good use case for putting it on the network, leave it off the network. What that means is that in order to work with it, we have to SSH into the machine and run these lxc commands there. The LXD server actually exposes a REST API, so everything I'm doing here — well, not the setup, but all the lxc commands — is implemented over a REST interface as well, so you can interact with it that way too. Yes, it's a good idea to keep stale cached images updated. Do you want the preseed to be printed? The default here is no; I'll type yes, just to show you what it does. It takes the configuration I just defined and prints it out here as a file: the network config, the storage pool — there's my default storage and its driver — and the profile. That preseed file is in fact the same as this file here; that's all I'm doing in this script. This is a file I generated before — basically all those same answers I just gave — and I feed that preseed file directly into the lxd init command. That's the way the script is able to set up LXD automatically, rather than going through the process I just went through. Okay, this next bit isn't really necessary — it's just a little demo machine — but if I were running a serious machine it would be a good thing to do, so let's do it manually as well: set some kernel parameters. What happened? I lost something in my sysctl file. We can run this anyway. I can't remember the Linux command I need to make it automatically re-read the sysctl file — it's fine in the script, because the script takes care of it. Okay, then there is a command... I probably should put it into the script. Let's just Google it — Google knows everything.
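A preseed file matching the answers given in this session might look something like the following. The bridge name and address are the hard-coded ones the setup scripts assume; treat the exact YAML as a sketch of the format rather than the script's verbatim contents:

```shell
# Non-interactive LXD initialization: pipe a preseed into lxd init.
cat <<'EOF' | lxd init --preseed
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: 192.168.0.1/24
    ipv4.nat: "true"
    ipv6.address: none
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
EOF
```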
How to reload sysctl variables... The command I'm looking for is sysctl — yeah, just that: it reloads the settings, which saves me having to reboot. Okay. There are some other commands in there. This is something I need to experiment with a little more; it may or may not be necessary. Sometimes people have found — I think Flemart has suffered with this over the years — that for some reason the UFW firewall argues a bit with the LXD bridge, and it depends a little on the order of things: whether you install LXD before UFW, or UFW before LXD. What I've done here is just tell the firewall: please don't block traffic on the bridge — let traffic in from the bridge, and let traffic out to the bridge. I need to test a bit more how necessary that is; I had to do it over the weekend, I think because I was using a later version of LXD. But it doesn't do any harm to be explicit, and this will ensure that your networking works. So that's all that's involved in the lxd-setup script; after that, it just runs the create-containers command. What I typically do myself is not use that preseed at all — I always just run lxd init and set it up, usually because I've got different requirements for how I want my storage pool to be. But after you've run lxd-setup — sorry, if you run lxd-setup, it'll automatically run create-containers. If you set up LXD yourself, which you can do (just be careful with the network), then you don't need to run lxd-setup again; you just run create-containers. Anyway, having done all of that, we should now find we have a network — there's our bridge network — and we have storage, I hope. There we have some storage, and you can see this is where the stuff is actually sitting, under /var/snap/lxd and so on.
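The UFW rules being described are these (lxdbr0 is the default bridge name used throughout this session):

```shell
# Tell UFW not to block container traffic on the LXD bridge.
sudo ufw allow in on lxdbr0          # traffic from containers to the host
sudo ufw route allow in on lxdbr0    # forwarded traffic coming in from the bridge
sudo ufw route allow out on lxdbr0   # forwarded traffic going out to the bridge
```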
Once you've got storage and you've got a network, you can start making containers, so let's make one. I don't need to type sudo to use the lxc command, and that's because I made myself a member of the lxd group — you might remember that from yesterday. Usually the first thing people do is make themselves a container. (The noise in the background is the postman arriving; my dogs don't like it.) We'll just make ourselves a 20.04 container — we talked a little about what kind of operating system to use; this is the one we've been using up to now, so let's carry on using it. I'll just call it test, and that's going to make an Ubuntu 20.04 container called test. I should be able to see it now, sitting in there. Notice that it doesn't show the IPv4 address — a little bit odd; I think this is a bit of a bug, in that it doesn't always show the address in the list until it warms up a bit. Executing a command in a container: this is typically how you do it — lxc exec, the container name, and then the command you want to execute. It's good practice to always put -- before the command; it doesn't matter so much in this case. Okay, that's actually why it doesn't have an IPv4 address: it's not running. The init command we used creates a container but doesn't start it, so if you create a container with init, you have to start it afterwards. Starting and stopping: now we should see it running — there it is, and it has an IP address as well. Starting and stopping are fairly intuitive: you just start it or stop it, or you can restart it. (It's already stopped, so I can't restart it; start it, then restart it.) I restart these containers quite a lot — sometimes it's just easier. If you want to restart your database, for example, you can go into the container and restart the service,
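Pulling the lifecycle commands from this demo together:

```shell
# Create without starting (init), then manage the container's lifecycle.
lxc init ubuntu:20.04 test    # create only; "lxc launch" would create AND start
lxc start test
lxc list                      # shows state; the IPv4 address appears once running
lxc exec test -- hostname     # run a command inside; "--" separates lxc's own args
lxc stop test
lxc restart test              # only works on a running container
```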
but sometimes it's just quicker to restart the whole container — containers start, stop and restart pretty quickly. So, yeah, you'll see that it automatically got itself an IPv4 address. That's coming from a small DNS/DHCP server that runs internally on the bridge and hands out IPv4 addresses. Obviously these get handed out a little bit at random, and when you've got long-running services like your proxy and your database, you want to fix their IP addresses — I'll show you how I do that. Let's stop our container. I don't remember all these commands off the top of my head, but I can grab them from there and get them in here. Okay, so if I want this container to always run on a particular IP address, there are different ways to do it. Probably the best way is this: first make sure that our container test has the device eth0, which is attached to the bridge, and then set its IP address from there. Let's make it 192.168.0.22. My container is called test. Now when I start my container up again, it should hopefully get its new IP address. Sometimes it doesn't get it immediately — it hangs on to the old lease until the lease expires. There we go. After this, this container will always have that IP address: once I've configured that eth0 device inside the container, it will always get it. Okay, back to my storage. You can see my storage pool now says it's used by things. That's odd — we only made one container, but it's used by two. That's because we also downloaded an image as part of creating the container: the storage pool holds the images and also the containers themselves. Okay. Has anybody picked up any questions in the Slack while I've been talking? If not, I'll move along. Let me remember what I was going to say.
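The static-IP step just demonstrated, as commands (using the address from the demo; `override` copies the profile's eth0 device into the container so it can carry per-container settings):

```shell
# Give the container a fixed address on the lxdbr0 bridge.
lxc stop test
lxc config device override test eth0 ipv4.address=192.168.0.22
lxc start test    # may take a moment if the old DHCP lease hasn't expired
```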
Creating containers, starting them, stopping them, deleting them. Delete isn't going to work right away: if I try to delete test, it'll say you can't delete it, it's running. So we can either stop it first and then delete it, or — if I'm really not worried about it — I can just forcibly delete it. Now you see it; now you don't. The fact that this is so easy is the thing I was saying is a bit of a worry if you had a very valuable, long-running Postgres container. So let's make another one — make it again. The second time we make it is quicker than the first, because we already have the image. That was a container I wanted to delete in a hurry. And let's start it again — actually, we don't even need to start it. Now, there are some config settings. This is probably lxc config set test... my brain died on me; let me check my slide. lxc config set, then the instance — if I type test here, I should get completion — then the key. I want to set security.protection.delete. Okay. Now everything still looks the same — I still have my test container, it's stopped — but if I try to delete it: no, you can't do it, it's protected. I probably should do this more often. It's actually a good idea: if you have containers that are valuable and you know they need to stay around, you definitely don't want to delete them by accident, so it's a good idea to set security protection. It won't allow me to delete it now unless I deliberately turn that off first; then I'm able to delete it. Okay, where are we? So that's security protection. We can also set limits and allowances and things like that. I've got 16 gig on this machine — I got a big one.
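The delete-protection dance, as shown in the demo:

```shell
# Protect the container, watch a delete fail, then unprotect and delete.
lxc config set test security.protection.delete true
lxc delete --force test    # refused while the container is protected
lxc config set test security.protection.delete false
lxc delete --force test    # now it goes through
```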
By default, if I set up ten containers and get them all running, each of them will see the 16 gig. And sometimes that's not what you want, because if one of them is greedy, it's going to take the resources and the others will be starved. So if you have a greedy container, you might want to limit it. Let's do that: take our test container and set limits.memory to four gigabytes maximum. "Not found" — ah, I deleted it, that's why; I've created and deleted this thing so many times I forget. Let's make it again. Okay, we've got it again, and we set a limit on it. You can always look at the config of a container with lxc config show test. Here's our test container, and you can see it up here somewhere — there it is: limited to four gig. What that means — I'll start it up again. (You can do this while it's running, by the way. I'm not sure what the effect of suddenly changing the memory limit of a running container is; I've done it a few times and haven't crashed anything yet, but I'm not sure it's advisable.) If we go into our test container and look at the memory that's available there, we should see it's no longer seeing 16 gig — this container will only see four gig. So that's a way we can constrain its resources. You can do similar things with the CPU. I don't know how many CPUs I have on here; let's just say this one can access CPU zero and nothing else. Okay, on the host: this thing has got six cores on my main machine. If I look inside my test machine — oh, okay, it actually sees them all. It sees them all, but believe me, it can only use one of them. I need to verify that — I'll check and get back to you. Usually I don't restrict containers to particular CPUs anyway; it's more useful just to set their allowance, limits.cpu.allowance.
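The memory and CPU limit steps, collected. Note `0-0` rather than a plain `0` for pinning: a bare number is read as a *count* of CPUs, while a range selects specific cores:

```shell
# Cap the container at 4 GB of RAM and verify from inside.
lxc config set test limits.memory 4GB
lxc config show test          # limits.memory: 4GB appears under config
lxc exec test -- free -h      # total memory inside now reflects the limit

# Pin the container to core 0 only.
lxc config set test limits.cpu 0-0
```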
That way we can make sure our containers are never able to use more than, say, 40% of the CPU. Okay, so those are ways we can restrict how much of the host's resources our containers can gobble up. The other thing I do inside all of the containers — this is actually in the setup scripts — is run UFW, the firewall, inside each container, so that each container has UFW running on it, and that way we set up particular rules for particular containers. Again, this is really a security thing: it's about making sure containers can't interfere with one another. If one of your containers gets hacked, you want to do everything you can to make sure the damage can't spread further. So, what else did I have on my slides? I can see we're running short on time — we could talk about LXD all day. That's limits; I wanted to talk a little bit about images, I think. Yeah. As I said, we've been working up to now with 20.04 containers. There's a link here that gives you a list of other images: a lot of Alpine images, Arch Linux, CentOS, Debian, et cetera. We can set up images of different types, and you can also create your own images. I'll follow the slides and just give some examples. This one lists the Ubuntu image repository; then there's a repository called images, which holds more general images. We can list what images are available, and we can launch instances using different types of image. Incidentally, the difference between the init command and the launch command is just to do with the way the container starts: lxc init creates the container, but as we saw, you then have to go and start it; lxc launch does exactly the same thing, except it creates the container and then starts it. In case people were confused about those commands.
So let's play around with that for a couple of minutes. lxc image list, like this, will just show me the local images — images I have in my image store. You can see the Ubuntu 20.04 image there; that means that if I make more Ubuntu 20.04 containers, it doesn't have to download the image over and over again. If we want to look at images on remote servers: this shows me all the images available on the ubuntu: remote — that's why there are so many of them — and I can filter them. Do they have an 18.04 image, I wonder? Oh yes, quite a few: the 18.04 LTS edition for different processor types. So if I wanted to test my tools — and I really should do this — to make sure they run on 18.04, I can do that: I can launch (or init, it doesn't really matter which) an 18.04 container; let's call it that. This makes me an Ubuntu container running 18.04. I can also try out the latest and greatest, which I guess is 20.10 — I don't think there's a 21 yet — and that makes me a container based on the latest 20.10. One of the nice things about this is that I know quite a lot of folk out in the field have LXD running already on Ubuntu 18.04. There's absolutely nothing wrong with that, and you can still run images based on 20.04 — so even though all my tools are built on 20.04 images, you can still run them on an 18.04 host; it shouldn't make any difference at all. Okay, if I now look at my images, I can see I've now got three: once I've used an image, it's kept in cache, so if I need to make another container I can. Now, you don't have to make Ubuntu containers. Ubuntu images, to be honest, are a bit big. If I look at how much is being used on the filesystem inside my test container — df -h — you can see this Ubuntu image is using one and a half gigabytes in its root filesystem.
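Trying out other distributions from the images: remote might look like this (the container names are just examples):

```shell
# Browse and filter the general-purpose image remote.
lxc image list images: alpine        # list available Alpine images

# Launch a tiny Alpine container and an older Ubuntu for comparison.
lxc launch images:alpine/edge alp1
lxc launch ubuntu:18.04 bionic-test

# Compare root filesystem usage with a full Ubuntu container's ~1.5 GB.
lxc exec alp1 -- df -h /
```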
That's quite a lot, really, for what I need to do some of the time. So if you want smaller images, there is an Ubuntu Minimal image you can get from somewhere — I can't remember offhand; we can look it up. If you want really small ones, Alpine: look at the difference with this. Let's make an Alpine edge container — what do we want to call it? Okay, creating... okay, not launch, I'm going to init it, from images:. See how quick that was to retrieve the image — almost instant. Make a new one — yeah, much, much faster. That's just because the underlying base image is much smaller. So what I probably should do is look at cutting down the size of some of the images we're using here. But you can knock yourself out and have a lot of fun playing around with different distributions — you can run Debian, you can run CentOS, all kinds of different images. Yeah, that's pretty much the run-through. So what did we do? I showed you where the main documentation is; a little demo showing you how to do lxd init and set up your storage and network; creating containers; some of the useful limits — I've left something out here: this should be lxc config set and the name of the container, then the limit; I'll fix that slide, all of those should be the same — and we looked a little at different images. Okay, so that's a very, very quick run-through of LXD. This is what underlies our DHIS 2 container setup. As I say, it's generally useful — you can use it for things other than DHIS 2. And obviously in an hour we've only scratched the surface.
There are lots and lots of other aspects — particularly things like the ZFS filesystem, clustering, and network access to your LXD — which open up all kinds of other possibilities that we can't get into now. Right, on that happy note, I think I'll leave it there.