Okay, so I just had to stop and record to the cloud instead. So today we want to talk about setting up DHIS2 on an offline server, that is, a server that doesn't have external access to the internet. To proceed with that, we'll talk about the packages we need to set up our application. Our stack has the DHIS2 application itself: Tomcat 9, and on top of the Tomcat 9 server you have the DHIS2 WAR file, so the Tomcat server is going to be serving that WAR file, and Munin for load monitoring as well. Then you have Postgres, which implies you will have to get the Postgres packages onto your system. And if you plug in other monitoring tools like Munin, then you need to manage those packages within your system too. Now, our standard install is normally built on top of LXD, and that means you need to install LXD on whatever server hosts your DHIS2 installation. If you're dealing with Ubuntu 18.04, 20.04, or 22.04, LXD is normally packaged as a snap. So that means you have two package management systems on your server: one is apt and the other is snap. So why do we need internet on these servers? For instance, the tools we have out there, which use our shared scripts, do not work if you don't have internet on your server; for you to run those scripts, you need an internet connection. The reason is that for LXD to build your container, it has to start from somewhere: it needs a base image to start from, and those images are normally not available on your local system, at least when you've not downloaded them. So what happens is that the container image is downloaded from a remote, and then the container is built from that image.
Then the packages required will be installed into that same image that we got from the remote. That is at least what happens when you run the Ansible scripts that we have. In the previous call we talked about LXD in detail, and we talked about images and how you can manage them. We explored and realized that those images are normally hosted somewhere; you download them from some remote. I can even list them on my terminal here: lxc image list, or lxc remote list. So we have remotes here: we have the images remote, and then the ubuntu-daily remote and the ubuntu remote. Normally images are downloaded from these remotes; they are hosted in some remote image repository and downloaded prior to the installation. That is one of the reasons why you need internet access on your server. And then, for you to build a working DHIS2 application, you need to get Tomcat from somewhere, you need to get the Postgres packages from somewhere, you need to get the proxy, that is, nginx or Apache, from somewhere; all those packages that you need. For that you will need apt. And what is apt? It's a package management system. Apt has a local cache, and when you issue a command like sudo apt update, it refreshes that local cache, making sure everything listed in your sources list is current. Then when you issue a command like sudo apt upgrade, it checks what you have in your local cache and tries to upgrade, pulling the latest versions and security patches into your system. And when you install a package like, say, vim, it checks within your local cache whether that package is available; if it is not, it connects to your remote apt repositories, downloads that package, and installs it into your system.
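The apt workflow described above boils down to a few commands; vim here is just the example package from the talk:

```shell
# Refresh the local package index from the remote repositories listed
# in /etc/apt/sources.list (this is the step that needs internet access)
sudo apt update

# Upgrade installed packages to the newest versions known to the local index
sudo apt upgrade

# Install a package: apt resolves it against the local index, downloads
# the .deb from the remote repository if needed, then installs it
sudo apt install vim

# Inspect which versions of a package are known and where they come from
apt-cache policy vim
```

All of these except the pure cache lookups fail on a server without outgoing connectivity, which is the whole problem this session is about.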
Yeah, so when we are faced with the problem we are talking about today, where you don't have internet, that means you don't have the leeway of downloading these images for the containers, or even these packages, from somewhere else. You need to prepare yourself: get these packages, say, onto your laptop, and then install them into whatever system you're building your containers on. Prior to the actual install, you set up DHIS2 in an environment where you do have an internet connection; that is, you build the containers, plus all the packages, in an environment with internet. That means you install all the required packages into the individual containers, and you can even use our tools there, because that part can be automated when you have an internet connection. Then, after you've got all your containers with all the required packages, you can export them and import them on whatever server you want to set up DHIS2 on. So we're going to move quickly into the demonstration and see how we can tackle these problems at hand. I will prepare an instance which is offline, which doesn't have internet access at all, and we'll see how we can set up a few items there, and how we can set up DHIS2 with at least one of these approaches: we can use containers, or carry the packages with us.

Can I make a quick comment before you go? Yes. Just thinking about cases which have happened to me, where I've been in this situation where people want to install and there's no internet. There are actually two reasons why it happens. One is, sometimes you find, particularly in strongly regulated government data centers, that you get blocked from making outgoing HTTP connections. I think you were talking the other week about having that issue in Jordan, for example.
But I think the other case is maybe even more common: when you're installing in a local data center, it's not that you're banned or barred from using the internet somehow, it's simply that the internet connection is very weak and it's actually not practical to be downloading lots of packages. That's something I've come across quite a bit, working in physical data centers, often in places where the connectivity is not always fantastic. I don't know if anyone else has had similar experiences.

Yes. Stephen, were you working in South Sudan? Well, in the main Sudan, yeah. Because of sanctions and restrictions and the like, the internet was very restricted, and the connections were not going out of Sudan to pull those packages in. Sanctions were another issue. So yeah, Tito, it happens quite a bit, so this scenario you're describing is definitely an interesting one. Let me not interrupt you further.

Okay. So I will prepare two servers here. One is an online server where we prepare ourselves, just an environment where we prepare and download the packages. In a real situation you need to prepare in advance: you need to get those packages before proceeding with the installation. You download the packages while you have an internet connection onto your laptop, into some directory, and then you can pack them into an ISO, for instance, and just have them with you. So here I will launch a container; let us call it offline. And then I will disable the networking, I will disable external access for that container. This will create a container real quick on my local machine here. The process of creating this container is quick because I already have the image in the local cache. So when I issue lxc list, I have this offline container. After a few seconds it will get an IP address.
Then we will disable networking on that container. And let me quickly create another one; let's launch an online one. This online container is just helping us to prepare the packages we need before installing them into the offline container. So let me check now: yes, the offline container is launched and it has this IP address. Let me SSH to it. It still has networking right now, and I can ping the internet, say Google's DNS, and it's responding. So it's online, but we can disable that. I'm editing the netplan configuration and deactivating that interface by setting its activation mode to off. Once I apply this, it will throw me out, because I connected to this instance over SSH and the interface is now deactivated. So that's why I'm now out, but I can still list with lxc because of the remote lxc configuration here, and I can exec into this container: lxc exec offline bash. This container now doesn't have internet at all. If I check networking, notice that the interface is there, but it's down. It doesn't have internet, and if I ping Google's DNS like before, it is not reachable. Now, the online one is created too, and it has this IP address ending in .4, so I SSH to the online one. This is the online container where we will now prepare our packages and so on, which we will then install on the other end. So here, what are the things we need to set up DHIS2? One is snap: we need to get the snap package there, because normally LXD is a snap package. If I list the snap packages here with snap list, we do have LXD; it comes with a stock install of Ubuntu. But on our offline server, I'm going to assume that we don't have snap's LXD, by deleting the package that we have right now.
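The two demo containers can be prepared roughly like this. The container names match the demo; the netplan file path and interface name are assumptions (on Ubuntu cloud images it is usually /etc/netplan/50-cloud-init.yaml with eth0, but check your own image):

```shell
# Create the two demo containers from a locally cached Ubuntu image
lxc launch ubuntu:22.04 offline
lxc launch ubuntu:22.04 online

# Get a shell inside the "offline" container
lxc exec offline -- bash

# Inside the container, edit /etc/netplan/50-cloud-init.yaml so the
# interface is no longer brought up, then apply:
#   network:
#     version: 2
#     ethernets:
#       eth0:
#         activation-mode: off
netplan apply

# Back on the host: the interface is down, so this should now fail
lxc exec offline -- ping -c 2 8.8.8.8
```

Note that `activation-mode: off` needs a reasonably recent netplan; on older releases you would take the link down some other way.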
snap list shows LXD, so I'm going to delete it: snap remove lxd. This deletes LXD from the system, so that we also have to worry about setting up LXD itself: assume you are in an environment where you don't even have LXD installed and you have to take care of that too. So now, after deleting it, we can list the packages we have, and for sure we do not have LXD like we had before, so LXD commands, even listing, will not work, because the snap is not installed. Now, the way snap packages are bundled is that an app has all its dependencies bundled together into one snap file, so you don't have to worry about how you will get all those dependencies together. When you have a snap package, you can install it and it will just run, because it's bundled with all its dependencies. And snap package management offers a way to download a package without installing it. Because this server is online, we can download the snap package for LXD onto this instance by issuing snap download lxd. This downloads the LXD package, but it is not installed; if I wanted to install it I would issue snap install lxd, but I just want to get the file with me, which we will use to set up LXD in the offline environment. There we've deleted LXD, so we need to set it up, and we don't have internet; we could never just issue snap install lxd there, because with no internet connection it will just wait and wait and do nothing. So the online server is just helping me to get the package; then I will copy that package onto the offline server and install it. So this is downloading the LXD snap package. And even if the server is air-gapped in a data center where you don't have internet, you will definitely have some way of connecting to it, because how else would you be able to install anything?
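The download step on the online machine is a single command; it fetches the package without installing it:

```shell
# On the online machine: fetch the LXD snap without installing it.
# This produces two files: lxd_<revision>.snap (the package itself)
# and lxd_<revision>.assert (its signed assertions for verification).
snap download lxd

# Confirm both files landed in the current directory
ls -l lxd_*.snap lxd_*.assert
```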
Either you will be accessing it through SSH, where the network simply doesn't have an internet connection but you still have that link to the server, or through a physical console, maybe sitting physically at that server. That means you could have your packages on, say, a flash disk; that would be one way of distributing them. In any case, I'm assuming you will have some way of reaching the server where you're setting up DHIS2, whether over SSH or otherwise; it's only that the server doesn't have an internet connection. So right now those two files, the LXD snap package and its assertion file, are downloaded and sitting somewhere here. Let me list, and here they are. These are files which you can copy onto your flash disk, or just keep on your laptop. The next step is to copy this snap file onto my offline server. Both of these are really containers, so I will use the lxc file command, lxc file pull: I want to pull the file from the online container, from the directory it's sitting in. So I'm pulling from my home directory there. The first one is the assert file; I've pulled that, and the next one is the snap package itself; I copy that into this directory too. So I have them here; I've just used lxc file pull from this online container. The offline one doesn't have network, it's disabled, so I'm going to push these files into it. In a real situation, where you are dealing with the data center, you would copy the file over SSH; you can use rsync, secure copy, or whatever suits the way you're accessing that server. So: lxc file push. I want to push this snap package into the offline server, say into my home directory. It's now pushing... it's pushed. Then I can push the assert file too; it's pushed, this one is very small.
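The pull-then-push shuffle looks like this. The revision number, paths, and host names are made up for illustration; in the real air-gapped case you would use the SSH-based alternatives at the end instead:

```shell
# Pull the downloaded files out of the "online" container to this host
# (adjust the revision number to whatever snap download produced)
lxc file pull online/root/lxd_12345.snap .
lxc file pull online/root/lxd_12345.assert .

# Push them into the "offline" container's root home directory
lxc file push lxd_12345.snap offline/root/
lxc file push lxd_12345.assert offline/root/

# On a real air-gapped server you would copy over SSH instead, e.g.:
#   scp lxd_12345.snap lxd_12345.assert admin@offline-server:~/
#   rsync -avP lxd_12345.* admin@offline-server:~/
```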
Yeah, let's get to the offline server and list my home directory, because this is where I pushed the packages to. Note that I tried snap install lxd here on the offline server and it didn't work, because it tries to reach the snapcraft.io API and we don't have internet here. But I have gotten these files from elsewhere. That means for this to work, you need to get your offline packages, the install files, and carry them along with you. Now, to install: when we downloaded the file on the online server, snap printed a few instructions on how to install the downloaded package. One of them is to acknowledge the assert file, and then install the snap file with the other command. This installs snap's LXD from the local file; it's an offline package, meaning you can carry it along to whatever server you want, and that way you'll be able to set up LXD with snap. So if I issue a command like lxc list, it should work, because now I have LXD, installed purely offline. Of course, our LXD container management engine here is not initialized yet, but the command works; that means LXD is now installed. Even so, if I try to launch a container, it won't work, because it tries to fetch the image. Remember, the way a container is built is that LXD gets the base image first, and then builds the container from it. But where does it get the base image? lxc remote list: it tries to get the image from the remote image servers, the daily remote or these others. But you don't have internet, so it will not work; even if I try it, it will not work. So that means you also need to plan ahead for your images. And I have a server with me.
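snap download prints the exact install commands to use; the offline flow it suggests looks like this (revision number again just an example):

```shell
# On the offline server: first acknowledge the package's assertions,
# then install from the local .snap file. No network access is needed.
sudo snap ack lxd_12345.assert
sudo snap install lxd_12345.snap

# Without the .assert file you would need --dangerous, which skips
# signature verification:
#   sudo snap install --dangerous lxd_12345.snap

# Verify: the snap is installed and the client works. The daemon is
# not initialised yet, so there are simply no instances to list.
snap list lxd
lxc list
```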
I have a server with me where I had installed DHIS2. This is the server: it has a base install of DHIS2, but a very fresh one. It has everything, but no data; it's just a clean install. It has two containers that run DHIS2 instances, as you can see, and then it has Munin for monitoring, a Postgres database of course, and the proxy, which is the gateway entry point to our DHIS2 infrastructure. So this is where I have the base install. On this server I did an export with lxc export proxy. With this, you export the proxy container with all its installed packages: it takes that container plus everything added to it and exports it into this kind of gzipped tar file. I have done exports before; I have a few files here: HMIS, Munin, Postgres, proxy. Let's export another one from this list, the HMIS one, just for demonstration. The export process starts, and as you can see, it's exporting into this file now. So this is just to prepare yourself, but then you have to figure out a way of getting these exported files onto the offline server, and we've talked about that. For this to work, the LXD network environment where you're exporting your DHIS2 from needs to match the LXD network environment where you will import it. That means your LXD network here; right now we've not initialized LXD, so lxc network list will not show us anything, there's no LXD network. But when we initialize LXD here, it needs to match, or at least be on the same subnet. While this is still exporting from the server where DHIS2 is working, let's now set up LXD here with lxd init, and I'll go through this quickly. It asks how many GB we want to give the storage pool. And for our network to match, the bridge has to be lxdbr1.
And the subnet is 172.19.2.1/24; I'm just ensuring that it matches what we have on the source. So we've initialized LXD here, which implies the network will be available now, and for sure, the network is created. The storage is created too, lxc storage list shows it, and the profile as well. That means when this export completes and we get the exported file onto this server, we can import it and it will just run with everything we need: if it's the Postgres export, it will have everything, all the packages, with it. These exports I had actually prepared before starting this call, on the working DHIS2 server, and I have them here with me. So I'm going to push Postgres into the offline server and then import it into a container: lxc file push the Postgres export into the offline server, into my home directory. It will just copy it over; not even over the network, really, since it's copying with the lxc command because I can reach that LXD instance directly. So this copies the exported Postgres container, plus all the packages bundled in it, onto that offline server. And we are on the offline one now: if I list here, you will see that the Postgres file is coming in. The file is actually growing with time, see, 398, then 417; it's growing because the copy is still ongoing. Once we have this file here, we can import it into a container, because we have installed LXD offline and we have set up our environment so that it doesn't need internet; the initialization of LXD doesn't need internet. So what are we missing right now? Just the containers; we just want to import them. And this is 89%, 90% done; it will be done in a few. Yeah, it's done. So here on the offline server we have this Postgres file.
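Matching the source network during lxd init can also be done non-interactively with a preseed. Everything below (bridge name, subnet, pool size, zfs driver) reflects this demo's choices and must be adjusted to whatever the source server actually uses:

```shell
# Initialise LXD offline with a bridge whose subnet matches the source
# server, so that imported containers keep their static IPs working
cat <<'EOF' | lxd init --preseed
networks:
- name: lxdbr1
  type: bridge
  config:
    ipv4.address: 172.19.2.1/24
    ipv4.nat: "true"
    ipv6.address: none
storage_pools:
- name: default
  driver: zfs
  config:
    size: 60GB
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      network: lxdbr1
      type: nic
    root:
      path: /
      pool: default
      type: disk
EOF
```

None of this requires internet access; preseeding is also handy because the same file can be reused on every offline server you set up.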
Let me first check the storage on this server with df -h. As I had thought, it's actually 10 GB. So I'm going to increase the storage: it's a container, and LXD created it with the default 10 GB root disk. I need to increase that because I want to host other containers inside it, so we need storage space. This offline container currently has only 10 GB; we need to increase that. Now, you might ask why we're seeing this extra network interface; notice that it's not the same as the other interfaces, the IP addresses are actually on different subnets. This is the address of the bridge inside that offline host: it belongs to the container environment inside the offline container, and the outer host has no way to reach it. If I try to ping that bridge's .1 address from outside, I cannot see it, because it exists only inside that container; whereas I can ping this other one, because it's linked to the host network. So that is a different network, the container network inside the offline container. To increase the storage, I'm going to use lxc config device override: I override the offline container's root device and give it, say, 50 or 60 GB. It's overridden; that means if we restart it, we will have more storage. You can see, now up to 60 GB. lxc restart offline; I know we're going to lose the connection here. Yes, let's just reconnect once it has started. There we go, we are able to connect back. And if we issue df -h, we should see more storage, and that's true; unlike before, when we only saw 10 GB. That means we can import our Postgres container comfortably, there's enough space now. And we put the file in home.
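The root-disk resize shown here is two commands; the container name and size are the demo's:

```shell
# The container was created with the profile's default 10 GB root disk;
# override the root device on this one instance and grow it to 60 GB
lxc config device override offline root size=60GB

# The new size takes effect after a restart
lxc restart offline
lxc exec offline -- df -h /
```

Note that growing a root disk is generally safe; shrinking one is not supported on most storage drivers.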
So, lxc list shows nothing now. And this server doesn't have internet, but I can import from the tar file that we have here: lxc import the Postgres export. Even though we don't have internet, this container is going to be imported, and we shall check what is inside it, everything that is in it. This is for demonstration, but whenever you're faced with this problem in real life, you need to prepare these exported containers and have them with you, on your laptop or a flash disk or whatever; they are just files. You need to have them in whatever environment you will be setting up DHIS2 in. So it will import, and while it is importing we can even push another container, the HMIS one, with lxc file push again, to the offline server. In a real-life scenario you would use rsync or whatever way you're connecting to that offline server. It's imported: if we say lxc list now, we should see Postgres, and for sure it's here, only that it's stopped. Let's start it. Also, while we were having this discussion, I started exporting a DHIS2 instance from a live system, and it has now exported successfully; listing the contents here, we should see it somewhere, and for sure, yeah, it's here. So that means you can take this file along, move around with it, go anywhere with it, and restore it into any system, even when that system is offline. And I guess Postgres has started now, and for sure it has. Notice that it comes with everything exactly as it was on the working system: lxc list shows even the address was .20, and here on the offline server it is the same, because we wanted the network to match between the source and here.
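The restore on the offline side is just import and start; the file name here assumes the export was written as postgres.tar.gz:

```shell
# On the offline server: recreate the container from the exported tarball
lxc import postgres.tar.gz

# The imported instance arrives stopped; start it and check its address,
# which should be the same static IP it had on the source server
lxc start postgres
lxc list postgres
```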
You need to prepare so that you don't have conflicting networks, because the exported container comes with its settings, and among those is the eth0 device configuration: if we say lxc config show postgres with the expanded flag, you notice that it has eth0 and the address is .20. When you export from the other end, it comes with this configuration, so you need to make sure the two sides match. So now we have Postgres, but does it have all our packages? Does it actually have Postgres installed in it? Let's exec into it and check. We are inside; this is inside the Postgres container, and we can check the ports that are listening: Postgres is listening on 5432, so it's actually installed and even listening for connections. We can try connecting to it and see that we have the databases that were there on the source we exported from. We can switch to the postgres user and run psql, and list the databases we have here. As you can see, we have DHIS, we have HMIS; these were the databases that were actually on that system, because we had two running DHIS2 instances, and the databases you see here correspond to those running DHIS2 instances. So this comes with everything. That means if I bring a DHIS2 instance onto this host, if I restore it with lxc import of the HMIS export and start it, it will just run, because the database is here, restored into that Postgres container, and everything is available. It will import that container the same way Postgres was imported. So this is where I got the packages from: a server that has internet, and I used our tools to set up that environment.
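The sanity checks inside the restored container could be run from the host like this; the container name is the demo's:

```shell
# Show the container's full (expanded) config, including the eth0
# address it carried over from the source server
lxc config show postgres --expanded

# Confirm Postgres survived the export/import: check the listening port
# and list the databases as the postgres user
lxc exec postgres -- ss -ltnp
lxc exec postgres -- sudo -u postgres psql -l
```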
The tools are on that server. To deploy, I just ran something like sudo deploy.sh, and Ansible was able to get the whole environment set up. Once I had that, what I've done now is simply export those containers into files and import them into this offline server. Yeah, so this is still importing. Normally the Ansible tools run there; that is how I set up the containers we have here, I didn't do it manually. While this is still importing: I also thought about automating this process in future, because as you can see, it's still manual, I'm doing the export and import by hand. So in future, I'm thinking of maybe incorporating this into our tools. The import is completed; let me just list. It should be stopped. Yeah. So we can start HMIS; it will start in a few. So this is how I set up these tools, it's using Ansible, and we don't have to wait for this because the tools are already set up. I don't think we still need that other server; it was just for demonstration, so I can close it. So right now this host has everything, and our container is started; the Tomcat instance container, HMIS, should be started now. We can check whether Tomcat is running... and it's not running. Let's try starting it, because it's supposed to be; from here I can ping the Postgres address, so we can connect to the database. Maybe it's still starting. Yes, so this will start, and it will have everything that the running instance had on the other side. It is still starting; once it has started, we will see that it's listening on port 8080, and we shall even try accessing it. Any questions up to that point? Comments? Maybe I wasn't watching carefully enough...
But did you set the IP address when you imported the image, or is it picking up the old IP address, or is it just picking a new one with DHCP? How does that work? The reason I ask is because on the Postgres container there will be firewall rules and pg_hba.conf entries. Yeah, so what really happened here is that when I exported the container from the source, it was exported together with its configuration, that is, the eth0 IP address configuration, and when it is restored here, it uses the configuration that was there before. Even right now, inside this container, if we list, we will have those firewall rules; let's get into Postgres: we have the firewall rules exactly the way they were on the other end. So it takes everything as it is. That is why I made sure to preserve the network configuration on this destination offline server, to match whatever I had on the source. That's something for people to watch out for. I was just looking at what Bob was asking about lxc publish. I think they pretty much do the same thing, and the advantage of publish over export seems to be maintaining the state and a snapshot of the entire container; that means if you move a container using publish, it will continue running. Well, if you export a running container and restore it on the destination server, I think it will also continue running without you necessarily making any configuration changes: all the configuration, the snapshots, and everything in the container will be carried along in that exported container. Exactly. Yeah, but I think, Stephen, as I thought about that some more, it's a slightly different use case. lxc publish is really useful if you're within a data center, to publish images from one machine to another. But what Tito's actually addressing is bringing images from the outside to a machine that doesn't have internet access.
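For contrast, the publish flow being discussed would look roughly like this; the container and alias names are hypothetical:

```shell
# lxc publish turns a (stopped) container into a reusable local image;
# add --force to publish from a running container
lxc stop tomcat-base
lxc publish tomcat-base --alias tomcat-base-image

# New instances can then be spawned locally from that image
lxc launch tomcat-base-image hmis2
```

As noted in the discussion, this only helps within an environment that can already reach the image; it doesn't solve getting the bits onto an air-gapped machine in the first place.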
And so publishing probably wouldn't really help, because then you still wouldn't have internet access, and you wouldn't be able to fetch the published images. So yeah, I think publish is great, and there are lots of uses for publishing images locally, but in this case I think what's been done is the only way to do it. The only thing I would say, Tito, is that when you export the image, you've got to be very careful you don't export snapshots as well, otherwise it can be huge. There's an option on lxc export, I think, which tells it not to export the snapshots. Instance only. Instance only, yeah. Because if you had a situation where you were using snapshots or taking backups, then you would be exporting all the snapshots as well. Yes. I didn't put that switch on in my environment because I didn't have snapshots anyway. However, in situations where you have, say, four, five, six snapshots, then to make sure your exported file is slim, you will have to say you want the instance only; and you can also use optimized storage, compression, and the other flags you can set during the export. Back to the point we talked about a few minutes ago: you can also create an image out of a running container, export that image, and restore it on the other end, so that you build a container, say the DHIS2 container, out of that image, as Stephen said. But at the end of the day you need to move a file from one place to the other anyway, so if you prefer, you can just export the container, without worrying about creating an image out of it, and simply restore the container. That is a bit simpler than exporting an image from wherever you have it, importing it as an image on the other end, and then creating a container out of that image.
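The export flags mentioned here combine like this; postgres.tar.gz is an example output name:

```shell
# Export only the instance itself, skipping all snapshots, and use the
# storage driver's optimized export format to keep the file small
lxc export postgres postgres.tar.gz --instance-only --optimized-storage

# A different compression algorithm can be chosen as well, e.g.:
#   lxc export postgres postgres.tar.xz --instance-only --compression xz
```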
That would be more work. Yes. We should be having something now here. Just a thought; I know we're running out of time a bit now as well. But for the Tomcat container in particular, one of the things that might happen on a reasonably regular basis at this implementation site is people wanting to create new instances. I guess there are two ways they have to do that, assuming they've got no internet, right? They can either clone, that is, take an existing container and clone it, or else they need to keep an image of a basically empty Tomcat container and then keep creating instances from that image. Exactly. That is a good use case for turning a running base container into an image, where that base container has all the packages, so you can just spawn new containers based on that image. That makes sense. This is taking long to start, though. So I think what you need to do, Tito, summing up, is maybe create a short document with a shopping list of the files you need to bring with you if you're traveling somewhere where you know you won't have access to the internet. You might need your Ubuntu 22.04 ISO image, for example; you will maybe need your LXD snap package; and then you'll need some LXD images or container exports. With that shopping list in mind, or in hand, it should be possible for you to get the environment up and running without internet. It's a completely different question whether that's a good idea or not. Exactly. You know, running Linux systems without internet, and therefore without the ability to do updates and upgrades, is probably not advised, but sometimes the world gives us these challenges. Yeah, sometimes we're faced with these problems and we don't have options.
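That shopping list could even be checked with a tiny script before leaving connectivity behind; every file name below is just an example to adapt:

```shell
# Verify the offline-install kit is complete before traveling.
# File names are examples; adjust them to your own downloads/exports.
missing=""
for f in ubuntu-22.04-live-server-amd64.iso \
         lxd_12345.snap lxd_12345.assert \
         postgres.tar.gz proxy.tar.gz munin.tar.gz hmis.tar.gz; do
  [ -f "$f" ] || missing="$missing $f"
done
echo "missing:$missing"
```

If the script prints any names after "missing:", the kit is not ready.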
Yeah, so I have not even talked about how we could manage apt packages offline, but you can do that also. You just need to prepare yourself, prepare an ISO image that has all the packages that are required for your use case. You know, that means, yes. Another thing that's useful to think about, particularly if I think of a data center in an environment where maybe the internet is expensive (it's not that you don't have it, but it's expensive, bandwidth is hard to come by), is that you want to minimize at least the use of the internet. Then it can make sense to have a local apt repository. Bear in mind, you might be making 10 or 20 different containers, and each of those containers is going off looking to get apt packages. In an environment like that it might make sense to set up a kind of caching apt repository, where the repository only updates once and then all the local containers can make use of it. That's something I've done many, many years ago, but it's also something to consider if you're in a bandwidth-constrained environment. Even from a security perspective, it could mean that whoever's managing the firewall only allows outgoing connections from the apt repository, and doesn't allow any of the other servers to have outgoing connections. So it's something else to look at. Yeah. Exactly. Actually, the time is up, but we will maybe also explore in the future how we can manage apt packages, because right now we've just dealt with, you know, moving containers here and there. But even though this server also didn't have internet, we managed to set up these DHIS2 containers, and they're up and running. Yeah. So, unless we have questions, concerns, comments.
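One common way to do that caching, sketched here with apt-cacher-ng: run the cache on the one host that is allowed out through the firewall, and point every container at it as an apt proxy. The hostname `apt-cache.local` is just a placeholder for your cache host:

```shell
# On the one machine that may go out through the firewall:
sudo apt install apt-cacher-ng        # listens on port 3142 by default

# In each container, route all apt traffic through that cache:
echo 'Acquire::http::Proxy "http://apt-cache.local:3142";' \
  | sudo tee /etc/apt/apt.conf.d/01proxy

# After this, apt behaves as usual, but each package is fetched from
# the internet once by the cache and then reused by every container:
sudo apt update
sudo apt install tomcat9
```

This matches the firewall point above: only the cache host needs outgoing access, and the containers never talk to the internet directly.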
I think, yeah, I think this is a very good use case, especially for those deployment scenarios where, like Bob has indicated, the connections are either restricted or, for one reason or another, not there, and yet maybe you want to start doing some setup and customization while you try to figure out the connection. I think this is a very good use case. Maybe, as you said, automating it and putting it into the other tool would be a good option, but also packaging it in some way so that someone can easily just get it, like the base containers, so that I can just download one, move with it, go and start it up on a server, and then continue from there. Yeah, Bob and I were even having a discussion where we realized that if you are really going to manage your LXD images that way, then why not just use Docker anyway, because that is what it's meant for. Yeah, but you see, ease of use is another thing. Docker is good in one aspect, but not everybody is as conversant with it. I think using it might be more complex than LXD, because LXD is kind of direct. And of course it's still a good option, but we could look at maybe making base containers that run on the various supported versions of Ubuntu, with all of that set up, and then publish them for download, maybe on the DHIS2 GitHub, so that when someone wants 22.04, they just go and get the offline bundle for 22.04, put it on the server, start it up, and everything is fired up. It's not a bad idea, Steven, but maybe there's an easier, more manageable way to do that, because you see, if we start publishing the images, then we have to make sure that they are up to date with all the security packages and things. But what would be quite easy to make is just a short little script to do that.
You know, to get the latest Ubuntu, run Tito's Ansible stuff on it, and then collect all the images from it. Oh, in fact, yeah. Yeah, I think actually publishing images becomes a little bit of a nightmare. I think it's the same nightmare that the team currently has with publishing Docker images: once you've got the thing published, then you're kind of responsible for the security management of those images. So in a way it's safer to provide the scripts to create your own image, rather than just putting the images up there for download. We'll think about it. Okay, yeah. So right now we have scripts that create containers for you, which means we need to add something, some part that exports those containers as images and packages them into one file, something like that. Yeah, basically a routine that helps you take this installation, pack it up in a bag, and take it somewhere. Yeah. Good. Thanks, Tito. Maybe a suggestion to inform the next one: I'm just thinking, I mean, me, I followed what you were doing, and I can understand because I'm quite familiar with all the commands and everything. But people who are maybe less familiar, and I'm sure it's the same for Steven, might find this style of going through the demo a bit intimidating. I think maybe one or two slides, and a picture is always good, you know, a picture to say: this is what we're going to try to do, there's the server, there are the containers, we're going to make images out of them. I'm just trying to think what it's like for people who may be less familiar than me. But other than that, excellent presentation, and good research that you've done around testing all of this stuff. Okay. I have to go anyway. Thanks, guys. Thank you, everyone. And thank you for joining. Thank you. Thank you both, and thank you everyone. Thank you.
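That short script could look something like this. A minimal sketch only: the playbook file name and container name are hypothetical placeholders, since the actual Ansible scripts are not shown here:

```shell
#!/bin/sh
# Sketch: build the DHIS2 container with the existing Ansible scripts,
# then pack it into a single file to carry to the offline site.
# "dhis2-container.yml" and "dhis2" are placeholder names.
set -e

ansible-playbook dhis2-container.yml     # builds and configures the container

lxc stop dhis2                           # export from a consistent, stopped state
lxc export dhis2 dhis2-offline-bundle.tar.gz --instance-only --compression gzip
lxc start dhis2

echo "Bundle ready: dhis2-offline-bundle.tar.gz"
```

This keeps the security responsibility where the discussion puts it: the script is published, the user builds a fresh, fully patched image themselves at the time they run it.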