screen. Okay, so this is a new blank instance that Martin has just created. We're going to do a quick repeat of what Bob had done, and then continue briefly with what has to be done after that. First, as Bob hinted, we have documentation on GitHub that defines exactly what has to be done during the birthing process. Let me share the link with you quickly — I'm not sure whether the dhis2-tools-ng link has been shared already. We normally recommend that people first do the birthing process, which is documented there, just to make the server more secure. So I'm going to follow what we've already documented, on a blank instance. Bob has already done this; I'm just repeating it on my new instance — hopefully the font size is visible enough. I want to reach the point Bob had already reached, then continue from there with a bit of explanation; Bob already explained most of it. First we create a user and add that user to the sudo group. Right now I'm logged in as root, so I'll create my own user with a password, complete the basic prompts, and confirm. Now we have created a user — but this is just a basic user that still needs to be added to the sudo group, and this command specifically adds the user to that group. I've created a username called sokaya, and that's it. To test, you normally switch to that user and try something like `sudo ls /var/lib`; if that works, it at least shows the user really falls under the sudo group. The other bit, which is very important, is to send over the SSH key.
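The user-creation step just described can be sketched like this. The username `sokaya` is the one from the demo; these commands need root, so treat this as a sketch to adapt, not a script to run blindly.

```shell
# Run as root on the fresh server.
# Create a regular user; adduser interactively sets the password and details.
adduser sokaya

# Add the user to the sudo group so it can run privileged commands.
usermod -aG sudo sokaya

# Verify: switch to the user and try a privileged command.
su - sokaya
sudo ls /var/lib    # prompts for sokaya's password, then lists the directory
```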
I already have my key pair. This step is for those who don't yet have an SSH key on the laptop or computer they're using to connect to the server; if you already have one, you just skip it and go to the next step. But before you can successfully send your public key to the server, it's important that you have a `.ssh` directory under the user account you've created. So right now I'm going to make a `.ssh` folder in my home directory — this folder is what will receive our authorized keys. I also have, in the home directory on my laptop, the public key itself.

[Audience] It looks like we can only see the last two lines of your terminal. When you type, we don't see anything.

I've zoomed in a bit. — Half of the last line is still cut off. A suggestion: make the terminal window a little smaller.

So, on the command I've just copied to send the key: the username has to be the new username I created, so I change it to sokaya, and then the IP address — let me quickly get that from Martin and replace it. Okay. So I can send my key. Now I should be able to log in using my public key — the key plus its passphrase, or something like that. Let me exit first, just to confirm we have everything.

[Audience] A little request to zoom in a bit, or increase the font in your terminal window. — Zooming in takes us back to the earlier issue — can you still see? The problem with full screen is that the Zoom controls cover the last lines, so you can zoom in, but make the terminal window a bit smaller.
So that it doesn't reach the bottom of the screen — that should be all right. That's the only problem we're having, and the reason we cannot see your last lines. You can zoom in as long as the terminal doesn't take up the bottom of the screen.

Let me stop around here. You can see we have my authorized_keys — this is my public key. Let me go to my home directory; I have my key there on my local laptop. When I send from here — actually, this command has to be run from the local computer. I think I ran it the wrong way around, from the server. So I put in the password.

[Audience] There's a message saying you're on your localhost. Is that right? — Yeah. This is my remote server, and this black window is what I have on my own computer. The command — I had run it from the server itself, which is why the file was empty. Now I've run it from my laptop and transferred the key, so the content of that file should now be there. As you can see, it has landed on the remote server.

Going back: I can try to log in with my user account at this IP address; the port is still the same. Okay. As you can see, the login prompt has changed from the account password to the key, and now you enter the passphrase of the key. So we test with the key's passphrase — if you're able to log in like that, it's guaranteed that the key works. The next step is to disable password login, but also to secure that file on the remote server by making sure its permissions are restricted to the owner, 0600. Then the other bit is to turn off password authentication and also disable root login. Right now two users can log into the server: root and the new account we just created. We want to disable root login, and for that we edit the configuration file found under /etc/ssh.
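The key workflow just described, summarized. The user/host in the `ssh-copy-id` line are placeholders, not values from the session; the permissions part mirrors the 0600 step.

```shell
# On your LOCAL machine (skip if you already have a key pair):
#   ssh-keygen -t ed25519
# then copy the public key to the server (placeholder user/host):
#   ssh-copy-id sokaya@203.0.113.10

# Manual alternative, run on the SERVER as the new user: prepare the
# .ssh directory and lock down permissions as described in the session.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"      # paste the public key into this file
chmod 600 "$HOME/.ssh/authorized_keys"  # owner-only, the 0600 from the session
ls -l "$HOME/.ssh/authorized_keys"      # should show -rw-------
```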
So `nano` opens the file. The first thing you see there is root login: you disable it by just changing the yes value to a no. Once you turn it to no, that already disables root login — you could first make sure you can still get in as your own user. Then password login: by default PasswordAuthentication is a yes, so you uncomment that line and turn it to a no. That means only logins via the public key will be accepted. Then you definitely restart the SSH service, just to make sure your new configuration is loaded and takes effect. Right now I should not be able to log in via password, but only via the key.

We are still on port 22, and the next thing is of course to secure the port. This is the default port that everybody knows, and it's important to change it so people don't guess it on the first try — SSH is where people normally try to enter, and its default is port 22. It's in the same file: on the first ten or eleven lines you'll see the Port line; you uncomment it and change it to any other port. We normally use 822, but you can change it to any port you feel is more secure. There has been some guidance that the port should be below 1024, given that ports above that range have usage which requires a bit less privilege and so offers less protection. So we put in 822 as our new port and restart the service again. Right now, if I log out, I'm not able to access this server on the default port 22, nor with a password. Just to check it together: I've logged out, and logging in as root gives connection refused on port 22.
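The three sshd_config edits just described, shown here on a scratch copy so the effect is visible; the real file is /etc/ssh/sshd_config, edited with sudo and followed by a service restart.

```shell
# The three hardening edits from the session, applied to a scratch copy.
f=/tmp/sshd_config.demo
printf '#Port 22\n#PermitRootLogin prohibit-password\n#PasswordAuthentication yes\n' > "$f"

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$f"              # no root logins
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' "$f" # keys only
sed -i 's/^#\?Port .*/Port 822/' "$f"                                   # custom port

cat "$f"
# On the real server, run the same seds (with sudo) against
# /etc/ssh/sshd_config, then:  sudo systemctl restart ssh
# Keep your current session open until you have verified you can log in!
```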
And if I use the new port and try to log in, you can see it requires the public key and root is not accepted. So if I change to my user, we're able to log in, put in the key passphrase, and we're inside on port 822. I think this is where Bob had left off — oh, he had finished with the firewall and the like. Of course, the next step is to enable the firewall and make sure it allows that port. It's important — sorry — it's important that we don't log out before allowing the port. You can see that our firewall is now active; if you log out right now without configuring it to accept port 822, it becomes a bit tricky. The command that allows port 822 is that one, and this is just a description of what the command does with the port. So, quickly: now I should have allowed port 822. Then, to reload — unfortunately we don't have a single command here that will restart or reload the service, so you have to first disable it and enable it again. That means our server is now reasonably stable and secure, and that's what we coined as the birthing process.

What's next is to bring the DHIS2 tools onto this server, which requires git. We haven't yet packaged this in a single command that you can just type to pull it onto the server, so we normally require first installing git on your blank server, or at least updating it. So right now we have git on the server, and the next step takes us back to the documentation for the command that will download the scripts. Let me go back quickly. Okay — now to install the tools. This is the installation step.
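The firewall step can be sketched as below. These commands need root and an active ufw, so they are for the server, not for copy-paste testing; the disable/enable pair is the reload workaround used in the session.

```shell
# Allow the new SSH port BEFORE logging out, otherwise you can lock
# yourself out once the firewall is active.
sudo ufw allow 822/tcp

# As done in the session: disable and re-enable to make sure the new
# rule is loaded and takes effect.
sudo ufw disable
sudo ufw enable

sudo ufw status    # confirm 822/tcp is in the allowed list
```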
Our server is now stable and secured from unauthorized access, so the next step is to pull the code. We're using a plain git clone — we're just getting the entire git project for now. So I'm going to copy this and, in my home directory, clone the tools into my folder.

[Audience] Yes, please — I don't know if we can talk about the LXC concept, just to show how we can create a simple container, maybe deploy an image inside. Like Bob was doing — I don't know if this was what he was planning to do. Just show how to set up LXC, create a simple, empty container, and how to go in and out of it, before running the full script. It's a suggestion.

Yeah. This is a continuation of that process — the LXC concept of containers and the like was already explained briefly by Bob. This tool automatically creates for you the base containers that are required for the deployment and installation of this entire DHIS2 environment. But let me first run a few things before we actually do that. Right now there are definitely no LXC containers here — the lxc command shouldn't even work, because LXD is not installed. And, as you can see, nothing is there.

Within the scripts that we have just pulled into our folder — dhis2-tools-ng — we'll see, under setup, a couple of scripts that have been automated to take care of the whole setup of the environment. As you can see, we have shell scripts that are responsible for creating containers. The details of them, I think Bob will maybe come and highlight a bit; it can look complex for non-programmers or non-shell-scripters.
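Picking up the audience request: a minimal LXC warm-up, separate from the tools' scripts. The container name `testbox` is just an example; these commands assume LXD is already installed and initialized, which on this server only happens later, when the setup script runs.

```shell
# Launch a throwaway container from the official Ubuntu 20.04 image.
lxc launch ubuntu:20.04 testbox

# See the container and the IP it was given.
lxc list

# Open a shell inside it; `exit` (or Ctrl-D) drops you back to the host.
lxc exec testbox -- bash

# Stop and remove it when you are done.
lxc stop testbox
lxc delete testbox
```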
As you can see, you have to configure the storage image to be used by the LXC containers, and this defines by default which operating system and version we'll be using — we've indicated Ubuntu 20.04. It will basically create for us three base containers: one for the proxy, one for the Postgres database, and another one for monitoring. We also have the actual DHIS2 tools commands in the scripts. If some of you have been using the first versions of the DHIS2 tools, you know you just type dhis2-shutdown, dhis2-startup, deploy a WAR, and so on — all those commands actually live within these installed scripts. And some of the scripts are called from other scripts: the LXD setup, for example, is the one that calls the script that creates the containers for you, and the like. So out of all this, there are just a few basic commands we have to use to be able to create our containers.

Looking at the documentation, of course, there are a few adjustments that have to be made to your setup. This is where the domain requirement comes in: a fully qualified domain name is important, because we need something that is fully accessible for this setup. So let me first make a copy — using the sample inside the config directory here. You should be seeing the default configurations that we ship. We just have to copy containers.json.sample to containers.json, and then we have our containers.json. Now, as you can see, we have three default containers defined.
And our subsequent command will create for us the proxy server, the Postgres server, and the monitor server — containers, I mean — that will be running the various software embedded within them. The apache_proxy type defines a proxy that uses not Nginx or any other proxy server but Apache 2 as the proxy. The postgres type defines the container that will hold your database server, based on Postgres. The basic parameters each container needs to be created properly are the name of the container, an IP address, and the type of the container. The types we currently support: we have tomcat, we have postgres, we have the Apache proxy, and we also have an Nginx proxy, though that has not been fully tested yet. Then we have the monitoring container. Those are the types we have.

So there are three things we have to change here, according to our server details. First the FQDN, the fully qualified domain name. Let me call this one demo —

[Audience] Everyone, hear me — I've already set up a domain for you. If you go back to the shared Slack, it's there. — On the Slack? — On the Slack, yes. If you scroll a little down, there's a domain there you can use. — I can use this? — Yes, that should be okay. Yes, please. — Okay, thanks. I'm going to use this as our domain name.

So I come and change it here. Sorry — it should just be the domain name, not the full link, so ensure you don't have the URL parameters, protocols and the like. And then the other thing we change is the admin email address, of course, so that you will receive the SSL certificate notifications whenever they are about to expire.
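The shape of containers.json as described so far can be sketched like this. The key names, subnet, and the monitor type string are illustrative guesses, not the real schema — always start from the sample file that ships with the tools; only the last octets (proxy 2, postgres from 20, monitor outside the Tomcat 3–19 range) follow the session.

```shell
# Illustrative sketch only -- on the real server you would do
#   cp containers.json.sample containers.json
# and edit the fqdn, email, and time zone in that file.
cat > /tmp/containers.json <<'EOF'
{
  "fqdn": "demo.example.org",
  "email": "admin@example.org",
  "TZ": "Africa/Kampala",
  "containers": [
    { "name": "proxy",    "ip": "172.19.2.2",  "type": "apache_proxy" },
    { "name": "postgres", "ip": "172.19.2.20", "type": "postgres" },
    { "name": "monitor",  "ip": "172.19.2.30", "type": "monitoring" }
  ]
}
EOF
cat /tmp/containers.json
```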
That's one use of it; you'll also get admin notifications about the containers and the entire environment. So I can put mine — I have that email. Then the time zone. Originally we had Africa/Dar_es_Salaam, and this TZ key was confusing the entire team: we thought it was short not for time zone but for Tanzania, so someone from Uganda would put UG, someone from Kenya would put KE, and so on. But that was not the case — TZ is the time zone, and you have to define which city and country you're in. In my case I would put Africa/Kampala. It should be an existing time zone.

And basically, we recommend stopping there. The rest of the settings further down we don't recommend you change, because they basically define disk space and the like. Also, the IP addresses have been given ranges: Postgres containers will run from 20 up to maybe around 29, if you create that many Postgres containers. The proxy has 2 — the proxy is basically always only one. And we've intentionally left 3 up to 19 to support the IP addresses of the DHIS2 Tomcat containers that you will be creating. You remember Bob said we will create as many Tomcat containers as needed to hold the DHIS2 instances you have — for training, for development, and so on — and those will fit within that IP address range. But you can always define your own as well, one that may not be within this range.

So if I close that and quickly look at the next step — of course, we're done with the top part; you can read there that we recommend you not to go down below those lines. Our next step is basically to create the LXD containers, just by running — let me go back one step.
In the setup directory, of course, we run the LXD setup script. We expect this to create for us the first three containers we have defined within containers.json. So let's wait a moment. It's now creating a proxy of type apache_proxy. It will download for us the Ubuntu 20.04 image, set it up, and also install the necessary software required to run an Apache server within it, so that you don't need to go and start installing the various required software manually. It's still reading the file, so let's see. Okay, it's done. This should take roughly a few minutes per container, just because it's pulling the image. And a good thing with LXD: once it has downloaded an image, the next run will not download it again — it reuses the local copy, no matter how many containers you create from it. I think we're done creating the proxy — oh, it's configuring Apache 2, you can see. Okay, so you can see the next container being created. Yes — someone has a hand? Yes, please.

[Audience] While this is installing, I just wanted to say I appreciate your explanation; I'm following very well. What I wanted to suggest is that it's better if we actually open up the scripts and explain what's happening — each thing that's written in them — rather than just running them.

Okay, let's first run it and then I'll explain. Let's just wait for it to finish. I'm not sure everybody is comfortable looking at the scripts, and I don't know whether the plan was for Bob to go that far, but it's okay — I can explain quickly what each script is doing. Let's first wait for it to complete. Somehow this process is a bit slow.
I don't know whether it's the server speed, but it should normally take a shorter time to create these three containers. I see it's almost done with the Postgres container — as you can see, it has created Postgres. It's done. Next it's creating the third and last container, the monitor, which will be used to monitor the entire environment — all the other containers that have been created, plus the host machine, once fully configured. Yes, Jamie, your hand is up.

[Jamie] Yes — just quickly to mention: as you saw, when Stephen was doing the setup he had to include that domain name we're using. That's the reason you were asked to have a server and also have control over a DNS zone. So on top of the server, you need this. If you don't have one, there are free services — I listed one of them in the chat. At one point I checked and the setup was saying it was acquiring SSL certificates; if you do not have that domain name, I think that step will break, because it requires a fully qualified domain name to be linked to the HTTPS certificate. Just wanted to mention that, because if you run this without one it will probably break — so when you get the server, make sure you also have access to a DNS zone, or use a free service like the one I listed. Sorry, Stephen — thanks.

Yeah, thank you. And also, at some point you will notice that we deliberately left the SSL certificate creation out of this process, so that you do it yourself and are cautious about the rate limits with Let's Encrypt. Let's Encrypt normally limits the number of attempts you can make to generate a certificate within a specific amount of time.
So if they notice that you keep regenerating, most likely they will block you, which means you'd have to get another domain name — and that process might be long. So, as you can see, we are done with the setup of the containers.

[Audience] Hello, Stephen? — Yes, please. — I just wanted to add to what the previous presenter said about needing the domain: if you have purchased the server from clouds like Linode or Contabo, they normally provide free hostnames. I've pasted the command in Slack for getting those, so anyone can use it in the meantime while purchasing a domain. Thank you.

Yeah, thank you — that is very key. Most of the virtual providers, actually all the cloud providers, will give you an alternative subdomain within their own domains that you can also use — though of course it will have a funny name that may not be what you want.

So, as you can see, we have our three containers now created, running on the addresses we defined within containers.json. Just quickly, before we proceed — as someone requested — remember we ran the LXD setup script. If I open it, it's basically small, as you can see, but it pulls in the other pieces. The LXD setup script basically does the basic system update and upgrade of all the packages on the host machine, and then installs LXD for us. LXD and LXC — we basically use the words interchangeably, but LXD is the base that LXC containers actually run on. So, as you can see, it installs that and initializes the LXD environment on the host machine.
Once that is done, it of course sets up a basic firewall and the disk space that has been defined, and the bridge that will link the host machine with the containers — so that you are able to access your containers from the host machine, and the containers can also access the host machine. Then it runs create_containers.sh: it calls that next script, the one we just looked at, which has a number of pieces. So if I look quickly at create_containers.sh — remember, in containers.json we already defined the containers we need to create, but we also have to define some basic variables. The first part is the variables and what the basic commands do: for example, the routine that works out the IP address for each of the containers that was defined, and then creates and attaches it to the bridge, so that you can access it using the IP address that was configured. By default, LXC can assign an IP address for you if you have not specified one — it's basically the concept of DHCP, an automatic IP address assigned to a device you bring on board, versus a static IP address. What we've done for these three default containers is define static IP addresses, and also make sure they're accessible between the host and the containers themselves; of course, the firewall makes sure that traffic is accepted there. Now, after that, you'll see we start to install a few packages. Here we are installing unzip, just to support unzipping certain files that could have been zipped, and then two more tools — one audit-related package, and jq.
jq is a JSON query tool that reads a JSON file and processes what is inside — okay, you can see it used further down, assigning a value to a variable. The audit one is probably used for auditing; that one I don't understand much, but jq is the tool used for reading the JSON file. Then the other bit is the parse_config script. This reads the configuration file that we just created and edited, and keeps the content in variables. Once that is done, you're able to actually read them: as you can see, for each entry we get the key, the value, the environment name. This is reading from another script — let me just close this and we'll look at parse_config. Okay. As you can see, we load the config file on the first line, and then assign the values to environment variables that we'll be using: we know the name of each container, and every entry has been assigned a variable so that we can easily use it later. Basically, this script sets up global variables accessible across the entire directory, and also allows you to use them in any subsequent script. As you can see, they've been assigned to key variables — the proxy one reads from the config which container is the proxy, monitoring, and all those bits — all coming from the containers.json file. Now, back in create_containers: it reads these values, and for each configuration entry coming from the file it assigns it, key by key, to a variable, and applies it to the default LXC profile. That is the command we use to actually start creating the containers.
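A tiny illustration of the jq-based extraction the parse_config step does. The file and field names here are made up to mirror containers.json, not copied from the real script.

```shell
# Build a tiny config shaped like containers.json, then pull values out
# with jq into shell variables, the way parse_config does.
cat > /tmp/demo.json <<'EOF'
{ "fqdn": "demo.example.org",
  "containers": [ { "name": "proxy", "ip": "172.19.2.2", "type": "apache_proxy" } ] }
EOF

FQDN=$(jq -r '.fqdn' /tmp/demo.json)
NAME=$(jq -r '.containers[0].name' /tmp/demo.json)
echo "$FQDN $NAME"    # prints: demo.example.org proxy
```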
And then the next step does the actual assignment of all these variables for each of the containers. Remember, we've defined our guest OS as 20.04. The command I've highlighted is basically the one that actually does it — if you are not using this script, you can create your own LXD container with that same command, `lxc init ubuntu:...`. You could also do CentOS or other distributions, as long as there are images for LXC. So right now we are saying: create for me an LXC container and initialize it with an Ubuntu operating system whose version is 20.04 — of course, that variable is defined above as 20.04 — and the name, remember, is coming from containers.json; we're creating it dynamically. That name will be replaced in turn by the three names of the containers defined within containers.json. So if I'm creating the proxy, it effectively runs `lxc init ubuntu:20.04 proxy`, and then it attaches the configuration that was also defined — the bridge name, the name of the container, and so on. These are the networking commands normally required to attach an IP address on the bridge, so that the container you're creating is accessible from the host machine.

Once that is done, the next section basically runs specific commands for each type of container. You see the first thing is checking if the container you're creating is a proxy — whether the type ends with `_proxy`. The format is that if we were creating an Nginx proxy, the wildcard would match nginx_proxy from containers.json. So here we're saying: if the type matches `*_proxy`, and the firewall and all those bits are working properly, then run the commands within this block.
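Roughly what the script does per entry, done by hand; the bridge name `lxdbr0` and the IP are illustrative stand-ins for the values the script takes from the config, and these commands need a working LXD host.

```shell
# Initialise a container from the Ubuntu 20.04 image, as the script does.
lxc init ubuntu:20.04 proxy

# Attach a NIC on the bridge and pin the static IP from the config.
lxc config device add proxy eth0 nic nictype=bridged parent=lxdbr0
lxc config device set proxy eth0 ipv4.address 172.19.2.2

# Start the container, then check it can resolve the outside world --
# the same nslookup test the script uses instead of ping.
lxc start proxy
lxc exec proxy -- nslookup example.org
```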
Now, this part is specific to the proxy container, and as you can see it reads the rules and all those things defined in the firewall configuration for the proxy. Definitely, we expect the proxy to be our entry point to all the other containers, so its configuration is different: it reads out what has been defined in the firewall file for the proxy and puts it into the firewall's before.rules, within the proxy container. As for before.rules — the good networking experts in here can best explain it, but I believe these are rules that are checked before the normal rules start to apply. I am not very sure, but that is my understanding. And then, once that is done, you see it puts in the networking range and all those bits, and finishes. But the most important bit is that it then starts the LXC container that has been created and does a bit of nslookup of the domain name you defined, just to test that the container can reach out. We had actually been using ping, but ping was becoming a bit of a problem: from the host machine we would ping the container we'd created, but sometimes you'd find it not responding in time. That's why we do an nslookup of the URL instead — if it resolves, the networking has been done very well, and it means that from within the created container we are able to reach the outside world.

Then there are certain other things: this section, as you can see, is for monitoring. It basically just defines separate configurations, and then runs any post-setup scripts that are there, if the container has a post-setup entry.
For example, you could have another process meant to clean up the entire old configuration — then it would definitely use that. You can see there's almost nothing in it; it just creates and stops there. Okay. Then, of course, for the monitoring type there are some other commands to install some basics in it — the munin node itself — which it will automatically install and configure with the IP address defined within the configured range, and allow it to start running and monitoring each of the containers; as you can see, it starts, and the like. So this is our script for basically installing the whole set of containers. I've not seen the section for Postgres here — has anyone seen the Postgres one? It should also be there, defining things that will run on the Postgres instance; I don't seem to see it. But basically, this script looks at the custom requirements for each of the containers and then does the installation and command execution separately for each. So that is the create_containers script, which is initiated, or called, from the LXD setup.

Now, once that is done — as you can see, we have that — we also have the next script. Currently, as you can see, there are no dhis2 commands on the machine yet; this next script is what lets you start actually running the DHIS2 commands, like what we used to do with the old DHIS2 tools commands — dhis2-shutdown, deploying a WAR, all those bits of things. So, referring back to the documentation: after this finishes, the next step is basically to run the install script. The install script is what will install for us the various commands we'll be using from the DHIS2 tools — these are custom commands. I run the install script and put in my password. Okay — as you can see, it has installed the scripts and completed successfully.
Now I should even be able to access the commands, as you can see. These are the commands we have: dhis2 backup, dhis2 create instance, dhis2 database activities; we have deleting an instance, deploying, logview to see your logs, restoring a database, and also the one that deals with the monitoring, Munin and the like. So, looking at the install script itself, you can see what it basically does: it copies our scripts into the user bin directory, and it sets the environment values so that the commands are accessible from the terminal. It first makes a directory called dhis2 under /usr/local, where it puts all the files, the commands that do the actual execution of the tasks we want. Basically, this script is what supports the whole process of installing the tools. Now we have our scripts accessible from the terminal, and we are almost there. Okay. The next step is to start the actual process. Like I said, you should now be able to access these commands and all those bits, but we have not yet created a DHIS2 instance. There are other scripts in there, like the delete script. If we just look at what it does: it will allow you to delete the containers that you have. Looking at it, this script deletes all the containers that are there and then allows you to start afresh. Of course, if you don't want to use the script and you only want to delete maybe one container, that is the command you can use to delete a single container; the $c is the name of the container. So it says lxc delete: first, if the container is running, it will stop it, and then delete it. We stop it first just to avoid any failure. Okay. Yeah.
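The pattern the install script follows, copying each tool into a bin directory on the PATH and marking it executable, can be demonstrated in miniature. Everything here is a stand-in: the directories are temporary and the `dhis2-hello` tool is invented for illustration, not part of the real tools:

```shell
set -e
BIN_DIR="$(mktemp -d)"            # stand-in for the real bin directory
SRC_DIR="$(mktemp -d)"            # stand-in for the cloned tools repo

# A pretend tool script, standing in for something like a backup or logview command
cat > "$SRC_DIR/dhis2-hello" <<'EOF'
#!/bin/sh
echo "dhis2 tools installed"
EOF

# install(1) copies the file and sets the mode in one step
install -m 755 "$SRC_DIR/dhis2-hello" "$BIN_DIR/"

# With the bin directory on PATH, the command is available from the terminal
PATH="$BIN_DIR:$PATH"
dhis2-hello
```

This is why, after the real install script finishes, the dhis2 commands can be run from anywhere in the shell: they are ordinary executables sitting on the PATH.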
So basically, to give an overview: now that we have the dhis2-tools commands available on the server, we can proceed to the installation of our DHIS2 Tomcat container. Remember, at the minimum we need four containers. Three are already in place; the one that will actually hold Tomcat, which will run the DHIS2 WAR file, is what we need to create now. The command we use to create a Tomcat container, which is basically the DHIS2 instance, is dhis2 create instance. What we need in this command are some basic parameters. It allows you to define the name of the container, that is, the name of the DHIS2 instance you are planning to create. Then you also give it an IP address, based on the ranges that you have, and the database container to use. Remember the architecture Bob was showing, where we can have multiple Postgres containers and also multiple Tomcat containers, multiple loads. You are free to create as many containers as possible, but when you create the actual DHIS2 instance, it is important that you point it to the right Postgres container, the one that will hold the data, where the database will be created. Sorry, someone was calling on my phone and I was breaking up a bit. Yeah. Create instance. So, you remember the parameters we have, right? If I just hit enter, it should give us a bit of what we need to include: the instance name, the IP address, the Postgres container. That gives us an idea of what we need to run when creating a new instance. The only required option is the instance name; the IP address is kind of optional, because of late it is assigned automatically; we now allow it to be optional.
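From the session, an invocation would look something like the sketch below. The exact command name, flag spelling, and argument order are assumptions reconstructed from the demo; check the command's own help output on your install before relying on them:

```shell
# Assumed invocation: create an instance called "demo" with a fixed IP,
# linked to the "postgres" database container. Flags may differ on your
# version of the tools -- consult the built-in help first.
sudo dhis2-create-instance demo 192.168.0.10 postgres
```

Running the command with no arguments, as shown in the demo, prints the required and optional parameters, which is the authoritative reference for your installed version.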
As you can see, the name of the container is the key thing you define. The database container is optional as well: that normally applies when you have only one Postgres container, but if you have more than one, it is important that you define which Postgres container to link up with the DHIS2 instance you are creating. So let's do it again: create instance. Our host name we will call demo. For the IP address, I could leave it out, but I want to start from 192.168.0.10, so I will set that. Then I point it at our database container, postgres, and hit enter. As you can see, it does a lot of the things that are requirements of a DHIS2 deployment. Looking at the output, it has gone into the Postgres container and created a database named demo, and it has altered the role: that means it has assigned ownership of the database to the demo user. There is also an extension we require for the latest DHIS2 versions, PostGIS, and it has already added it. It has also applied the rules, as you can see added there; that is the firewall-related command output. And now it is creating a Tomcat container called demo, so let's wait and see what it does. It has finished all the network testing, and now it is installing a Tomcat container for our DHIS2 instance called demo. Okay. It is seemingly working fine. Like Bob said, our Tomcat is Tomcat 9, as you can see there in the log, and our PostgreSQL, it says, has been upgraded. Basically this pulls the latest software, but of course it is important that we check it out. So it seems we are actually up and running. Looking at our containers to see what has happened: you can see our new instance is there, running.
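The database preparation the tool performed can be approximated with the statements below, run inside the Postgres container. These are assumed equivalents of what appeared in the output (database, role, and PostGIS extension); the tool's actual SQL may differ in detail:

```shell
# Rough equivalent of the database side of create-instance:
# a role and database named after the instance, plus the PostGIS extension.
sudo lxc exec postgres -- sudo -u postgres psql <<'SQL'
CREATE USER demo WITH PASSWORD 'change-me';   -- instance role (password illustrative)
CREATE DATABASE demo OWNER demo;              -- database named after the instance
\c demo
CREATE EXTENSION IF NOT EXISTS postgis;       -- required by recent DHIS2 versions
SQL
```

Keeping the role, database, and instance names aligned is what lets the tools wire the Tomcat container to the right database automatically.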
Okay. And remember the domain name that we created. If I go here and test before we go further: right now we have not yet done the deployment of DHIS2 itself, so, just to confirm, nothing is showing up yet and it is not secure. Okay. So our next step, since it is a new instance, is to deploy our WAR file. Now we have options. If you already have the WAR file downloaded in the environment you are working in, the host machine, you can pass -f to point to the file you are working with. Or you can pass -l to define the link where the file is located. Okay. And then, definitely, the name of the instance. Since I don't have the WAR file already on my computer, I will just go and pull the latest DHIS2 stable version for now. As you can see, I just copied the link. Okay, and then say: dhis2 deploy war, -l, then you define where it should pick the WAR file from, and then say demo. So what I am saying is: deploy this WAR file; go to the DHIS2 site, download the file, and deploy it onto the demo instance, the demo container, which is basically the DHIS2 instance we created. Once I hit enter, it will download the file and then start the automatic deployment of the WAR file: it copies it, creates a file called dhis.war, and starts to deploy it onto the demo container. So this is a repeatable process that you can always do. And you can always use logview if you want to see the log of what is happening: from the host machine, run dhis2 logview; -f is for continuous output, and of course this one requires sudo. Okay. Continuous is like tail -f, log tailing, which keeps on running as changes happen. So as you can see, it is deploying the WAR file for us, and so far everything seems okay.
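The deploy-and-watch workflow just described looks roughly like this. The command names and flags are as recalled from the session and should be treated as assumptions; the download URL is a placeholder for whatever link you copy from the DHIS2 downloads page:

```shell
# Placeholder: copy the real link for the stable WAR from the DHIS2 site
WAR_URL="https://releases.dhis2.org/..."

# Assumed command: fetch the WAR from the link and deploy it to "demo"
sudo dhis2-deploy-war -l "$WAR_URL" demo

# Assumed command: follow the deployment log continuously, like tail -f
sudo dhis2-logview -f demo
```

With `-f` instead of `-l` you would point at a WAR file already on the host machine rather than a download link.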
There is one exception, but it is a common Tomcat error, not a DHIS2 error; it is just a Tomcat thing we need to check with Bob about. So let's wait and see if it can bring up the whole application without any issue; it is running some processes for us. Okay, it seems to be running. Steven? Hi. Hi, I was saying we are about 10 minutes over time now; is it possible to wrap this up in about five minutes? Yeah, sure. Great, great. We are done with the deployment of DHIS2. It has run successfully and there seems to be no error. If I go to the URL that I created... sorry, let me just get the URL link. Let me just remove this. Okay. We have to do the post-installation steps to make sure that everything is done successfully. Remember, the whole setup has been configured for HTTPS, and of course that is not completed yet; as you can see it is not allowing access, but it should give us access from the local side if I install links, the command-line browser. So, going back to the documentation: there are a number of other commands that it is important we can always run. You can always stop a container or start a container, a Tomcat container, and then restart; those are all available commands. Now, what is left is one thing that we haven't done. Maybe we can come back after the break; are we coming back after the break? I believe we are continuing tomorrow, isn't that right with the schedule? Let me double-check that. Yeah, it looks like 11:45 GMT was the end time for today, and tomorrow it is also 9:00 GMT to 11:45 GMT. Okay, I think we need to follow those timings. Can we agree to finish the post-installation step, if possible? Yes, it is a few things, and we have to finish it so that we can see the page running, the system running.
Is that agreeable to everyone? Alice is here now. Alice, can you? Yes: given the circumstances, please take all the time you need for this session, that is absolutely fine; we are not in a hurry, so take your time. Okay, thank you. No problem. Thank you. So, ladies and gentlemen, the next step. Originally, we had done the whole process by automating even the SSL certificate generation. But we try to be cautious: most of the time, people keep re-running that script against a single domain name or subdomain, and then they get rate-limited or blocked by Let's Encrypt. So we separated this step, to make sure your instance or subdomain is not blocked while you are testing or generating the SSL certificate for it. So, one thing we will definitely do is run it now. Remember, we have four containers, and accessing the containers is pretty simple. Basically, we are going to do the post-installation from the proxy server, but when you want to access any other container, you can always use sudo lxc exec, then the container name. For now we are using proxy, and then you put your command. The double dash tells the command that what follows is the command to be run once logged in. So, as you can see, I am now not on the host machine but inside the proxy container. In here is a complete fresh Ubuntu 24.04 server, already installed with our Apache server; as you can see, Apache is already there. And this is only on the proxy server: you will not find Apache on the demo, monitor, or Postgres containers. So, the first thing we are expected to do here is test our command for generating the SSL certificate.
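The access pattern just described is standard LXD usage; the container name here is just the one from this demo:

```shell
# Everything after "--" is executed inside the container, not on the host.
sudo lxc exec proxy -- bash          # open an interactive shell in the proxy
sudo lxc exec proxy -- apache2 -v    # or run a single command and return
```

The `--` matters: it stops `lxc` from interpreting the container command's own flags as its own.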
We test it against the domain name that we used. With the highlighted parameter (the test-certificate flag) in place, it will allow you to try as many times as you like without being blocked; once that parameter is removed, it means you are generating an SSL certificate for production. So it is important that when you are testing, you apply that parameter, and also change the domain name accordingly. So, back in my proxy server, I am going to change a few things. This is the email address that you want Let's Encrypt and certbot to send you notifications on whenever there is any issue, for example an expiry or something like that; so I am putting mine in there. Then I set the domain name that we use; let me just copy the link. The -d parameter indicates the domain name that you are requesting the certificate for. Okay. So now I am saying: test this certificate for me, and if it is successful, I will go ahead and create the real one. So let's hit it; it will show us the outcome. Of course, we have to agree to the terms and conditions. So, as you can see... hmm, there is a problem binding to port 80, it could not bind. You need to stop Apache first, Steven. Yeah, yeah, that's what I... okay. You need to stop, not restart. Pardon? You need systemctl to stop Apache. Yeah, that's what I did. You need to stop, not restart. Okay, let me see. Okay, see how this runs now. As you can see, guys, it is telling us that we were successful in generating the test SSL certificate for this domain name. That means we can now go ahead and generate a real certificate for it. But even the test run stores a certificate on the server, so what is important is that we delete the one we used for testing: certbot delete.
It lists for us all the available certificates; for those used to USSD menus, you just specify the number of the one you want to delete, and it deletes it. And then we run our command again, without the test flag. Okay, let's run. Okay. So, we have finally generated our SSL certificate using Let's Encrypt, the free certificate authority, and it has been tied to the service. Now we definitely start Apache2 again, and we should be able to reach our site. The rest of these other commands are basically for monitoring, and for attaching the DHIS2 Tomcat; remember we did not attach the Tomcat yet, so we have to make sure it reloads all the containers within. So, let's get access. If I now go to this domain name... sorry, we just get a protocol error; I don't know why I'm not able to get this. Okay. So, let me just install a command-line browser to make sure I can access the Tomcat from my side: links, then http, demo, port 8080; that is the default port. And let's just make sure the Tomcat is running as well: lxc exec demo -- ... Okay. Remember, this is not required for installation on each of these containers; I am just trying to ensure that each of these containers is running as expected, and that we are able to reach the other bits. The other check is just to try a ping, or try to confirm that you are able to access the other remote containers and the like. So, let me try to restart the host machine, because it looks like I am done with these bits. Yeah. And then the other thing that is also important, while it restarts: the other basic configuration you can change is the one that automatically loads the subdomain. You have seen me typing slash demo within the configuration, within the proxy.
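The certificate sequence just walked through can be sketched as follows. Domain and email are placeholders; the flags shown are standard certbot options, but the exact authenticator the tools use (standalone here, hence stopping Apache) is an assumption based on the port-80 binding error seen in the session:

```shell
DOMAIN=demo.example.org        # placeholder for your real subdomain
EMAIL=admin@example.org        # placeholder; receives expiry notices

sudo systemctl stop apache2    # free port 80 for the challenge

# 1. Test run against Let's Encrypt's staging endpoint: repeatable
#    without hitting production rate limits, but it does save a
#    (non-trusted) certificate on the server.
sudo certbot certonly --standalone --test-cert -d "$DOMAIN" -m "$EMAIL" --agree-tos

# 2. Remove the staging certificate: certbot lists what it has and
#    prompts you to pick by number.
sudo certbot delete

# 3. If the test succeeded, request the real certificate.
sudo certbot certonly --standalone -d "$DOMAIN" -m "$EMAIL" --agree-tos

sudo systemctl start apache2   # bring the proxy back up
```

Separating the staging run from the production run is exactly the precaution described above: repeated failed production requests against one subdomain will get it rate-limited by Let's Encrypt.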
There is an upstream section and all those bits that have to be configured and added. Let me just log in to show you; it is still coming up. So, we have that inside the DHIS2 setup. There are other things we need; I just wanted to show you. Okay, so let me log in: lxc exec proxy -- bash. It's open. Okay. So, inside /etc/apache2 we normally look at the sites. In sites-enabled we have this; the DHIS2 site is not enabled by default. I think there is something that has changed, because if I go to sites-available, we should see it: yes. So, the reason our site was not coming up is that it has not been enabled by default; I think that is a change that has happened. If I open the file, we can see at some point that they have already put everything in, including the SSL certificate; it was just not enabled. Then there is this section that is very important: the rewrite rule that we always have to look at, especially if you have just one instance and you don't want people to keep typing the slash. What you do is uncomment it, and then change it from dhis to the name of your instance, the way the instance has been named; I put demo, so that once you try to access the domain, it redirects straight to the instance. So, let's enable this site. There are normally commands for this, a2ensite, to enable a site; the other option is just to create a symbolic link of your file, making sure it lands in sites-enabled. Then, once you restart Apache, it should run. Oh no, I'm in apache2... let me just go to the sites. Symbolic link: I go to sites-available, and what is there?
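The enable-a-site mechanism is worth seeing in isolation: it is nothing more than a symlink from sites-available into sites-enabled (which is what `a2ensite` creates for you, with an added config check). The demo below uses temporary directories as stand-ins so nothing on a real system is touched:

```shell
set -e
APACHE="$(mktemp -d)"                     # stand-in for /etc/apache2
mkdir -p "$APACHE/sites-available" "$APACHE/sites-enabled"

# A minimal placeholder vhost file, standing in for the DHIS2 site config
printf '<VirtualHost *:443>\n</VirtualHost>\n' \
  > "$APACHE/sites-available/dhis2.conf"

# "Enabling" the site is just linking it into sites-enabled
ln -s "$APACHE/sites-available/dhis2.conf" "$APACHE/sites-enabled/dhis2.conf"
ls "$APACHE/sites-enabled"
```

On a real proxy you would follow this with `sudo a2ensite dhis2` (or the manual `ln -s`) and then restart Apache so it reads the newly enabled site.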
Apache: dhis2.conf. So, in our sites-enabled we now have two sites, as you can see; we have enabled the DHIS2 conf. And now if I restart, if nothing else is a problem, we should be able to access our DHIS2 instance. There it is. So, this is a complete DHIS2, now running under the various LXD containers that have been separated. Like Bob hinted, the container concept allows you to manage resources in a more flexible way: you can allocate only a small amount of memory or resources to a container that does not require much, and focus more of your resources on the one that is heavy or consuming a lot. As you can see, our system is fully secured using the SSL certificate, and basically we have a completely secured setup right from the server side, where we try to prevent root access, we try to prevent default-port access, and we try to go further. Just recently an interesting case came in; I think we will add it here. We only want the proxy server exposed. Like here now, we have port 8080: if I try to access it directly, I hope it will not be accessible. In Uganda we had a scenario with a different setup where the system was accessible directly on the port, but of course that has been taken care of here: as you can see, it is not accessible. Because of the firewall rules we have put in to control the flow, port 8080 is not among the ports allowed from outside the containers; it is only through the proxy that you get into the other containers and so on. So, this is what we have; as you can see, a full run-through. Any questions or comments before we close for the day? Very good presentation. Yes, thank you so much, Steven, for this presentation. And I think we have Slack, which we are heavily using.
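A quick external check of the exposure described above looks like this; the hostname is a placeholder for the instance's real domain:

```shell
HOST=demo.example.org        # placeholder for the instance's domain

# Direct Tomcat port: expected to fail, since the firewall only
# exposes the proxy to the outside
curl -m 5 "http://$HOST:8080/" \
  && echo "8080 open (unexpected)" \
  || echo "8080 blocked as intended"

# Through the proxy over HTTPS: expected to succeed
curl -sI "https://$HOST/" | head -n 1
```

Probing from a machine outside the server is the point here: from the host itself, container ports may still answer even when the external firewall blocks them.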
So, in any case, if there are any questions, you can post them on Slack and we will try to reply as soon as possible.