Okay, so my name is Tito from the University of Oslo. I'm based in Nairobi, and today we want to talk about the things that people do after the installation. When you have a fresh DHIS2 install, maybe you are using tools or you're just following a guide, there are things that you should do to make sure that your resources are optimized, to make sure that whatever you have on your host is being utilized by the application components, which include Postgres and your web application. So I prepared a few slides to walk us through what we have today, and after that we shall look at one of the problems that I created for demonstration purposes, and a few problems that you might face when you've completed your installation but nothing is working: you're not able to access whatever you've deployed from your browser. So these are the slides that I prepared, and they're going to take us through the tips and tricks, troubleshooting tips and tricks, and post-install things that you need to do. So I'm going to start the slideshow. Are you able to see my screen? You can see the screen, it's okay. Okay, thank you. So what's the problem? What's the reason why we are having this conversation today? It's because with normal installations, maybe using the tools we have at our disposal, or maybe just following the installation guide and building those components one by one, by default the apps that you deploy are not optimized; the defaults are not stretched to the resources that you have on your system. Once you've completed your installation, you need to tweak configuration files so that your app components use whatever server resources you have. And then another section is: you've done your installation but nothing works. You access your URL and you're not able to get anything.
You're getting errors instead. So what are the troubleshooting tips and guidelines that you need to apply? So this falls into two categories: what you need to do to tweak and optimize your apps, and then troubleshooting tips. Okay. So we have components that, whether you do your installation with the tools or manually, you will not miss: a database, a proxy, and Tomcat. The proxy is really your choice; you might go with Nginx or Apache2. Tomcat: our standard install normally comes with a Tomcat 9 instance. And then the database, which is of course PostgreSQL 13 as of this recording. So in our standard installation, we do an installation that has all these components packaged. There's another component that is not included on this slide, which is monitoring, because you also want to be able to get metrics about how your database, proxy, and Tomcat are performing. So that one is also included. So when you get a server, which is normally Ubuntu 18.04, 20.04, or 22.04, you have host resources, that is CPU and memory. You want to budget your total memory so that you give Postgres a fair share and the web applications a fair share. And of course you don't allocate the applications all the memory and leave your host with nothing; you budget your memory footprint so that the apps get a fair amount and the host does too. So we have Postgres and the web applications, those are the main applications we will talk about today, and then the host. Normally, when you have, say, an estimate of around 64 GB of memory, and say you have two instances of Tomcat, that means you have three applications to budget your memory for: two web applications and one Postgres. By convention, it's good to give your database the larger share and then divide the remaining share between your two web applications.
So if you have, for instance, 64 GB of RAM, then you want to ensure that your Postgres has about 32 GB, and then your other two web applications get, say, 8 GB and 8 GB, or 10 and 10, and you want to leave some amount of memory for your host as well; normal operation of your operating system needs some memory. So the next slide is really talking about the PostgreSQL database, now that you have given your Postgres some memory. Do you have a question? Okay. So you have allocated your PostgreSQL database some memory; you also need to tweak the PostgreSQL configuration. These are the settings that you generally need to set, because today I'm not talking about PostgreSQL configuration extensively; that needs a whole session on its own, there are a lot of things you can play around with. But I want to talk about four main settings that are mentioned in our install scripts guideline. One of them is shared_buffers. Out of the total amount of memory that you've allocated to your PostgreSQL instance, you want to give it a quarter, 25% of that total. So if you have, say, 32 GB, then that is going to be about 8 GB. And then work_mem: the total amount of work memory that your PostgreSQL configuration can end up using is a factor of the connections that you have opened. So the total amount of work memory that can be used at the end of the day is the work_mem value multiplied by the maximum number of connections. For example, you can set work_mem to 10 MB and then multiply that by the maximum number of connections that your system will be supporting. And then there is maintenance_work_mem.
That is normally utilized when you run your analytics, and you want to give your Postgres a good amount of maintenance work memory. And then there is effective_cache_size, which is used for PostgreSQL caching purposes. Any question up to that point? — I don't think it should really be a question. In the example that you've given, you said 64 and you're giving 32. So the calculation that we're doing here for Postgres, is it going to be based on the 32, not on the 64 GB? — This 64 GB is assuming that it's really for Postgres. Before that there's the budget first; with the budget you will know what amount of memory you will allocate to Postgres. This is really not the system memory, but only what you've budgeted for Postgres. So if you're going to budget 64 GB for Postgres in your case, then that means your system has a lot more memory, maybe 128 or so. Okay? — Yeah. — So, I had a comment, I nearly forgot. In terms of Postgres in particular: with Tomcat you can kind of restrict how much memory it's going to use, because mostly it uses its heap, and you're probably going to show us this later, so you can set the maximum heap size. With Postgres, Postgres is greedy, right? All of these settings that you give it will not prevent it from using as much memory as it can find. That's the way that it works. And particularly that effective cache size, it will just try to use however much memory it can see. So the really important thing with Postgres, if you're using containers, and I know you haven't spoken about containers, but if you're using containers for your database, and it doesn't matter if it's LXD or Docker, you have to constrain that container.
So when you create the container, you have to configure it to say: this container can only use 32 GB of RAM. Then after that, you can do all of these settings. One of the things we've seen happen, if you don't constrain the container: Postgres will use as much as it can, and sometimes then you try to start Tomcat, and Tomcat will fail to start, because Postgres has already chewed all the memory. — I was actually going to demonstrate that. Here we have a host with DHIS2 deployed within LXD containers. If we issue lxc list, we see that we have Postgres here. And the Postgres here is, I guess, seeing all the available memory. If we check the memory that we have on the host, we have 64 GB. So let's exec into the Postgres container and check the memory that it is able to see. The memory that is exposed to Postgres here is about 64 GB. That means, as Bob mentioned, that it will be greedy and try to use all the memory at its disposal, which may even exhaust the available system resources. So we want to limit this Postgres container so that it only sees the memory that is budgeted to it. That is done by issuing the LXD limit command, which is lxc config set, then the container name, which in our case is postgres, then limits.memory, and then the amount of memory that you want to give that container, in this case 32 GB. That is going to limit your Postgres container so that it only sees this amount of memory. When we exec back into the container and check the available memory, we see that it now only sees 32 GB. So even though it wants to be very greedy and use all the available memory, it's only able to see up to 32 GB. That is what Bob was talking about. Is that clear? — Yeah, that's clear. — So next, we want to talk about Tomcat.
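To recap the PostgreSQL side, here is a minimal sketch of the budgeting arithmetic plus the LXD limit command; the 64/32 GB split, the container name `postgres`, and the work_mem and connection figures are the illustrative values from the talk, not tuning advice. The lxc command is built as a string and echoed so it can be reviewed before running it on a real host.

```shell
# Sketch of the memory budgeting and container limit described above.
HOST_MEM_GB=64
PG_MEM_GB=32                                           # budget for PostgreSQL
APP_MEM_GB=$(( (HOST_MEM_GB - PG_MEM_GB - 8) / 2 ))    # per Tomcat, leaving ~8 GB for the host

# shared_buffers: roughly a quarter of the PostgreSQL budget
SHARED_BUFFERS_GB=$(( PG_MEM_GB / 4 ))

# Worst-case work_mem usage is work_mem multiplied by max_connections
WORK_MEM_MB=10
MAX_CONNECTIONS=200
WORK_MEM_TOTAL_MB=$(( WORK_MEM_MB * MAX_CONNECTIONS ))

# LXD constraint so Postgres only sees its budget (echoed, not executed here):
LIMIT_CMD="lxc config set postgres limits.memory ${PG_MEM_GB}GB"

echo "each Tomcat instance: ${APP_MEM_GB}GB, shared_buffers: ${SHARED_BUFFERS_GB}GB"
echo "worst-case work_mem: ${WORK_MEM_TOTAL_MB}MB over ${MAX_CONNECTIONS} connections"
echo "$LIMIT_CMD"
```

After applying the real command, `lxc exec postgres -- free -h` should report only the limited amount.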
So when you've completed your installation, you also want to make sure that your Tomcat is able to use the total amount of memory that is allocated to it. And that configuration parameter is in this file, /etc/default/tomcat9. So let's quickly get to our install here and list the containers. Here we have around six containers; the first two are our Tomcat instances. Let's take the first one, lxc exec into it with bash, and then edit that file and see its content, which is /etc/default/tomcat9. So this is the file, and it has Java configuration parameters. Line number five is the line that you want to uncomment so that you can tweak the memory that your application is going to use. So get to that line, and out of the available memory, the budget that you have set for your Tomcat instance, you can change these parameters to the amount of memory that you want to set, say 8 GB. After that, you want to restart your Tomcat instance with systemctl. I don't know if reload will apply the configuration, but you can restart the Tomcat instance. That will apply your changes, so that if you check the running process, you will see your configuration change, the 8 GB here, and your running instance is using the amount of memory that you wanted. The other parameter here that I can talk about, around line number 26, is for application monitoring. It's the Glowroot plugin: when you want to enable Glowroot monitoring, you will uncomment this line. Of course, you will need to have your Glowroot package extracted into this directory so that it will then be monitoring your instance. There are other configuration parameters here that I guess we are not going to talk about today.
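For reference, the heap line discussed above looks roughly like this once uncommented; this is a sketch assuming an 8 GB budget, and the exact flags already present in your /etc/default/tomcat9 may differ.

```shell
# /etc/default/tomcat9 (excerpt) -- heap sizes are assumed example values
JAVA_OPTS="-Djava.awt.headless=true -Xms4g -Xmx8g"
```

Apply it with `sudo systemctl restart tomcat9`, then check the running java process (for example with `ps aux`) to confirm it carries the new `-Xmx` flag.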
They deserve a session of their own. And then there's another file under /opt/dhis2, which is the instance configuration file. The file we just looked at is an environment configuration file for Tomcat, but there are configuration settings that are specific to the DHIS2 instance, and by default they are all in this file. So let's see what that file looks like: it's /opt/dhis2/dhis.conf. So this is the file. At the top we have these lines that are uncommented; this is the default that comes with the standard install. However, there are things that you may want to change later on to suit your environment. For instance, here we have the connection pool maximum size. In a very, very busy system you might want to increase this number, and this number needs to be in compliance with the maximum connections allowed on the PostgreSQL database. And there are many more configuration options that you can tweak to enable other features that you want your installation to support. So the standard install comes with these four settings uncommented, or enabled. Of course, there's the database password; this is just a demo instance, not production. — Hello, Tito. — Yes. — I think this is very good. I have one question, it's Lamin from Gambia. Will it be possible to put some of these things in your presentation, the lines that need to be uncommented, so that anybody doing the installation knows these are the things you really need to uncomment, these are the things you need to do? — Yeah. So this comes with the standard install of DHIS2 with the tools, and normally the configurations that you see uncommented here are sufficient. You don't need to really touch anything else here. And if you're using the automated install, you don't touch this file.
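As a reference, the uncommented essentials in dhis.conf look roughly like this; the values below are placeholders for illustration, not production settings.

```properties
# /opt/dhis2/dhis.conf (excerpt) -- placeholder values
connection.dialect = org.hibernate.dialect.PostgreSQLDialect
connection.driver_class = org.postgresql.Driver
connection.url = jdbc:postgresql:dhis
connection.username = dhis
connection.password = changeme
# Raise on a busy system, keeping it below max_connections in PostgreSQL:
connection.pool.max_size = 80
```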
The automated install handles it on your behalf. Otherwise, really what you need to change is the defaults that I showed you first, which you tweak to suit your server's memory availability. The other details in the configuration file you don't need to touch unless you have a special need for your installation. — It might be worth mentioning, we have a two-hour session planned in the upcoming server academy just on the dhis.conf file and all of the options which are in there. So we'll try to make sure that one gets recorded, I think. But yeah, as Tito says, it's those first couple of lines which are the essentials, mostly around your database connection. — Yeah. So those are the tunings that you need to tweak after the installation is completed, at least on the side of Tomcat. And then we go to the proxy. Sometimes you have your application deployed under an /application-name path, and even Gerard, last time when he was testing the tools, wanted to access the app from the root without appending the application name. That is something that comes up most of the time, and the tools do set up the install, but they don't do redirects by default. So you want to come back later and get to the proxy configuration, which can be Nginx or Apache2. The line is there, but not enabled; it's commented out. So you need to get to the proxy configuration and find the line that you need to uncomment, and I will demonstrate that on this call, at least for the Nginx proxy that we have running right now. So this is the site that we have, and it's returning an empty response if you just go to the root; if you don't append the application name, like you don't put a forward slash and the application name, it's an empty response. Let me get to the root cause of that; let's get to the proxy. This is the main configuration file.
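For orientation before we look at the error, the relevant part of such an Nginx site config looks roughly like this; a sketch in which the domain and the application name `hmis` are example values matching the demo.

```nginx
server {
    listen 80;
    server_name dhis.example.org;    # placeholder domain

    # Commented out by default -- uncomment to send the bare root to the app:
    # rewrite ^/$ /hmis permanent;

    location /hmis/ {
        proxy_pass http://hmis;      # assumes an upstream block named "hmis"
    }

    # Requests matching no application get Nginx's "close without response":
    location / {
        return 444;
    }
}
```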
And the reason we are getting an empty response is this: anything that does not match the application name that you want to access is going to return a 444. The 444 that is there is Nginx's empty response. Instead of returning an empty response, I could maybe change the tools so that you get the default static site that normally comes with an Nginx install, or the static site that you normally get with an Apache2 install. But the line that I wanted to talk about is this rewrite line. You want to uncomment this; that means whenever you access the root domain, it will be rewritten and redirected to /hmis. Of course, this needs to match the app that you have on your system. Then sudo systemctl reload nginx. So whenever you access the root, it's going to be redirected to the default application, like you've seen right now. That is another thing that you can tweak on the proxy after the installation. — Yeah, okay. I have a question. What was the file name in which you just uncommented that line? — It's the main file for Nginx, in the Nginx directory, you see? — Yeah, I can see it. But in the configuration file, you have uncommented the rewrite line there, with /hmis. So hmis is our WAR file name, right? — It's the application name. We can list them; it's the Tomcat application name. Let's get out of this container, and you see we have hmis and we have dhis. That means every request that comes to this server is redirected to hmis. However, if you want to access dhis, you need to append /dhis in your browser, you see, for you to be able to access it. — Okay, fine, got it. — Yeah, something like that. Understood? — Yeah. — Okay, so that's one. And then number two is monitoring tools, one of which is Munin.
Munin is what we use to monitor our instances, which can be servers, depending on the install approach that you took, or containers, or the host. And normally the default install leaves Munin exposed. There, you see: we get in without supplying a username and password for this Munin. So what we've done for that is use basic authentication at the Nginx level, or rather the proxy level. On this call, I'm going to demonstrate how to enable at least basic authentication for Munin. These are the steps: of course, you're going to need a patched version of the tools, then generate the password, then edit the Munin location configuration, and then enable that basic authentication. We're going to run through that quickly. On the same server we have the proxy, and that's where you're going to enable your basic authentication. So you need to exec into that proxy: lxc exec proxy, then bash. Then you need to install this package, which is apache2-utils: apt install apache2-utils. They're already installed here; I had installed them. Then you want to generate the password with this command here. So this is going to create a password: you say where you're going to store your password file, and then which user you are going to generate the password for. You can choose whatever username you want; for this call, we're going to go for admin, and then it's going to ask you for the password for that user. I'm going to just put admin. So it's generated the password for user admin, and the password is the same. After that, you want to edit the location configuration for Munin, because it's Munin that is actually exposed without a password.
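The end state we are working toward looks roughly like this; a sketch in which the password file path is an example of whatever path you passed to htpasswd, and the upstream name is assumed.

```nginx
# Password file generated first on the proxy with, for example:
#   htpasswd -c /etc/nginx/munin-passwd admin
location /munin {
    proxy_pass http://munin;                      # assumes an upstream named "munin"
    auth_basic "Munin";
    auth_basic_user_file /etc/nginx/munin-passwd;
}
```

After adding the two auth lines, validate with `nginx -t` and reload the service.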
So normally with the standard install, we have all the Nginx configuration files within a conf.d directory. Here we have the main configuration file and then upstream configuration files. Let's get into the upstreams and see what we have there. We have the dhis2 configuration for the two different instances that we have here. And then the file that is of interest here is munin. At the very end, you add two lines: auth_basic, and then auth_basic_user_file, pointing to where you have your password file, which in our case is the file we just generated under /etc/nginx. You add those, then check your Nginx configuration to confirm it's valid, and then reload with sudo service nginx reload or systemctl. Now let's try accessing our Munin in a new incognito window. Now it requests a username and password, the ones that we just configured. If we supply admin, it takes us through to Munin. But at least now you will not get to access this site, this monitoring endpoint, without credentials. Question? — I have a question. — Yes. — What is the point of having a password on Munin? Is it a security threat or what? Like, what is the main reason for having a password on Munin? — As you can see right now, the resources that you have on your infrastructure, that is Postgres, Apache, everything, are just exposed to the internet. Whoever has this link can load it and see what you have. They can get much, much more information about how your system is set up, and that's not good for security. You need to hide your stuff.
You just don't leave these out in public, where in a snapshot anyone can know what you have and what could be their next target; they would have a lot of information that they are not supposed to have. So you need to secure this; you don't want to open it to the public. This is private to your infrastructure. You don't want everybody to be able to access these endpoints. — Okay, this is good. It means that if we have installed Munin on our systems, we need to do this. But it would also be important to have a demo, like someone who has Munin, showing how you can just enter into his system without a password. Maybe next Tuesday or after that we can try to look into that. — Okay, that would be interesting. — Yeah. And also, most of our installations are done using Apache; I see you're using Nginx. I know the procedure will not be the same, so before sending the slides, you might just need to add the same steps for Apache, and then you send the slides so that we can protect it. I never knew that Munin exposes this information, and that someone could take this information to hack the system. So we need to actually protect our systems now. — Yeah. So since you've mentioned Apache2: last time when I demonstrated the tools, I had not developed support for Apache2, but right now, with the latest push, you can pull the latest install scripts and they support Apache2. You would need to change just one configuration directive from nginx to apache2, like you had tried, Gerard, before, when it was not yet working; right now it's working. Yeah. So next is backup. A backup plan is not a post-install thing; you need to plan for your backup even prior to starting your installation.
You need to decide a few things, like which backup retention policy you are going to use, and which off-site location you are going to push your dumps to. Those details about backup scripts and all that kind of stuff are something you need to plan even before you start your installation. Something that you do as a post-install step is testing your backups. You've had your system up and running, and you want to test and see that your backup script is doing its work; and number two, can you actually restore your backup in a fresh environment, and does it work? That's something you do after the installation, just to make sure that your backups are working and that your restores are working. Otherwise, planning for the backup is something that you do before you start the installation. And the Bash scripts that we had before the DHIS2 tools have backup scripts, which I'm still in the process of porting into the Ansible scripts that we have. Question? — I have one question. — Okay, Gerard, you can go first. — Okay, so my interest has always been with this, because though we are saying we're using open source, we have been at a great disadvantage when it comes to cloud hosting. So backup is very necessary. One of the things that I am particular about is actually the off-site backup. There was this project I always have in mind, where once you back up off-site, how do we replicate the system? It just automatically replicates itself: maybe there is a script that drops the database and deploys the new backup that you've already made, which is the backup testing, and then we start the local instance, and then it works. So that was one thing that I always take to the system administration training.
But it's something that I think all of us should collaborate on, so that we have one script that can do all of this: do the backup from the cloud hosting solution to a local instance, and then on the local instance, drop the database, load the new backup, and then start that instance. Because sometimes we lose data because we don't have resources to make those payments and all the rest of it. It has really been a challenge in this part of Africa. — Maybe I can comment on that. Broadly, I would say that backup is something we can classify into two main steps. One is making the backup on your host, just making a dump of your running instance's database. And number two is storing that backup in a place that is safe, which in this case is a remote site. So the script that we're talking about needs to be able to do these two things: make a backup on your host, and then push that backup to a place that is safe, off-site. So we have scripts that do backup. The scripts that were developed by Bob did the backup and pushed it, using tools like rsync, to another remote site that you want to push your backups to. We could also push to S3 endpoints, and S3 is available on cloud environments of your choice; the major cloud providers like AWS and Google Cloud have S3-compatible endpoints which you could also push your backups to. But everything that I'm talking about needs to be automated in some way. And of course the backup retention policy: how many daily copies do you want to retain, how many weekly copies, or even monthly copies, in your off-site environment? Next question. — Hello. — Yes.
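The two steps just described, dump locally and then ship off-site, can be sketched as below; this is a minimal sketch in which the database name `dhis2`, the rsync destination, and the seven-day retention are all placeholder assumptions.

```shell
# Minimal backup sketch: dump, ship off-site, prune old local copies.
# The database name and rsync destination are placeholder assumptions.

# Build a dated dump file name, e.g. dhis2-2024-01-31.sql.gz
dump_name() {
    echo "dhis2-$(date +%F).sql.gz"
}

# Retention: delete local dumps in directory $1 older than $2 days.
prune_old_backups() {
    find "$1" -name '*.sql.gz' -mtime "+$2" -delete
}

run_backup() {
    dir=$1
    pg_dump dhis2 | gzip > "$dir/$(dump_name)"
    # Ship to the off-site host (rsync runs over SSH behind the scenes):
    rsync -a "$dir/" backup@offsite.example.org:/backups/dhis2/
    prune_old_backups "$dir" 7
}
# Usage (not run here): run_backup /var/backups/dhis2
```

In practice this would run from cron, and the restore test mentioned above is the other half: pull a dump into a fresh environment and load it to confirm it actually restores.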
Yes, I also wanted to ask, like Gerard said, about these backup things, because for us, even here, we are doing backups on the same systems, on the same Linux host. So we're trying to find a way whereby we can do off-site backups, so that in case something goes wrong with our primary instances, the other one will just pick up automatically, like a replica built from the same backups. Because right now we are doing backups on the same instance. — Yeah, because if you have a backup on the same system, assume that same system is compromised. — Yes. — Then your backup is not useful anymore. — It's not useful. — Yeah. It's like shooting yourself in the foot. So we could talk about off-site backup approaches that you could employ. I've talked about S3. You could also have another server sitting somewhere else; just procure a server in another data center or another cloud environment, set up an SSH connection, because rsync uses SSH behind the scenes, and push your backups to that instance. Yeah. Okay. We've done the presentations, but I also had something for us today, which is this site. I actually deployed DHIS2 on this endpoint, but it's not accessible; I broke it deliberately for us to have a discussion about it. When I'm troubleshooting, the approach that I use is to follow the packet from the client, from my Chrome browser or Safari or Firefox. Your traffic goes through the internet to the proxy that you're using, which can be Apache2 or Nginx, and from there it's routed to the backend application that serves your request. Depending on what you're doing, if you're retrieving data, it reads data from the database and gets back to you; or if you're posting something, it depends on what you're doing really. So this is a guideline for the troubleshooting that you can follow when you have an installation.
So you can segment it into steps, starting from your browser, then the network, which is the internet, then the proxy, from the proxy to your app, and finally to the database. So this site that you see here, this domain, is not accessible. And as you can see, if I put an extra m in the domain name, you see, it says the site can't be reached; it's not accessible, and the error is different from the one I had before. Let's just delete the m and see. This one is DNS_PROBE_POSSIBLE. That means for this one, the domain does not even exist; it's a DNS issue. So that gets back to the post-install troubleshooting guide: you need a domain that resolves to your server's public IP address. You can see from the errors in the browser that this is giving us a different kind of feedback, DNS_PROBE_POSSIBLE; it's a DNS problem. The name is not resolving to any public IP address, or rather to any server's IP address. You can also test from the terminal using the tools available, like nslookup on the domain, and as you can see, it is not finding an IP address for it. However, this other site that I broke is actually resolving to a public IP address, as you can see, but again it's not accessible; even with the full URL, it's not accessible. So first of all, you've seen that it's resolving to a public IP address. That is checked. But then, are you able to reach that server? You could use tools like ping, if your server has ICMP enabled. This is not to say that if you don't receive any echo replies, the server is down; sometimes ICMP is disabled at the firewall level. But in our case it's enabled, and we're able to ping that server. And we've seen that the DNS is resolving to that server's IP address.
But we are not able to access it in the browser. So we've checked the first part: our browser is okay, the internet is okay, we are able to ping our server. Now, the problem could be lying with the proxy that we have running on that server, which is Nginx. So you could access that server, and just to make things easy, I had access to the server, and we can list the containers that we have here. We have the proxy here. So of course the proxy container is up, it is deployed, but we are getting nothing. One of the things that you can do is try connecting to the proxy. Our proxy normally exposes two ports, 80 and 443. Try connecting to those ports; you could use tools like telnet: telnet, your server's public IP address, and then the ports that we normally expose on the proxy. As you can see, we are unable to connect; even on 443, we are unable to connect. So that means our proxy has a problem. It could be the host firewall, or the proxy itself not listening, or the service not being up. So we've gotten to the server, and we see the proxy is here. You can exec into the proxy with lxc exec, and you want to check if your proxy service is listening on the network. You could use tools like ss, with options like -plnt. And here you see that we have nothing listening on port 80 or 443; we have nothing at all. So that means our proxy service here is not listening. You could also check the firewall here; that's another thing that you always need to check: ufw status. When we check the firewall, we see that ports 80 and 443 are allowed; traffic is not filtered on those two ports, they are just open. But the real issue here is that we don't have a service listening on port 80 or 443. So you could run service or systemctl status nginx, depending on your proxy of choice.
We see that the nginx service is not running. Here I have used Nginx, but it's not running. So let's try starting it with systemctl. But we are getting errors; it's not coming up. And the error is actually here. Normally, most Nginx errors are related to the configuration you have, and you can check the configuration syntax with nginx -t. That will normally tell you where the problem is, and in this case it's on line five. So you need to edit this file and see where the problem is, and in our case, a line is simply not terminated. You can edit the file with the editor of your choice and go to line five. There's the line. Yeah, here it is. It needs to be terminated with a semicolon. And that was the problem, the reason our Nginx was not running. After that, you start your service with systemctl, or you could first check directly whether the configuration parses: run nginx -t again, and you see it's okay; it's no longer giving us that error. And now you can start Nginx with systemctl, and it starts. So let's check whether we can now see ports 80 and 443 in the ss output. And for sure, we are seeing ports 80 and 443 listening on the network. Even when you go back to your client, whatever machine you are accessing from, and telnet to 443, you see that we are now able to connect. This tells us that our proxy is okay: it's accepting connections on 443 and on port 80. You can test the two, and you see we are able to connect to both comfortably. So that means we've checked this part: our proxy is now okay, listening on ports 80 and 443 over the internet. So are we now able to access our site? Let's reload. No, we are getting an empty response.
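The bug in this demo was a missing statement terminator. As a hypothetical illustration of the class of error involved, every directive in an Nginx configuration must end with a semicolon, and nginx -t reports the file and line number when one is missing:

```nginx
# Hypothetical minimal server block; names and addresses are illustrative.
server {
    listen 80;
    server_name dhis.example.org;
    location / {
        # Dropping the trailing ';' on a directive like this one is
        # exactly what 'nginx -t' flags, and what keeps nginx from starting.
        proxy_pass http://dhis:8080;
    }
}
```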
So this is the 444 behavior, remember, the 444 response we talked about: when you do not redirect traffic hitting the bare root to your application endpoint, you get an empty response. The root is not where your app is listening. If you issue lxc list here on the host, you see your app is listening behind the dhis endpoint. So this is the situation we talked about before: a 444, an empty response, because traffic to the root has not been redirected to your application endpoint. But let's put the name of our app in the path, which is dhis. Now we are getting Bad Gateway. This is a different error. It means that our Nginx proxy is reachable and is trying to pass our request to the backend application, but it's a bad gateway: our app is not responding. So let's go back to our install again. lxc list, and the dhis app is there, but the proxy is not able to reach that application. So let's execute into the proxy and try pinging our application, which is dhis, just to make sure the network is okay and we're able to ping. You see, we are able to ping. But then, what port is our application listening on? Normally it's port 8080. You could even telnet to it from here; the standard Ubuntu container install comes with a telnet client. Telnet to your app's IP address on port 8080, and you see, I'm sorry, it's not able to connect. So that tells you that the installation is here, but again, the same situation: nothing is listening on the network. One of the reasons it might not be listening is that your Tomcat service is not running, or number two, your app didn't complete its startup process, and there are many other possible reasons. So you need to execute into the container, which in our case is dhis, and see what's happening there.
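From inside the proxy container, the two backend checks just described can be sketched like this; the container name dhis and port 8080 are the demo's defaults:

```shell
# Name resolution + ICMP: can the proxy see the app container at all?
ping -c 2 dhis || echo "dhis not reachable by ICMP (or name not known)"

# TCP: is anything actually accepting connections on the Tomcat port?
# (equivalent to the telnet test; nc -z just probes and exits)
nc -zv -w 3 dhis 8080 || echo "nothing accepting connections on dhis:8080"
```

A successful ping with a failed TCP probe is the telltale combination here: the network path is fine, but no service is bound to the port.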
Normally you check the logs, the Tomcat logs, but the first thing you could check is the firewall. Do we have a firewall running here? Yes, we do. But our firewall is allowing connections from our proxy, so that is not really the issue; this entry here, our proxy's address, is open. Any question up to that point? Hello? No, I don't think there is a question; you can continue. Yeah. So that means our requests are getting to the proxy, which is this container here, but from this container they are not getting to the backend application. We've seen the firewall is okay, but do we have anything listening? Let's check what's listening on the network, because you could have the firewall opening that port of ours, 8080, while nothing is listening on that endpoint. We can use ss -tlnp. Indeed, there's no service listening on port 8080. So that means our Tomcat service is not running. You could run ps aux and then grep for tomcat, and of course there's no Tomcat process there. So what could make our Tomcat not run? Sorry, I'm going to have to leave you, but if there are still people happy to carry on, feel free. Okay. Hello, before you leave: I have one suggestion, because this is very good and it's really helpful. Maybe every Tuesday we could have ten or fifteen minutes added on top of this for the issues people are raising in the Telegram group, to see how best we can resolve them, because many people report their problems there. That way we can check whether this is the solution to their problem, and then we can have documentation to avoid such problems again. That's my suggestion. To your suggestion: we have ten minutes allocated for this call, though not for a particular user's questions.
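The checks inside the app container can be sketched in the same style; container and service names follow the demo's defaults:

```shell
# Inside the app container: firewall, listeners, and the Tomcat process.
command -v ufw >/dev/null && ufw status        # is the proxy's address allowed in?
ss -tlnp | grep ':8080' || echo "no listener on 8080"
# The [t] trick keeps the grep command itself out of its own results.
ps aux | grep '[t]omcat' || echo "no tomcat process running"
```

An open firewall plus no listener and no process points squarely at the service, which is the next thing to start and watch.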
They normally send them to the Telegram group to find a solution, for the benefit of others in case it happens again. Okay. Yeah, we can look into doing that. All right, thank you. Yeah, I'm just a bit reluctant to restrict it to the Telegram group, because not everybody is on that. But for sure, we can take questions from there. If people have a particular question they can send it directly, or we might also look at the community of practice: what new posts have there been this last week, for example, and see if we can comment on those. Good idea. All right, I'm going to have to love you and leave you, but I'm sure Tito can carry on. Yeah. I'm going to summarize in a very few minutes, so that we can at least get this site accessible quickly. So what you normally do here is start your Tomcat instance and see what happens. You can follow the logs and see what is really happening. It's bundled as a simple systemd service, so you do systemctl start tomcat9, and then you might want to follow the logs with journalctl --follow --unit tomcat9. Yes. This is going to start your Tomcat, and at the same time you will see in the log what's really happening. Normally, if you have connection errors to the database, you will see here why your Tomcat instance was not coming up; you will get to know where the problem lies. But to shorten this demonstration: there was no real problem with this Tomcat. It was just that I had shut down the instance. Normally, though, it could be related to the instance's connection to the database, an upgrade gone wrong, things like those. You will see in the logs scrolling through here exactly where the problem is. So once this instance has started, we will again be able to access this site from the internet.
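The start-and-watch step can be sketched as below, guarded so it is safe to paste on a machine without systemd; the service name tomcat9 matches the demo's Ubuntu packaging:

```shell
if command -v systemctl >/dev/null 2>&1; then
  systemctl start tomcat9 || echo "tomcat9 failed to start - check the log"
  # Follow startup live with: journalctl --follow --unit tomcat9
  # Here we just dump the last lines non-interactively:
  journalctl -u tomcat9 -n 50 --no-pager || true
else
  echo "no systemd on this machine"
fi
```

Database connection failures, a broken upgrade, or memory pressure all show up in those journal lines during startup.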
And with that we will have checked at least four of these components, making sure that our client is okay, the network is good, DNS is resolving, and then that the proxy is not the problem and the web application is not the problem. But the problem could also lie with the database: the firewall blocking access from the instance, or the database configuration file that holds the pg_hba.conf entries, because that file is where you configure the application user, the password method, and where it's allowed to connect from. That could be the reason you're not able to access the site. So we will go up to this point, and I guess after that we will have our service up and running. Any question, now that the app is coming up? Yes, thank you so much for this meeting; it was a great meet. Is there any way to download the recorded video so I can watch it later? Well, yes. When we have completed this presentation, it's recorded, and we normally upload it to our YouTube channel. If you want to follow all the previous recordings, you will find them in that channel. So after a day or two, we will have this recording uploaded to the YouTube channel. Okay, so can you please send me the link of the YouTube channel? Okay. And the channel has a lot of other things that you might be interested in following. So let's see: after this application comes up, we should be able to access this site. It will not give us 502 Bad Gateway anymore. It will be accessible, but it takes a while for the app to come up. Yeah, I guess that wraps up what we had today, but we can wait and see what we get after the app comes up. Just to mention also: all the apps we are demonstrating with right now were actually installed with the automated Ansible tools, the dhis2-server-tools. And you see that the app is now accessible. So that was where the problem was.
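As a hypothetical example of the database-side check mentioned here, a pg_hba.conf entry has to allow the application user to connect from the app instance's address; the names and subnet below are illustrative, not the installer's actual values:

```
# TYPE  DATABASE  USER   ADDRESS          METHOD
host    dhis      dhis   192.168.0.0/24   md5
```

If the app container's address falls outside the ADDRESS range, or the METHOD does not match how the app authenticates, Tomcat logs a connection failure at startup even though everything else in the chain is healthy.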
But as I mentioned, it could be somewhere else; it could be on the database. Maybe if we have a session that is really dedicated to troubleshooting, we can explore all the possible problems we might encounter. For the automated install, you would normally not get these problems; they are taken care of. However, in situations where people are not using the tools, maybe because they don't suit their deployment architecture, and they deploy each and every DHIS2 component separately, this is going to be helpful for them. Yeah. So right now the app has started, and it's pretty much accessible from the internet. The next slide is just for questions, if there are any. Otherwise, we are actually at the top of the hour; we are even past it. Do we have any questions? I don't have a question; I will just say thank you. With reference to the backup, I think we've already put a hold on that. It will probably be part of the next session we're going to have, and pretty much what we're going to do in Rwanda, so I've put that down as a pending action for me. But this is good, and it creates a lot of awareness. It's something you have to love in order to go through the processes. For me, I spend most of my time doing what you're currently doing, fixing problems and identifying problems, and that is how I learned as fast as I could. So this is just an addition to the package, and I appreciate it; it's something I want to keep moving forward. So I wanted to say that the things I just talked about we have not gone really deep into, because each and every component here could be its own topic. When you talk about Postgres tuning, it can be its own session. When you talk about Nginx, or whichever proxy, it can be a whole complete two-hour session. So these topics are going to be deep-dived in Rwanda.
Each session there will cover one topic extensively. Otherwise, we can finish at that point for today's call. And thank you, everyone, for joining. Yeah, thank you so much.