That was just a quick look at a few of the security extras we've configured to make your Tomcat as hardened as we can. Let's move on. Logging. Logging is maybe a little bit different to what you're used to, and in some ways more flexible. By default Tomcat logs into catalina.out, and there are also a number of custom DHIS2 logs which appear under /opt/dhis2/logs. I think logging in DHIS2 is a little bit of a mess at the moment, and there are a lot of improvements we need to make; I know there is some attention on it right now. For the most part, we log too much, so log files get really big. There was a complaint recently, I think from Indonesia, that they were getting 15 gigabytes of log every day. It's a little bit crazy if we're writing more to the log file than we are to the database. So yes, there are a lot of improvements to be made with the logging. The fact is that the catalina.out log can get pretty huge, and it's quite hard to find what you're looking for in it sometimes. Because our Tomcat is running under systemd, it logs through a mechanism called the systemd journal, and I've found it much, much easier and more flexible to look at your logs through the journal than to grep your way through catalina.out. So if you're inside the container, you can look at the journal with journalctl, which captures all kinds of aspects of the system. If you just want the logs coming from the tomcat9 service, you run journalctl -u tomcat9. There are many, many options on journalctl; I'm going to show you a few in a minute. It's worth looking at man journalctl, the manual, which will show you the extra options for different kinds of filtering and formatting. The journalctl command is so useful, in fact, that we made a little wrapper script for it.
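To sketch those basic commands (assuming, as on these containers, that the Tomcat service unit is named tomcat9):

```shell
# Show everything in the systemd journal for this container
journalctl

# Only the messages coming from the tomcat9 service unit
journalctl -u tomcat9

# The manual lists the filtering and formatting options
man journalctl
```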
So from the host, you just need to type dhis2-logview. I think you saw me typing it once or twice already this morning. That just saves a bit of typing: it's viewing the Tomcat log via journalctl. A couple of examples I give here: -f means follow the log, so if you just go dhis2-logview -f, it'll follow the log in real time, printing lines as they come up. If you're looking for particular parts of the log, you can use since and until. If I only want to look at the log from yesterday, I go --since yesterday and it'll show me everything since yesterday. If there's a lot of output, you'll want to pipe it into less, otherwise it all just scrolls past your screen; with less you can page through it with the space key. You can also isolate a particular interval. Often you know something interesting happened between 7:20 and 7:22 this morning; if you just want to see that part of the log, you can go logview since 7:20 until 7:22. Sometimes it's useful to look at the log upside down, when you want to see what happened most recently: rather than starting at the top, you start at the bottom. That's what -r does, it shows the log in reverse. Let's look at a few of those. So, inside the container — let's go into our new container with lxc. If you just do journalctl, it shows you everything that's happened inside this container, including the system logs. This is obviously not just Tomcat; everything else running under systemd is also logging into the journal. So the first bit of filtering we want to do is say: just show me the logs relating to tomcat9. And there we can see what you would normally look at inside your catalina.out. This is all the logging from Tomcat, and there can be a lot of it.
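Assuming the wrapper passes these options straight through to journalctl (as the ones shown here do), the examples look like:

```shell
# Follow the Tomcat log in real time
dhis2-logview -f

# Everything since yesterday, paged with less
dhis2-logview --since yesterday | less

# Isolate a two-minute window of interest
dhis2-logview --since 07:20 --until 07:22

# Most recent entries first (log "upside down")
dhis2-logview -r
```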
If you want to see what happened most recently, you'll often find it useful to go -r and look at the log upside down. Now you're looking at the most recent entries first: this is what happened at 10 o'clock, this is what happened at 9:37, et cetera. If we just wanted to know what happened up to 9:37, you can look at the log until 9:37 and it'll show you everything up to 9:36:55, in this case. If you're not inside the container, just on the host, this dhis2-logview command is basically a little wrapper around journalctl: you give it the name of the container, in this case bob. Here we have to pipe it through less, because if I just type that it'll dump the whole thing to the screen in one go; slow it down with less and you can page through the log. I haven't implemented all of the journalctl options, but we've got -f, we've got --since, --until and -g. -g is a grep. So, can we find the string "startup" in there? I really want to find "startup routines done" — actually "started" is a good string to look for; many things have started. -g basically greps matching lines out of the log. Grep for something that hasn't happened and there should be nothing in there. If you just wanted to see today's log — this machine has only been running for ten minutes, so there's not much log in it — but --since today will give us today's log. If you want to capture a section of log, you can do that with a redirect. If I just want all the logging since 9:37, I can stick it into a file called log-937-onwards. That'll just be a section of the log. Sometimes when you want to share a log file with somebody, you don't want to give them the whole thing; you're just interested in the small time period where the interesting things happened.
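Capturing that slice into a shareable file is just a shell redirect (the filename here is only an example):

```shell
# Grab everything since 09:37 into a file you can send to someone
dhis2-logview --since 09:37 > log-0937-onwards.txt
```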
Okay, so very flexible. I do recommend you read the manual; these are just a couple of little shortcut examples of things you can do. We haven't really talked about the log format. The default format is as you saw it, but there are a couple of other interesting variations. I'm just going to show you — for example, I'm going to go into the container again. You can also look at the log in JSON format. No, not -f — now I need to read the manual myself: -o, the output format, right. That will give you the log in JSON format. You can also give it, I think, json-pretty as the format, and it gives you the log like that. These kinds of formats are interesting particularly if you're doing centralized logging. One of the things you can do with your journal is send all the logs to a central logger: things like Splunk, or Logstash in the ELK stack, putting all your logs into a JSON Elasticsearch database. Okay, that's showing me all my log lines expanded as JSON messages. But in most cases, when you're looking at logs, you're just going to look at lines of text. That's logging. Logging is important; you need to know where to find your logs. And as I say, if you're looking for catalina.out, you're much better off looking at your logs like this. Okay, one of the things we mentioned on the first day is a little profiler called Glowroot that we've increasingly found useful to install on servers. This gives you more detailed insight into what's happening inside your Tomcat container, particularly when you're having difficulties — performance problems especially — and you want to isolate where those difficulties are coming from. I asked Andrew Meheery yesterday if he didn't mind, so I can show you Glowroot running on a live system.
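The output formats are standard journalctl options, selected with -o:

```shell
# Default: plain lines of text
journalctl -u tomcat9

# Each entry as a JSON object, useful for shipping to Splunk or the ELK stack
journalctl -u tomcat9 -o json

# Expanded, human-readable JSON
journalctl -u tomcat9 -o json-pretty
```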
It's not really very interesting to look at a test system like this, because there's nothing actually happening on it. But we can look at one of the instances on one of the servers in Rwanda. I'm a little afraid to look live, because we don't really know what we're going to find until we get there. This is running in Rwanda; in fact it's the COVID tracker. It's running at the airport point of entry, I think, checking lab results and things like that. I just want you to have a look so you get an idea of the flavour of what's inside it, and then we'll talk about how you would set about installing it on your own system. Okay, this is what's happening today — well, in fact, over the last seven days. And yeah, I need to change that title, it's wrong: it's not UPHMIS at all. If anybody from UP is wondering, this is not your UPHMIS clone; this is in Rwanda. The first thing I can say, just looking at the average of what's been going on over the last seven days, is that there's nothing particularly unusual going on today, which is good. The overall average response time for all web requests is fairly high, averaging about 400 milliseconds. That's not great, but it's also not bad compared to a lot of DHIS2 instances that we see when they get into trouble. Looking down the left-hand column here, you can see which API endpoints are taxing the server the most. What this really measures is volume: how much CPU time they're using combined with the throughput for each request, and that gives you an idea of which API calls are consuming the most server resources. This one happens to be the tracked entity instances query — so generally the listing, I guess, of tracked entities. We can look at this in different ways. You can look at the throughput: it's averaging, I guess, at times up to about 700-800 requests per minute, about 10 per second.
It's fairly busy, but not catastrophically so. To get a bit more detail, look at the last 30 minutes: this is what's been happening over the last half hour. Request rates around 800 transactions per minute, response times averaging around 400 milliseconds. If we were looking at optimizing anything, the tracked entity instance query is probably the best one to optimize, in the sense that it's using up most of the server resources, 40%. And you see these requests on average actually take quite a while, about four seconds. It would be nice to optimize that a bit and get those requests down to something quicker. To do that you've got to understand what is really taking the time. We can see in this case, just by the yellow — the yellow is JDBC queries — that most of that four seconds is actually being spent on backend database queries. That's because their database is getting bigger and they're making more sophisticated queries against it. You can dig in a bit. There's a little tab here called slow traces, and you can see there are a couple of really slow ones here; these take up to 14 seconds, which is getting close to unacceptable. We can now see a little bit of detail: this is the query, and there are the various request parameters. If you look at the query stats themselves, you can see that nearly all of the time is taken executing this one query here. I won't go into detail, but basically, by looking at queries like this you can sometimes find that we're using the wrong type of index, or perhaps passing too many parameters. This is the place to look, and to start sharing with developers: if you have this on your system and you've got a particularly slow query, you can copy and paste the query and complain — is there anything we can do to make this run faster? So yeah, Glowroot is just a really nice tool for getting a bit of insight into what's happening at your back end.
The other useful thing to look at is the JVM itself. Here you can get an idea of what's happening with the heap. You can see that DHIS2 tends to have quite an active heap: allocating lots of memory, then cleaning it up again, allocating lots of memory, cleaning it up again. It's got quite a heap churn, from 8 gigabytes to 18 gigabytes, allocating about 10 gigabytes of memory over maybe two minutes and then doing it all again. This again is one of the areas where DHIS2 is looking at improving performance: trying not to allocate so much heap so quickly, because the consequence of allocating heap is that the garbage collector has to come in and clean up after it. And you can get a good view here of what the garbage collector is doing. Look at the collection time for your two generations: you can see that every minute or so a young generation garbage collection comes in, and that can be quite expensive. Here it's not too bad, at most about 46 milliseconds per second. If you compare that with the overall CPU usage, the process CPU load, you can see this CPU is clearly not heavily loaded at all; on average it looks like it's running at about 0.12 of a CPU. That's because this is actually running on quite a big machine. There used to be half a dozen other containers running on the same machine, but because the COVID application was so critical, we've removed almost everything else off it. So this Tomcat is running with — oh, I can't remember exactly — probably 20-odd CPU cores and quite a lot of RAM, and it's dealing with the load quite well. But again, Glowroot gives you a fairly good indication of what your system health is like, some areas where you might have problems, and where you might need to address them. Okay, so that's just a quick look through a live system.
We haven't found anything interesting on it, which in a way is a good thing, because we're not really here to debug the system in Rwanda; I just wanted to show you what Glowroot looks like. It's an example of a profiler which is quite easy to interpret, I think. It's not so easy to set up. It's not really complicated, but there are quite a lot of steps to it. I thought I would just write down the steps here so that people would be able to set it up on their own systems. As it turned out, I ended up filling two slides' worth of steps, so it's quite a lot. It would be nice to make this a little bit automatic, and I've gone part of the way: you saw I've already got a couple of Glowroot references in the configuration files. But let's go through it here. We've created this bob container already, running version 2.35. Let's just verify that it's there. Here's my Linode, and here's the instance we just created; bob hopefully comes up. There it is: an empty version 2.35 DHIS2 instance. What do we need to do to put the Glowroot profiler on it? I'm going to go through these steps — maybe just read through them quickly first and then go and do it. You have to go and get the agent, so I'm going to use wget. The slide says to download this glowroot-<version>.zip file into the directory /opt/glowroot, but actually I'm going to download it into /opt and unzip it, which creates /opt/glowroot; I'll fix that on the slide. So: download the file into /opt, unzip it. There's a little config file for Glowroot, which I've got a sample of, which we need to push into the container. Then we need to change the ownership of that /opt/glowroot directory so that Tomcat is able to read and write in it. Then we need to uncomment the javaagent line in /etc/default/tomcat9 — I showed you that before.
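Those first steps sketch out roughly like this. The version number and download URL are just an example of the pattern Glowroot releases use — check glowroot.org for the current one — and the tomcat user/group name assumes the stock tomcat9 package:

```shell
cd /opt
# Download the Glowroot distribution (example version; check glowroot.org)
wget https://github.com/glowroot/glowroot/releases/download/v0.13.6/glowroot-0.13.6-dist.zip
unzip glowroot-0.13.6-dist.zip      # unpacks into /opt/glowroot

# Tomcat needs to read and write in there
chown -R tomcat:tomcat /opt/glowroot

# Then, in /etc/default/tomcat9, uncomment the javaagent line, e.g.:
# JAVA_OPTS="${JAVA_OPTS} -javaagent:/opt/glowroot/glowroot.jar"
```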
We need to add a proxy location to the upstream so we're actually able to reach the Glowroot, then reload the proxy and restart Tomcat. We've also got to open up another firewall port in our Tomcat container, because Glowroot by default listens on port 4000. At that point we should be able to browse to our Glowroot, and it should be up and running; we set a password on it at that point. As you can see, there are quite a lot of steps — quite a mission. I'm going to try and make it a bit easier for you eventually, but for the moment there's nothing wrong with doing it manually; you may learn a few things along the way. Let's go through this. First of all, I want to take this file and get it into my container's /opt. That version number, of course, might change; you should go to glowroot.org to get whatever is the most current URL. They haven't released a new version for a while. I've got the file, I hope. There it is, the glowroot dist zip. Having got the file, I want to unzip it. There's my agent unzipped, and I can see I've got a new directory there now called /opt/glowroot. The next thing I want to do — going back to my slide so I don't forget a step — is put in the config file. The config file is admin.json; I need to fix the name of it on the slide. Basically, in this config file the most important part is the web section. The only thing I need to change is the context path: I want to call it bob-glowroot, because this is the Glowroot for my bob container. This is just the context path I'll find it on in my browser. So let's push it with lxc file push: the file is called admin.json, and I want to push it into the bob container into /opt/glowroot. Okay. Back into the container, let's see that it's there. There's the file admin.json, sitting in place.
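The web section of that admin.json looks something like this — a sketch, where the context path is the only value changed from the sample, and bob-glowroot is just the name chosen for this container:

```json
{
  "web": {
    "port": 4000,
    "bindAddress": "0.0.0.0",
    "contextPath": "/bob-glowroot"
  }
}
```

Pushed from the host into the container with something like `lxc file push admin.json bob/opt/glowroot/admin.json`.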
Next thing: you'll notice the ownership of all these files is kind of all over the place — some are owned by UID 1001 — so we need to change the ownership recursively to tomcat, everything in /opt/glowroot. Okay, that looks a bit better. So I've got all my files in place and the ownership changed: done with slide one. Slide two: uncomment the javaagent line in /etc/default/tomcat9. Let's do that. At least I have the line in there for you already — the line with an error in it from earlier. Here you go, uncommented line for Glowroot, we need it. Let's go back to the slide again. I need to add a proxy location to the upstream; we'll do that in a moment. While I'm still in my Tomcat container, let's set the firewall rule. Currently the default firewall rule is only allowing 8080. I don't need sudo because I'm already root. We allow proto tcp from 192.168.0.2 — that's the proxy — so we're allowing the proxy to reach any interface on port 4000. Okay, a new firewall rule, just to make sure you can actually reach the Glowroot. Let's get out of there for now and go into the proxy. We'll talk more about the proxy tomorrow, so bear with me for the moment. This is an Apache2 proxy, but the process is very, very similar if you're using nginx; the syntax of the location block is just slightly different. So this is the way we access our main DHIS2 application, and I'll copy those lines. What did we call the context? bob-glowroot — if we want to go to it, we do it like that, and the port is 4000. You're basically copying the lines you already have for reaching the application, adding an extra location which will direct us to our Glowroot. We can reload the proxy now. Back to my slide, to see what I've forgotten — I always forget something; that's why it's good to script these things. At this point, I can restart my Tomcat.
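The firewall rule inside the Tomcat container is a single ufw command (192.168.0.2 being the proxy's address on this particular setup):

```shell
# Let the proxy host reach Glowroot on port 4000
ufw allow proto tcp from 192.168.0.2 to any port 4000
```

And on the Apache2 proxy, the extra location sits alongside the existing DHIS2 ProxyPass lines — something like the following, with the container hostname and context path assumed from this walkthrough:

```apache
ProxyPass        /bob-glowroot http://bob:4000/bob-glowroot
ProxyPassReverse /bob-glowroot http://bob:4000/bob-glowroot
```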
And I should be able to browse to it. Instead of restarting just Tomcat, I'll restart the whole container; not much difference at this point. Now it should be coming up. Here's our logview command again, here's our DHIS2 coming up; Glowroot should probably already be up. Let's go and have a look. What did we call it — bob-glowroot? Or glowroot-bob? It should be called bob-glowroot. That should be up. Why isn't it? Here's my logview again; this time let's grep for Glowroot, in reverse, on the container called bob, and see if there were any errors. You can see there was an error: it could not lock the glowroot tmp directory. Ah — there's a step we didn't do. This is the thing I said we were going to fix this morning, and now I'm going to fix it again. You remember that I tried to make configuration a little easier by adding the read-write path for /opt/glowroot already in your systemd unit settings. It turns out your Tomcat refuses to start unless that /opt/glowroot actually exists, which is why we had to comment it out again. The thing is, now it does exist and we do want to access it, so let's enable it. If we try to restart Tomcat now, we get a warning that there's been a change to the config file, so let's just reload and go again. Now, hopefully, our Glowroot should be up. There we go: our newly installed Glowroot running on the bob container, accessible there on bob-glowroot. It's not showing anything interesting at the moment; the DHIS2 application isn't even up yet. Now it is running, but there have been no requests on here, so let me just make a request so we get some activity. Let's request the login page a couple of times, and in the last 30 minutes we can see a little bit of action starting to happen here: it's logging the login action, and it's loading static content. Okay.
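The systemd side of that fix is a ReadWritePaths entry for the tomcat9 unit, which punches a hole in the unit's filesystem sandboxing so Tomcat can write under /opt/glowroot. A sketch, with the drop-in path assumed:

```ini
# /etc/systemd/system/tomcat9.service.d/glowroot.conf
[Service]
ReadWritePaths=/opt/glowroot
```

After changing it, reload the unit files and restart: `systemctl daemon-reload && systemctl restart tomcat9`. Note the catch from the demo: systemd refuses to start the unit if a ReadWritePaths directory doesn't exist, so only enable this once /opt/glowroot is actually in place.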
There's very little interesting to see on a profiling system with no activity on it. But as we saw from the Rwanda system earlier, once you have a system running with a good bit of throughput, it's a really useful tool for gaining insight into what's going on. It's not the only tool you need: you also need to be able to look at your proxy logs, and you need to look at what's happening on your database. But it's probably the most useful single tool for getting insight into performance issues. Okay, so that's a lot of Tomcat. Just to recap: we went through the way Tomcat is set up in a container, some of the security considerations in there, some of the places where you can make edits, tweaks and tunings, and then this rather more involved process — not really very complicated, but with quite a lot of steps — of installing a profiler onto your Tomcat so that you can get deeper insight into what's going on. I've got a couple of to-dos on my Tomcat. One of the things I'm going to do right after I stop talking to you is fix up the problem we saw with the commented-out /opt/glowroot read-write path. And I think we should automate this: if we're really going to recommend people install Glowroot as a default profiler, I should make a little script to automate those steps, and that'll make it easier for you. Meanwhile, you'll have to do it manually. There are other profilers; one called YourKit is quite popular, and a lot of the DHIS2 devs use it. It can be useful to install YourKit on a production instance, particularly if you have a developer working hand in hand with you, trying to track down particular issues. The good thing about Glowroot, I guess, is that you can run it yourself and interpret it yourself. Developers will get a little more insight with something like YourKit, and the setting up of it is actually quite similar to setting up Glowroot.
Basically, it's the same kind of thing: you're installing an agent. The main difference is that you access your YourKit profiler over SSH rather than through your web browser. As I mentioned earlier, I want to start testing Java versions greater than Java 8. I haven't done it yet because I'm too afraid it will break people's DHIS2 installations. It would also be nice to make these Tomcat container images a little bit smaller than they are. It doesn't bother me too much at the moment, but they'd be a little quicker to set up, and arguably a bit more secure, if they were smaller. But yeah, that's all I had prepared to tell you this morning about Tomcat.