Welcome everybody. What I've been asked to do is to run through an increasingly standardized setup. A number of us in the server admin community have been working on this for nearly a year now, and it's being used in quite a large number of countries in one guise or another. It's not specifically about COVID-19. The reason we came up with this set of deployment tools was to help system administrators install a reasonably secure DHIS2 installation out of the box. Some people will remember the old dhis2-tools; I think quite a few on this call have used them, some for a number of years, but for all kinds of reasons they're not really adequate in this day and age, so we've come up with a next-generation version. Maybe just to dampen expectations a little: if people are expecting a setup.exe that an inexperienced person can just run to host their DHIS2, they need to think again. If you're not experienced in administering Linux systems, your best bet is to get support from somewhere, probably by outsourcing to the likes of BAO Systems, HISP Geneva, HISP South Africa, Bluesquare, or a number of others who offer this service. It's not a good idea for the DHIS2 system running your COVID-19 tracker to be the first system you've ever administered. That's my spoiler. Having said that, if you are going to work with DHIS2, it makes sense to do it the way lots of other people are doing it too, and that's one of the reasons we're trying to standardize around this particular approach. Let me show you a diagram first before we kick off; I don't want to take too much time talking because we need to get installing as well.
What we're aiming for, to meet quite a lot of people's requirements, is taking a single machine, often a cloud VPS from Amazon, Linode, Contabo, or one of the other common providers people use. They typically just give you a single Linux box. The problem, as most of you know, is that DHIS2 has a number of components, and it's not a good idea to run them all together in the same memory space, sharing the same CPUs. That's not good from a security perspective, and it's also not good from a monitoring and performance-tuning perspective. So what we try to do is break those pieces up: you have a proxy, one or many Tomcats, one or many database servers, and one or many monitoring systems, each running in a container inside the host box. Setting all of this up from scratch is quite a long operation, but most of it is fairly easily automated, and I've actually done that; I'm going to show you it briefly now. There is the beginnings of a simple-ish setup guide in the repository here. This repository, by the way, we need to move; at the moment it's sitting under my personal account, and we'll move it into the standard DHIS2 space. There are a couple of steps here which describe how you go through the installation. I'm not going to go through the document, I'm just going to do it, and I'll skip the prerequisites because I've already done those, and go straight on to installing. So I have a server here which is relatively empty, running on Linode. I've run the setup script on it already and then wound it back so that I can do it again. What you would do is pull these scripts with git, as described in here: git clone the tools from the repository.
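The bootstrap sequence being described is roughly the following; the repository URL is not given in the talk and the script names are approximations from the narration, so treat this as a sketch rather than the exact interface:

```shell
# Clone the deployment tools and prepare the setup configuration.
# <repo-url> stands in for the repository mentioned in the talk.
git clone <repo-url> dhis2-tools
cd dhis2-tools/setup
cp containers.json.sample containers.json   # then edit domain, email, timezone
sudo ./lxd-setup                            # full setup; script name approximate
```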
Then you go into the setup directory in there, and you'll find a file called containers.json.sample. You need to rename that file to containers.json. It's a configuration file, and you just make a few edits in it. I think I've already done it. Yeah: you put in the domain name of your server, an email address, and your time zone. The time zone is quite important to set, because most often the time zone of the place your system is meant to be serving is not the same as the time zone of the data center where the server sits. It's important to note that you cannot do this with just an IP address; you need a fully qualified domain name. That's because the setup is geared towards using SSL, and there's quite a lot I'd have to undo to not use SSL. So you need a fully qualified domain name that resolves to the IP address of your machine, an email address, and your time zone. From there, normally you would just run sudo lxd-setup. I'm not actually going to run that because I've run it already; I'm just going to run the last part of it, the create-containers step, since I don't want to do all the rest again. That takes about seven or eight minutes to run, so while it's running, let me go back to my diagram and tell you what it's doing. In our configuration file, we specified that we want a proxy here. I'm using Apache 2 as the proxy. I know there are religious wars to be had about this, and some people prefer nginx; I don't want to go into those here. I've done a lot of work on tuning up my Apache 2 configuration.
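A containers.json along the lines described might look like this; the key names are assumptions based on the narration (domain name, email, time zone, container list), not copied from the repository:

```json
{
  "fqdn": "covid.example.org",
  "email": "admin@example.org",
  "timezone": "Africa/Kampala",
  "containers": [
    { "name": "proxy",    "type": "apache-proxy" },
    { "name": "postgres", "type": "postgres" },
    { "name": "monitor",  "type": "munin-monitor" }
  ]
}
```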
Tuzo has recently contributed an nginx version of it. It still needs a little tweaking to get it into the setup; I merged his commit about an hour ago and actually broke my install, so I reverted it. But yes: the setup creates a proxy container, a database container, and a little monitoring container running Munin. You can ignore this thing about encrypted disks for the moment; it's important, but it's out of the scope of this discussion and not something I've managed to automate and document yet. The important thing about automating, as opposed to following an install guide and typing sudo nano this and that, is that it becomes repeatable, and you don't have to remember all the little things you might forget, particularly around security. We've done quite a lot in terms of insulating these containers from one another. For example, you'll see when they're up and running that there is a firewall running on each one, so the only thing that can access your Tomcat container, for example, is the proxy container — the only thing that needs to access it. Similarly, the Postgres permissions are set up so that when a database is created for this HMIS instance, for example, nobody else can access that database except that HMIS instance. So quite a lot of work has gone into the hardening. For the containers themselves, I've made use of a set of benchmark controls from CIS; you might want to look that up, CIS security, and I've tried to go through each container against those as much as possible. Tuzo has done similar due diligence on his nginx container.
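The per-container firewall idea can be sketched with ufw inside a Tomcat instance container; the addresses follow the 192.168.0.x convention used later in the talk, and the real scripts may use different tooling or rules:

```shell
# Inside a Tomcat instance container: deny everything by default, then allow
# only the host and the proxy container to reach Tomcat's HTTP port (8080).
ufw default deny incoming
ufw allow from 192.168.0.1 to any port 8080 proto tcp   # the LXD host
ufw allow from 192.168.0.2 to any port 8080 proto tcp   # the proxy container
ufw enable
```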
This is quite a nice website, and it gives you checklists of things to do if you're setting up an Apache HTTP server or a Tomcat or whatever it might be. This is many hours of work: the Apache HTTP server benchmark alone is about 170 pages of checklists, and we've tried to go through most of those so that the containers are set up reasonably secure by default. Ah — it looks like they're done. Let's have a look: there you can see our three containers. We'll talk about what's inside them and how to get inside them a little later; first let's just get something up and running. What we don't have is a DHIS2 instance, so we need to make one. The last step of the installation process is to install some scripts into /usr/local/bin, so that they're on your path. You can see there are various scripts for backup, creating instances, deleting instances, and so on; some of these might again be familiar to people who used the old dhis2-tools. We're going to create an instance and deploy a war file to it, and if we do really well, we might try to deploy some COVID-19 metadata. Let's have a go: sudo dhis2-create-instance, and we'll call it covid19, being a topical thing. One thing that's different, particularly for people who used the old dhis2-tools, is that we need to specify an IP address for it. Remember, the three machines running here — the monitor, the proxy, and the Postgres — all have IP addresses. By convention, when I start up my Tomcat servers, I start them on 192.168.0.10; if I make another one it's 11, if I make another one it's 12, et cetera. The other thing we need to specify when creating an instance is the name of the database server to use.
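The IP convention just mentioned (instances at 192.168.0.10, .11, .12, …) can be mechanized; this little sketch picks the next address given the ones already in use, with example data standing in for a real query of the containers:

```shell
# IPs already assigned to instance containers (example data)
used="192.168.0.10 192.168.0.11"

# Take the highest final octet and add one to get the next instance IP
last=$(printf '%s\n' $used | awk -F. '{print $4}' | sort -n | tail -1)
next="192.168.0.$((last + 1))"
echo "next instance IP: $next"    # → next instance IP: 192.168.0.12
```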
In most cases you're only going to have one database server, but it's possible that, if you've got many trackers, you might want to run each one against a separate database, because as some of you know, trackers are not always very friendly to databases. So I create an instance like that. This instance is based on Tomcat 9. I've only been trying that for the last couple of weeks, but it seems to be working well. There are a few odd things about Tomcat 9 which took a bit of figuring out, but generally it functions perfectly well. Some of the places it puts its files are slightly different from Tomcat 8.5, particularly catalina.out, which seems to be no longer there; what used to go to catalina.out now ends up logged directly into syslog. This will take a minute or two; it's downloading its packages and installing OpenJDK and Tomcat 9. Just about done. At this point it's updated some firewalls, created a database, and done all kinds of useful things. Now we can see we've got a new container running: there's our covid19 container, just running a Tomcat. It's created a database on Postgres called covid19, owned by a user called covid19 — it's just easier to maintain consistency throughout. But it doesn't have a war file on it, and in order to become useful as a DHIS2 instance, it needs one. Let me just copy what I had before, and take the latest 2.33.2. The command is dhis2-deploy-war, with -l because we're getting it from a link, an HTTPS URL, and we're going to deploy that war file to our new container called covid19. It downloads the war file, does a quick check on it, and deploys it. The way it deploys is slightly more secure than the way most of you are probably doing it: it doesn't just dump the war file into the webapps directory.
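Put together, the deploy step as narrated looks something like this; the script name and the -l flag come from the talk, while the release URL is a placeholder in the style of the DHIS2 downloads, so check the repository for the exact interface:

```shell
# Compose the deploy command as described: -l tells the script the war
# comes from a link rather than a local file. URL and names illustrative.
WAR_URL="https://releases.dhis2.org/2.33/dhis2-stable-2.33.2.war"
INSTANCE="covid19"
CMD="sudo dhis2-deploy-war -l $WAR_URL $INSTANCE"
echo "$CMD"
```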
It actually unpacks the war file and makes the directory owned by root, so that if the application gets hacked, it can't modify itself. At this point it's starting up, and since that can take a couple of minutes, let's make another one. If you make a new instance, you've got to give it a new IP address. Let's make covid19t. The reason I'm making covid19 and covid19t is that I'm going to try to put the tracker on one and the aggregate on the other. Unfortunately, a couple of days back we had an issue where the tracker metadata and the aggregate metadata clash a little with one another; some harmonization of UIDs needs to happen, and I'm hoping that will happen soon. Anyway, it's a good idea in general to keep your tracker separate from your aggregate, and I think most people are doing that, though there might be a case, if you wanted a very COVID-19-focused box, for putting the aggregate together with it. Okay, this one is doing basically the same as the other did. While it's doing that, let's see whether the first one has come alive yet. Reload. Okay, long URL — it will be /covid19. Ah, there it is, with a few scary little messages. So this is the first one we created and deployed a war file to; we log in as admin/district, and 2.33.2 — there's our instance. We can have a look at the About page just to check that it is what we think it is. That link doesn't work from here, so you'd need to enter through one of the old apps. Ah, never mind, let's not go there. There's our covid19. By this stage, how is our covid19t doing? Okay, we had a problem with it. Rather than try to fix it — I know why that is — I'm going to just make a different one, a third one, called covid19tr.
What happens is that after I create the container and start running the update scripts on it, sometimes the network doesn't come up immediately and the apt-get fails. I need to find a way of making that a bit more robust. So, a third one. Okay, that's looking better. The other thing that was configured behind our backs was SSL, which was set up using Let's Encrypt, automatically, on the basis of the domain name. You also have a very simple monitoring program on here called Munin. It's not going to show you anything interesting yet — this machine is new, so there's no data — but it's configured and ready to roll. I've collected a couple of pictures from some running servers to show you what the Munin graphs look like, since the graphs of a brand-new server aren't interesting. Okay, that worked better. Let's deploy a war file to it in much the same way. What did we do before? Remember, history | grep is my most common command — trying to remember what I typed before. Okay, there we go: let's deploy the same 2.33.2 war file to our new covid19tr, and off it goes. Just looking at the time, it's 13:24 now, so it took us about 20 minutes to take the machine pretty much from blank to having a 2.33 instance running on it, SSL configured, and a little bit of monitoring happening. We created three instances; there was a problem with one of them, which I'm not going to look at, but the tr one has worked fine, so we've got instances running, and we'll have to delete the broken one at some point. Let's see, is it up here? As you know, it takes a while for the war file to load up; maybe it's still starting. Okay, that's running. Now, the live graphs here won't show much yet.
I've just taken a few pictures, so I'll show you a little bit. The monitoring machine here monitors itself, and the Postgres machine. There are lots and lots of graphs — that's one of the problems with Munin, it gives you a lot of graphs, and you really have to become very adept at navigating around, or keep yourself some links, or make a dashboard page that links to the interesting ones. As you can see, at the moment there's not much there. The way Munin works is fairly primitive; it's not as fancy as something like Grafana, it simply samples every five minutes. The reason we've used it is partly that it's really quite easy to set up, and it gives you really useful information for quite a small amount of effort. As far as the configuration is concerned, the plan is for this to be configurable. In our config file — at the moment this is a bit hard-coded — you can specify what kind of monitoring solution to use. I'm using Munin, so we specify a container called monitor whose type is a Munin monitor. The idea is that over a bit of time we'll also create, perhaps, a Grafana/Prometheus-type monitor, and then it will be a matter of specifying a different type of monitoring in your configuration and a different type of monitor container to load up. Let's look at some of these; I just grabbed them this morning from another server. One of the most important things is that it very easily gives you a way of looking inside your JVM, to see whether you've allocated enough heap. A lot of people allocate heap memory as a sort of act of faith: oh no, my server is going too slow, it's crashing, probably I need to give more heap to Tomcat or whatever. They're not using an objective measure to decide.
It's really useful to have these little graphs. Basically, as long as you've got a healthy amount of green free bytes, your heap is operating nicely. If you find there are times of day where that green gets very low, that's a sign you're going to need to increase your heap size. You also get a simple view of what Tomcat's threads are doing. As you can see, this server is not particularly busy; it's generally running with the minimum of 10 Tomcat threads, and every now and again it gets a bit of activity and jumps up to a maximum of, it's looking like, 24. A rising thread count is usually a sign that some threads are getting quite busy and quite slow to return — maybe they're making database queries which don't come back straight away, so the threads stay alive for longer and the count increases. Something that would be really nice to do, and I haven't done it yet, is to go through some troubleshooting and interpretation: if you see this kind of thing in your graph, it possibly means this, that, or the other. All monitoring solutions provide you with dozens and dozens of graphs, but if you don't know what they mean, they don't help you much. There's also a bit of monitoring here on your Apache server, which gives you a good idea of what your throughput is like. The nice thing about Munin is that besides the daily graph you can also see a weekly graph and a monthly graph, to see your trends over time. That's very important for things like disk space. Another useful thing is monitoring your Postgres, and this is probably the most important graph of the whole lot, because for most of us who have seen performance problems on DHIS2, it comes back in the end to something happening at the database.
And this is a graph that's actually showing fairly healthy performance. You're looking at the total number of connections, and what you want to see, as here, is that most of them are idle — meaning that as we sample, we're not coming across a large number of connections which are active or waiting for locks or something like that. Typically, as your database load gets much bigger, you'll see the number of busy connections start to rise, and you might reach a point where you've got to think about increasing your pool size. But as long as they're mostly showing orange, you're generally happy. When you start to see a lot of green and blue, it means there's some problem with your database that you need to sort out. It might be the kind of locking some people have seen; sometimes with tracker it can be because of program indicators which are a bit ambitious. Besides the actual applications, Munin also monitors aspects of your system. Just as an example, you can see the CPU and all kinds of things. This one is looking at a disk, and again this is quite an important figure: understanding what your latency is on your disk. Often when database performance is slow, it's because the underlying disk is slow, and often it's not a physical disk at all, but just some disk I/O allocated to you by your VMware host or your cloud provider. This is a very good example where we're seeing I/O wait times in the hundreds of milliseconds. That's a case where you need to get onto your cloud provider and say: you're giving me a crappy old disk, I've paid for SSD. You expect to see those I/O wait times at less than a millisecond, or if it's really, really busy, maybe one, two, three milliseconds — certainly not hundreds of milliseconds. Sorry, that was just a diversion to show you the graphing you get while we were waiting for — what were we waiting for?
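The connection-state graph being described is essentially a periodic sample of PostgreSQL's pg_stat_activity view; you can take the same reading by hand (the `state` column exists on PostgreSQL 9.2 and later):

```sql
-- Count connections by state: lots of 'idle' is healthy; many 'active'
-- or 'idle in transaction' rows at every sample suggests trouble.
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
```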
We were waiting for the covid19tr instance to come up. Did it? Let's see. Oh yes — okay, so we have two instances running. Now, about the process: we've done a number of server academies over the years, and a couple of people here have been on some of them, I think. Going through all of this and then looking at the detail of what's inside everything is typically a four- or five-day academy; it's not something one can easily go through and demonstrate over the course of an hour. It might be a good time to stop and take a couple of questions for five minutes, and then we'll move on and try to throw some COVID metadata on here. Yeah, Bob, maybe you want to start with the questions that are already in the chat. Okay, that's scary — I haven't been looking at the chat, I've just been talking. Thanks, Nick. Does it include a DB by default? Is it empty by default, or can we provide one that we have? No, it doesn't include a DB by default; it starts up with a blank database. It's quite easy to deploy a DB to it, but it's empty by default. How can we specify Tomcat settings? Okay, good — I wanted to go through where some of these files are anyway. So, what do you get? Let's go in; we haven't done this yet, so here's our chance to see how you go in and see what's going on inside these containers. I've got a lot of scripts which can run commands inside the containers to do various things, but the easiest way to do it interactively is just to run bash inside a container. You do something similar with Docker.
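Getting a shell inside a container is one command, directly analogous to `docker exec` (the container name here is illustrative):

```shell
# Interactive root shell inside the postgres container
lxc exec postgres -- bash

# Or run a one-off command without an interactive shell
lxc exec postgres -- systemctl status postgresql
```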
So let's go into our Postgres container: lxc exec into it and run bash, which drops me to an interactive shell. Something we've talked about quite a lot is how to make the configuration of the database as straightforward as possible. We can't really tune it automatically out of the box — I know you can sort of do it by making estimates from how much RAM you can see, and the like — but what I've done is just taken the standard settings from our DHIS2 implementation manual and put them in here, commented out with a little annotation. To get the thing tuned properly, you would probably just increase your shared_buffers, your work_mem, and your maintenance_work_mem. Most of the other settings you'd want to keep the same. Your max_connections will depend on how many instances you plan to run against it: we know we need at least 80 connections for each instance, so with those three instances running, I'd have to raise it, probably to about 250. As for where the files are: the files are inside each container, and because we've just used the standard Ubuntu packages, they'll be in the standard places you expect to find them. This next bit is maybe slightly odd. Normally people go in here and edit the postgresql.conf file directly. I've never been in favor of that, and I've always advised people against it, because it's very hard to keep track of what you've changed and what you haven't — unless you keep it all under version control, which is probably a good idea anyway. But you'll notice there's a little line at the bottom of the file that says: include everything you find in this directory, conf.d. So what I've done is group everything together, and all the configuration for your PostgreSQL you'll find in there.
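A tuning drop-in in conf.d along these lines might look like the following; the values mirror the sizing advice in the talk (at least 80 connections per instance, raised buffer and work memory) and are examples to adjust, not recommendations for your hardware, and the path is approximate:

```conf
# /etc/postgresql/<version>/main/conf.d/dhis2-tuning.conf
# Settings here override postgresql.conf because conf.d is included last.

shared_buffers = 1GB            # commonly ~25% of RAM on a dedicated DB host
work_mem = 20MB                 # per sort/hash operation; raise cautiously
maintenance_work_mem = 256MB    # helps index builds and vacuum

# At least 80 connections per DHIS2 instance; with three instances,
# something like 250 leaves headroom.
max_connections = 250
```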
This is something we've done for a number of years now in the community. The way Postgres configuration works, a setting is overridden by any later setting it finds, so basically whatever you've got in this file overrides everything sitting in your postgresql.conf. The place for doing your tuning is in here. So that's Postgres. Tomcat configuration and stuff like that: let's go out of Postgres and do the same thing to have a look inside one of our Tomcat containers — we've got one called covid19. A few things to note as we explore this container. One thing you'll see is that by default there's a firewall set up, so you can't access this container unless you're coming from the main host, 192.168.0.1, or from the proxy; the proxy obviously needs to be able to access the Tomcat. The configuration files I've not changed from the standard Ubuntu defaults of the install, so you're going to find /etc/default/tomcat9, and that's where you've got your JAVA_OPTS. I've not set anything in JAVA_OPTS here yet. Again, it could be done automatically, but I think we're doing enough auto-magic; it's probably better to set some of these things manually. But that's where you'd set your JAVA_OPTS. Your server.xml — the other important file people tend to go for — is sitting in there too. Nothing too interesting in it, I guess: I've turned off the shutdown port, which is a standard security measure everybody should be taking, and I've enabled the user database, which is for the Tomcat monitoring — to monitor the Tomcat you need a monitoring user configured, and I don't really have time to show you the setup for that, but it happens automatically. And your standard connector ports, your relaxedQueryChars, and things like that are in there. So that's where your server.xml lives.
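The two server.xml changes mentioned — disabling the shutdown port and relaxing query characters — look like this in stock Tomcat 9 syntax; the surrounding attributes and values are illustrative, not copied from the generated file:

```xml
<!-- Disable the shutdown port: a standard hardening step -->
<Server port="-1" shutdown="SHUTDOWN">
  <!-- ... -->
  <!-- HTTP connector for the proxy; relaxedQueryChars lets DHIS2's
       API filter syntax ([ and ]) through without escaping -->
  <Connector port="8080" protocol="HTTP/1.1"
             relaxedQueryChars="[]"
             connectionTimeout="20000" />
  <!-- ... -->
</Server>
```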
The other interesting thing is: where does your DHIS2 home live? You might have noticed that normally, in your JAVA_OPTS, you're going to have to specify your DHIS2_HOME environment variable, and we don't specify it anywhere. (John, on the Tomcat memory question, I'll come to that.) That's because if you don't specify DHIS2_HOME — if you just start up with no environment variable — DHIS2 has a default it falls back to. From a configuration perspective, it's best to just leave it alone and let it fall back to the default, which happens to be /opt/dhis2. So if you're looking for your DHIS2 home, that's where everything lives. And again, to try to encourage people to explore and use some common options, this file, dhis.conf, has already been set up. See, the database was already created for you with a reasonable password; you'll notice I use a username and a database name the same as the container name, which just makes life simple. What I've done here is actually copy the settings from the reference manual, because the idea of this is not to replace the reference manual, but to implement it. So we're implementing all of what we can find in the reference manual, and everything is in here for someone to go through and set how they might want to set it. The only change I've made is one setting I wanted turned on by default, because it's such a common thing; I think many performance problems are solved by having that on. So that's that file. Right, James, there are a couple of other files I could go through, like the proxy setup, but time is ticking a bit. John asked, on the Tomcat memory, whether we give memory to the container. Okay, that's quite a good question. Normally the most important thing you're going to set on your Tomcat container is the heap size.
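The generated dhis.conf in /opt/dhis2 will contain the JDBC settings; the key names below are the standard ones from the DHIS2 reference manual, with illustrative values following the container-name convention described in the talk:

```properties
# /opt/dhis2/dhis.conf — database connection for the covid19 instance
connection.dialect = org.hibernate.dialect.PostgreSQLDialect
connection.driver_class = org.postgresql.Driver
# 'postgres' here stands for the database container's address
connection.url = jdbc:postgresql://postgres/covid19
connection.username = covid19
connection.password = <generated at instance creation>
```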
You're going to have to take a stab at a guess at what heap size you need: it's -Xmx, and maybe we give it eight gig. So here you'd specify the heap size to use. The good thing about doing it there is that you can then watch it. Usually, when I set up for the first time in a production environment, I watch it very, very carefully for the first couple of days: did I give it too much heap, and it doesn't need that much, or did I give it too little? In terms of how much memory the container itself has: by default, if you don't do anything — and I haven't done anything — there are no per-container limits applied. They can be applied, though. For example, free -h tells me I've only got four gig on this machine, and I want to be careful that these containers aren't going to interfere with one another's memory. One of the things you can do is lxc config set on a container — say the one that's actually doing anything — and set limits.memory to make sure it can only use one gigabyte. Doing these things live, I'm getting the argument order wrong; I'll put this in the documentation rather than have a brain freeze trying to do it here. You can limit the memory of your individual containers, and you can also limit the number of CPUs that your individual containers have access to. That's typically something that, again, I don't do as part of the setup.
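Putting the two levels of memory control side by side — the JVM heap inside the container, and the LXD cap on the container itself (container name and sizes are illustrative):

```shell
# 1) JVM heap, in /etc/default/tomcat9 inside the instance container:
#    JAVA_OPTS="-Djava.awt.headless=true -Xmx2G"

# 2) Hard limits on the container, set from the host:
lxc config set covid19 limits.memory 3GB
lxc config set covid19 limits.cpu 2

# Inspect what's applied:
lxc config show covid19
```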
I'd rather first let everything use whatever it can find, then monitor it carefully over a day or two or three; after you've done that, you get a better idea of what you can and should restrict. Okay. Nick, should I quickly try to put some COVID-19 metadata on here? Sure — though I don't know if it'll work, it's not very well rehearsed, and as I said, my script got broken. At least I've kept it in history here. So let's try to set up the aggregate: basically I'm going to pull the aggregate metadata and just try to post it into our container there. Check this carefully — our container is called covid19, if I remember correctly. Oh no, wait, this is not the COVID-19 metadata; I'm going to pull some org units first. If you're bootstrapping a system like this, you're going to need to get your org units from somewhere. Most commonly you'll get them from your own HMIS system, which acts as a kind of de facto facility registry in a lot of jurisdictions; or you might have a proper facility registry somewhere and pull them from there. What I'm doing here is just pulling the org units from play — always a risky thing to do — and trying to push them. Oh, there we go, and that looks like lots of "created". Okay, it looks like we now have org units. I'm going to check that; let's make sure we're on the right instance, it's getting confusing. Okay, admin/district. There was a lot of fiddling around with these parameters, and a lot of discussion with Morten and others about whether we get the GIS stuff in or not, but I think this does get the GIS in. An easy way to check: we'll go to the Maps app and see if I can get boundaries. Add ourselves a layer and see what boundaries we have. It's thinking. What is it thinking about? It's loading the org unit hierarchy, I guess, kind of slowly.
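The org-unit bootstrap being demonstrated amounts to a metadata export from one instance and an import into another over the Web API; the URLs and credentials below are the public demo defaults and placeholders, and the version path may differ:

```shell
# Pull organisation units from the play server...
curl -s -u admin:district \
  "https://play.dhis2.org/2.33.2/api/metadata.json?organisationUnits=true" \
  -o orgunits.json

# ...and post them into the new instance
curl -s -u admin:district -H "Content-Type: application/json" \
  -d @orgunits.json "https://covid.example.org/covid19/api/metadata"
```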
Leave it thinking for a bit. Let's go back and try to get some metadata in. Okay, this one here will pull all the aggregate metadata and push it in. Whether you do this from the front end or the back end doesn't really matter; I think most of the guidance is about doing it from the front end. Obviously, if you're installing this system from scratch, you might as well curl the stuff in like I'm doing here. Let's try that and push in the aggregate metadata. Looks like it worked as well. Yeah. Okay, this didn't; we need to check there. It might be something Nick was telling me about: my admin user probably needs to be assigned to the root of the org unit hierarchy. Let's just have a quick look. It's a bit slow; this is running in a very, very tiny environment here. That's too slow. Why is the Maintenance app loading so slowly? I don't know, I need to check that. I can't see why that's not loading; we'd need to go and check in the logs. It worked when I did it yesterday. Other odd things are happening here: I'm getting "not secure". I don't know what that's about. There must be some resources on this page which are not coming in over HTTPS. Okay, I don't know, sorry. The importing of the COVID metadata and so on we still need to test. I tested it all yesterday and it worked well; obviously it needs to be tested some more. But I think the main point, as I said: these tools are not really about COVID-19 specifically. They're about setting up an environment where you've got reasonably strong security applied to each of your particular components. My Apache configuration took me about a week, going through all the CIS benchmarks and trying to fix whatever I could. Similarly, the Tomcat is reasonably securely set up. The aim is an out-of-the-box experience where, by default, you follow as much of the best practice as possible.
Let me stop there again and see if there are any questions, before I embarrass myself with more things that don't work. And I just want to say, anyone who hasn't typed a question in the chat can also use the raise-hand function that should be at the bottom of your screen, and we'll call on you if you have a question you want to ask live. Yeah, Tuzo has fixed my... thanks, Tuzo. This is the way you limit your memory: lxc config set postgres limits.memory equals... That's what I did wrong, I think. My bad. Okay, good, thanks. John, you might need to check your user. Yeah, I need to go to my user, because when I start again it won't be assigned to org units. How do I do that? Because I need to get to my Maintenance app to do that, and the Maintenance app is not coming up. I can edit the user, yeah, I can do that. This is great; everybody in the audience is telling me what I need to do. That's the kind of viewership that you want. There's something which is unhappy about this; something is not right. Okay, short of going to look at the log files when all else fails, let's just reboot the thing anyway. Okay, just restart our covid-19 instance. We'll wait a little bit again. When I did this yesterday, it was fine: I could go into the Maintenance app, I could see the user, I could see the org units, and then I was able to assign my user to the org unit. I had to wait a few minutes for that to come up. Has anybody asked me anything else or given me more advice? Yeah, assign access via the Users app. Thanks, Stephen, that's what I was trying to do. Does anyone want to ask me anything about the actual server setup while we wait for this? There are quite a few people on this call who've been using this way of doing it already. I know Stephen is a bit of an expert; so is Tuzo; Clemens too. John has raised a hand. Okay, John. Yeah, you need to give him access to speak; everyone is muted.
You need to learn about Zoom. Max is operating the Zoom; I think I'm just talking. I'm not the dictator at this meeting. Thanks. Bob, I see that most of these things are being done by the different scripts, right? And installed in different places. So are these things currently documented somewhere, or are you going to document them? I've been at it all week, and I'll carry on towards the end of the week. The thing with the likes of the Postgres container and the Apache container is that the files are all in their standard places, so in a sense that's quite useful: they should not clash with what you read in the official reference documentation for those packages. But the piece of documentation that I need to write now goes through all of the particular configuration files, where they are and how you access them. Yeah, that's fine. It's not completed. No, no, I'm not talking about that one. Especially for the scripts: instead of loading code, you can also load a restored database, right? So I guess there's a script for restoring the database. And do you have man help for the scripts, so we can just say man...? In the next couple of days that'll be there. Currently there are a couple of utility scripts that get installed into /usr/local/bin. I've got to make new man pages for them. An interesting one is dhis2-db-activity. This is a really useful diagnostic script; I was running it so much that I decided to just include it in the package. What it does, and a lot of you may have seen this before, is look in your pg_stat_activity table inside Postgres to find out what queries are actually running at any particular time, the queries which are not idle.
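The underlying check is essentially the standard pg_stat_activity query, run inside the database container; something like this (a sketch: the container name postgres and running psql as the postgres user are assumptions about this setup):

```shell
# List the non-idle queries currently running in Postgres
lxc exec postgres -- sudo -u postgres psql -c \
  "SELECT pid, usename, state, now() - query_start AS runtime, query
   FROM pg_stat_activity
   WHERE state <> 'idle';"
```

On a busy DHIS2 instance this quickly shows you which analytics or import queries are hogging the database; on a quiet demo box like this one it returns nothing.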
So running that now, you just type dhis2-db-activity. And it's not finding any active queries; it wouldn't, because the system is so quiet. But yeah, ideally what you want to have, and you will have, I've just got to write these things as part of the install: we should install the man pages with it like we had before, so you go man and it'll give you what you need. Yeah, just the syntax will be fine for now. Yeah, I mean the man pages don't have to be very long; a little synopsis of what the syntax is. But I've been focusing on ironing out all the little bugs around the installer, because it's installing quite a lot of stuff, and there are lots of little gotchas. I think I've ironed out most of them now, but it got broken again just two hours ago. As you know, I'm using Apache. There are some people who are fond of nginx; I'm not particularly, but Tuzo contributed a package, a container setup, for nginx. Unfortunately I didn't test it properly before committing, and so I found out two hours ago that it was breaking a few things. But yeah, that'll be easily fixed. So you could have the option of using either Apache or nginx, but of course that also requires more documentation. I'm going to keep on documenting my Apache, and I'm sure Tuzo will take on the responsibility of documenting the nginx version. My last question: can we use your script right now, or do you want us to wait a bit more? No, you can use it right now. Well, okay, you could have used it two hours ago; I'm going to just check again that I've fixed the stuff that got broken. But yeah, I have been using it; that's how I'm getting all the bugs ironed out of it. You can take a blank machine, if you remember what I put up here, follow those instructions carefully in the README, and it will work just like you've seen here. Great, thanks. Anything else? Are we over time? Any questions left in the chat?
Thanks a lot, Bob. Maybe you want to mention, someone asked: they tried this out this morning or yesterday on Debian and it didn't work. Yeah, currently all of these scripts have only been tested on Ubuntu 18.04. I have no intention of backporting to Ubuntu 16.04; I think anybody who's on that should probably consider moving forward. Ubuntu 20.04 is around the corner next month, so yeah, we'll test them and release a version for Ubuntu 20.04. It doesn't currently work out of the box on Debian. It would be much easier to make it work on Debian than it would be to make it work on Red Hat; that would be quite a different prospect. So if there's sufficient interest and demand for doing it on Debian, I'll certainly put in a little effort to make that work. But for the moment I think we've got enough to do getting everything really watertight on 18.04. I've targeted what I guess is the mode, what most people will probably use as a starting point.
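Since the scripts are only tested on Ubuntu 18.04, a guard like the following could sit at the top of your own wrapper script before you run anything (a hypothetical sketch, not part of the official tools; supported_release is a made-up helper name):

```shell
# Succeed only for the release the deployment scripts have been tested on
supported_release() {
    [ "$1" = "18.04" ]
}

# Read the running release from /etc/os-release (VERSION_ID holds e.g. "18.04")
release="$(. /etc/os-release && echo "$VERSION_ID")"
if supported_release "$release"; then
    echo "ok: tested platform ($release)"
else
    echo "warning: untested platform ($release), proceed with care" >&2
fi
```

A soft warning rather than a hard exit seems right here, since the scripts may well run on Debian or 20.04 with some tweaking; they just haven't been verified there.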