 ... we've been recorded. Michael is our security chief, if you like. I'll let him say a few words about what that means to him. He's joined us part-time; we don't have him full-time, we want to try to get him a bit fuller than we have, and we're working on that. But for the moment, he's with us part-time, primarily also working with the company that runs Opera, the browser. So, Michael, that's my brief intro of what I know, maybe you want to... Yeah, thank you once again, and a quick intro about myself. I think I've managed to talk to at least some of you, if not many. What you should probably know about me is that I've been doing security for the last 20 years. I'm an engineer by education and by primary interest, so I hope I'm technical enough to support you and answer some really technical questions. At the same time, I've been working a lot on the policy and process level of security. With all this complexity of issues, I think I can help translate the formal language of standards, legislative requirements, or anything related to compliance into security measures, and help reflect whatever happens on the practical security side back to the policy level as well. I've worked as an auditor and penetration tester, so I understand how attackers think and what the most typical issues and attack scenarios are. And I've worked quite a lot on the protection side, running systems at scale as a system administrator, security administrator, and security manager, so I also know a bit about how it works on the backend side. DHIS2 is still a new, exciting world, which I'm exploring more and more every day. 
So, there are a lot of things that I still don't know, but I hope I'm getting more and more on board, and I've learned a lot of things through this year. I also rely on your expertise and knowledge to provide some feedback, and I hope I will be useful in mitigating security issues or helping to prevent them in future. That's all from my side. Thanks, Michael. It would have been nice to go around for everyone to introduce themselves, but we've got 11 people on, so we won't do that. I can tell you, just looking at all the names, that what we have here is a smattering of people from around the world who are responsible for system administration in their various countries. And interestingly enough, we have Jermila joining us from HISP Western and Central Africa, who I think is the first, well, perhaps after HISP South Africa, the first security appointment that we've seen in a HISP group. So we've got a bit of a mixture of mostly server admin people and one or two security people. Tato, I don't know, are you on the security side specifically, or are you server admin in general? Yeah, Bob, I've moved to security, so that's my main focus. Yeah, it seems like there's a division now in what used to be just one team. Yeah, let's hope the factions get along. So, Michael, that's kind of who's here. I thought it would be interesting for them to hear a little from you about your thinking around DHIS2 and approaches to managing and installing it. And I also thought they might have some interesting questions to ask you regarding security practices. So maybe, Michael, let's start there. I mean, security in DHIS2 is obviously much, much more than just the backend system administration stuff, but the backend system administration stuff is important as well. How do you break that down in your head in terms of security, and what's the importance of system administration in that? Right. 
Okay, so to begin with, I think everything starts with the installation of the system. Once we have a product, we don't have a lot of control over what happens within the software; we fully rely on the core team to develop it. And once we get the WAR file, we are kind of left alone with what they have delivered. The overall responsibility then falls to the system administrators, or those who actually deploy the system. From a security perspective, you will be the ones who can either keep it running as securely as it is delivered, or help the team work around issues discovered during use. For example, say you have a default installation, you have followed the security manual or the hardening best-practice guide for your Linux system, and you have deployed DHIS2. Then you find out there is a vulnerability in the system that requires an upgrade, and the upgrade cannot be performed immediately. So you introduce a workaround. It means that the system administrators, the support team, are mostly the first line of defense: they can introduce a workaround or do some patching immediately while the developer team prepares the next version, or when an upgrade is not possible at all. So it is a crucial first line of defense that we have if the application fails, or may fail, in certain security aspects. Michael, you're kind of lucky, I suppose, coming in now, because if you'd looked in on this space maybe five or six years ago, some of the things being done would make your skin crawl a little bit. The most common setup of DHIS2 would generally not be using SSL. Very commonly, we'd be running on an IP address. And also quite commonly, Tomcat would be running as the root user. That was kind of the state of play. 
I guess in large part because system administrators were not experienced; often they were just the most technical person at hand who was called in to do the job. And our documentation, I think, was very weak in terms of implementation guidance. So I think you can say we've come a fair way from those days, in the sense that most systems you see now at least don't have those very fundamental errors. Part of the way we addressed that, as you'll know, was when I started making these installation management scripts. In fact, Stephen helped me out on a couple of those, didn't he? He's on the call. That was really a way of trying to inscribe some good practice into a normal installation, so that you'd get reasonable security settings by default. But one thing we thought about doing, and never did, was actually write down what security controls we are attempting to implement or comply with. And I think that's an area where you've got some thoughts: we could marry our implementation tooling with some kind of compliance or control list. Yeah, I would take one step back and say a bit about what has actually happened during the last years, maybe not very noticeably for all who do system administration and work with Linux systems and applications using Tomcat, Nginx, Apache, and other things like that. As you mentioned, a lot of the changes we had in the past are related to poor practices like running systems as root, or having full access to the configuration files from the web server, and so on. These are naturally part of the checklist that we have right now, the security checklist that we recommend as our best practice. 
But if you look at what has happened over the last years, I think we came from a chaotic, individual-driven approach to a kind of standardized environment, where the operating system, the applications, and the supporting software, daemons like network services and so on, all have a recommended way of doing things. I would like to invest a bit of time into that right now, because it is important to understand what has changed and how we can benefit from it. It is also tightly related to the topic of security automation, running automated deployments, and other things that we will hopefully discuss during this session. So let's take one very, very simple example, which is SSH. I will try to share my screen and show you some configuration, just to make it very actionable and easy to follow. I made some preparation for this talk, so let me find what we have. Oh, we're impressed, you made some preparation. Yeah, I did. Let me see. I will share my screen in a second. Okay. So I'm not seeing it. Not yet, just a second, I have too many tabs. Okay. Host disabled participant screen sharing. Okay, share my screen. That's much better. Right. So we have a host which is called security-dmarc.dhis2.org; I think you can see my screen now. This is a test machine, a machine for checking the DMARC records for our domain. It doesn't run the DHIS2 platform or anything related to the application itself, but it has the kind of default configuration that I would like to show. We'll start with SSH. So as the root user, I will go, do you see my screen? Yeah. Great. The standard folder for SSH configuration is /etc/ssh, and the standard server config file, as we all know, is /etc/ssh/sshd_config. This file has existed for ages. 
So this is the standard configuration. One of the typical things we do as part of hardening is disable password authentication. To do so, we configure the line PasswordAuthentication no; it is commented out by default. So if you'd like to disable plain-text password authentication and use only SSH keys, you just uncomment this line. If we don't do that, we are vulnerable to potential brute-forcing attacks, to password stealing, to someone reusing a shared password. And from a compliance perspective, it's an additional burden to ensure that your system has a password policy, and you have to maintain it. So using SSH keys is highly encouraged, and once you use SSH keys, this is how you configure it. If you had done this five years ago, you would do it exactly as written here: just uncomment the line and restart or reload the SSH server. The problem with this approach is that when you upgrade the version of SSH, whether through automatic upgrades or manually, the configuration file changes. You get a new version of the file, saved alongside yours, and in fact you have to maintain the difference between your file and the one shipped with the newer version of the daemon. If you have more changes, or if you'd like to introduce them with an automation tool, you literally need Puppet or Ansible, or any other tool of your choice, to edit this file in place and to ensure that your configuration does not interfere with other changes introduced in the file. This is a very simple example. 
Changing one line is pretty straightforward, but if you have a more complicated configuration, tracking these changes automatically and applying the configuration at scale can be quite troublesome, and the risk of error or misconfiguration increases. SSH also has a quite tricky policy with Match rules, and if you have Match rules and would like to apply different policies based on matching, it gets even more complicated. With OpenSSH up to version six, I think, or maybe even seven, you would definitely be doing this in this file. But one of the most important changes, which happened roughly a year ago, is the line that now appears here: an Include of snippet configurations. This is very similar to what Nginx and Apache have done for years, but in sshd it appeared only a year ago. So instead of making the change in the main file, we do something different: we take this rule, create a snippet file with the configuration, which will look like this, and then we edit sshd_config and return it to its original state, like this. Michael, can I ask a quick question? Sure, please. We do the same thing, and I have advocated it for years, with the Postgres configuration file. It's also just one big, long configuration file, and it's easy to see which bits you've changed. But the interesting thing with Postgres is that when you include a file, you do it at the end: with the Postgres configuration, it's the last setting that wins. It looks like with SSH the reverse is true. That's a good catch. In fact, before we do anything here, let's make one more observation. If we look here, indeed, we include all the custom settings first, and then we have the default ones configured in the file. Most of them, with very few exceptions, are disabled or commented out. 
In fact, the idea is that the default settings here should be good enough, and whatever you would like to change should go first, so that your settings override the defaults they offer. Typically, yes, you can edit the main file, but the whole concept is not to touch the main configuration file and to put all the configuration you want into the snippets. The flexibility of these snippets is that upgrades become much easier. You have a dedicated file that can be managed separately; you can just drop the file in when moving configuration from one server to another. It is much more reliable in terms of upgrades, conflicts, and changes made by automated tools editing files, and it simplifies the administration effort quite a lot. As Bob mentioned, typically this kind of include is done at the very end of the file, but for SSH they decided to go a different way: they let users introduce their changes first, and then apply the defaults after that. So is it the first setting that wins? Yes, it is the first setting that wins, and it will be applied first, as far as I remember. Potlaki is asking you something. If you have questions, I don't watch the chat, so I probably need to look into the chat as well. Potlaki, can you ask your question in the flesh? In the flesh. Thanks, Bob. Good afternoon, good morning, everyone. I just wanted to find out, can the defaults be overridden in the same way for password authentication? I think you were breaking up; at least I was not able to hear. I think you were doing better in the chat. What Potlaki was asking is, can it be done for all the other defaults? Yes, you can change them. For example, the port, to put it on a hidden or non-default port, of course you can do that. 
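As a sketch of the drop-in approach being demonstrated here (the snippet's file name and the exact options are illustrative, not taken from the session), the hardening settings go into a file under /etc/ssh/sshd_config.d/ rather than into the main sshd_config:

```
# /etc/ssh/sshd_config.d/99-hardening.conf  (illustrative name and contents)
# Read via the "Include /etc/ssh/sshd_config.d/*.conf" line at the top of
# sshd_config. sshd uses the FIRST value it obtains for each keyword, so
# settings here win over the defaults listed later in the main file.
PasswordAuthentication no
```

Because the first obtained value wins, the snippet only needs to state what differs from the defaults, and the main file stays untouched across package upgrades.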
You can do this with all the options that are commented out; those that are uncommented in the main configuration file will be used as they are. Then, once we do this, we would like to apply the changes. But another good practice that is highly encouraged, and that more and more services allow, Nginx for sure, sshd for sure, is to test your configuration before you actually apply it. We can do it like this. It says nothing, which means the configuration is valid. If, for example, we add another option here, a random option of my choice, and try to test, it will say there is a bad configuration option. This allows us to ensure that whatever we changed will be applied correctly. More and more tools have validation of their options, and it is quite essential to test whatever you changed before applying it; I think that's a good rule for all kinds of changes. With Nginx there is a config test option, the same for sshd, and many other tools allow this kind of checking as well. We revert back to the good configuration, test it once again, and then apply the changes. That's it. Back to the original topic: a lot of things have changed, and checking security settings is now much easier than before, especially when these settings are grouped and structured properly. We can benefit more and more from a standardized approach to configuration and to making changes when deploying systems. This is a major convenience that we have gained in the last years, and that's why, as we create and update our security tools, we will try to promote a good standard for structuring your configuration, making it easier to maintain and support later on. Michael, just a quick comment. I see that when you finished there, you did a reload of sshd. There's always this question: should you restart or reload? 
That's not just for SSH; it's also when you're making changes on the proxy or anything else. One of the interesting things about a reload, and I'm glad you did a reload there rather than a restart, is that it will also test the configuration. If the configuration is invalid, it simply won't load the new settings, but the service will remain up. Whereas if you'd restarted with an invalid setting, you could end up with your SSH down. Yeah, that's correct. I think up until very recent versions it didn't allow that; now, even if you restart, it will try to keep your current session. But lots of sysadmins in the past applied incorrect settings and got cut off from their systems. We explicitly use sshd -t to test, and then we do a reload, just to ensure we are doing everything properly. Actually, after that, as a best practice we should also go to the log file to confirm the configuration was successfully reloaded, even if we don't get any error message. That is perhaps more paranoid than strictly necessary, but I'm pretty sure quite a lot of you do this on a regular basis, and it's already part of your regular habits rather than something you would see as overhead. Oh, well, I wouldn't count on everybody always having the best of habits. But it's good starting with SSH. People who've been on the server academies that we run every now and again will know it's always a bit scary. I try to have a kind of barrier to entry, so that people are really familiar with SSH keys and such before they're allowed to show up. The way the world is, people show up anyway without that background, so we'd typically spend at least half a day talking about SSH. So yeah, it's a good place to start. 
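The test-then-reload habit discussed above can be wrapped in a small helper. This is a generic sketch, not part of any DHIS2 tooling; the function name is made up, and the service commands in the usage comment are the usual Ubuntu ones:

```shell
# safe_reload: run a service's own config check (e.g. "sshd -t", "nginx -t")
# and only reload when the check passes, so a typo never takes the service down.
safe_reload() {
  check_cmd="$1"
  reload_cmd="$2"
  if $check_cmd; then
    $reload_cmd
  else
    echo "configuration invalid; not reloading" >&2
    return 1
  fi
}

# Real usage (requires root) would look like:
#   safe_reload "sshd -t" "systemctl reload ssh"
#   safe_reload "nginx -t" "systemctl reload nginx"
```

The point is the ordering: the validation command runs first, and the reload only happens on success.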
I know Stephen's got a whole presentation about weird and wonderful things you can do with SSH as well. I mean, once you get past the basics, the importance of using SSH for tunneling and its relationship with SCP and things like that is good to cover. So, Michael, what would you suggest about automating that configuration of SSH? I've always been a little bit nervous about automating the configuration of SSH, because I've been locked out so many times in my life that I trust myself better doing the initial steps manually. Well, it depends. I would say that with the most recent versions of SSH, or of Linux distributions running those versions, we are in a pretty safe situation and it is quite reliable, even if we fail. However, I would say we should consider at least one or two major versions of the operating systems behind, to ensure that we still have compatibility and our advice applies not only to those running the most recent fancy Ubuntu 22, because I'm pretty sure some of you have Ubuntu 18 or even Ubuntu 16 or equivalent systems that may take a different approach. So we will always have this compatibility challenge. And I would agree that some things probably should be configured manually, or at least tested much more thoroughly than before, especially with a new version of Ansible or scripts that were not tested on very old systems. So yeah, it can be challenging, even with all the automation we have. Steven. Hi, Steven. Hi. Yeah, it might be worth adding, from an additional security perspective, that there's a tool out there called fail2ban. One of the things it can do is watch things like your SSH logs, and if people are repeatedly trying to get into your system with failed passwords and that kind of stuff, it will actually put a rule into the iptables of the machine to block those kinds of accesses. 
I try to make use of those kinds of things a lot, because you can have denial-of-service attacks and all kinds of other things which that tool can be used to block. So I encourage anybody to look at it. The flip side is that if people type their own passwords wrong too many times, or their key is wrong too many times, they can lock themselves out. So again, to Bob's point, you should always have some additional way to get into a system if you run into issues. But yeah, fail2ban can be useful. It can be useful in front of Nginx, against attacks on DHIS2 itself, as an additional tool to look at. Yeah, great comment, Stephen, thank you. I'm a great fan of fail2ban. At the same time, along with the issue that you can accidentally be locked out, I have faced a couple more interesting issues with it. If you, for example, install fail2ban on a public host, and there is a lot of malicious activity, someone trying to brute-force passwords at scale, and you don't have logrotate configured properly, which sometimes happens, the system writes a lot of logs. And if you don't have the /var partition properly sized and there is not enough space there, which also sometimes happens, you can simply run out of disk space from all the brute-force attempts, with fail2ban producing too much noise in the default configuration. This happens sometimes. Another story, a kind of unintentional, self-inflicted denial of service, is that if too many entries are blocked and fail2ban is maintaining a huge table of remote IPs that tried to connect, and the policies don't expire them in a timely manner, you run the risk of a significant slowdown, because the iptables table can get quite huge. For systems under high load, for example if you have a gateway for all these services, it may impact your performance badly. So this is another thing to consider. 
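The trade-offs just described, ban duration, retry tolerance, and the risk of the ban table growing, are all tuned in fail2ban's jail configuration. A minimal sketch of a jail.local follows; the values are illustrative, not recommendations from the session:

```
# /etc/fail2ban/jail.local  (illustrative values)
[DEFAULT]
bantime  = 1h     # how long a banned IP stays blocked
findtime = 10m    # window in which repeated failures are counted
maxretry = 5      # failures before a ban; set with lockout risk in mind

[sshd]
enabled = true    # watch the SSH auth log and ban brute-forcers
```

Short ban times keep the iptables rule set small on busy public hosts; pairing this with a working logrotate setup addresses the disk-space issue mentioned above.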
But otherwise, it's a really good tool to use. Yeah, agreed. Thanks, Michael. So moving beyond SSH, I mean, we could obviously spend the day on it; there are many, many aspects of configuration and good practice. But in terms of the way we currently recommend DHIS2 is installed on a system: you've had a go yourself, I think, at Tito's Ansible scripts. What's your prognosis there? Do you think it's a good way to go? Is it something you're planning to use for your reference implementation? And maybe tell us a little about what you mean by a security reference implementation. Yes. So, answering your question, the tools are great. A person like me, pressed for time, managed to install DHIS2 in the full configuration within roughly 14 minutes, with some minor questions answered by Tito, who happened to be in the same room at the same time by pure coincidence. It was very, very helpful. So the tools are really great, very versatile and flexible. And I think that by using them and trying them on different systems and platforms, we can probably work around the issues that we have faced or may face in the future. There is some work to be done, maybe to clean up the policies or to ensure that some corner cases are handled, but at least for a default setup, it worked pretty well. And one more detour here. I'll tell you a bit more about the reference setup and what we're trying to achieve with this implementation. Once we deploy DHIS2 using the scripts, it is considered a standard setup, one of the preferred or possible ways of installing the system. As we are not able to support all platforms and all types of installations, we have more or less concluded internally that we would like to maintain at least one validated and tested way of installing DHIS2, which is called the reference setup. 
And we will use this setup for all kinds of security assurance tasks, for penetration testing, and for trying to provide a configuration that is secure by default. It will be easy to test, so we call it a reference one. For this purpose, we will use DHIS2 with the Ansible setup. I can show how it works and what the whole goal is. There is a virtual machine that we support, deployed in the Oslo OpenStack cloud. It can also be deployed in AWS or any other cloud environment; it can be a physical machine as well. But part of the setup is to ensure that it can be deployed fully automatically using the tools, and at any time have safe enough defaults. We will test the security setup with scanners, static analysis tools, and dynamic testing against this machine, and ensure that it is secure enough. It's also a way to test in real life that the tools are working properly, and that we can recreate this configuration with Ansible at any time, provided we have a freshly installed default operating system. So, long story short, we have an Ubuntu machine, I think Ubuntu 20 or 22, and it uses a vanilla image: just the operating system installed with the minimum tools. Then I will probably show my screen again; let me find the script. So in fact, yeah, let me share my screen. Michael, Tito took us through the installation process last week. Yeah, I will skip the installation process related to the tools themselves, because that was covered before, but I will show the part that is not in scope of that, and we'll see if you have any feedback and comments. There are also lots of people here who weren't here last week. 
Yeah, I'm pretty sure Tito can add to or comment on what I did. What we discussed before was what happens once I have a fully running system; I will go one step back, share the screen, and show something different. Right. Everything is public. We have a separate GitHub organization called DHIS2 SRE, Site Reliability Engineers, where we put all the scripts for internal deployment, and we have a repository called DHIS2 Specimen. I can also send a link to the chat, if you'd like to explore it by yourself; I need to find the chat here. Right. In fact, we have two scripts. One is called user-data.sh, and this is a script that goes into the virtual machine configuration: it is stored in the metadata of the virtual machine. It does only two or three things: it updates the repositories, installs wget, and runs our bootstrap script by piping wget output into bash. That's the whole thing. Once we reboot or recreate this machine, it will trigger this script and the configuration will be downloaded. What happens next? We have a bootstrap script that actually performs all the necessary actions. This bash script uses Ansible to create the DHIS2 deployment. You are all aware of how to work with Ansible, so I'll just talk about the missing parts. Here we set up the full domain automatically; this is the standard, recommended way of getting and setting the hostname, and these are lines 10 to 15. Then we install the additional packages that are needed before you can launch Ansible. Then we configure the firewall. These things were missing, I think, from the default setup, so these are the necessary steps on a default system. Then we configure SSH in the same way as I explained a bit earlier. 
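The user-data script described above boils down to a few lines of cloud-init VM metadata. This is a sketch of the shape of it; the bootstrap URL is a placeholder, not the real repository path:

```
#!/bin/sh
# Cloud-init user-data sketch (the URL is a placeholder, not the actual repo).
# Step 1: refresh package lists.  Step 2: install wget.
# Step 3: fetch the bootstrap script and pipe it into bash.
apt-get update
apt-get install -y wget
wget -qO- https://example.invalid/bootstrap.sh | bash
```

Everything substantive (hostname, packages, firewall, SSH hardening, Ansible) then happens in the downloaded bootstrap script, so recreating the machine re-runs the whole chain.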
And this is not part of the original policies for the host machine, because all the configuration happens in a container environment. Then we install Ansible as recommended, download the server tools, and make some changes to the configuration. The only settings I have changed are the time zone, I prefer to use UTC everywhere because it's a standard for servers, the email and FQDN, and the OS version, which I set to the one we actually run rather than hard-coding it. That's pretty much all. There are some action items to be added, like IPv6 configuration if needed, and using the latest version of DHIS2 in the tools. Then we just deploy the playbook, the LXD setup, as prescribed by Tito in the manual. That's pretty much all. And we would like, not to guarantee, but to ensure that if you run this setup on any recent Ubuntu server, you will get the system from scratch without any problems. Thanks. I have a few comments I could make on it, but I'll take the small things up with you afterwards. We only have 10 minutes left, so instead of hearing from me, let's see if we've got any other questions from the participants that they want to put to you, about this or anything else. Stephen, I'll give you a go, but let me just see if anyone else has a hand up first. No? Stephen, off you go. Okay. I've been experimenting with the Ansible scripts, and they really look great for deploying DHIS2, either on a bare-metal machine or in a virtual machine or something. 
I was wondering, given that you're working on this reference security implementation, which also sounds fantastic, what the role is of the Docker images that Oslo provides for DHIS2. I'm sort of lazy and I make use of the Docker images a lot, not only because they're self-contained and I know they work, or I assume they work because Oslo is packaging them, but because I'm able to wrap them with things like Jolokia to watch JVM metrics, and I have Telegraf, and I use things like Consul to pass passwords and secrets securely in a lot of my deployments. So I guess it's two questions. One, are you going to do a reference implementation of a secure deployment of DHIS2 in a Docker image as well? And two, what do you see the role of the Docker images being, maybe even beyond development, in a more production-type setting? Okay. That's quite a big discussion question. I would say that we started with LXD containers because that was historically one of the first implementations after the manual setup. I wouldn't say it's simpler, but the setup with those containers was already in place, and as we wanted a kind of MVP for this reference setup, we decided to go with LXD containers first. I know we have Docker images, but I have not tried them myself yet, and probably the next step will be to deploy another, similar host with the Docker configuration. It's likely the next one on the agenda. Maybe we can use your experience, or include some extra services. But another part of it is that, for example, I would love to have Consul included for service discovery, it's a very useful thing for me, but it's a bit more custom, a bit more specific, than the product itself. 
And if we would like to make a secure setup, something for which we are not providing a guarantee but at least feel more confident about, it will require more work on our side to maintain the setup and give professional advice about it. So we are starting quite slowly with this. We would like to have more configurations, but at least for now we will stick to one version of Ubuntu and one setup with containers. And if we have a bit more time, we'll try to deploy it with Docker, or maybe in Kubernetes, and see how it works, test it, and provide some security advice for that, but as a second priority only.

Okay, well, when you do get around to that with Docker, looking at either Alpine or Slim or whatever you're currently using as your OS and doing a reference implementation in a Dockerfile, I'd love to help, because it's a whole other way of deploying things that is often easier to use in a pipeline for continuous integration and delivery, where you might not just want to put DHIS2 up, you might want to do other things. More and more, I think, DHIS2 is living in an ecosystem of other products, and it gets really complicated. I feel for you, because you're trying to solve things at the OS level so people have an easy way to install things, but then it's hard to figure out where to draw the line, you know. But Docker should be on the agenda for sure.

I'm very, very careful with giving any promises here, because we would like at least to get through a full cycle of deployment and review. We are still in the deployment phase; it's ready for testing, and we'd like to go through one round of testing first. And adding Docker will, from my perspective, also be a kind of stress for system administrators who haven't worked with it before.
And security in Docker requires quite a lot of knowledge of the platform itself and quite a lot of understanding of the things under the hood, which for me is the next level of complexity after containers.

I think Docker is in our future for sure. It's in our present, in fact. One of the issues we face, I guess, is the security management of those Docker images, and that's currently what holds us back from recommending people take those images and run them in production. I think you've seen that disclaimer on the website yourself when you download the image. The problem with images is that they go stale, and unless you've got a security management plan around them, you're potentially going to find yourself hit with zero-day vulnerabilities. I think we will probably reach a stage eventually where we can provide proper security management around the images. But for the moment, one of the benefits of using containerization, using the likes of LXD, for example, is that you can make use of the package manager. We could already do Alpine, of course, and that's actually a good idea: making some lightweight containers based on Alpine, just running Tomcat, makes good sense. But yeah, I think we have some work to do, and I think others have as well, to be honest. I saw something recently claiming that at least 50% or more of the Docker images available for download have some kind of critical security vulnerability in them, and that's a situation we want to avoid.

The thing about Docker images, though, is that even if an image has an SSH vulnerability, for example, and I don't know who would put SSHD in a Docker image, but maybe somebody does, you don't expose that to the outside world. You only expose the port that you need to expose. And the nice thing about Docker is that if you look at what most Dockerfiles invoke at runtime, it's almost nothing. It's just your code.
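The minimal image both speakers gesture at, a slim base running only Tomcat with a single exposed port, could be sketched as a Dockerfile like this. The base image tag and WAR path are illustrative assumptions:

```dockerfile
# Sketch only: the base tag and paths are illustrative assumptions.
FROM tomcat:9-jdk11-slim             # small base image, no SSHD, no extra daemons
# The container runs nothing but Tomcat plus the JVM plus your application.
COPY dhis.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080                          # only the port that actually needs exposing
CMD ["catalina.sh", "run"]
```

Even then, as the discussion notes, the base image and JVM still need a patching plan: a minimal image narrows the attack surface but does not stop its layers going stale.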
Tomcat plus the JVM plus your software, which is kind of the best possible scenario. But at the same time, you're right, you need to wrap it with things that make it all secure; otherwise you could run into other sets of issues. You need to keep your JVM up to date, in the case of Tomcat. Absolutely. Yeah.

Okay. Anything else? Anyone else want to get in the last three minutes of Michael's time before we let you get on with your day, Michael? People have been sitting quiet. Michael, I think you've stunned them all into silence.

No, really, I don't believe so. I'm sure we are impressed with what he has done, and I'll probably be looking at his script, actually, looking at it from the point where it starts with OpenStack. I've never deployed one yet, but we'd like to get a taste of every bit of it and see what is best for us. And there are a couple of comments that I've already made on Tito's deployment. I don't know if those have been taken care of in your own script, Michael, but probably Tito will share that, or he has already shared it with you.

Yeah, I haven't looked into the recent comments yet, but we'll follow up internally, because I also have some comments for Tito. So we'll make a session maybe next week, or after the holidays, and get into that for sure. And if you have any feedback, or if you would like to test some of the scripts, or you find some incompatibilities or improvements, both Tito, Bob and I will be extremely grateful if you share them, either on the Telegram channel or by submitting a change request. Everything will be counted in, and we can't prepare a really good setup without your help, because you are on the ground and you know it much better than us.

I like trying things, as Bob and Tito will tell you. So don't worry, I will try and give you feedback.

We know you, Gerald. Always good to have you sharing. Okay guys, I think it's 10 o'clock.
Thanks, Michael, for joining us. Feel free to join us every other Thursday as well. I'm not sure yet whether we're going to run next week or not; it may be getting a bit close to Christmas, but I'll check that with Tito and Alice and let you guys know shortly. What we are planning to do after Christmas, at least, is to change the time a little bit. We're going to go a little later, I think around two hours later than this. And we're going to open it up a bit. At the moment this is a little bit of a closed community that we just started out with for the first three sessions, so it's not very widely known, but it's going to be announced in the newsletter and we'll put something on the CoP. So we'll be talking to a much wider audience in January. But I'll let you know shortly whether we're going to do anything next week or not. Otherwise, thanks a lot, Michael. Thanks, everybody, for joining again this week. See you soon. Thank you. Bye. Okay, that's all, folks.