Thank you. So today we want to have a discussion around reverse proxies, and in particular nginx. I know in the past we've been using Apache2 to proxy, but that is another discussion; I know Bob likes the Apache2 proxy more than nginx. The tools that we are using, the Ansible tools, are leaning toward the nginx proxy rather than the Apache2 proxy. So we want to go deep and at least understand the structure and why we are using these proxies. Why don't we just access our Tomcat instances directly? Why should we have a proxy in our setup? More generally, what we normally have is a setup like the one you can see in this picture: we access our infrastructure through a proxy, and the proxy that we run is either Apache2 or nginx. On that proxy we open only HTTP ports 80 and 443, and then we block everything else at the host firewall level. Of course, you could also block other things with a perimeter firewall, maybe a physical firewall, but we want to make sure that even if you are in an environment where you don't have a physical firewall, your host at least allows the traffic that it needs and drops everything else. That is why, at the proxy level, we want to listen only on ports 80 and 443. Of course, the software that we are using is not limited to proxying: you can use it to serve static files, and if you have two, three or four instances that you want to load balance, that can also be achieved. You can use these tools as web servers (Apache2, for instance, is designed as a general web server), but you can do reverse proxying and load balancing as well.
In this diagram, notice that we have a proxy, and then we have two instances, normally running Tomcat with the attached web application (you can have one or two instances), and then of course the database and the monitoring bit, and in future we will be including integration components as well. This is just a standard install, but you can have even more middleware behind the proxy instance that you will be running. So why don't we just access Tomcat directly? I know you could run a single instance of Tomcat and deploy multiple WAR files, app1, app2, app3, as many as the number of web applications that you want, and Tomcat will route requests to those applications. But that means you run one single Tomcat instance, and on that single instance the requests are forwarded to the web apps depending on the base path of your URL. The way we set up our infrastructure, our application stack, is that we want to segregate these applications so that they run in somewhat virtual environments, or rather within containers. In these containers they have everything that they need: their own separate Tomcat instances with all their dependencies, each listening on port 8080, something like that. So if you have, say, two instances of DHIS2, you will be listening on a single sub-domain and forwarding to different applications, say using containers, and how will you route a request to the right application? That is one of the reasons why we have a proxy. I'm hearing background noise; maybe you can mute some of the people who are not talking.
I'm not supposed to talk. Yeah, I think it was Darjo, I just muted him. He's got many lady friends nearby. With a proxy in front of our setup, it enables us to match the apps that we want to match and route those requests to the respective applications that will be serving them. It gives us another advantage: logging. We can view the logs of all the requests that we are receiving on our server centrally, on one single proxy, the front end. Next is that whatever happens behind our proxy is not visible to the users who are accessing our infrastructure. They will be using one sub-domain, and the proxy routes traffic to the respective applications, and that is invisible to them. There's another scenario where you might be running two or three Tomcat instances. That's something we are even thinking of supporting with our DHIS2 tools: load balancing, so that you have two, three, four Tomcat instances, all of them writing to a single database. If you have a system that is very intense on the logic side, you might want to run maybe two or three instances and load balance traffic across them, so users would not even realize that they are accessing instance one, or two, or three. That is abstracted and invisible to them. It's because of the proxy that we have that ability, including the load balancing bit of it. Those are the advantages we are getting from adding the proxy into our map here.
Of course, there are other things, like security, that are improved by using a proxy, because sometimes we have endpoints that have a security vulnerability, and we want to block those endpoints before the actual fix is developed. At the proxy level we can match the exact URI and block that endpoint, something like that. And in most of our deployments we are using Let's Encrypt; in most cases we don't have purchased SSL certificates, but we have a sub-domain mapped to the server's public IP address, and we have to handle TLS/SSL ourselves. With Let's Encrypt, everything has to be automated, so that the way your certificates are renewed is automatic; you don't need to worry about certificate renewals in future. The certificates that you get with Let's Encrypt are valid for a limited period, normally three months, and you want renewal to be seamless, so that you don't have to come to your server and do the renewal yourself. There are open-source modules that have been developed to enable that with the nginx proxy, or even the Apache2 proxy. Now assume you're not using a proxy and you want to access your Tomcat directly: you would have a difficult time figuring out how to handle and automate this Let's Encrypt stuff. If you have a proxy in front of your application stack, it's going to be very easy for you, and by default the install script focuses on using Let's Encrypt rather than you bringing your own certificate. Of course, using your own certificate is supported as standard; you can switch to that.
And then there is geolocation: there's a module, at least for nginx, and I guess Apache2 also has one, with which you can limit the users that access your system based on their geographical location. That also reduces the attack surface from those attempting to break into the system, because you limit where you want requests to your web server to come from. So that is also possible on an Apache2 or nginx proxy: you can say, I want all the requests that hit my server to come from this particular location, and it will deny all the others from different locations. Now, nginx configuration is structured into contexts: we have the main context, which is the main configuration file, and then we have the http context or section, and then the stream and events sections. The section that we tweak every now and then is http, but you also have the stream and events sections. The http section has child contexts, like the server context, and within the server context we have the location context. So it's a tree-like structure, where you have the root, and within the root you have a child, and then a child of that child, something like that. Variables that are defined in the main section are inherited into the http section, and within the http section, the variables and directives that you define there are inherited into the server section, and so forth. We are going to have a look at these sections and what this configuration structure looks like. We can start with the http context, because that is normally where we focus and what we use mostly, and within that http context we have the server section.
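A minimal sketch of that tree-like structure, assuming a typical Debian-style nginx.conf; the values and domain here are illustrative, not our exact install:

```nginx
# Main (root) context: directives here apply to all of nginx
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    # Directives here are inherited by every server block below
    access_log /var/log/nginx/access.log;

    server {
        # One virtual host
        listen 80;
        server_name dhis2.example.com;

        location / {
            # Matches requests and decides how they are handled
            return 404;
        }
    }
}
```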
In the server section we get to define virtual hosts: you can have one, two or many server sections within the http context. Sometimes you might use one nginx proxy to serve two, three, four or more virtual hosts, and these virtual hosts are not necessarily related. You could have one virtual host or sub-domain focused on, say, the DHIS2 application stack, and another one serving or matching some other application. So you are not restricted; you can use this proxy to do many other things, and that is achieved with the server section, and then upstream. As much as we are talking about this as a proxy, it can do other things too: it can serve static files, and it can even serve dynamic sites like PHP and even Python sites. But mainly we are using it as a proxy: we receive requests and use it as an interface that routes requests to various backend applications. For that to be possible, there is a section called upstream, the upstream section, and then there is the location section: the location context matches the requests that you receive and passes them to the application that you want to reach. So the http context has a server context, and it looks like the code snippet that we see here: you have the http context, within it the server context, and within that server context we have the location context, as you can see on this slide.
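A hedged sketch of the upstream/location pattern being described; the upstream name, port and path here are just illustrative:

```nginx
http {
    # Named group of backend servers; adding more entries here
    # gives you load balancing across them
    upstream tomcat_backend {
        server 127.0.0.1:8080;
        # server 127.0.0.1:8081;  # a second instance would be balanced in
    }

    server {
        listen 80;
        server_name dhis2.example.com;

        # Match requests under /app and hand them to the backend group
        location /app {
            proxy_pass http://tomcat_backend;
        }
    }
}
```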
So we have the server up there, and the first server block is listening on HTTP port 80, but normally we don't want users to access any content over plain HTTP on port 80, because it's insecure. So every request that hits port 80 is redirected to 443, which is secure; it adds TLS/SSL encryption on top of your implementation. Within the server context we have the location, and this is where routing to your backend applications happens. Okay, another thing that the proxy gives you is logging. You can inspect your logs to see where your users are coming from, to see their public IP addresses, and if you have errors you can check the error logs to see what the problem is exactly. You get these by default: if you just install your proxy, logging is done into the /var/log/nginx directory, and within that directory you have error logs and access logs. That is where you look when you are troubleshooting; if you have a problem, you dig into those logs and locate where the problem is. So you have error logs and access logs. Of course, you can have more: you can tweak your nginx server context configuration so that your error logs and access logs are written to a different file. This is the default, but you can decide that for a particular sub-domain or server context you want to write to a different file, so that if you have many sub-domains and you want to focus on one, you configure your logging so that for that context you write to a separate file, not the default, because the default is going to mix in as many contexts as you have in your installation.
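A rough sketch of both ideas just mentioned, the port-80 redirect and per-virtual-host log files; the domain and file names are made up for illustration:

```nginx
server {
    # Catch-all for plain HTTP: send everything to HTTPS
    listen 80;
    server_name dhis2.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name dhis2.example.com;
    # ssl_certificate / ssl_certificate_key would go here

    # Per-vhost logs instead of the shared defaults in /var/log/nginx/
    access_log /var/log/nginx/dhis2.access.log;
    error_log  /var/log/nginx/dhis2.error.log;
}
```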
Yeah, so after these quick slides we want to check what we normally have in our standard install and see how the same things we are talking about are implemented in our automated install. I will head back to the terminal and access one of the servers that we have already set up. Okay, I'm logged in already. This is the server I wanted to access, I think. Yes. So this server was set up with the standard DHIS2 tools with Ansible, and it runs a couple of containers, the ones that we are seeing here. The first two are running Tomcat, then this one is running Munin monitoring, and then Postgres and the proxy. We are focusing today on the proxy, because this is where requests normally hit whenever you access the DHIS2 instance. So for you to check these configs, you need to exec into this proxy container and then go to the standard directory where we normally have the nginx configuration. And normally, when you run the tools, you might sometimes want to tweak your nginx configuration to suit more special installation requirements. Sometimes you want to say that anything that hits the root path is redirected to a landing page, because the standard installs that we have right now do not have a landing page; we are just focusing on the DHIS2 apps. Let me demonstrate by just opening our browser on the DHIS2 instance. As you can see, this is a DHIS2 instance, a web application, and it also has an HMIS instance, so you can see /dhis and /hmis. The /hmis endpoint also resolves to a DHIS2 instance, and there's Munin as well, and then the bare root endpoint.
So the tools will set up the application instances, the DHIS2 instances, and the monitoring for the corresponding instances. But normally you want to come back and set passwords for these tools; you don't want to leave them open to the public. We demonstrated before how you can set passwords for Munin. For Glowroot it's pretty straightforward: you can go to Administration and add another user, make it an administrator, and delete the default anonymous user that comes with the installation. That is pretty straightforward, and for Glowroot we demonstrated last time how you can do that. So what happens when I access the root path? You see, we are getting an empty response. Sometimes this is scary; you might think that the install is not working. But this is something that the tools are doing deliberately: the root location is matched, and because the request doesn't match any of the applications deployed here, it returns an empty response, which is normally a 404. But sometimes you might have a landing page and want to edit your proxy configuration so that all requests to the root are redirected to that landing page, or you could even decide that anything that goes to the root is redirected to the production instance of your DHIS2. So that means you need to understand the configuration syntax of the proxy, where you are going to touch, things like those. All of them are within this proxy container, and to get into that container you can exec into it, docker exec into the proxy. So this is a bash prompt, and we are sitting in this container. Now, if you just check which ports are listening here, you can use ss: you see we have port 80, we have port 443, we have port 8443.
I added 8443 for demonstration purposes; normally, on a standard install, we have ports 80 and 443, that is, HTTP and HTTPS. And of course 4949, if you want to enable Munin monitoring, at least for this container. And not everything here is accessible from the Internet; not all of these port openings are reachable from outside, because they are also restricted at the firewall level. ufw status will show us. As you can see, we don't want everything to be accessible from the Internet: we want 80 and 443, and for Munin we only want access from the monitoring instance. You can see here that the Munin rule is restricted to the monitoring container's IP address, so access is only opened from that instance. And this one, 8443, I added for demonstration purposes. Of course, both IPv4 and IPv6 are supported; if you have IPv6, these are the rules that correspond to that. So the standard install puts the configuration files for the nginx proxy into /etc/nginx. This is where you normally have the configuration files, and the main configuration file is normally nginx.conf. I have added another context, the stream context, as you can see here. The default contexts that you get by just installing nginx are events and http.
And of course, these two contexts, events and http, sit at the root, the main context, which is where you define other things like the user. You see, user here is not specific to a particular http context or any other context; it belongs to the whole nginx setup, and the same goes for worker_processes. The worker_processes here is set to auto, and normally it takes the number of CPU cores that you have on your environment, so that if you have, say, four CPU cores, four worker processes are spawned. You can check: lscpu, how many CPU cores do we have here? We have one CPU core. And how many nginx worker processes do we have? ps aux, grep nginx: we have only one worker process, because we have one CPU core here. If we had two CPU cores, then the nginx worker processes would be two. I have another nginx install somewhere; let's get to that instance and check how many CPU cores we have. lscpu: we have four CPU cores, as you can see here. And how many nginx worker processes do we have? As you can see, we have four worker processes. These worker processes are spawned by the master process here: it creates the worker processes, which run as the www-data user, and that user is something you have control over. You can decide which user you want your nginx workers to run as, with the user directive, the first directive there. Nginx also allows you to include other configuration files, by specifying where they are located.
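A hedged sketch of the main-context directives just discussed; the values are illustrative defaults, not necessarily our exact install:

```nginx
# Main context: these apply to the whole nginx instance, not one vhost
user www-data;            # worker processes run as this unprivileged user
worker_processes auto;    # spawn one worker per CPU core

events {
    worker_connections 768;
}
```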
And you can use a relative path or an absolute path. As you can see here, they have used an absolute path, but we could as well use a relative path to accomplish the same thing, and you will see in just a moment where a relative path is used. So this main configuration file is loading other configuration files, and we don't yet know what contexts those configuration files contain; we can explore that on this call. The standard DHIS2 tools that we normally use push their configurations into the conf.d directory. But you could instead have them in sites-available and then create a symlink in sites-enabled; that's up to you, and that is actually the preferred way of doing things. Because maybe your configurations have errors and you want to test them before they take effect, it's better if you only have to create a symlink, so that if there is a problem you just delete that symlink, and whatever is currently running is not affected. So the config files that are loaded are within this directory. As you can see, we have this configuration file here, and it is loaded; it's actually included with this include directive, which includes everything that ends with .conf within the directory. This is the directory that we are sitting in right now, and there are files here. This one ends with .conf, so it is going to be loaded whenever your nginx is started or reloaded; whatever is in this file will be loaded as well. So what do we have here? Let's check. As you can see, we have server directives, server contexts, and there are two of them, not just one.
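The two include styles just mentioned, sketched side by side; the paths follow the common Debian layout and are illustrative:

```nginx
http {
    # Absolute path: load every *.conf dropped into conf.d
    include /etc/nginx/conf.d/*.conf;

    # Relative path (resolved against the nginx prefix, /etc/nginx).
    # In the sites-available/sites-enabled pattern, sites-enabled
    # holds only symlinks pointing at files in sites-available,
    # so disabling a broken site is just deleting its symlink.
    include sites-enabled/*;
}
```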
We have this context here, up to this point. A context is normally opened with a curly brace and closed with another curly brace. So this is the first server context, and the next one is this block here, as you can see. So why do we have two server contexts within this file? The first one is listening on HTTP port 80, but all it does is receive everything that arrives on port 80 and return a permanent redirect to HTTPS, as you can see. So even if I open an incognito window here and deliberately access the site over http://, it is redirected back to https. That happens at the proxy level, and that is what this server block is doing: it receives everything on port 80 and forces it back to HTTPS on 443. Of course, if you didn't have this next block here, this page would not have loaded; the first block just redirects everything to HTTPS on 443, and when that happens, this second block captures it. And in this block, many things happen. It is not just listening on 443: it enforces SSL/TLS encryption, and it matches your server name. Of course, even in the HTTP block your server name is matched. In nginx, the server block is a child of the http block, and that means your nginx proxy is working at layer 7 of the OSI model.
It matches even the HTTP headers, the Host header, and that is why it has the ability to match this domain. You could have another domain, say dhis2.example.com, and you want it to be doing something else; nginx, or any other proxy, has the ability to match those hosts so that you can serve many applications within a single proxy. So it matches the server name and then redirects to HTTPS, and in the HTTPS block it enforces encryption with TLS. You can either use this directive here, listen 443 ssl on one line, or, if you don't want it on the same line, you can say listen 443 and then, I think, ssl on, something like that. After that, well, the server name is there, and you want to specify where you have your SSL certificate and key, because if you enforce TLS on this server block, nginx will try to read your ssl_certificate directive. If you didn't have these two directives here, you would get errors; your nginx would not even come up, and you can test that. Let's just comment out this line and run nginx -t, and as you can see, we have errors. If you tried to restart right now, it would not come up, and that is why you are normally advised, when you apply configuration changes, don't rush to restart. You should reload and see what happens, because the reload keeps the already-existing connections: it keeps nginx as it is and tries to apply your configuration while keeping the existing connections. But when you restart, it stops nginx and then tries to start with the configuration changes you have applied, and if they are wrong, if they have syntax errors, then your proxy goes down, and if you have other sites being served, it's going to be a problem.
So you are normally advised to reload instead of restarting. And as you can see here, we have errors because we just commented out a line in this block, the ssl_certificate directive. The ssl setting there implies that you should have your certificate somewhere, and this one is even generated with Let's Encrypt, as you can see. This is another advantage of using a proxy, because if we were hitting our Tomcat instance directly, how would we deal with this Let's Encrypt stuff? Then another line that you can see here (let me just turn on line numbers so that it's easy to demonstrate), line number 16: here you define the TLS protocols that you want to support. There are other protocols that are filtered out here, like SSLv3, TLS 1.0 and TLS 1.1. We drop those protocols because they are known for their vulnerabilities; it's encouraged that you support only the more recent TLS protocols, which are TLS 1.2 and 1.3. And there is the cipher suite down here, at line number 34, as you can see. The cipher configuration on our standard install is quite strict, and if you have old browsers, or old clients that support only old protocols and ciphers, then you're going to run into some problems: you would be locked out. If you have other old clients, like old Android devices, and this stays as strict as it is right now, those clients would have trouble connecting to this DHIS2 install. Most of the things that you see here, from line number 14 to line number 20, just relate to SSL/TLS encryption. There are other nginx directives that we set for particular purposes, like keepalive_timeout and other things, directives we could spend days talking about, but they are very well documented on the nginx site, like here.
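Putting the TLS pieces just described together, a hedged sketch; the certificate paths and cipher string are illustrative placeholders, not our exact install:

```nginx
server {
    listen 443 ssl;
    server_name dhis2.example.com;

    # Without these two directives, nginx -t fails and nginx won't start
    ssl_certificate     /etc/letsencrypt/live/dhis2.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dhis2.example.com/privkey.pem;

    # Drop SSLv3 / TLS 1.0 / TLS 1.1; keep only modern protocols
    ssl_protocols TLSv1.2 TLSv1.3;

    # A strict cipher list; very old clients may fail to connect
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
}
```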
If you go to this site, all these configuration directives are documented. You can click each one and see what it does, why it's there, and what the defaults are, like this one here where the default is shown. It's thoroughly documented, and whenever you have doubts about a certain configuration, you can go and check what it does. There are so many, and every one of them is documented. You can just copy something like client_max_body_size and try searching here. This is the nginx HTTP proxy module, but you can go back and see what we have for that directive in the http core module. All the configuration blocks are in this documentation: you can go to the upstream module and see all the directives available for upstream configurations, you can go to keepalive and check all the things you can tweak, and learn what really happens with those configuration blocks. So it's properly documented, as I mentioned. The things that we have here are improving performance for our install, but in this file you don't see the location configuration, the routing to the various applications, because they are included from these files here. As you can see, the location configurations are sourced from another file or directory, and notice that here a relative path is used. That means the config should be somewhere within here, under upstream. And as you can see, upstream is here, and under it we now have the specific location configurations. You can check what we have for the Glowroot upstream, for example. Since we have two instances, we have two Glowroot entries: one for the HMIS Glowroot and another for the DHIS Glowroot.
So nginx will match this endpoint, /dhis-glowroot, or /hmis-glowroot, and forward your request to the right application, the right Glowroot install. This is why it's very important for us to run a proxy. If we didn't have this in place, we would have to figure out how to expose port 4000 to the outside world, and not just one port, because we have two Glowroot instances here, and they are both listening on port 4000, at least within their containers. How would we expose these two instances to the outside world when they listen on the same port? We would have to expose two different ports to access the two instances, and the same with the DHIS2 instances sitting behind: we would have to figure out how to expose 8080 for HMIS and 8080 for DHIS2. But the proxy is simplifying things for us; we don't need to worry about that. Nginx, or Apache2, just routes requests based on what it matches here. How are the requests being matched? Initially I demonstrated that when you hit the root domain here, when you don't specify the app that you want to access, you get an empty response. Why is that? It's because of this very last configuration: if, after including all the location configurations, the request matches nothing, then the root location returns 404, so you get to see this empty response. That is the reason. But maybe you have a landing page, and you want the root to serve that landing page, so that users just click HMIS to be redirected to the HMIS endpoint or app.
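A hedged sketch of the kind of location matching being described; the paths and container hostnames are illustrative (Glowroot's default listen port is 4000):

```nginx
# Each app gets its own path prefix on the one public domain
location /dhis-glowroot {
    proxy_pass http://dhis:4000;   # Glowroot inside the "dhis" container
}

location /hmis-glowroot {
    proxy_pass http://hmis:4000;   # same port, different container
}

# Anything that matches no other location: the empty 404 response
location / {
    return 404;
}
```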
Or DHIS2, so that they can get to the DHIS2 application, and then monitoring, so that they can get to the Munin instance. So you want that nice page that gives users the ability to just click through to wherever they want to go. You can implement that on this location block: you say location /, and then you put a root pointing to where you have your site, say in the /var/www directory, where you have your index file and your static files. Something like that, and it's going to serve your static files. Sometimes you might decide that you don't have a landing page, but you want users to be redirected to the DHIS2 instance, maybe the production instance, whenever they don't specify where they want to go. You can just uncomment this and change it to point at your DHIS2 instance, and that means requests to the root location will be rewritten to the DHIS2 endpoint. After changing your configurations, normally you want to check whether there are any syntax errors, like this. And of course, there is an error. So if I had just rushed and restarted, that would stop our nginx and it would not start again, and if we had other applications being served here, we would land in problems. The problem here is that we didn't close this directive; normally you have to close it with a semicolon. And again, we have this "is not defined" error, so we should check, and it even shows you which line number has the problem. We can edit it again and turn on line numbers.
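Two hedged variants of the root location just described; the paths are illustrative:

```nginx
# Variant 1: serve a static landing page from the root path
location / {
    root  /var/www/html;    # directory holding the landing page
    index index.html;       # static entry file served at /
}

# Variant 2: no landing page; send the root straight to the
# production DHIS2 instance instead
# location = / {
#     return 301 /dhis;
# }
```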
It said line number eight or something, somewhere there. So somewhere here, this is what we commented last time and didn't uncomment, so you need to ensure that your SSL cert path is defined, and then test your configuration once more. And it's passing. Of course, you can use the other, long format of this: service nginx... is it configtest or test config? nginx configtest, yeah, configtest. This should show you that the configurations are okay. This is the long format of nginx -t. And if your configurations are failing — we can make it fail using the long format by just commenting this out — you will run it and see it's failing. So before you do anything, before you reload your nginx configuration, you need to test it and make sure that there are no syntax errors or any other errors.

Can I make a small point there? People often ask the difference between doing a restart and a reload when they make a change to their nginx configuration, or even their Apache configuration. If you do a restart, it will stop the proxy, and then it's going to try to load the new configuration. And if there's an error, then it won't start. So you're going to end up with downtime. So it's always safer to reload rather than restart, because if you reload, it will test the new configuration, and if the new configuration won't load, I think nginx should continue to run. Maybe you can test that quickly now: make an error and do a reload. Exactly, let's demonstrate that; this is a useful thing for people to be aware of. So we edit this and introduce an error — let's make it a syntax error by removing that closing semicolon. Then we write that, and we test first: it's going to fail. It's failing. We say systemctl restart nginx. So it has stopped.
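The safe workflow being demonstrated could be summarized like this. It assumes a systemd-managed install where the service is named nginx:

```shell
# Check the configuration for syntax errors first
nginx -t
# Long form of the same check
service nginx configtest

# Safe: only apply the new configuration if the test passes;
# on a reload, nginx keeps running the old configuration if the new one is broken
nginx -t && systemctl reload nginx

# Risky: a restart stops nginx first, so a broken configuration
# leaves the proxy down
systemctl restart nginx
```

The key difference: reload keeps the running workers alive while the new configuration is validated; restart tears everything down before validation.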
If you check right now, nothing is going to be listening on port 80, as you can see, and nothing is listening on 443. And if I try accessing the site here, it's no longer accessible, because, as Bob mentioned, your nginx instance was stopped, then it tried to apply the configuration that has the syntax error, and it failed to come up. So what if you had other sites that you were serving? Remember, maybe this was not the only site on this proxy. That would render all the other sites inaccessible. But now let's demonstrate using reload instead, by editing... Maybe you can comment on that before we proceed? No, I just thought you were going to do the reload first, because you have to fix it before you can. Yeah. So with the reload we have an error like this. Let's just restart nginx; we have the site, it should now be accessible. Yes. And let's again introduce that syntax error, and then, instead of restarting, we reload. Yes, there are errors here. But is our site still accessible? Yes.

The moral of the story is: always reload. Only restart if the reload doesn't work, and use restart very cautiously, because you can end up with your site being down because of an error. The other thing, of course, as you say, is to just run nginx -t so that it verifies your configuration before you go forward with a reload. Now, if you do a reload, the new configuration that includes the errors will just be ignored, and nginx continues with what was running. It will try to load the new one, but if it fails, it will just carry on with the old one. It's a bit like Postgres, like Sam says; there are probably some nginx options that require a restart, like if you change the server port, maybe, and things like that.
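The "check what is listening" step mentioned above could be done like this, assuming the ss tool from iproute2 is available:

```shell
# Show TCP listeners on the HTTP/HTTPS ports; after a failed restart
# of nginx this should print nothing for 80 and 443
ss -tln '( sport = :80 or sport = :443 )'

# Or check the service state directly
systemctl status nginx
```

If both ports are absent from the output, the proxy is down and every site it serves is unreachable.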
I think for most options, the reload will do — 90%. Okay. So, yes, most of what we've talked about here is nginx, or this proxy, working on layer 7, where it matches the server names and routes requests based on that. But it can also work on layer 4, so that you have the ability to listen on certain ports; maybe your firewall is only opening 80 and 443, but you can even redirect anything that goes to a given port to the SSH port if you want. That is purely layer 4: it doesn't inspect your HTTP headers; it just receives traffic on whatever port you expose and then redirects everything. And that is actually the stream configuration. Let's go back to the main nginx configuration here; as you can see, this is the stream configuration, where I want to listen on port 8443 and then pass whatever I'm receiving to localhost port 22, something like this. Right now we are actually implementing this not at the nginx level but in iptables, as you can see, with these port-forwarding rules here.

I'm sorry, I'm going to have to leave you, but you guys please carry on. It's a shame we don't have more time, because then I could explain all the reasons why Apache is better than nginx. And talk about the other features only being available in NGINX Plus — everything that you want, you are told to upgrade to Plus, and that is a downside of nginx. Anyway, thanks, I'm going to leave you to it. You guys please carry on.

Okay, so yeah, something that I wanted to talk about here is nginx working on layer 4, and you can achieve that like we have done here.
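The layer-4 stream configuration described above could look like this sketch, matching the listen-on-8443, forward-to-SSH example from the session:

```nginx
# Layer-4 (stream) proxying: no HTTP parsing, just raw TCP forwarding.
# The stream block lives at the top level of nginx.conf, outside http {}.
stream {
    server {
        listen 8443;                 # port exposed to the outside world
        proxy_pass 127.0.0.1:22;     # forward the raw TCP stream to local sshd
    }
}
```

Because nothing here inspects HTTP headers, the same mechanism works for any TCP protocol, which is why it can tunnel SSH through a port the firewall already allows.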
In this case, anything that hits port 80 is redirected to our container — which is the proxy, really — on port 80, and anything that hits 443 is redirected to our container on 443, as you can see. So iptables -L shows that the container we are redirecting our requests to is this .2.2; as you can see, it's actually .2.2. That means this is currently implemented with the iptables netfilter kernel modules of Linux, but we could achieve the same on nginx by leveraging the stream context. Yes.

I think we are actually on top of the hour and can summarize and maybe take a few questions. Yes, I'm still here. I see Sam had asked; he'd love to hear those. What are we talking about? So you're talking about the nginx versus Apache thing. Yeah. I think nginx works really great for simple use, like the way most of us are using it. But, for example, if you want to get into doing load balancing: a lot of people are enthusiastic about running a cluster of Tomcats with Redis, and then they have to consider how to configure the reverse proxy to deal with the upstream. nginx has some important features for doing health checks on the backend. So let's say you've got four Tomcats, and you want to know if one of the four is in trouble, if it's down. You need to do a health check on it to know: is this one ready to send requests to? There is an option on nginx to do that — I can list it here — but when you read the documentation for it, you'll see that option is on the commercial version. Whereas you can do it, of course, on Apache for free. So I think nginx for basic scenarios, like how we mostly use it, is fine; but increasingly, for things like load balancing, the free version has some limitations.
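The iptables port forwarding described above could be sketched like this. The container address 172.18.2.2 is an assumption standing in for the ".2.2" address mentioned:

```shell
# DNAT anything hitting the host on 80/443 to the proxy container
iptables -t nat -A PREROUTING -p tcp --dport 80 \
         -j DNAT --to-destination 172.18.2.2:80
iptables -t nat -A PREROUTING -p tcp --dport 443 \
         -j DNAT --to-destination 172.18.2.2:443

# List the NAT rules to confirm which container traffic is redirected to
iptables -t nat -L PREROUTING -n
```

This happens in the kernel's netfilter layer before any userspace proxy sees the traffic, which is why the same effect can alternatively be achieved with nginx's stream context.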
There are some other things as well: I think the default status page you get with Apache gives you more information; the status page on nginx is a little bit limited. Again, there might be more that you get when you use the commercial version; I'm not sure.

And about that, yeah: we've had, in the past, a setup where we were using nginx as the front proxy, and on the backend we had Windows, Windows Power BI, and we wanted all the requests that hit the nginx proxy on some subdomain proxy-passed to that. Now, the Windows server supports several authentication protocols, like NTLM and basic. We wanted to enforce only NTLM authentication, not basic, and we realized that open source nginx does not support NTLM authentication; that feature is provided with NGINX Plus, which is very expensive. And we ended up using another proxy software, HAProxy, because it had the ability to proxy-pass NTLM-authenticated requests. That is when I realized...

Yeah, so when you're getting into more complex scenarios, you might find that you're going to hit limitations, at least with the free version of nginx. Having said that, it performs very well; it's a very solid web proxy, and it's hugely popular. I know that it's only old people like me and dinosaurs who still stick with Apache. I guess where nginx is said to shine is the way it manages resources, the memory: that Apache2 is going to need much more memory to handle more connections than nginx. That's a myth that you find on the internet, usually written by people who are trying to favor nginx. There are situations where Apache2 can use a lot of memory.
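The NTLM limitation described above is exactly that the ntlm upstream directive is part of NGINX Plus, not open source nginx. A sketch of what the commercial configuration would look like follows; the upstream name and backend address are assumptions:

```nginx
# NGINX Plus only -- the "ntlm" directive is not available in open
# source nginx, which is the limitation described above
upstream powerbi_backend {
    server 192.168.10.5:443;
    ntlm;    # bind upstream connections to a client so the multi-step
             # NTLM handshake happens on one connection
}

server {
    listen 443 ssl;
    location / {
        proxy_pass https://powerbi_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # keepalive to the upstream is required
    }
}
```

Without connection affinity like this, NTLM breaks, because it authenticates the TCP connection rather than each individual request.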
If you load lots of modules, like PHP and many, many other Apache2 modules, and then you fork many, many Apache2 workers — separate processes — it's going to use a lot of memory. But if you're using the more modern Apache2 event MPM module and you don't have lots of modules loaded, it's actually very efficient. It has limits built in which are a bit stricter than nginx's. It's just like with Postgres: Postgres has got this section for configuring the maximum number of connections, right, and by default the maximum is quite low — I can't remember what it is. People use Postgres and find they can only handle a certain number of connections, then they complain and say Postgres is limited to a certain number of connections. It just means it hasn't been configured. And the same is true, to a certain extent, with Apache: by default it will throttle the number of connections it will allow on the front end, and this is to prevent it from saturating stuff at the back. But all of that can be retuned. The problem is that the syntax for tuning Apache is a little bit archaic, I guess. The nginx configuration is much more modern-looking, and people can understand it a little bit better. But I don't think it's as powerful. The one thing that I find more powerful in nginx than in Apache is the — what do they call it — the rate limiting: nginx has got a rate limiting module which is a bit better than the Apache rate limiting module, I think. Yeah, it's like a religious war. I've been supporting this Apache thing now for a long time, but I was considering, even this week, thinking I should just let it go and just let you carry on and promote nginx. But then I realized, even with this load balancing thing, in fact you can't do it.
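The nginx rate limiting praised above is the limit_req module. A minimal sketch, where the zone name, rate, and burst values are assumptions for illustration, not recommendations:

```nginx
http {
    # Track clients by IP in a 10 MB shared zone; allow 10 requests/second each
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        location / {
            # Absorb short bursts of up to 20 requests; reject the rest
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```

Requests over the limit are rejected with 503 by default, which can be changed with limit_req_status.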
Because if you want to do load balancing properly, you need to be able to do this health check thing. If you can't do the health check, you've got a not very well functioning load balancer. So that means your Apache2 doesn't support that by default? It does. You see the link I put in there: mod_proxy_hcheck. It is there, open source. So that is the difference: nginx has got a lot of nice features, but some of them you have to pay for.

Yeah, I think we tried to set up load balancing one time for Uganda using nginx, but we reached a point where we needed a feature that it supports but that requires the paid version, especially around allocating each incoming request to the least busy server, or something like that. So it allocated me to a server, say, two weeks ago, and each time I came back it would keep routing me to that same server, regardless of how loaded it was. The expectation was that it would actually look at all the available servers and then route me to another one. So that's kind of a weakness, or maybe they did it intentionally until you pay for that version of nginx; I'm not sure. I believe there are load balancing algorithms that you can choose: by default it normally uses round robin, but then there is ip_hash, where... But even the round robin was not working as expected. I think that feature specifically was for the paid version. Yeah, so then you need to enable least connections — I mean, you add that directive explicitly in the upstream configuration. If you don't add the least_conn directive in the upstream, then it is not going to be used.
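The upstream algorithms being discussed could be sketched like this. The backend addresses and timeouts are assumptions; note that least_conn and ip_hash must be enabled explicitly, as said above, and that active health checks are not in open source nginx:

```nginx
upstream tomcats {
    # Default is round robin. Enable at most one of these to change it:
    least_conn;            # send each request to the backend with fewest connections
    # ip_hash;             # pin each client IP to the same backend instead

    # Passive health checking is free: after 3 failures, a backend is
    # skipped for 30 seconds
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;

    # Active probing ("health_check" in a location block) is NGINX Plus only;
    # on Apache2 the free equivalent is mod_proxy_hcheck
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcats;
    }
}
```

Passive checks only mark a backend bad after real client requests have failed against it, which is the gap active health checks close.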
I don't know how your configuration looked, because that is something you need to enable; otherwise, by default, it's not enabled. Stephen, did you try to do it with Apache, or did you just give up at that point? I think we haven't, because we had a lot of ideas, but it somehow requires a bit of resources here and there. I think the ministry was even coming to the position of procuring a load balancer, and then we'd try again to have it fully set up and all those kinds of things. Yeah. But also, we had a scenario one time when we had an instance that was set up using nginx, and at some point it could not withstand the requests and responses, not until we had to switch to Apache, and then it was working perfectly well. Maybe we had left something out, but we compared the configurations side by side, checking whether the values were the same. But somehow, somewhere, it wasn't working. I don't know what would have caused that; maybe it's performance or config tuning that we didn't do right, but when we compared all the values for both the Apache and the nginx configurations, most of the things were taken care of as expected. Yeah.

Yeah, Stephen, it's a difficult one, because it's a lot of work trying to maintain configurations for two different proxies, which is what we're doing at the moment. I'm still reluctant to abandon the Apache one, because there are just some cases where it seems to be better. It will be a good discussion point to have in Kigali. But I think we discuss it every time anyway; we haven't reached a conclusion in 10 years. Yeah. How much does it cost? That's the other thing; of course, Stephen, his ministry can just buy the commercial nginx. Around 6,000, or something. It's a little expensive. I'm a little surprised by the cost; quite expensive.
Then you just need to bite the bullet and learn the Apache configuration. Yeah, but I think they wanted to get a physical load balancer. Yeah, that's the other solution, but then you're paying for that as well. There are a lot of myths that you'll read out there on the internet about how poorly Apache2 performs; it's mostly written by people who really don't understand Apache. I know. Yeah, we had the same feeling when we were in the other ToT, and I think all the members were much more used to nginx. We tried to come in with this Apache, and most of the installations since then bundle with it — and actually we came up with that version of nginx support in DHIS2 because of those discussions. Yeah, I remember, because it also broke my tools in the process. Yeah, but unintentionally.

I think the fact is that people are going to want to work with nginx because they're familiar with it. I just think we need to be honest and straightforward and say: look, it's good that you work with tools you're familiar with, but you need to be aware that there are some limitations, especially if you want to get into this load balancing business. I think there are some others as well, but that's the one I'm familiar with. Tito, sorry, I interrupted your beautiful presentation with my religious war. No worries; it's even more interesting with discussions like those. Yeah, there are a few things we want to talk about with that nginx configuration as well, because the other thing with my Apache2 configuration is that I spent about a week going through the CIS benchmarks, right, applying all of the security considerations to the configuration.
And I'm not sure you've done the same homework yet with the nginx configuration, because from the security perspective it doesn't look quite as strong as the Apache one. That's not to say that you can't make it secure; I'm not arguing that nginx is not as secure. It is, but a little bit of work still needs to be done on that. Yeah, exactly, I agree. The standard configuration that I have needs to be checked against the recommended CIS benchmarks, you know. Yeah, so Sam wants me to do an Apache2 session. I'd love to do that, but I'm not sure I'm going to get time to do it in the next two weeks, and after that is the Academy. Maybe after that — I promise I will do one, but probably not in the next couple of weeks. So maybe we should go ahead and talk about Postgres tuning next week. Yeah, I'm there. What do you think, guys, was there more to talk about with the proxy still? I think we've gone through quite a lot; very good background. I think a lot of people on this call are quite familiar with proxy configuration, but for the Academy it will be very useful. Yeah, you guys suggest a topic for next week. Or, better still, does anybody want to present something? We've done a lot of these sessions with me presenting or Tito presenting; we could do a session which is more about some of the implementers talking about real problems or issues that they have. Now they've all gone quiet when I suggest they do something. We can reach out during the week. Okay, no problem. Thanks. All right. Thanks again. See you, guys.