Well, we will go ahead and get started. Hopefully everyone is in the right place. We're here to talk about the Apache HTTPD server's proxy. These are things that I've kind of come across in my day job, as well as my time kind of maintaining the proxy code and working with the web server, that I think are really useful and that I'd like to share with you. And at any given point, if you have questions, comments, thoughts, and concerns, except for the two Jims and Chris, go ahead and speak up, because I definitely would like to know what you guys are doing out in your day jobs that we might be able to answer questions for in here. So if there's something that I say that kind of sounds like we can dig a little deeper on that, go for it.

So really quick, about me. My name is Daniel Ruggeri. This is what I look like in case you can't see me from where you are. I am an infrastructure guy. I work at MasterCard. I've been there for 11 years now. And I'm a big open source nerd. I hang out on the HTTPD development list. I actually do read every email. I don't chime in too often. But I'm there, hang out on users as well, and then community dev, right? Everybody loves community. Otherwise, I do always like to put this in my slides: I'm here on my own behalf, not necessarily on MasterCard's behalf, because at the foundation we're individuals anyway. So if MasterCard were to try to say, this is MasterCard's stance on the ASF, then the ASF probably wouldn't want to hear it anyway. So I'm here as me, come as I am, take it or leave it.

So what is between you guys and the delicious coffee that we have been dying for since about 11 o'clock? We're going to talk a little bit about things that aren't necessarily part of the proxy. One of the great things that Apache HTTPD brings to the table is that it's the Swiss Army knife of the internet. You can do all kinds of stuff with it. When you involve that kind of stuff with the proxy, you can do all kinds of really neat things. And you'll see some of those examples as we go along. It's also very important for you to understand your own applications. There are some things, and I'll try to call them out, that could be a little bit dangerous if you do them in your application. If your application sets bad caching headers or does something weird, you need to understand what these things might do to your application. And the last part, the really important thing, is if at any point I start droning on and you think this is terrible, that's fine. Just download the presentation. Everything that I'm gonna talk about, in fact, a lot of things that I'm not gonna cover, are in the presentation, in the slide notes. There are some examples. There's a couple of things that you can kind of read through, like what does the WebSocket HTTP upgrade negotiation look like? That's actually in the slide notes. So go ahead and grab this. Every time I do an update to the presentation, I make sure I keep this location as latest and greatest. And I also made sure to upload this to the conference website. So grab the presentation.

So this is a long-running series. I've been giving this talk now a couple years. I think this is the fourth or fifth time. And what are the things that have changed since the last time we talked? There hasn't been a ton, but if you look in the slides, you can see some of the hidden slides that have already kind of, yeah, it's not that new anymore, but things that have happened in the last four or five years, you can see some of those differences.
Still, the exciting thing to talk about though is, of course, H2, right? HTTP/2 support in Apache HTTPD. So this is very similar to WebSockets, negotiated through HTTP/1.1 headers using upgrade. And it's in 2.4 now. So last time we talked about the proxy, last ApacheCon, it hadn't yet landed in 2.4. This code is in 2.4, you can take advantage of it. If you're on a newer version, just know that this is currently in an experimental state, and we're actually having conversations right now on the development list of, okay, when do we drop that experimental tag? Because it's settling, it seems to be quite reliable. Looking like maybe 2.4.next, I should say. So I think we're on 25 or 26 now; in the next version, there's talk of dropping that experimental flag and moving it to a more traditional review-then-commit model. So really quick, you'll see examples like this throughout the course of the presentation. I don't expect you to be able to read them all, because these can be a little straining on the eyes. But this is how you would enable it. Very simple: load the module and then use it. So, fair enough.

The next one that I'm incredibly excited about is, in 2.4, we do have active monitoring. Calm down, everyone, yeah, calm down. This is something that a lot of folks have actually been complaining about for a long time. The web server knows about backends, but only when those backends fail during the handling of normal HTTP requests. This came along, and actually it was just a week or two after the last ApacheCon, I think, that this landed in 2.4 as something that you can take advantage of now. If you're using a vendor-provided package or an OS-provided package, it may not have these features yet, but you always want to download your web server, you always want to compile it yourself anyway, because that's the best way to do it. So we will have a whole slide on this topic alone in a bit, but know that this is one of the new things. It's landed in 2.4 stable, and it's something that I really encourage folks to take a look at, and we'll talk about why in a few slides. We good? Okay.
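Going back to the H2 bit for a second: enabling it really is about as simple as described. A minimal sketch, assuming mod_http2 is built and available in your installation (not the literal slide content):

    # Load the HTTP/2 module and advertise the protocols;
    # h2 is HTTP/2 over TLS, h2c is the cleartext HTTP/1.1 upgrade path
    LoadModule http2_module modules/mod_http2.so
    Protocols h2 h2c http/1.1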
So let's talk about what are the things that I think are very important for a proxy to do in order to be decent. One of those things is connection marshaling and protocol enforcement. What does that mean? Well, I don't know if you guys have been on the internet, but it's a scary place out there. So if you expose the soft, nougaty center of your application server to the public internet, very bad things may happen, but when you have a protocol enforcement point, or something that is gathering these potentially slow or bad connections on the internet and sanitizing them, it allows that soft, nougaty center to remain soft and nougaty, right? Load balancing is also incredibly important. I'd imagine a lot of us here have more than one application server. You should; if not, let's talk some other time. And then connection pooling and offloading of things like TCP and SSL; these can be costly, much less nowadays than they used to be, but you would expect your proxy to be able to do these things. And then failover, health monitoring. We were talking about load balancers. How do we know that one of our pool members isn't feeling well? The next two, actually three, I would say, these are some of the cool bits and the extra things that you can do because of all the different modules that HTTPD brings to the table: modifying your balancer or pool, modifying the traffic as it comes in or goes out, and then of course mitigating some of those terrible, nasty things that exist out on the internet. Are we good with that list? No complaints? Cool.

So let's talk about connection marshalling and protocol enforcement, right? I'm sure a lot of us have heard of an n-tier architecture, where you have the big, bad internet, you have something that you want to protect, and then some number of things in between, whether it be firewalls, whether it be proxies, whatever it is. The idea in this scenario is you would use HTTPD as your termination point as traffic comes into the environment, and you can do your sanitization, you can more or less verify that the protocol the client wants to speak is actually the protocol being spoken. One of the benefits of having HTTPD out there is that it's been around a long time and it knows how to talk to the various clients. It also knows what the rules are, so you don't have to necessarily implement all of that logic in your application server, and it can be very difficult.

There is one thing I do want to point out, and this is something a lot of folks on my team kind of struggle with: I want to talk about the difference between a forward proxy and a reverse proxy. Has anyone heard these two terms? I expect you guys have heard them at some point, and it really boils down to: what does the client know? In a forward proxy scenario, let's say Jim is my forward proxy, and I need to speak to the camera, which is right behind him. I know Jim is there, so I will say, Jim, please allow me to speak to this camera. On the other hand, in a reverse proxy, the client doesn't know anything about what's back there. All I know is Jim is the camera. So Jim is taking my messages and relaying them directly to the camera. I don't know that there's anything else in between. So it's really important to know those differences, because if you accidentally turn on forward proxying and you meant to turn on reverse proxying, you've just become a bad internet citizen. So let's keep that in mind, and here's an example of a forward proxy. So you load the proxy module. If you're like me, you like to use a little bit of SSL here and there, so make sure you enable mod_proxy_connect, and then this is how we would turn it on. You really only need one line, ProxyRequests On, but please add more lines. Please don't be an open proxy. Everyone knows why that's really bad, right? If Jim is an open proxy, anything that I do appears as though Jim is the jerk doing it. So I mean, not that I would do a jerky thing at all, but it's so cool.

So that's a forward proxy. Let's talk about the reverse proxy. There are no fewer than, I think, five or six different ways you could implement a reverse proxy in Apache HTTPD. So I'm gonna kind of run through some of those examples, but this is the one down here that I like to use in the slides because it's the most compact. This is the most efficient way because it actually avoids walking the file system and all of that stuff. This is the fastest way to get your request to the proxy. Just know you can use a lot of these different methods. So the first is, of course, in a Location block, and you can share other things in this block. You can put access controls or whatever. Another is using ProxyPass standalone. And then you'll also see the ProxyPassReverse, I guess, directive there. So just a really quick word about the ProxyPassReverse line.
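For reference, a rough sketch of the two shapes being described; these are two separate illustrations, not one config, and the addresses, networks, and paths are made up for illustration rather than taken from the slides:

    # Forward proxy: ProxyRequests On is really all it takes, so lock it down
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_connect_module modules/mod_proxy_connect.so
    ProxyRequests On
    <Proxy "*">
        Require ip 10.0.0.0/8
    </Proxy>

    # Reverse proxy: the standalone ProxyPass form, slashes matched on both sides
    ProxyRequests Off
    ProxyPass        "/app/" "http://1.2.3.4:8080/app/"
    ProxyPassReverse "/app/" "http://1.2.3.4:8080/app/"

    # Or the Location-block form, handy for attaching access controls
    <Location "/app/">
        ProxyPass        "http://1.2.3.4:8080/app/"
        ProxyPassReverse "http://1.2.3.4:8080/app/"
    </Location>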
It's not necessary unless your application does a redirect on the back end. So, well, they do. If you've ever used SAML or any sort of IdP/SP relationship. So when the back end does the redirect, ProxyPassReverse looks for this and replaces it with what's on the left side. So kind of look at it backwards. That's another thing a lot of folks on our team have kind of struggled a little bit with, so I wanted to call that out. Yes, Chris. Yes. It does not. I don't know if ProxyPassReverse works in a Location block, actually. Because what it's doing is a find and replace on the Location header. So I don't know if it would work that way. That would probably be something we would play with. No, you can still put a ProxyPassReverse in the Location block. Oh, you just need all three arguments. I believe so. Okay, I mean, both arguments. Yes, cool. Okay, so the documentation says it can, and you know what, as far as open source software documentation goes, the HTTPD server's is the best there is. A lot of stuff out there really sucks. We do great.

Another way that you can do it is with ProxyPassMatch. And this is really handy or interesting if you wanna maybe cherry-pick the different URL components that you wanna proxy. And yes, it is regular expressions, so now you have two problems. Going further, we've all heard of the rewrite engine, I hope. You can do just about anything with the rewrite engine, including proxying. So in this particular example, the presence of the top secret cookie is what gates access to my application. Now that you guys all know my secrets, I expect that being friends will kind of keep this under wraps, okay? But this is an example of: under a certain condition, I will proxy to this location. And you'll notice with the rewrite engine, my cursor doesn't show up, but the P flag is what says this is a proxy instead of a rewrite or a redirect or something along those lines.

You can also implement load balancing, and some quick terminology that we use, excuse me, throughout the rest of the presentation is the concept of balancers and workers. A balancer contains workers; workers are your back end nodes. And when you take a look at something like this, we have a balancer, as you can imagine, called MyCluster, and the worker nodes are 1.2.3.4 and 1.2.3.5. I just so happen to give them route names of Mercury and Venus, but the real nodes are those IP addresses. And then going back to our ProxyPassReverse conversation, this also works through balancers. So I don't have to have two ProxyPassReverses for the balancer members, right? It's just one back to the cluster. Does that make sense? Getting a couple of nods. I know guys, it's dark in here, so try not to fall asleep. I'll do what I can to keep things interesting.

But wait, there's more. You can also implement proxies through a DBM file. And what's really cool about this is you can have a large set of URLs going through a large set of different backends. And because it is a DBM file that is externalized from HTTPD, when you update that DBM file, HTTPD is smart enough to realize that and change your proxying rules. It's a fairly niche itch to scratch, but it's actually a really cool feature because you don't even have to do a graceful reload of the config. You update that file, no re-parse of the config, it just gets remapped, and bam, HTTPD is doing the right thing. It's sending requests to the different places. And then finally, you can also set it up as a handler. And this is semi-new in 2.4.10. And this is a really fun, funky example.
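Two small sketches of the sort of thing just described; the cookie value, paths, socket location, and file match are illustrative assumptions, not the exact slides:

    # Rewrite-engine proxying: the [P] flag proxies when a (hypothetical)
    # gating cookie is present (mod_rewrite and mod_proxy both loaded)
    RewriteEngine On
    RewriteCond "%{HTTP_COOKIE}" "topsecret"
    RewriteRule "^/secret/(.*)$" "http://1.2.3.4:8080/secret/$1" [P,L]

    # Handler form (2.4.10+): hand matching requests to mod_proxy via SetHandler,
    # pointing at a local FastCGI process over a Unix domain socket
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/var/run/app.sock|fcgi://localhost/"
    </FilesMatch>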
So for this example, we are proxying to a local Unix domain socket, which is one of the new things that we talked about a few years ago, and which is actually an FCGI process on the same machine. Cool? All right.

So, load balancing and traffic distribution. This is, in my mind, what the killer feature is, because if you run your infrastructure as I run mine, there are many nodes, and there are a lot of different ways things can come in; they come into the proxy and I need to spread them out all over the place. So yes, HTTPD does that. It has different load distribution methodologies. I'm not gonna read them all to you. Just know that they're out there. And I'm going to, I wasn't really gonna pick on them, but I'm gonna pick a little bit on Jim Riggs here. One of the proposals that he put together a few years ago, years ago, was to maybe create a new way of doing load balancing that's more standardized. And that is a way for a back end to advertise its own health to the server. Maybe share a load number or something like that. And he had some really great ideas. He threw it out there, we talked back and forth, and then nothing. So I still have hopes that maybe we could turn this into a standard. I still, I wasn't gonna mess with you, but you started, so. No, actually, I mean, I hope we can all kind of see the value in that. This is actually what mod_cluster does in the JBoss world. It's actually able to advertise: I am the server, I have 15 concurrent requests. That guy's only got 12. Maybe you should send the load his direction. It's more or less the desire to share that information from the back end to the front end. And I didn't even take a drink.

So let's talk about some of the interesting things you can do with load balancing. You can actually do asymmetric distribution. And there are some directives that come into play in order to do that. So load factor is one of them. If I assign a back end a higher load factor, it's going to get more requests. It's gonna be the ratio exactly as you would expect. If you put three here and five here, it's gonna be three to five. You can also set up a hot standby. And this is very useful if you need to provide something back to the users. Or perhaps you have a backhauled connection to your back ends. Let's say you're in your primary data center. You always wanna talk to your local data center because it's just gonna be more efficient. But if those go down for maintenance, or go down because somebody unplugged something, why don't we talk to the backup data center directly? Maybe a little less efficient, but the site stays up. So that's where a hot standby would come into play. You can also use load balancer sets. I would say when you're just doing two, primary and secondary, it's almost like a duplication of functionality with the hot standby, but you can actually get pretty advanced. Maybe you have three or four sets. So that becomes an option. And another one is to selectively not proxy. And we'll see an example.

So let's take a look at our weighting example. So we have, again, MyCluster. And we have two balancer members. And we went ahead and threw a third one in there. And let's say, I don't know, that might be the old machine that Daniel's running underneath his desk. It's still alive, it's still on life support, but it's really not as beefy as these other two machines. So if I ask you guys, there are five requests, where do we think they'll go? With the load factor of two and two, we would have four requests to these guys.
And one request here. So two, two, and one. It really is that straightforward. There are a couple other parameters in here that I'll kind of point out. We have a soft max being set, as well as some voodoo going on with the sticky session. And we'll talk a little bit more about that. But I wanted to really just highlight: this is how you can set additional parameters. You just kind of slap them onto the end. Make sense? Good.

So we talked about hot standby, and I think that's a very good use case. So for example, in the hot standby setup, my local data center, 1.2.3.4 and 1.2.3.5, those are right here, local to me. So ideally I would talk to the servers that are located next to me, but if both of those are down, I should talk to 1.2.3.6, which is in Austin. So the hot standby, yeah. That's the best way to mark it. Administratively down? We'll get to that. Yeah, there are ways that you can do that administratively and even programmatically with scripting. And we'll get to that in just a minute. Good question. Yes, the hot standby state will only be entered if all other pool members are down. So I think you can, interesting, interesting. Because that way I can always make sure I've got at least two, or yeah, no, that's a really good use case, I actually never contemplated that. That's curious. Well, we have plenty of room to add additional states, right? It's just an integer and we're, you know, bit masking. So, right, and I think, hey, cool. Start the conversation now and then we'll talk in three years when nothing happens.

Okay, so selective proxying. This is actually a really cool example and one that I've used in my day job. Let's say I have a fairly busy website. I have my static content right here on the web server. I have two back ends, and then I also have a third back end that does other stuff. So really quick, let's take a look at how this is declared. Order is very important for the proxy. In this example, if a request comes in for slash static, HTTPD will serve that content directly because I've added, and this is much harder than I thought it would be, I've added the exclamation point there to say do not proxy this location. So that's the first hit. If I come in for slash application A, I will not hit on static, but I will hit here and I'll go to this balancer. Application B would again go to another balancer, and if I match neither, I don't think you would say neither, if I don't match static, I don't match application A, and I don't match application B, then everything else should go to this other cluster, and this is actually our hot cluster from the previous example. So in this example, we have three clusters, three load balancers, three back end applications, and then all of our static content here served by Apache HTTPD.

Yes, Chris? We often recommend against remapping URL spaces like you happen to have here, from slash application A to slash on the origin server. Is there anything that's inherently Tomcatty about that problem, or is this a universal problem? Absolutely not, no. So yeah, what Chris had stated is, generally in the Tomcat community, you would avoid doing URL trickery in your proxying or as you come in, and what he observed, and this is a good point that I didn't point out, is I'm coming in on slash application A and that's being proxied back to app cluster one; whatever comes after slash application A, that's what goes to the back end. So you can use this to kind of manipulate the URI.
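A sketch of roughly how that ordered, selective setup, plus the earlier weighting and hot standby ideas, might be declared; the cluster names, paths, and addresses here are illustrative guesses rather than the literal slides:

    # First match wins, so the exclusion and the specific paths come first
    ProxyPass "/static"        "!"
    ProxyPass "/applicationA/" "balancer://appcluster1/"
    ProxyPass "/applicationB/" "balancer://appcluster2/"
    ProxyPass "/"              "balancer://hotcluster/"

    # One of the balancers, with asymmetric weights and a hot standby member
    <Proxy "balancer://appcluster1">
        BalancerMember "http://1.2.3.4:8080" route=mercury loadfactor=2
        BalancerMember "http://1.2.3.5:8080" route=venus   loadfactor=2
        BalancerMember "http://1.2.3.6:8080" status=+H
    </Proxy>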
There's pros and cons, and one of the cons is, if your application is doing server-qualified redirects, you have to be very careful with your ProxyPassReverses. You have to catch those and fix them on the way out. If you're doing just relative redirects, you're A-OK. Okay, so this is not really a Tomcat problem. No, and it also- Is this a problem with any origin server that might have the same issues? Yes, and it also boils down to links too. So if your application thinks, I am always gonna be deployed on slash Chris, then I might construct all of my links as server-relative slash Chris slash about.html. If I do this type of trickery in the proxy, I break that link. So remember how I said, know your application? This is why. So there will be some strange breakages. On the other side, maybe marketing comes in and says, look, I don't like having slash Chris in the URL. Can you get rid of that? This is a way that you could do it. Sorry, Chris.

There was another note I wanted to draw out here, but I couldn't remember what it was. I did tell it not to proxy static, because HTTPD is a very good web server. So- It was very, very important. Yeah, Jim made a very, yes, this is a very good point. If I were to reverse these four lines, everything would go to hot cluster. Everything. Because it's the first match. Yes, Chris. Now, about slashes. I don't know if there have been examples where you've had that before, but I wanted to mention that I fought this battle a couple of times without realizing what was going on. The exact syntax you used for ProxyPass, specifically at the end of the- Oh, the trailing slashes. You must match slashes. You must, must, must match slashes. In fact, that should be an enhancement. Not necessarily a security issue, but it could screw, well, it could become one, but it's not necessarily one, right? It's all in the context. Most of the time it'll manifest as broken links or just your proxy isn't working. So what Chris was pointing out is you must match either having a slash at the end or not having a slash at the end. If you put a slash on one and not on the other, it gets mapped directly in that way. So you may completely change the context root that you're going to on the back end by mistake. That does seem like something that would be easy enough to detect and warn on. So maybe, maybe there's a patch that I could write for that. Correct me if I'm wrong, but I believe the URL segment where you have, like, application A. Yes. That just gets a prefix match as you're trying to map it into the proxy. And so while your brain might think you don't need a slash because you're thinking of it as a path segment, if you had application A without the trailing slash and you requested, say, application-A-something, that would go through the proxy, even though someone reading the configuration might not work out what happened. The easy answer is just match slashes. Just match slashes. Cool. How are the examples, does that make sense, guys? Okay.

So here's a new one. This slide is in progress because we just decided there's gonna be a change coming. These are the different worker statuses that we might see in the proxy through the load balancer. So there's disabled and stopped. And for the life of me, I can't really figure out what the functional difference between the two is. But you asked, Jay, earlier if there's a way to administratively put something out of service; that would be how. Either one of those would do it, it would stop getting traffic.
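As a sketch of one way to do that administratively in the config (the flag letters come from the worker status parameter in mod_proxy; addresses here are made up):

    <Proxy "balancer://mycluster">
        BalancerMember "http://1.2.3.4:8080" route=mercury
        # Start this member disabled (D); other flags exist,
        # e.g. S for stopped, N for drain, H for hot standby
        BalancerMember "http://1.2.3.5:8080" route=venus status=+D
    </Proxy>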
I believe, yeah, requests that have that route mapped. So if you set a cookie that says I should go to the Jay server directly, it'll still be honored until that machine is done for. You can also put a worker into ignore-errors state. And I encourage you guys to double check the documentation. This is just a quick summary. There are a little bit of nuances with some of the statuses that you can pull from the documentation. So if I have that flag at the end and it stands up to a reload, then all of the requests will, only ones with a cookie will continue to go to that particular node member. Yes. I think he wants the drain state, set up in the same way you want, and not be a mess. Sorry, yes. Yeah. Actually, like in our situation, all I have is for status, but we do set a cookie on the proxy for debugging purposes. It was a combination of service and debugging. So we do have a cookie, but it's only on the proxy. We set the cookie, the proxy sets the cookie. So with that, when we mark a server as disabled, we just want the existing sessions on that server to sort of bleed off. Yeah, that would be the drain mode. I misspoke. No, drain would be the... Actually, instead of changing the configuration file and doing a restart, using the balancer manager. You're jumping ahead. But wait, there's more. I prepared for this.

Yeah, so we talked about hot standby. The error state is when the proxy has detected during normal operation that the worker is not healthy. So that is, live traffic went to that guy and it did not work. Then there's also the drain state, which we just discussed, and then the check state, which is an indication that a health check found a problem and has intentionally marked it out of service. So normal requests will not go to this worker because the health check has said, hang on a minute, something kind of smells here. And then there's the redirect state; it's not really a state per se, but requests that would land on this balancer member should actually go to this other balancer member instead. And then there will be a new one that Jim's going to introduce for us here soon. R for what? P for spare? How about J for Jim?

So, sticky sessions. Everybody loves microservices, especially your proxy administrator, because then I don't have to care if I send you to the same backend that I originally sent you to. But unfortunately there are a lot of things that do require that. So anytime that you're using a shopping cart, that session information has to live somewhere. And unless you have taken the trouble of extracting that session information out of your container and putting it somewhere else, maybe a Redis cache, or persisting it to disk or a database somehow, it's very important that the next time we talk, I send the request to where it originally went. Otherwise my shopping cart is now empty, or I'm not logged in, or something that I was doing just got lost. Oh, and the other option, and I'll point this out, a lot of application servers offer session replication. That can be expensive depending on how you do it, and it may not scale very well depending on how much traffic you end up seeing. So, you can do built-in load balancing, I'm sorry, sticky sessioning. mod_proxy_balancer does include facilities to do this, but it depends on your application server. Works great with Tomcat. Also requires you to know certain parameters about Tomcat. It can work with WebSphere. I'm not so sure about things like WebLogic, and then PHP, not even close, right?
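As a rough illustration of that built-in facility with Tomcat (a sketch with made-up addresses; the route values would have to match each Tomcat's jvmRoute, which is exactly the kind of backend knowledge being described here):

    <Proxy "balancer://appcluster">
        BalancerMember "http://1.2.3.4:8080" route=tc01
        BalancerMember "http://1.2.3.5:8080" route=tc02
        # Tomcat appends ".jvmRoute" to its session id, so key off JSESSIONID
        # (cookie name | URL parameter name)
        ProxySet stickysession=JSESSIONID|jsessionid
    </Proxy>
    ProxyPass        "/app/" "balancer://appcluster/"
    ProxyPassReverse "/app/" "balancer://appcluster/"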
So, the other option is you roll your own, which I don't really like either, because anytime you reinvent something, you have to ask yourself, am I doing this as well as perhaps the experts would do it? Regardless, the route parameter is what makes this work. So, we saw earlier on some examples, way back, when we had a Mercury and a Venus node in our cluster. If I am presented a cookie that says I should go to Mercury, then that's where I should go. Yes, Chris? On your previous slide, when you say roll your own, does that essentially mean write your own module? Write your own module, do it somewhere else in your infrastructure. You could use the rewrite engine to do it. Man, there's a lot of different ways that you could do it, because, and this is where I kind of go into it, it's just looking at cookies and it's just making a decision. mod_session. You could use mod_session, yep, to create a local session on Apache HTTPD. Yeah, there's a lot of things you could do.

What I like to do, well, let me finish saying why this is a problem, at least for me. We talked about different Java backends, JBoss, WebSphere, Tomcat, even PHP. All of these different backends have different cookie formats for their session cookies. So you have to know a lot of things that you may not want to know, or it may be a pain in the backside to know. Maybe you are a web server administrator and the application server administrator is a different team, and you have to talk to someone else, and that's awful, nobody wants to talk to people. You also have to know what those values are. So in WebSphere, there's a CloneID, and it's a very strange series of, someone tapped a bunch of keys on the keyboard, and that is your route parameter, and it means nothing to humans. So because of those reasons, the built-in is not 100% compatible.

But there is a really great way to use the other features of HTTPD and some of the environment variables that the proxy module will set. So, as we did before, we create our balancer, and then inside the balancer, we're going to tell the proxy module, hey, the sticky session cookie, we're gonna call it danielsapp_sticky. Name it whatever you'd like. It doesn't matter, because in the next line here, we're actually using the header module to say, and this is a little bit to digest all at once, if the balancer route changed, that's an environment variable, if the proxy detected that I haven't been presented a cookie that takes me to a specific route, then I'm going to add a cookie called danielsapp_sticky, there it is again. I'm gonna put some string, and then a dot, and then I'm gonna put the value of the route that the proxy chose. So when does this come into play? It comes into play if this is the first time I showed up, because the balancer is going to decide I should go to the Jim cluster or I should go to the Jay cluster. It'll also come into play if, let's say, the Jim cluster goes down; the balancer has decided, I can't send the traffic there, I have to send it somewhere else. A new cookie gets set by mod_headers. So I use the heck out of this, because I don't have to talk to anybody, I don't have to look in other configs. The front end is completely stateless with relation to any values that the back end might set. So this is just a nice little recipe I like to use. And we are going a little slow, but it's fine, because you guys are going to download the slides and you'll have all the information.
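The recipe being described is very close to the example in the mod_proxy_balancer documentation; a sketch of it, using the cookie name from the talk and made-up addresses (mod_headers loaded):

    # If the balancer had to pick (or change) a route, hand the client a cookie
    # carrying that route so future requests stick to the same worker
    Header add Set-Cookie "danielsapp_sticky=sticky.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

    <Proxy "balancer://mycluster">
        BalancerMember "http://1.2.3.4:8080" route=mercury
        BalancerMember "http://1.2.3.5:8080" route=venus
        ProxySet stickysession=danielsapp_sticky
    </Proxy>
    ProxyPass        "/app/" "balancer://mycluster/"
    ProxyPassReverse "/app/" "balancer://mycluster/"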
So, connection pooling: big deal for proxies, big deal for load balancers. It's so easy, it's almost automatic. So there are a couple parameters that come into play. Max is the hard number of connections that will be opened to any one back end. Soft max is: if I have any more than this number, after some time I'm going to tear them down. And then TTL is how long the connection is allowed to sit idle. Some other things come into play, though, and I just want to point this out really quick: if you have a long-lived TCP connection open to a back end, that back end probably has keep-alive timeouts or other TCP timeouts set. So that could diminish the value of any connection pooling you configure in your proxy if the back end is closing the connection too soon. So generally speaking, what I tend to recommend is, on your back end web application server, on your back end HTTP server, whatever it is, set the keep-alive timeout to infinite, because you're not talking to that nasty, scary internet, and set the TCP timeout to a pretty long amount of time. That way, when a connection is established by your front end, it can stay open, ready and active until it's needed. So here's an example of connection pooling. You'll notice we really didn't do much of anything different or special. We just set some different parameters on how many connections will be opened. Now, I will point out there's a min parameter, but that min parameter is not actually the minimum number of connections that will be opened when HTTPD starts. Min is the number of connections that will be persisted if that point is reached or gone beyond. If you only have a need for one connection at a given time, you will only ever see one connection open. But if you set a min of two or three and you still only ever use one connection, you're still only gonna see one connection. However, if you set a min of two, a max of 10, and you burst up to five, you'll see it somewhere between two and five. Does that make sense? It's not, I guess, aggressively established. It's established when it's needed, and then it doesn't go below that number during normal server operation.

The other part that I really think is important for proxies is letting the things that are really good at doing the hard stuff do that hard stuff. So, SSL. Apache HTTPD uses the OpenSSL library. If you are rich, you may have crypto accelerators that plug into that OpenSSL library. You know, Jean-Frederic, right this moment or in the last session, is actually giving a talk on how to squeeze out more performance from Tomcat, and one of the ways is to use OpenSSL. Let HTTPD speak SSL for you, and then inside the trusted network, within those walled garden boundaries, go clear text. If that suits your security posture, it makes a lot of sense. Running a little tight on time right now, but I have a really cool Node.js use case where, I don't know if it's still the case, but Node.js SSL support really sucked. I mean, it was really, really bad, and an application that ran at about 38 to 40,000 HTTP requests a second, when turning SSL on, went down to about 6,000 requests a second. We fronted it with HTTPD through a Unix domain socket and bumped back up to like 38,000 requests. So let the thing that's really good at doing that thing do that thing.
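Pulling the pooling parameters and the offload idea together, a rough sketch under assumed names; the certificate paths, address, and numbers are made up, and mod_ssl plus the proxy modules are presumed loaded:

    # Terminate TLS here, speak plain HTTP to the backend, and pool connections
    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile    /path/to/cert.pem
        SSLCertificateKeyFile /path/to/key.pem

        # max/smax/ttl are the per-worker pooling knobs discussed above;
        # min exists too, but it is not a pre-established floor
        ProxyPass        "/app/" "http://1.2.3.4:8080/app/" max=20 smax=5 ttl=120
        ProxyPassReverse "/app/" "http://1.2.3.4:8080/app/"
    </VirtualHost>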
So, failover and health detection. It's not all roses. Some failures are only detectable by handling them with live traffic. Sometimes failures are detected by your users. These are examples of things that you might see in your environment. So SSL errors, for example, those will go back to the user. There's really not a way to cleanly recover from that. If the backend goes down, though, that's TCP. No HTTP session has been started, no work has been done, so that fails over seamlessly. Also, slow or hung back ends, excuse me, those go back to the user as well. But because you know your application, and you've spent some time talking to your application developers, you know that you can use failontimeout. Yeah, that's what we were talking about earlier. I couldn't remember the exact directive. So if you have a request that takes five seconds, you can time out in a shorter amount of time, and then you can mark that backend out of service and say, that guy's really slow. You're distracting me, Chris.

Here's some really cool stuff. And this is stuff that I'm really jazzed about. This is a patch that Jim put together last year, one of the things that we voted into 2.4 proper just after ApacheCon wrapped up last year. And you can do probing of your back ends. So you can do a very simple TCP knock, knock, knock, are you there? Or an advanced HTTP request. You can even be really clever and say, as long as this match shows up in the response I get, I should consider the backend okay. So what you can do with this is, on your application server, you can implement a health check, a self-help diagnostic that says, yeah, I can get my database connections. Yes, the ESB is up, if you're into that sort of thing. Yes, I can do all of the things that I need to do. And only after executing those checks does it return something that indicates its health. Otherwise, it can proactively be taken out of service. And this is all stuff that's done on the side. It's not wasting connections from your actual connection pool that your user traffic is using. Very cool stuff.

And here's how you would do it. So this is an open source example, which, since it's open source, I shamelessly stole for my slide, right out of the documentation. It's a really good example of how you can check that the word failure does not show up anywhere in the backend's response. So I have these balancer members, let's say alpha and beta. And I'm using the health check template that I'm just declaring right here. And as long as the health check expression is coming back clean, which in this case is a negative check, as long as failure does not show up, I'm good. As long as that comes back clean, these guys will stay in service. It's really exciting, guys, come on. Somebody should be clapping right now. Give Jim a high five, guys.

The other part is you can do live traffic monitoring. Keep an eye on how things are going. You can set some parameters. If I can't connect after this many seconds or milliseconds, I should abandon the connection. You can also do a proxy timeout. So if I don't get a response, or can't submit a full request, after this much time, bail on the connection. And then also failonstatus, which is something that I actually really needed in the WebSphere world, because there's a period of time between when certain application servers will start answering TCP connections and when the application actually starts. So it'll give you a 503. Hey, I'm not ready yet. I would rather you be out of service until you are ready, please. And then of course, another option is you can just drive traffic through your site with monitoring. I would suggest you use the built-in health check unless you need something a little more advanced. We're gonna have to fly, guys.
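Roughly what that kind of setup can look like, in the spirit of the documentation example; the member addresses, the /health URI on the backends, and the intervals are illustrative assumptions (mod_proxy_hcheck also needs mod_watchdog loaded):

    # Healthy only if the word "failure" does not appear in the probe response
    ProxyHCExpr no_failure {hc('body') !~ /failure/}
    ProxyHCTemplate probe hcmethod=GET hcuri=/health hcexpr=no_failure hcinterval=10

    <Proxy "balancer://mycluster">
        BalancerMember "http://1.2.3.4:8080" hctemplate=probe
        BalancerMember "http://1.2.3.5:8080" hctemplate=probe
        # Live-traffic safety nets: mark a member in error on 503s or timeouts
        ProxySet failonstatus=503 failontimeout=On timeout=10
    </Proxy>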
Yeah, peace. Yeah, two minutes, I know. Okay, so, and this is what you're kind of talking about, Jay: you can dynamically modify your cluster configuration, your proxy load balancer. The best way to do it is with the balancer manager. So there are a couple of things that you'll see in the balancer manager, and these are all things that you'll see on the next slide. But what I will point out is: you're an admin. You're working on production. Be safe out there. The balancer manager will not say, hey, are you sure? You just said drain on everything, or take it all down. Are you sure? Hey, you just wrote a script, and are you sure you wanna? Yeah, so be careful.

So this is what the balancer manager looks like. On the left side, we actually see the balancer manager, and then on the right side, this is an example of what the workers would look like. This slide's a bit of an eye chart. It's very clear in the downloaded presentation, but you can see the different parameters that you can set. You can see all the various things that you can learn about your environment. How often was this worker chosen? What's its current status? All kinds of stuff. You can change a couple of parameters. There are some parameters that you cannot change, but to your question earlier, one of the things that you absolutely can change is the state. And I just comment who made the change and why it's needed, so anybody can come behind and figure it out, as opposed to having to... So there's good and bad, and there's some errata to kind of share with that. In newer 2.4 releases, you can actually persist that state. So the challenge in older versions is, if I mark this out of service intentionally, and I have to do a restart of HTTPD for whatever reason, it goes back into service. You can enable the persistence of these parameters that are set. So even though you don't necessarily see it in the config file, you do a restart and come back into the state that you expect to be in.

Yeah, I think the newer docs actually show, you may have a curl example of using a shell script, curl, to basically do things when you want. So you could have a cron job set up. Absolutely. Or you could have a curl request, because that's basically all it is, it's making a bunch of requests. Yeah, so. It's just sending requests to our edge. Jim makes a really good point, and you can script this stuff, because it's not fully REST-like, but it's deterministic; you can enable and disable. The trick is you have to either set the nonce to a value that you're expecting, it's like a shared secret between your client and the server, or disable the nonce. And that exists to avoid cross-site request forgery for browser-land users administering HTTPD. Well, it depends. I mean, it depends on what's right for you. I guess the way I would approach it is, I hear your concern. If I'm gonna be in a maintenance mode for longer than a typical maintenance window, I would probably wanna do a change to the config and reload. Because then, maybe I win the lottery between when maintenance starts and finishes, and I just walk away. I'm a big stickler. I like the configuration to represent what's actually out there. But if you're doing maintenance, and it's a five, ten minute ordeal, and it's all scripted, this would probably be the most efficient method. That's true, that's correct. Yes. So I'm over time. I still have a handful of slides, but that's okay, because you guys are gonna download them, and it's gonna be great.
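For reference, a sketch of what exposing the balancer manager, and persisting runtime changes, can look like; the path, the admin network, and the decision to pin the nonce are illustrative assumptions, with mod_proxy_balancer loaded:

    # Expose the balancer-manager handler, locked down to an admin network
    <Location "/balancer-manager">
        SetHandler balancer-manager
        Require ip 192.0.2.0/24
    </Location>

    # Keep runtime changes made through the balancer manager across restarts,
    # where the running 2.4 release supports it
    BalancerPersist On

    # For scripting (curl from a cron job, for instance), a fixed nonce can be
    # set per balancer, e.g. ProxySet nonce=<some-shared-secret> inside the
    # <Proxy "balancer://..."> block, at your own risk

A cron job or shell script can then drive enable, disable, or drain against that URL, which is the kind of scripting being discussed above.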
And in the next couple of slides, I talk about things that you can do to shape traffic. I wanna, you know, my application writes bad links. I'm gonna fix it on the way out. I'm going to compress or cache my content. I'm going to do things to enhance my security and make sure nobody comes into my house and makes a mess. There are modules that can help you with that. Go ahead and grab the slides, take a look at them. And one last time, this is where you can download it from, and I try not to keep a low profile when I'm here. I absolutely would love to talk with you guys about what are your use cases? What are things that I could add to this presentation? What are things that just didn't really do it for you? I'd love to hear it. Otherwise, let's clap real quick and then go have some coffee.