And welcome back to the next episode of Reclaim Today on Reclaim TV and all of our Reclaim brands. We're super excited to be chatting today, or I guess just capturing where we are and our thinking around multi-region setups at Reclaim. I'm joined here with Jim and Chris. And this has taken up a lot of conversations over the last couple of weeks, so I think it felt appropriate to get on here and record where we are and what we're thinking about this, because it's pretty exciting. So I see some people nodding. So Jim, are you excited about this? Oh yeah, I am excited. Can I tell you how excited? So to set the stage a little bit, we're specifically talking about WordPress multi-region. And I think I have abbreviated it, maybe wrongly. Like I've gone with the great American tradition of acronyms: WPMR. It's a tag that I use now on my blog because I've been writing a bit about it. What the multi-region piece of that means is that we have the same WordPress instance replicated across two different regions of our cloud-based data centers. So we have an instance of WordPress in a data center on, say, the East Coast and one on the West Coast. And obviously that can serve traffic closer to where you are, which may mean less latency and faster load times. But even cooler, if you have a site that has to remain online because it's mission critical, and there's an issue in, say, the West Coast data center (because nothing ever goes wrong on the East Coast), the East Coast data center will actually stay up as the West goes down. And all the traffic that was previously going to the West redirects to the East Coast, and neither the customer nor the client has any idea there was an entire server down on the West Coast because those Berkeley hippies attacked the data center.
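To make that failover idea concrete, here's a toy sketch in Python of a priority-ordered origin list with health checks. The hostnames and the `pick_origin` helper are purely illustrative, not Cloudflare's actual implementation:

```python
# Toy model of DNS-level failover: try origins in priority order and
# serve from the first healthy one. Hostnames are made up.

def pick_origin(origins, healthy):
    """Return the first origin marked healthy, or None if all are down."""
    for origin in origins:
        if healthy.get(origin, False):
            return origin
    return None

regions = ["west.example.net", "east.example.net"]

# Normal operation: the first listed region takes the traffic.
assert pick_origin(regions, {"west.example.net": True,
                             "east.example.net": True}) == "west.example.net"

# West Coast outage: traffic silently fails over to the East Coast.
assert pick_origin(regions, {"west.example.net": False,
                             "east.example.net": True}) == "east.example.net"
```

In practice the health checks and the rerouting happen inside Cloudflare's load balancer, which is why visitors never see the switch.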
So that gives you- But then it allows you to troubleshoot and investigate what might be going on without that pressure of, oh my gosh, everything's on fire, because you're able to redirect traffic, keep the sites online and still do maintenance. And I know Chris is like, yeah, that's right. That's a good start. Always need time to troubleshoot. I think, yeah, that's definitely one of the harder parts of this field that we're in with web hosting: that feeling of things going down and then feeling like, okay, not only do we have to get things back online, but we have to do it live with people waiting and that pressure. And knowing that we would have that relief makes this idea and this concept worth it in and of itself, I feel like, aside from the tech and just how cool it is to be able to manipulate traffic and have things running across multiple spaces. It's really exciting and it's been cool to watch so far. It kind of gets to the level of enterprise hosting, right? Like an enterprise client that wants to run, say, for our purposes, their main site on WordPress, but wants to ensure that it's up no matter what happens. Previously, it would be hard for a company like ours to say, okay, we'll create an instance in a droplet, say in DigitalOcean's New York data center, and then one in San Francisco, because that replication of database and files is not trivial, right? So that was a big piece. So we hadn't done it previously, nor did we promise it. But I think, Chris, I might be right, like in November of 2021 we had our kind of annual get-together at Reclaim, we all met in Nashville, and it was at that session where Tim said that our cloud provider, which was then, as it is now, Virtuozzo, had announced a WordPress multi-region setup that we could click, configure and install fairly easily. And I think we went down that rabbit hole. Is that correct? Let me think. Yes.
I remember sitting in that, what was it, the hotel living room? You and Tim were writing on the whiteboard figuring out everything. Yeah, those team trips, man, they really have a way of just pulling out some cool ideas. But I think that was definitely the start, the serious start, of where Reclaim was like, wow, this could be possible for our team, and definitely something that we should investigate as a future setup and maybe where Reclaim can eventually head. So yeah, it's now a year later, and where are we? Good question. It's funny because it was fits and starts throughout 2021. As soon as that happened, Tim and I installed one and we were playing with Cloudflare. And that was my first, like I had used Cloudflare for some basic DNS with Reclaim Cloud because it can make things easy and you kind of manage IP addresses in Cloudflare. And we'll talk about Cloudflare more in a bit. We started it, but they had this WordPress multi-region cluster, which was using Galera cluster databases. And like, Chris, you can speak better than me on this, but they are complex. Is that what you're saying? Yeah, it's very performant if you know what you're doing. But if you don't, it can be a headache. Things can crash, everything can go wrong very easily. Yeah, they seem quite fragile in some ways. So we had a full three regions set up with the Galera cluster and we were playing with that. And it was a little bit too much for our purposes, and it was still an alpha. So we were like, this is great, but it's an alpha. We'll sit on it for a bit. And throughout much of early 2022, I was playing with bringing my own blog, bavatuesdays, over to multi-region. And then I realized the Galera clusters are just too complicated. Is there another setup they have for multi-region? And I think it was in May or June that I realized they do have a WordPress standalone.
So basically, not with the Galera clustering: it will take that single WordPress, which is pretty performant in and of itself (we use it for so much of our infrastructure for the managed hosting), and replicate that into multiple regions. And when we tested that, the setup was far less complex and fragile, I would argue. I think we started to see maybe four or five months ago that, wait, this is really gonna be possible, and even simpler than we expected. And I think that was the moment where we realized this could happen sooner than we thought. So let me just show you something. I can actually give you dates on everything. Hold on, I have this little trick. Wow. So this actually is the tale of the tape, from November 26, 2021 through March 7th, then March 17th, then the 22nd of July. So I guess that's when I did the multi-region standalone. And then really the last three weeks, in October, we really started pushing on multi-region. So I've been blogging a lot of discussion of this, but the kind of breakthrough for me was this: the Jelastic JPS. And this is a one-click installer that is basically spinning up a container. And this JPS doesn't do the Galera cluster, it does the WordPress standalone. And by using this, I was able to set up and get a functioning multi-region WordPress up and running for bavatuesdays. It's been running there for several months. And I think it's at that point where we were like, okay, now let's put our money where our mouth is and let's see if we can get multi-region working for reclaimhosting.com. And I went to Chris, I was about to call you Tim, Chris, which is scary. I went to Chris and I said, look, if we were to do this at an enterprise level, do you think we could? Because Chris is in charge of our infrastructure, so I didn't want to assume anything. And you had an interesting answer. Only if we can do it through Cloudflare.
That's cool, why? Why did you say that? Cloudflare is, I'm gonna use this word again, but extremely performant. With their load balancing, we can direct traffic to multiple servers. And we can proxy some domains through Cloudflare, so we can take advantage of their CDN and the load balancing, as I just mentioned. And it's just, it's fantastic. And I would only be willing to do a multi-region setup if we were not the ones handling load balancing. Yeah, and something occurred to me while we were doing this. And I have tinkered with the multi-region, and I'm proud of the fact that I was the one who set it up and then you're the one who made it work, so at least I played some small role in that. But the thing that occurred to me is just what you said about load balancing. In fact, load balancing with a lot of our cloud infrastructure, anything with more than one server or anything complex, is happening at the server level. But everything with Cloudflare shifts that responsibility to the DNS level. And I think at that point there's a sea change, or a sea shift or whatever we want to call it, in terms of how we're able to guarantee uptime and manage traffic shifting or direction or shaping for a community or for a site like Reclaim Hosting, right? A good example: we have two regions for reclaimhosting.com, and we'll talk about why it's running on www.reclaimhosting.com versus just reclaimhosting.com, but I was actually playing around with the multi-region and I hadn't copied all the images to the second server. And so people were like, well, some images aren't showing up, and that's because they weren't on one of the servers, say they were on the East Coast but not on the West Coast. And this was for the dev site, right? No, this was for the actual site I switched Friday morning without Chris, where he was like, what? But anyway, that's another story.
But I actually could, within seconds, say, okay, I want all traffic then not to even go to the West Coast, all go to the East Coast while I sync the rest of the files to that server. And that's to your point, Lauren, about how within a second all traffic could be shifted at the DNS level over to the server that had all the images, and there was no sign of, wait, that's a broken image, right? That was amazing to me. I think for me, the light bulb that has gone off in the last couple of even days, I feel like I'm still learning a lot about this, so I probably won't have as much to contribute to the technical side of today's conversation, but for me it's been trippy to just try to understand how the load balancer is pushing traffic around and the ways that that's possible. Because I think once you start to understand that, troubleshooting and knowing, okay, is this actually an application-level issue, or is this more caching, or just based on a certain data center that I'm loading right now or going to? So I think that's been an interesting part of site management now, even for the main Reclaim site, is knowing that, okay, it might not be an application-level issue but something larger. Yeah, I do think that opens up questions, and Chris and I, before you jumped on, were talking a little bit about that. Like with multi-region, one of the things you could find is something could work well in one region and not well in another, and the load balancer might help you redirect away from what they call an unhealthy region. And I think the way they're doing that, where they're basically offloading the load balancing to Cloudflare at the level of DNS, is through CNAME flattening. I think that is really the kind of secret sauce for Cloudflare more generally, and something, when we've been talking with schools recently about this, we've had to kind of really almost become, you know, translators about what that is and how it works.
So Chris, I'm going to put you on the spot like it's one of those meetings. Yeah. What the heck is CNAME flattening, and what is Cloudflare doing with that craziness? So with Cloudflare, you can do either a partial setup or actually point your name servers to Cloudflare. Pointing name servers to Cloudflare definitely gives much more control of managing DNS to Cloudflare. You can manage everything within their DNS interface, and that's great. But if you already have DNS records somewhere else, that means forklifting everything over to Cloudflare. So with Cloudflare's partial setup, you just point a CNAME record, a single one, in this case www.reclaimhosting.com, to Cloudflare, and you can manage that CNAME within Cloudflare itself. Certain DNS providers also allow you to use a CNAME record at the apex of a domain. So usually we could only point reclaimhosting.com to an IP address with an A record. We can't do CNAME flattening on ours, our DNS provider does not support it, but certain ones do, where you could use a CNAME record there to point it to Cloudflare. Right now we're doing a redirect because, again, our DNS provider doesn't support CNAME flattening, but it's possible, and it allows you to go for a partial setup in Cloudflare, which is the route that we're going for with www.reclaimhosting.com. It's interesting too, because this whole thing made me start thinking about, like, okay, so by doing that CNAME flattening, what essentially you're asking the school to do is say, take the subdomain www.university.edu, point the CNAME for www to this address at Cloudflare, and then here's a TXT record to verify that. So those are the two records that they're adding. And then once it's over at Cloudflare, you can manage those basic A records without ever affecting DNS for the main university site.
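A rough way to picture CNAME flattening is a resolver that follows the CNAME chain itself and hands back the final A record, so a CNAME can effectively live where only an A record is normally allowed. This Python toy uses invented hostnames and IPs, not the real Cloudflare targets:

```python
# Toy resolver illustrating CNAME flattening: the zone stores a CNAME,
# but the answer returned is the flattened A record. Names and the IP
# here are hypothetical.

RECORDS = {
    "www.reclaimhosting.com": ("CNAME", "lb.example.cdn.net"),
    "lb.example.cdn.net": ("A", "203.0.113.10"),
}

def resolve_flattened(name, records):
    """Follow CNAMEs until an A record is reached; return the IP."""
    rtype, value = records[name]
    while rtype == "CNAME":
        rtype, value = records[value]
    return value

# The client just sees an IP, as if an A record had existed all along.
assert resolve_flattened("www.reclaimhosting.com", RECORDS) == "203.0.113.10"
```

The payoff is that the target behind the CNAME can change at any time without the school ever touching their two records.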
And the thing that hit a light bulb for me when we were doing that is, wait a second, one of the struggles we have right now is how a lot of these servers are tied to an IP address. Hence when we wanna move a server or upgrade it, there's, like you said, a forklifting of the whole server to a new IP. Maybe people are mapping or pointing IPs from their cPanel account or shared hosting, or, a lot of times with universities, they're pointing an A record at a specific IP address for our cPanel servers. We gotta reach out to them if we were to change the server in some circumstances and be like, you're gonna have to change that record. With this setup, there is no changing of those records on the school side. It's all ours. So we can move that server, put it in a different data center, do something like multi-region, and not have to worry about going back to them and saying, you know what, to do this we're gonna have to move DNS, you're gonna have some possible wonkiness for 24 hours. All of that goes away. But not just for them, also for our infrastructure: when we wanna upgrade servers and stuff like that, we don't have to worry about communicating your IP address might change or you're gonna have problems on this front. Still in some circumstances, but in general it makes the DNS side of stuff amazingly easy. Yeah, it kind of gets back to that idea you were saying earlier, Jim, about how you're no longer relying on infrastructure changes. That's sort of happening and being manipulated above, you know, separate from the records themselves. So it does make that coordination with schools quite a bit easier, or, you know, even with Reclaim, with how we were changing our setup for the main homepage. So very cool.
Another thing too there, and again, as Chris and I were chatting quickly: I had put bavatuesdays on a multi-region setup and I had also load balanced it in Cloudflare, and we realized that the server we're using for both reclaimhosting.com and bavatuesdays has LiteSpeed as its web server. Chris, you were like, you know what? You can access the CDN from Cloudflare as well. The LiteSpeed Cache plugin allows you to use a CDN. It has built-in settings for that, for, I wanna say it's QUIC.cloud, something included within the plugin by default. But it also has settings that just allow you to put in a Cloudflare API key, and it starts working with the CDN. It was super easy. And the other thing I used was something that they have on top of the traffic balancing or the load balancing, it's in the Traffic section of Cloudflare, something called Argo. And Argo basically maps the shortest routes for your instance. It's a $5 a month charge for an account, but then it also charges you per request and per gig transferred. But I was saying, like, my site, it has heavy images, I don't optimize anything. It was loading in like 2.6 seconds. Like it was kind of like 1990s internet. And now it's loading in 0.6 seconds. So here's a question then that I have, because Argo is new to me. I haven't learned much about that, but how is that different from geo routing and using the load balancer to point you to the closest data center that you might be located near? Like, how is that different? So with Argo, and I think, Chris, I'm gonna let you talk about the different things you can do with the load balancer because this is on top of the load balancer. So the load balancer can do some of that stuff, and Chris, you could speak to that, but this is basically a service you pay for where it's almost like they do the math for the quickest routes. Like they have this gigantic network and they find the fastest routes for your site to be hit.
So it's premium and they're kind of playing on that, but it's based on, I guess, calculations they're doing. The load balancer by default, whether or not you have Argo (which is basically a service on top of that that makes things even faster), gives you a couple of options to run the load balancer with. And it's worth talking about that a bit. So what are those options, Chris? So I actually have it up right here, so I'm gonna be swiveling my neck for a little bit. So you can set traffic steering off, which will route things in a failover order. It will go to whatever server you have listed first, and if that goes down, it'll go to whatever you have listed second. You can set it to dynamic steering, which will go to whichever server responds the fastest. You can go to geo steering, which routes people to the nearest server geographically to them. There's proximity steering, which it seems does the same thing. I may be misreading it, but whichever one's closest in proximity. And then random, which will route based on whatever weighting you have. Sorry, I was gonna say, like round robin. Yeah. Yeah, it will route it however you have things set up. So you can weight the different servers and it'll randomly route between them. So for our dev site, we have two origins, two servers running the dev site, and we have each one of those set to a weight of 50%. So it'll randomly route between those two servers. Yeah, it's interesting too, because one of the things that you realize, and we've kind of run into with multi-region, is they have a lot of this stuff. It's almost like how DigitalOcean made managing droplets or VPSes super simple with their interface. That's kind of what Cloudflare is doing, I think of them almost like cousins, well, Cloudflare is managing more sophisticated DNS. Like everything AWS refuses to do to make these things easy, Cloudflare and DigitalOcean make quite simple. And I think that's a very powerful kind of push for them.
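The steering options Chris lists can be sketched as a small routing function. This models only the "off" (failover order) and weighted "random" modes, with example origins and weights, as an illustration rather than Cloudflare's real configuration:

```python
import random

# Minimal model of two traffic-steering modes: "off" = failover order,
# "random" = weighted random among healthy origins. Names are examples.

def steer(mode, origins, healthy, rng=random):
    """origins is a list of (name, weight); healthy maps name -> bool."""
    live = [(o, w) for o, w in origins if healthy.get(o, True)]
    if not live:
        return None
    if mode == "off":        # failover order: first healthy origin wins
        return live[0][0]
    if mode == "random":     # weighted random, e.g. 50/50 on the dev site
        names, weights = zip(*live)
        return rng.choices(names, weights=weights, k=1)[0]
    raise ValueError(f"unknown steering mode: {mode}")

dev = [("ny.example.net", 50), ("sf.example.net", 50)]

# With NY down, failover order sends everything to SF.
assert steer("off", dev, {"ny.example.net": False}) == "sf.example.net"
# Random steering always lands on one of the configured origins.
assert steer("random", dev, {}) in {"ny.example.net", "sf.example.net"}
```

Dynamic, geo, and proximity steering would replace the selection rule with latency or distance measurements, but the shape is the same: pick one healthy origin per request.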
So this is all laid out in the load balancing area, and you can choose that and select it with a simple kind of radio button or slider. The one that came up that we were also talking about before the call (we did a lot of talking before the call), one of the things is there's something called sticky sessions, and this is very important when you have WordPress multi-region, I think, or any multi-region. Someone will go to a site, and then when they come back, if you've done an update, the way caching works on these bigger sites, that could be cached for an hour or two. So you could go back and you could be like, oh wait, that's new. Oh wait, it's not on this site. So this actually puts a cookie in someone's browser so that when they return to the site, they're not randomly sent to another server that has the same site. They're sent to that same server. And I think that's something that will help us control any of the users reporting that they saw this on this site, but it was gone on the next paint of the site, and what's going on, is there a problem? That will actually help us manage some of that. And I think it's called sticky sessions, and they probably developed it, I imagine, for multi-region setups like this. Yeah, and before we turned that on, it made for a really interesting user experience when you went to edit, for instance, the main site. The other day I was playing around and updating events, and because we have a cache plugin installed on our site, I'm used to refreshing my cache often to be able to see those changes. And then every time I did that, it would ask me to re-log in, presumably because I was being pointed to a different data center, essentially, that hadn't tracked the changes that I had made yet. So I'm not sure how fast it's syncing those changes between the data centers.
And then Chris, I know you have a static site as well that's kind of sitting in the background that we haven't really talked about yet either. But I think that sticky setting that you were talking about, Jim, has definitely helped since then, because I can now see the changes that I'm making a little faster and I'm not having to re-log in every couple minutes. So that was a confusing start as well. I think, and I might be wrong, that we didn't turn on sticky on production yet. Right now we're just pushing people to one server predominantly, but the sticky we're gonna try on dev, which is the beauty of it: we have a full multi-region dev, so we'll make sure it works the way we expect and then we'll push it out. But the thing is that you can do that. Like, one server for reclaimhosting.com is enough, right? We have a big beefy server behind that one, and we have duplicated or replicated that. So should there be a moment, we just gotta direct everything to the one while we work through this. Even that makes the experience so much easier, rather than saying, oh, we gotta revert, we gotta revert everything because there's performance issues or something's not loading. It's like, no, we can just direct traffic to the one we know is working until. I do think, though, part of it is there's several layers of caching too. There's the caching with the multi-region server, there's the caching with LiteSpeed, there's probably a cache going on at Cloudflare. So there's like three different caches that are gonna make seeing updates difficult. And that also seems to be a similar theme with traffic and how you're routing traffic too. You have the load balancer, you have Argo, and you have this sticky option. And I'm assuming they're all playing well together. And it's really just that first load where everything is doing the work, right? And then after that, the caching takes over.
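Sticky sessions boil down to a cookie that pins a returning visitor to the origin they saw first, so a half-propagated update doesn't flicker between regions. A minimal sketch, with an invented cookie name and routing interface:

```python
# Toy sticky-session routing: honor an "origin" cookie if it names a
# valid origin; otherwise let the balancer pick and set the cookie.

def route(request_cookies, origins, pick):
    """Return (origin_to_serve, cookie_value_to_set)."""
    sticky = request_cookies.get("origin")
    if sticky in origins:
        return sticky, sticky          # returning visitor: same origin
    chosen = pick(origins)             # first visit: balancer chooses
    return chosen, chosen

origins = ["east.example.net", "west.example.net"]

# First visit: the balancer picks freely and a cookie is set.
origin, cookie = route({}, origins, pick=lambda os: os[1])
assert origin == "west.example.net"

# Return visit: the cookie wins even if the balancer would now pick east.
origin2, _ = route({"origin": cookie}, origins, pick=lambda os: os[0])
assert origin2 == "west.example.net"
```

The cookie only controls which origin serves the visitor; it doesn't speed up replication between regions, which is why the caching layers still matter.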
But I'm curious to know, when you make changes, what is prioritized in that tier of traffic routing? So it's interesting, and I didn't know this, Chris, and I found this out because I went ahead and made the switch on a Friday morning, and Chris is off on Fridays and Saturdays. And I felt like I'd been leaning on Chris heavily for all of this, so I was like, you know what? We're there. I still needed help with an rsync, but it worked, it worked beautifully. So credit to the work that was done. But what happened when I did that is Tom Woodward, who had been working on another project for us with our community site, which I'm sure you'll be talking more about in the near future, Lauren, he was working on that and it was pulling from reclaimhosting.com. And as soon as I made the switch, everything he was pulling in basically broke. Which is something we have to investigate with any schools we work with: who's pulling stuff from this site, and how could that affect these integrations? That's a big, important conversation. But to the point, Taylor Jadin, who works at Reclaim, basically did some investigation because he's working with Tom, and he's like, Cloudflare caches the header of the page for a while. So Cloudflare is doing some really deep caching to make this even more performant, and the site is more performant. And I think that is kind of what you're probably bumping up against there. And what we really need to do is find a clean way to be able to work in a cache-free environment for those updates, and then stick people on the sessions they're on so they don't see that wonkiness if they're coming back. Yeah, because we did a sync from our dev environment to prod a couple of days ago, and I think at the time we had some URL issues that we had to search and replace. And then I think even someone in Discord a couple of hours later was like, hey, I'm seeing this issue. We were like, yeah, that makes sense.
Changes have already been made, I think, but they're likely still cached. So the caching is awesome, but it can definitely slow down fixes to some degree. And so being able to say, okay, we need to just put this change out now because something is broken, and kind of work on top of that cache, I think would be ideal too. There's almost got to be like a big, red, Homer Simpson nuclear button. Refresh. Purge all cache, everywhere, forever, now. Well, since we are hooked into their CDN, I think purging all with the LiteSpeed settings does purge stuff on the CDN. True. So that might be the way to do it: a LiteSpeed purge. And I think that's something we'll just have to work through and figure out, like, how often are we making updates? If the sticky session is enough, it's really just us being able to test those updates, right? Definitely. And I think that's gonna have to be the workflow, as much as I would like to be able to sync from our dev environment more regularly. You know, it's just, I mean, we pushed out changes and literally the next day we had like seven plugin updates. So it's like, all right, let's do this again, you know? And I think that's just where we've got to find a balance. The third WordPress update this month, it's like, yeah. And then all the plugins push out updates for that. And so it's just, yeah. And all this for the full site editor, all this pain for the full site editor. But you mentioned it, Lauren, and Chris, I am fascinated to hear this. So let's paint the picture. We got multi-region, East Coast, West Coast. And then up in Canada we have a third scenario: should the East and the West Coast data centers get beaten up by Columbia hippies and Berkeley hippies and taken down, Canada's asleep. It's too cold for them to have any hippies running around. What do we have going up in Canada? What an intro.
So the Galera cluster, the very performative or performant, I don't know, words are hard, cluster for WordPress that we had set up runs in three data centers. The standalone only runs in two, but three is more than two, so we want that third. So I had an idea: we set up an NGINX container, just a container running a web server. Lightweight, it's not running a database, it's not running everything else. I wrote a script that relies on wget to mirror the site. And it just pulls it in once a day, creates a static copy of the site and sticks it in the web server directory. And it does that once a day. So it's, yep. Oh no, sorry, I keep interrupting you. I was just gonna say, so even if those two data centers go down, we can push to a read-only site that's updated daily, based on the cron job you've got. Yeah, so it doesn't update hourly or live or anything like that. The changes will be a little bit behind whatever is live, but it's static, it's read-only. And using Cloudflare's load balancer, we can set a fallback origin. So if the two live sites, the two WordPress instances, go down, it falls back onto the static copy of the site. And it works, and that's all I can ask of it. Yeah, and in some ways it kind of reminds me of that emergency landing page scenario, which we have received interest about from other schools, where it's like, if our sites go down, we need to be able to redirect traffic somewhere that is up. And this in some ways feels like a better version of just that emergency landing page, because it's still the site, it's still functional. And it's super fast because it's all flat files.
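The daily mirror job Chris describes could look something like the following. The wget flags are standard, but the exact paths, schedule, and helper function are a guess at the setup, not the actual script:

```python
# Sketch of building the daily wget mirror command that snapshots the
# live site into the static container's web root. Paths are examples.

def mirror_command(site_url, webroot):
    return [
        "wget",
        "--mirror",             # recursive download, re-crawls changed pages
        "--convert-links",      # rewrite links so the copy works locally
        "--adjust-extension",   # save pages with .html extensions
        "--no-parent",          # stay within the site
        "--directory-prefix", webroot,
        site_url,
    ]

cmd = mirror_command("https://www.reclaimhosting.com", "/var/www/html")
assert cmd[0] == "wget" and "--mirror" in cmd and "/var/www/html" in cmd
```

Run from cron once a day (something like `0 3 * * * /usr/local/bin/mirror.sh`), this keeps the read-only fallback at most a day behind the live site.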
And like, you know, even if the database goes down, it will stay up pretty much no matter what, unless whatever took the other two regions down gets it too, which means, you know, there's some apocalyptic scenario down in the lower 48 that is kind of making Canada the new world power of the Western hemisphere. And so- In that case, we are safe. And I am not looking forward to our new overlords. I am, I've got cousins in Dawson Creek in British Columbia, I'm good. But- I guess. But the big point about the static site is it ensures more or less 100% uptime. And I'm knocking on wood for that, because I never want to promise 100%, because as soon as I do, Murphy's law takes effect. But on top of this, we also have enabled Cloudflare's Always Online, which stores another static copy of the site in the Internet Archive. So if even our static site goes down, Cloudflare begins serving from the Internet Archive. So- Which is amazing. Like that's now the fourth failover. Yeah. I'm actually quite impressed, honestly, that Reclaim has gone this long without any sort of failover for our site. So it feels really good for many reasons to be able to do this, but I hadn't realized there was even a fourth option. So that's cool. I'm gonna ask you a question, Chris, that is silly, I could ask you offline, but I'm interested. Like, so do you have to turn that option on, or is it on by default? You do have to turn the option on. Where is this setting at? It's somewhere in Cloudflare's dashboard that allows you to turn on Always Online for the domain that you have set up, and it stores a copy with the Internet Archive. And if you look right now, reclaimhosting.com and reclaimhosting.dev are both in the Internet Archive. I wanna say, it depends on the plan that you're on, I think ours is a weekly snapshot of the site being stored there. Still, I mean, that's awesome.
And so we would be able to say, okay, our stuff is busted at the moment, we have to figure out what's going on, so in the meantime we're just enabling this with Cloudflare to- And they're basically pointing our DNS at the Internet Archive. So it actually looks like our site is loading over reclaimhosting.com. As far as I'm aware, we haven't had the need to load it from the Internet Archive yet, because when we do, everything is falling apart. The static site isn't working, the failover isn't working, the secondary site, nothing's working if it's loading from the Internet Archive. We're all eating rice. Well, if that's loading from the Internet Archive, like, shit has hit the fan. One thing I did wanna mention with the static site, though, is because it's a static site, there's some weirdness. I wasn't able to issue a normal SSL certificate for the static site like we could with the two WordPress installations. But Cloudflare offers self-signed certificates that are only good for communication between the origin and Cloudflare. I mentioned this in the blog post I did about doing the multi-region stuff. And it's worked, it's great. And again, that's all I can ask of it: it works. But so my question there is, when that happens, so the West Coast and East Coast for reclaimhosting.com go down and it reverts to Canada, will it show up as HTTPS for the user, or do we have to do something at that moment? It should show up as HTTPS for the user. And I can test this on dev after we get off this call, just shut down both WordPress instances and see if it goes to static. But users will be routed through Cloudflare. The SSL cert between the static site and Cloudflare, only Cloudflare recognizes. That's a good, it's really another good point.
Cause when I was playing around with some of this stuff on Friday when I switched over the site, one of the things I realized is, we deal with a lot of SSL certificates at the server level, and Virtuozzo has a really nice setup where we can add on and do Let's Encrypt. But once you're running through a proxy setup in Cloudflare, you no longer need those SSL certs, which takes a whole level of complexity out of the picture. Am I correct there, Chris? Yes and no. Cloudflare still does recommend that you put an SSL cert between them and whatever origin you're setting up, whatever server is actually hosting the site. Virtuozzo has some documentation on setting up the multi-region where they do recommend you issue a Let's Encrypt cert, but Cloudflare will also offer one just between them and the origin. But yeah, Cloudflare does put their own SSL cert between the user and them and then handles everything on the back end. Crazy. And I think, maybe as we kind of wrap this up, one of the things in my last post writing about what we've been exploring here with WordPress multi-region and Cloudflare is, you've done work previously in Cloudflare, Chris, getting us on a Zero Trust network and really starting to lock down our SSH access across Reclaim, which was a really big security measure and amazing work there. And I think that was our intro to Cloudflare in some ways. And now that I'm seeing what's possible on the DNS and management side and the load balancing, I increasingly think Reclaim, moving forward, every time we're doing a new server, every time we're offering a service, we should be looking at where Cloudflare fits in this, for DDoS mitigation, for CDN, you know, really start thinking about making our relationship with Cloudflare part of that setup and that integration, given everything it offers.
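[Editor's aside: to keep the cert discussion straight, Cloudflare's SSL/TLS encryption modes determine what, if anything, the origin server needs. The mode names below are Cloudflare's; the one-line summaries are our paraphrase:]

```python
# Cloudflare SSL/TLS encryption modes and what the origin needs in each.
# "Full" is why a self-signed or Cloudflare Origin CA cert is enough for
# the static site: only Cloudflare ever has to trust it, not browsers.
SSL_MODES = {
    "Off": "no encryption anywhere",
    "Flexible": "HTTPS user->Cloudflare, plain HTTP Cloudflare->origin (no origin cert)",
    "Full": "HTTPS end to end; origin cert can be self-signed",
    "Full (strict)": "HTTPS end to end; origin cert must be trusted (CA-issued or Cloudflare Origin CA)",
}

def origin_needs_cert(mode: str) -> bool:
    """Does the origin server need any SSL cert at all in this mode?"""
    return mode in ("Full", "Full (strict)")
```

In every mode except Off, the user's browser sees Cloudflare's own publicly trusted cert, which is why the failover to the static site still shows as HTTPS.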
It really makes it a value not only to us and our management but to, you know, the people we work with. And I think that for me is the light bulb of all the experimentation we've done over the last few weeks: this can now be an integral part of Reclaim moving forward. Yeah, even if, you know, the setups that we're doing are not multi-region, even just not being tied to a specific IP is enough, you know, to be part of Cloudflare. So no, I completely agree. And I like that, you know, Reclaim feels comfortable enough in the multi-region setups to feel like, okay, this is something we could probably do for other schools. You know, having that failover I think is really a big deal, and it's cool that this is kind of what we're thinking about right now. So. And, you know, no company is perfect except for Reclaim Hosting, but there is the idea that Cloudflare just launched a new service called R2, which, I love the name, like R2-D2, I imagine. And one of the things about R2 is it's basically Cloudflare's answer to S3, an object storage. And why they're pushing this hard is also behind the idea that Amazon's S3 charges egress fees, which can make that service extremely expensive. And Cloudflare is arguing that we need to get rid of that charge because we need to level some of the access around these resources on the web. And I kind of obviously buy into that, and you know, AWS serves a lot of the web and has become the de facto infrastructure for the web. And I think seeing other companies come up and provide a balance of what's the better way to do things, or how do we manage that, or how do we make things more affordable, is very healthy for the ecosystem. So that's another thing I really like about Cloudflare, and the fact that they're taking on a giant like AWS on something like egress fees, that's no small thing, good for them. Like I can get behind that fight. We might have to rename this call to We Heart Cloudflare.
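[Editor's aside: to make the egress argument concrete, here's a back-of-the-envelope sketch. The S3 rate used is the commonly cited roughly $0.09/GB internet-egress tier, for illustration only; actual AWS pricing is tiered and changes over time. R2's headline pitch is $0 for egress:]

```python
def egress_cost(gb_out: float, per_gb_rate: float) -> float:
    """Monthly egress bill for serving gb_out gigabytes at a flat per-GB rate."""
    return gb_out * per_gb_rate

S3_EGRESS_PER_GB = 0.09  # rough internet-egress rate, illustrative only
R2_EGRESS_PER_GB = 0.00  # Cloudflare R2 charges nothing for egress

# Serving 10 TB of downloads a month:
s3_bill = egress_cost(10_000, S3_EGRESS_PER_GB)  # about $900
r2_bill = egress_cost(10_000, R2_EGRESS_PER_GB)  # $0
```

The gap scales linearly with traffic, which is why egress is the cost that surprises people when a bucket gets popular.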
You know, there you go, that's Jim's heart. There it is. It's like a Minecraft heart. It's a diamond, like a diamond through the forehead. That's what Cloudflare is like for Reclaim. Great visual. It's from Apocalypse Now. That's what Kurtz says. And it hit me like a diamond through the forehead. I don't know if he says it exactly that way, but I like it. Exactly. Well, is there anything else that you all wanted to share today before we wrap up? Is there anything else you think we should share before we wrap up? Probably not the way you said that. No, I think this about covers it. Awesome. Well, thank you everyone for tuning in if you've made it this far. And I'm excited to see where we go from here. So it's only up. Off time. Uptime, there you go. Catch you later. It's only uptime.