Last session of WordCamp Toronto 2018. Guaranteed. This is Fast and Furious. My name is Doug Shepherd. Before I start, I'd like to thank the organizers of WordCamp Toronto for putting on an event. Even one that's just one day is a lot of work, and there are a lot of unexpected surprises, and they have done such a great job handling all of them. And I'd like to thank our sponsors, because without them we would not have pop, we would not have snacks, we would probably not have an event at all. There isn't a slide for it, but I would also like to thank the volunteers, because without them there also is no event. So the next time you see a volunteer, give them a high five. So, away we go. Who am I and why am I here? Well, as I said, my name is Doug Shepherd. I am a full stack developer. I've been using, for most of my life, the various P languages: PHP, Python, Perl and JavaScript. I have, in fact, been coding for money since 1983, when a classmate asked me to write a baseball card catalog for him on his Commodore VIC-20. Ask your parents, or feel old and then tell your kid what a VIC-20 is. These days, I do full stack development at a company here in Toronto called BiblioCommons. We do software as a service for public libraries in Canada, the US, Australia and New Zealand. I'm part of a team that works on the BiblioWeb product, which is a WordPress-based CMS. Today, I'm going to be talking about these four items: object caching, WordPress transients, Redis and the Redis Object Cache plugin, and that time that everything just sort of fell over. So first, what is a cache? We all know what a cache is: it's a thing you clear while you're debugging. But what is a cache, really? It's just a way of storing results from work that you've done so that you don't have to do the work again. It might be stuff that's frequently accessed, or it might be stuff that's very slow to generate. It's a way of trading space for time. And why do we need to trade space for time?
Because your computer's RAM is extremely fast, and every other part of the system is slow. Hard drives are slow. Even solid-state drives are slow compared to RAM. Networks are slow compared to RAM. Humans are the worst. RAM makes everything that isn't RAM look slow by comparison. So what we do is use RAM. Admittedly, we're still going to be using networks, because we need to get the information from point A to point B, and there are still going to be humans involved, but that's for another presentation. So what do you cache? Well, for the purposes of this talk, we're going to be talking about, for example, database queries, because some queries are extremely slow and they don't update very often. Some queries may be fast, but we use them all the time, multiple times in displaying even one page. And since a cache that is only half full is a cache that is half wasted, we also cache anything else that we have space left for. Another great thing to cache: external network requests. The BiblioWeb product talks to internal BiblioCommons services for things like staff lists from libraries. It talks to third-party services like Syndetics for book cover images. These requests are usually quite fast. But remember, compared to RAM, everything is slow. This is quick, but it is never quick enough, especially if that network request will block page render. So why wouldn't you cache? I think you need some reasons not to. The first one: remember, a cache trades space for time. If you're space-constrained, then a cache is not going to help you. The worst possible case is that your cache server doesn't have enough RAM, your physical box doesn't have enough DDR4 in it, and it starts to swap. Now you have the worst of both worlds: all the benefits of that slow, slow disk, and the RAM is not as useful as it could be. And if your database isn't under high load, caching may not make much of a difference. It won't hurt, but it won't help so much.
And finally, things that change all the time. You don't want to put them in a cache, because they're changing all the time. You would not, for example, cache the current time of day, because that changes literally every second. There's no point in caching it; you're just going to get it when you need it, at that second. So how does WordPress implement caching? The WordPress cache is just a key-value store. The keys are strings, and the values are just PHP objects. Anything that can be serialized can be put in as a cache value. The API will take care of serializing and unserializing for you. You don't have to JSON-encode it, you don't have to call serialize() or anything like that. You just throw it an object, and it will throw you back that object. The first function that you use with the API is wp_cache_set(). You pass it the key name, the data you want to put into it, an optional group name, and an optional expiration. The group is so that you can reuse the same key in multiple contexts. Instead of having to artificially manufacture keys like user_email_1 and customer_email_1, just put one in the user group and the other in the customer group, or what have you. The expiration time is measured in seconds, and that's when this key will go away. If you don't pass an expiration time, it just means cache this forever, for as long as you possibly can. There's also wp_cache_add(), which does the same thing, but only if the key isn't already there. And there's another function, wp_cache_replace(), that will only set the key if it already exists. These are just little convenience methods. Then to get your data back, you use wp_cache_get(): key, group, force, and a reference to found. The key and group we already know. Force we don't care about in this context. Found is a reference, because wp_cache_get() will return the value of the key, or false if the key doesn't exist in the cache, or false if the key exists in the cache but you set it to the value false.
If that matters to you, then you need to pass that reference to found and check it with === false to see which case you're in. To delete a key on your own, instead of letting it expire, you use wp_cache_delete(). It will return true if the key existed and is now gone, and false if the key never existed and therefore is already gone. Either way, the key is gone. We use all of these in what is called the get/set pattern. When you want to look up something, first check if it's already in the cache. If it is, use it. If it's not, calculate it, then set it in the cache, then use that value. That way it will be in the cache for the next request. It looks something like this. For example, this is one way that you might implement staff lists for a library. We want to store our lists in a cache key called lists, in the lib group. We wp_cache_get() it. If $lists === false, and I don't care about the stored-false case here because I know I'm never going to put false in there, then it doesn't exist, so do the slow network request to retrieve the staff lists, then set the key. And now it is there for the next time. One thing that is kind of a problem, and is going to be the next two thirds of this talk, is that the default WordPress cache is just an array. It is just an array. So the moment your page finishes rendering, it goes away. Poof, it's gone. It's still kind of useful, because there are a lot of things you refer to multiple times in the course of a page. But it's not as useful as it could be. One way to fix this is transients. Transients are a special type of cache item. They always persist, even if you haven't set up a separate object cache plugin. By default, they get stored in the options table for the blog. They're great for short-term persistent storage, like maintaining data across page loads instead of sticking it into hidden HTML fields that you have to parse, that an attacker can manipulate, and all that sort of thing.
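Before getting deeper into transients, here is roughly what that staff-lists get/set pattern looks like in code. This is a hedged reconstruction of the slide; fetch_staff_lists_from_api() is a hypothetical helper standing in for the slow network request:

```php
<?php
// Sketch of the get/set pattern with the object cache API.
// fetch_staff_lists_from_api() is a hypothetical slow network call.
function get_staff_lists() {
	$lists = wp_cache_get( 'lists', 'lib' );
	if ( false === $lists ) {
		// Cache miss: do the slow work once...
		$lists = fetch_staff_lists_from_api();
		// ...and store it so the next lookup is a fast cache hit.
		wp_cache_set( 'lists', $lists, 'lib', HOUR_IN_SECONDS );
	}
	return $lists;
}
```

The === false check is safe here because we know we will never deliberately cache the value false.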
And best of all, since all of you are running WordPress 4.8 or later, they clean up after themselves. It used to be that you would have to install a special transient cleaner plugin, or a cron job or whatever, that would occasionally sweep out expired transients. Now it just happens when a transient is looked up. It does the housecleaning for you. How do they work? They're really, really similar to the object cache API: set_transient(), get_transient(), delete_transient(). They don't have groups; they just have key names. If you want to do network-wide storage, there's a similar set of functions: get_site_transient(), delete_site_transient(), and so forth. One difference is that the key length for a transient is 172 characters, because it's stored as an option, and the option_name column is a VARCHAR(191) in MySQL. A transient is stored by prepending its name with _transient_, and the expiration it might have with _transient_timeout_. 191 minus the length of _transient_timeout_ is 172. And that is how you accidentally build dependencies between your database and your cache layer. So this is what it looks like. The first thing I want to do is get the featured item and set a transient that will stay around as long as possible, under the key featured. Or you can give it an expiration period: get the featured item, then set_transient( 'featured', $featured, DAY_IN_SECONDS ). DAY_IN_SECONDS is a constant that WordPress gives you so you don't have to multiply 24 times 60 times 60. You also have HOUR_IN_SECONDS, WEEK_IN_SECONDS, MONTH_IN_SECONDS, YEAR_IN_SECONDS. Always use these, because it's much easier to read DAY_IN_SECONDS and think, oh, this is a day measured in seconds, than to look at 24 times 60 times 60 and remember why you're doing some math. get_transient() is right here, again very similar. And delete_transient() to delete a transient, whether or not it's about to expire.
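Assembled, the featured-item example reads something like this. It's a sketch; get_featured_item() is a hypothetical expensive lookup standing in for whatever slow work produces the value:

```php
<?php
// Same get/set pattern, but with transients, so the value
// survives across page loads even without an object cache plugin.
$featured = get_transient( 'featured' );
if ( false === $featured ) {
	$featured = get_featured_item(); // hypothetical slow lookup
	set_transient( 'featured', $featured, DAY_IN_SECONDS );
}

// And to remove it early, whether or not it's about to expire:
delete_transient( 'featured' );
```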
And you use the same get/set pattern here that we did with wp_cache_get() and wp_cache_set(). If you remember one slide, this is it: transients, when you give them an expiration, have a maximum lifetime. They don't have a minimum lifetime. Don't treat them like they do. The API guarantees that at time X, your transient with an expiration at time X is gone. It guarantees nothing at all about any time before X. So I am about to show you an excerpt from the world's most technically compliant cache. There is nothing about this that is a lie. The transient API doesn't promise a minimum lifetime. If you write something that throws away the value the moment it gets it, that just assumes that every value doesn't exist, that's fine. That is perfectly acceptable. Transients don't have to exist; they just have to go away when you told them to. So now let's talk about Redis. We have reasons that we've seen for using an external cache. First, object persistence: we want objects that continue to exist even after the page finishes loading. And we don't want to keep everything in transients, because by default transients are stored in the options table. You have 5,000 users, each with one transient? Congratulations: you now have somewhere between 5,000 and 10,000 extra rows in the options table, just for that. For database queries there is MySQL's internal query cache, which is okay, but we don't want to depend on MySQL being fast, because every query is a chance to hit the disk, and disk is slow, and every query has the potential to be computationally intensive, and computationally intensive is slow. At BiblioCommons, out of the options that are available to us, we are using an external cache server called Redis. It's just a giant pool of RAM that you can pour stuff into. You run it on the same box as Apache, or on another box. It is extremely fast. Just on this thing, which is not a server, it's just a standard little MacBook.
I can do 50,000 operations a second on the benchmark. Redis is also super highly reliable. That is the website, redis.io. For comparison, this same machine benchmarks about 8,000 ops per second on MySQL. On an actual server, properly provisioned, you can get up to 250k or so with Redis. To install it, I'm not going to walk through the whole thing, because you all have different ways of installing things. You can compile it from source. You can use your package manager, apt-get or yum or port install or whatever, to get it from your operating system. Or you can do what we do and use Docker Hub, because we run all of our services as Docker containers. hub.docker.com/_/redis is where you can get a Redis container pre-built for you. Redis is, again, a key-value store, except the key can be up to 512 megabytes and the value can be up to 512 megabytes of binary-safe strings. For example, you can store an image as a value. If you want to build some sort of image caching server, just store the raw image. You could even use an image as a key. Redis doesn't have namespaces, but the tradition is to fake them by using paths separated by colons. So 2:usermeta:email means blog number two, user meta table, email. That sort of thing. Redis also, of course, lets you expire keys. Keys can be given an expiration in seconds, and they just go away magically at the right time. By default, anything that doesn't have an expiration is kept around forever. This is fine, because memory is infinite and never runs out. Let's try that again: this is not fine, because memory can run out. What you can do instead is tell Redis how much space you want it to use. If you're running on a 32-bit system, it's limited to 4 gigs. If you're running on a 64-bit system, there's practically no limit, because it's 64-bit.
And then, when you start running low on storage, Redis will start evicting keys, deleting them even though they're not currently due to expire, or due to expire at all. There's a setting called maxmemory-policy that tells Redis what to evict, and you have two decisions to make there: what type of key do you want to evict, and what subset of those keys do you want to evict? For the type of key: all keys versus volatile. A key is volatile if it has an expiration; all keys is all keys. And then the strategy it can use: random, or LRU, or TTL. It can just pick keys randomly; allkeys-random will pick from the entire namespace. Or it can delete the key that is least recently used: it's the oldest one that hasn't been touched, it's probably not that useful anymore, get rid of it. Or it can go with the one that has the shortest time to live, the next earliest expiration time. Because it's going to go away anyway in two seconds, let's just get rid of it now; if it needs to exist again, we will be able to bring it back. For recent reads? Yes, it actually tracks reads; it timestamps the most recent read on each key. What we do is volatile-lru: among keys that have an expiration, evict the least recently used. We also, in our plugin, set every key to expire in DAY_IN_SECONDS, just in case, as kind of a belt-and-suspenders thing. allkeys-lru will also work fine, and then you can ignore that last bullet point. So now we're going to do a third version of the same dance. I just want to make sure that you see that these cache services really do have a pattern. The Redis server exposes a command line interface. The command is called redis-cli, and here's an example of its usage: redis-cli -h your hostname, -a the password you set on the Redis server, and then a command. Or, if you run it without a command, it will just pop up an interactive prompt for you.
The same kind you get when you run php -a. How many of you knew that php -a does a thing? Give it a shot. So let's do a quick walk through the commands. SET key value, similar to wp_cache_set(). The key and value are both strings; they need to be quoted if they contain whitespace, which is just standard command line parsing. SETEX key seconds value: the expiration time is in seconds, just like in WordPress. You can also add an expiration to an existing key with a command called EXPIRE; SETEX is just a convenient shorthand for it. GET key. TTL key, time to live, tells you the number of seconds until a key goes away. It returns -1 if the key will never go away because it has no expiration time, or -2 if the key doesn't exist. And then there's something actually really useful: INCR and DECR take the value of the key and increase or decrease it by one. Except what they really do is this: Redis doesn't really have an internal number data type, so it takes a key, which is a string, interprets it as a number, adds one to that number, and stores the string. You don't care about this, but this is a great way to impress people at really lame parties. And finally, KEYS plus a pattern will return a list of all the key names that match the pattern. It uses glob-style patterns, not full regular expressions. You can do KEYS *. It works, but it's not a great idea, especially on a huge production system. If you need to do something like that, there is a set of commands based around the SCAN command, which I'm not going to get into right now. Now, we have our Redis process up and running, and this is better than the default WordPress cache, which is just an array that goes away at the end of page load, and better than transients, which are rows in your options table. But it's still not actually that much better, because we also need to persist to disk.
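Strung together in an interactive session, that command walk-through looks roughly like this (hostname and password are placeholders):

```
$ redis-cli -h cache.example.com -a s3cret
> SET greeting "hello world"
OK
> SETEX counter 300 "0"
OK
> TTL counter
(integer) 300
> TTL greeting
(integer) -1
> INCR counter
(integer) 1
> KEYS counter*
1) "counter"
```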
Because if we don't persist to disk, then the moment our Redis server goes down, we have lost our entire cache. So we need some way of saving, and there are two formats that Redis uses. The simple one is called RDB. It's just a binary dump of the contents of the entire cache to a file. The second one is called AOF, which is not as simple, but it's still really simple. It stands for append-only file. It is a list of every command that Redis has received since the server started up. If you save that to a file, then start up the server again and replay every command from that file, then by definition you will have recreated the exact state of the cache at the time you saved the file. There are commands SAVE and BGSAVE for RDB files. SAVE will block incoming server connections, so you almost certainly don't want to use that. You want BGSAVE, which will fork a child process that performs the save. You might want to do this overnight. The thing about RDB is that it's a binary dump, so if it gets scrambled, it might be unrecoverable. The AOF just gets synced every so many seconds by default; it happens automatically. Redis can also rewrite it after some percentage of your cache has changed, so that even if the interval hasn't elapsed yet but half the cache is different, you definitely save that. It just happens in the background, automatically. And because it's just a log of commands, if it gets screwed up, you still have everything in the cache right up to the point where the disk failed. Every command after that is lost, but you still have the first part. So which do you use? AOF is general persistence and short-term recovery; RDB is backup. A lot of people use AOF only. We, in fact, use AOF only, because it has been remarkably robust for us, and since we're using Redis as a cache, we're not going to have any data loss even if Redis goes away. We're just going to have to give back all that time we traded away for space. Things will get slow for a while.
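In redis.conf, the persistence behavior just described maps to directives like these (values are illustrative, not a recommendation):

```
# Enable the append-only file and sync it to disk every second.
appendonly yes
appendfsync everysec

# Rewrite (compact) the AOF in the background once it doubles in size.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# RDB snapshot rule: save if at least 10 keys changed in 300 seconds.
save 300 10
```

The maxmemory cap from a few slides back lives in the same file: for example, `maxmemory 2gb` plus `maxmemory-policy volatile-lru`.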
There's a whole bunch of other stuff, by the way, that Redis does. It's got all sorts of other data types, like lists and sets and hashes. There are a lot more commands you can use with it, and if those commands aren't enough, it even has a built-in Lua interpreter, so you can write your own Redis commands that run internally on the server. I'm not going to talk about those right now, because we're not using them yet, but they're available to you. Now, I just said that what happens when our Redis server goes down is that everything just gets slow. But let's say that you care about keeping things up. Remember that I called Redis highly reliable, which is a code word for fails. Not very often, but often enough. You need to turn highly reliable and sometimes fails into never fails, and the way you do that is multiple instances of Redis. Run one Redis server as the master, and another Redis server as a replica. All the replica does is listen to changes that come in from the master server and store them. If the master crashes, then your replica can just fall into place and take over. Just remember that never fails is also a code word. Never fails means that if you don't do something dumb, like run both of your Redis servers on the same box so that taking that one box down kills both of them, then it never fails. So don't do that. What would cause a Redis failure? User error, which will come up. Your server hardware fails. You have a hard drive crash, or a raccoon eats a fiber at your colo. We live in Toronto; you know that either this has already happened, or it will happen and be on BlogTO tomorrow. What you use for this is a package called Redis Sentinel. It does monitoring and failover. I'm going to be very brief about this; you can look it up yourself. It's not too complex.
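For the curious, the heart of a Sentinel setup is just a few lines of sentinel.conf. This is a sketch with placeholder names:

```
# sentinel.conf -- run three of these, ideally on three different boxes.
# "mymaster" is an arbitrary name for the monitored master;
# the trailing 2 is the quorum: two Sentinels must agree it's down.
sentinel monitor mymaster redis-master.example.com 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```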
Basically, you set up your two Redis servers, one master and one replica, and you set up three instances of Redis Sentinel, ideally in a fault-tolerant way, so that they're on three different boxes. Your client no longer connects to the Redis server directly, via that -h argument from the redis-cli command earlier. You connect to a Sentinel, and the Sentinel tells you what server to connect to. In the background, as far as you know as a user, everything will just continue to work. You need three Sentinels, by the way, because if you only have one, you don't have high availability anymore. If you have two and they lose their connection to each other, things get confused; or one can die, and you're back to the low-availability situation. With three or more, you're guaranteed to have a quorum. It never fails. So, we've done a deep dive into a whole bunch of stuff running on another box. We have Sentinels and all that. Let's get back to WordPress. How can we use this? And the answer is plugins and drop-ins. Drop-ins are the other WordPress extension mechanism, the one that you may not have heard of. All sorts of functions in WordPress core can be overridden by you. Core functions like the cache API, for example. WordPress will look for a drop-in that may contain definitions of all the functions in that API, and then fall back to setting up those functions if they haven't already been defined. The file that is the object cache drop-in is, of course, object-cache.php. It lives in your wp-content directory. So we use a plugin called Redis Object Cache. This is its URL. It is really well supported; the last time I checked, the last change to it had been two days earlier. It is actively maintained. And it works with Redis servers and Sentinels and Redis clusters. To set it up, you network-install it. Just install it on your site if you're not running a network; network-install it if you have more than one blog going.
And in your wp-config, you set up these constants: the Redis host, the port, the password if you've got a password on the server, and the max TTL, which is the time to live that will be given to every key where you don't set one. Like I said, we use DAY_IN_SECONDS by default. And if you're using Sentinel and replication and all that, the Sentinel and server constants. Then you just activate the plugin, go to its control panel, and tap Enable. And now it's going. It will cache nearly everything for you. It will cache most tables in your WordPress instance: users, posts, site meta, options. And transients will go from using the options table to using Redis automatically instead. There are a few things it ignores, like plugins, for example, because you'd have a chicken-and-egg situation: how do you know whether the Redis plugin is installed if you can't get to the plugin list, because it's on the Redis server that has gone down? Also, remember, option_name in the options table is a VARCHAR(191), and a Redis key can be up to 512 megabytes, so that 172-character limit I mentioned earlier no longer applies. So what does it do? Why does it work? It's the get/set pattern. The thing that I showed you before with the cache API and with the transient API, WordPress internally uses all the time. Every chance WordPress has to avoid using the database, it takes. And this is slide number two that is important. This is the other thing that will bite you. It has bitten me, so I'm giving you the warning that it will bite you. WordPress prefers cache over database, fast over slow. So if you want to make database changes, you can no longer just cowboy into a MySQL prompt and do a SELECT, INSERT, or UPDATE there. Because if you do that, Redis won't see it. And if Redis doesn't see it, your site won't see it. You will be serving whatever was in the cache at the time you made the changes.
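For reference, the wp-config.php block just described is a handful of constants. The constant names follow the plugin's documentation; the values here are placeholders:

```php
<?php
// wp-config.php excerpt for Redis Object Cache (values are placeholders).
define( 'WP_REDIS_HOST', 'cache.example.com' );
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_PASSWORD', 's3cret' );
// Fallback TTL given to every key stored without an explicit expiration.
define( 'WP_REDIS_MAXTTL', DAY_IN_SECONDS );
```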
WP-CLI, the WordPress command line, gives you a whole bunch of cool ways to update your database that are guaranteed to also update the cache as well. Really, look into it. Now, Gutenberg. Here's the thing: you know that this has been a heavily Gutenberg-oriented WordCamp, and a lot of the stuff that I've talked about doesn't seem to really apply. Or does it? First, what you know if you've seen some discussion of the internals of Gutenberg is that it stores block directives as HTML comments in your posts. Posts are stored in the posts table. Metadata for those posts is stored in post meta. And Redis Object Cache automatically caches those tables. You don't need to do anything; you are already caching everything. This is Gutenberg-ready, and has been for years before Gutenberg even existed. So how good is it? I took a raw WordPress image from Docker Hub, spun it up, did nothing but run the WordPress install, and looked at the page welcoming you to the Gutenberg editor with the debug bar on. Rendering that page took 24 MySQL queries. I installed and enabled Redis Object Cache, reloaded the same page, and it took six queries. We have seen our MySQL load drop by about 80% since we started using Redis and Redis Object Cache. And this is what that looks like. I can read the statements on the left. None of you can, but we can all read the numbers on the right. On the left, 24. On the right, 6. And how fast is it? I'm no scientist. I barely graduated from Bovine University. But I am able to give you a rough benchmark. I added an init action to this stock WordPress site that just sets 500 random transient keys and values, then retrieves them, then deletes them. This is what it looks like if you're curious, and this is what it looks like in table form. How much time do I spend in MySQL? Either 1,890 milliseconds or 6.8. How many queries do I send to MySQL? Either 2,530 or 6. In fact, the same 6 that we just saw.
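The init action behind those numbers is roughly this. It's a hedged reconstruction of the slide; the exact key and value generation is an assumption:

```php
<?php
// Set 500 random transients, read them all back, then delete them,
// to compare MySQL-backed transients against Redis-backed ones.
add_action( 'init', function () {
	$keys = array();
	for ( $i = 0; $i < 500; $i++ ) {
		$key    = 'bench_' . $i . '_' . wp_generate_password( 8, false );
		$keys[] = $key;
		set_transient( $key, wp_generate_password( 32, false ), HOUR_IN_SECONDS );
	}
	foreach ( $keys as $key ) {
		get_transient( $key );
	}
	foreach ( $keys as $key ) {
		delete_transient( $key );
	}
} );
```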
And how much time does the entire request take? 2.5 seconds versus 1.14. This is inflated a little so that there is a measurable difference, but even 1%, a million times over, is still a lot. In fact, one thing I forgot, so that you can learn from my pain: the first time I did this, the default stock WordPress was 1,000 times faster than Redis, which made no sense. And then I remembered that I was using the default cache, which is just an array that goes away. That's why it was 1,000 times faster, and that's why I had to use transients. So now let's run through a few quick ways to monitor the server and monitor your plugin. For this, we want to use redis-cli. It is a definitive source of Redis truth, and accessible to anyone who can talk to that port on your server, which means you don't put it on the open internet. Don't! Just don't! If it has to be accessible over the open internet, at least set a strong password on it. We don't do that; we have a VPN that you can't access at all from the outside world, and Redis is only available to servers inside that subnet. So, some commands that are really useful. INFO memory: INFO spits out a whole lot of stuff, and memory is just the memory block from it. Memory usage, key volume, expiration rate. The three interesting values there are used_memory, how much memory Redis is currently using; used_memory_peak, the maximum it has ever reached; and your maxmemory_policy. INFO persistence is about aof_enabled, which should always be enabled, plus aof_rewrite_in_progress and aof_rewrite_scheduled. These will all be booleans that are either 0 or 1 for false or true. And then, I mentioned WP-CLI earlier. When you install the plugin, you'll have an added command in WP-CLI called wp redis. wp redis status from the command line will tell you the current status of the object cache, whether it's enabled or disabled, and it'll tell you the implementation detail of which Redis PHP library you happen to be using.
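In the shell, those checks look something like this. The hostname, password, and output values are illustrative, and the output is abbreviated; used_memory, used_memory_peak, and maxmemory_policy are the real INFO field names:

```
$ redis-cli -h cache.example.com -a s3cret info memory
# Memory
used_memory:1483920
used_memory_peak:2097152
maxmemory_policy:volatile-lru

$ wp redis status
Status: Connected
```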
And wp redis enable will turn Redis Object Cache on and enable the caching. There's similarly a wp redis disable. So what we do is have a very simple job that runs in the server crontab. Not the WordPress cron, the server crontab. Every five minutes, it just runs wp redis enable. That way, if something temporarily breaks the connection to the Redis server, and the plugin sees that the connection is gone and disables itself, it'll turn itself back on within five minutes. For more serious outages that actually happen to the Redis server, we get a ping from our site monitoring. Don't worry, you'll know if it goes down. Because if you're using it for permanent storage, things are gone. And even if, like us, you're using it as cache storage, you no longer have a cache, and now everything is slow again. How does that happen? Here are two things that actually happened to us. Learn from my pain. First, you might fill up the disk. This doesn't seem very likely, but it can happen if you are using network storage and you don't allocate enough space. Redis requires disk space to write the AOF file and to occasionally compact it, rewriting it to reduce how much space it uses. It needs space for the existing AOF and the new one being written alongside it. As your AOF grows and grows, so does the time it takes to rewrite it and keep up, especially if there's a lot of activity or you're starting to run low on disk space. If it gets too far behind, it will not catch up, and it will fall over. In fact, this did happen to us once. And then, you can run out of memory, because you'll remember that we carefully struck through the word infinite. What happens is that Redis handles it pretty gracefully: sets will fail, because it can't add a new key to the database, but gets will continue to work.
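That five-minute watchdog is a one-line server crontab entry. The path and flags here are assumptions for a typical install:

```
# Server crontab (not WP-Cron): re-enable the object cache every five
# minutes, in case the plugin disabled itself after a transient
# connection failure to the Redis server.
*/5 * * * * cd /var/www/html && wp redis enable --quiet
```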
Redis will do what it can, but if it keeps growing too big, for instance if you have keys set to never expire, eventually the Linux out-of-memory killer will introduce itself to you. The OOM killer is a process the kernel runs to save you from yourself. When a system is so badly overloaded that either it's going to crash completely or some processes are going to die, the OOM killer says, I volunteer. It will see your Redis server, which is several gigabytes in size, kill it, and say, oh, that has freed up a lot of memory. And now you have no more memory problems. This is how we fixed those problems. It really is that simple: just have about two times as much RAM in the server as you have set your maxmemory directive to, and, just in case, keep three times as much disk space. Redis has not complained since, even though its current uptime is at least 400 days straight, without a restart, without a reboot. And I will now take any questions you might have. It's a plugin that I have on my server that I use. Memcached? That might be it. So how do I know the settings are the right ones? You would not configure LiteSpeed; you would configure the Redis server itself to change those settings. Not many. But what you're offering here, this is a plugin that we can pull from the repository? Yeah. Onto a WordPress site, or does it install somewhere else? You install the plugin on your WordPress site, and it points at a Redis server that you set up separately. It might be, it might not be. If it's been working for you, then it sounds like they've done the setup on your behalf. They've probably made sure that it has enough space, that keys expire, that it's doing backups and all that. But you'd have to ask them. Memcached? Memcached is basically the Pepsi to Redis' Coke. It's pretty much, yeah.
Pretty much this entire talk, I could change about 12 slides, and it would now apply to memcached.

[Audience: I turned this on and didn't see any significant difference, so it was sort of like, is this working? I'm not sure. I don't think so, because I already run caching and minify plugins, all kinds of stuff. So I didn't see a difference. If I were to go this route, would I possibly get a better outcome?] It's possible. Part of what's happening is that if things are already fast, they're already fast. We're running this because, on BiblioWeb, we have several million monthly active users. And like I said, taking one request and making it 1% faster, you might not notice that. But if you have a million requests and all of them get 1% faster, and all of them use 1% less MySQL and all that, that you'll notice. You might just not be under heavy enough load yet for the efficiencies to really kick in obviously.

[Audience: Well, the reason I'm interested is because the site, even though I've got everything tweaked...] Yeah. How many plugins do you have? Six, or 10, or 20 plugins? Have you optimized your load times? Have you optimized your images? [Audience: Yes.] How many resources are you loading? There are so many other factors. [Audience: I agree, and I've been examining all of that, to the point where I think maybe it's the theme itself.] Are you running in a shared hosting environment? [Audience: It's an enterprise host.] Is it shared or dedicated? He's probably running this on a dedicated server.

Some of these things are the things that you do when you have, like I say, tens of thousands or millions of active users, when you're serving petabytes per month. But some of it is useful regardless: all the stuff I talked about, if you're developing plugins, you will want to use transients, you will want to use the cache API, even if you never run Redis.
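That transient and object-cache usage follows a simple get/set pattern. A minimal sketch in PHP, where the cache key, group, and the "expensive" query are made up for illustration; wp_cache_get() and wp_cache_set() are the standard WordPress object cache functions that the Redis Object Cache drop-in backs:

```php
<?php
// Hypothetical example: cache a slow query result for ten minutes.
function my_plugin_get_popular_posts() {
    $key   = 'popular_posts'; // made-up cache key
    $group = 'my_plugin';     // made-up cache group

    // 1. Ask the object cache first. With the Redis drop-in active
    //    this hits Redis; without it, a per-request PHP array.
    $posts = wp_cache_get( $key, $group );

    if ( false === $posts ) {
        // 2. Cache miss: do the expensive work...
        $posts = get_posts( array(
            'orderby'     => 'comment_count',
            'numberposts' => 10,
        ) );

        // 3. ...and store the result so the next call can skip it.
        wp_cache_set( $key, $posts, $group, 10 * MINUTE_IN_SECONDS );
    }

    return $posts;
}
```

The same shape works with get_transient()/set_transient(); the difference is that transients fall back to the database when no persistent object cache is installed.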
Knowing about the get/set pattern and knowing how it works will still help you as a plugin developer. And as a plugin user, it will help you be more aware of what happens when something breaks. It helps to know what's going on under the hood.

[Audience: How does this work with other cache plugins, like WP Rocket?] It doesn't, really. If you pick one cache solution, then you don't need another. If WP Rocket is working for you and it's giving you the performance you need, then you don't need this. But if you switch over, then the other cache plugin is not going to work. They're both going to try to install that object-cache drop-in, and whichever one wins is the one that's actually going to be used, even if you have both of them activated in the control panel.

[Audience: So for a smaller site, like you were mentioning, is it better to go with Redis or with a plugin like WP Rocket?] For a smaller site, I lean toward Redis, but that's mostly because I'm a server guy. I'm the sort of person who says, well, I'll just drop a Docker image of WordPress in here, start up a container and do some benchmarking. If you're not there, then look at other cache plugins. They all work on the same principle of trading space for time. And if you find one that works for you and you feel like you've got your head around it, then that's fine. I'm not advocating for this product specifically. I'm advocating for: if things are slow, this will help a lot.

[Audience question about the web server.] It won't matter; it's whatever HTTPD you run. Nginx talks to WordPress, WordPress talks to Redis. There's no direct connection between the two.

[Audience question about benchmarking the stack.] We've got a lot of moving pieces in our application stack right now, so we don't really have the ability to do an isolated experiment like that. We've got Redis, we've got MySQL, we've got Varnish, we've got a load balancer, et cetera, et cetera. We might do that experiment someday, but we'd have to be set up for it to actually give us something useful.
Anything else?