OK, so we're going to talk about some more advanced sides of the render cache system. I always thought I was pretty knowledgeable about the cache API — I've been a Drupal 8 developer for a long time, and I'd add my tags and my contexts and thought I knew what was going on. But in the past year, I've been digging into the caching problems on a really complex site, and I found out that there's a lot more to it that I didn't know. So these are the topics that I wish I had known a year ago, that I learned the hard way. My name is Jody. I used to be the CTO and co-founder of Zivtech, which I did for many years. I'm currently the web development manager at Renesas Electronics. We have a really large Drupal 9 site — it's a semiconductor company, it's a really complex site, and we have a large Drupal 9 development team. When I came to the project about a year and a half ago, I was just like, well, this site is just really slow. We're going to fix this, because there's no reason to have a slow website. It's a problem that we're going to solve. And I really met my match. I really thought I was going to make the site fast in a couple of weeks, and it's like a year and a half later and I'm still fighting with the caching. Along the way, I was able to hire Janez as my expert consultant on the cache system — I'll let you introduce yourself, Janez. Yeah, I'm Janez. I am with Tag1. I used to be the D8/9 media initiative lead. And a few years back, I also worked at examiner.com, which used to be one of the largest groups of websites on the internet. It was still D7. That was fun. And if you don't know about Tag1, we work in many technologies, but primarily Drupal, and we have probably the highest concentration of core contributors. We also fund a full-time person to work on Drupal's health — on Drupal.org, on infrastructure, and all those things. So we really try to invest back into the community that made us successful. And we're always hiring.
If you are looking for a new job, let me know. And yeah, we were brought on to work with Jody and Renesas to try to help them improve performance. We also had some help from Fabian, who's one of the creators of the cache system — Fabian Franz, who's our director of technology, I believe that's his official title. He's contributed a lot to core and is really knowledgeable. So the cache API that we're talking about — that is, the render cache system with its system of tags and contexts — was introduced with Drupal 8. Previous to that system, in Drupal 7, there really wasn't a caching system for authenticated users. There was no real way to have parts of your page be dynamic for authenticated users and still have good caching on your page. So this system that was created for Drupal 8, our render cache API, is an incredibly powerful system that lets us have caching not only for anonymous users, but for authenticated users as well. And out of the box, the Drupal 8 caching setup works pretty well. You'll have a pretty good cache hit rate so long as your content is not updated too often, you don't have a crazy number of different pages on your site, and your audience is mostly anonymous users — then it'll all work pretty easily. So the problem that we have with Renesas.com is: it supports authenticated users, which makes caching more complex. The content is edited frequently, all day long — like every five seconds the content's getting updated, because we have scripts that are updating content en masse, and a whole team of editors editing content. We have thousands of pages. And we have 27 different region and language URL variations — we have those region/language prefixes because we're a multinational company — and that makes it much harder to get page-level cache hits, because now we have 27 times more pages. We have tons of custom code.
We have heavy use of the Views module — and we'll get into Views specifically and how it works with caching. And we have these really large, complex pages that have hundreds of content references on them. So we kind of had every possible caching challenge. And then, of course, we had a team of people who had originally built the site in Drupal 6 and then upgraded it to Drupal 8, and nobody had ever really taught them how the cache API worked. So they didn't really know what they were doing with the cache API, and we had just every kind of bad practice that you could have in the custom code. So what we're going to go through today: we'll review some of the basics — hopefully you know the basics, because otherwise it's going to be a lot, but just to bring them fresh to mind — review the layers of Drupal caching, because they get pretty confusing, and then talk about how caching often goes wrong, how the placeholdering system works, how to debug your caching, how to log your caching with the Cache Metrics module, and then some specific issues with the Views module and caching. So, some of the basics of how this system works: it's all about the tags and the contexts. The caching works on render arrays. A really common example of a render array that you would need to add caching to would be a custom block plugin — you're going to output that as a render array, and you need to add your tags and your contexts into that render array to make sure it's cached properly. So tags are what's used for invalidation. All the render arrays get these cache tags, and those are saved with them in the cache. Then Drupal fires cache invalidation events, which clear things based on cache tag. So if your data had a cache tag of node:11, and then your editor went in and edited node 11 and hit Save, then everything that had a cache tag of node:11 would get wiped out of the cache, because it's now stale, and it would get recreated.
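To make that concrete, here's a minimal sketch of what those tags and contexts look like on a render array. The markup and the node ID are just illustrative, but the `#cache` structure is the actual Render API convention:

```php
<?php
// Minimal sketch of cache metadata on a render array. The markup
// and the node ID are illustrative; '#cache' is the real convention.
$build = [
  '#markup' => '<p>Teaser for node 11…</p>',
  '#cache' => [
    // Wiped from cache whenever node 11 is saved.
    'tags' => ['node:11'],
    // Varies per route, since the output is page-specific.
    'contexts' => ['route'],
  ],
];
```

In a custom block plugin, this is the kind of array you'd return from `build()`.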
So in that way, Drupal always knows exactly what to clear out of the cache when somebody edits content or makes some other type of config change that would affect the page and make that cached piece stale. You don't normally have to write your own cache invalidation events, because the ones in core already cover most cases — the typical ones just invalidate things when someone edits content and so on. There might be a case where you're doing something really custom and you have to add your own cache invalidation logic, but usually all you're doing is adding your cache tags. Then cache contexts are the system for variations in the cached data. A really common context that you stick on a chunk of a render array is route, because a lot of times your block has to be different on each page, because it's showing something specific to that page. Sometimes you have to vary things by language, or even per user. But the more specific you make these contexts, the less likely you are to get cache hits, because now somebody has to have hit that specific context variation before it's cached. And if you don't use a context you need, then it just gets cached for the first situation someone saw it in, and then you're stuck with that. There are actually default required contexts that are configurable in your services.yml file. The defaults are languages, theme, and user permissions. Those get applied to every single render array by default, so you don't have to add the language context — it's already there. The other property, in addition to tags and contexts, is max-age. Max-age you don't normally need to use. The default is -1, which means it's cached permanently. Since the tag system is really smart and can clear things out at the right time, you normally shouldn't have to set a time-based expiration. There could be some situation where you do, but in general you don't use max-age that often.
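For reference, those defaults live in core's `default.services.yml`, roughly like this (abridged); the same `renderer.config` parameter also holds the conditions core uses for auto-placeholdering, which comes up later:

```yaml
# From core's default.services.yml (abridged) — override in your own
# services.yml if you need different defaults.
parameters:
  renderer.config:
    # Contexts applied to every render array automatically.
    required_cache_contexts: ['languages:language_interface', 'theme', 'user.permissions']
    # Conditions under which core auto-placeholders lazy-built elements.
    auto_placeholder_conditions:
      max-age: 0
      contexts: ['session', 'user']
      tags: []
```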
And then we come to the concept of cache metadata bubbling. Cache metadata is the name for all three of those things together: tags, contexts, and max-age. When you're thinking about bubbling, I think the best model is to imagine a single page as a tree structure, where the root of the tree is the page, and each element is a child. So each block would be a child, and then inside each block you have, I don't know, an image, a piece of text — these are the children of the block, and so on and so on. The smallest pieces on the page are the leaves of this tree. And what's going on is: when you add some cache metadata to any of the children, then at the end, when the page is being rendered, all the metadata that is on the children will bubble up. So if you have a block on a page that has an image, and on that image you added a cache tag to clear it when node 123 is updated, then that cache tag won't only apply to the image — it will also apply to the whole block and to the entire page. So everything that you add will eventually end up influencing the page's cacheability. And that's really important, because if you add something that causes problems on one element, it will affect the entire page. If you have a link that changes per user, and you have to add the user cache context to that link, you've potentially made the whole page basically uncacheable, because you may now have to vary the whole page per user. That's not a very effective caching strategy. And this is why thinking about bubbling is really important: when you do things, you're not only affecting that small element on the site, but potentially the entire page. It makes it really easy to break your caching. Yes — there is one tool that we have that breaks this bubbling, which is called placeholdering, and we will talk about that a little bit later.
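The bubbling rule itself is simple enough to sketch in a few lines of plain PHP. This is a toy simulation, not core's actual code (core does this via its cacheable metadata objects): tags and contexts are unioned upward, so a per-user context on a deep leaf ends up on the page.

```php
<?php
// Toy simulation of cache metadata bubbling (not core's real code).
// Tags and contexts of each child are unioned into the parent.
function bubble(array $parent, array $child): array {
  $parent['tags'] = array_unique(array_merge($parent['tags'], $child['tags']));
  $parent['contexts'] = array_unique(array_merge($parent['contexts'], $child['contexts']));
  return $parent;
}

$page  = ['tags' => [], 'contexts' => []];
$block = ['tags' => ['block:support'], 'contexts' => []];
$link  = ['tags' => ['node:123'], 'contexts' => ['user']];  // per-user link

$block = bubble($block, $link);  // the link's 'user' context hits the block…
$page  = bubble($page, $block);  // …and then the entire page
```

After these two merges, the page itself carries the `user` context — which is exactly the "one link made the whole page per-user" problem described above.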
A lot of times, you'll see people setting max-age 0 on some element, usually because there are bugs related to caching and they don't know how to fix them. So they just put max-age 0 on it, which can potentially make the entire page uncacheable, which is not what we want. And this is really problematic. And then cache contexts that cause too many variations, like user and session, are also very problematic. So when using these contexts, or max-age 0, we really should stop and think about it and figure out whether we really need them, or whether there is some other way to do it in a more efficient manner. And yeah, lazy building slash placeholdering is a tool that helps us make caching more efficient in situations like this. There is also auto-placeholdering in Drupal core, where Drupal tries to identify pieces that are bad for caching and will auto-placeholder them if a few conditions are met — we will come to that part. OK, so just to review all the different cache systems that you're dealing with in Drupal: you've got the Drupal core systems. The first layer is the render cache. That's the fact that each render array, with its tags and contexts, can get cached. Well, it's a render array, but what gets cached is the actual markup after it's rendered — that's the point of it being cached. So every single little bit — blocks and other render arrays on your page — is separately cached. That's the render cache. Then the other things build up from there. The internal page cache, which is pretty much the same type of cache that we had in Drupal 7, is page-level caching that only works for anonymous users. It's a core module that's enabled by default. It does use the cache tag system, so it knows when to invalidate the entire page, figuring out which cache tags are on the page by bubbling them up from everything in those render arrays.
So each render array has a bunch of cache tags, and by the time you get to the whole page, you have a whole big list of cache tags. And if any of those cache tags gets invalidated, that flushes the page. So that's a lot nicer than Drupal 7, where it would just flush the internal page cache anytime anybody edited or added any content, because there wasn't this system of specific cache tags. Then there's the dynamic page cache. That system is new in Drupal 8. That's the caching that works for authenticated users as well as anonymous users. That system uses cache tags and contexts, and in the same way it bubbles up the tags and contexts from the render arrays until you have an entire page, with its tags and contexts, that can get fully cached — even for authenticated users. Now, there might be parts of the page that are user-specific — it says "Hi, Jody" in the corner. Well, you can't cache that and serve it to every user, but that's where placeholdering gets into it. So the dynamic page cache can cache most of the page, then process the things that couldn't be part of that cache and add those in separately. So it can kind of cache the whole page and then fill in the rest. If we're using a reverse proxy — say, Varnish or a CDN — do we need the internal page cache module, or do we want that disabled? Yeah, so the internal page cache isn't really going to do much for you if you have a CDN, because it's just going to be a kind of redundant layer doing something similar. Like, we have ours enabled, but it doesn't really make much difference — it's the same type of cache at the same level. And then BigPipe is a core module that builds on top of the dynamic page cache. It adds a way of streaming in the placeholdered elements in a different and faster way.
So once you have your caching working well with the dynamic page cache, you have the option to enable BigPipe, which lets you stream your page and bring in the placeholdered things later in the stream as separate chunks. Instead of rendering the cached page, then rendering these non-cacheable pieces, shoving them in, and then giving it all to the user, it gives you the fast cached part immediately. So that's a great tool for authenticated users as well. And then there are the external page caches — the systems outside of Drupal. A lot of people have Varnish and/or a CDN, like Akamai or Fastly or Cloudflare. These external systems can be integrated into the cache tag system. There's a module called Purge that a lot of people use to help integrate those, and there are some CDN-specific contrib modules. But basically, the idea is that you include the cache tags in the HTTP response headers, and then, when there's a cache tag invalidation event, you have Drupal reach out to Varnish or the CDN and let it know it needs to clear everything with those cache tags. So if you have this all set up, you can see that all of these layers use the cache tag system. If your cache tags are right and working well, you can achieve caching that never serves stale content, which is really important. Because even if users can live with stale content, a lot of times editors can't. It's impossible for them to do their job if they can't see the changes they just made. And it's not just them being confused about caching or something — they can't do their job unless they can validate that what they did looks right to the end user, so they can move on to the next task. So it's very necessary that you can serve fresh content and make sure that your site is correct.
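As an illustration of what that looks like on the wire: this sketch uses Drupal's `X-Drupal-Cache-Tags` debug header as the example, with made-up tag values; CDN integrations typically carry the same list under their own header name (for example, Fastly's `Surrogate-Key`):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Drupal-Cache-Tags: config:system.site node:11 node_list rendered
```

When an editor saves node 11, Drupal emits an invalidation for `node:11`, and the purger tells Varnish or the CDN to drop every cached response whose tag list contains it.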
Yeah, so this is the list of the most common problems that we see when we do performance audits of Drupal sites. And the single most important one is max-age 0. I've seen sites that had this in so many places that basically the entire site was uncached. And usually why this happens is you have a cache-related bug where things are behaving strangely — things are not updating when they should, or even worse, if your cache contexts are wrong, you will in some situations serve the wrong content to the wrong people. These kinds of bugs are usually really tricky to reproduce and to fix. And a lot of times people are just like, OK, max-age 0, done, fixed. But that is not a fix. That just masks the bug and makes your problems even bigger. So never, ever, ever do that. There are some situations where you actually need parts of the page to update very frequently, so in those cases maybe it's a valid option. But even then, it's usually better to at least cache for a few seconds, or 10 seconds, or whatever works — it's still better to cache for a short period of time than never. And if you have pieces that need to update that frequently, then you also have to placeholder them, because otherwise the whole page will be affected. The other problem is missing cache tags. If you have custom code and you forget to include tags for everything that affects a piece of markup, then your things won't update. And that usually ends up with max-age 0 or some other half-solution. This mostly comes up in custom code: when you're building custom code, you have to think about what data you're taking in to build this piece of markup, and you have to include cache tags for all of that data. Otherwise, at some point, it won't update correctly. Then, besides not adding all the cache tags, a problem can also be using tags that are too general. One example: there are entity list tags in core.
user_list, node_list, taxonomy_term_list — these cache tags are invalidated every time any entity of that type is updated. So if you have a site that uses nodes for everything, and you have these list cache tags on your elements — which Views does — then every time you update any node, everything gets invalidated, whether it's affected or not. Which basically wipes out the entire cache. So yeah, try to avoid this. And for Views, there's a module that we'll recommend later that helps solve this problem. With cache contexts, the problem is usually either not using all the cache contexts that you need — here, you have to think about the situations in which this piece of markup you're building will be different, and you have to cover all the possibilities with the correct cache contexts. For example, if you have a custom block that takes a query argument from the URL and does something based on the value of that query argument, you have to include a cache context for query arguments. If, as Jody mentioned before, you're doing something different in your block based on the route, then the route cache context needs to be used, and so on and so on. The other problem with cache contexts is using contexts that vary too much. Examples of this are basically user and session — these two are the most problematic. But even the simple url cache context can be problematic. You want to try to be more specific. An example here: instead of going with url, if it's query arguments that affect the content, use url.query_args. And you can go even further if it's just one query argument — some cache contexts accept a parameter, and at the bottom of the slide you see an example of that. With the url.query_args cache context, you can tell it which specific query argument you're interested in.
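In render array terms, the parameterized version from the slide looks roughly like this ("sort" is just an example argument name):

```php
<?php
// Sketch: vary only on one query argument instead of the whole URL.
// 'sort' is an illustrative argument name.
$build = [
  '#markup' => '<ul>…sorted product list…</ul>',
  '#cache' => [
    // One cache variation per value of ?sort=…, and nothing else.
    'contexts' => ['url.query_args:sort'],
  ],
];
```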
And now the markup will vary only when the value of that specific query argument changes. If you just use url, then anyone who comes in with a different query argument creates a new variation, and it'll be a mess. Sometimes people use url when they meant route. They're like, oh, I just need to vary it by URL. That means that when somebody comes in through a Google ad that has a little query string at the end, they're going to get a cache miss. So you're paying for these Google ads, and as soon as people click them, they get the slowest page in the world — a cache miss every time, because of that little unique string they carry. I have to also mention that this can be an attack vector for a DDoS attack. If somebody knows that you are rebuilding caches for every different value of a query argument, it's very easy to just change it all the time; you will be constantly rebuilding caches, and your site will go down pretty quickly. Yeah, we've already mentioned the topic of placeholdering a few times. Placeholdering is a tool that we can use when we have high-cardinality contexts — that is, markup that varies a lot — or when we have high-frequency invalidation, where we invalidate things really often. Things like that are problematic in terms of caching the entire page. If you have pieces like that, we can use placeholdering to basically pull that piece of markup out of the general markup, which is then cached separately. We then use placeholders to render these problematic parts separately and put them into the places where the placeholders were left. And when you have placeholders, BigPipe can leverage that: it can deliver most of the page initially and then serve the placeholdered pieces separately, and the perceived performance is much better. There is also automatic placeholdering available in core, which works if you are using lazy builders.
And then if Drupal detects that there is a piece that is lazy built, and any of the problematic contexts or tags are attached to it, it will automatically placeholder it. You can do it manually — and I will show you how — but if you don't, and you don't specifically say that you don't want it placeholdered, Drupal will try to be smart and do it for you. One example in core where this is used is blocks. Blocks are built through a lazy builder, and if you have problematic things on a block, Drupal will auto-placeholder it. One thing that we just learned yesterday is that the Context module, when it's serving blocks, doesn't do lazy building, which means that auto-placeholdering doesn't work — and there is a patch for that. So if you're using Context, use that patch. It will help. Now, an example of placeholdering. Can anyone guess which piece on this page will be placeholdered? Which piece will change very frequently? Right — the message, yes. The nature of a status message or error message or warning is that it's only meant for a single user, and usually just once. So if we cached this page together with the message, we would basically need to cache per user, and we don't want to do that. And that's why, if you look into the render element for status messages, it won't build the message in the main build function — it will only apply this lazy builder callback and say "create placeholder". A lazy builder is basically just telling Drupal: OK, I don't want to build this part of the render array just yet; you can do it later. And in order to do it, just call this callback, and it will return the render array that needs to be there, so you can use it when you're rendering the page. And if you don't placeholder it, and Drupal doesn't decide that it wants to auto-placeholder it, then when the render array is being rendered, this callback will be called, injected into the bigger render array, and rendered as normal.
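A rough sketch of that pattern — the callback name here is made up, and in real Drupal the callback also has to be registered as a trusted callback, but `#lazy_builder` and `#create_placeholder` are the actual render array keys:

```php
<?php
// Sketch of a lazy-built, placeholdered element. The callback name is
// illustrative; in Drupal it must also be a trusted callback.
function mymodule_build_status_area(): array {
  return [
    '#markup' => '<div class="messages">…per-user messages…</div>',
    // Fine here: the 'user' context stays isolated to this piece.
    '#cache' => ['contexts' => ['user']],
  ];
}

$build = [
  'messages' => [
    // Don't build this now — call the callback later, in place of
    // the placeholder left in the cached page.
    '#lazy_builder' => ['mymodule_build_status_area', []],
    '#create_placeholder' => TRUE,
  ],
];
```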
But if you have a placeholder, then Drupal will not call that during the main render at all — after it has built the page and cached it, wherever it needs placeholders, it will render them separately. And if you go to the next slide: here we see the example of this lazy builder callback, and it's basically just a standard render function that returns a render array. So if you know how to work with render arrays, you can use this tool. You just declare a function that returns a render array, and that's it. You obviously also have to provide cache metadata for it, so this piece can also be cached — probably not as efficiently, but you can still cache it, right? And then if you go on — here I have two examples from the Renesas page where we used placeholdering. At the top, we have the top bar of the site. I'm not sure if you can see it, but in the top right corner there is a cart icon with a number, and this icon counts how many products you have in the cart. As you can imagine, the entire top bar can be cached pretty efficiently, because it's more or less the same for everybody. But this cart count will change basically per user. So we placeholder that icon. And in the support block, we have a few links that are basically static. But then we have this link that says "subscribe to document updates", and depending on whether you are already subscribed or not, it will say subscribe or unsubscribe. Which means this link will vary per user. And if Drupal were to auto-placeholder that, the user cache context from that link would bubble up to the block, and then the whole block would be placeholdered by the core block system. But if you lazy build and placeholder just that link, then the block will still be cached efficiently, and only that link will be placeholdered. So that's a better result than placeholdering the entire block.
There are a few tools in Drupal core that you can use when you are debugging problems related to caching. One is the cacheability headers. You enable this parameter in your services.yml — and you don't want to enable that in production; you usually do it in a local environment, or maybe on a UAT server or something like that. Then Drupal will add headers with all the cache metadata to your responses, and that lets you see which tags and which contexts are attached to that page. And here we can see, I hope, that there is a user context in the list — and when you see that, you have to wonder: why is this page varied per user? Can I avoid that, and how? Well, this is very useful when you're thinking about a single page as one big piece. We also now have a tool that lets you debug the individual pieces of the page separately. That one was committed just recently — it will be part of 9.5. We are using the patch, since we're still on 9.4. We really recommend it; it's really nice. When you have the patch, or when 9.5 comes out, there is another parameter in services.yml that lets you enable it. And then, if you go to the next page: when you have that enabled and you open the source of the page, you will see these comments above every element that was render cached. Each block, each article, whatever you have — each piece that is cached, you will get this information for. And what you will see is whether it was a cache hit or not. If it was not a cache hit, you will see how long it took for this piece to render, which is really useful if you have a page that takes a long time to load. Usually it will be just one block that is responsible for that. So you can enable this, scroll through, find the one that took the longest, and start looking into that. And then you will see all the cache contexts this element used, and all the cache tags.
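In services.yml terms, the two debug switches look roughly like this — local or UAT only, never production. The renderer `debug` flag is the 9.5+/patched one just described; note that since the `renderer.config` parameter is replaced wholesale, you'd keep its other keys from default.services.yml alongside it:

```yaml
# development.services.yml — sketch; do not enable in production.
parameters:
  # Adds X-Drupal-Cache-Tags / X-Drupal-Cache-Contexts response headers.
  http.response.debug_cacheability_headers: true
  renderer.config:
    # Emits per-element HTML comments with hit/miss, render timings,
    # tags, and contexts (Drupal 9.5+, or the backported patch).
    debug: true
```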
It's also separated into pre-bubbling and post-bubbling, so you will see which cache metadata was added to this element itself, and then which cache metadata was on the element after bubbling happened. So you will be able to see, for example, which tags were added on a block and which tags came up to the block from something beneath it — like from an image that was displayed as part of that block or something like that. And this is useful when you are trying to figure out why you have a user or a session context somewhere. If you see that it only appeared post-bubbling, then you know that some element on that block caused it — it's not the entire block, but something further down in that tree structure we talked about before. So these patches are really nice when you're dealing with caching and performance problems. I also sometimes look at these because, if you're deploying a hotfix and you'd like to not clear the entire cache for your site, these tags will tell you what you can change to clear out just the thing that you fixed. So if you had a bug in a certain block and you're like, I really need to clear the cache for just that block, you can see in these tags: oh, if I edit the system site configuration, I'll just go in there and hit save, and that will clear out this block — so I don't have to do a full cache clear. Can you do that with Drush as well? Yeah, it's drush cache:tag or something like that. What are these keys all about? You didn't mention those before. Yeah, keys are basically the name of the cache item when it's stored in the cache backend. Keys are also set on the render array, and just as you can set tags and contexts, you can also add keys to any render array. What that does is: if you add cache keys to something, it will be cached separately.
So if you have a block, and the block has some general text that doesn't change much, and then below that you have something that changes very often — you could add cache keys to the thing that changes very often, and then it will be cached separately: the content of the block will be cached, but the thing below it will also be cached as its own item. Usually, when you're writing custom code, you don't need this; it's for very specific situations, but that's basically what it does. And you have to be careful if you do it: if you use cache keys that are already used somewhere else, you will have a collision. Yeah. So then the other side of debugging this: you can see those cache tags and cache contexts locally and see what's going on, but that doesn't tell you the whole story, because you still need to see what's actually happening on your production site in terms of cache invalidations. It doesn't really matter what the cache tags are without knowing what's going on with the invalidations — you don't know that you have a problematic cache tag if you don't know what your invalidation pattern looks like. So there's a fairly new module, from the past couple of years, called Cache Metrics, by Moshe Weitzman, and it sends logging of what's happening on production into New Relic. This really helped us solve some problems that we just couldn't explain, by letting us see what was going on in production. What the Cache Metrics module does — and it's pretty much an enable-it-and-you're-done type of module — is log every cache invalidation event. So when an editor saves a node, or something else creates a cache invalidation event, it logs that. So you can see all of that data, see the patterns of which tags are getting invalidated a lot, and then compare that to the tags that you're using. Because only together can you get the full picture.
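The cache keys idea from a moment ago, sketched on a render array — all the names here are made up:

```php
<?php
// Sketch: give the fast-changing part of a block its own cache entry
// via cache keys. All names here are illustrative.
$build = [
  'intro' => [
    '#markup' => '<p>General block text that rarely changes…</p>',
  ],
  'feed' => [
    '#markup' => '<ul>…frequently updated items…</ul>',
    '#cache' => [
      // A distinct key set => a separate cache item. Keys must be
      // unique, or you'll collide with another cached item.
      'keys' => ['mymodule', 'activity_feed'],
      'tags' => ['mymodule:feed'],
    ],
  ],
];
```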
It also tracks the cache hits and misses, so you can understand how good your cache hit rate really is and which things are not getting cached. So this is a dashboard that I set up in New Relic using that data. In here, I can see which cache invalidations are happening within an hour — or I can change the time range — and I can see, oh, this user_list cache tag is getting invalidated a lot. And I found out that it gets invalidated every time a user logs in. So anytime any user logs into the site, user_list gets invalidated. Then we found: oh, wait a minute, we have the user_list cache tag on tons of our content. It's all over our site. So every single time someone logs in, it's wiping out our whole cache. We also found out that Drupal likes to stick a user tag onto your node content, just in case you wanted to display the author's name — which you never do. But just in case you did, it will helpfully, every time you have a person whose job is being an editor, clear out the cache for every piece of content they authored whenever they log into the site. That's just what it does by default. So you have to go in there and get rid of that cache tag to make that stop happening. Then this cache invalidation by URL helps me understand the data a little more. This is our SSO callback — when users go there, it's clearing out those user tags. And these are my overall dynamic page cache misses: 71% are misses. So obviously we're still fighting the good fight over here to get our cache hit rate up. And then I can dig into it and see, OK, these are certain URLs where I'm getting a lot of misses, and I can start to dig in and see what I need to fix.
So the last thing we need to talk about is views, because one of the big things we found debugging all this is how many caching problems we got from views. Views is a core module, so people can be fooled into thinking it probably works well with the caching system, but it's actually pretty problematic. I would recommend not using views if you're trying to build a high-performing site, because it's kind of a hassle. But if you are using views and trying to fix things, what you need to know is that there are three caching options on each view.

The first is tag-based, and that one just uses cache tags, but views isn't very smart about which cache tags it adds to your view. It adds the node_list cache tag to every node view. That means every time anybody edits or adds a piece of content on your site, which in our case is every five seconds, it wipes out the caches for every single view across your site. So that's basically like not having caching at all.

Then you might think, okay, I'll get around this problem with the time-based mode. Ever try changing the mode to time-based? Your thought is that the cache will clear based on some amount of time, but what it actually does is keep the same cache tags that were already there, the ones already clearing every five seconds, and then add a time component on top. Yeah, that's not what you would think it would do, or want it to do, but that is what it does: just in case the view was cached properly, it will also invalidate after a certain period of time. There's been an issue open for it for years, but yeah, never use the time-based option; it just doesn't work. The other option is none.
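To make the effect of that broad tag concrete, this is roughly what happens on every node save (a simplified sketch, not the literal core code), and it's why tag-based view caching collapses on a busy site:

```php
<?php

use Drupal\Core\Cache\Cache;

// Saving any node invalidates that node's own tag plus the broad
// node_list tag. Every view cached with node_list, which is what
// views' tag-based caching uses by default, becomes a cache miss
// on its next request, site-wide.
Cache::invalidateTags(['node:123', 'node_list']);
```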
If you use none, you're thinking, okay, it's never cached, maybe that's what I need. But the problem is that unless you have placeholdering working for that view, that's going to make your entire page uncacheable, not just the view. Another really fun way to make your entire page uncacheable with views is to use a random sort. If you go into the sorting and pick random, views decides: well, we can't be random if we're caching, we'd better invalidate this entire page cache so it can actually be random. It doesn't warn you; it just quietly destroys your performance.

So the way to deal with views is to add the contrib module Views Custom Cache Tag. It adds a new option, alongside those three, called custom cache tags. When you set your view to use that option, it removes the node_list cache tag and lets you set more specific cache tags on the view. Typically you would use something like node_list:article for a list of articles, so the view only clears when somebody updates an article.

The reason views falls back to these broad list tags is that for a view like "the 10 newest articles," you can't just use the specific cache tags for those 10 nodes, because if somebody adds a new article, it has to show up in the list too. So you have to wipe the view just because somebody added a new article; you don't always know every piece of content that might affect the cache. And you don't have to program the custom tags; it's configurable in the UI. You change the mode and then type in the cache tags. So yeah, it's advanced, but it's the only way to cope with it. And if you have a view that only lists articles, it's actually very easy.
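On newer Drupal cores the per-bundle node_list:article tag is invalidated automatically when an article is saved; on sites where it isn't, the custom tag you type into the view has to be invalidated by your own code. A minimal sketch of that wiring, with a hypothetical module name, simplified to a single presave hook:

```php
<?php

use Drupal\Core\Cache\Cache;
use Drupal\node\NodeInterface;

/**
 * Implements hook_node_presave().
 *
 * Invalidates the per-bundle list tag configured on the view, so an
 * article save clears only article listings instead of every view
 * on the site (which is what the broad node_list tag would do).
 */
function mymodule_node_presave(NodeInterface $node) {
  if ($node->bundle() === 'article') {
    Cache::invalidateTags(['node_list:article']);
  }
}
```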
You enable it and you put in node_list:article, or whatever your content type's machine name is. But it also lets you do really complex invalidation techniques. For example, I had a client with a list of users where only some fields on the user entity affected the view. So we started issuing custom cache tags only when those fields updated, used those tags in the view, and that increased the hit rate immensely.

Okay, anybody got questions? Yeah.

Do the cache tags and contexts automatically bubble up, or do you have to explicitly ask them to bubble up? No, they always bubble up.

How do you handle the cachetags table in the database? It keeps growing; can you delete it? I mean, you can delete it, it's not a problem. If you delete data from that table, it just means that all cache items will be invalidated. First of all, if you're using Memcache or Redis, you should put cache tags in Memcache or Redis too, because it can be faster, depending on which one you use; there are instructions in the README on how to do it. But the way this works is that you have this table, in the database or in Memcache, that holds a cache tag and a count. Each time you invalidate, it increments the count. And when you load an item from the cache, the item carries the count from the time it was stored, and Drupal compares them: if they're the same, the cache item is still valid; if they're not, it's invalid, so it's a cache miss. So if you just drop that table, it basically invalidates all cache items, because the counts won't match anymore.

Are you purging the cache tags table periodically? Sorry? Are you purging it periodically? We're usually using Memcache or Redis, so we don't have this problem in the database.
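The counter-and-compare mechanism just described can be modeled in a few lines of plain PHP. This is a toy model of the idea, not the actual cache backend code:

```php
<?php

// Toy model of checksum-based tag invalidation: one counter per tag;
// a cache item records the sum of its tags' counters at write time
// and is valid only while that sum is unchanged.
function checksum(array $tags, array $counters): int {
  $sum = 0;
  foreach ($tags as $tag) {
    $sum += $counters[$tag] ?? 0;
  }
  return $sum;
}

$counters = ['node:1' => 0, 'node_list' => 0];

// Write: remember the checksum alongside the data.
$item = [
  'data' => 'rendered view output',
  'checksum' => checksum(['node:1', 'node_list'], $counters),
];

// Invalidation: editing node 1 only bumps counters; nothing is deleted.
$counters['node:1']++;
$counters['node_list']++;

// Read: a changed checksum means the item is treated as a miss.
$valid = $item['checksum'] === checksum(['node:1', 'node_list'], $counters);
// $valid is FALSE here, which also shows why dropping or resetting the
// counters table invalidates everything: stored checksums stop matching.
```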
Yeah, I think when it runs out of room it'll just start to drop stuff. Yeah, and it will usually drop the stuff that was accessed least recently, and cache tags are accessed pretty frequently, so it usually won't be dropping those.

Where can we find that issue? Can you see it? It's on the slide: it's node ID 2919934. 2919934, I can write that down. Yeah, at the bottom there. Yeah.

I have a silly question: if you invalidate a parent cache tag, does it also invalidate the children? No, with cache tags you don't have a tree structure; they're all independent. But a typical situation on a view, for example, is that you have a bunch of individual cache tags, node:1, node:2, node:3, representing the nodes displayed in the view, and then you also have node_list. If you edit node 1, both of those cache tags are invalidated. If you edit node 55, node:55 is invalidated, and this view we're imagining doesn't have that one, but it does have node_list. So every time some action happens on the site, a lot of cache tags are invalidated, not just one: not just node:{id}, but also the other cache tags relevant to that event.

Thanks, everyone.