Hi everyone. I'm here to talk about our story with caching in Django. Just a quick background: I work for HappyFox. We provide a customer support platform, including a help desk, live chat, chatbots, and business intelligence, and our application is a large Django application with more than a hundred models. A good number of these models are account configurations. Our product is extremely customizable by its users, so a lot of the time we record settings in the database that are updated very rarely but read in almost every request.

The challenge with these kinds of models is that you read them very, very frequently, and if you want to cache them, you have to cache a lot of specific queries. Invalidating those caches is very difficult, especially when the models have a lot of relationships between them. So what we were looking for was a way to easily cache all of these models and invalidate them whenever a change happens. The hard part of this kind of automatic caching and invalidation is figuring out when something should be invalidated if your model has a lot of dependencies, or if your queries join across a lot of different models.

Our first attempt was to use a third-party library called Django Cache Machine. What Django Cache Machine does is provide a Django model manager. When you write a Django ORM query, you typically do model.objects.all or filter; that objects attribute is a model manager. Django Cache Machine provides a mixin and a custom model manager, which you have to add to every model that you have.
So this is an example straight out of their documentation where, as you can see, the model extends the mixin and overrides the objects property with their custom manager. The problem we ran into with this library was that it did not support models with many-to-many relationships, and it had some issues with foreign keys when we did reverse lookups. And because it works through the Django model manager, it misses the features in Django that don't use the ORM this way; some of them issue database queries directly, and those were not handled by the library. So we realized we had to go a little bit deeper than the model manager to accomplish this type of automatic caching and invalidation.

Let's look at a more concrete example before we go further. Here are three models from our application. As I said, our application is a help desk ticketing application; most of you in the room will have used something like Jira. You have tickets categorized in some way, you have users, and those users have roles with permissions. So here I have a Category model, a Role model, and an Agent model that has a foreign key to Role and a many-to-many with Category. This is the kind of thing that is very hard to cache and invalidate automatically, because you never know which query touches which model.

So we ended up writing our own custom library to do this, and the way we did it was by patching Django's database layer; I'll get into the specifics in a bit. Internally, the library tracks which tables each query uses and which tables those are related to, and whenever an insert, update, or delete is performed, it invalidates all the cached queries tagged to those tables. To use the library, all we have to do is specify a Meta attribute called stash set to true.
Stash is the name of the library that we wrote. So compared to the previous example, all I need to do to cache all three of these models automatically, with invalidation taken care of automatically, is specify stash equal to true, and everything just works.

To go into a little bit of how this works internally: Django's database layer has a set of classes called SQL compilers. This is what Django uses to compile the ORM expression, something like objects.filter, into an actual database query, and especially if you have joins, that join query is built by the SQL compiler class. The compiler has a method called execute_sql which has access to the fully built query, so we use that to get all the tables involved and invalidate things automatically.

This really solved all of our performance issues. We were able to dramatically reduce the database load, and most of our endpoints became extremely fast. We also wrote a small abstraction on top of this to cache endpoints automatically and invalidate them whenever a table was updated. I'm running short on time, so if you have any questions about this, we're at booth G5; please come and talk to us. Thank you.