Okay, great. So the idea is that this should not be the typical session where I talk at you; I want to hear questions from you all. Every performance talk I've done, I always get people coming up afterwards to tell me about the performance problems with their sites, and I think those are really interesting questions, and I really want to make a session out of that if we can. So I'm hoping that you all have questions.

I'll first introduce myself again. I'm a performance engineer at Acquia. That means I'm generally working on internal things: I deal with the performance-related issues of all of our internal products, and then once in a while I do some external stuff, and I do some performance work in core.

Now, there's a lot of talk about Drupal performance in the community. I'm sure many of you have seen people talk about Varnish, caching, all these different things you can do to your site. And we love to talk about code-level things, like "ternaries are slow" and things like that. I like to get the point across that none of that matters outside of the context of your site. We always want to make a black-and-white thing out of performance, and it's very, very rarely that simple.

I also want to get the point across that faster is not better. I see this attitude a lot: "let's just do it this way because it's faster, so why not, there's no harm in it." There is harm in it. There's always harm in it. Optimizations are not free. If you change one piece of code to do something else, to optimize it, to make it faster, that will very commonly come at the cost of readability. So I always want to get the point across that there's always a cost; consider that cost and figure out whether it's worth it.

And before I move on too far here, I have some slides. I would love to never get to the rest of the slides, so I think I would like to start and see if anyone has questions or issues they want to talk about. Yes? Yes, please, very specific.
Yeah, they're not better with Redis. Okay, I love this question; you just gave me everything that I wanted. To recap: you're making a site, you're testing the performance of it with JMeter and ab, and you're using Redis for cache. And you're noticing that your results from ab are actually slower with Redis enabled. Okay, "not as good as", okay, that's fine. And then I think the actual question was something about your query cache, right? Okay.

So let's talk a bit about how fast things are. There's a general assumption that you don't use the database for cache, that Memcached or Redis is always going to be faster. "Faster" doesn't really cover it. Yes, it's coming from memory. But say you're on a machine that has MySQL local, so there's no network overhead, and you already have that database connection there. I've seen instances where it can actually be faster to use the database for cache instead of memcache in that environment. For writes, Memcached or Redis will be significantly faster, because you're writing to memory and you don't have to write to disk. In this scenario you're not writing, you're only reading, so I'm actually not that surprised that you saw that.

The other component is that the biggest benefit you get from something like Memcached or Redis is scalability versus performance. You're going to be able to scale that horizontally in a way that you can't with MySQL. So if you're just talking about one instance of it, I don't think what you found is abnormal at all; it's probably what I would have expected.

Yeah, that's a hard one. That's also a good reason why I don't actually benchmark or load test that often, especially with tools like that. If you're just trying to evaluate those different pieces of your stack, that can be a reasonable way to do it, but it's probably better to just profile it, and then you can see exactly how much time is spent where. Because when you use something like ab... in this case you're testing the difference between Redis and Memcached, right? Okay, I said Memcached but meant MySQL, so it's Redis and MySQL. Those are the two things you want to test. But when you use ab, you're testing those two things plus PHP, plus everything else that happens on the request, plus Apache, plus the network overhead, DNS... you're testing twenty things and you want to test one, right?

If you profile it with something like XHProf, you can get in and look at exactly how much time was spent in cache_get or cache_set, and then you can know, and you can compare those. XHProf has a thing where you can give it two runs and compare the diff, and it'll tell you the percent difference of this over that.

Generally, when you do the ab or JMeter thing, it's more of a scalability test. You want to verify that everything is working well, but it's very rare that I see issues there that I didn't see just looking at one single request. If that one single request looks good, you're probably going to be okay. That later test, the load test, is really more of an infrastructure test, to make sure there isn't anything wrong. But I hope that's a common theme that we're going to come back to: profiling is really all you do in most of these cases.

Yes? To do what? Another... yeah, that's another great question. I mean, there's the basic stuff. You touched on measuring tools and optimization-type tools. I don't actually use many other tools; XHProf is the main tool I use. Finding bottlenecks and finding why the request is slow is all about tracking it down to the one single thing, right?
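The two-run comparison mentioned above (XHProf's diff view) works on exactly this kind of data. Here's a toy illustration of the idea, not XHProf itself; the function names and wall times (in microseconds) are invented:

```python
# Toy illustration of comparing two profiling runs, the way XHProf's diff
# view does. Function names and wall times (microseconds) are invented.
run_a = {"db_query": 420_000, "unserialize": 180_000, "is_array": 9_000}
run_b = {"db_query": 150_000, "unserialize": 175_000, "is_array": 9_500}

def diff_runs(a, b):
    """Percentage change per function from run a to run b."""
    return {fn: round(100.0 * (b.get(fn, 0) - t) / t, 1) for fn, t in a.items()}

for fn, pct in sorted(diff_runs(run_a, run_b).items(), key=lambda kv: kv[1]):
    print(f"{fn}: {pct:+}%")
```

Run against these invented numbers, it would show db_query improving by about 64% while is_array got slightly worse, which is exactly the kind of targeted answer a load test can't give you.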
If you start anywhere else but profiling, it's very easy to get off on the wrong track. Say a page is slow. I see this all the time: people look straight at the slow query log, or they'll pull up innotop or mytop. At that point you haven't figured out if it's even a MySQL issue or not, and so who knows what the hell it is. If you start with XHProf, you can see immediately, okay, I see a bunch of time spent in db_query or PDO or whatever, and at least then I can say, okay, now I know it's MySQL. The next step might be, okay, let's look at the slow query log. Ninety percent of the time it's just going to be a query that's not indexed, or one that's creating a temp table, and it's going to be really obvious what's going on. Very, very rarely do I have to pull out special tools like iostat that look at actual system-level stuff; very rarely, at least in the work that I do. So I would always suggest: don't jump to those tools. Always start with profiling and let that inform where you need to go next.

But you did mention opcode caches and stuff. Yeah, APC is a requirement. I don't profile sites unless they have APC on them.

Monitoring? What kind of monitoring? So, New Relic. New Relic is cool. I've looked at it a little bit. When I work on a site that already has it, it's sometimes useful to start me looking in the right direction, but I almost always then still profile it with XHProf. Here's the difference: there are two kinds of profilers (well, not really two kinds, but...). Theirs is a sampling profiler, and what that does is basically take samples during the request and figure out, okay, at this point I'm in this function or that function, and your slowest functions are going to be apparent because it's going to catch those more often. That's a really nice way to get data without adding a lot of overhead to the request, but you don't get the full backtrace. I think you get a little bit of a backtrace with New Relic, but it's really confusing to me; somebody might know better and have more information. The really nice thing about XHProf is that I get, not an exact backtrace, but the entire request. So if I see that all this time is spent in db_query, I can figure out where that came from, and in Drupal we often have very, very long call paths. I want to know where that came from, or I want to be able to look at the data a different way. It seems like I'm going to have to get into that and show it. How many people here have used XHProf? Okay, so maybe... I don't know. All right, question. Yes?

Yeah, I've been on one site with the full, expensive plan, and I felt like there was definitely more backtrace, but it wasn't quite the same. Very possible that I just didn't know how to use it; I find it very confusing.

Okay, any other questions? Yes, okay. Do you know... are you the only one on it? Okay. And just to repeat the question: you're on a virtualized instance, you don't run the hardware, it sounds like. Sometimes it's slow.
Sometimes it's not slow, and you want to be able to figure out when, or why, without having direct access to the hardware.

"Slow" is an interesting term when we're talking about the entire web server, because most of the time when I dig into that, it's actually queuing. With PHP, no matter what version you're running, whether it's PHP-FPM or mod_php, you set a number of PHP processes that can handle your load, right? Say you're doing mod_php: that's going to be Apache's MaxClients setting. Say you have 20 MaxClients; that means you can serve 20 people at a time. The next person is going to get queued, and when one process frees up, that one's going to serve them. If you get 60 people in, then your page time keeps getting multiplied by how many people are waiting in that queue. That's much harder to measure, and most times when people think something is slow, it's really that they don't have the resources to keep up with the traffic. Having your page be faster will certainly help with that, but I would want to know how many processes you have.

That's a pretty good... yeah. No load? What gave you the impression that memory was the issue? Okay, so one way to prove that would be to profile a page at different times and show that the exact same thing under no load is very different. I don't know how to do that for memory, but for CPU you actually can. If you look at top (I can't pull it up here because the BSD top is really different), when you look at the percentages presented, there's a field, you'll have to Google this, but it's basically how much CPU is being stolen from that box. Steal, that's what it's called; the abbreviation is "st". Look at the percentage of st; if that's super high, your shit is getting stolen from you. But also, if ESX is anything like Xen or other VM platforms, most of the standard tools are lying to you anyway, so who the hell knows.

Okay. Yes, it's always a good idea to use an SSD whenever you can, but most of us do not have that luxury. Even on AWS we have SSDs, but they're ephemeral, so you can run your database on ephemeral storage, like MongoDB people do, but lots of people like durability, so that's maybe not the best idea. Still, it's always a good idea to use an SSD when you can. If you have real hardware, that's hugely helpful, especially if you have a site with temp tables that you just can't get rid of; that's going to be significantly improved by an SSD. Yeah, if you have hardware, definitely do that; if you're on bare metal, it's absolutely worth the investment.

Oh, yes. So you're talking about the reverse proxy support in Drupal 6; that reverse proxy support is in Pressflow. For 7, there isn't really much in a Pressflow 7, because a lot of that is already in core 7, so anything there already works great with a reverse proxy. But you went through a lot of different things, so what are the requirements of the site? Because if you're talking about Boost and you're talking about reverse proxies, it sounds like a large amount of anonymous traffic. Well, yeah. So in terms of the anonymous traffic performance problem, it's more or less solved, right? Page caching is already pretty good by itself; Varnish, reverse proxies, actual CDNs are an order of magnitude better than that, right? That's it.
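As an aside, the MaxClients queuing effect described a minute ago can be put in rough numbers. This is a back-of-the-envelope model, and every number in it is invented (20 processes, 500 ms per request, 60 simultaneous arrivals):

```python
# Back-of-the-envelope model of the MaxClients queuing effect described
# above. All numbers are invented for illustration.
WORKERS = 20      # e.g. Apache MaxClients
SERVICE_MS = 500  # hypothetical time to serve one page

def wait_ms(position):
    """Queue delay for the Nth simultaneous arrival (0-indexed)."""
    return (position // WORKERS) * SERVICE_MS

# The first 20 requests start immediately; request 59 waits two full rounds.
print(wait_ms(0), wait_ms(25), wait_ms(59))  # 0 500 1000
```

The point is that the page itself never got slower; the 60th visitor just waits a full second before their request even starts, which is why "the site is slow" so often means "the site is queuing".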
That's an easy problem to solve. But what you're talking about is that you need one block that is either cached differently than the rest of the page, or is dynamic. That's actually an okay use case for ESI. And I hate saying the word ESI, because I don't actually mean it; I mean that block being pulled in some other way. Yes, absolutely, I much prefer the JavaScript route, because then you avoid the complexity of ESI and it works in your dev environment really simply. You can key the cache for that block based on essentially whatever you want, depending on which module you use. There's the ESI module, which Marcus back there wrote. It does have a hook where you can basically add any context you want that will change the cache key. So if it's zip code, you could add the zip code to the cache key, and then that would get cached per region, right? Okay. That's Drupal 7 only, though; in Drupal 6 it just respects block cache rules, right?

It respects block cache rules, and for Drupal 6 I have a patch in the queue that lots of people don't like, but it's awesome. It adds another hook in the function, I forget what it's called, the function that comes up with the cache ID for a block. That cache ID is totally fixed; you can't add your own stuff. Nobody uses block caching because of this: you can only key it by role or by user. The patch adds a little hook in there, and in your hook implementation you can say, if the block is this one, here's another thing to add to the key. So you basically have all the flexibility you need for block caching with one tiny core patch. Search for it, my name, Sonnabaum, plus "block cache", and you'll find it; I have like three or four block cache patches. Because basically, block caching is a fantastic way to improve the performance of a site. You just have to tweak it here and there, and you have to understand its limitations.

Yes? Okay, so your question is: you're trying to use Varnish, you're not getting any hits, and you claim you're not using cookies or sessions. All right. The way to debug that is curl. You can't easily debug inside the VCL, and honestly there are things to check before that. I can show you really quickly how I would go about it; let me switch to a terminal window. So: curl acquia.com, dash capital I. Anyone know that curl flag? It's a HEAD request. It's not actually the most reliable thing to do, because some VCLs might not handle HEAD properly, so if you want to be really sure you can force the method to GET. What a HEAD request does is make a request just like a GET, except it doesn't return you the body of the response. So you can just run that, and then... here we have "X-Drupal-Cache: MISS". Who thinks that means we had a Varnish miss? All right, good. That doesn't mean anything at all.

The things to look for: Cache-Control, and whether max-age is more than zero; that's one thing. And the magic thing here is this one. Yeah, so we actually do have an X-Cache hit header here, and that means it was a Varnish hit, but that's a specific thing we put in our VCL. We have to set it in vcl_hit: we send that header, and we also increment a counter so we can see how many hits that page got. That's not always reliable, because it requires something in the VCL. This one, X-Varnish, is what you want to look at. If there are two numbers, it was a cache hit; if there's one, it was a miss. I forget the details exactly, but I'm pretty sure one is the ID for that request, and the other is the ID of the cached object that was served in place of it. So if that second one is not there, it was a miss.

What now? Was that sure? Yeah, the main things you want to look at are max-age, whether it's private or public, and as long as you don't see a Set-Cookie header, then you're fine. But even if you think you're not using sessions or cookies or anything, check it, just to make sure; I've seen many cases where that was it. If you were setting a cookie, there would be a Set-Cookie header right there. Yeah, I wish I had a way to check that, but I don't know that I do. But yeah, curl dash capital I. Wonderful. Okay.

No. So the question is: do I have a preference on web servers, do I use one or the other? I don't really care, because web servers are almost never a Drupal bottleneck, right?
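Going back to the curl output for a second, the header checks just described (max-age above zero, no Set-Cookie, two IDs in X-Varnish meaning a hit) can be sketched as a little function. The header values here are invented; run `curl -I` against your own site to get real ones:

```python
# Sketch of the Varnish-debugging header checks described above.
# The header values are invented; use `curl -I your-site` for real ones.
def looks_cacheable(headers):
    """max-age > 0 and no Set-Cookie, per the checklist above."""
    cc = headers.get("Cache-Control", "")
    max_age_ok = any(
        part.strip().startswith("max-age=") and int(part.strip().split("=")[1]) > 0
        for part in cc.split(",")
    )
    return max_age_ok and "Set-Cookie" not in headers

def varnish_hit(headers):
    # Two IDs in X-Varnish: this request plus the cached object it was
    # served from. A single ID means a miss.
    return len(headers.get("X-Varnish", "").split()) == 2

resp = {"Cache-Control": "public, max-age=300", "X-Varnish": "1972339 1972142"}
print(looks_cacheable(resp), varnish_hit(resp))  # True True
```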
There are all kinds of benchmarks out there, people talking about how much faster nginx is than Apache. I can almost guarantee you that it's not your bottleneck. You can throw away your Apache and implement nginx, and you've added complexity to your setup, especially if your team doesn't know nginx, and it may not do any good for you at all; you've actually just made everything more complex. nginx is a fantastic web server. It's a great load balancer; it does a lot of things. I will say we run nginx as a load balancer, but we still run Apache as the web server, because it's just not our bottleneck.

Now, the issue is a little different if you have mod_php, because mod_php is going to use more memory: every Apache process has an entire copy of the PHP runtime, and that can be an issue. If you serve a static request with that, if you serve CSS with it, you're going to use way more memory than you need to. So if you have mod_php, make sure you have a reverse proxy so that you're always serving your static assets from something like Varnish, and then it's a wash.

Yeah, that doesn't really matter; it's just using the max-age directive instead of working with the Expires header. Generally that's a better header to send than Expires, because it doesn't really matter when the thing expires, it only matters how long it's good for. But yeah, those are three different things, and there's actually a better answer to that; I just forget what it is. Yep. That's right. Yeah, and that's why... I mean, in HTTP there are three or four different headers that actually control all of this, but what we generally pay attention to in modern implementations is Cache-Control.

So the question is: you have multiple web servers, and how do you deal with bringing up a new one and it taking a while to warm up? I'd question what you mean by "warm up", because there are warming passes... APC would definitely be a cache that needs to be warmed on a new web server, but that's going to happen more or less instantly, with the first requests that come in. You could try to prime it, but I don't know that it's really going to be a big issue. It might be an issue if you actually have an instance of your cache on that box; is that what you have, like memcache? But shouldn't they all be talking to the same database and cache? NFS?

So I think the issue here is that it's heavily dependent on the setup. In the ideal scenario this wouldn't be an issue, because your only per-box cache should be APC. Are you running your PHP files off NFS? Okay, I'm going to recommend not doing that if you can help it. Yeah. Okay, yeah, that's not ideal for a lot of reasons, which I don't know if it's useful to go into, but if you have any choice at all, don't run that way.

Yes, all shared file systems are terrible, and they're all a pain in the ass. Run whatever you're comfortable with, in terms of your team's capacity, what you can support, and the amount of high availability your site truly needs. If you can get away with NFS as a shared file system, awesome, do it. If you need to go to something like Gluster, it works; we run it.
It's not without its issues. Unfortunately, we're stuck with this problem for a long time, because there are a few things in Drupal that are just very aware of the file system, like the ImageCache pattern, where we check if a file exists, and if it doesn't, we create it. We can't do the Heroku-style thing where you just don't have a shared file system and you use something like S3. We're not at the point where we can do that. I would love it if we could, but with things like Drupal 8's PHP file storage, it doesn't seem like there's much interest in it. I have a feeling that both Acquia's hosting and Pantheon are pretty interested, because it is a pain point; maybe it's just a pain point for us. But yeah, it's very, very much a pain point for us, and I think a lot of sites get by just fine with NFS, just for your files directory. Avoid shared file systems whenever possible, and avoid behavior where you're touching the file system at all, because file systems are so unpredictable. Don't assume a stat is cheap; you never know what's going to happen, depending on the file system you're dealing with.

Yes? How to debug why that's slow? Yeah, okay, I think that's a great segue into XHProf. I'll just quickly demo that.

So XHProf is a PHP extension. It was written by Facebook. The extension itself just provides you with a handful of functions; the main ones are xhprof_enable and xhprof_disable. You call enable at the beginning of the request and disable at the end. It gives you a big-ass array, and the code that Facebook ships with it just writes that out as a serialized PHP array in a file, and then they have a UI to look at it; this is that UI. On the session page for this talk, I have a gist at the bottom. It shows the simplest way I know to install the extension and how to enable XHProf profiling on your site. There's also the XHProf module, which will do it as well. But once you have that file, you can view it here.

So let's look at one of these. The first thing you get is the summary. Can you guys read that okay? Okay, yeah, that's fine, just that table. Just to define the terms here: "wall time" is what we would normally think of as time. "CPU time" is wall time minus IO, so time spent in MySQL or memcache is not going to be reflected in there. Sometimes that's useful; generally you're just going to want to look at wall time. And then you have memory usage and the number of function calls.

It starts by sorting by inclusive wall time. Inclusive wall time means, say, for call_user_func_array, how much time we spent in call_user_func_array and every function it called, right? It's not surprising that that's at the top, because it's the basis of the Drupal hook system, so it doesn't really tell us much. Then you'll go down and see all these things like panels_render, ctools; these are the things happening at the very top of the request. I don't think this is a very easy place to start. I like to flip it and start with exclusive. So sort by exclusive wall time, and now you see everything. Exclusive is only time spent in that function itself: time spent in the function minus all of its children.

In this case, I know this is a Drupal 6 site with a version of Views before 6.x-2.11, I think, because of these stupid unpack_options things. These are no longer there in newer versions; if you ever see them, if you have an old version of Views for Drupal 6, upgrade it, please. As you can see, this stuff gets called a lot: is_array, 121,000 times; unserialize too. And when you look at the time, that's microseconds, so it's saying we spent 1.17 seconds in the unserialize function. That in itself isn't going to tell you much, because, well, you can't optimize the unserialize function; that might be your first idea, but it's more interesting to think about what is calling unserialize.

So you click through, and we look at the function-level view. You see the current function, all of its parents, and then its children. But this is a native PHP function, so there are no children, and you look at the breakdown of the parent functions. The percentages per caller are here, and when you see a huge disparity, like you go from 98% to zero, I ignore the rest and immediately click through to that top parent, because that's already telling me all these unserialize calls are coming from cache_get. Now, it might be interesting at that point to look at what's calling cache_get so much, but I think it's more telling to look at the child functions: I see unserialize and I also see db_query, and what that tells me is that this site is using the database cache. Because if it were using memcache, it wouldn't be calling db_query, and it wouldn't be calling unserialize either; the memcache extension handles that for you. So at that point I would probably stop and say: just change it to memcache. Although I'm not totally comfortable with that diagnosis at that point. Seeing the unserialize function take that much time is probably a symptom; my guess would be that this was profiled on a machine that was a little CPU-bound, because it still shouldn't have been that slow.

XHProf does add some overhead to the request. It doesn't add as much as Xdebug, not nearly as much. And if any of you have used the Xdebug profiler before: it's fine if you look at relative times, but it adds a lot of overhead, and it doesn't do memory. I can't think of a single reason to use it instead of XHProf; you should just use XHProf.

But when you look at something like is_array taking almost 140 milliseconds because it's getting called that many times, that's a little bit exaggerated by the XHProf overhead. XHProf adds a little overhead to each function call, so very fast, small function calls inflate in a way that function calls doing a lot of IO don't, if that makes sense.

What I've found is... I mean, I've built a lot of tools around XHProf, trying to make it easier. The XHProf module makes it easy because it puts this whole UI inside Drupal; I don't use it, just because it's a little slower. And there's a JavaScript implementation I did at xhprof.io, where you can just drag the file on, but it only has the one level so far. I'll put the slides up at the end; there's a link there, I made a single-page version of this UI, so it's even simpler to install. But I think the trick is just doing it: getting some results out of it, even if you have a hard time reading it and getting something useful out of it. There are people who will help with this; if you hit me up on IRC and show me, "I have this but I don't know what to make of it," I'll probably help you. But I know many people just don't try, because nobody asks me. So my thought is that people are just afraid to try. Really, in my opinion, if you're a developer and you've ever set up a debugger, you've already done something way more complex than setting this up, and this is sort of an essential tool. I actually use it for debugging too, because I can just see what called what. So yeah, XHProf: y'all should use it, and that's really the only tool I use for this.

Next question. Yes? Yes. So generally, the biggest performance issues you're going to see are IO-related. MySQL is there.
It's going to be a popular one, mostly because of unindexed queries and queries that create temp tables. I hesitate to start with MySQL because I don't want people going straight to it until the profile points there. But the other popular ones are drupal_http_request, shell_exec, stuff like that: anything that is IO, basically, going out and asking for something.

drupal_http_request is a tough one, because it's a really horrible thing to do within a request that is serving a page. There are times when you need the result of that HTTP request to build the page you're trying to make, and that's legitimate. If you need to do that, I would suggest moving it to the front end if you can. For people who were here last hour, Jeff talked about consuming feeds via JavaScript on the front end of a Drupal site. That's ideal. I would much rather do that, because your browser can make requests in parallel and PHP cannot. If you do that HTTP request in PHP, even the fastest HTTP request, you're still looking at probably 200 milliseconds if you're lucky, probably more, and your entire process is blocked until it comes back. So if you can avoid that, I absolutely would. The only use case where I think it's justifiable to have it in the request is if it's absolutely necessary to render the page and you need to hide credentials: you're making a web services request with credentials that can't be exposed on the front end. In that case there's not a lot you can do.

But in the case where you need to make an HTTP request because something happened, a user visited a page, they clicked something, they saved something, so now you know you need to post a response to some web service, but it's not strictly necessary for returning the next page: you should queue it. The Queue API is the greatest thing in Drupal 7 that nobody uses. It's still there in Drupal 8, and there's a wonderful backport to Drupal 6. It's a great way to... basically, it's the way in Drupal and PHP to process things in parallel. PHP is single-threaded; it can't actually do anything in parallel. But you can say, stick it in this queue, and then have a drush worker running that's constantly processing those items. It's similar to cron, but if it's in the Queue API you can run it all the time; you can run it every minute, or, if you have Redis or Beanstalkd, you can run a long-running queue worker. That's just a long-running drush process, and as soon as something hits the queue, it processes it. That is my absolute favorite pattern. It's a great way to offload tasks out of the UI. It involves a little complexity, but it's worth learning.

Oh, okay, so I'm going to give you the real dumb version. It's basically when MySQL has to assemble the results in a way where it can't do, say, the sort or the condition you need without creating a new denormalized table, either in memory or on disk, to process it with. There are two parameters, I think one is max_heap_table_size, I'm probably screwing those up, two parameters or something like that, that control whether it goes to disk; the default is something like 32 megs. If the data set is larger than that, it sticks it on disk; if not, it does it in memory. Memory is going to be significantly faster, but ideally you just want to figure out how to write the query so MySQL doesn't create a temp table at all. You'll know it's creating a temp table by running EXPLAIN. Has everyone run EXPLAIN on their MySQL queries before?
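For anyone who hasn't, here's the idea in a self-contained sketch. MySQL isn't available inside a snippet, so this uses SQLite's EXPLAIN QUERY PLAN instead, which reports the same kind of thing: whether the query scans the whole table or uses an index. The table and column names are invented:

```python
import sqlite3

# Illustration of what EXPLAIN tells you, using SQLite (MySQL isn't
# available in a self-contained snippet). Table/column names are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (nid INTEGER, created INTEGER)")

# No index yet: the planner has to scan every row.
before = db.execute(
    "EXPLAIN QUERY PLAN SELECT nid FROM node WHERE created = 5"
).fetchall()[0][-1]
print(before)  # something like "SCAN node": a full table scan

# Add an index on the column we filter by and ask again.
db.execute("CREATE INDEX node_created ON node (created)")
after = db.execute(
    "EXPLAIN QUERY PLAN SELECT nid FROM node WHERE created = 5"
).fetchall()[0][-1]
print(after)   # now "SEARCH node USING INDEX node_created ...": no scan
```

In MySQL the incantation is the same shape, `EXPLAIN SELECT ...`, and the things to look for are the `key` column (is an index being used at all?) and "Using temporary" in the `Extra` column, which is the temp table being discussed here.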
Any MySQL query you have, you can put the word EXPLAIN in front of it, and MySQL will show you the execution plan for it. You can see, for every step, whether it's using an index, whether it needed to create a temp table, and how many rows were returned from each step. Say you have a query that's joining on the node table and five other things, and the first step is the node table, and it says that first step returned a hundred or two hundred thousand records and then filtered from there: that's when you should try to figure out how to get it to not do that. It can be a tricky process, because the query planner works differently in different versions of MySQL, and it's hard to predict what MySQL is actually going to do. Hopefully you can save it with an index. If you can't save it with an index and it's a Views query, be ready to just rewrite it by hand. Yeah, I think that's a general tip: if you are really fighting with a query in Views, don't be afraid, if it comes down to it, to replace it with custom code. That's an okay thing to do. Yes? Damn near never. I've been wanting to get into extension development for a long time, but I just can't justify it. I mean, there is a Drupal PHP extension, and I think somebody did some benchmarks of it, and they looked good. But it's not enough to justify the complexity in most cases, I think. I wish that was our bottleneck. Okay, so you have a process where somebody logs in, and it makes two web services requests? Oh, just one. Well, I'm unclear on the process. So you log in, and then what happens? You're authenticated locally, and a web services request is made when you press submit.
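As a concrete example of reading EXPLAIN output (the table and column names here are just illustrative, not from the question):

```sql
-- Prefix any query with EXPLAIN to see the execution plan.
EXPLAIN SELECT n.nid, n.title
FROM node n
JOIN taxonomy_index ti ON ti.nid = n.nid
WHERE ti.tid = 5
ORDER BY n.created DESC;

-- Things to look for in the output:
--   key: NULL           -> no index used, likely a full table scan
--   rows: 200000        -> a step returning far more rows than it needs
--   Extra: Using temporary; Using filesort
--                       -> the temp-table case discussed above
```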
Okay, sure. You're syncing with an external source. So I actually did a site that was very similar to that, and this is where I got heavy into queues. What I would do: I would have a process that ran every day or every two days that would go through all users, stick them in a queue, and then I would process that queue regularly, just to make sure that no one was too out of date. But then, if you logged in, I would immediately put an item in the queue that said go fix this user's information and update it. And I probably had four long-running workers sitting there ready to do that. So the Queue API gives you a way to get that out of cron. In Drupal 7 and 6 the only core hook for queues is hook_cron_queue_info. It's unfortunate, but I think it's the Queue UI module that came up with hook_queue_info, which bypasses cron. The core hook means: I'm declaring my queue, and cron will run it automatically. I'm sure they did it that way just because it's convenient, and it makes sure that all your queues get run. If you use hook_queue_info, then it's up to you to run it, but Drush 5 supports that hook, and it has a drush queue-run command. So you just stick that in cron with your queue name every minute and you're good to go. There's not a lot you can do. Nope. Which is why I actually like LDAP in those situations. Great question. The answer is: you do not have a website yet. You are making it, you're designing it. What can I do to not end up with these performance issues in the future?
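The cron-driven variant mentioned above looks roughly like this in Drupal 7. This is a sketch; the queue name and worker callback are made up.

```php
<?php
/**
 * Implements hook_cron_queue_info().
 *
 * Declaring the queue here means Drupal's cron will drain it
 * automatically, spending up to 'time' seconds per cron run.
 */
function mymodule_cron_queue_info() {
  return array(
    'user_sync' => array(
      'worker callback' => 'mymodule_sync_user',
      'time' => 60,
    ),
  );
}

/**
 * Worker callback: receives whatever was passed to createItem().
 */
function mymodule_sync_user($data) {
  // Fetch fresh data from the external service and update the user.
}
```

If you use hook_queue_info from the Queue UI module instead, cron leaves the queue alone and you run something like `drush queue-run user_sync` from your own crontab every minute.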
You should do nothing. You should never try to predict, I mean, within reason, there are exceptions, but you should not try to predict the performance of something, and you should not try to design for performance, unless it's a critical part that you know about. It's useful to know big O notation in this respect, so you can say: if it's an O(n) operation, that means I'm going to do this thing once for every item I have, versus O(1), which is constant time, like a hash lookup. I'm explaining that terribly, but maybe go look up big O notation. So there are some situations where you can tell: okay, I recognize that as O(n), so I'm probably going to need to fix that. But do it the simplest possible way, then profile it. Otherwise you end up fixing problems that you may not even have, right? And then you make your site more complex, because you anticipated problems that you never actually proved you had, and now you have a complex site that's hard to manage. The biggest performance problems come from complexity. So I would always suggest you do the absolute simplest thing possible, and then fix performance issues as you find them, but regularly look for them. Always measure is the thing. Don't reason about what might be slow; measure, and let that data tell you what's slow. Oh, are we done? Five minutes, okay. Yes, yeah. I have nothing against this model. If you are on a shared host, where you may not have a reverse proxy, and you have very cacheable pages, it's totally legit to use this model. I've actually never used it. Yeah, I know Mike, he's a super smart guy, and I respect the stuff that he does. I just haven't used it, and I think it's just because I don't work on sites where it's an appropriate solution.
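A quick illustration of the O(n) versus O(1) distinction in plain PHP: in_array() scans the whole list, so its cost grows with the list, while isset() on an array keyed by the values is a hash lookup.

```php
<?php
// O(n): in_array() walks the list until it finds a match,
// so each lookup costs more as $ids grows.
$ids = range(1, 100000);
$found = in_array(99999, $ids);

// O(1): flip the values into keys once, then each lookup
// is a constant-time hash check regardless of size.
$lookup = array_flip($ids);
$found = isset($lookup[99999]);
```

Doing the flip makes sense when you'll look up many values against the same list; for a single lookup, the simple in_array() is fine, which is exactly the "do the simplest thing, then measure" point.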
I think there are some invalidation issues that can come up, but yeah. Yeah, I mean, it's a great option to just serve cached HTML. Serving that straight from a path is going to be super fast. I've actually seen some pretty crazy setups with that. I can't mention who it is, an extremely big site, but they used Boost to generate the pages, then they synced that out and actually only served what Boost generated. So it can be a really interesting tool. And for a lot of sites it's as simple as enabling it, so I think that's a decent option. But if you have Varnish, if your team can handle setting up Varnish and maintaining it, I would prefer that. But I wouldn't automatically go to Varnish just because it's going to scale better, because you may not ever actually need what Varnish gives you. Okay, if there are no more questions, I think that is it. All right. Thank you guys.