I have my colleague Abhishek Anand with me; we both work for Acquia, and our session is on scaling Drupal 8. We believe that after attending this session you will know a lot of good techniques for scaling both large and small websites within a limited infrastructure.

Hi guys, so the session topic is scaling Drupal 8: I will talk about scaling and we will talk about Drupal 8, will that work? A brief introduction from both of us: we work at Acquia on the Lightning team, and we have some very good stuff there, so if you are interested, we are hiring; come to our booth.

Like I said, the topic is scaling Drupal 8, but I think it is essential for us to first understand what scaling, or optimization, is, why it is important, and how to do it. I will briefly cover the general principles of optimization and why we should do it, and then we will get into Drupal 8 specific things, what is new in Drupal 8, and at the end we will see a small video that shows some cool stuff Drupal 8 has in terms of optimization.

So what is website optimization? Site optimization is basically the set of things you do to make your website load faster. But why do you want your website to load faster, is it important at all? What do you think: a little important or very important? And why? Say I don't care about user experience and I don't care about Google; I care about money, I care about my sales. Do you think a slow website is going to impact that? Why? That's right, and there is data to prove it. Amazon found that every 100 milliseconds of latency cost them 1% of their sales, and 1% of Amazon's sales is a big deal; we are only talking about 100 milliseconds. Google found that an extra 0.5 seconds in generating search results caused a drop in traffic of 20%. Just imagine 20% of Google's traffic: it is a big deal, and that is just half a second. So now you know how important it is to make your website load fast. How fast should a website load? It should load sooner than you can blink your eyes, right?

And we want to do that with Drupal. Do you think Drupal is very fast? No? Why do you think Drupal is not fast? Whether Drupal is fast is a very subjective question. If I tell you Drupal is very fast at serving anonymous requests, is that true? Drupal is not so fast at serving authenticated requests, but can you tell me anything that is really fast at serving authenticated requests? There is nothing that is really fast at serving authenticated requests, because you have to build the page every single time. For anonymous requests, though, you can do a lot of things, you can get away with a lot of things. We will see all of that.

Let's look at a little graph. The horizontal axis is your page load time in seconds, and the vertical axis is the percentage of people who abandon your site. If your site loads in four seconds, you will see a traffic drop of 25 percent, which means 25 percent of people are bouncing off your site because it takes four seconds to load. That is a big deal: 25 percent of your traffic.

When we talk about optimization, there are two major aspects. One is how much your server can scale: how many requests your server, or your application stack, can serve concurrently. And we generally focus there: I should have a farm of servers and it should be able to serve all the requests.
But even if you do all that, there is another aspect: your site can still be very slow. Imagine I have a whole farm of high-end servers and my site is still slow. Do you think that can happen, and why? Why would the network interfere with your site speed? Because the way you have built your application is not very optimal. The server plays only a small part in the request-response cycle; it just delivers the page. Around 80 percent of the time is spent on the network: the HTTP call that goes to the server and the response that comes back. The rest, and it is not a small percentage, is spent rendering, or painting, the page. Your browser also takes time to paint your page.

I'm not sure if you know this. How many of you like Panels a lot, and how many of you hate Panels? About equal numbers like it and hate it. Who hates Panels, and why? Ctools has to do things on the server side, and the output has deeply nested divs. If you attended morten.dk's session, he wants to take out every single div he can. Why? What is the disadvantage of all those divs? It takes a lot of time to paint the page; the render time goes very high, and that is one of the things that kills a lot of web pages. You find this problem on a lot of people's sites, because people use Panels heavily without even changing the tpl files, and at the end the server might be very fast, but the page does not appear fast because it takes so long to paint. People also write JavaScript in a not-so-nice way, in a very blocking manner, and JavaScript is the biggest culprit in your page's paint time. We'll get into the details of all that.

First, some basic things everyone should do. Optimization is a specialist job: looking at a website and telling what the problems are and how to solve them. The first thing to do when optimizing a website is to find the bottleneck. Do not go around optimizing everything, because every kind of optimization comes with a cost. If you add a reverse proxy in front of your site, there is a maintenance overhead. If you add a load balancer or a replica database, there is a maintenance overhead. So always know your requirement, find the bottleneck, and then solve the performance problem. Besides that, though, there are certain things everyone should do; not doing them is a crime, and that is what I am going to talk about. Then we will get into Drupal 8 and the rest.

Never send uncompressed content to the browser. Your browser can decompress content in a jiffy, in very little time, and compression saves a lot of network bandwidth. Apache has mod_deflate, and Nginx is very good at compressing content, so you can do it at the server level. You can also do it at the application level; Drupal does that, and the Advanced CSS/JS Aggregation module does it for you. But I would recommend not doing it at the application level: try to minimize the overhead on the application and do it at the server level. And if you are using Nginx, it has one more beautiful feature: every time it compresses a piece of content, it saves a .gz copy in the same location. The next time the same content is requested, it looks for that .gz file first, and if it finds one it serves it, so it does not compress the same content again and again.
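(If you do let Drupal handle aggregation and compression at the application level, a minimal sketch of the relevant settings.php overrides in Drupal 8 looks like this; these are keys of core's system.performance configuration, though as said above, server-level compression is usually preferable.)

```php
// A minimal sketch: enable CSS/JS aggregation and application-level gzip
// through core's system.performance configuration (settings.php override).
// As noted above, compressing at the web-server level is usually preferable.
$config['system.performance']['css']['preprocess'] = TRUE;
$config['system.performance']['css']['gzip'] = TRUE;
$config['system.performance']['js']['preprocess'] = TRUE;
$config['system.performance']['js']['gzip'] = TRUE;
```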
Combine your assets: do not have too many CSS and JavaScript files. Always aggregate them; Drupal does this for you, so you don't have to worry about it. But be careful how you write your code, and do not add a lot of conditional JavaScript in modules. What people generally do is hook_init(), and Drupal adds JS based on some condition. That is not a good thing to do unless you really need it, because the more conditional JS you add, the more aggregate files have to be created per page. Aggregation creates an aggregate file for a given page; if you go to a different page where that aggregate is not valid, another aggregate has to be created. So try not to add conditional JavaScript unless you really need to. The best way is to add it in the info file; in Drupal 8 the equivalent is attaching a library globally, and there is a small sketch of that coming up.

Make as few HTTP requests as possible. This can be achieved in multiple ways. Like the previous point, combine JS and CSS into single files. Try to use sprites. Do you know what a sprite is? You combine all your small images into one big image, and then you save a lot of network requests. Maintaining sprites is a big overhead, though, so what you can do is use the Drupal module called CSS3 Embed: it embeds your images into your CSS file using base64 encoding, and then you don't have to maintain sprites. This is not supported in IE7 and lower, so if you have to support those browsers you will have to maintain sprites; there is no other way. Also, always put JavaScript at the bottom, never at the top, because JavaScript loads in a blocking manner: while a JavaScript resource is being requested, it blocks other content from being delivered. Avoid iframes; I think we have already covered why.

Always use a CDN. How many of you know what a DoS attack is? What does DoS stand for? That's something my grandfather used to use. Not even him; the grandfather of Drupal, in terms of experience and knowledge. So, denial of service: how many of you have faced a real challenge with that? As Drupal developers we are application developers, not server architects, and it is not your application's job to detect denial-of-service traffic, block IPs, and so on. A CDN like Akamai or CloudFlare will detect DoS or DDoS attacks; DDoS is a distributed denial of service. It is harder to deal with, because with a plain DoS attack you can block the IP address and the attack is gone, but DDoS is more complicated: the packets have different patterns, they come from different IP addresses, and everything is spoofed, so it is not easy to detect and block. These CDNs have expertise in doing this, and on top of that all your static resources are served from the CDN, so you don't have to worry about them. All you have to worry about is the HTML request that comes to your page, the one that involves Drupal; no static resources.

The next point is to always use a cookie-free domain; not a lot of people do this. Say I have a domain called example.com and I serve my CSS, JS and images from that same domain. Do you know what happens? Every single request has cookies attached to it. What is the use of cookies on images? Do you need cookies on images? No. Do you need cookies on CSS? No. You know the format of HTTP, right? If a cookie is passed, and you are using an application like Drupal, your cookie can get fairly big, so you are wasting around 4 or 5 KB with every request. Instead, if I have example.com, I can create a domain like static.example.com and make it cookie-free. All the static resources go through static.example.com, which carries no cookies, and you easily save 200 to 300 KB of header data on every page request. So always use a cookie-free domain.
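Coming back to the earlier point about conditional JavaScript: in Drupal 8 the rough equivalent of "add it in the info file" is attaching a library unconditionally, for example from a hook. A minimal sketch, where the module and library names are hypothetical:

```php
/**
 * Implements hook_page_attachments().
 *
 * Attach a library on every page (the Drupal 8 counterpart of adding JS in
 * the .info file in Drupal 7) instead of sprinkling conditional JS that
 * forces different aggregate files to be built per page.
 * "mymodule/global-behaviors" is a hypothetical library name defined in
 * mymodule.libraries.yml.
 */
function mymodule_page_attachments(array &$attachments) {
  $attachments['#attached']['library'][] = 'mymodule/global-behaviors';
}
```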
Reduce DNS lookups. Do not have a lot of domains that need to be looked up on your web page, like third-party domains: example.com plus some other website, facebook.com, google.com. Every domain you add means another DNS lookup, and that takes time, so the more domains on your page, the slower the page.

Remove duplicate assets. I have seen a lot of sites, outside Drupal, where jQuery is included twice: one component wants jQuery 1, another wants jQuery 2, so they call jQuery.noConflict and add another copy of jQuery. Do not do this; do not add duplicate resources.

Use Expires headers. I think everyone does that, and we generally do not have to worry about it because Drupal takes care of these things, but it is good to know how Expires headers and ETags work; they are very important concepts to be aware of. Sorry, I have listed CDN a second time, but it is the same point.

Always use a reverse proxy. I don't think I need to say much about this, because everyone knows what a reverse proxy is. If your user base is 70 or 80 percent anonymous, you should definitely use one; in fact you should use a reverse proxy in any case, even if you have 100 percent authenticated users, because it helps in a lot of ways. For example, Varnish keeps things in memory, and even a request for a static resource normally involves file I/O, which is still more expensive than a memory lookup. So always use a reverse proxy.

Memcache has a lot of utility in Drupal. Outside the Drupal world people also use Redis; both are key-value stores, Redis is persistent, Memcache is volatile, but we generally use Memcache. What we do with Memcache is map all our cache tables into cache bins in Memcache. Even when your page is cached in Drupal, there is still a database request going to the cache tables, which are nothing but a key-value store. So you can keep those key-value pairs in memory instead, and a memory lookup is a lot less expensive than a database lookup. Use Memcache and you will find all your pages start loading a lot faster.

Opcache now comes built into PHP, so just enable it and you will see a significant difference, even on your local machine.

Database indexing is a subject in its own right. It is one of the most complex parts of optimizing an application, but also one of the most important, because most of the server-side time of a request is spent in the database. PHP takes maybe 30 percent of the time; 60 to 70 percent is taken by the database, especially in Drupal. Unindexed tables are going to create a lot of problems for you. Find out which queries your views usually run, enable the slow query log in MySQL, find the slow queries, and optimize or add indexes around them; you will see a significant improvement in your application. There is also a script called DB Tuner available on d.o.; download it, run it against your site, and it will give you good insight into what is wrong with your database.
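Going back to the Memcache point: with the contributed Memcache module installed, the settings.php side is roughly the sketch below (the exact keys may vary with the module version, so treat this as an illustration).

```php
// A sketch, assuming the contrib Memcache module for Drupal 8 is installed:
// point the default cache backend at Memcache so cache bins live in memory
// instead of in database tables.
$settings['memcache']['servers'] = ['127.0.0.1:11211' => 'default'];
$settings['cache']['default'] = 'cache.backend.memcache';
```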
So far we have mostly talked about anonymous users. Optimizing for authenticated users is a big challenge; up to Drupal 7 it was not an easy task. How many of you think Drupal 7 did a very good job of optimizing for authenticated users? I don't think anyone does, me neither. But there were certain things that worked, not really well, but they were there. For example, BlazeMeter used to use AuthCache. What AuthCache does is try to cache the static part of your page; for the dynamic parts, tokens are left in the page when it is served, and those tokens for the personalized parts are replaced by a subsequent Ajax request. That is how AuthCache works, correct me if I'm wrong. The idea is that not everything on your page is dynamic: there is the header with "hello admin" or "hello username", a few blocks here and there, but another section of the page is not dynamic at all. So you can serve the page immediately and fill in the dynamic parts later with an Ajax request. This worked fairly well, but you would notice the page appear with some placeholder content that changed after a moment, so it was not a very nice user experience. Still, it was doable, and that is how people optimized for authenticated users, in Drupal 7 at least.

There is a smarter way of doing that: AuthCache also has an AuthCache ESI module. Do you know what ESI is? Edge Side Includes; it's like an include. With AuthCache ESI you don't have an Ajax request any more, you have an ESI tag, and that ESI component is fetched for the dynamic part instead of an Ajax request, which is better. And there is one more thing called BigPipe, which I will cover later, and I'll show a small video on it. It is something that is going to make Drupal 8 very interesting, and it will be the way to optimize your Drupal 8 application; it's pretty nice. We'll talk about it a bit later. Back to Navin for some other things.

Here are some of the big performance improvements in Drupal 8. We now have a much better caching system in core that provides a lot of functionality out of the box, so we no longer have to take on the headache of managing invalidation of our assets and pages ourselves, at whatever level they get cached; that used to be a big problem for the CDN and at the Varnish level, where whole pages got cached. We have entity caching: how many of you know the Entity Cache module in D7? Its whole functionality is now in core, and caching is enabled by default in the standard and minimal profiles. CSS and JavaScript aggregation is also enabled by default. Now, how many of you know the cache API in Drupal 7? How many of you are aware of cache_get() and cache_set()? A lot of you. That was the cache API in Drupal 7; it's just a fancy name for something you use every day.
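For anyone who has not used it, that Drupal 7 cache API is just a couple of functions. A minimal sketch (the expensive calculation and the cache ID are illustrative): compute something once, store it, and reuse it on later requests.

```php
/**
 * Returns some expensive data, using the Drupal 7 cache API.
 */
function mymodule_expensive_data() {
  // Return the cached copy if we have one.
  if ($cache = cache_get('mymodule:expensive_data')) {
    return $cache->data;
  }
  // Otherwise compute it (hypothetical heavy calculation) and cache it.
  $data = mymodule_compute_expensive_data();
  cache_set('mymodule:expensive_data', $data, 'cache', CACHE_TEMPORARY);
  return $data;
}
```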
So how many of you have faced issues with cache_get() and cache_set()? Does anyone here like cache clears? I like them when I have to take a site down. And here is something interesting: if you enable the Devel module and turn on the query log for your page, you will see that the slowest query is often the one going to a cache table. I don't know why it is so slow. So there were problems with the cache API in Drupal 7; it was not very robust.

The cache API in Drupal 8 is much more robust than it was in Drupal 7. It is designed not only for anonymous users but for authenticated users as well. The caching of anonymous pages in Drupal 8 is basically the same as it was in Drupal 7, so nothing much has changed for anonymous caching, but a lot has changed for authenticated users.

The new cache API has a few concepts: cache tags, cache contexts and cache max-age, with cache tags used for invalidation. In Drupal 7, whenever we wanted to clear a cache, we would often just call cache_clear_all. That is not the right way; it clears the cache for the whole site or application. In Drupal 8 we have cache tags for declaring data dependencies in render arrays: this particular part depends on this particular entity, and so on. So cache tags let us manage the dependencies on the data or configuration we have.

In Drupal 7, say I have a module with a block that displays a Facebook like box. What we did in Drupal 7 was simply pass a constant, DRUPAL_NO_CACHE; you know that constant, yes? In Drupal 8, if we don't declare anything, a default max-age is applied, and if we explicitly do not want any cache metadata on something, we can set the max-age accordingly; we will come back to that. In Drupal 7 we did not really have render caching or fragment caching, so what we did was save the whole HTML page, either in our cache tables, or at the Varnish level, or on the CDN.

A cache may also need to vary: by permission, by role, by URL, by user, by almost anything. For handling those variations, and for access checks, we have cache contexts. You just add the cache contexts to your fragments or render arrays, declaring that this particular part varies by this or that, and the cache API takes care of it out of the box. Dynamic Page Cache is one of the modules doing exactly this kind of work for us. Cache max-age, as I was saying, is for render arrays that we want to expire on our own terms: you can set it to permanent, or to zero for the things we never want cached, a bit like that cache_clear_all function we all love.

One more thing about this cache API: the caches are services. All cache backends are services in Drupal 8. How many of you know about services? Yes. So we can swap out our whole cache system for any backend we like; for example, you can set a cache backend to the null backend, which effectively turns caching off for it, so don't do that in production.
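To make tags, contexts and max-age concrete, here is a minimal sketch of Drupal 8 cacheability metadata on a render array (the node ID and markup are illustrative):

```php
use Drupal\Core\Cache\Cache;

// This fragment varies per user (cache context), is invalidated whenever
// node 42 is saved or deleted (cache tag), and otherwise never expires.
$build = [
  '#markup' => t('Hello @name', ['@name' => \Drupal::currentUser()->getDisplayName()]),
  '#cache' => [
    'contexts' => ['user'],
    'tags' => ['node:42'],
    'max-age' => Cache::PERMANENT,
  ],
];
```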
And then we have BigPipe. That improvement, that whole idea, came from BigPipe: to implement the BigPipe strategy, the new cache API was introduced and a lot of improvements went in around it. Dynamic Page Cache was one of the major steps towards BigPipe. BigPipe is not in core yet, but it will be; there is already an issue to introduce the module as experimental in 8.1. Now Abhishek will take over. Thanks.

So let's look at the different kinds of caches present in Drupal 8. First, there is page caching, which is similar to what we had in Drupal 7. It has not changed much; it gives you caching for anonymous pages and works much as it did in Drupal 7. We did some benchmarking with a tool called ApacheBench to see how caching behaves for anonymous users; you can see around 67 requests per second there. When the user is authenticated, though, the requests per second drop and the page load time increases as we add more concurrency. Still, it does fairly well.

Then there is dynamic page caching, which is new in Drupal 8; this did not exist in Drupal 7. Earlier we talked about AuthCache, and Dynamic Page Cache is similar to that. It is enabled by default in Drupal 8. It leaves cache metadata, placeholders, in the page as it is rendered, and the dynamic parts are filled in later, with an Ajax request or ESI. So it is similar to the AuthCache approach we discussed earlier. Benchmarking with dynamic page caching also gave us pretty good results.

What is really interesting is BigPipe, which Navin mentioned earlier. It is mainly for authenticated users. One thing BigPipe will not do, by the way, is decrease your page load time; you will see no difference in total page load time after enabling it. If the page loads and document.ready fires in 3 seconds, then after enabling BigPipe it will still fire in 3 seconds. So why use BigPipe? Yes: because of something called perceived performance. We will see a small video where the same page loads in the same time but looks really different. As the slide says, during rendering the personalized parts are sent later. PHP has flush() and ob_flush(). How many of you know what flush() does? It flushes the output buffer to the client. The Drupal 8 architecture has changed a lot from D7; in D7 this was not really possible, but in D8 you can flush the output mid-request. So what Drupal does here is render only the static parts of the page first and leave the dynamic parts for later. Once the static part is generated, it flushes the output and that HTML reaches the user; later, when the dynamic parts have been computed and rendered, they are sent to the client as well. That is how flushing is used here. I personally tried to use ob_flush() in D7 and was not able to make it work; if I could have, it would have been a great thing, but it was not possible. In D8 we have BigPipe now, and it is very interesting.
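The flush idea itself is easy to demonstrate outside Drupal. A toy sketch in plain PHP (this is not Drupal's actual BigPipe code; the slow function is hypothetical): send the static shell immediately, then stream the expensive personalized part when it is ready.

```php
<?php
// Send the static shell of the page right away.
echo '<html><body><div id="content">Static part of the page</div>';
echo '<div id="personalized">Loading…</div>';
if (ob_get_level()) {
  ob_flush();      // Flush PHP's output buffer if one is active.
}
flush();           // Push what we have to the client immediately.

// Now do the slow, personalized work (hypothetical function).
$personalized = build_personalized_block();

// Stream a small script that replaces the placeholder in the page.
echo '<script>document.getElementById("personalized").innerHTML = '
  . json_encode($personalized) . ';</script>';
echo '</body></html>';
```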
We did some benchmarks and found no difference in the ApacheBench numbers with BigPipe, but here is what makes BigPipe interesting. We see a page with some dynamic content on the side and a comment section, which is again personalized content. On the left is traditional delivery, on the right is BigPipe. First we look at a cold cache, meaning the cache has not been generated and warmed yet. Both sides finish at about the same time, around 6.5 seconds, but you saw the difference. Now, with a warm cache, both the static and the dynamic parts load at the same time.

The page load time is the same, but you see a difference, and if you ask someone which page is loading faster, what would they say: the left side or the right side? That is what we call perceived optimization.

There are certain tools you should be aware of if you want to do performance optimization. For the front end, the easiest way is Google PageSpeed: always refer to PageSpeed Insights. If your score is less than 90, your website is not good, and if it is less than 80, your website is bad and you seriously need to do something about it. YSlow is another good tool. Both of these tools give you a lot of recommendations, and all you need to do is follow them.

Another very important part, and I have seen this happen more in India than in other places: designers give you images, those images are very high quality, and developers just put the same images on the server. Drupal will optimize them to some extent, Drupal will make certain changes, but it still will not optimize them as much as you want. So it is always a good idea to optimize your images. Is there any designer in this room? No. Well, if you meet a designer, ask them to always use Save for Web. Never do a plain Save in Photoshop; if you are creating images in Photoshop, always do Save for Web. It optimizes the image for the web, removes all the metadata that is not required, and the images come out optimized. There are also scripts available; I found one that simply optimizes all the images in your sites/default/files directory. Once I ran it, my website suddenly became very fast.

ApacheBench is a good tool for stress testing, but do not treat ApacheBench results as the exact speed at which the site will render for your users. If you run it locally, it only tests the latency of the server; it does not test the network or the browser rendering, so it is a very inaccurate tool for that, and you need to understand that it is not meant for that. What you can do with ApacheBench is stress test your server: you can see how many concurrent users it can handle. That is different from how fast the web page feels for a particular user. So for stress testing your server it is a good tool; JMeter, on the other hand, can be a bad tool for stress testing your server. Why? JMeter is a good tool for stress testing if it is installed on the same server, or close to it. But if I am on my local machine with JMeter installed and I try to stress test a server on Acquia Cloud, it is a bad idea, because my network bandwidth will be exhausted long before the server reaches its limit. So how do we solve that problem? With something called BlazeMeter. BlazeMeter is an enterprise version of JMeter; it is JMeter in the cloud. It runs your JMeter tests against your server from different geographical locations across the world, over high-bandwidth connections, so you never hit that bottleneck and you can really stress test your server. If you want to seriously stress test your server, you might want to check out BlazeMeter. And that website is built with Drupal, by the way.

I think that is the end of the session. If you have any questions, please ask.
Questions? Was it that bad?

BigPipe is not part of core yet; it is a separate module. It will probably be part of core in 8.1.x, though I am not sure; there is already a proposal for that, but it is up to the core maintainers. In the meantime there is a contributed module you can use. And you will not notice a lot of difference on a page that is only slightly dynamic. If you have a highly dynamic page, something like my Facebook wall, BigPipe will make a huge difference, but on a default Drupal installation's home page you won't notice much difference with BigPipe.

Yes, Views queries have to be optimized. How can we identify that a view is firing a slow query? In MySQL, if you go to my.cnf, there is a configuration to enable the slow query log, and you specify the path of the slow query log file. If you are too lazy, like me, you can use the Webprofiler module instead. Look at the query and try to find bad joins and remove them, and if you really cannot remove any joins, try to index the table on the fields you are using: in the WHERE clause you will find the fields you are filtering on, so try to include those fields in the table's index. Jeff could answer that. Have you heard of...? It doesn't tell you how to fix it, but it tells you where to start looking. Thanks.

Yes. So ApacheBench is a very small tool that ships with the default Apache install. It is not even a full-fledged stress-testing tool: you just give ApacheBench some parameters and it makes connections to the website, very simple. It is a command-line tool for quickly doing a small stress test or seeing how fast the site responds. JMeter is a more full-fledged stress-testing tool, a Java-based application. It gives you a lot more; for example, in JMeter you can mimic how an authenticated user behaves. You will find a lot of JMeter scripts, .jmx files, on GitHub that are meant for stress testing Drupal: authenticating, creating a node, creating a user, all these things. You can write the steps for creating a user and replay those steps with JMeter, and it will mimic exactly how a user would do it, with a load of 500 or 1,000 users. You can also ramp up in JMeter: start with one user, ramp up slowly, and then come down. ApacheBench just bursts all the requests; if you give it a concurrency of 10, it bursts them, it cannot ramp up or ramp down. BlazeMeter is just JMeter hosted in the cloud. For example, I cannot use JMeter on my local machine to stress test an Acquia Cloud environment, because a server on Acquia Cloud has far more network bandwidth and computational capability than my system. If my bandwidth is 10 Mbps, then the maximum stress I can generate against the server is 10 Mbps, which is nothing for a server hosted in a cloud environment, so that is a bad way to stress test it. So how do you use JMeter to stress test a live server? You use BlazeMeter. BlazeMeter is just an enterprise version of JMeter, and what it does is run from several geographical locations, just like a CDN: it has locations around the world where JMeter is installed.
You choose the geographical locations, and from there it puts a high-bandwidth load on your server, and the bandwidth does not get choked. So that is what it is. Thank you. Any more questions? Yes, please.

I want to follow up on that. Can you explain? I am scheduling some content, and I have set a cache max lifetime, and the content does not show up after it is published. Okay, I think I understand your question. There are two settings: the cache maximum lifetime and the cache expiration. I think the one you are talking about is the minimum cache lifetime. If you set that, your cache will not expire before that time. So if you are scheduling content to be published within that window, it will not appear, because until that time your cache is not updated. Even if you run cron, the Scheduler will run and the node will be published in the back end, but it is not visible on the site because you have set a minimum cache lifetime. So the first rule is: never set the minimum cache lifetime too high.

One more thing I want to add: what they could actually do, for those pages, is handle it in the cron itself. When you run the cron that runs the Scheduler, also clear your home page cache, so that the newly published nodes pop up on the home page when visitors load it. You don't want to reduce the lifetime of your cache, for performance reasons, but at the same time you want the newly published content to be visible. So selectively clearing the cache of those pages as part of the same cron that runs the Scheduler might do the trick. We should have a panel debate about minimum cache lifetime; yes, that is very debatable. All right, any more questions?

I want to add something here. There was an issue committed in 8.1.x so that running cron no longer clears the cache by itself. You can have your own custom cron; in this case the cron runs the Scheduler and publishes some nodes. On that same cron, I would probably also programmatically clear the cache of my home page, or of the whole site, and probably also use a module like Purge to clear it at the Varnish or CDN level.

There is one more option, although do not do this if a lot of content is being created very often: whenever a piece of content is published through Scheduler, you can use Rules to clear the cache. Are you using Workbench Moderation? Okay, just Scheduler. So you can use Rules to detect whenever content is published and clear the home page cache. That is another way to do it.

In Drupal 8, cache tag invalidation will automatically take care of that; there is a service for it. If you want to extend that service to do some additional clearing, you can do that as well, but decorating the service is the better option rather than extending it.
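One way to sketch that "clear it on the same cron" idea in Drupal 8 is with cache tags rather than a wholesale cache clear. The tag used below is the node_list tag that node listings such as the front page typically carry, but treat the whole thing as an illustration, not Scheduler's own code.

```php
use Drupal\Core\Cache\Cache;

/**
 * Implements hook_cron().
 *
 * After scheduled publishing has run, invalidate the cache tag that node
 * listings (for example the front page) depend on, so newly published
 * content shows up without lowering the cache lifetime globally.
 */
function mymodule_cron() {
  Cache::invalidateTags(['node_list']);
}
```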
Yes, please. Let me rephrase what you said: when we use a CDN, we make extra calls to a third-party site, the CDN, right? No, we do not make an extra call. The CDN sits in front of your site, and it takes care of most of the requests. Think about what happens when you request a simple page. When you request google.com, do you know how many requests that makes? More than 100. One is the request for the document, and then there are JavaScript files, CSS, images and whatnot. So at least 100 HTTP requests are fired by that one simple request. What the CDN does is take care of all the static requests; it keeps a cache of all those static assets, so your server can do what it is meant to do. Your server is meant to serve HTML pages through Drupal; Drupal at the end generates an HTML page. The static requests have nothing to do with Drupal, unless you have private files. So the CDN sits in front of your application and takes care of everything, including attacks like DoS or DDoS and other security attacks. And it is not an extra call; it is just something in between that proxies the legitimate requests to your server. That's it. You do the same thing if you have a reverse proxy: Varnish, for example, proxies the request to Apache whenever it cannot find it in its cache. A CDN does pretty much the same thing.

And the rest of the site can still be rendered through the regular www domain. You have seen a lot of sites using cdn.sitename.com, right? That is because you do not want to serve your static resources from the same domain where you serve your dynamic requests, because that domain has cookies. So you create cdn.yourdomain.com. That trade-off is not too bad: you are adding one more DNS lookup, and it is not even really another domain lookup because it is just a subdomain, but you are saving a lot of extra header bytes in cookies. That is why we have cdn.sitename.com.

Can we cache REST APIs? Yes, why not? How exactly? If you are talking about something like the Services module, only the view layer changes: what you were generating as HTML, you now generate as JSON. Everything before that remains the same, and your caching happens there. It is not page caching; page caching is a different thing. Your caching happens before the final render; only the rendering changes. In Drupal, everything ultimately goes through a render step that outputs either a template or JSON; below that, everything is the same. So it is handled by the Drupal cache and Memcache? Yes. And if it is an anonymous REST request, it is just an HTTP request, so it gets cached like any other, as long as you are not also passing a session or a per-user parameter.
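As a sketch of the Drupal 8 REST side, inside a custom REST resource plugin you would typically attach the entity as a cacheable dependency, so the response is cached and invalidated with the node's own cache tags (the snippet below is illustrative, not a complete plugin):

```php
use Drupal\node\Entity\Node;
use Drupal\rest\ResourceResponse;

// Roughly what a custom REST resource plugin's get() callback would build
// for a single node (node 42 is just an example ID).
$node = Node::load(42);
$response = new ResourceResponse($node->toArray());
// Attach the node's cacheability (tags, contexts, max-age) to the response,
// so the caching layers can store and invalidate it correctly.
$response->addCacheableDependency($node);
```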
Any more questions? Yes. Oh, I forgot to mention this: New Relic is a beautiful tool. If you are doing performance optimization, you should always use New Relic; it gives very good insight. New Relic and XHProf are very good tools, not only for finding performance problems but for solving them. For example, New Relic will tell you exactly how much time your app spends at each layer: how much time goes to the database, how much time PHP is consuming, what the slowest function is, and what the slowest query is. You get a wide variety of information with New Relic, and besides the PHP and Java agents it also understands Drupal, which gives you a lot of Drupal-specific information. New Relic is one of the finest tools I have come across, and you should try it; if you are doing serious business with your website, you definitely need something like it in your system. Thank you. Any more questions?

One thing I want to add about BigPipe: I told you there are a couple of modules, Internal Page Cache, Dynamic Page Cache and BigPipe, involved in how Drupal renders and delivers data. BigPipe does not have any UI associated with it. Why is there no UI? Any guesses? Because if we provided a UI for it, we would add another dependency on the config factory; there was an issue about that. So I am working on a sandbox project. BigPipe automatically takes care of which parts of the page are sent first, but if you really need a UI for ordering that, I am working with Pinglias on it; if you are interested, feel free to join and contribute, I would love to have more contributors. With BigPipe alone there is no UI to alter which parts of the page are served first, so that is what the sandbox would take care of. It is not something to add to core, and we definitely do not recommend using it in production, but in development it would be a great tool. And please remember, we are hiring, so anyone who is interested can come and speak to me or come to our booth. Thank you.