So, welcome to my talk, High Performance for Drupal. I hope you've had a great DrupalCon so far. This is kind of the last session slot, so I appreciate you all being here, and I hope you have a wonderful time with me in this session. Just to start: this is a beginner session, so it's very, very important for you. If you're not a beginner, then leave the room. No, but please expect that the level is beginner, so you might not learn something new, but you might be entertained. We'll see.

So, High Performance for Drupal. What does that mean? How can we achieve it? And how can those things... Oh, wait a moment. What? Hello? No, no, I can't. I'm in a session. What? The website went down? Oh, my God. Yeah. Marco could take over for... My boss is calling, and he's telling me the website is totally slow and it just went down, and there's this big media event and I have to leave immediately. So, obviously not. But it happens. Your boss is calling and you have to do something immediately. You have to be prepared, you have to work on those things, and you just have to get that site running. It happens to the best of us, especially during DrupalCons or during elections. Elections are kind of the most wonderful work for someone doing high performance, because you have like three days, and in those three days sites get millions, sometimes billions of visitors, and you just have to prepare for that.

So: the site goes down, the site is slow, and the load just keeps growing. I hope I can provide a nice tutorial today on how to improve your websites. As I said, it's beginner level, so let's see. But first of all, let's start with a little story. It's a sad story. I read it, and it goes kind of like this: "Where's the power of Drupal? I hate Drupal. It always overloads the database. Sometimes the load reached 200, and it never reached that before. And we enabled core caching, and the server went down again the next day." And I think that's very sad.
So, that was one of the things that prompted me to go into high performance: I wanted everyone to be able to have a great, fast-performing Drupal site. What I was seeing was that there were a lot of tutorials and overviews and so on, but what was still missing was a comprehensive guide that goes step by step: how can I optimize such a site? How does that work? And that's what this session wants to provide.

So: "My site is so slow. Help!" There is obviously a need for high performance Drupal; that's pretty obvious by now. Faster sites earn more money, they are even officially ranked higher by Google by now, and your visitors will love your fast site. And then there's this special case where you get mentioned in the media, or you have a big site launch or something like that, and everyone is there, the TV is there, et cetera. Then it would be good if your site performs and is fast. So, what happens if your server goes down exactly at that moment? That's kind of the most disastrous thing. And that ties in directly with what Dries said in his keynote: that Jeremy's site always went down when it got Slashdotted, and that Drupal is well prepared to help you get a fast site.

But first, the question is: how do I get a blazingly fast site? There are several ways to do that, and one is to do it wrong. Let me show you an example of what I've read in groups.drupal.org/high-performance: "Okay, I've now tweaked my settings, but the site is still slow. So what do I need to tweak so that it is as fast as xyz.com?" In this case the setting in question was APC, and that it suddenly matters, we'll come to that later. Another way of doing it wrong: "I've set up 10 slave DB servers, but once I test the site, it is still slow." So, what happened?
And then this other person: "I've set up nginx with Advanced Aggregation, combined with the Entity Cache and Views caching modules, but the performance is still the same." And then you could ask the question: "Yeah, but have you set up memcache?" And the person: "What? Never heard of that." And then the other thing: "Well, I've set up static page caching for all the pages, the high-traffic day can come. What could possibly go wrong?" Famous last words.

So, we all wish we had the one thing, the magic pill. The pill that you just take and your site is fast. Just one pill, and boom, site's fast. But that's not how performance optimization works. Performance optimization is a process, not a pill.

So, now that we've seen those scenarios, let's take a look at more ways to fail. In my experience there are four ways to fail. The first is trying to optimize one part to death while neglecting all the others. Let me quickly show you. I have a nail, and I'm putting it into the wall, and it should look like this, because I want my picture to hang straight, but it tilts like this. So what many people do in terms of performance optimization is: they put a screw in here, an additional one. They put another nail in here. They put something else in, some rubber band, et cetera. And it still tilts. What's important here is to support every edge: here a nail, there a nail, there a nail, and then it holds, and then it's stable, and then it works. So, don't forget: you have to optimize on all fronts. The second way is trying to optimize things without even knowing what the problem is. You really have to understand: where is my bottleneck? Is it the database? Is it PHP? Is it Apache, or something else? Because if you don't know that, you will end up hammering more nails into the wrong part.
The third thing I often see is trying to optimize with new methods without really understanding them. That's less common now, but it was very common a few years ago, when people were always trying to reinvent the wheel, using the newest technologies and tweaking every last setting without really understanding what they were doing and what they were trying to achieve. So the question is: reinvent the wheel, or stand on the shoulders of giants and use a common, proven performance stack that kind of everyone else uses? You can always tweak afterwards, after you've got a good baseline, after you've got a great working setup. Afterwards you can tweak more, but not before. Get something good going first, then optimize on that.

And the last thing: optimizing without testing whether it will hold the load. Once you are featured by bignews.com and your server goes down, that's kind of the disaster. By now there are hundreds of little startups and web services, Blitz.io just to name a random one, with which you can test your site for that big day. They put lots of traffic on your site, and then you know whether your performance optimization will really hold. There was this famous Kim Kardashian post recently. You can Google it, it's really funny to read: one guy explained how he prepared his media site, which normally just gets normal traffic, for this enormous traffic, because everyone wanted to see the naked butt of Kim Kardashian.

So, to recap: there are four common ways to fail. You can optimize one part to death, you can optimize random parts, you can use the new buzzword technology without understanding it, or you can optimize without testing. So, there are a lot of ways to fail. Isn't there something better? This is all so complicated. Is there nothing to make this easier and have a fast site? Yes, there is.
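A quick aside on the load testing mentioned above: besides hosted services like Blitz.io, even a plain ApacheBench run from a second machine gives you a first idea of whether your setup will hold. A sketch, not a full test plan; www.example.com stands in for your site, and `ab` must be installed (it usually ships with the Apache utilities package):

```shell
# Fire 1000 requests at the front page, 50 in parallel, and report
# requests/second plus the response-time percentiles.
ab -n 1000 -c 50 https://www.example.com/
```

Watch the "Failed requests" line and the 95th/99th percentile times; a site that looks fine at concurrency 1 can fall over at 50.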
The easy one: hire a performance consultant! "Hire performance consulting now, or call this second: 0800-DRUPAL-PERFORMANCE, and enjoy a blazingly fast site." Yeah, so that's the end. No, no. Performance really is difficult to get right, and you could hire a performance consultant. Remember that number... just kidding, ha, got you. On the other hand, hiring a performance consultant can be really useful at times, but what I think is even more useful is learning and spreading the knowledge. So, let's continue.

So, let's stand on the shoulders of giants and walk the path of our ancestors. Now loading your mission. We have a Drupal 7/8 site, kind of standard, with several performance problems, kind of real-life problems. Let's meet some friends first and help them in their need. I'm introducing you to Mr. Drupal Pages. He feels slow, sluggish and big, and the Drupal pages are totally unhappy: "This load is so heavy." Then we have Mrs. MySQL. She is totally exhausted and needs a time-out: "I just need a SELECT break." Then Mr. Apache, who is sweating under the load: "I give 100% all the time, but this is just so much." And then we have Mr. Code, who's buggy and known as a real troublemaker. Yeah. So, after you've seen my friends here, we see it's all red. This is a mission, and your mission is to investigate and fix.

So let's take a look at how we can improve things. We start with server performance: measuring server performance. The system load is 4.14, the page load time is 20 seconds, and the CPU is at 100%. How do you measure performance on a server? You hopefully never see an image like this on your server. This was an example I set up just for you all. It was a lot of fun. I just put a while(true) loop into index.php, then opened like 20 tabs, then 40 tabs, and so I ended up with that. This is really fun. But I've actually seen that in real life. And it was scary.
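Before the story continues: that first feel for how a server is doing boils down to a few standard Linux commands. A minimal sketch (exact flags vary a little between distributions; `top -b` is the Linux batch mode):

```shell
# Quick first look at server health.
uptime                    # the three load averages: last 1, 5 and 15 minutes
free -m                   # memory in MB; heavy swap usage hints at thrashing
top -b -n 1 | head -n 15  # one batch-mode snapshot of the busiest processes
```

As a rule of thumb, sustained load averages well above your CPU core count, or swap climbing while free memory is gone, are the "bomb is ticking" signs from the story below.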
It was really, really scary, because it was one of those things where there was this big media campaign, and everything was prepared. Varnish was prepared, there was a Boost cache behind that so that even when Varnish entries were expiring there was still some backend cache. It was all really good. There was just one little thing we had overlooked. The little thing was that the media campaign was using Google Analytics codes, and the Google Analytics codes meant that every user got a very special unique ID. That means whoever clicked on that banner got a unique URL. That means Varnish was doing nothing. And then it hit that server, and the load was climbing, you know, like in the movies when there's this bomb, and the load was climbing more and more, and I was trying to stop Apache, typing `service apache2 stop`. It was too late. The server froze, everything was down, because the other thing we did wrong was that the server was not properly sized. That means that once the load got way too high, it went into memory thrashing. There's a very useful tool called Apache Buddy which automatically checks your PHP memory size, your max memory size and how many MaxClients you have in Apache, and then automatically calculates whether your server will hold that or not. So that server went down. We had to spin up our emergency AWS instance, switch IPs, and fix that problem. It was just some little Varnish tweak, and in this case we even fixed it in the .htaccess. That was an intense time. Anyway, I hope you never see that, but even the most basic commands like top can help you get a feel for how your server performs.

Then here's a pretty handy drush command for the page generation time of any page. This is a Drupal 7 example. What's important about it is that you can actually execute any node, any path, with that little handy command, and that's very, very helpful. Really? Yes.
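That handy drush command amounts to wrapping the Drupal 7 page build in a timer. A sketch of the idea, run from the Drupal root (the path node/1 is just an example; `timer_start()`, `timer_read()` and `menu_execute_active_handler()` are Drupal 7 APIs, and exact drush syntax may vary between drush versions):

```shell
# Time the generation of an arbitrary path under production conditions,
# without changing any code on the server.
drush php-eval '
  timer_start("page");
  menu_execute_active_handler("node/1", FALSE);  // build the page, do not deliver it
  print timer_read("page") . " ms\n";
'
```

Because it runs through the real bootstrap and page callback, the number reflects what the server actually does for that path, minus webserver and network overhead.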
Yes, yes, it is helpful, because on production you often have one little problem: you don't necessarily want to change the code on production, but you still need to test this one page, and you need to test it under production conditions. Then it's very helpful to be able to check the server in a very isolated way. So yeah, production debugging, because sometimes those problems only show up on production.

So now that we know there is a problem, a real problem, how do we solve it? One of the answers is the shoulders of giants. I know, I said you should know your pain points first, but there is a kind of stack that many high-performance sites use, and that is: PressFlow or just generally good code, APC or OPcache, memcache or Redis, Varnish, nginx, or a CDN. So how can those giants help me?

PressFlow was really only relevant for Drupal 6 sites. Anyone still using Drupal 6? No. Drupal 7 already includes most PressFlow patches and approaches, so it's not really needed anymore, and Drupal 8 has performance best practices all around. But there is kind of an unofficial PressFlow hiding in some corner that many people in the high-performance world use, but no one really talks about, like Fight Club. That's this magic number, 210683, and it has all the collected performance patches that are really relevant for Drupal 7. They are listed in order of how much risk they pose for your site, and you can use them. And if you yourself have written a performance patch, if you're using a core patch where you say, well, this should be on this list: come on in, add that patch, and help make the Fight Club even better. I hope no one else heard that.

So we have APC, the Alternative PHP Cache. It's highly recommended and it's also easy to install, it's just a PECL extension, and it speeds up PHP execution by caching the precompiled PHP bytecode.
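For reference, a minimal sketch of the relevant php.ini settings: the `apc.*` lines apply to the legacy extension on PHP 5.3/5.4, the `opcache.*` lines to the OPcache that is built in since PHP 5.5. The numbers are just common starting points, not tuned recommendations:

```ini
; Legacy APC (PHP <= 5.4, installed via PECL)
apc.enabled=1
apc.shm_size=128M        ; shared memory for the opcode cache

; OPcache (bundled since PHP 5.5)
opcache.enable=1
opcache.enable_cli=1     ; also cache for drush / command-line runs
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
```

The `opcache.enable_cli` line matters for the drush memory numbers discussed below: without it, command-line runs skip the opcode cache entirely.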
You have to understand that your PHP is just source code, and PHP originally was an interpreter. That means every time, it goes through the whole source code, checks every single line, and interprets it on the fly. With APC, it compiles that to an intermediate format, and then it can just load that, which is way faster. And the nice thing is, starting from PHP 5.5, which you should probably start migrating to at the moment, because PHP 5.3 is end of life and 5.4 probably soon, and PHP 7 is just around the corner: OPcache is already in by default.

So APC / OPcache, is it worth it? Yes. On a Drupal 7 site it saves around 100 milliseconds, and what's even more important, it also saves a lot, lot, lot of memory. I did some memory profiling for Drupal 8, and at first I had OPcache off, because, very important to remember for all drush operations: if you don't set opcache.enable_cli, it will not use the opcode cache from the command line at all. While the cache can't be stored permanently across command-line runs, it will still reduce memory. So I was profiling this and thinking, well, there's 40 MB of classes, that feels a little high. But once you enable OPcache, it's down to 2 to 4 MB.

Then we have memcache. It replaces the caching in the MySQL database; it's really just a simple key-value store that lives in main memory, and it's very fast. So what does it get me? It gives you way less load on the database, and overall way faster caches, which is very important for those of you who have, or plan to have, lots of machines. You normally have one database, perhaps with some slaves, but this database might not be optimized for write performance, because once you scale to a certain level, you have to buy a bigger database server, and then another, even bigger database server.
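Wiring Drupal 7 to memcache happens in settings.php via the contrib memcache module. A minimal sketch, assuming the module lives at sites/all/modules/memcache and memcached listens on its default port 11211:

```php
<?php
// settings.php excerpt: route Drupal's cache bins to memcached.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// The form cache should stay in the database, or forms can break
// when memcached evicts entries.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
// Several servers partition the key space, DrupalCon-registration style.
$conf['memcache_servers'] = array(
  '127.0.0.1:11211' => 'default',
  // '10.0.0.2:11211' => 'default',  // add more instances to scale out
);
```

With more than one server in `memcache_servers`, keys are hashed across the instances, which is exactly the partitioning described next.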
So what memcache allows is distribution: you can have several memcache instances, and it's like the DrupalCon registration. You know, when you registered, there was A to F, G to L, and so on. They partitioned the space of attendees so they could serve faster, and that's how memcache works as well. You have a key, it gets mapped with some algorithm to the right bucket, and you can have several memcache servers. It's a very simple model, but it works well.

As the next thing on our high-performance stack we have Varnish. It saves the whole response to memory, it serves the response from memory, and it's like a shield for your server. So Varnish is really crucial for high throughput on your media site. To explain a little how Varnish works: you have this incoming request that comes in here, and down there at the back, that's our web server, and in between there's Varnish, our magic shield. Whenever a request comes in, Varnish checks: do I have that already? And if so, it sends it directly back. And when a request comes in and Varnish doesn't have it, then it asks the server: hey, can I get that? Here you are. And what's important here, one thing that's very, very important: even if you have a thousand users going to your front page and the front page is not in the cache, Varnish won't just go and say, hey backend, I'll connect you a thousand times because there are a thousand users waiting. It will queue them all up and say: wait a moment, I just have to fetch this fresh. Then it gets it from the backend once, and returns it to all thousand clients. So that really protects your server from request stampedes. The other thing is, if during that media event something happens, and that can always happen, and your website,
your web server goes down, but Varnish still has a cache, then it says: oh, that backend doesn't really respond anymore, and it can serve those things from a kind of grace cache. So as long as the backend is down, it will just serve the old front page content, and your site won't appear as down as it would be with no protection at all. So it's really nice technology. You can do much of the same with nginx, but we use Varnish for most things, so I'm just recommending one way as a best practice.

Overall that sounds pretty complicated, so here are two best-practice configurations. There's the Lullabot configuration, if you need to configure Varnish for multiple web servers, and there's the one I also use myself for some servers, the Four Kitchens one. That's a very, very simple one, kind of the simplest Varnish configuration you can use for Drupal, but you plug it in and it works. And with Drupal 7 and Drupal 8 it has gotten so much easier to use Varnish; with Drupal 6 it was like a real adventure. So what does it get me? 50 millisecond, partially even 10 millisecond response times.

So, your mission update: the anonymous pages are now blazingly fast, but they still feel a little big. And this is Mr. Apache's score: "Well, I have less to do now, but what I have is still too much. Those authenticated users are really killing me, and I hate those anonymous UTM requests." That's what I already talked about: the quick fix for the Google Analytics problem. In Varnish you can just use a little snippet of VCL, and it will strip all the utm_* parameters, because they are only used by JavaScript anyway. Another possibility is to configure Google Analytics to use a hash-based approach without query parameters, and that works as well.

Client performance: measuring client performance. We have a page size of 300 KB and a page load time of 20 seconds. How do you measure performance on the client? The best way to do that is to use the Google Chrome
developer tools. You just right-click, choose Inspect Element, and there is a Network tab; if you click that and reload your page, you can immediately see what takes long. In newer Google Chrome versions it's even cooler, because you can filter by media type: images, requests, et cetera. Another great way to measure performance is webpagetest.org.

So the question now is: why are those Drupal pages so big? What makes them take so long? The reason is, we need compression of the JavaScript, the CSS and the HTML. Compression of CSS and JS is very simple in Drupal 7 core: just go to Administration → Configuration → Development → Performance and activate the checkboxes to enable aggregation and compression of CSS and JavaScript. In Drupal 8 core it's even enabled by default. So aggregation and compression is really something you want to do before go-live; your users will thank you for it. You need mod_rewrite and mod_headers for that. If you then want to further compress CSS and JavaScript: in Drupal 7 you have to do nothing, and in Drupal 8 you have to do nothing. But if you also want to compress all your HTML pages, then you have to use mod_deflate. Be aware this can put a high load on the server, so combining it with Varnish is kind of the best way to ensure you're not compressing the same pages again and again. Then there's Advanced Aggregation (AdvAgg), which by now I can wholeheartedly recommend for Drupal 7. It gives you way fewer aggregates and a lot more control over what you are doing. And the last thing is to set proper caching headers; in Drupal 7 and in Drupal 8 it's perfectly set up for you, but you might want to adjust the numbers to cache some assets longer or shorter than the defaults. So what did we achieve? We now have only 4 HTTP requests and a much faster page load time. So let's get an update:
your mission update: we are blazingly fast, really slick, and really, really happy.

So, some additional techniques you can use. Use a content delivery network; those have gotten so cheap in the meantime. There's a CDN module for Drupal 7: you plug it in, and then all your images go via the CDN. A CDN caches your files close to the user's location; it's very useful for images, CSS and JavaScript files. Just use it. Another possibility is, for example, the pjax module, which can be very useful for paging through image galleries, because it only reloads the content you really need.

Another little quick tip: if you have some slow JavaScript pages, where you get a JavaScript error on page load because you're doing something directly on load, and then it waits and waits and waits, you can use a workaround: wrap that code in a setTimeout() to just run it a little later. In general this is something I can even recommend, because there's sometimes JavaScript that's really not needed within the first 100 milliseconds of a user visiting the page. The page is busy anyway with loading images here and there, and it's some unobtrusive link down in the footer somewhere, and for this footer you need some JavaScript to, for example, make it expand. You can just defer executing that code with this little trick, and it can make a difference.

As the last thing we have the Performance module, with which we measured: a Drupal bootstrap plus menu_execute_active_handler, that's Drupal 7, and a memory usage of, in this example, 104 MB. For this example I created a little loop that just used a lot of memory; that can also make a site slow. But how do I measure such things? There's the very useful xhprof PHP extension, and there's an xhprof module for both Drupal 7 and Drupal 8. Under Drupal 7 you configure it via admin/config/development/devel, because it was originally
part of the Devel module, and under Drupal 8 there's also a setting under the performance settings. Let's take a quick look at such an xhprof report. Oh, there it is. So here we have a page that loads in 3.1 seconds, and that's obviously a lot. This is a Drupal 8 site now, just to give you some variation, and in this we need to find where the problem is. What we can do, obviously, is just follow the trail down, going there and there and there. But what also works is to take a look at the report and see where the time goes. That would, for example, be here: so we see, oh, this might have something to do with blocks. Then we go up a little more, and we see: oh, the system branding block build takes a really long time. And then we see, let's look at what's happening here, and oh, the developer forgot a usleep() in there. And yeah, such bad code isn't that seldom. What I've seen in practice was not a usleep(), but someone needing to wait for a web request who was busy-waiting for it to return, always checking "is it done yet?", and that also cost like four seconds on the server. So this is how you read such xhprof reports: you just dive right in and check the things that are slow, the things where you think, well, this looks a little suspicious, what's going on here? You can also take a look at what's happening here and there, and then, if you look at the top-level report again, you can also sort, for example, by the number of calls. So again you see this area is called really often, and in this case, on a slightly older Drupal 8 site, the container is called really often. As I said, Drupal 8 stuff.

So let's go back to the presentation and do some common pitfalls. One of the common pitfalls I've seen on a Drupal 7 site, and yes, it made a big, big, big site go down, is a variable being set on each page
request. Before Drupal 8, where we now have this great configuration management and this great state service, variables were often used to track state. And what people did not understand is that setting one variable cleared all variables for everyone, because variables were loaded on each request. So if you have a variable being set on each request, then all the variables need to be rebuilt, and that means going to the database again; it's not cached in memcache or anywhere else anymore. And yes, there were modules doing that in hook_init(), and that caused problems, let's say. And yes, it brought the DB server to its knees for that big site.

Another common pitfall, especially in the Drupal 7 world, is using sessions for storing data for anonymous users. Let me tell you: if you use Varnish with one of the configurations I've shown you, and you are using the session for setting something like a low-bandwidth flag, where you want the page to behave differently based on different things: don't do that, please. That will make your Varnish very, very unhappy, because it says, well, they all have a session, they must be authenticated users. Use a cookie, a normal cookie, and adjust your Varnish configuration to strip this cookie out; then you are safe, because then Varnish can still cache the page. And if you need to do little dynamic things, do them JavaScript-wise. It's possible, for example, to store the item count of a cart in a cookie whenever the cart is updated, render a zero for the cart so the page is the same for every user, and then, once the page has loaded, fill in the real cart dynamically with JavaScript. That's very effective, and suddenly you can cache this page. That's very, very important, and it's one of the best tricks I can give you if you are dependent on Varnish, and how you can
very simply configure things. But please don't forget, you need to change your Varnish configuration as well, because by default it checks for the session cookie and, sorry, I was mixing things up: it strips all other cookies, so you are fine.

Then the other common pitfall is having way too many modules installed. Please decide on a strategy. Use, for example, Panels, or decide whether you really want Display Suite, or whether you want to do everything with custom entities, or with your own special Views plugin, or whatever. But decide on one way to build the site, and don't use yet another module for every new little use case that comes up. With 300-odd modules on the site you are kind of guaranteed to run into performance problems somewhere. And don't forget, every one of those modules not only adds a little to every page request, and your page gets way bigger even with all the optimizations Drupal does, you also get a way more complex site. We recently had the pleasure of performance-optimizing a distribution, and some really great core patches came out of that, because it ran into performance problems never seen before. We now have 90% faster theme system and registry rebuild performance. So it's fantastic for a performance developer, but probably not for your sites.

Then there are some more interesting pitfalls. By now there is also Leaflet and other possibilities, but we really had a problem: we had to show 5000 nodes on one page, and Views in Drupal 7 was loading them all, and that was just not going to scale, because they were pretty big nodes doing something else as well. I've written a little solution for that called OpenLayers quick query. It's just in a sandbox, it's for advanced use cases, but I think it's still in production at the moment. To directly improve performance you can use block caching and render caching; there's
this Render Cache module I've written that's used on Drupal.org. On Drupal.org it does not do much, but whenever you view an issue queue and you see a comment, that comment is render-cached, and it improved performance so much. I think we saw like 30 to 40% performance improvement, and it was simple, it was really easy, but it makes a difference. Think about your site as little building blocks, and first see how much work you can avoid doing when you serve your page to the user; second, check what you can cache. What little things can you cache that might each take only a little time here, a little time there, but if there are 300 comments, it adds up. If you are into block caching, you can use the Block Cache Alter module for Drupal 7: just install it and say, this block is cacheable per user. If you have a pretty simple site, it even works well like that. And by now we've even ported the time-based setting from Drupal 6, so it's possible to say: I want to cache this block for 3 hours and per user, and then you are done. For simple sites it's totally enough. Set up Views caching, set up Panels caching, just click those checkboxes. And even if your editors say, well, our content needs to be as current as possible, we can't really do this: for many sites it's totally okay to cache things for like 6 hours or 3 hours, and that makes a difference. So use this time-based caching everywhere those tools offer it. For that distribution we optimized, one of the things we did during the performance audit was to go into the Panels settings and click the checkbox to cache for 6 hours, and the site was suddenly performing way better. So it can be as simple as that, even in a pretty complicated setup.

So what did we achieve? We found that bad code; that code is no longer bad. We removed the usleep(), and now the page is really much faster. And there can always be little bottlenecks that no one thinks about, where you are, for example, implementing
hook_entity_load() and doing some database query, and that query is slow. And you have a page that was pretty fast, built for example with Panels, showing like 30 entities or something, little building blocks, a nice magazine site or something like that, and suddenly it gets so slow, and you wonder what has happened here. And then you see: oh wow, it's this little code that I would never have thought would cause such problems.

So use xhprof. What it would have shown you there, and what I can probably show you again, is a little tool called xhprof-kit, with which you can easily measure performance things. Let's quickly do that. What you've seen here is that it has this special index-perf.php, and that's again something I use for production, because when I come onto a production server, I might be allowed to put in a file, and they might be okay with me putting in the xhprof module, but they might not really be okay with enabling the xhprof module or the Devel module or any other module. And I really don't like to do that anyway, because just by enabling a module I am changing the system state I want to monitor. So what you can do with xhprof-kit, which is on GitHub: you just put it in a subfolder of your installation, you execute the xhprof-kit setup, and then you get this special index-perf.php. And then, right at the bottom, which is here in the wrong color, you not only get the timings but also the profiler output, and that's especially helpful if you take a look at the page source. Because what I then do is: here we have our 3.1 second time; I reload a few times to get some idea of the deviation, and then I see, oh, now it was 3.2 seconds, and I can also click on that link and take a look at the profiling output. So this is just a little tip. I originally wrote it for a completely different purpose, to automate benchmarks, but I've found that for any new client I was just installing this xhprof-kit in a subfolder, and then there is a symlink for the
To clean up, you just remove two files and you are done, and that's a really great way to measure things. So, the page is much faster now, we found the bad code. Your mission update: Mrs. MySQL is almost happy, she still feels kind of slow sometimes; Apache is really happy; the Drupal pages are really happy. And then we take a last look at database performance: we still have a slow SQL query in there, for example in an entity_load(). To analyze slow queries you first have to enable the slow query log in the MySQL configuration, and then you can use Percona's slow query log analyzer. I can assure you it's the best thing since sliced bread, because you get an output where you directly see what's causing your database to be slow. So just use it: enable the slow query log. There's meanwhile also a DB Tuner module available for 6.x and 7.x, and the MySQL Tuner script, and then you can use EXPLAIN on queries to get to the gist of how they work. Not sure anyone still uses it, but if you are using MyISAM, please switch to InnoDB, except if you know exactly what you're doing. And then be aware of the write barrier: for ext3/ext4 file systems you have to set the barrier mount option off, and I have been bitten by that. I should know better, but I was profiling Drupal 8 performance and I was like, this database is so slow, what the heck happened? I was comparing installation times with other people and they were like, why is that so slow for you? And I was like, oh my god, I forgot the barrier again. There are many, many threads in forums by developers saying "my server is suddenly so slow". So use nobarrier, but be aware you need special hardware for production; there's a useful guide in the Red Hat handbook for that. Or even better, use the XFS file system, still very much proven for running MySQL. Size your database appropriately, because if it gets too big, like 2 terabytes, and you have to clone it to make a backup or so, that's not a good idea. To fix slow queries, you have to use the EXPLAIN command.
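For illustration, the slow query log can even be switched on at runtime, and a logged query can then be fed to EXPLAIN. A sketch; the table, columns, index name, and thresholds here are just placeholders, not a recommendation for your schema:

```sql
-- Turn on the slow query log at runtime (put the equivalent into
-- my.cnf to make it permanent) and log everything slower than 1s.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;

-- Later: take a query from the log and ask MySQL how it executes it.
EXPLAIN SELECT nid, title FROM node WHERE type = 'article' AND status = 1;

-- If the plan shows a full table scan (type: ALL), add an index ...
ALTER TABLE node ADD INDEX node_type_status (type, status);

-- ... and run the EXPLAIN again: it should now pick the new index.
```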
You take the slow query you got from the log, put EXPLAIN in front of it, and then you add indexes where necessary; then you run EXPLAIN again, and hopefully it is faster. So, no more slow queries. And now some commercials... no: best practices. Set up the base performance, ensure you have a great working performance stack, ensure you are following best practices. You want to have your own high performance stack, and it's not as difficult as you've seen. Setting up APC, Memcache and Varnish takes some work, yes, but there are great tutorials out there, and many, many enterprise sites still use this stack and run successfully every day. Analyze your pain points first: where is your problem? Is it server based, is it client based, is it the modules, is it the database? Then optimize these pain points. And now we are at the end. Your mission update: MySQL is very, very happy, Apache is really happy, and the Drupal pages are also really happy. Mission completed. Wake up, Neo. So, questions? Question: at one point you recommended nginx as an alternative to Apache? You can definitely use nginx as an alternative to Apache, but then you have to be aware that you need to use PHP-FPM and other things. I know of people who use this in production, but I also know of several large enterprise sites that are still using Apache for various reasons. Follow-up: as far as I know, Apache is usually slower than nginx because of the .htaccess files, correct me if I'm wrong; so if you stripped those out and just put everything in the vhost file, they should be comparable? So the question is about nginx versus Apache performance. Yes, you could for example strip the .htaccess files from Apache so that it doesn't do this file access anymore, but on the other hand they can be very useful at times, and I've been bitten on some cloud-based hosting where I didn't have those available, so some code I wrote didn't work as I expected. So your mileage may vary. Overall, Apache still gives good enough performance for many things.
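On the .htaccess point from that question: a hedged sketch of the vhost approach, assuming a Drupal docroot at /var/www/drupal. AllowOverride None stops the per-directory .htaccess lookups entirely, so the rules have to be moved into the vhost themselves.

```apache
# vhost snippet: avoid per-request .htaccess file lookups by moving
# the rules into the vhost and disabling overrides.
<Directory /var/www/drupal>
    AllowOverride None
    # ... contents of Drupal's .htaccess pasted and adapted here ...
</Directory>
```

The trade-off is exactly the one from the answer: you gain some speed but lose the convenience of per-directory overrides, which some hosting setups rely on.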
It has many tweaks available, and nginx needs a lot of configuration to work properly; that's my experience. Next question: you use XHProf; how does New Relic compare to that, in your opinion? New Relic is giving you one thing that XHProf is not. You could also, if you're using it for open source, use Blackfire.io by the Symfony folks, which is very nice because it's a browser extension, so you don't have to tweak the URL like you've seen there with the index-perf.php; you just click a button and it measures for you. What's great about New Relic, in my opinion, is that you are getting data over time. XHProf gives you a snapshot of how it is right now, but it doesn't tell you how it is for your actual users. Let's say you are measuring with XHProf and everything is okay, but New Relic gives you a report that your server performance is not good; the reason is that you tested at a time when the caches were warm. New Relic samples kind of every page a little, and gives you a report over time of how your performance varies across the day and of what your users are seeing. And if you see that your users are having page load times of 20 seconds, 10 seconds, or that when they post comments they sometimes have to wait up to 10 seconds, that's a bad user experience for them, and that is something you are not getting from XHProf. Obviously you could write your own tools to measure XHProf data over time as well, but to my knowledge no one does. Okay, thank you for the very entertaining talk. Next question: without Varnish, are there any performance implications of using Sass or Less or other pre-compilers that you've seen? So you are mixing up two things; are you talking about using Sass to generate JavaScript and CSS on the fly? Right. Usually, even with a Sass compiler, it will cache the result to disk, which means your generated CSS and JavaScript files are only rebuilt when a Sass file changes.
Until your Sass file changes again, nothing is recompiled, which is okay and fine. You just have to be aware of the first-user problem: the first user that comes in when the caches are not there will have to wait a long, long time, and that's again something you would see in New Relic. The other thing is you have to be aware that you don't go over some memory or time limit: if your site is already heavy and then you add Sass compiling on top, that could push it over the edge. But I would say even without Varnish there's no real concern, because it does not permanently regenerate the CSS and JavaScript files. Any more questions? Okay. When we're done, please evaluate the session and tell me what you thought. I hope it was entertaining for you, a nice end-of-DrupalCon session. And now please enjoy the closing session, have fun, and meet some people.