Hello. I'm going to talk about how to make your WordPress site responsive and fast. I'll cover a little bit about what page loading speed means and how it impacts your business metrics, what the performance bottlenecks of WordPress are and how you can avoid them, and how to set up theme and plugin logic so that your WordPress pages hopefully load in under 1,000 milliseconds. Why is page loading speed important? We want our devices and the software we use every day to be fast and responsive. We live in an age of present shock; we want things to happen now. So page loading speed has a major impact on whether we use or abandon an app, and that's why it is important to deliver your pages as fast as possible. Many studies have shown that page loading speed has a major impact on key business metrics. A fast loading page will improve your UX metrics: you will gain more visitors and more page views, and it will strengthen your brand and how your brand is perceived. In one study, even a 500 millisecond delay in the network connection made people perceive the pages as broken. So slow loading pages are perceived as broken. It's a well-known fact that Google incorporates page loading speed into its search rankings. And last but not least, whatever your conversion or business strategy may be, slow loading pages will hurt your conversion rate. If your page is slow, no one will stay on your site and Google won't send traffic to it. If you have a goal, your site must be fast. So what is fast? Lately we've heard a lot from Google that we should deliver pages, even on mobile, in below 1 second. They have also broken down where that time goes, because on mobile we have higher network latency.
So we have roughly 600 milliseconds for the network and TCP connection setup, and just 400 milliseconds to deliver the CSS and critical assets needed to display the above-the-fold content. I suggest you aim for a speed index below 1,000 milliseconds. Speed index is a metric introduced by WebPageTest.org. WebPageTest is the most important performance measurement and auditing tool we have available. You can run tests from different locations, configure the connection speed, block scripts, et cetera. The speed index itself is a value calculated from how quickly the visible parts of the website are rendered. It depends on the size of the viewport, so it will differ between desktop and mobile, and it is expressed in milliseconds. A speed index of 1,000 milliseconds or below means that in less than a second all the visible content is displayed and the user can start interacting with your page. On this slide the red line shows a slow loading web page and the blue line a fast loading one. The y-axis is the visual progress and the x-axis is the time in milliseconds. Cracking the 1,000 milliseconds barrier is an ambitious target, but you have to set your stakes high and try, and you should consider any page with a speed index above two seconds to be broken. So why are pages slow? This screenshot is from the HTTP Archive. They collect stats on the top sites from the Alexa rankings, and we can see that in the last three years page size has more than doubled. So how do you make your responsive site responsive and fast? I'm going to quote Steve Souders here, who said that 80% of the waiting time for a website is spent on the front end and just 20% on the back end. So that's where our main focus will lie.
Although that statement was made years ago, it is still valid, and it became even more significant when we started making websites for mobile, because we're dealing with less bandwidth and higher network latencies. How do we optimize the back end? Make sure your hosting provider runs Linux kernel version 3.2 or later. I won't go into too much detail, but this ensures that you have the latest TCP improvements at hand. Most importantly, the initial congestion window, the amount of data that can be transferred in the first round trip during the connection setup, is 10 segments instead of the earlier four: roughly 14.6 kilobytes in the first round trip against the earlier 4 to 6 kilobytes. Your server must be able to run the critical plugins, but more on that later; most importantly it must run a page caching logic, and it must take less than 300 milliseconds to deliver the pure HTML response. You can test this by downloading WordPress, uploading it to your server, running a WebPageTest, and checking that the time to first byte is below 300 milliseconds. After that, you should deploy the HTML5 Boilerplate server configurations. They are available for Apache, Nginx, lighttpd, and others. This alone covers about 95% of all the performance optimizations you can do on the server side. On the front end: the fastest HTTP request is the one not made. We're dealing with high network latency, so HTTP requests are still the most expensive part of loading a website. You have to question every image, every icon, every font you use on the page, because speed is more important for user experience than design. You have to prioritize the important, above-the-fold content and try to deliver it as fast as possible.
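To give an idea of what the HTML5 Boilerplate server configs do, here is a small sketch of the compression part, assuming Apache with mod_deflate enabled (the Boilerplate itself covers many more MIME types and rules than shown here):

```apache
# Sketch of HTML5 Boilerplate-style gzip compression for text-based assets.
# Assumes mod_deflate is loaded; the real Boilerplate .htaccess is far more
# complete (charset handling, font caching, security headers, etc.).
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css text/plain
    AddOutputFilterByType DEFLATE application/javascript application/json
    AddOutputFilterByType DEFLATE image/svg+xml
</IfModule>
```

Compressing text assets like this typically shrinks HTML, CSS, and JavaScript payloads by 60 to 80 percent, which matters most on the high-latency mobile connections discussed above.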
In terms of WordPress, the key performance bottlenecks are which theme you employ and which, and how many, plugins are running. As shown before, web pages continue to grow in size, and the same goes for WordPress themes. Many of them are over-engineered and unoptimized performance-wise, with very many customization options. In the end, a good theme supports your content; a bad theme can even destroy your site. Good themes versus bad themes: good themes are slim and lean, they depend on very few resources, they don't depend on third-party plugins or libraries, they don't use images solely for design purposes, they're just straightforward. Bad themes are heavy on bulk, make heavy use of images, and make lots of requests. They come in countless flavors with many customization options, often depend on third-party plugins and JS libraries, and are in general over-engineered. So how do you choose your theme? The best thing you can do is take the theme you like and run a WebPageTest. Look out for the key metrics: start render, speed index, the number of requests, and the total payload. In general, all of them should be low. Start render should lie around 700 milliseconds at most, and the speed index around 1,000 milliseconds. This is a comparison of nine of the top ten best-selling themes on ThemeForest, along with one good one to show the difference. Theme nine performs worst. The first bar is the good theme; the yellow bars show the speed index, and the good theme lies below 1,000 milliseconds. Even the best of the bad themes has a speed index of 1,800 milliseconds, and the worst is at 7,000 milliseconds, which means you wait seven seconds until the visible parts are painted, and three seconds until you see the first pixel drawn to the screen. This is a selection of four good themes, the last one being the default WordPress Twenty Fifteen theme.
Without any optimizations, it has a speed index of roughly 1,300. Here are the good theme and the bad theme (it was number seven on the earlier slide) compared in WebPageTest metrics. The good theme starts painting on the screen at around 700 milliseconds and has finished painting at around 750 milliseconds. The bad theme starts drawing to the screen at 2.8 seconds, so you stare at a blank screen for 2.8 seconds, and it finishes at around three and a half seconds. Some more statistics: the good theme used just 27 requests, the bad theme 119, and you can see it used 25 CSS and 21 JavaScript files, all of which are render blocking and keep the browser from displaying the page. The next potential bottleneck: plugins. Plugins are the glory of WordPress, and plugins are a performance nightmare. You have to realize that every plugin you install adds additional ballast to your pages. Plugins will include styles and scripts wherever it suits them, and in the end, excessive and indiscriminate use will mess up any performance optimization you might already have done. The good plugins, however, are the exceptions. You have to deploy a caching plugin. A caching plugin takes the heavy work off the database: once a page has been queried and rendered, the response is saved in a different location, and the server just delivers the HTML alone. If you request the about-us page, the server doesn't query the database any more, it just sends the saved copy of that page. This means you have nearly no back-end latency. The next is the HTML5 Boilerplate .htaccess, which is the server configuration we just covered, and now we're going into much more detail on the W3 Total Cache plugin. What I'm showing now is an overall guide; your site will differ, but these are mandatory settings you should have deployed.
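The page-caching idea, serving a saved HTML copy instead of querying the database, can be sketched in a few lines of plain PHP. This is purely illustrative, not the plugin's actual code; the cache directory and the one-hour expiry are assumptions, and a real caching plugin also handles invalidation, logged-in users, and query strings:

```php
<?php
// Illustrative page-cache sketch, not W3 Total Cache's real implementation.
// Assumes a writable wp-content/cache/ directory (hypothetical path).
$cache_file = __DIR__ . '/wp-content/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';

if (is_file($cache_file) && time() - filemtime($cache_file) < 3600) {
    readfile($cache_file); // serve the saved copy: no WordPress bootstrap, no DB queries
    exit;
}

// Otherwise render the page normally and save the HTML for the next visitor.
ob_start(function ($html) use ($cache_file) {
    file_put_contents($cache_file, $html);
    return $html;
});
```

The whole point is that a cache hit short-circuits before WordPress even loads, which is why the first byte time drops so dramatically.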
The W3 Total Cache plugin doesn't just handle page caching; you can also set browser caching directives, fingerprint your static assets so you can give them high max-ages, and bundle and enqueue your scripts and styles. The first thing is to enable page caching. You can pretty safely go with the default settings here; only if you run into trouble should you inspect the detailed settings. The next setting is minification. Minification handles the combination and minification of all text-based assets, that is, CSS and JavaScript. You should first enable it on the main settings page and set the minify mode to manual. Then turn on HTML minification. Now you have to bundle all your styles and scripts into one file each, which is then included at a position you can control further. To resolve dependencies, you must go through your theme, collect each script and each style sheet, and provide the plugin with its exact location. Here, the first script I'm including is jQuery. The plugin goes through all the scripts, bundles them into one file, and then I can say: embed it before the closing body tag in a non-blocking way using the async attribute. You also have the possibility to use different templates here: you can say this script should be included on every template, or only on a single post or a single page. You have to do the same with your CSS: collect each URI to your style sheets and provide the plugin with them, and it will handle the minification, combination, and delivery in the head. CSS is usually included in the head, whereas with JavaScript you have the option to include it in the head or at the bottom of the page. After that, you have to remove the original wp_enqueue script calls, which are usually found in your theme's functions.php file. Next, enable the browser cache.
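Once the minify plugin serves the bundled file, the theme's original enqueue calls have to go so assets aren't loaded twice. A sketch of how that removal could look in functions.php; the handle names `theme-main` and `theme-slider` are made-up examples, and your theme's actual handles will differ:

```php
<?php
// In functions.php (or a child theme): remove the theme's own script and
// style handles so only the bundled, minified files are delivered.
// The handle names below are hypothetical; inspect your theme to find its real ones.
add_action('wp_enqueue_scripts', function () {
    wp_dequeue_script('theme-main');
    wp_deregister_script('theme-main');
    wp_dequeue_style('theme-slider');
    wp_deregister_style('theme-slider');
}, 100); // late priority, so the theme has already enqueued them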
Browser caching means that responses the browser receives can be saved in the browser cache for a specified amount of time. So you say: my CSS is valid for one year, it won't change, so you set it to one year. The plugin also allows fingerprinting: it appends a query string to the end of the request, so the browser can be tricked into seeing a new file. You have query string version one, then you switch to version two, and the browser downloads the new file even though you set a high maximum age. You can go with the default settings here as well. Importantly, enable gzip if it isn't already enabled by your hosting provider, and, as I said, enable the fingerprinting of static assets. In general you should set the Last-Modified header and the Expires header. Because fingerprinting lets us trick the browser, we can set very far-future Expires headers. This is specified in seconds; the value here means the asset is cached for one year. Gzip compression is a checkbox, and fingerprinting is enabled via the option that prevents caching of objects after settings change. The next thing is the HTML5 Boilerplate configuration, which, as I said, is a one-click install and activation. It covers 95% of your optimizations. It will also cache and gzip font files, things you usually wouldn't know about or be aware of. The next thing is Plugin Organizer. It's a plugin that lets you enable or disable plugins as granularly as the page level or post-type level. You can say: globally I turn the image carousel plugin off, and I include it only on posts that have the gallery post format. So you only add the overhead of the carousel JavaScript on that page type; it won't affect every other page. Image optimization and lazy loading. Image optimization is a very important part because, as we've seen above, the growth in page size is very much due to the heavy use of images, and of unoptimized images.
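In Apache terms, the far-future Expires headers described above look roughly like this (a sketch assuming mod_expires is available; HTML5 Boilerplate ships similar, more complete rules):

```apache
# Far-future expiry for fingerprinted static assets (mod_expires assumed).
<IfModule mod_expires.c>
    ExpiresActive on
    # Safe to cache for a full year (31536000 seconds), because a changed
    # fingerprint/query string forces a fresh download anyway.
    ExpiresByType text/css               "access plus 1 year"
    ExpiresByType application/javascript "access plus 1 year"
    ExpiresByType image/jpeg             "access plus 1 year"
    # HTML changes often, so don't cache it long.
    ExpiresByType text/html              "access plus 0 seconds"
</IfModule>
```

The one-year value only works together with fingerprinting: without it, visitors could keep a stale stylesheet for up to a year.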
If you save a JPEG at, say, 90% quality, that is far too much for the web. There are image optimization plugins that can automatically reduce the size and strip out any metadata, so you can get a potential decrease in size of 50%. Another option is to deliver responsive images, which means that on a small viewport, on your mobile phone, you request only a 300-pixel-wide version of the image, while on desktop you request the full size. Lazy loading of images is also quite important. You load the above-the-fold, initially visible images with a normal image tag and defer all the images below the fold with JavaScript, so they're only loaded when you scroll down to them. I tested this on a shared hosting environment using the default Twenty Fifteen theme. In the original test, with no page caching or anything else in place, we have 450 milliseconds of first byte time, the time until the server sends the first byte to the browser; a start render metric, the time when the first pixel is drawn to the screen, of 1.5 seconds; and a speed index of 1,500 milliseconds. That's not that bad, but it's the baseline. Next, I enabled the W3 Total Cache plugin. The first byte time decreased by nearly 50%, the start render went down by the same amount, so we're at 1.2 seconds, and the speed index decreased by 300 milliseconds too. The thing here is that we're still including the Twenty Fifteen theme's fonts. It uses two different font families, each in four variants, including the italic versions. So we have eight requests to the Google Fonts servers that block rendering, because Chrome, for example, will not display text until the font files are downloaded. That's the main reason our speed index is still that high.
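The responsive-image and lazy-loading pattern described above can be sketched in markup. The file names and widths are made-up examples, and the `data-src` attribute relies on a small lazy-loading script (typically provided by a plugin) that copies it into `src` when the image scrolls into view:

```html
<!-- Above the fold: a normal image, loaded immediately. The browser picks
     the smallest candidate from srcset that fits the viewport width. -->
<img src="hero-800.jpg"
     srcset="hero-300.jpg 300w, hero-800.jpg 800w, hero-1600.jpg 1600w"
     sizes="100vw" alt="Hero image">

<!-- Below the fold: no src attribute, so nothing is downloaded until a
     lazy-loading script swaps data-src into src on scroll. -->
<img data-src="gallery-800.jpg" alt="Gallery image">
```

On a phone this means transferring the 300-pixel file instead of the full-size one, and below-the-fold images cost nothing until the visitor actually scrolls to them.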
I removed four of the custom fonts, every italic version, so we just have the regular versions, and we go down to 1,100 milliseconds. Removing all custom fonts gives us a speed index of 800 milliseconds and a start render of 700 to 800. So we could reduce the speed index by 700 milliseconds, nearly 50%, and the total load time by 500 milliseconds. In conclusion: ensure that your server is tuned for optimal performance; use a lightweight and fast theme, and go to WebPageTest and test it; install and configure the plugins mentioned before; and constantly monitor your performance. The tools at hand are WebPageTest.org, Google PageSpeed, Chrome DevTools, and Google Analytics, which also shows you the connection times of your server. Speed optimization is not a one-time project; it's a process, and each time you add new features to your site it will have an impact on your page loading speed. You have to monitor and react over and over again. That's it for my part. I was a little bit too fast, I guess. A little bit. But we'll have plenty of time for questions. [Audience] Hi, thanks for a very useful presentation. My question is: how would you initially measure the theme speed? You said we have a couple of themes on ThemeForest; how would you measure a theme's speed before you actually integrate it and fill it with content, all the images and everything else? [Holger] I'll just go back. All of these tests are the about-us pages of the theme demos. I went to ThemeForest, where you get the URL of the demo version, navigated to the about-us page, and ran the WebPageTest. So it's already filled with content, because the demos already have sample content in them. They also have the overhead of lots of posts in the database.
[Audience] Do you have experience or advice with inlining critical CSS into the head of the page? I noticed Google PageSpeed says a lot about that, and I was wondering how to best go about it. [Holger] It is a very important step, but it's not easily done with plugins. You would have to integrate it into some kind of build process, with a Grunt or Gulp tool, and streamline it so that you inline the critical CSS and lazy load the rest. There's no easy way to do that with a plugin. You also have to watch out that you stay below 14.6 kilobytes; if you exceed that, you need another round trip on the network. On mobile, if you reckon with a 200 millisecond round-trip delay and you need another round trip, you're at 400 milliseconds before the first byte of CSS arrives. [Audience] Do you have any recommendations for such lightweight themes? [Holger] I didn't want to make any recommendations, but you can find them in the slides via the WebPageTest URLs if you want to, or you can ask me later. [Audience] I have a quick question regarding critical CSS. As I understand you, you would place the critical structural CSS inline in the head and the rest in a separate CSS file loaded just above the closing of the body, right? [Holger] Yes. There are different possibilities to do that. A style sheet link has a media attribute that you can set to screen, print, or whatever. If you set it to a value that doesn't apply to the current environment, the browser still downloads the CSS but doesn't block rendering on it, and the styles are applied after the load event. [Audience] I don't think there's an easy answer, but let's say you test a WordPress site and it loads in 4,000 milliseconds. How far do you get with the critical plugins alone, or would you optimize the code of the site or the theme itself?
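The critical-CSS pattern discussed in this exchange looks roughly like this in markup (a sketch of the loadCSS-style media trick; the file name and the inlined rules are placeholders):

```html
<head>
  <style>
    /* Inlined critical above-the-fold CSS, kept well under ~14 KB so it
       fits in the first network round trip. */
    body { margin: 0; font-family: sans-serif; }
  </style>
  <!-- Full stylesheet: media="print" doesn't match the screen, so the
       browser downloads it without blocking render; onload switches the
       media attribute and the full styles are applied. -->
  <link rel="stylesheet" href="main.css" media="print"
        onload="this.media='all'">
</head>
```

A build tool (Grunt or Gulp, as mentioned in the answer) would generate the inlined critical rules automatically from the page's above-the-fold layout.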
[Holger] How far do you get just by using the critical plugins? It depends. If you used the theme with 119 requests, you'd have 25 CSS files, meaning 25 separate HTTP requests to set up. The plugin would bundle all of that into one CSS file. If you have 4,000 milliseconds of start render, I think you can get down by 50%, because you remove a great deal of network overhead by bundling all the styles and scripts. Even with lazy loading images, because this theme is very heavy on images, I'd say you get down 50%. But web fonts are very expensive; you just have to wait until they arrive, and that's the first thing I'd cut. [Audience] Hi, a short question. Do you have experience with how well specialized caching like Varnish does against WordPress's onboard caching with W3 Total Cache or something like that? In other words, is it worth the hassle of setting up a Varnish server in front of WordPress compared to onboard caching? [Holger] I have tried it once in a development environment, but I have no real-life stats. I guess if your site is big and you have the amount of traffic that would make it worthwhile, you should try it, but I can't say yes or no. [Audience] I think it's very important not to just look at the WordPress base, because a lot of stuff is actually happening underneath WordPress. If you're on slow shared hosting, you can do all the optimization in WordPress and it will never get really fast. So I think it's also important to look at the machine you're running on, what type of Linux you're running, and to switch off everything you don't really need. And because the previous question was whether a Varnish cache is worth it: the deeper you go in your system and the better you optimize there, the more that performance will carry through to WordPress and every single plugin. So I think it really makes sense to look at your whole system and not just at the WordPress part. [Holger] Yes, that's true.
It's still the case that very much happens on the front end, so those are the lowest-hanging fruits. Personally, making changes on the server is often not possible if you're on shared hosting, so often you can only optimize the front end. But what you're saying is true. Thank you very much. [Moderator] Thank you, Holger.