My name is Otto Kekäläinen and I came all the way from Finland to enjoy the snow-free land you have here, which is very warm and nice for me. So this is the last talk for today, but it's packed with useful information. How many of you are developers? Wow, almost everybody, so this is super important, useful and concrete information for developers. I have already published my slides online, and you can see my Twitter handle there. So if you follow me on Twitter, you can find the links to the slides. You don't need to take any notes or anything; you can just look at the slides, which are already online. All right, so I've been using Linux since about 1999, I love open source, and I like going around the world to conferences teaching best practices. I also like to contribute and participate in open source, and I call myself a full stack developer, meaning that I code in JavaScript, in PHP and a few other languages, and I also participate in improving things like Redis, Nginx, MariaDB and the Linux kernel. So I'm a true full stack developer. Currently I work as the CEO of Seravo, as a very technical CEO. Seravo is not a digital agency; it's a pure hosting and upkeep company with a background in Linux server maintenance. At Seravo we have about 1,500 customers, and we manage their enterprise-grade WordPress sites, taking care of security and performance and everything so that they run smoothly, and we have lots of experience with scalability and performance. So, the background of WordPress: it's hugely popular.
I just read that something like 29% of all websites online currently run WordPress. The success factors, at least I think, are that WordPress is very easy to use for the end user, at least if you compare it with other publishing systems like SharePoint or Joomla or Drupal; WordPress is very easy to use compared to those. It's also very easy from a developer's point of view: if you want to make your own custom theme, at minimum you can create just one style.css file and then you have your own theme. It's very easy to take WordPress, adapt it and use plugins, and it's quite easy for a PHP, HTML, CSS or JavaScript developer to make their own themes and get WordPress to do useful things. So easy to use and easy to extend are, in my opinion, the reasons why WordPress is so popular. Then there are also some very common problems related to WordPress. One is security. I'm quite sure all of you have seen lots of news here and there for a long time about security issues in WordPress. At the moment I don't think WordPress core has security issues, but traditionally this is something people are very concerned about, and something newspapers like to write about. The second common challenge with WordPress is performance. Quite often you see developers install like 60 plugins on their WordPress site and ruin the performance. So these are two fields that need education and maybe some technical improvement as well. These are also two fields where, if you go online, you will find endless amounts of guides, tutorials and blog posts, usually titled something like "10 things to do to improve your WordPress performance" or "10 things to do to fix your WordPress security". Most of the time they recommend that you install more and more plugins, and I don't like those, because most of them contain wrong advice.
Those articles are not written to truly serve the reader; they have been written just as clickbait or something like that. I've done lots of myth-busting around WordPress security with a talk that showed a sensible way to approach WordPress security. By the way, the timer is not on here, so I still have 30 minutes left. But today I'm not going to talk about security; I'm going to do myth-busting on WordPress performance issues. So instead of just recommending 10 plugins to install, after which your WordPress performance is supposedly solved, I'm going to explain my approach to WordPress performance. The most important thing is that you need to be able to measure the performance. You can't just randomly read some blog post that recommends 10 things, do those 10 things and then expect that you have solved the problem. If you're very lucky it might solve your problem, but usually it won't, so you shouldn't follow advice like that. Instead you should have a systematic approach: first you figure out how to measure the speed of your site, then you figure out something to improve, and then you measure again to validate that you actually improved the performance, that you actually made a change that matters. And then you do this over and over again: rinse and repeat. So step one is to measure. This is the most important step, because people usually completely ignore it or measure the wrong things. Before you start improving anything, you need to establish a baseline: how fast is your website at the moment? Sometimes when people read those "10 things to improve your WordPress performance" blogs, they even make their sites perform worse, but they don't notice, because they didn't measure before and after.
But what you should do is measure before and after. There are lots of easy-to-use tools online. How many of you have been using webpagetest.org? That's my own favorite. There are also lots of others, like GTmetrix and Pingdom Tools. These tools are easy to use: you don't need to install anything, you just go to a website. They're free to use, at least the basic versions, and they do a nice job of visualizing what they find on your website and what they measure. These all measure the full page load. But when I'm talking about WordPress, we are actually not interested in the full page load, because that includes CSS and JavaScript and such things which are not specific to WordPress. What is specific to WordPress is how quickly the WordPress PHP code base can produce the HTML file which is sent to the browser. If you're using, for example, WebPageTest, you can see here the arrow: that's the step you're interested in. WordPress and the PHP code are responsible for how many milliseconds that step takes. That's the thing to measure. My favorite way to quickly measure this is to use curl. How many of you know curl? Wow, quite a lot, good. It's a simple command line tool and you can use it from your own laptop, or even better, you can SSH in to your web server and use curl from there, because if you measure with curl from your own web server, you completely eliminate network lag and you measure purely how fast your own server and the PHP are. And this is the command to run. Is somebody familiar with a command like this? Did you know that you can use curl to measure the speed of a response? A few hands are rising, so to most of you this is a new thing. Good. This is how the command looks, and you can later look at the slides online and copy-paste this command for your own website.
So what this command does: it runs curl, it redirects the actual output to /dev/null so it doesn't show you the HTML, and the only thing it prints out is the total time it took for curl to connect to the server and download the content. Sometimes, if you have a professional hosting environment with front proxy caches and such, your site might not always be served from PHP when you're querying it with curl. If you have a proxy in front, you can add this Pragma: no-cache header: you put a capital -H and then the HTTP header text there, and then it will bust all the caches and always fetch the result from PHP on the server. This Pragma: no-cache is a standard thing; for example, if you press F5 in your browser, the browser sends Pragma: no-cache to the server when it reloads a page. Then of course you might have some random variation: there might be something in your code that is sometimes executed and sometimes not, so the page generation time varies, or you might have a busy server where other processes interfere, so you get variable results. To get a result which is reliable and easy to compare, you need to run it for a while and then look at the average. Here is a simple Bash loop to do that. First you set the standard US locale, so that you get your numbers with a decimal point in a standardized way. Then you have a for loop which loops 20 times; you can change that number to whatever you want. It runs the same curl example I just showed, over and over, 20 times. When it's done, it pipes the result to awk, which is a command line tool for doing calculations, and this adds up the times and calculates the average. In this example the average was 137 milliseconds.
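The curl command and the averaging loop described above can be sketched like this. The exact slide text is not captured in the transcript, so the URL and the helper function names are placeholders:

```shell
#!/bin/sh
# Time a page with curl and average several runs with awk.
export LC_ALL=C   # standard locale: decimal point, not comma, so awk can parse

# Print only the total transfer time for one request; the Pragma: no-cache
# header asks any front proxy cache to pass the request through to PHP.
measure_once() {
  curl -s -o /dev/null -H 'Pragma: no-cache' -w '%{time_total}\n' "$1"
}

# Read one time-per-line from stdin and print the average.
average() {
  awk '{ total += $1; n++ } END { printf "%.3f\n", total / n }'
}

# Run 20 measurements and average them, as in the talk's 137 ms example.
measure_avg() {
  for i in $(seq 1 20); do
    measure_once "$1"
  done | average
}

# Usage: measure_avg https://www.example.com/
```

Running this over SSH on the web server itself, as suggested in the talk, removes network latency from the numbers.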
Another thing: using curl obviously requires that you know the exact URL you are testing, and most of the time you are probably just testing the front page. Sometimes you might have a performance issue that is not visible on the front page but is related to producing some subpage. To catch that, you can either do some scripting magic to loop over all your pages with curl, or if you have a good server environment, you can, for example in Nginx, log the request time in your HTTP access log. Here's an example: if you log like this, you will see how many milliseconds Nginx took to deliver the result, because here's a custom additional field in the Nginx configuration. How many of you are using Nginx? Maybe a little over half. How many of you are using Apache? Yeah, that's very popular as well. My own favorite is Nginx, and I think Apache has something similar, but this example is for Nginx. You can log real, actual usage data on how fast your different pages were delivered to visitors. And if you have a log like this, you can then further analyze it with something; my own favorite is GoAccess. If you analyze an access log with GoAccess, it will print out these columns which are zoomed in here. This is the average time it took to serve any given URL on your site, and then there is a column called cumulative time, which adds up, for the same URL, all the time your server spent delivering that URL. In this example the table is ordered by the cumulative time spent delivering certain pages, so you can find which pages consume most of your server time, and then you can focus on the top ones and optimize them to decrease your total server resource usage. How many of you have heard about GoAccess before? Yeah, a few people.
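The Nginx configuration described here might look roughly like the following; the format name and file path are illustrative, while `$request_time` is Nginx's built-in variable holding the time spent serving the request, in seconds with millisecond resolution:

```nginx
http {
    # The default "combined" format plus one extra field: $request_time
    log_format timed '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent" $request_time';

    server {
        access_log /var/log/nginx/access.log timed;
        # ...
    }
}
```

GoAccess can then be pointed at this log; if I recall correctly, its `--log-format` option has a `%T` specifier for a time-taken field, so the custom format can be matched.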
So this is pretty neat. All right, so those were some ways to measure how fast your PHP is. The next step is to optimize. Here's a quick and dirty way to do it. How many of you use WP-CLI? I think everybody should raise their hand next year, because it's great: you can do lots of things with it very quickly, and you can script it. Here's an example of scripting WP-CLI. Sometimes the project doesn't have a budget to improve the PHP code; you just want a quick and dirty improvement in speed, and with this WP-CLI loop you can deactivate one plugin at a time and then measure how fast the site is. It lists all the plugins that are active on your site, echoes the name of each plugin, deactivates it, runs curl five times, activates it again, and then continues, meaning it takes the next plugin and does the same thing. The output will look like this; this is the script running. Here we have one plugin that got deactivated, curl is run five times, and you can see that it takes about 500 milliseconds to load the page. Then that plugin is activated back again and it goes to the next plugin, Advanced Custom Fields Pro in this example; when that is deactivated, it takes 60 milliseconds to load the page. Then this plugin is activated again, the next plugin is deactivated, and it's back to about 500 milliseconds. So this reveals that deactivating this one single plugin significantly decreases the loading time of this page. This is a quick and dirty way to find your slowest plugin. Then, if you want to go more in-depth, you can use the debug bar.
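The WP-CLI loop described above could be sketched as follows. This is a quick-and-dirty script for a staging site, assuming `wp` (WP-CLI) is installed; the URL is a placeholder:

```shell
#!/bin/sh
# Deactivate one active plugin at a time, time five page loads with curl,
# then reactivate the plugin and move on to the next one.
bisect_plugins() {
  url="$1"
  for plugin in $(wp plugin list --status=active --field=name); do
    echo "== $plugin =="
    wp plugin deactivate "$plugin" > /dev/null
    for i in 1 2 3 4 5; do
      # Print only the total load time for each request
      curl -s -o /dev/null -H 'Pragma: no-cache' -w '%{time_total}\n' "$url"
    done
    wp plugin activate "$plugin" > /dev/null
  done
}

# Usage: bisect_plugins https://staging.example.com/
```

A plugin whose deactivation drops the time from ~500 ms to ~60 ms, as in the talk's example, stands out immediately in the output.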
On WordPress.org there's a whole bunch of plugins with "debug bar" in their name. You can install them, and after installing them you get this debug bar, plus a varying number of panels added by the different debug bar plugins, which tell you different things about your site as it loads. For example, the plugin called Query Monitor is quite popular and widely used. However, it has the problem that while it tells you what function or what part of the site is slow, it quite seldom translates into something actionable, something that you could actually fix. The best and most precise way to find bottlenecks in PHP code and in WordPress is to use Xdebug. How many of you have heard about Xdebug before? Wow, almost everybody. Well, I hope you will learn something new anyway. Xdebug is a tool which instruments PHP in such a way that every time a PHP function is called, it logs which function was called and how long it took to execute. Visually this means that when you request a WordPress page, it first loads index.php, and inside index.php it runs lots of functions, and each of those functions runs lots of other functions, and you get this execution graph: you know which function called which function, how many times, and how long it took. Here's an example of how to install Xdebug if your server is running Debian or Ubuntu. One note: this makes your site very, very slow, because it instruments every single PHP function, so don't do this in production. This is something you should only do in a staging or development environment.
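On Debian/Ubuntu the install step is typically a single package (`sudo apt-get install php-xdebug`), and the profiler settings described next looked roughly like this in the Xdebug 2 era, which this talk predates Xdebug 3 by; in Xdebug 3 these were renamed to `xdebug.mode=profile`, `xdebug.output_dir` and `xdebug.start_with_request=trigger`. The file path is illustrative:

```ini
; e.g. /etc/php/7.0/mods-available/xdebug.ini (path varies by PHP version)
zend_extension = xdebug.so
; directory where the cachegrind profile files are written
xdebug.profiler_output_dir = /tmp
; file name pattern: %t = timestamp, %p = process id
xdebug.profiler_output_name = cachegrind.out.%t.%p
; only profile when the XDEBUG_PROFILE trigger is present in the request
xdebug.profiler_enable_trigger = 1
```

After restarting PHP-FPM, appending `?XDEBUG_PROFILE=1` to a URL triggers a profile file for that single request.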
So first you install this PHP package, then you need to edit its config file and enable it, and make sure you have these settings in place: this is the directory where your profiling files go, this is the file name pattern they use, and here is the enable-trigger setting, which is very important; I will show you soon what the trigger is. Once you have updated the configuration, you just restart PHP and it's active. Then, to profile a page, all you need to do is append this GET parameter to the request. This is the trigger: if the request has this GET parameter, Xdebug will activate and generate a profiling file. When you run this, you will notice that the page is much slower than usual, and that in this location every request generates a log file. This log file is a text file; you could technically read it yourself, but that's very inconvenient, so what you want to do is install some kind of tool that makes a graphical representation of the log. My favorite is Webgrind, because you can install it on the server right where you are doing the profiling, and you don't need to do anything special. It's very easy to install; here are the commands: you just clone the Git repository (it's PHP), and then you install some libraries for it so it can draw graphs. If you are using the WordPress core development environment, Varying Vagrant Vagrants (VVV), then Xdebug is already pre-installed there, and if you are using some other Vagrant setup, for example our Vagrant, Xdebug is easy to enable there too. All right, so after you've set up Xdebug, you start profiling, and this is how Webgrind looks when you open a profile file. Here is the selection where you choose which profiling file you want to analyze, and here you can choose whether to show milliseconds or percentages. In this example I've chosen milliseconds, so these columns show how many milliseconds these things
took. This percentage here filters out some of the more minor PHP functions so there are fewer results; it's not always that relevant for profiling, but here I've set it to 98%. The table has a few columns. This color here is not an indication of speed in any way; it's just an indication of the function type. Here is the name of the function, and then here is a sub-table of all the functions that this function calls. Then you have the invocation count, which is how many times the function was triggered; the total self cost, which is how long PHP spent inside that function itself; and the total inclusive cost, which is how much time PHP spent inside that function and all of its children. So this is how you track around: your goal is to find a function with a high total self cost, meaning PHP spent a lot of time inside that function itself, because that's what you want to optimize. Usually, on a clean WordPress install, you will see lots of these translation-related functions. In Webgrind there is also a search field you can use. So you can either sort the entire table by the total self cost, which shows the slowest functions at the top, or use the search feature to find some usual suspects, which might not be the absolute slowest functions you have, but which are often slow and sometimes quite easy to optimize. Some typical words to put there are "load", "open", "curl" and "query": then you will get lots of results related to file opening, and especially if your code is opening files from an external server via load, open or curl, it will show up. Here in the UI you also have this "Show call graph" button, and if you press it you get this call graph, where you can see in a visual manner how your code behaves. This is actually quite good not only for profiling but in general for understanding what your code is actually doing,
because WordPress has so many hooks and actions and whatnot that you might not have a clear picture of what your code is actually doing, and by looking at the call graph you will learn what it does. The colors here represent how much time is spent in each function, and the color and width of each line represent how many times the call was made. So when you go profiling, what are some typical issues and how can you solve them? Here is a profiling result table sorted by total self cost. I don't know if this is percentages or milliseconds; probably percentages, because the numbers are so low. You can see that the MySQL query function has the highest total self cost, but that is actually quite logical, because the database is most often the bottleneck in an application. So when you look at your code, the database functions should be at the top; that is normal behavior. If you look at the other functions up here, you see the slowest ones, but it's perhaps not obvious what to optimize. Here I've looked at the wpdb function get_results to see who is calling it: I can see what this function calls and who has been calling it, and from there I find a function called Avada upgrade clear Twitter widget transients. There's actually a very convenient button here: when you click this icon, a new window pops up showing the exact line in the code where this function lives, so you can look at what it's doing. When I did this, I noticed this function, went looking at the code, and found that the Avada theme did an upgrade check on every page load, which was completely unnecessary. So I just commented it out, and the page loaded much faster. Here's another typical thing: WordPress ships with its own gettext implementation, which is pretty slow, so quite
often in profiling you see translation-related functions high up. If you install the very popular translation plugin WPML, it also usually ends up quite high, and it does a lot of database queries and other things which the WordPress built-in translation functions don't do, so it usually leads to a slowdown of the site. If this is an issue, I recommend Polylang, another translation plugin; it doesn't have all the features of WPML, but it's significantly faster. Then here's an example with curl_exec at the top of the profile. You probably can't read the text that well, but it says php::curl_exec, and you can trace down who is calling it. All of these are links you can click, so if you see a slow function here, you can click through to the function it was called from, drill down to the function that initiated it, and find the problem. In this case the problem was this line of code, which was slow, in a plugin called Leads, and the way to fix it was to use WP transients. You couldn't just disable this API call, because it's relevant to the daily function of the plugin, so what I did was wrap it in a WordPress transient. How many of you know what a WordPress transient is? This is something every WordPress developer should learn; it's super easy. It's the built-in caching system in WordPress, and it basically has just two functions, so it's super easy to use. One is set_transient: you give it a name, it's just key-value — here's the key, here's the value — and then the expiry time for that data. The second function is get_transient, which fetches by the key name. So get_transient and set_transient, and this is how you use them: instead of always calling something that is slow, and that might not even update every time, which makes it a good target for caching, you wrap it. You just first
check whether you have it cached already; if you do, you use that value instead, and only if the cache is empty do you make the actual call to the external server and save the result. The next time this page is requested, if it's within one hour, it will find the cached version. Sometimes you have a piece of code that doesn't run on every single request. What you can do in that case is run your Xdebug profiling in a for loop; in this example I've done it a hundred times, printing out how long each run takes. You will see that most of the time it stays at the baseline, but every now and then it is slower for some random reason, and if you then go looking at the profiling files, you will find some files which are significantly bigger than the others. Those are the runs where the code did something special and unusual, and you then profile those files to find what was special in that PHP run. All right, so then you can remove some code, limit some code, use a transient or whatever. Once you've pinpointed the exact function that is causing the slowness, you try to fix it some way. You make a change, and when you've done that, you validate: did this actually make the site faster or not? You use the same measuring tools I already showed you, and you do this over and over again. It's usually not done in one pass; you'll probably do the measuring and validating ten or twenty times before you're done optimizing a site. All right, do you have any questions at this point? I'm not finished yet, but if you have any questions at this point... nobody's raising their hand. All right.

Hi, you just showed a transient solution that you implemented; I have two comments on that. First of all, you can't be sure about the right duration for the transient expiration. I don't know what this fetched webform data is or how often it is refreshed, but maybe within that hour you might
lose valuable information. That's the first thing. And second, this is a plugin developed by another developer, so you would need to redo this change every time the plugin releases an update; how efficient is that?

Yeah, so the best way is that you do this once and then you send the patch to the original developer. In this plugin's case it wasn't using transients anywhere, so apparently the original developer didn't know about transients at all. So you make your patch, you send it to the original developer, and hopefully they will adopt it, and then when you update the plugin, the next versions will have it built in. And if the decision to update once an hour was wrong, the original developer will hopefully make a better decision there. But of course it's a good point: don't just go around poking at your plugins, because everything you change will be lost the next time you upgrade the plugin. But if you have a performance problem on your site and some plugin is causing it, then you need to fix that plugin, to validate that it really was that plugin that was the root of the problem. And when you've done that, you can, for example, send the patch upstream. That's the good part of open source, and hopefully your plugins are open source with open-source-friendly developers behind them. Here's a quick example of this methodology in practice: how fast can we make the default Twenty Seventeen theme? First you profile it and sort the table by total self cost, and you can see all these translation-related built-in functions. When you look at the code itself and do some research, you will find that WordPress doesn't use native gettext, which is a standard Linux/Unix library; instead it has its own gettext implementation written completely in PHP, which doesn't have any caching and is somewhat naive. There's actually a solution for that: a plugin made by Aucor called Dynamic MO
Loader, and instead of doing the naive things the WordPress built-in gettext does, it smartly loads only the translation files you actually need, and it also caches them. This is the profiling before, and then after installing that plugin; I used Composer to install it, because it's not available on WordPress.org. Afterwards you can see that the same functions either don't exist anymore or are much faster. This is how you validate that your overall speed improved: you can see that the functions actually behave in a different way, in the expected way. The code is in my GitHub account: a default Twenty Seventeen installation which is slightly optimized. So, a few things to remember. Now you know how to measure the response time with curl, but you might not know which page is the slowest one, so the Nginx access logs with the response time are a very good way to find out: you don't need to measure anything, you just read the logs and see which URLs are slow. And remember: never run Xdebug in production. If it's active, it makes PHP much slower. There are some other projects, like XHProf and uprofiler, which you can run in production, because they are not active all the time; they are based on random sampling of certain PHP requests, so most production PHP requests stay fast. But these projects are somewhat defunct at the moment; let's see what happens in this space. I think some companies are picking up the development and making new versions, but there's a lot of turmoil in that area right now. Also, a few built-in PHP functions are good to know. With memory_get_usage you can insert a call at any point in your code and see how much memory PHP is using at that point, so you can use it to debug what's happening in
your code. You can also sprinkle microtime calls around your code to print to your log, with microsecond precision, what the time was at that point in the code, so you can see which line you've reached and how much time it took to get there. All right, that was it. Thanks for listening. You can find these slides online, and you can go back home and do some profiling tomorrow and test this out. Do you have any more questions?

What's your opinion on cache plugins? I noticed that you haven't mentioned those. Should we use them, or just optimize the WordPress installation to use, let's say, fewer plugins than it used to?

Yeah, so on our servers we don't have any caching plugins at all, because when you add more plugins to your WordPress installation, it slows down; you are running more and more PHP code. The sensible thing is to have some kind of proxy caching, but to do it outside of WordPress. For example, if you are running Nginx, it has great built-in caching to use. So I would recommend staying away from caching plugins and using something that works at the C-code level instead, which is like an order of magnitude faster than anything that happens in PHP. If you are doing it in PHP or inside WordPress, you have already lost the performance game in that sense. But one thing in WordPress which you should install is the object-cache.php drop-in. I showed WP transients in my talk; by default WordPress saves them in the database, but if you install the object cache drop-in, you can save them in, for example, Redis, an in-memory-only key-value store which is very fast. That's something I would use, but I would not use anything like W3 Total Cache or similar plugins; they are just too slow.

When fixing optimization problems, have you noticed any common patterns arising?

Well, in general I would say that currently the good thing
with WordPress is that it's so easy to make a plugin and get something useful out of it. But it's also sometimes too easy to do something with WordPress: people might just throw together some code and start using it, and then a third party might think it's actually tested and well-written code. What I would like to see, for the entire ecosystem to evolve, is some kind of quality assurance in the WordPress.org plugin repository. Currently, when you submit a new plugin to WordPress.org, there's a human who makes a review, but for subsequent uploads there's no review, no automatic testing, no quality assurance, no performance testing, no static code analysis or anything. If you want to know how I would do this, you can come to WordCamp Jyväskylä in Finland in February, enjoy the snow there, and listen to my talk about automatic quality assurance for WordPress plugins.

Can you give us some hints?

Well, for example Travis CI: it's a continuous integration platform that's free to use for open source plugins. You can look at my presentation; I had links to my Twenty Seventeen optimization, and there's a whole WordPress project in that Git repository, including Travis CI files that do automatic testing on every commit. But I haven't built automatic performance testing for WordPress plugins and commits yet. All right, thank you!
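To make the transient pattern discussed in the talk concrete, here is a minimal sketch. The function and key names are hypothetical, and small in-memory stand-ins for `get_transient()`/`set_transient()` are included only so the sketch also runs outside WordPress; inside WordPress you would use the real functions and drop the stubs:

```php
<?php
// Minimal in-memory stand-ins so this sketch runs outside WordPress, too.
if (!function_exists('get_transient')) {
    $GLOBALS['__transients'] = [];
    function get_transient($key) {
        return $GLOBALS['__transients'][$key] ?? false;  // false = cache miss
    }
    function set_transient($key, $value, $expiration) {
        $GLOBALS['__transients'][$key] = $value;  // expiry ignored in this stub
        return true;
    }
}

$GLOBALS['__remote_calls'] = 0;

// Hypothetical slow operation, standing in for e.g. a curl call to an API.
function fetch_remote_stats() {
    $GLOBALS['__remote_calls']++;
    return ['subscribers' => 42];
}

// The pattern from the talk: check the cache first, and only on a miss
// make the slow call and store the result for one hour.
function get_cached_stats() {
    $stats = get_transient('my_remote_stats');
    if ($stats === false) {
        $stats = fetch_remote_stats();
        set_transient('my_remote_stats', $stats, 3600); // one hour
    }
    return $stats;
}
```

With this wrapper, repeated page loads within the expiry window reuse the cached value, and the slow remote call runs at most once per hour.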