All right, hello everybody. Isn't it amazing to be at an open source event with over 2,000 attendees? Yeah. How many of you have been using WordPress for over 10 years? Yeah, quite a few. Who would have thought 10 years ago that we would be such a big thing today, as WordPress and as open source in general? It's great. Can I get my slides on the screen? Cool. So especially for those in the back of the room, some of my tables have fairly small text, so you can go to this link and open the slides to see them more closely if you want to. I'm going to talk about WordPress performance today, and I'll keep this slide up for a few seconds more if you want to pick up the link. Here is also my Twitter handle. I'm later going to post some performance profiling results regarding the WordPress REST API, so please follow me on Twitter to see those results. All right, so who am I? I'm a longtime Linux user. I've been using Linux on my desktop for almost 20 years, and for almost as long I've been advocating open source. I'm a great fan of open source, both as a user and as a contributor: I've contributed to WordPress, obviously, but also to lots of other open source projects. I'm a very technical person. I am a CEO, but I'm also an admin and developer in our company, doing all kinds of WordPress and Linux related stuff on a daily basis. Our company is called Seravo. It's a Finnish company, and we're specialized in WordPress upkeep and hosting. We run lots of enterprise-grade WordPress sites on our servers, we make sure that they work all the time, and they are monitored 24/7. If something goes wrong, our staff looks at the site, and so on. WordPress problems in production are something we encounter often, and we are very experienced at solving them. So, I think there are something like 2,200 attendees here, right? WordPress is hugely successful.
We've learned that around 28% of the websites on the internet are now running WordPress. And I think the reason WordPress is so successful is that it's so easy to use. It's very easy for people to produce something useful. It's easy for an end user to install plugins and change settings, and it's also easy for a developer to make minor modifications, or to extend WordPress with all the hooks and actions. So in a small amount of time, you can get a lot of results and useful stuff out of WordPress. However, there are some common challenges related to WordPress, and people usually complain about its security and its performance. If you go online, you will notice there are lots and lots of tutorials and blog posts, especially regarding WordPress security and speed. But I don't like most of those blogs and tutorials, because most of the time they just list a bunch of things that you're supposed to do, but they don't provide any evidence for why it's actually good. And a lot of that advice is simply false. That's why I like to go around to WordCamps, talk about these issues and do some myth-busting. Some of you have seen me talk about WordPress security. Anybody here? Some. Today, I'm going to talk about speed and performance, to give some sound advice on what you can do and what is really useful. I like evidence, and I like having a good methodology. When you do speed optimization, it's much easier than solving security problems, because you always have something that you can measure, and you can measure it in an objective way. So when you are about to optimize the speed of a site, you first need to measure your current state. Then you need to find something to optimize. And when you've done the optimization, you need to validate it.
And then you do this over and over again to constantly improve, instead of just randomly trying different things without really validating whether they work or not. All right. So the first step is to find your baseline. Obviously, it doesn't make any sense to do speed optimizations if you don't validate that you actually improved. And how to do it? There are lots of online tools. How many of you have used webpagetest.org? How many of you have used GTmetrix? That's fairly popular. How about Pingdom Tools? That's also popular. So here are some of my favorite online tools you can easily use to measure the full page load. These measure lots of things: they measure how fast your WordPress is, but they also measure quite a lot of front-end related stuff, like CSS rendering, JavaScript, images and so on. Now, I'm going to talk specifically about how fast WordPress itself is, and by that I mean how fast the PHP code is in producing the HTML result that is sent to the browser. Alain gave a very good talk before me about the bootstrap process, and I'm involved in the same kind of thing: analyzing what WordPress does when it loads a page. Here you see a screenshot from webpagetest.org, and there, on line number two, you see the 244 milliseconds. In this example, that was the time spent by WordPress code. So how to start? I like command line tools, because they are easy to repeat and automate, and my favorite way to find out how fast your site currently is, is to simply curl it. This is what I would do. I would go to the server, to eliminate any possible network lag; if you are on Wi-Fi with your laptop, you can't use that for measuring. You need to go to the server, and on the server you can run this to have curl output how many seconds it took to load the HTML part of your website. Curl is a really great tool.
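The exact slide commands aren't visible in this transcript, so here is a hedged sketch of the idea (assumptions: you run this on the server itself, and https://example.test/ is a placeholder URL; the cache-busting header and the averaging loop are explained in a moment):

```shell
# Hedged sketch of the curl timing commands from the slides.
# Assumption: run on the server itself to avoid network lag;
# "https://example.test/" is a placeholder URL.
export LC_ALL=C   # standard numeric formatting for awk/printf

# Seconds spent fetching just the HTML of the page:
time_url() {
    # Pragma: no-cache asks a full-page cache to pass the request
    # through to PHP (the header a browser sends on a forced reload).
    curl -s -o /dev/null -H 'Pragma: no-cache' \
         -w '%{time_total}\n' "$1"
}

# Average over 20 requests, summed and divided with awk:
avg_url() {
    for i in $(seq 20); do time_url "$1"; done |
        awk '{ sum += $1 } END { printf "%.3f\n", sum / NR }'
}
# e.g. time_url https://example.test/
#      avg_url  https://example.test/
```

The `%{time_total}` write-out variable is curl's total transfer time in seconds, which for a dynamic page is dominated by how long PHP took to produce the HTML.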
Then, if you have a good site, you will usually have some caching, and you want to bust the cache to actually measure how fast your PHP is, and not just how fast the caching layer in front of your website is. So you need to add this HTTP header: Pragma: no-cache. This is the same thing your browser sends to the website if you press Ctrl+F5, or Cmd+whatever it is on Mac or Windows; I only use Linux. Here's a good source from Google explaining how caching works. So with this curl, you can actually measure what your PHP does every time. And obviously, doing just a single request is not valid; you need to do it multiple times. Here is my bash tip for making a small for loop. In this example, it first makes sure that your output is in a standard format. Then it loops curl 20 times, and uses another command line tool, awk, to sum the results and divide them by the number of requests. Then you get an average; in this case, an average over 20 requests. Of course, curling is a manual process, and if you want to be systematic, it's going to be a big task to curl all of your pages, because different pages might behave differently. So another source of information on how fast PHP and WordPress delivered your site is to look at the logs. If you are, for example, running Nginx, you can update your Nginx configuration to include in your logs the time it took for Nginx to deliver the result. In this example, it's the last item on the log line. Then you can analyze your logs manually, or you can, for example, use GoAccess, which is an excellent log analyzer. It even detects whether this timing is available in the logs, and then it will show you the average load time and the cumulative load time. The average, of course, will help you find requests that are always slow.
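The Nginx log change mentioned above might look roughly like this (a sketch, not the speaker's exact config; `$request_time` is Nginx's built-in total request duration, appended here as the last item on each line):

```nginx
# Sketch: a log_format with the request duration as the last field.
log_format timed '$remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" $request_time';

access_log /var/log/nginx/access.log timed;
```

A log analyzer such as GoAccess can then pick up that timing field and compute the average and cumulative load times per request.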
And the cumulative time will help you find good optimization targets, because it counts how many times each request was made and sums up all the time your server spent serving it. Optimizing those targets will decrease the load on your server and, in general, produce the fastest results. Oh, no. What happened? Oh, it was just the front machines. Now they're back. All right. So when you've done the measuring, you can start to optimize: find and solve the bottleneck. This is obviously a somewhat creative process; you need to have some kind of idea of how stuff works. But there are different levels of how deep you go into the code. The quick and dirty way is to use WP-CLI. How many of you use WP-CLI? Yay, it's great. So here's, again, a small bash script. It will list all of your plugins and then deactivate one plugin at a time, and request the site five times to measure how fast it is with that plugin deactivated. Here is a screenshot of how it looks. The baseline is around 550 milliseconds on this example site. When you deactivate WP to Twitter or Polylang, it basically doesn't change at all. However, when you deactivate Advanced Custom Fields, you can see that it drops to 65 milliseconds. So this reveals which particular plugin is really slow, and if you disable it or optimize it, you will get an improvement. Then, if you want a little more depth, you can use Debug Bar. How many of you use Debug Bar? Yeah, quite many. Good. For Debug Bar there is a plugin called Slow Actions, which reveals a new page in the debug bar listing the functions that took the longest time. That is one way to find something. However, my own favorite, and the tool that goes as deep as possible, is Xdebug. How many of you have already used Xdebug? Yeah, quite a few. All right. It's a tool for PHP developers to analyze what PHP is doing.
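That plugin-bisecting script from a moment ago isn't in the transcript, so here is a hedged sketch of the same idea (assumptions: WP-CLI is available on the server, https://example.test/ is a placeholder, and five requests per plugin are averaged, as in the talk):

```shell
# Hedged sketch of the WP-CLI bisect: deactivate one plugin at a
# time, time the front page, then reactivate the plugin.
export LC_ALL=C

profile_plugins() {
    url="$1"
    for plugin in $(wp plugin list --status=active --field=name); do
        wp plugin deactivate "$plugin" > /dev/null
        avg=$(for i in $(seq 5); do
                  curl -s -o /dev/null -H 'Pragma: no-cache' \
                       -w '%{time_total}\n' "$url"
              done | awk '{ sum += $1 } END { printf "%.3f", sum / NR }')
        echo "$plugin: ${avg}s"
        wp plugin activate "$plugin" > /dev/null
    done
}
# e.g. profile_plugins https://example.test/
```

A plugin whose deactivation makes the average drop sharply is your optimization target; the others can be ruled out quickly.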
So basically, Xdebug instruments every single function, and then calculates how long each function took and which function is calling which function. This is how to install it on an average Debian or Ubuntu-based system: there is a package, and then you enable the configuration. This is how I like to do the configuration: I have this line, xdebug.profiler_enable_trigger = 1, which means it's not going to profile every page load, only the ones where you have the specific GET parameter, XDEBUG_PROFILE. So when you have it enabled, you can add this GET parameter; you can see this curl example, and that will trigger the profiling. When the profiling runs, Xdebug will write out a file, which is basically a text file. It lists all the functions, what functions they called, and how much time they spent. It's huge; you can't analyze it manually in any way. So the second tool you need as a friend for Xdebug is some graphical tool to analyze Xdebug's output, and my favorite is Webgrind. Installing it is as simple as this, and to get the pictures of the functions you need to install Graphviz. If you use VVV or our Vagrant environment, there are simple commands to enable Xdebug, so you don't need to do this configuration manually. All right. When you have Xdebug and Webgrind going, you can start with the profiling. You run the profile, and then you open the result in Webgrind. How many of you have used Webgrind already? Not that many. You can also open the Xdebug profiling files in other tools, but my favorite is Webgrind, so I'm going to explain how it works. Basically, it shows you a table. It lists all the functions, how many times each function was called during that execution, and how much time was spent inside the function itself. That is the total self cost. And then, how much time was spent in that function and in all the functions that function called.
That is the total inclusive cost. It also has convenient links. You can see these very small icons with black and red lines: that is a link directly to the source code, so you can view what the function is and does. In the Webgrind UI there is a filter function; you can type in words there to find specific functions. Some good search terms are load, open, curl, and query. They all relate to external requests or heavy internal functions, which are the usual suspects. Then there is the show call graph button, which shows you in a visual way how, in the beginning, there is the main function, and then which functions it calls and how much time is spent in them. Those who saw the previous talk will recognize that it's the bootstrapping process you see at the beginning of this call graph. If you look at the whole call graph, it will help you understand how the code flows, and maybe figure out ways to completely avoid certain code paths. Right. What about typical issues and their solutions? Obviously, the most common bottleneck for WordPress is the database. Quite often you run into a situation where your application is doing something stupid: it's calling the database too many times, or it's making a query for which there is no index, so the query is very slow. These kinds of things are easy to spot with Xdebug. In this particular example, the text is quite small, but you may be able to see that the wpdb query function is called an average number of times, yet its total call cost is 40 milliseconds in this example, and that's unusually big. When you click on that function, it shows you which functions call it. So you can trace one step upwards and notice that there is a function from the Avada theme that clears Twitter widget transients and takes 40 milliseconds alone; that is the root cause of the slowness.
Then you can go and fix that, and you're done. How many of you are using WPML? How many of you are using Polylang? Yay, Polylang is in the majority in this room. Quite often when I profile sites, I find that the root cause of the slowness is WPML; that's a very common cause. Polylang is much smaller. It has maybe fewer features, but it's certainly significantly faster, it has a newer code base, and it's fully open source, so it seems to evolve in a smarter way. Another common situation is that during the page load, the PHP code makes an external HTTP request, and that is obviously a very bad thing to do, because your site will never be able to load faster than the external server responds, since you are always querying an external server. You can find, for example, curl_exec functions being called during your page load; those will always take hundreds of milliseconds. And there's a very easy way around them: transients. How many of you use transients? Please, everybody, go and learn how to use transients. It's super simple. Here is an example. There was a plugin on one site that was doing an external curl on every single request, which was incredibly wasteful, and the way to get around it was to wrap the call in a transient, so the request is made only once an hour, and the result is saved in a transient and reused from there. And sometimes you have nasty cases where your site occasionally loads really slowly, but not on every request. How do you hunt down those rare beasts? This is how. Once again, we go back to bash scripting. You do a for loop that requests the site 100 times in a row, and then, when you look at the profiling results, you will notice that one of the files is unusually large. That is usually the one that did something that is not done on every request, but when it happens, it's really big and slow.
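A sketch of that hunting loop (assumptions: the Xdebug profile trigger described earlier is enabled, profile files are written to /tmp, and the URL is a placeholder):

```shell
# Fire many identical requests, each carrying the Xdebug trigger
# parameter, so that every one of them gets profiled.
hunt_slow_requests() {
    url="$1"
    n="${2:-100}"
    i=1
    while [ "$i" -le "$n" ]; do
        curl -s -o /dev/null "${url}?XDEBUG_PROFILE=1"
        i=$((i + 1))
    done
}
# e.g. hunt_slow_requests https://example.test/ 100
# Then sort the profile files by size; the occasional slow request
# shows up as a much larger cachegrind file:
#   ls -lS /tmp/cachegrind.out.*
```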
That is the way to find it: just do 100 requests in a row, and hopefully one of those requests will be exceptionally slow, and you will detect it by the file size. All right. So these are ways to get insight into what your code is doing. They are very easy to use, and graphical, so you can browse around, play with them, and find potential optimization targets. Once you've done your optimization, please validate that it actually did what it was supposed to do. And then do it over and over again. All right, so then a quick demo at the end. How fast can we make Twenty Seventeen? The first step: measure the baseline. This is what Twenty Seventeen looks like from a profiling point of view. Most of the time is spent in gettext functions, because this example site uses the French locale, and therefore it's doing gettext lookups, and lots of time is spent in them. The reason is that WordPress wants to be as universal as possible, so that you can install it anywhere. So instead of using the native system gettext, it has its own PHP implementation of gettext, and that's really slow. There's a ticket about it; it's many years old. We still don't have... For example, I hope the bootstrapping project could take this into account, and one of the optimizations would be to check whether native gettext is available, and if it is, use that instead of the fully PHP-implemented custom one. There is also a quick solution: a plugin called Dynamic MO Loader, which uses WordPress's pluggable functions to override the built-in gettext functions, and it only loads the translation files that are really needed for that page view. When it's installed, you can easily validate the change: before, you can see the built-in gettext functions being really slow in action, and after, you see the new overriding functions being much faster. The difference is in the hundreds of milliseconds. So that's it.
And we have a demo site, if you want to see what Webgrind looks like when you browse around, and we also have a demo project with some small optimizations; you can look at the git commits to see what was done there as an experiment. All right, so remember: in production, you can easily log with Nginx how long requests take, and you can analyze those logs to find production issues. Xdebug should never run in production, because it's very, very heavy; it makes the page load maybe five or ten times slower than normal, because it instruments every single PHP function call. There are also other tools, like XHProf, developed by Facebook, which has a sampling mode you can use in production. However, those projects seem to be a little bit stale, and there's even an older project from 2004 that's certainly dead at the moment. And last but not least, remember that PHP has two built-in functions you can also use to measure where you are in the execution: microtime() will do that, and with memory_get_usage() you can also see how much memory is in use at any given moment. All right, thank you. Thank you very much, Otto. Are there any questions? I would have liked to ask a question, but... I have one question. Is the Xdebug developer in the room? Nope. He should be attending. Does anybody want to ask a question, Otto? There's a hand up. There's one. There are mics here in the pathways. Please walk up to the mic and ask the question, so people on the live stream can also hear your questions. And while you are walking up to the mics, I just want to say that Xdebug is a great tool, and profiling is just one thing it does. It does lots of other things too, so please read up on the Xdebug documentation and play around with it. Okay, so is Xdebug itself... You said you needed a GUI. Was that like an external thing, or is that like part of the... Xdebug only produces the profiling files. So you need to open the profiling files.
So this is how it looks: it just produces text files, which are really big, and then you need some external software to open those text files. The easiest way is to use Webgrind. Webgrind, okay. Yeah, there's how to install it. Thank you. Thanks for your talk. You mentioned switching from WPML to Polylang, and I saw that there is a WPML to Polylang plugin. Have you used it? How well does it work for big sites with many posts? I don't personally have experience with that one; this was just an opinionated thing I wanted to throw in for people who are interested in performance. You can easily find these details about Polylang online. Thanks. How would you measure requests to WP AJAX? To AJAX. So the way you measure the request is that you add this... Let's see if I... Yeah, so you add this after your request. This part can be anything, and the curl can also be anything, so you can change it to whatever you want. You can put the AJAX endpoint here, and you can also add a POST payload to curl to do the thing you want. Just read up on the curl documentation on how to make the request you want, and add this at the end. When you run it, a new file will appear in the folder where you have configured Xdebug to write its output. All right. Thanks for your talk. I have a suggestion on how to measure outgoing connections: a plugin called Snitch. It will monitor every call that uses wp_remote_get or wp_remote_post, and you can even disable calls per function. All right. Do you know of it? No, I haven't used it. You could also just grep for curl or wp_remote_get in your code. But that's a... Thanks for the tip, Snitch. Any more questions? No? Oh, there is one question. Please come to the mic, sir. I checked your website, and I noticed that you are using Redis. So maybe could you tell us a bit about Redis and how it connects to the site optimization which you do, and, you know... Sorry, what did you say?
Redis, the object caching. Redis. Redis, that's it, Redis. Yeah, thank you. So that's a topic for another half-hour or hour talk. But it's my personal favorite, and you can easily activate it for WordPress if you have a server environment where Redis is available. Then you will also notice in the Xdebug profiles that there are fewer database lookups going on. And obviously, using transients is useful even if you don't have Redis; but if you have Redis, then the transients will be super fast. Yeah, thank you. All right, thank you very much. Otto, big applause for Otto, please.