So, welcome to this talk about the PHP OPcache, the realpath cache and preloading. We will be talking today about how we can tune our PHP installation, and we will do that by tuning PHP settings. New PHP versions always get faster, but we can make them go even faster just by changing the things we get for free. If we take a new version, they figure out some way of doing memory management a bit better, so we get faster PHP versions. We saw that from 5 to 7, and every minor version since 7 got a little bit faster. So we have that straight out of the box, but we can improve it even more just by fine-tuning the settings of these new features. First some groundwork. Who doesn't know PHP? There's one. Okay. PHP is a scripting language, as you will know, and basically it's a fire-and-forget language. This means we don't have any manual compilation step: every time we type something, we go to a browser, we hit reload, everything gets compiled on the fly and we see our new output. That's the groundwork; we are all familiar with what PHP is. The second concept for setting the stage is PHP-FPM. Who isn't using PHP-FPM? A few hands, but almost everybody is using FPM. It's the only relevant SAPI nowadays. So what is FPM? We have a front-end web server, like Apache or Nginx, and if we have a PHP file that has to be executed, we forward that request to FPM. For instance, we have a master process listening on port 9000, and every time a request comes in, the master process looks if there are any child processes available, and then creates one or just forwards the request to a running child process. Something to keep in mind for the rest of the presentation: we have shared memory, and that shared memory is located in the master process. So even if you have many child processes, the shared memory lives in the master process. That is the concept of this shared memory.
So here we have an example of the master process that's always running, and every time somebody goes to our website, we have a child process running. In the rare occasion that I have multiple visitors on my site, I have more processes. So yeah, you get the concept. So now this shared memory. What is this shared memory? In computer hardware, shared memory refers to a block of random access memory that can be accessed by several different central processing units in a multiprocessor computer system. What this means is that every process has its own batch of memory, and the shared memory is shared over those processes. So if we want to save something and share it between the different processes, we use shared memory. That's the groundwork for the rest of the presentation. So, all set. My name is Joachim Kudanis. I'm a father. I started running about a year ago and I enjoy that very much, but more relevant to this presentation: I'm the co-organizer of a local user group in Belgium, I'm a coach at our local CoderDojo, where we teach kids how to program and how to interact with cool little robots and stuff, and I'm a developer at Combell, a web hosting company in Belgium. So this talk is about performance, right? Yeah, you're right, this is all about performance. But why did I create this presentation? It's because at Combell I'm part of the performance team. That's a special team which analyzes problematic websites from customers. We do a deep dive into the code of the customer, and we also have the knowledge of how our backend systems are set up, so we really know where to look and where to spot issues. Hence the quote: 80% of the performance issues have nothing to do with your server. As a hosting company we want to sell more servers, but sometimes we really have to optimize the code itself.
And a lot of that optimization is just knowing how everything works under the hood, knowing where we can tune settings, but also how you write specific code. So this is an example, a pretty famous gist, where we have an overview of the different latency numbers. At the top we have the L1 and L2 caches of the CPU, and you see that every step further gets slower: if we want to read something from memory, we are a bit slower; if we want to read a megabyte from the hard drive, we are again a bit slower; if we add a round trip to a different data center, everything gets slower still. So we want to keep everything as close to the CPU as possible. This is a more graphical visualization of that. So now that we know the things we want to change, how can we actually fix them? We see a lot of problems with disk IO. If you have a look at the Linux storage stack, at the top there is our little application, and every time we want to access the disks, we go through the virtual file system. There we have the block-based file systems that we know, the local storage. But at a hosting company we use the network file system a lot, because that's easier to scale and to back up. The problem is that we're not only doing the file system lookup, we also have to do a round trip over the network. If you follow the diagram down, you come to the block layer, and the block layer talks to the physical drives. But in our setup we use NFS, the network file system. With NFS we have all its advantages, but we get a new set of problems: instead of going from our user program through the VFS to the file system, we have to go over the network, look up the file on the NFS server's file system, and then transfer all the file information back. So a lot of the things we can optimize are located there, in stopping that overhead of going over the wire.
So if you're familiar with NFS, you have the NFS cache, which caches a lot of information on the client. But with PHP we want to be highly dynamic, we want to change our files, so we still occasionally have to cross that network layer. Now that I showed you all the theoretical information, how can we actually tackle these challenges in PHP? As the talk title suggested, we have three topics we use to tackle these problems: the realpath cache, the opcache, and preloading. So let's dive into the realpath cache. The realpath cache is a system in PHP that is used to reduce IO. We want to eliminate the IO going to the file system as much as possible, and it does that by caching all possible paths to a specific destination path. If we use Composer, we have a lot of files that we want to load, and in our application we use a lot of relative paths, like ./ and then some path. Instead of always calculating where that specific file lives, PHP will split the complete path into segments and store every segment in the realpath cache, so that it doesn't have to go to the file system every time it needs to find out where something is located. Now remember the shared memory I talked about: shared memory is located in the master process, but the realpath cache is not stored in shared memory. It's stored in the process memory. This means that every time a child starts, it has to rebuild the whole realpath cache, which isn't that bad, but if you see a lot of problems with cold starts, you just want to have at least one process always running so the realpath cache isn't cleared.
By default we have four megabytes to store that realpath cache in, but again, it's not stored in shared memory, so you have to watch out: if you have, like, 200 child processes, that's four megabytes times 200, so make sure you don't run out of memory. Before PHP 7 the default was 16K, and with the frameworks we all use, that's way too low, so the new default is four megabytes. And as I mentioned before, this helps a lot with the NFS problem that we have. So try to keep processes running as long as possible to avoid the overhead. We have a couple of functions that we can use. We have realpath_cache_get(), which gives you a list of all the files that are in the realpath cache, and realpath_cache_size(), which shows you the currently used size. There's also a realpath() function, but that has nothing to do with managing the realpath cache. It uses the realpath cache under the hood, but it just calculates the actual destination path of a specific relative path. So, a small demo. I have this file, in which I try to access three files on disk: a relative one, the ffr.php, then an absolute one, and then another example of file loading. If I call realpath_cache_get(), I first of all get my current working directory, my home directory, and you can see that it stores all the parent paths. So when another script accesses something different, most of the information is already there. If we scroll down, we see that we have the relative path, which maps to the absolute path, and then the absolute path itself with all its information. You can see that if you have a framework with a lot of files, this can get big quite quickly. But it helps a lot with performance. So, the realpath cache: a simple concept. There are a couple of ini settings: we have the size, which is four megabytes by default, and we have the time to live.
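As a rough sketch of that demo (the file name touched here is just a placeholder, not the one from the slides), you can inspect the realpath cache from any script:

```php
<?php
// Touching a file (even with a relative segment in the path)
// populates the realpath cache for every parent path segment.
file_exists(__DIR__ . '/./composer.json');

// Bytes currently used, next to the configured maximum (realpath_cache_size ini).
echo 'size: ' . realpath_cache_size() . ' / ' . ini_get('realpath_cache_size') . "\n";

// Every cached entry, mapping a (possibly relative) path to its resolved path.
foreach (realpath_cache_get() as $path => $entry) {
    echo $path . ' => ' . $entry['realpath'] . "\n";
}
```

Running this twice in the same process shows no extra lookups the second time; across processes the cache is rebuilt, which is exactly the cold-start effect described above.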
The time to live: on a server, just put it really high; it's not that important there. If you're in development and you still want to benefit from it, put it a bit lower, a small time to live. OK, so the realpath cache, not that exciting. Just keep in mind that it's there, and that you want to keep the master and child processes in check. So next up is opcache. Who's not using opcache? Nobody. That means everybody is on a recent PHP version, because opcache is enabled by default. And what is opcache? It improves PHP performance by storing precompiled script bytecode in shared memory, thereby removing the need for PHP to load and parse scripts on each request. A basic PHP request looks like this. We have the PHP code, and we want to execute that code. First of all we have a lexer, which creates tokens, and a parser, which turns those tokens into an abstract syntax tree. Then a compiler uses that tree to generate opcodes, and those opcodes go into a VM, which executes them to produce the result. As I mentioned before, PHP is fire and forget, a throwaway language, so it always performs these steps: it reads the PHP code, it lexes it into tokens, it compiles them and generates opcodes, and then it executes those opcodes. Once the request is served, all the information is discarded; everything is thrown away. So every request, we have to do that over and over again. So we have the abstract syntax tree, and the next step is the opcodes. But what are opcodes? In computing, an operation code is the portion of a machine language instruction that specifies the operation to be performed. So what does that mean in the context of PHP?
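You can peek at the first stage of that pipeline yourself with PHP's built-in tokenizer; this small sketch dumps the tokens the lexer produces for a one-line script:

```php
<?php
// First stage of the pipeline: the lexer turns source code into tokens.
$tokens = token_get_all('<?php echo 1 + 2;');

foreach ($tokens as $token) {
    if (is_array($token)) {
        // Named tokens carry an ID, the raw text, and a line number.
        echo token_name($token[0]) . ' => ' . trim($token[1]) . "\n";
    } else {
        // Single-character tokens (like '+' or ';') are plain strings.
        echo $token . "\n";
    }
}
```

The parser then turns this token stream into the abstract syntax tree, and the compiler turns the tree into the opcodes discussed next.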
If you want to have a look at what opcodes are available and what opcodes are being used by your PHP, you can go to 3v4l.org, where you can run a little PHP script over all the PHP versions that are available. There's also a little tab called VLD, which stands for the Vulcan Logic Dumper. The Vulcan Logic Dumper generates a list of everything your little script is doing and shows you all the opcodes that are being generated. Here you can see that we have an echo, and a jump: if the condition is false, it jumps to the next branch. So these are all the opcodes for this little script. As you can imagine, this can get big fairly quickly if you're using libraries or frameworks. This is just a small script to show you the opcodes, but it can get gigantic if you have a lot of files and code. And most of the time, the opcodes don't change. First of all, if you're using Composer and you have a vendor directory, all the files in that vendor directory are not changing. And if you are running a specific version of your application in production, you're hopefully not live-editing files in production, so those opcodes don't change either. And when things don't change, what do we do? We add a cache. And here is where OPcache comes into play. OPcache comes from a long history of opcode caches. Before it was part of the PHP engine, we had APC, we had MMCache, we had the Zend Optimizer. So we had a bunch of opcode caches that we had to enable manually. But as of version 5.5, Zend donated the Zend Optimizer+ to the PHP source. It came from Zend Guard, which was, or still is, a tool to encode your PHP files and then distribute them. But they took out the optimizer part and put it into PHP.
So this whole pipeline of parsing, then compiling, then executing: if we add the cache, we eliminate the first part of it. All those opcodes are cached in shared memory, so in the case of FPM they're all stored in the master process. Because OPcache stems from Zend Optimizer+, part of OPcache is the optimizer, and that specific part gets better with every new release. What it does is optimize branches, optimize the code. If it sees that there are lines below a return, it will detect that and just throw the opcodes away instead of storing opcodes that aren't actually doing anything. Also small things like the notation shown here: if you don't fix it already with a static analyzer, the optimizer will do it under the hood and store the better way of doing it in the OPcache. So if we have a look at how that actually plays out: what you see here is an if (false), so the first branch is never executed, and all the opcodes of that branch are simply not needed. What the optimizer will do is just echo true, assign something, echo again and return. You can see that this reduces the opcodes in this example by half, and if you have a lot of PHP files, it reduces the opcodes drastically. So now that we know what opcode caches are, what functions do we have? We have some functions to gather information: opcache_get_configuration(), which gives all the settings that are being used, and opcache_get_status(), which is the most important one to get all the information from; I have an example later on. You can also pass a specific file to opcache_is_script_cached() to see if it's stored in the OPcache. And then we have some specific functions to take action.
There's a function opcache_compile_file() with which we can manually take a file, calculate the opcodes, and store them in OPcache. We can invalidate specific files with opcache_invalidate(); this is handy if you're doing a release, a deploy, and you want to invalidate specific files so that the new opcodes get stored. And there's also opcache_reset(), which resets the complete OPcache, so all the cached files are destroyed. So again, a small demo. I have a small script which just does a print_r of opcache_get_status(), and that gets us a nice overview of everything that's happening with OPcache. We can see that OPcache is enabled, the cache is not full, and we have no restart pending. Then the most important part, the memory usage: you can see if the OPcache is full or nearly full. We have the wasted memory marker, more on that later. Then we have the interned strings, a trick to make our memory footprint a bit smaller, also more on that later. And then the most important part is the OPcache statistics. Here you can see that we have only one script being cached, because we're executing it from the command line. We have no hits and one miss, because this is the first time that we execute it, and we have some restart counters, again more on that later. If we execute it again on the command line, we will again not have a hit, because there's no shared memory there. And a very important one: if you execute this on your web server, you will see the OPcache hit rate. Now it's zero, but if you are running production, you really want an OPcache hit rate of 99.99, because the moment it drops, it means you have changing files, and that means you're not running OPcache in a very optimized way.
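The hit rate shown there is simply hits divided by hits plus misses. As a sketch, here is that calculation applied to a hand-made array shaped like the opcache_statistics section of opcache_get_status() (the sample values are made up; on a real server you would feed in the live array):

```php
<?php
// Hit rate as OPcache reports it, computed from a stats array shaped
// like opcache_get_status()['opcache_statistics'].
function opcache_hit_rate(array $stats): float
{
    $total = $stats['hits'] + $stats['misses'];

    // No requests served yet: report 0 instead of dividing by zero.
    return $total === 0 ? 0.0 : $stats['hits'] / $total * 100;
}

// Sample numbers for illustration: the 99.99% you want in production.
$sample = ['hits' => 99990, 'misses' => 10];
printf("hit rate: %.2f%%\n", opcache_hit_rate($sample));
```

Anything persistently below that on a production pool means files keep getting recompiled, which is the problem the rest of this section is about.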
And if you ask it to include all the scripts that are in the OPcache, you get a list of all the keys and some statistics about the specific files. So, I talked about the interned strings. This is something that's used in a lot of languages, and it's available in PHP since 5.4. It's a kind of deduplication for the strings in your source code: if the same string occurs in a lot of places, it is stored in memory only once, so you have an even smaller footprint for all the cached scripts. The interned strings buffer is part of the complete memory block that you reserve for the OPcache, and if that buffer is full, you will just use more memory than needed. Again, it's stored in the shared memory of the master. You see the keys a lot in the statistics, and those are just the full path and all the different relative paths to a specific file. So if we go back here, these are just the keys: this specific file has been compiled into OPcache and is accessible through that key. The next thing we saw in the statistics output was the wasted memory. By default, OPcache doesn't do any defragmentation; OPcache was built for performance. Let's say we don't have anything in OPcache and we start PHP. We have two files and we put them in OPcache. Then PHP sees that a file has been modified. Instead of replacing the previous entry in memory, it just marks that previous entry as wasted and adds the newly compiled file at the end of the memory block. That's done so it doesn't spend computation on cleaning up that memory. But if you have a lot of changing files, you get a lot of memory marked as wasted, and a lot of files being invalidated. And if you have a lot of wasted memory, instead of cleaning up that memory, OPcache just resets itself. It does a restart.
So we're starting from a clean slate and everything fills up again. So yeah, if files change, they cause recompilation. We have a couple of scenarios where the OPcache restarts. First of all, if the OPcache is full and we don't have any wasted memory, the OPcache will just stay full. It won't do a restart; it will try to add more files, see that it's full, and simply not add them. If we have a lot of changing files and we reach a specific percentage of wasted memory, it will do an out-of-memory restart. So if you are monitoring your sites and you see a lot of out-of-memory restarts, it's because you have a lot of changing files, and you want to do something about that. The hash restarts happen if you have too many files, because there's a limit on the number of keys that you can store. If you want to store more files than that, OPcache sees there's no place anymore and triggers a hash restart so it can start adding new files again. Again, you don't want hash restarts. And the last one is the function I showed you before, opcache_reset(): those are the manual restarts. As a hoster, we monitor all these numbers, and there was an example of a WordPress plugin trying to do an upgrade and then doing an OPcache reset because there are new files. But the upgrade was failing constantly, so we saw the loop, and we saw in our statistics that there were a lot of manual restarts. You can have a manual restart once a day when you deploy new code, or a few more if you deploy more often, but if you have 300 restarts in five minutes, there's something going on. So, I showed you the restarts. And don't ever have a full cache.
If you don't have any memory left and OPcache is trying to compile your code and store those opcodes into the cache, you will actually have done more work than before, because you do all the extra work, then check whether you can put it into the cache, and then have to discard the information. You actually added a step, which makes it even slower. So never have a full cache. Whenever we see sites that are having problems, the first thing I do is check the statistics, and nine out of ten times it's just a full cache, and things get much better if we simply double the amount of memory allocated to it. So this was the part about OPcache itself, and there's another concept, because so far we're storing everything in memory. There's also something called the OPcache file cache. Instead of, when the process starts, reading the PHP files from disk, compiling them and storing them into the OPcache, we can store the opcodes in a file, and the moment PHP starts, it just takes those opcodes and puts them directly into OPcache. This can help busy sites: if you do a reload while you have a lot of visitors, instead of compiling everything on the fly when they arrive, you can just copy it over from the files directly into the OPcache memory. It also helps with the CLI. In the demo example from before, every time the process starts, it has to read the PHP file from disk, compile it, store it into OPcache if you want to, and then throw it away. If we store the opcodes in files, the CLI can start directly from those OPcache files. We only need to add one setting, a writable directory: here we have the file cache, we want to store it in that folder, and then some extra checks. But the important one is the first one.
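The settings for the file cache look roughly like this (the cache directory here is just an example path, and the directory must exist and be writable):

```ini
; Allow OPcache when running php on the command line (off by default).
opcache.enable_cli=1
; Directory where the compiled opcodes are written as binary files.
opcache.file_cache=/tmp/opcache-file-cache
; Optionally verify checksums when loading entries back from disk.
opcache.file_cache_consistency_checks=1
```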
So if we run that on the command line, we first have to set opcache.enable_cli, because by default OPcache is not enabled on the command line, then add the file cache directory where we want to store things, and run everything. Again, it's the same file as before. And if we then have a look at the cache, we have a hash and then the complete tree where the opcodes are stored in a binary file. So yeah, that's another thing you can do with OPcache. There's a cool example in a php[architect] article where they also deep-dive into the opcodes and the OPcache, and they try to use the compiled files as a way of distributing PHP packages, so that you have something like precompiled PHP. It's a nice way of investigating OPcache; don't use it in production or to distribute your PHP code. So now that we know what OPcache is and what the possibilities are, what can we tune? Like I said before, we have some disaster recovery scenarios. Is your memory full? Is the interned strings buffer full? Is the key store full? You have to check all those scenarios by getting the status from the real FPM processes and then fine-tune everything that you see: double the memory, give it some more interned strings space. Also, if you see a lot of restarts, dive into the code and see why they happen. Maybe the application is generating new PHP files on the fly, for instance, which then have to be loaded. OPcache will give a lot of warnings if this happens, because you will see that you won't have the hit rate that we want. So we have some configuration settings that we can fine-tune. First of all, for memory, we have opcache.memory_consumption, and this is the block of memory that OPcache will dedicate for the FPM to use. I think by default it's 128 megabytes now; it depends on the application whether you need more or less.
Then part of that memory block is the buffer to store the interned strings, which you can fine-tune too, and then there's how many files are actually allowed to be cached. This also depends on the ulimits of your Linux system, but yeah, just make sure all the files that you have fit into that maximum. Then for invalidation, we have a flag where we can tell OPcache to validate the timestamps of the files that exist in cache. For instance, we have a file and we are changing the file: if we say validate timestamps on, then every time before fetching the opcodes from the cache, it will do a stat on the file system and see if the modified time is newer than what's in cache. In development, this is great. You can even set the revalidation frequency: for instance, look every two seconds whether something changed. In production, you could say that you only want to revalidate every 60 seconds, but that's a bit strange, because you know exactly when you're deploying. We mostly recommend just setting validate timestamps off, because if you do a stat on NFS, you still have to go through the complete network layer just to see if the file was modified on the NFS server. So we recommend turning validate timestamps off and then just resetting the OPcache the moment you know you deployed a new version of your site. You can also configure the threshold of the maximum wasted memory. I think by default it's 5%, so the moment OPcache sees that 5% of the memory is wasted, it will do a restart. To avoid having a lot of restarts you could put it smaller, but on the other hand, just make sure you don't have any wasted memory in the first place; try to figure out why the wasted memory was there at all. We get stats by using opcache_get_status(), and that output is good for parsing into something else to build visualizations.
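Put together, the invalidation settings just discussed might look like this in php.ini (the values are illustrative, not prescriptive):

```ini
; Development: stat files and pick up changes within 2 seconds.
opcache.validate_timestamps=1
opcache.revalidate_freq=2

; Production on NFS: never stat, invalidate explicitly on deploy instead.
;opcache.validate_timestamps=0

; Restart once this percentage of the memory block is marked as wasted.
opcache.max_wasted_percentage=5
```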
So there are a few projects that consist of a single PHP file. We can just take that PHP file, put it in our web server directory, and it will use all the information from opcache_get_status() to show you how much memory you have and how many hits you have versus misses, so you have some more insight. It's very important that this page is served from the same FPM pool that you want to investigate, because if you run opcache_get_status() on the command line, that's just the status of the command line, and it has nothing to do with the FPM. So if you are monitoring, make sure you monitor the correct process. This is an example, and here is another example where you can see the different restarts and the stats of the memory. It's all based on the same information, just different views. So, I mentioned the revalidation settings and what you can do with them. If we deploy a new version, we want to avoid all the new visitors triggering the compilation of all those opcodes and the storing of them in the OPcache. So we can do some kind of priming. For instance, we deploy a new version, and because we want to invalidate specific OPcache entries inside an FPM, we have to do a web call to a hidden or password-protected script, which in turn modifies the current OPcache. So on a push, we know what files have been changed and deployed, we can get a list of all the changed files, post that to the hidden script, and ask it to compile all the changed files. Most of the time people just do a callback to a PHP file on the server which in turn calls opcache_reset(), and everything is emptied. But if you have a busy site, you can use different FPM pools.
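A sketch of such a hidden invalidation script could look like this (the list-file path and its format are assumptions for illustration; protect the endpoint in whatever way fits your setup):

```php
<?php
// Invalidate only the files touched by the last deploy, so their
// opcodes get recompiled without resetting the whole cache.
function invalidate_changed_files(array $files): int
{
    $count = 0;

    foreach ($files as $file) {
        // opcache_invalidate() only exists when the extension is loaded;
        // guard so the script also runs on a plain CLI.
        if (function_exists('opcache_invalidate') && opcache_invalidate($file, true)) {
            $count++;
        }
    }

    return $count;
}

// Example: the deploy step wrote the changed paths to a list file.
$files = is_file('/tmp/changed-files.txt')
    ? file('/tmp/changed-files.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
    : [];

echo invalidate_changed_files($files) . " files invalidated\n";
```

Compared to a blanket opcache_reset(), this keeps the hit rate high for everything that did not change.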
Like I showed before, we have an FPM master with different child processes, and if we deploy a new version, we can create a new FPM pool, prime it a bit by hitting some pages on that specific pool, and the moment the new FPM pool has been primed, we point the Nginx or Apache that sits in front to the new port, and we have a fully primed OPcache. So we don't have any downtime, and we don't have any spikes of visitors trying to cache all the opcodes. Or we can use preloading. Preloading is something new in PHP 7.4. Preloading loads PHP functions and classes once and makes them available in the context of any future request without overhead. So magic, everything. It is basically OPcache on steroids. If we look at preloading, it is part of OPcache, because it is just an optimized way of storing those opcodes. What it basically does: if you start an FPM server and you have preloading configured, before accepting any requests it will preload your application in the way you want it to be preloaded. This loads the code into memory permanently, so we don't have the overhead of copying it from the shared memory to the process memory and then doing stuff with it. If we compare it to OPcache: in OPcache, opcodes are stored on a per-file basis. We have file X with the class Animal, and we store all its opcodes in the shared memory; then we have the subclass Cat, and we store all its opcodes in memory too. But every time we want to use the Cat class, we have to fetch those opcodes from memory, construct the complete class, and glue the classes together. What preloading does is help with all those class hierarchies: it takes the opcodes that were already generated and stores them in a way that leaves no such overhead.
What this basically means is that your own code will be as performant as the native PHP functions, because it has been stored in memory before any request is accepted. And it does that with a simple file with some loading magic. For the configuration, we only have one option, opcache.preload, where we specify the path to a specific file, and that file will be executed. That's the only thing it does: FPM starts, it sees opcache.preload, it executes that file, and once that file has completely executed, it starts accepting requests. What you want to do in that preload file is get a list of all the files you want to preload and run them through opcache_compile_file() so they are stored in the OPcache; then the script exits. From that moment, everything you loaded in the preload.php file is available to all future requests. As I mentioned, we have the Cat and the Animal file. You have to make sure you load them in the correct order, because classes that can't be linked are discarded from preloading and won't be in memory, so they will still have to be loaded every time. So, preloading in the wild: we have a couple of examples of frameworks that try to tackle the preloading problem. In Symfony, since 4.4 — there's a blog post about it — they're already using the knowledge they have in the dependency injection container to generate a file with the classes you probably want to preload, because they know all the files you're going to use. You can't see it really well, but in the var/cache directory they have a .preload.php file, and every time you compile the complete container, that file is generated, since 4.4. So if you want to start using it, you just point your preload ini setting at that file.
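A minimal preload file could look like this sketch (the directory it scans is an example; in a real setup you would point it at your application's classes, base classes before subclasses, and set opcache.preload to this file's path):

```php
<?php
// preload.php - executed once by the FPM master, before it accepts
// any requests, when opcache.preload points at this file.
function preload_directory(string $dir): int
{
    $count = 0;
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );

    foreach ($iterator as $file) {
        if ($file->getExtension() !== 'php') {
            continue;
        }
        // Compile into OPcache; guarded so the sketch also runs where
        // the extension isn't loaded (e.g. a plain CLI).
        if (function_exists('opcache_compile_file')) {
            opcache_compile_file($file->getPathname());
        }
        $count++;
    }

    return $count;
}

// Example invocation; the directory is an assumption for illustration.
echo preload_directory(__DIR__) . " files preloaded\n";
```

Symfony's generated .preload.php takes the alternative require-based approach mentioned below, which lets the autoloader resolve parent classes in the right order automatically.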
So there are some things you have to keep in mind. It is an ini setting, so it applies to the complete server. If you are running multiple applications on a single server, this won't work, because the preloading applies to the complete installation. And you don't have any function to reset the preloaded memory. Because it is a file which is executed before we start accepting requests, you have to do a full restart of the server to empty that memory. You really have to keep that in mind. Now, when we talk about all the files we use: Composer knows which files we are using, right? The moment the preloading RFC landed in PHP, there was a preloading-support issue on Composer, with a lot of excited people saying: we know which files you want to load, because we manage all the dependencies. But it's a bit tricky. Composer knows all the dependencies you require, but it doesn't really know which files you are actually using. There were some benchmarks at this link: one run without preloading, one with preloading of only the hot classes, where the hot classes are just a trick: you take the status of all the files currently in the opcache and preload only those, because Composer pulls in PHP files everywhere, but you're not using all of them. By doing this trick, you only take the files that are actually sitting in the opcache. And then they also had a benchmark where they just took all the files, by requiring the complete class map. If you have a look at the results: only the hot files is around 900 files, and all the files available through Composer is almost 15,000 files. Because preload.php runs before the server starts accepting requests, you see that the more files you put in there, the longer it takes for the server to actually boot.
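The "hot classes" trick from those benchmarks can be sketched like this. This is a hypothetical script of my own, not the benchmark's code: you would run it in a warmed-up production worker (for example behind an admin-only route) after real traffic has populated the opcache.

```php
<?php
// dump-hot-classes.php -- generate a preload file from the scripts
// that are actually cached, instead of the whole Composer class map.
// opcache_get_status(true) includes the per-script details.
$status = opcache_get_status(true);

$hot = array_keys($status['scripts'] ?? []);
sort($hot);

$out = "<?php\n";
foreach ($hot as $file) {
    $out .= sprintf("opcache_compile_file(%s);\n", var_export($file, true));
}

// Hypothetical target path; deploy this as your opcache.preload file.
file_put_contents('/var/www/app/preload.php', $out);
```

This is why the hot-class set stays around 900 files while the full class map balloons to almost 15,000: only scripts that real requests touched end up in the opcache status.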
And if you do only the hot classes, the overhead isn't that much. Same for the memory: you see that we're using almost 105 megabytes of memory just to load classes that we're not even using. And if you have a look at the requests per second, you can see that the hot-class preloading is the way to go. But despite the fact that this is a very active topic with a lot of contributions, the Composer maintainers are not really eager to start implementing it: "You're welcome to keep the discussion here as a central point, but to be clear, I'm fairly confident that in the near future we are not going to add anything to Composer. If in a year or so it turns out that there's something Composer is uniquely positioned to really help with in this preloading, then we can have another look." For now, it depends on your deployment, it depends on your application, and a lot more research into the actual performance improvements has to be done. There were some issues in the beginning. The RFC talked about opcache_compile_file(), where you have to figure out the object graph, the inheritance, yourself. Laravel and Symfony just took a file and used autoloading: they did require, so the autoloader automatically knew which files it had to include. But PHP internals focused more on the opcache_compile_file() way than on require. So in the beginning there were a lot of segfaults, but most of that has been fixed. There are still some issues in the bug tracker, but it's not that bad. I think the previous minor version disabled preloading by default on Windows, because Windows and shared memory do some crazy things. So yeah, about performance: is it worth the effort of using preloading? Well, it depends on the situation, so you have to figure it out. And you know your application best.
You know your application best, so you know which files are really important for your application, which files you want to have in memory immediately. So yeah: stick with the hot classes, don't put everything into memory. And from a hosting perspective, we're now looking at how we can use all that information and start suggesting preload files to customers: even trying to figure out the hot classes automatically, adding a preload file for them, and then seeing if we measure improvements. Even if it's a small improvement per site, with as many accounts as we have, we see a big improvement on the cluster overall. If you want to see some more reports: a lot of people are actually using preloading, and the results are mixed. The results from the Composer benchmarks, from about half a year ago, were very promising; the real-life improvements vary. Some preloading resources: we have the internals mailing list thread, we have the preloading RFC, where you can still find most of the information about preloading, and we have the pull request, if you really want to see how they implemented it. So, in conclusion, about PHP performance: know enough about the PHP internals, how the opcache works, how all the loading works. Know how your application works, and that depends a lot. Use the opcache status to get real-time information about your application: whether you have restarts, whether you are using a lot of memory, and so on. Take that information, fine-tune your settings, and then, most importantly, repeat, because if you double the memory because your opcache is full and you have a huge application, the memory could fill up again and you'd have to reiterate. So make sure you keep inspecting your performance.
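The inspect-and-repeat loop described above could start from a small status check like this. A sketch only: the output format is my own, and it must run inside the web SAPI, since the CLI has its own separate opcache.

```php
<?php
// Inspect the live opcache state (without the per-script list).
$status = opcache_get_status(false);

$mem   = $status['memory_usage'];
$stats = $status['opcache_statistics'];

$usedPct = 100 * $mem['used_memory']
         / ($mem['used_memory'] + $mem['free_memory'] + $mem['wasted_memory']);

// oom_restarts means the cache filled up and was wiped entirely:
// the signal that opcache.memory_consumption needs to be raised.
printf("memory used:  %.1f%%\n", $usedPct);
printf("hit rate:     %.1f%%\n", $stats['opcache_hit_rate']);
printf("oom restarts: %d\n", $stats['oom_restarts']);
```

Re-check these numbers after every settings change: a bigger application or a raised limit shifts where the cache fills up next.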
So, one last point, what we do at Combell: we have an Elastic stack, and we gather all the data for all our customers, so we have all the realpath and opcache statistics, and we shove it all into Elastic so that we can see what we can do about it. Like I said, it's changing constantly, so you have to repeat it a lot. So we keep monitoring, and if we see that caches are full, we try to implement auto-tuning, so that we can increase or decrease memory and even drop the load on our systems. And we're trying to figure out if we can do stuff with preloading and auto-configure preloading. And that concludes my presentation. Thank you very much.