OK, and with that, we start. So welcome to our Cooking 101 class. You are all here to make some nice beignets and other local New Orleans food. Wait, this is DrupalCon? I'm at the wrong convention, I mean. OK, yeah. No, this is the Future of Drupal Performance session. But we'll do some cooking today, and we'll soon see why. Follow me on Twitter at Fabian Franz. It would be great if you tweet something, if you would mention me, or use some special hashtag. We can find future Drupal or something like that. And yeah. So let's start with a little introduction. Website composing is really like cooking. We are cooking a website. And that means not the development of a website, but actually a page request. Like, you're going to some URL, like Google or whatever, or Drupal.org, and Drupal in the background is composing all this stuff together, different things. And in a way that can be compared to cooking, because we have different little ingredients that we're using, which are kind of our content. Then we have cooking utensils that are helping to assemble this content. And we have other things like a can opener, which is like authorization. And then we have obviously cooks doing the work and preparing the meals. So there is only one cook in Drupal, and that gives some problems, because that one cook has to wait a lot. And let me demonstrate that to you a little. So that was a visualization of latency: how latency works, how it really causes problems, and how even with caching there still is some latency, because you still have to run over there to get it. And the other thing is, we are sometimes getting the can opener to open the cream even if the meal doesn't even have cream, so we don't need the can opener. But we are getting the can opener every single time. And often it's like we run into the cellar to get the can opener every single time, regardless of whether we need it or not. And that's a lot of unnecessary work. So as I said, no one would run a kitchen like that.
You would first have several cooks, so if there's ever an ingredient missing, someone else would keep stirring so that the meal continues, and someone else would get the ingredient. And you would not put your dish to the side and run to the supermarket to get the ingredients; you would plan that a lot better. And while someone else gets the salt for you, you would prepare the dessert already. So that meal is blocked, you cannot continue with it, but you could do some desserts meanwhile. Like, Drupal could render the footer while the database is still busy, because the footer might be cached; we could send it already. And that's also where the BigPipe work comes in, kind of: whatever is ready, we are sending. It's like a restaurant where you order a huge menu, but it doesn't arrive in order, and you don't really care because you're really hungry, so whatever gets ready you are eating. And as I said, you would not get the can opener out of the cellar every time, especially not when you don't even need it. But Drupal is kinda doing all of that every request, because we load the service container, we load so, so, so many classes, and we have to load them all again, and we have to compile them, and PHP has to initialize them, and we have 242 services or something like that, so it is a lot. And then we always run, that's kinda our bootstrap process, the session initialization, the authorization, the routing, and just then can we even start delivering a web page, every time. So we always start from zero. We are caching a lot, and in Drupal 8 even more, but we kinda always start from zero to do all of that. So we always start from zero with our meals; there's nothing prepared already, there are no ready-made dishes in the refrigerator. But that in itself is not bad, it's just how the CGI model works, where you have a web server which is accepting your requests and passing them on to a CGI, like PHP-FPM or mod_php or whatever.
This is kinda how our world works, but this is not how, for example, the Node.js world works. And it just means we do potentially very unnecessary work. And it means we wait for IO a lot, especially with PHP 7, and there's no way to have multiple cooks right now; it's really just me running around the room and getting the ingredients. And yeah, now we're coming to: how do we fix that? And that's the next step. So are you ready for some really cool ideas? I hope you are. Then let's go. So at DrupalCon Amsterdam 2014, in my render cache session, I shared my vision about how this render caching could all work, and the only thing Drupal 8 at that point had was cache tags. And it was a little crazy overall, in a way. But by now, 90% of all that is implemented in core, directly shipped: BigPipe in core in 8.1, the placeholders in core, everything. So I'm just trying that again. So the first thing is, we really should try to avoid doing unnecessary work. One of my goals is to bring the performance of the Drupal 8 page cache back to the Drupal 7 level. And I've already shown it's possible. There's a special issue, a page cache meta issue, where I've profiled that and optimized kinda almost everything out of the page cache path, Symfony, everything. And then in the end, with some little tricks, it can be possible, while keeping all the goodies of Drupal 8, to be faster than Drupal 7. And because Drupal 8 is PHP 7 ready, we can be even faster than Drupal 7. So that's one of the goals. And the other thing is, we now have this dynamic page cache, which is great for caching all the authenticated-user page things. And we can bring that kinda to the index.php level again, which means do it as quickly as possible, as fast as possible, don't do much work: kinda just connect to the database, get the cache entry, check it, and be done.
And yeah, for that, the strategy or architecture I've designed is to use the same approach as how ESI would do authenticated-user caching in Varnish. And that means we are caching the cache contexts. That sounds a little crazy, but it all makes sense if you know, or remember, that in Drupal 8 the cache contexts are how your content varies: this content is only cacheable per user, but this content is only cacheable by the phase of the moon, or by some crazy elven ritual that you're doing. These cache contexts you can actually reduce to two things. All of our cache context hierarchy is planned in a way that works for simple sites and for more complex sites. You have a tree: we have url, and then we have url.path, or we have user and we have user.permissions. So everything in the end comes down to the user or to the URL, and the user in reality is just, there's a one-to-one mapping between a session and the user. So it all comes down to: we have a session, or we have a URL. And because of that, and because cache contexts actually declare what they depend on, like other cache contexts, when they are reduced in that way, it means we can cache the cache contexts per session. And that's kinda also how it would work in ESI. You would do a quick request, the user is then authorized within Varnish, and then with some special headers Drupal knows: this user has user ID 2, et cetera. And now the idea is, we're doing the exact same thing just within Drupal itself, and retrieving this page with the placeholders, and that I'm explaining a little more now. So as an example, we have this shopping cart, which is cached per user. Then we have a normal block, which is just cached per user permissions, and we have one cached per user permissions and URL. So we don't really have to deal with the normal block, because that we can just cache with the rest of the page; that's not a problem.
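For reference, this is what those declarations look like in a Drupal 8 render array using the real `#cache` keys; the concrete markup, tag names, and IDs here are made up for illustration:

```php
<?php

// The shopping cart: its output varies per user, so it declares the
// 'user' cache context ('commerce_cart:2' is a hypothetical cache tag).
$shopping_cart = [
  '#markup' => '<div class="cart">3 items</div>',
  '#cache' => [
    'contexts' => ['user'],
    'tags' => ['commerce_cart:2'],
  ],
];

// The "normal" block: it only varies by what the user may do, not by
// who they are. In the context hierarchy, 'user.permissions' declares
// that it ultimately folds down to the user (and thus the session).
$normal_block = [
  '#markup' => '<div>Admin links</div>',
  '#cache' => [
    'contexts' => ['user.permissions'],
  ],
];
```

Because both contexts reduce down the same tree toward user (session) or url, the machinery described above only ever needs those two starting points.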
But the per-user cache obviously is a problem, because if we didn't placeholder that, it would mean our whole page would be cacheable per user. And that's not what we want. So what we store in the page cache, in the dynamic page cache, then is: we store our page together with the placeholder. But this placeholder actually has a cache address, because it's cacheable. And in this case it's block 4, the block with the ID 4, and the user 2. And if you look at the database, at the cache IDs that are stored there, whenever you see a bracket, that's a cache context right there: [user]=2. But because we have our little mapping telling us that session 42 is user 2, and we already did this really quick lookup into some cache, like a local APC cache or something like that, because that's not really changing, and we get back that this is user 2, we can actually know which cache contexts we need, without having to bootstrap all of Drupal or authorize a user, by caching the cache contexts. And that means when we are now retrieving this page with the placeholders out of our cache (there are still some chairs here at the front, if you want to come up; don't run), so if we are retrieving this page with the placeholders, and we have this placeholder, and this placeholder is cached, then there's no reason why we need to bootstrap Drupal any further, because we can directly compose the page. And that's kind of the trick here. And that's also what is so great about authenticated-user caching as managed within CDNs like Fastly: you can be directly near the user, and in the future, even with service workers, you could be caching things at the client, directly at the client, and it's always the same model.
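A very rough sketch of that fast path, from session to user to placeholder lookup; every key, function, and variable name here is invented, this is only the shape of the idea:

```php
<?php

// Sketch: resolve a request to a full page without bootstrapping Drupal.

// 1. Fast local lookup: which user does this session belong to?
//    ('SESS' cookie name and apcu key format are made up.)
$session_id = isset($_COOKIE['SESS']) ? $_COOKIE['SESS'] : NULL;
$uid = apcu_fetch('session_user:' . $session_id) ?: 0;

// 2. Fetch the cached page; it still contains placeholders.
$page = $cache_backend->get('dynamic_page_cache:' . $request_uri);

// 3. Resolve each placeholder by its cache address, e.g. "block:4:[user]".
foreach ($page->placeholders as $token => $address) {
  $cid = str_replace('[user]', '[user]=' . $uid, $address);
  $fragment = $cache_backend->get($cid);
  if (!$fragment) {
    // Cache miss: only now do we need to boot the rest of Drupal.
    return boot_drupal_and_render($request_uri);
  }
  $page->html = str_replace($token, $fragment->data, $page->html);
}
echo $page->html;
```

The point of the sketch: on a full cache hit, the only work is two or three cache gets and some string replacement, which is exactly what Varnish or a service worker would do with ESI fragments.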
We have dependencies that things depend on, we have these cache contexts which are the dependencies, and we have placeholders for things that should be rendered out of band, kinda. And this is the trick that we can do within index.php: we compose the page, because it's all cached, and we are done, we just send it. And the nice thing is, with this architecture both ends of the Drupal scale profit, in the same way. The smaller sites just get faster performance for free, kinda, and the enterprise sites get easier Varnish and ESI. So regardless of which camp you are in, you will profit. And all sites, because it's using the exact same mechanism, and we can write all the tests for kinda the ESI with Drupal itself, and then when there's real Varnish ESI, it just works, because it's the exact same mechanism; there's nothing different about it. Or if you use NGINX, you would write yourself a module that would do that. And all sites could get faster response times for cached fragments, because we have a cache that's sitting so far at the front of Drupal, so early in the process, that you don't have problems with that. And with service workers, as I already said, which are kinda like a reverse proxy, like a Varnish on the client itself, we can use the exact same mechanism as with ESI: we can send the page with the fragments, then we can display it, and then we can cache the other fragments separately. So the whole system kinda works for all stages of that, without any additional work or changes. And for you as a developer, nothing changes. So service workers would profit from that as well. And the nice thing is, we can do what I have shown you almost today. There's kinda a five-line patch missing to DrupalKernel to allow pre-container middleware support.
So a Drupal request works in the following way: the request is coming in, a kernel is created, and then we are going through a middleware chain. But unfortunately, before we can start the middleware chain, because the middlewares are stored in the container, we have to load the container. And even if we load it from APC, it's still so much of an impact that it's slower than the Drupal 7 page cache was, because the container is quite big, because it has all our services, even if they are not loaded. But because we have a bootstrap container, we can kinda hard-code all of that in settings.php. We would define pre-container middlewares, and then we'd have true middleware support, and you could have that in index.php today. And with the pre-container middlewares, you could still have, in the bootstrap container, your database, your cache backends, even custom request policies for the page cache, even custom middlewares you really wanna always run, where you say: I don't need this additional performance, these middlewares need to run because they're doing essential things. But then the page cache comes in, and we can do all this composing and stuff. And we can add this little missing thing in any Drupal 8 minor version. To truly cleanly realize this, the best would be, however, to remove bootstrapping completely, to just have middlewares as a system, remove HttpKernel from Symfony (there are other ways possible, obviously), and lazy-loaded services. That means the session initialization, authorization, routing, all of that where I had to run around the room, is all done on demand. So just when, for example, the current user service is used, just then is the user authenticated.
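In settings.php, such a hypothetical setting might look like this. To be clear: the setting name and the middleware classes below are invented; this is a sketch of what the missing five-line patch would enable, not an existing API:

```php
<?php

// settings.php, sketch only: middlewares that run BEFORE the service
// container is loaded, hard-coded so no container compile is needed.
$settings['pre_container_middlewares'] = [
  // Runs first: serve fully cached pages straight from the cache
  // backend, without ever touching the big compiled container.
  'Drupal\Core\StackMiddleware\PageCacheLite',
  // A custom middleware the site owner insists must always run,
  // e.g. an IP ban / request policy check (class name is made up).
  'MySite\Middleware\IpBanCheck',
];
```

Only on a cache miss would the chain fall through to loading the real container and the normal kernel.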
So we are going even more into a service-based architecture where things are done on demand, and at every stage we know we can get into a fully booted-up state, but depending on where we currently stand, we have to do more work or less. So if everything is cached, no work at all; but if only a little is cached, we only need to do a little. And another thing we could do with this kind of model is, we could even send a page skeleton already and then BigPipe the main content as soon as we have it, like stream the main content too. And the vision in all of that, and that's so important to me, is that it should be completely transparent to the developer and themer. There should be no intrusion at all, which means it's all backwards compatible, and which means it's all possible in Drupal 8. And the sites should just get faster automatically; that's the goal. Obviously, you would still, as a developer, need to declare your dependencies properly, but you need to do this today already. So it's not really something changing. But there's more. Be ready for the future. Because we have this great render tree in Drupal, and trees have one great property: they're easily parallelizable. So whenever we encounter a new rendering context, and a rendering context is something that's cacheable, which is independently buildable, so whenever we have something where, when we come back from the tree rendering, we would do a cache set, in that case, instead of rendering it directly, we just create a promise, we push that to a queue, replace it with a so-called wait placeholder, and return. And then, before sending the page, obviously we need to process the queue. And that would all also work recursively. And then we'd have kind of an event loop which processes that queue, for example in Drupal standard just a simple worker, which might even process the queue in random order. And then the fun begins.
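The queue-and-placeholder flow just described might look roughly like this; every class and function name in this sketch is invented for illustration:

```php
<?php

// Sketch only: instead of rendering a cacheable subtree inline,
// queue a promise and leave a "wait placeholder" behind.
function render_subtree(array $element, WaitQueue $queue) {
  if (is_independently_buildable($element)) {
    $token = '<wait-placeholder:' . uniqid() . '>';
    // The closure is our promise: it renders the subtree when the
    // queue is processed, and may itself queue more placeholders,
    // which is why the whole thing works recursively.
    $queue->push($token, function () use ($element) {
      return do_render($element);
    });
    return $token;
  }
  return do_render($element);
}

// Before sending the page, drain the queue and substitute the markup.
function process_wait_queue($html, WaitQueue $queue) {
  while ($job = $queue->pop()) {
    list($token, $promise) = $job;
    $html = str_replace($token, $promise(), $html);
  }
  return $html;
}
```

A simple synchronous worker just pops jobs in (any) order; swapping it for an async runner is what turns this into the event loop described next.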
So obviously, if you have a promise, like you have one part of the tree, and then you're going down and there's something else in the tree, then this also works recursively. And then, obviously, if there are two things, one promise would need to wait on the other. But there are really big possibilities there, because now, for example with HHVM, you could just make an async queue runner instead. And then, like magic, we have asynchronous IO kind of automatically, because while we are waiting for the IO, we can start rendering another independent fragment. And maybe that's all cached and never needs the database, or it's just internal things. And this can really help performance by reducing our IO waiting. Because, as I said, with PHP 7 things have gotten so much faster that IO is starting to become a bottleneck again. And PHP itself is in the process of implementing asynchronous IO as well. They've worked so hard in PHP 7 to have proper isolation levels and everything so that it would work, and have kind of prepared everything. So the estimation, from what I've heard so far, is it might come in 7.4 or later. We'll see, but the estimations are, we are pretty sure it will come at some point. And well, someone could base the queue runner on ReactPHP or Icicle.io or whatever thing is hot next week. And in the future, we could have something really cool: guaranteed response times, which is something that CEOs around the world have been asking for since forever. So for example, you can say: if this block takes longer than 50 milliseconds, and that's our guaranteed response time for this block, then just abort the rendering. So I wait those 50 milliseconds, return a placeholder, a real placeholder in this case, and BigPipe it later. So we could have a page, and we can dynamically, intelligently determine: this block here is just really taking way too long, something has gone wrong, abort, send a placeholder, we'll send it later when it's ready.
And even without truly asynchronous IO, we could still already do something like that, by yielding back. Because we are dealing with trees, and as I said, trees have this nice property: we can just measure the time it took to render a subtree, and if at that point we are already over our limit, then we can say: rendering the rest of the tree would take even longer, so no, we are aborting, kinda. So that could happen per subtree, for example, that we are defining such guarantees. So that would not even need HHVM. Asynchronous IO would be more efficient, obviously, because without it a subtree could still take hundreds of milliseconds, and a signal obviously needs the pcntl extension in PHP. But at least we could say: we don't wanna wait the other 200 milliseconds that the rest would need, we've now gone over. That would not be a hard guarantee, but it means once we are over, and we are back in control at some point where we can set a placeholder, we really can abort, and we can do that at any such point. And if we add this kind of abstraction, then the implementation does not really matter at all; as I've shown, you can use the hot thing of the week. And one more little explanation around the placeholders, because it's a term, like "context", where everyone understands something different, and it can be defined in so many ways; that's why I used "wait placeholders". Normal placeholders in Drupal are replaced as late as possible, and they are cached, because the reason we placeholder is that we want to remove dependencies, like this tree being dependent on the current user; that's why we are doing a placeholder and kinda doing out-of-band rendering. Wait placeholders are different, because they are replaced as early as possible and never cached, because the only thing we wanna do is remove the waiting time. So wait placeholders are replaced before the cache set happens.
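The elapsed-time check at subtree boundaries could be sketched like this; the helper names are invented, not core API:

```php
<?php

// Sketch (invented helper names): a render-time budget that is checked
// whenever we are back in control at a subtree boundary. No async IO
// and no pcntl signals needed; the trade-off is that we cannot
// interrupt in the middle of rendering a single subtree.
function render_with_budget(array $tree, $budget_ms) {
  $start = microtime(TRUE);
  $html = '';
  foreach ($tree as $key => $subtree) {
    $elapsed_ms = (microtime(TRUE) - $start) * 1000;
    if ($elapsed_ms > $budget_ms) {
      // Over budget: abort here, emit a real placeholder for this
      // subtree, and BigPipe the content later when it is ready.
      $html .= make_bigpipe_placeholder($key);
      continue;
    }
    $html .= do_render($subtree);
  }
  return $html;
}
```

This is the "soft guarantee" variant: the budget can be overshot by one slow subtree, but never by the whole rest of the tree.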
So that's again why I'm saying it's completely transparent to the developer; there's nothing in there for you to deal with. And there's another nice property of our render tree: because we have #cache, and that can obviously return something cached, whenever we have #cache we can safely return just markup and attachments, because that's exactly what would happen on a cache hit. And this is where we can do those replacements easily. And again, we could do this today. We would just need to replace the renderer in a module, thanks to the decoupled architecture of Drupal 8, and play around with that. Isn't that exciting? I find this really exciting. So the vision for Drupal 9 here would be that Drupal 9 would have at its heart an event loop, like Node.js or Golang or whatever. And again, we want this to be completely transparent to the developer or themer. It should just get faster automatically if the capabilities are available; that's the goal. And we could change Drupal 9, and learn as a community together to become as async as possible, as soon as possible, and switch over to the paradigms that most other languages have, and win back those people that left for Node.js. But there's even more. So: workers everywhere. Because now that we have our nice little queues and things that are independently renderable, why not push the work to a dedicated rendering farm? For that, we need to know which theme we're in, which user we have, which session we have, which route, which URL. In short, we need to know the context in which this should be rendered. But we do have this information, because that's exactly how that content would be cached and how that content varies. And those are our cache contexts, actually. And those you are hopefully all declaring anyway if you're working with Drupal 8. And we would need a way to quickly set that context; that's not available right now, because, as I said, we do all this bootstrapping stuff. You see, it's all coming together in the different parts.
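Replacing the renderer from a module really is possible today; this uses the actual Drupal 8 service-modifier mechanism, with a hypothetical module named "mymodule" and a hypothetical replacement class:

```php
<?php

namespace Drupal\mymodule;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceModifierInterface;

/**
 * Discovered automatically by naming convention:
 * {Modulename}ServiceProvider in the module's namespace.
 */
class MymoduleServiceProvider implements ServiceModifierInterface {

  /**
   * {@inheritdoc}
   */
  public function alter(ContainerBuilder $container) {
    // Swap core's renderer for an experimental queueing renderer
    // (Drupal\mymodule\QueueingRenderer is an invented class name).
    $container->getDefinition('renderer')
      ->setClass('Drupal\mymodule\QueueingRenderer');
  }

}
```

Because everything else talks to the `renderer` service through its interface, the rest of the site does not notice the swap; that is the decoupling being praised here.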
But as soon as we have that defined, what we can then do is pre-generation of content. So for example, we could have a heuristic that looks at our 100 most active users, and we pre-generate content so they have a faster experience and never have to wait for anything. And our heuristic just iterates over the most used blocks, for example, or the most used content, nightly. And then we can pre-generate it, because we have the full context of how to generate that content. Because we have that available, and we can record it, because we have a queue and we have independently built things. And that's why, when you are building modules, use lazy builders. They are great. And we can do regeneration and serving of stale content. So for example, remember our block we guaranteed the time for: we could either send a block that kind of says "timed out", or we could send some stale content, and once the new content is available, just send that. So there's huge potential in Drupal, because we never delete caches at the moment, except for the database cache, where we add some sort of least-recently-used expiry. But in general, for all things that have expired, where you get a cache miss, you can say: I want invalid content too, and you can get the old content back. And that's great. So there are huge possibilities there as well, especially in combination with BigPipe, where you're saying: OK, I still have an old version of that, I send that, and I don't care that the user sees a little blip of change, like their list of friends is there, and then the order changes a little and some other friends appear like 100 milliseconds later, but the user directly has a fast experience, and that's so much more important. Or you could send the outdated content together with a special class which would gray it out, so it's clear: it would look like this.
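A lazy builder declaration, using the real Drupal 8 render API; the service name and the cache tag are hypothetical:

```php
<?php

// Real Drupal 8 render API: declare the expensive, user-specific part
// of the page as a lazy builder, a callback plus scalar arguments.
// "mymodule.cart_builder" is a hypothetical service name.
$build['cart'] = [
  '#lazy_builder' => ['mymodule.cart_builder:build', [$store_id]],
  // Render it out of band (placeholdered), so the surrounding page
  // stays cacheable without the per-user variation.
  '#create_placeholder' => TRUE,
];
```

Because the callback plus its scalar arguments fully describe the work, exactly this pair is what a pre-generation worker or a rendering farm could replay later, in the recorded context.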
And that's a little like Instagram then, those previews of the images where they just send the dominant color already. And obviously we could have workers in the rendering farm, but to make those workers efficient, we need long-running processes. And that in itself is a paradigm shift I'm proposing here, because many things in Drupal assume that a request is short and everything is reloaded. And that's not the case for other systems; not the case for almost any other system. But already now we have long-running drush queue runners, so it's not a completely new problem. And we can do baby steps. And thanks to Crell's vision (thank you, Crell, he's here), most services we have are stateless, and static is almost gone from the code base. And that's so great, because he worked so hard from Drupal 7 to 8 to make that a reality, to change the community, and not only to bring the object-oriented programming that you might love or hate; what all of this gives us is the possibility to be ready for the future. And he has been making that happen since 2010. That's amazing. And the rest, what we still have of static state, we should put into a DrupalStatic service, in my opinion. So we could have some kind of scoped state, or at least resettable state, in the future, because drupal_static() in Drupal 7 has one big advantage: you can just call drupal_static_reset() and your system state is fresh again; but if lots of classes hold their own state, that's not so easily possible. So we could, for example, start as a baby step just with a loop in index.php that clears all static caches in between accepted units of work. And actually we need that anyway for unit testing, because we don't really want state there. And it creates a new kernel every time, but at least all the classes are loaded already, and that already is huge, because the class-loading overhead, while not much, is something.
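That baby step might look roughly like this; everything here is an invented name except drupal_static_reset(), which is the real Drupal API:

```php
<?php

// Sketch: a long-running loop that accepts units of work, then resets
// static state so the next unit starts fresh, without paying the
// class-loading and container-compile cost again.
$queue = get_work_queue();                 // invented helper
while ($item = $queue->claimItem()) {
  $kernel = create_kernel();               // cheap: classes already loaded
  $kernel->process($item);                 // invented entry point
  // Real Drupal API: resets every drupal_static() cache in one call,
  // so no state leaks from one unit of work into the next.
  drupal_static_reset();
  $queue->deleteItem($item);
}
```

Creating a fresh kernel per item is the conservative middle ground: state is thrown away, but the expensive one-time costs (autoloading, opcode cache warm-up) are kept.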
And HHVM, for example, does something similar. It has like a warm-up phase, and once it's warmed up, it knows the most frequently used classes and has those kind of preloaded in some special memory segment. And then we could write a list of all the different services which already now are 100% stateless (thanks to Crell), and, for example, just create a container with just those services already preloaded, and then we just need to clone this container, and cloning is a really fast operation in PHP, and use that kind of as our baseline; and then just the services we don't have loaded yet we have to load. But if that's still not enough, we can be even more crazy and declare cacheable metadata, like cache contexts, on the services themselves. So for example, you have a worker, one request for user 1, one for user 2, and you switch around, and the worker again gets one for user 1, one for user 2; then we could in theory just reuse the current user service, for example, but stored per user in a different cache bin. And again, this comes back to the model that the work that is coming in is coming in within a certain context, and this context, if we define it well enough, will be enough to determine all the dependencies of our system.
And even more crazy, we could scope the statics too, with dependencies declared; so not the services themselves, but caches of them. But those are just some ideas, and there are obviously, definitely, race conditions with static caches, because if you have long-running processes, that's somewhere where, if you have some cache, you can run into cache race conditions. But as we said, with queue runners you can run into the same race conditions already now, so it's just something to keep in mind. And that's what I wanna present to you now: a paradigm shift, where we stop thinking of websites as being request-response and booting up everything, and start thinking of them as applications that are serving our users, that are serving our clients; and that all of that which so many people are doing in JavaScript, we get in Drupal, as one possible way to access the system, and we build our system to be ready for that. So to summarize again, and that's kind of the vision here for Drupal 9: we need setting and retrieving of the request context, we have long-running workers, and we have resettable caches and services. And again, it should be completely transparent; it should just get automatically faster. That's the goal. And that could be the promising future of Drupal, or the future of Drupal performance. But again, we could start today. We can experiment with all of that now. We should even experiment with it now, because when the world is ready for asynchronous service workers, event loops, and finally going away from CGI for PHP, which I think will at one point in time happen, because the whole rest of the world has moved away from it, then Drupal will be ready too. And that would be pretty sweet. Thank you. So now we have some discussion, because it's a core conversation, so you can ask questions if you didn't understand something, but you can also say: well, this is crap, you really shouldn't do this, or you should do this differently, or things like that. We are the community.
Fire away. Microphone, please.

So my question is, with the long-running processes, my understanding is that one of the barriers to that is that Apache is based on the short request-response cycle. So how do we get around that, if taking advantage of it would mean major changes to what hosting options are available for the Drupal ecosystem?

I can talk to that. So first of all, as I said, I want a slow transition of everything, so baby steps; this is one possibility, but not the only one. The boot-everything-up approach would still work, it would just be slower. To take advantage of it: I think what will at one point happen, and what has happened both for Golang and Node.js et cetera, is that PHP will have a first-class web server in itself. So just like you run a Node process, you run a PHP process. That's one possibility, which then has the big event loop, receives requests directly, and creates services out of each request. Another possibility is that you have your normal Apache web server, and you even handle some of that Drupal request normally, but then you push off work, which you either know or have declared to be a little longer-running, or other things, to a drush process. And there are different services available which might be performant; I played around in 2009 with Gearman to do some really quick Ajax requests for anonymous users, and that already worked pretty well, because I got like 50-millisecond response times from that. So there are definitely possibilities in using several of the other available queue systems. And obviously it's always a trade-off: how much infrastructure do we want? Do we want to just run a PHP process somewhere? But then, what about all this plumbing that infrastructure people hate about Node.js, that it's just one process and there's no one checking that it's still running, et cetera, which I have with Apache? But I think those are all future questions, and the
important thing is kind of making Drupal ready so that it can be run as an event loop, but doesn't necessarily have to be.

OK, so I'm a technical SEO by trade, and I kind of find my tagline being "feed the bot", right? I want to feed the Google crawler. And I think something people need to understand is that Google, when it comes to your site, has a budget of time that it's gonna spend crawling. So all of this that you're talking about, caching for the user, caching in general, it just is amazing to me, and thank you so much. And I hope maybe that's one of the baby steps, to help: if we serve a page that has related content, it would be fetching all of that even if the page is cached. But I really like where you're going with this, so thank you.

Thank you.

Hi Fabian, it's Peter Wolanin. Just thinking about your steps that maybe we can go forward with in Drupal 8: is there a way to use even the Drupal queue system and then couple in some background processing? So I'm just wondering, we again need a system where we can basically use Drupal as-is, where no background processing is possible, but accelerated if it is. And have you thought about whether there's a way we could use a Drupal queue, so basically your request would have to process its own queue entries if there's no worker, but if there is a worker in the background, then basically all that work would be done by the time it's needed?

So, yes and no. I would not push that to a Drupal queue database backend, because that would be too slow. It should be an in-memory queue by default, and then you can push it back to a worker. And that's why I wouldn't use the queue API, because I don't think that queue is particularly suited to this use case, because what I really want to have is these JavaScript or Guzzle or whatever abstracted promises, where I'm really just having a "when this is ready, just inform me" kind of model.
I don't care when it's ready, but when it is ready, give me the callback that it is now ready, and then we can continue with that. I don't think the Drupal queue is suited to that. On the other hand, for running the workers and testing this out, the Drupal queue could still be used together with a Drush queue runner; that would definitely be no problem. I doubt it would be performant enough for many things at the moment, though. I'm thinking more of something like a socket where we post the data, because most lazy builders, and that's why you should use lazy builders, are very small, having declared their dependencies, and then we just need to be as efficient as possible in declaring our request context. If you do that, it should be a really simple payload. But yes, we could still do all of that in Drupal 8; it would just maybe need some more layers. It could even be the same API as Queue, but for that we would need to see whether the Queue API is sufficient.

Hi, I'm Les Lim. I came in late, so apologies if you've covered this, but are there implications for debuggability, for long-running processes or across the entire system, with things that are deferred to workers?

That's a very good question. I've not yet thought about how debuggable long-running processes would be. In most cases you would probably just attach your debugger normally, so that should still work, but I've not thought it through. My take is that sometimes you would use this in development, but usually you would not; it would be more of a QA thing, and it should be nothing that interferes with your development, unless you specifically want to test that case.

Also sort of related to that: profiling tools, have they caught up to the concept of having a long-running process as well?
Yes and no. With XHProf you can run xhprof_disable() at any point in time, and that in theory should reset its state; at least from my memory of the XHProf code base, when you run xhprof_enable() it resets all its variables, so you start fresh and can start at any point in time in the call stack. So that should be possible. And obviously, if you're using XHProf's sampling profiler, you could just write out some samples. But if it turns out it's not suited for long-running processes, I don't think it would be very hard to change; I think there are about five variables you would need to re-initialize to get XHProf running again, for example.

Thanks. Just to build on the previous question and something you talked about earlier, Fabian: if you're writing code that consists of proper stateless services and pure functions and so on, the big advantage is that you can debug those independently of whether they're in an async environment or not, you can test them, and they will work the same way whether we do long-running processes or async or native language async or whatever else. We want to have 90% of our code base written in such a way that we can figure out which one of these we actually want to use in three years, in five years, and maybe have two or three different versions of it. So, starting today, actually, starting two years ago, write code that makes no assumptions about its context or its runtime environment, and then it'll just work with any of these and you don't have to think about it again. That's the reason you write code that way: so that you stay flexible for all of these options.

Yeah. And a stateless service means avoiding statics like the plague in your classes, because your service should be recreatable at any point in time, kind of.
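A minimal sketch of that difference (plain Python with invented names, just to illustrate the "no statics, state lives in injected storage" point, not any actual Drupal API):

```python
# Anti-pattern: hidden static state survives across requests in a
# long-running process, so request B can see leftovers from request A.
class BadCounterService:
    hits = 0  # class-level ("static") state

    def record_hit(self):
        BadCounterService.hits += 1
        return BadCounterService.hits

# Stateless variant: the service owns no state of its own. Whatever must
# persist is read from and written to injected storage (in Drupal terms,
# the database), so the object can be destroyed and recreated at any time.
class StatelessCounterService:
    def __init__(self, storage):
        self.storage = storage  # injected backend, e.g. a database wrapper

    def record_hit(self):
        hits = self.storage.get("hits", 0) + 1
        self.storage["hits"] = hits
        return hits

storage = {}  # stand-in for a consistent, persistent backend
a = StatelessCounterService(storage)
a.record_hit()
b = StatelessCounterService(storage)  # the service is "recreated"
second = b.record_hit()  # continues correctly: the state lived in storage
```

Because the second instance picks up exactly where the first left off, it makes no difference whether the service lives for one request or for thousands.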
It's like writing iOS apps or Android apps, where you can be suspended at any point in time and recreated; the user just closes your application, and you are responsible for holding on to all of that. If you store state, store it in something consistent, which in our case is always the database.

Next question. Discussion? Some more cooking?

Can you please talk a little more about what we can do now in terms of the initialization of the container, the services, and all this stuff? As far as I remember, you reported in the issue that it now takes Drupal about 100 milliseconds to do all that initialization, and in the presentation you said we could try to get rid of all of that entirely. Are there any steps we can try right now?

Anything I've talked about, we can try now. To be a little more practical: currently, dynamic page cache, due to a design decision, works with routes. But there's no problem in writing a middleware, sorry, an event subscriber, that runs before the HTML subscriber and stores the page, based on the route, with all the placeholders still intact: just push it into the cache. Then you just have to hack your index.php, initialize only your database, do a query to fetch that cached page, and check whether you can fulfill all the placeholders. That would be a fun service to write as a first step. If you can fulfill them, just return the page and you're done. Try it out and see how many security issues you get.
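The shape of that first step can be pictured with a toy Python model (all names here are hypothetical, standing in for the event subscriber plus hacked index.php described above; it is not Drupal's actual cache or render API):

```python
# Page skeletons are cached per route with placeholder tokens left intact.
# On a hit, only the placeholders are rendered, so the full bootstrap and
# render pipeline is skipped entirely.

page_cache = {}      # route -> HTML with placeholder tokens still inside
lazy_builders = {    # placeholder token -> cheap callback that renders it
    "<!--placeholder:user-->": lambda ctx: f"Hello, {ctx['user']}!",
}

def cache_page(route, html_with_placeholders):
    page_cache[route] = html_with_placeholders

def serve(route, ctx):
    html = page_cache.get(route)
    if html is None:
        return None  # cache miss: fall through to the full bootstrap
    for token, builder in lazy_builders.items():
        if token in html:
            html = html.replace(token, builder(ctx))
    return html

cache_page("/front", "<main><!--placeholder:user--> Welcome back.</main>")
```

The interesting property is that the expensive part (building the skeleton) happens once, while the per-request work shrinks to filling in whichever placeholders the cached page still contains.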
Yeah, there is obviously a risk, especially for authenticated users. But on the other hand you could also have semi-anonymous pages which are personalized, and I think we are going more in that direction, based on the session. How cool would it be if your page depends on the session, but only this one little part of it does, and you could still have a page cache running as fast as Drupal 7's while keeping your nicely personalized page? Or cache it for ten seconds, so only every hundredth request has to actually boot up Drupal; microcaching is also a hot topic that could be done. Or, as I said, with BigPipe we keep the request open and stream everything else out already, so why not take the time after the user has everything and do some more work to regenerate some of the content you've just delivered, keeping that content hot?

Next question.

Obviously this is super exciting because of all the work we've done to enable this in Drupal 8, but the elephant in the room is: how do we still allow people to install a module on a live site? Anything that rebuilds the container is going to be very problematic for a long-running process. Are we going to just leave it to the sites themselves, saying "well, if you do that, you're going to have to do something about it", or are we going to have a front end and a back end?

Let's put it like this: if you install a module while a request is in flight, say you have a slow page with a little callback out to some web service that happens to take 20 seconds today, and meanwhile this request is serving this user and someone else compiles the container, we have that race condition today already, so that's not a new concern. For the long-running thing, what we can use, and obviously that's another database hit, and that's where I really want to go, is atomic counting.
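One way to picture counter-based invalidation (a toy Python model with invented names, not Drupal's actual cache API): every cache entry records the global version it was written under, and bumping the counter outdates everything at once without touching individual entries.

```python
state = {"version": 1}   # stand-in for an atomic counter in the database
cache = {}               # key -> (version written under, value)

def cache_set(key, value):
    cache[key] = (state["version"], value)

def cache_get(key):
    entry = cache.get(key)
    if entry is None or entry[0] != state["version"]:
        return None  # missing or stale: treat it as a miss
    return entry[1]

def install_module():
    state["version"] += 1  # one write invalidates every cached item

cache_set("front-page", "<html>...</html>")
before = cache_get("front-page")   # hit: versions match
install_module()
after = cache_get("front-page")    # miss: everything is now stale
```

The appeal is that invalidation costs a single increment, regardless of how many items are cached, which is exactly what a long-running process needs when a module install changes the world underneath it.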
Okay, the database is not fully atomic, but it's atomic enough for our needs. It means that when we install a module, we increase a counter in the database; everything that was previously cached is then outdated, and we completely restart our long-running process. That's a way to synchronize it. Currently we use timestamps for this, but I would like to move away from that. You may know that in Drupal there's a fast chained backend: you have the consistent backend, which is the database, and you have APC on several web heads, which is the inconsistent backend, and whenever there is any write, all the caches of that bin are invalidated, because of this timestamp in the database. We can use that same model of versioning things for many more things. I do think that when you deploy code, you should increase your version identifier, and then everything is automatically outdated; when you enable a module, we increase this counter, and everything is automatically invalidated. That's how I would approach the problem: atomic counting.

It'll be interesting to see that work in practice, because, as you're well aware, we had significant issues with all of this: basically, the whole Symfony kernel is not built to have the code that is loaded change while your live site is actually working. I think you make it sound nice, but we would then have to stop allowing modules to do certain low-level things, like providing middlewares that run before the container; we might have to stop those from doing that, and have some other form for low-level things that works separately, outside the module installation, so that modules which do these really low-level things don't create databases and stuff like that.

Yes, for sure, especially if you want to do it officially. I mean, as a contrib project, every site owner can decide whether it's worth the risk or not, and they know which
request policies they have defined in the container, and we could even warn or throw a big exception if there's a mismatch between the bootstrap container and the other one. Obviously there is some flexibility that we are trading for the speed, and some modules wouldn't be able to do anything about that cache, but on the other hand, flexibility versus speed is always a trade-off. But yeah, I agree: if we put some constraints on it, it might be simpler to implement.

So, some of these performance improvements are very, very compelling for speeding up the public side of the website, and I'm wondering if you could talk a little about how we could apply that on the admin side of Drupal sites as well. My understanding is that, the way things work now, all the improvements we've made for caching are not really applied on the admin side, which means that if you've got a fairly big Drupal site with a fair number of modules installed, navigating the admin side of the website can be a very, very, very slow process. So how can we take advantage of some of these things to make that part of working with Drupal a lot faster?

That's an interesting question. The admin side has one big problem: there is usually a much bigger security concern about admin-accessible pages than about other things. So even though we could, for example, cache forms and deliver them really fast, we would have to be much more cautious in doing so, and would at most allow caching per URL and per session, because anything more granular I would personally consider too much of a security risk. That said, slowness on the admin side comes from several things, in my experience. One is that sometimes there is unnecessary work: for example, a block is placed somewhere, in a region that does not exist, yet in Drupal 7 such blocks were sometimes rendered anyway, because they were enabled for
that theme, but they weren't even there, so it was just unnecessary work. So again: avoid doing the work. The other thing I would like to see more of, however, is render caching of forms, tackling all that form stuff. I would like to see this as, again, a kind of three-step process. First, we change our form elements to be true objects, and we also make them available, for example, to JavaScript to manipulate directly, so our widgets are no longer just things that happen to be there; in the end the form is just an array of objects, and each such object can be cached under certain conditions much more simply than the whole form. Also, and that's the approach I think Drupal 8 and 9 are by now designed to work with, we can introduce new layers. We could introduce a new kind of Form API, for example one built on these widgets, a Form API which has some constraints: you cannot do certain things, but if you follow those constraints, and if you, for example, register a URL for your form, or register a callback for your form, then we can shortcut certain things and we can cache your form. With all the other things it was a quite long process, but we can do the same for forms too. Forms are not un-cacheable; we proved them cacheable in Drupal 7, even in Akamai, replacing the dynamic pieces purely on the JavaScript side. So it's possible, I know it's possible; it's just work that needs to be done, and someone being crazy enough to say "I really want to cache those forms and I'm just going to try it out". There will be 30,000 test failures at the beginning, but you tackle every failure you're getting and work through them. And as I said, it might end up being a new form class, CacheableForm or something like that, which has certain constraints and certain parameters for being cacheable, because currently you can pass a node to a form as a parameter; how would
you recreate that? It's impossible. So forms need to be lazy-buildable, they need to be independently buildable, they need a clearly defined input and output, and a clearly defined state which is stored somewhere. Once you have cleaned that up, forms become cacheable, and then the admin forms too. Forms are probably like 90% of all admin pages, plus listings, and listings already have at least some caching via Views, so forms are probably the biggest chunk left, and we can tackle that.

I feel like this would play really nicely with an Ajax-based system, where you could jump right into your placeholder system and say "give me this when you have it", and you don't have to reload whole pages. So all the groundwork you're laying here could also be used for that kind of stuff.

Yes, exactly. In Drupal 8 you have a render strategy service available, which by default is a chained render strategy service, so you can define your own render strategy. You could just have a dummy block and a render strategy that replaces it with an Ajax thing, and you're done, or with something purely rendered by JavaScript. So you can define a custom render strategy that could even play really nicely with Ajax. Oh, and there's even something we forgot in BigPipe in 8.1. I've talked with some people here at DrupalCon and told them my initial prototype had a little more, and while explaining it I realized I had totally forgotten about it: at the moment, when a block is a placeholder, it gets big-piped when BigPipe is on. But what we can now do nicely and easily is put in a cached render strategy, so that for all the placeholders that are already cached, when BigPipe is on, we just replace them directly and don't BigPipe them. If it's cached already, why should we do further rendering of it? No need.

Yeah, and there's other really, really low-hanging fruit in core if you want to make Drupal 8 faster and don't want to
tackle any of that complex stuff. There's some very, very low-hanging fruit: at the moment, if you have a list of 10 blocks, we do a cache get for each one, whereas Drupal 7 has a cache get multiple for that. You can implement that today, any one of you can work on it, there's an issue for it in the queue, and it could be a pretty nice chunk of performance for free. The only reason neither Wim nor I has tackled it is that we've been so busy making all the changes that are BC-breaking within the Drupal 8 cycle. But yeah, there is low-hanging fruit that can give tremendous performance improvements just for free. And as I said, the placeholder system is not yet completely implemented the way I originally envisioned it; I even had it so that something that's not a lazy builder could still be placeholdered, with some little tricks.

Any further questions? I don't think time is up anyway, right? Okay, then thank you so much. Tell me how it was, evaluate the session; people get to evaluate the cooking only very, very seldom.