Let's go ahead and get started here. How's the first day of the conference for everyone so far? Good? Good? All right. So, this session: Drupal in 2020, because I'm feeling ambitious. My name's Larry Garfield. You may know me online as Crell. If you want to make fun of me during this session on Twitter, that's where you do so. I highly encourage it. Does anyone not know who I am? So do I need to bother with the slide? You. So, for Peter O'Lannon: my name's Larry Garfield. I'm a senior architect with Palantir.net. We're a Drupal agency based in Chicago. Drupal 8 Web Services Initiative lead, Drupal representative to the PHP Framework Interoperability Group, advisor to the Drupal Association, and a walking implementation of PSR-8. Only some people are going to get that joke here. Tell me about you real quick. Who would call themselves a core developer? Only a handful of people. Contrib developer? Who has worked in a web server language other than PHP? Decent number. OK. All right. Who's here just to make fun of me? Peter O'Lannon again, I figured. So, Drupal 8. Kind of a big deal. We've been talking about it for a long time. It's going to be a great release. And this is something we should be proud of. I'm going to say a lot of things in this session, but I want to make something clear: nothing I say in this session should be taken as a criticism of the people working on Drupal 8, who have done an amazing amount of work. Those of you who have contributed to Drupal 8 development, can you stand up for a moment? So let's just get that out of the way first. That said, as Dries said in his keynote (incidentally, I wrote this whole thing before the keynote happened, so I'm not updating it for that): dear god, let's never go through that dev cycle again. It's way too long, way too stressful. And probably the most relevant problem is that we spent four years not building great new functionality. We spent four years playing catch-up.
Drupal 7 was well behind the curve for where PHP was, even when it was released. Drupal 8 has leapfrogged about eight or nine years forward from where we were, which means we're only about two years behind now. What do I mean by behind? Well, WordPress had a decent REST API years ago. I don't even remember when they released it, but it's been a long time now. Joomla had a release in 2012 that was fully responsive out of the box. These are things that Drupal 8 is finally adding, but we're the last to the table on these. There's a tweet from someone, retweeted by one of the content strategists I follow who works with a lot of different content management systems, about EPiServer, one of the big proprietary CMSs that some people are very, very fond of. But this is all boring. All this stuff we're so proud of, that we spent a ton of time on and did an amazing amount of work to do, is boring functionality for an awful lot of the market. This is the barrier to entry. This is the cost of entry. Quite simply, the market is moving faster than we are. And we need to be able to move faster, because the technology market and the tool chains are moving even faster than that. Just since we started development on Drupal 8 in March of 2011 at DrupalCon Chicago, the entire Composer and Packagist revolution has happened. The PHP Framework Interoperability Group went from a small group of losers who had exactly one spec out, called PSR-0, to an actual force for good in the PHP community. I count myself in both groups. We had Symfony 2 released in that time. Symfony 2.0 came out after we started Drupal 8's development. It's been that long. And that was one of the kickstarts for the PHP renaissance of the last several years. Flexbox wasn't even a thing; it was a glimmer in someone's eye when Drupal 8 started. PHP 5.4 was released, deprecated, and retired since Drupal 8 started. And 5.5 is already on its last legs.
And 5.6 is the current stable version. And I think, looking at the calendar, PHP 7 is gonna come out before Drupal 8 does. Internet Explorer 9 was released during the Drupal 8 dev cycle. As was 10, as was 11, as was Microsoft Edge. Microsoft released four web browsers in the time it took us to produce Drupal 8. We need to get out in front of this. We need to not be playing catch-up. We need to be thinking forward. We need to be looking at not where we need to be to catch up, but where we need to be a couple of years from now to be ahead of the curve. How do we get ahead of these technology changes? Or set ourselves up so that we can get ahead of them as soon as we notice them? That's the challenge. Now, the new release schedule for Drupal 8 is going to be a huge help. This is going to make it easier. This ties into the feature branch work that Dries was talking about in his keynote. But this has been in the plan for two years now; we settled on this back around DrupalCon Prague. So when 8 comes out, we are not opening Drupal 9. We open 8.1, which is non-API-breaking additions. And so we can add functionality and refactor things without breaking APIs, so we don't need to wait four or five years between releases, which is a good thing. But that's not the entire story. That's not going to be the entire solution for us. The question we need to be asking is: where does Drupal need to be in five years? Five years from now, when we're at DrupalCon I-don't-know-where, what will the market look like? What will our customers be asking for? What will the technology stack look like? If we want to say we're a state-of-the-art platform in 2020, what does that even mean? And when we figure that out, how can we get to it as quickly as possible? How can we start moving in that direction now, incrementally, so that we don't have to wait five years and then release another version and hope we got it right? So, all right, what do we need in five years?
So I consulted a crystal ball on this one. Unfortunately, it didn't work out so well. So let's look at some trends, and I'll make some educated guesses. So what's the market looking for? Well, who's read Dries's blog? Okay, it's worth reading for the rest of you. This is one of his recent really big posts, "The Big Reverse of the Web." He's been talking about this one all year, in fact. "I believe that for the web to reach its full potential, it will go through a massive re-architecture and re-platforming. The future of the web is push-based, meaning the web will be coming to us." His basic thesis here is that instead of you going to a website and looking up information, more and more sites will track you and push information to you: through notifications on your phone, RSS feeds, emails, technology that doesn't exist yet. The kind of stuff that Facebook and Google and Apple and Amazon are already doing, everyone's gonna need to do to be competitive, and we need someone who's not a multi-billion-dollar corporation who can pull it off. So we need to be able to do the push web, and we need to do it for the sake of the open web, because I refuse to accept a future in which no one is allowed to do web development who's not a billion-dollar corporation. So we need to exceed their user experience and take back control of our data while still offering this kind of functionality. The way to do that is loosely coupled architectures with a highly integrated user experience. Loosely coupled architectures with a highly integrated user experience. The user should not be able to tell that it's a loosely coupled architecture, but we're gonna need a loosely coupled architecture to be able to pull off this kind of functionality. Another trend: I'm seeing a resurgence of decoupled CMSs. Note: a decoupled CMS, not headless. I don't mean headless; I mean separating the editorial process from the presentation. Now, this could be static site generators, which are the oldest variety.
Those have seen a resurgence in recent years with Jekyll, Sculpin, other tools like that. You could also have configurations where you have some kind of view-only application: your editorial CMS dumps data to some intermediary server, and then there's another application in front of it that's just doing read-only serving. It's doing lots of complex stuff with the data, but it's read-only, which means it can be a lot faster, a lot more optimized. I've actually built systems like this using Drupal. It works; it's a little bit clunky. Théodore Biadala (nod_) was just up here two hours ago talking about that being the correct way to do headless, essentially. You can do twin installs, where you actually have an editorial Drupal and a front-end, production Drupal and synchronize content between them somehow. I know Dick Olsson, is he here? Yeah, you've done this kind of stuff. (Audience: "I'm weird.") What's that? He's weird, yeah. But there's a market need for this. I keep getting clients asking me for a content staging server. I usually push them towards Workbench Moderation instead, but a lot of them still want, and some of them need, separate installs. It could be completely separate servers, for cases where you have six presentation servers but one content server. And that content server is basically a REST server for the content. The presentation apps are hitting that directly, give or take caching. And so you have separate apps, but they're still talking constantly. It's not a database sync; it's constant communication. All of these things exist in the market, but today they're just not always easy to do. But this is important, because one of the huge advantages of a decoupled approach is you can completely decouple your look and feel from your content storage. I mean really, how many sites need, not just can but need, to change their user workflow and look and feel and their underlying content model at the same time, every time? Very few.
We normally end up doing that because Drupal does both at the same time and couples them. If we split those up, then oh yeah: Drupal 7, Drupal 8, redesigned several times, Drupal 9 in here, whatever. The HTML, CSS, and user interaction, social media integration, all this kind of stuff are content delivery concerns. Content modeling, workflow, indexing, and so on are content management concerns. These are different needs, different types of applications that don't have to be in the same application. Sometimes you want them in the same application, sometimes you don't. And then there is the question of headless, and I do recommend watching the video from nod_'s talk earlier; he dissects this word rather well. In a headless environment, we have some kind of pure JavaScript application that's just hitting Drupal over REST. That's gonna be very high traffic. Dries was talking about this in his keynote earlier: REST is chatty. REST is a very chatty system by design. But if you don't want to custom-build everything in terms of your API, which means you have to modify it every time you want to change anything, then it's going to be chatty. So is that API that you're using for your JavaScript application gonna be the same API as if you're doing a live content server? Why not? So you need an API that's gonna be able to handle multiple different use cases. And it's going to be lots and lots of small requests. Drupal does not handle lots and lots of small requests very well right now. It handles a modest number of really big requests with caching really well. But not where loading one page requires eight different PHP loads. I'm talking about microservices now. Could we have a Drupal based on microservices? Could Drupal itself be a microservice? Part of several? Could it be a collection of microservices that you can swap in and out? Maybe. Do we need it to be? Maybe. Now let's look at the real-time web.
By that I mean new web technologies that PHP, as we use it, just can't handle. Things like EventSource (server-sent events), which is so old and no one uses it anymore that I couldn't even find a logo for it. Because really, these days everyone's using WebSockets. You can't use WebSockets with Drupal. Everyone doing WebSockets with Drupal is doing the WebSocket part in Node.js. But even that may fade out in favor of HTTP/2, which is going to be present in the majority of servers and the majority of browsers in the world within 12 months, and does the push web even better than WebSockets, in that the server is able to push new information to a client when it decides it needs it, when it decides it's relevant. All of these require a persistent connection between the server and the client. A persistent connection with server push, where the server is able to push out data. Can you do that with Drupal today? No. The way we run Drupal is completely inadequate for many of these use cases. This is much more fundamental. Why? Well, we don't have any persistent connection capability. We get one request and one response, and that's it; Drupal shuts down. And if a second request comes in, we have to redo our entire bootstrap. And our bootstrap is not cheap. Also, everything in Drupal is blocking IO. If you have an incoming request that's not doing all that much, it's still just gonna block on its IO. You're gonna sit there and wait on talking to the database, even if it's talking to the cache. You don't want that for REST APIs that are gonna be very, very chatty. The underlying problem here is CGI: the Common Gateway Interface, developed by NCSA in 1993. I suspect there are people in this room younger than CGI. And this is still the way that we're running Drupal, because it's the way most people run PHP. It's "a specification for calling command line executables from a web request." That's the actual description. And it works by setting up a bunch of environment variables based on an incoming request.
And I say "based on" rather loosely. Passing it to a script or a command line tool, and then shutting down at the end. This is where everything in $_SERVER comes from. Why is $_SERVER in PHP just so weird? Because CGI. This shared-nothing design that PHP is known for cannot keep up with high levels of constant, rapid requests, because it has that boot-up cost every time. There's all the process management in the operating system. It has to hand off from Apache to PHP, or from Nginx to your FastCGI daemon. You get no persistence between requests. You cannot save any of that overhead. You have no running process that can decide to push data to an active client, so server push or WebSockets are just not a thing. How much effort did we put into Drupal 8 trying to make our boot-up process faster for that reason? A ton. We did a lot to compromise our architecture in the name of performance for exactly this reason, because otherwise it would take 300 milliseconds baseline for every request, and that's just not acceptable. It's time, I argue, to leave CGI behind and move Drupal and PHP past 1993. What? Yes, I'm saying mod_php, FastCGI: not a thing. What else is there? Let me introduce you to the world of non-CGI PHP. And I mean it. And I don't mean command line tools, either. I mean tools like React PHP. Basically, it's Node.js but written in PHP. That's an oversimplification, but it's a decent explanation. And if you're doing WebSockets in PHP these days, this is the tool for it: React. They've got WebSocket libraries. I've actually talked with their development leads. It's a decent enough system, and it's much faster than CGI. Here's a very basic web server using React. We create an event loop, do some wiring (ignore the details, that's not relevant for the moment), listen on a request event with a callable, here's our actual app, and then run.
And this is running from the command line, and it just sits there waiting on this port for incoming requests. An incoming request comes in, it passes off to this callable, it sends back your headers and text, and returns. And notice: it's persistent. You have data that persists between requests, because it's a single running process. You are not tearing it down every time. So who cares if this takes 400, 500 milliseconds to set up? You do it once and never touch it again. Then you need to multiplex, though, because if you're handling a single request at a time in one process, and you pause on one request to look something up in the database, which takes 30 milliseconds, that's 30 milliseconds you're not talking to any other request. That's no good either. So React handles asynchronous IO. And by asynchronous, what I really mean is non-blocking. Everyone calls it asynchronous; non-blocking is really what we're talking about. Non-blocking IO is where you pass an IO request off to the operating system and say, here, take care of this. Your call to the OS returns immediately. It doesn't wait for that request to finish. It just returns, and you trust the OS is gonna take care of it. And you'll come back and check later and see if there's a response, or if there's something else you need to do, or if there's an error, or whatever. And this is something PHP is completely capable of doing. It's not done often, but it's completely capable of doing it. Unfortunately, the API for it is based on C and is therefore god-awful. Case in point: create a new socket, set it non-blocking, connect it to an IP address, write something to it, and then socket_select, which asks, hey, is there a socket in this array that is ready, that has data for me? If so, do something with it; else, go do something else for a while and then come back and check later, for some definition of later. This is very, very basic. And please don't write this yourself.
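For the curious, that low-level non-blocking dance looks roughly like this in plain PHP. This is a sketch using the stream API rather than the ext-sockets calls on the slide, and it assumes a POSIX system; a local socket pair stands in for a real network connection.

```php
<?php
// Minimal non-blocking IO in plain PHP. A local socket pair stands in
// for a real network connection so the example is self-contained.
[$a, $b] = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);

// Non-blocking: reads and writes return immediately instead of waiting.
stream_set_blocking($a, false);
stream_set_blocking($b, false);

fwrite($a, 'hello');

// "Is there a socket in this array that is ready? That has data for me?"
$read = [$b];
$write = null;
$except = null;
$msg = null;
if (stream_select($read, $write, $except, 0, 200000) > 0) {
    $msg = fread($read[0], 1024);   // data was waiting; grab it
} else {
    $msg = null;                    // nothing ready: do other work, check later
}
echo $msg, "\n";
```

In real code you would loop over `stream_select()` with many sockets at once, which is exactly the bookkeeping React PHP wraps up for you.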
React PHP takes care of all of this under the hood for you. That's why it exists: because this is a pain in the butt to work with yourself. You also don't wanna have lots and lots of nested callbacks like we saw before, because that leads to terrible, terrible code. So instead, use something called promises. Who's used promises in JavaScript? React has an implementation of pretty much the exact same API. It's a way of solving the nested callback problem and letting you defer execution when you're doing something asynchronously. So this is a highly contrived example. We have this function call that's going to get some data from the database asynchronously. And so we say: all right, when data comes back, if there was an error, mark this deferred object, this promise, as rejected. If there was real data, give it the result, pass it on. But this is all in a closure, so this only happens sometime later, when data actually comes back. What we return is a promise that says: there will be data here eventually, and you can act on it eventually. So we call dbFetch, which returns this promise immediately. Meanwhile, the operating system is busy talking to the database. We can keep on going and define these other callbacks on it. Say: all right, when the data comes in, then fetch a row out of it, and then do something with that record. This is how ES6 JavaScript is going to work. This is how pretty much any modern system doing async in a single process does it. You may have heard people say that Node.js is faster than PHP. Well, they're wrong. What is true is that doing asynchronous, non-blocking IO is way faster than blocking IO if you have an IO-intensive task. So another PHP developer named Phil Sturgeon took this challenge and benchmarked Node.js versus React PHP. And once both were properly configured, ignore the blue line for a moment.
The yellow line is PHP's performance, the red line is Node's, and they're pretty well neck and neck the whole way, no matter how many requests it's making at once, because PHP is not actually the slow part. Blocking IO is the slow part. By the way, this was done with PHP 5.5. PHP 7 is twice as fast. All right, if this isn't your style, there's another new tool in PHP called Icicle. Who's heard of Icicle? I figured you would have. Icicle uses generators. Generators are a new feature in PHP 5.5 that are seriously cool. At a basic level, they're kind of a shortcut for iterators. So you have a function, and instead of returning a value, you can yield a value. Python people, you probably recognize this; it's based on Python's style. And what this does is, when you call it, instead of returning a value, it returns an iterator object. So each time you iterate over it, essentially calling next, it will run until it hits yield and return that value. And the next time you call next on it, it just picks up where it left off and keeps on running until it hits the end. So normally, if you call the range function in PHP with one to one billion, you get an array with one billion items in it, which is not good for your memory. Instead, with this approach, it's generating the values on the fly, and so we never blow out our memory. This is a very simple, contrived example. Generators can do some really, really cool stuff beyond this, and greatly simplify your code in certain areas. And this works for methods, too. Any function or method can be a generator. You can also send data to a generator. So in this case, we call pow, and it will run up until this first yield. Then we send a value to it, and it assigns that value to val, and then it continues running until it yields that result, which gets sent back and printed. Then the next time we call it, it picks up here, assigns that value, and so on, and so on.
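Both generator examples just described fit in a few lines of plain PHP. The names `xrange` and `pow2` here are stand-ins, not the actual slide code:

```php
<?php
// Lazy range: values are produced one at a time, so iterating a huge range
// never materializes a huge array in memory.
function xrange(int $start, int $end): \Generator {
    for ($i = $start; $i <= $end; $i++) {
        yield $i;
    }
}

$sum = 0;
foreach (xrange(1, 1000000) as $n) {   // a million values, constant memory
    $sum += $n;
}
echo $sum, "\n";

// Sending data in: the function pauses at each yield, and each send()
// resumes it with a value, which it squares and yields back.
function pow2(): \Generator {
    while (true) {
        $val = yield;           // pause here until a value is sent in
        yield $val * $val;      // hand back the result, pause again
    }
}

$gen = pow2();
$results = [];
$results[] = $gen->send(5);     // resumes with $val = 5, yields 25
$gen->next();                   // step past the result, back to waiting
$results[] = $gen->send(3);     // yields 9
print_r($results);
```

Note that `send()` on a fresh generator first advances it to the initial yield before handing in the value, which is why no explicit priming call is needed.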
We have functions that we can pause mid-operation and come back to later. So we get 25, nine, and done. But we can do other stuff in the middle here; the function's just gonna sit there happily. What happens if that generator is doing asynchronous IO? Again, a contrived example, but we've got some kind of asynchronous socket. We send data to it, and it writes it out on that socket asynchronously and comes back. So now, this yield isn't actually even returning data; I'm just letting data be brought in to this process. So that's what gets output. And this seems really ridiculously weird, and it seemed ridiculously weird to me, too, the first several times I looked at it. And you may not follow it, you may not understand it the first time, but once you start to understand them, they're really cool. Coroutines. Coroutines, according to Wikipedia, are program components that generalize subroutines for non-preemptive multitasking, allowing multiple entry points for suspending and resuming execution. Functions that you can pause mid-execution and come back to later. They can say, hey, someone else can have a turn for a while. Icicle is built on this approach, which lets the code look a lot more like the code we're used to writing. So in this case, we set up our server, we set it to run, and when a request comes in, we just yield our data. Every time we say yield, that yields back to the core runtime of Icicle, and it lets other stuff run. So these lines do not run immediately after each other. Any number of things could happen in between them. These will still happen in the same order, guaranteed, but who knows what could be happening in the middle here. Combine this with asynchronous IO, and you get a very straightforward, very fast environment. I took this and wrote a very simple router for it, just as a proof of concept. So pretend this is the entire kernel for your application.
We get the request in, figure out what our action's gonna be, our controller essentially, and then we yield that action. Which means this will get dovetailed in between other parts of the request, including other requests. It still looks like we're calling that and saving the value, but we're also telling Icicle: you can pause me here and come back to me later once the IO is done and I actually have a value. And all of that logic is handled under the hood. You can ignore the rest of this for the time being, other than that you're still yielding. Icicle has promises, too. They work similarly. In this case, we've got a DNS resolver that we're going to use. We're gonna wrap that up in a coroutine and then use promises on it. I'll let you read this code later; I'm gonna post it. For the time being, just understand: this is happening. There are several frameworks I have not mentioned doing this kind of stuff in PHP today. And because they're a persistent daemon, this can do WebSockets. This can do HTTP/2. This can do all the kind of stuff that we need to do in order to support the business cases we talked about before. Who's worked with HHVM? A couple of people, okay. This is Facebook's re-implementation of PHP, because they needed something higher performance. It's mostly compatible with PHP; there's a couple of things they don't support yet that they're still working on. It is much, much faster than PHP 5.5 or 5.6. PHP 7 has pretty much caught up. But it also has this thing called Hack. Hack is one of the worst-named languages in the universe. It's an extension to PHP itself with a lot of new syntax for new capabilities, many of which have since made their way back into PHP itself. Things like scalar types, which are going to be in PHP 7 and are going to be awesome, started in Hack. A lot of other things: they have generics already, constructor promotion, short lambda syntax, which is also being considered for PHP itself at this point.
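The yield-back-to-the-runtime idea can be sketched with a toy round-robin driver. This is in the spirit of Icicle, not its actual API; each task is a generator, and every yield hands control back so other tasks can run in between:

```php
<?php
// A toy cooperative scheduler: each task is a generator, every yield hands
// control back to the runtime, and the runtime round-robins between tasks.
function runAll(array $tasks): array {
    $trace = [];
    while ($tasks) {
        foreach ($tasks as $id => $task) {
            if (!$task->valid()) {          // task finished: drop it
                unset($tasks[$id]);
                continue;
            }
            $trace[] = $id . ':' . $task->current();
            $task->next();                  // this task resumes here next round
        }
    }
    return $trace;
}

// Two "requests" being served at once. Within each task the steps still run
// in order, guaranteed, but the tasks interleave at every yield.
$a = (function () { yield 'route'; yield 'load'; yield 'respond'; })();
$b = (function () { yield 'route'; yield 'respond'; })();

$trace = runAll(['A' => $a, 'B' => $b]);
print_r($trace);
// → A:route, B:route, A:load, B:respond, A:respond
```

A real runtime like Icicle does this against non-blocking IO readiness rather than blind round-robin, but the control-flow shape is the same.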
But most importantly for us: native async primitives. So this is from the Hack manual. We have two functions here that we're going to mark as async. And then we can say: all right, let those functions run, and wait here until all of them are done. If they're doing asynchronous IO, which I left out of the slide because it's too big to fit on a slide, then the runtime itself can switch back and forth between those two whenever it needs to, to keep the CPU busy and not just sitting there waiting on IO. And this is built into the engine. And then we get all of our data back, and so we get the result of both of these. Now imagine doing this to render blocks in parallel in Drupal. What's the performance gain of that? What's the architecture gain of that? Pretty huge. That's what we were trying to do in the first place. Now imagine that all of your IO is async-capable. They don't have drivers for everything yet, but suppose you do. You can say: all right, run all of these things, let them all do their thing asynchronously. The CPU will sort it out. The operating system will sort it out. The operating system is way faster than we are. And when all of them are done, get back those results as an array. Again, a very contrived example, but imagine what you can do if you can say: oh, I've got these eight blocks on the page, they're all cached. Thunk! Suck them all out of the cache at once, glue them together, print. Now imagine if you can make all of your IO asynchronous when serving REST requests. Can you get 60%, 80% performance improvements? Quite possibly. You can fork PHP processes. Not if they're running as CGI, but you can fork PHP processes from the command line. I've actually built apps like this. We did one a couple of years ago called Kiwi, which was a joke, because it was a connector for a system called EMu, both of which are flightless birds. And this is essentially how the system worked.
It was a command line tool, but we had some number of workers, and we just fork and say: all right, fork for the number of child processes we have. If it's the parent, do nothing; if it's a child process, go do whatever slice of the work is being split up between the child processes, and then wait for them all to finish. And so the parent process just pauses there until all the children are done. And this gave us a two-fold improvement over not doing it this way. But this also means that we have completely shared memory up until the point of the fork, and the operating system doesn't actually duplicate the memory. We can have five, six, seven, eight processes running all with the same memory usage, the same memory space in the operating system. And those processes could be blocking, or they could be asynchronous, too. Imagine doing something like this. Super simple example: we have a non-blocking forking server. We set up our socket, then just loop and say: all right, is there something to do? If not, come back in a moment. If so, fork, let the child process handle that incoming request, and let the parent go back to waiting. Or we could pre-fork a couple of processes and create a pool of them that are sitting there waiting, and we hand off from the parent process to those child processes every time there's a request, which is exactly how Apache works. We could reimplement Apache in PHP and never have to bootstrap again. Obviously, done by hand this is a terribly buggy and error-prone way of doing it. There are various libraries that wrap this up in a much safer fashion. One of them, Icicle Concurrent, is a process manager that's using channels for communicating between processes, kind of inspired by Go. As I said, there are other tools like this as well. This is what's happening in the PHP world today. All of these are happening today.
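The Kiwi-style fork-and-wait pattern can be sketched like this. It assumes the pcntl extension, which is CLI-only; the "work" here is just exiting with a worker id:

```php
<?php
// Fork N child processes, let each do a slice of the work, and have the
// parent pause until all children are done. Requires ext-pcntl (CLI only).
$workers = 3;
$pids = [];

for ($i = 0; $i < $workers; $i++) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: do this worker's slice of the job, then exit.
        // (Real work would go here; we just report our worker id.)
        exit($i);
    }
    // Parent: remember the child and keep forking.
    $pids[] = $pid;
}

// Parent pauses here until all the children are done.
$codes = [];
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
    $codes[] = pcntl_wexitstatus($status);
}
sort($codes);
print_r($codes);   // one exit code per worker
```

Because the fork happens after setup, all children share the parent's memory copy-on-write, which is the "completely shared memory up until the point of the fork" being described.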
I've been talking to some of the PHP internals developers, including some who work on both HHVM and on PHP. Will we get async primitives in PHP 7.1, 7.2? I put the odds at better than 50% that PHP has native async primitives in the language before 2020. What are we gonna do with those? Are we ready to use those? Will that let us do the push, persistent connections that we're going to need to do? Will this let us have multiple configurations of Drupal? Which of these approaches is gonna win? Async forking? React style? Generator style with Icicle? Well, I was hoping to be able to tell you which was gonna be successful, but to be perfectly honest, I have no idea. I have no idea which of these approaches is going to end up the successful future of PHP. What I will predict is that CGI is not going to go away entirely, because shared-nothing does have a lot of advantages to it. It does simplify a lot of problems that become relevant if we're working with these other types of architectures. But we are gonna need to work in other environments. We're going to need to use Drupal in situations where we can't afford a bootstrap on every incoming request. We're gonna need to use Drupal to do WebSockets. We're gonna need to use Drupal to do server push with HTTP/2. These are going to be hard requirements on us if we want to compete. We are going to need to be able to run in both modes. We're going to need to be able to run Drupal as a standalone server like now, or as a split-brain decoupled system, or as a high-performance REST server, or several of those at the same time. So how do we make Drupal both monolithic and decoupled at the same time? That's our challenge. That's what we can do today to get ready for whatever the future looks like; I don't know which of these it'll be. What we need to do is support different configurations of common Drupal components: the same underlying components, arranged and architected in different ways, but still the same underlying code.
So that we don't have to produce eight different versions of Drupal; we just have eight different wirings of Drupal. How do we do that? Well, here's our first clue: components. We need reusable components that can run in any of these modes, or some other mode I haven't mentioned because I don't know about it yet, because it hasn't been invented. We need reusable components. What makes a component reusable in PHP? This list should look familiar. It should be stateless. No state means it doesn't matter how many requests are running through the code at the same time; they won't bump into each other. It means value objects. It means immutable objects that we know are not going to pollute other threads. It means not a single global anywhere in the system. One single global destroys your ability to keep your processes separate, or your requests separate if you have a single process. It means we need to stop having global dependencies on the request. The request stack is Symfony's way of handling sub-requests, and we're using it as well, and that's not quite gonna cut it, because keeping track of that context gets very, very interesting. We need code that is not dependent on our service container. Why? Because the Symfony service container is a great tool for a CGI-based system. But if we're running in a persistent daemon, we don't need all of this compiling stuff. The compiled container is not necessary; it's there for performance when you have shared-nothing. If you don't have shared-nothing, you don't need to waste time on that. We need to wire it differently. We may want to use a completely different container in different configurations. So the more code we have that is independent of the container, the easier that becomes. Any IO we have needs to be isolated into very specific classes, because that's the stuff we're gonna have to rewrite for each case.
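As a sketch of that checklist, here's what it looks like in miniature: an immutable value object, a stateless service that takes its dependencies explicitly, and IO hidden behind a narrow interface so only that one class ever changes between CGI, daemon, and async wirings. All names here are hypothetical, not real Drupal APIs, and the syntax assumes PHP 8.1 or later:

```php
<?php
// Immutable value object: nothing shared, nothing for concurrent
// requests to corrupt. "Mutation" returns a new object.
final class Article {
    public function __construct(
        public readonly string $title,
        public readonly string $body,
    ) {}

    public function withTitle(string $title): static {
        return new static($title, $this->body);
    }
}

// The only place that talks to the outside world.
interface ArticleRepository {
    public function load(int $id): Article;
}

// Stateless service: no globals, no container, just declared dependencies.
final class Teaser {
    public function __construct(private ArticleRepository $repo) {}

    public function render(int $id): string {
        $article = $this->repo->load($id);
        return "<h2>{$article->title}</h2>";
    }
}

// Swap in any IO implementation: blocking, async, in-memory for tests.
$repo = new class implements ArticleRepository {
    public function load(int $id): Article {
        return new Article("Article $id", 'body');
    }
};

$html = (new Teaser($repo))->render(42);
echo $html, "\n";
```

The Teaser service is thread-safe and re-entrant, so it runs unchanged under CGI, in a persistent daemon, or in an async runtime; only the ArticleRepository implementation gets rewritten per environment.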
So keep your IO in very specific classes that do nothing other than talk to third parties, because that's the stuff that gets rewritten. The rest of the code should be stateless services that you can just reuse anywhere. This is easier if you rely on third-party code. If that third-party code is well written, great, you don't have to maintain it; the PHP community as a whole can maintain it. Dare I say, this is what qualifies as purely functional code. You know me, yes, I dare. These are the same clean-code standards we've been pushing for years. This is why Drupal 8 made such a big shift from the old PHP 4-style architecture towards a more modern OO architecture. This is why we push for stateless services. This is why we have value objects in places. This is why, because this opens the door to these kinds of changes. What are the hard parts still gonna be? Well, Entity API has most of the problems I just listed. There's far too many statics, far too many service dependencies. It's gonna be a problem, and it's a problem we have to fix if we ever want to serve entities using WebSockets. Render API: the render context system is great for what it does. It's great for the architecture we have now. I really don't know what's gonna happen if we try to put that in an asynchronous environment. It might actually work. I haven't tried, but it's a concern point. Anything that's container-aware means it's coupled to a single container. If you have code in Drupal or in a module that is container-aware, you are coupling to an architecture that may not be around in five years, or may not be the only one you need to worry about in five years. As I mentioned, the request stack is designed for sub-requests, not for asynchronous work. This may be a problem too; we may need to separate from that. Which means, do we need to start putting our code into separate repositories?
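Here's a minimal sketch of that IO-isolation idea: the business logic is a stateless service with everything injected, and the only class touching storage hides behind a narrow interface. All the names here are invented for illustration; in a real system the in-memory store would be swapped for a PDO-backed one under CGI, or an async client under a daemon, without touching the service.

```php
<?php
// The narrow IO boundary: the ONLY thing that gets rewritten per
// environment (CGI, persistent daemon, test harness, ...).
interface KeyValueStore
{
    public function get(string $key): ?string;
    public function set(string $key, string $value): void;
}

// One concrete implementation; here in-memory, for tests and demos.
final class InMemoryStore implements KeyValueStore
{
    /** @var array<string, string> */
    private array $data = [];

    public function get(string $key): ?string
    {
        return $this->data[$key] ?? null;
    }

    public function set(string $key, string $value): void
    {
        $this->data[$key] = $value;
    }
}

// A stateless service: no globals, no container awareness, no IO of its
// own. It works identically no matter how Drupal is wired around it.
final class Greeter
{
    public function __construct(private KeyValueStore $store) {}

    public function greet(string $user): string
    {
        $name = $this->store->get($user) ?? 'stranger';
        return "Hello, $name!";
    }
}

$store = new InMemoryStore();
$store->set('u1', 'Larry');
$greeter = new Greeter($store);
// $greeter->greet('u1') → "Hello, Larry!"; unknown users get "stranger".
```

Because `Greeter` depends only on the interface, it is trivially testable and reusable in any of the architectures above.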
Do we need to break core up into separate repositories to force ourselves to do this kind of separation? Maybe, I don't know. We might. Because even just with the components we have now, it's been really hard to convince people to not introduce subtle little dependencies without realizing it. People say, oh, it's just one little dependency. One little dependency breaks all of this. Thread-safe code, which is what we're talking about here — stateless, thread-safe, re-entrant code — can really easily be used in CGI mode. We can still use it in the traditional fashion, but not vice versa. Code that relies on a bunch of globals that get destroyed at the end of the request will not work at all in anything other than CGI. All of this other code, aside from a couple of yield statements, will work in CGI. It'll work in any of them. So let's write the best code we can and hedge our bets. It's also testable. Everything I just said makes code more testable too. If your code is more easily testable, you're probably getting the rest of this right too. They all play on each other. This sounds huge, but quite frankly, this is within sight. I can see us pulling this off in five years. Why? Because of the work we've already done to refactor the system. Because most of the system is now stateless services. Not all of it, but most of it. Most of the system is not container-aware. Too many things are, but most of it is not. We are closer to this today than Drupal 7 was to Drupal 8, I would argue. The distance from 7 to 8 that we have already covered is a bigger shift than the actual work required to prepare for whatever this future holds. As long as we don't slide backwards. As long as we don't get lazy and start reintroducing shortcuts that make things easier for right now but actually tightly couple things again. So please don't do that. I'm sure some of you out there are saying, but what if you're wrong?
Raise your hand if you're in this category. Peter? Five years sounds too slow, you're saying? Could be. 8.2? I mean, these are refactorings that we can do. Every contrib module should be thinking this way, and this is a guideline for how we improve Drupal 8 within the Drupal 8 lifetime. This is not a "we start rewriting Drupal today." No, no, no, no. This is: as we are improving Drupal 8 over the next several years, these are the guidelines to keep in mind. And if I'm wrong, and in 2020 synchronous CGI is still the only game in town for PHP and, I don't know, HTTP/2 just never really takes off? Well, then all we're left with is a highly decoupled, highly testable, highly reusable, shareable code base full of loosely coupled components. Awesome! Some resources to follow up on. I will post these slides. There are links to React, to Icicle, to Doorman, which is a PHP process manager for forked environments, HHVM's documentation, some links to the PHP manual, and some other articles that I recommend reading. As I said, it took me a while to wrap my brain around most of this, so if your brain is fried right now, don't worry, you're in good company. Most people's brains are fried at this point. But I do recommend taking time to read through some of these articles, maybe multiple times, and think about just how far can we go? Just how far can we push this to make Drupal a WebSocket-friendly, HTTP/2-based, decoupled powerhouse? Thank you. So we do have time for questions, so if there's a microphone here, please use it. Unless I've just melted everyone's brain way too much. Nobody? Yeah, it's over there. Hey, Larry. Peter Wolanin. So in terms of pushing content to devices and things, how do we have Drupal do that and also serve web pages, when maybe it makes sense to terminate the request and clean up because you just built a huge whack of HTML, versus just pushing out a little notification where you might want a long-running daemon?
Are we gonna have to have Drupal running on the same server somehow in the same mode, or running on two servers talking to the same database in two modes? How do you get sort of the current behavior of actually serving web pages and this push WebSockets behavior? That's an excellent question. A couple of possibilities. You could have a common data store server — that's where all your entities live — and then a CGI front end and a WebSocket front end that are running on separate servers. You could have a persistent daemon that, on an incoming request, can upgrade it from HTTP/1.1 to WebSockets or to HTTP/2, and then that request stays open as its own forked process, or as a separately tracked process in PHP itself. You can do very simple things like chat servers with ReactPHP in a matter of an hour or two, and it maintains an open connection to every client that's connected to the server. And I'd see no reason why it couldn't say, oh, this request is asking for an HTML page rather than being a WebSocket connection, so I'll just serve its thing and drop that connection at the end. Yes, that means that doing so needs to clean up after itself and not have all these statics and giant render arrays that we built up. That's the point. And that's exactly what we need to be doing in order to support — well, I don't know which of these architectures. Hi, I was wondering, do you know what Symfony is already doing to implement this, or what it means for the integration with Symfony? I am not aware of Symfony doing anything in this regard at this point. That said, I haven't really asked. I suspect Symfony will remain a very good shared-nothing architecture framework. But in five years, who knows? You know, I'm trying to look further ahead than anything currently in the market that is in actual use. ReactPHP and HHVM are the only tools I mentioned here that are in actual use right now. Icicle is still in beta, maybe alpha.
So I think a lot of Symfony's architecture could work fairly well, actually. The HTTP kernel architecture could work very well here. In five years, I would not be surprised if Symfony had also switched to PSR-7, the HTTP request and response standard, which was designed with this kind of thing in mind. That's why it's value objects and that's why it's immutable: to simplify exactly these kinds of use cases, which are esoteric now but may not be esoteric in the future. So my hope is that Symfony components will remain good decoupled components, and we can keep using them, or not, as it makes sense architecturally at the time. Thank you. First of all, great talk, really. And many of the things you said, above all at the very beginning, reminded me of application servers in Java like, I don't know, JBoss, Tomcat and stuff like that, multi-threading. So I was wondering, is there anything in the works for PHP, or are there very compelling reasons for doing that in user space aside from, I don't know, portability? In terms of the PHP engine, as I said, there's talk of asynchronous primitives in the language itself. Whether or not this will happen, I don't know. I think it's likely that within five years we'll get something. There was talk about bringing a better event loop library into PHP itself, like libev or libevent, which are the options that Node.js uses too. I think, again, a lot of work went into PHP 7 under the hood to clean up its code to make it possible to do this kind of stuff in 7.1, 7.2, 7.3. Will we ever get, like, Tomcat equivalents? I'm gonna guess no, because aside from some high-performance pieces like the event loop, there's probably not a reason to do it in the engine itself, and there's plenty of people who think it actually shouldn't be. So yeah, at least half of the implementation of these things is going to be in user space, is my guess, but I could be wrong. Thanks.
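The PSR-7 immutability contract mentioned above can be sketched in a few lines. The real interfaces live in the `psr/http-message` package; this tiny stand-in class only mimics the pattern — every `with*()` method clones, modifies the clone, and returns it, leaving the original untouched.

```php
<?php
// A toy response object mimicking the PSR-7 immutability pattern.
// Illustrative only; use psr/http-message implementations in real code.
final class Response
{
    /** @param array<string, string> $headers keyed by lowercased name */
    public function __construct(
        private int $status = 200,
        private array $headers = [],
    ) {}

    public function getStatusCode(): int { return $this->status; }

    public function getHeaderLine(string $name): string
    {
        return $this->headers[strtolower($name)] ?? '';
    }

    // The PSR-7 idiom: clone, mutate the clone, return the clone.
    public function withHeader(string $name, string $value): static
    {
        $new = clone $this;
        $new->headers[strtolower($name)] = $value;
        return $new;
    }
}

$r1 = new Response(200);
$r2 = $r1->withHeader('Content-Type', 'text/html');
// $r1 is unchanged and has no header; $r2 carries the new header.
// That makes messages safe to share across concurrent in-process requests.
```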
So if we assume that — well, I love all this great new stuff and I can't wait to get started. I don't want to wait until 2020. I don't want to wait until Drupal 9. Is there going to be a way, in your opinion, that this can be cleanly done in a Drupal project before it is part of Drupal core? The whole thing, perhaps not. The project I mentioned before that I did, which was a decoupled system, used Drupal as a behind-the-firewall editorial CMS, pretty much Drupal 7 as is. It dumped data to Elasticsearch, and then we had a Silex app sitting in front of that. So stuff like that you can do today; there's a lot of people who have done projects like that. In terms of breaking up Drupal itself, I think taking Drupal data and dumping it to some intermediary and then putting something else in front of it is one of the architectures, and the fact that the other half is not Drupal right now — well, we can change that later, but that's where all the front end is. Do that front end using something that uses Twig and it's a transferable skill set. Entity API is really the big challenge, to be honest. Entity API and Views are the most important parts of Drupal, I would say; everything else is tools to build those. And so Entity API not being fully, cleanly decoupled is problematic. I'd love to see someone try: okay, can we put Entity API in a persistent daemon? What happens if we do that? What are we actually gonna run into that breaks horribly? I'm not entirely sure. Volunteers are welcome to figure that out and then feed that back in: okay, some of these things that are breaking, can we fix them without changing APIs? I'm gonna predict some yes, some no. The ones we can fix without breaking APIs, let's do. Let's just start now cleaning things up in that direction. We can do that within the Drupal 8 cycle. Those are milestones along those feature branches Dries was talking about this morning, and we'll see what happens.
Again, worst case scenario, Entity API becomes better. Oh, how terrible. Okay, thanks. Anyone else? Thanks for coming. Please leave feedback. Enjoy the party tonight.