Hi everybody, welcome. My name is Josh Waihi, I'm a technical account manager at Acquia. Today I'm going to be talking about decoupled Drupal and why we're building a worse wheel, and this is built from my experience and what I've seen happening in the marketplace over the last couple of years. I'm not really a front-end person, so it's somewhat comical that I'm speaking in a front-end track; I work a lot more in the back-end and enterprise architecture space. So even though we're in the front-end track, I'm not really talking front-end; there's no snippet of code you're going to see in this talk or anything like that. I work a lot with customers, and we focus on how to do more with less risk, often with less code, that sort of thing. So we're going to kick into it, and to set the stage I thought we'd take a history-lesson approach. I want to talk about the evolution of Drupal, to understand where we've come from so we understand the landscape we're in today. Traditional Drupal, the static web: go back to circa 2006 and we have a stack that shouldn't introduce any new concepts to anybody here. We've got the LAMP stack: PHP generating dynamic content, with HTML, CSS and JavaScript being served up directly from the file system over something like Apache and pushed to the browser, and we call that a page view. This is a very standard thing, and it's exactly what Drupal was built to do. It works really well in hobby applications, it works just fine in mid-tier applications, and with a little bit of modification it's reasonably okay in enterprise as well.
And when we split this up in terms of responsibilities, you've got the server side being the renderer: the dynamic work is done on the server side, and the browser is largely just static; it tries to do the least amount of work possible. The reason for that is because back in 2006, and in the 90s for that matter, client resources were limited and the internet was slow. All the power sat in server-side processing, so that's where we wanted to do all the work, because we couldn't trust the front end to have the resources for the things we wanted it to do, and it helped us scale things a bit better. It's that mainframe approach: send the request to the mainframe, get it to do the job, and it sends you back the result. If we look at it from a loading-journey standpoint, and I recognise some of the text here can be really small to read, this is the holistic end-to-end journey of rendering one of these page views. On the far side you've got things like stalling, because the browser's not ready to make the request; doing a DNS lookup; making the connection; establishing SSL; sending the request now that you've got a connection; waiting for the time to first byte, which is the big teal block there; downloading the content; getting the first paint, then first meaningful paint, becoming visually ready; and finally reaching time to interactive. That's the experience we'd be producing in this era. There are some numbers down the bottom here, and right here is about 1.8 seconds, to give you an idea of how long that would take. Then we've got a pie graph to break down all of these parts and say where they belong in the stack.
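To make that breakdown concrete, here's the arithmetic on the loading journey as a tiny script. Every duration is an assumed, illustrative number chosen to land on roughly the 1.8-second total from the slide, not a measurement:

```javascript
// Illustrative breakdown of a traditional server-rendered page load.
// All durations are assumed values for the sketch, not real measurements.
const phases = {
  stall: 50,     // browser queueing before the request starts
  dns: 60,       // DNS lookup
  connect: 80,   // TCP connection
  ssl: 110,      // TLS handshake
  request: 30,   // sending the request
  ttfb: 1350,    // time to first byte: the server renders the whole page first
  download: 120, // transferring the HTML
};

const total = Object.values(phases).reduce((a, b) => a + b, 0);
const backendShare = phases.ttfb / total;

console.log(`total: ${total} ms, back end: ${(backendShare * 100).toFixed(0)}%`);
// prints "total: 1800 ms, back end: 75%"
```

With these assumed numbers, the time-to-first-byte phase alone accounts for three quarters of the journey, which is the shape of the pie chart described next.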
So: how much of this is attributable to the network, how much to the browser, and how much to the back end? The server, you can see here, is 75% of the problem. That's the big time-to-first-byte block in the middle, and it's because Drupal has this mentality of "I need to render the entire page before I send the first byte of information." That's a big problem that BigPipe actually went about solving in Drupal 8, but we basically have that mentality, and that's why it takes so long for anything to show up on the page. So we go about optimising this a little more. We evolve further and move some of that static behaviour over to the server side and start caching. We introduce things like Varnish or Squid or Memcached to start producing static caching and rendering. This rests on the premise that everybody coming to the website wants the same thing: why should the server do the same thinking twice when it's the same output each time? If it's the same output each time, we can cache it and send it back, and that removes the problem of Drupal needing to think through the whole page before it sends the first byte. At that point it's already static, so we can push things out really quickly. And it changes the loading journey significantly, because the time to first byte goes from, say, the 1.8 seconds needed to render a Drupal 7 page to just pulling it straight from cache: a 30-millisecond job, and now everything returns in under a second, under 600 milliseconds on that scale there. It shifts the pie chart a lot too. Now the pain point becomes the browser. The server side gets really fast, and networking is not a problem.
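The expiry-style caching just described can be sketched in a few lines. The class and method names here are illustrative, not any real proxy's API:

```javascript
// Minimal sketch of expiry-based (TTL) caching, the strategy that
// Varnish-style reverse proxies implement in front of Drupal.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  get(key, now = Date.now()) {
    const entry = this.store.get(key);
    // A missing entry, or one older than the TTL, is a miss.
    if (!entry || now - entry.cachedAt > this.ttlMs) return undefined;
    return entry.value;
  }
  set(key, value, now = Date.now()) {
    this.store.set(key, { value, cachedAt: now });
  }
}

// Usage: the first request pays the full Drupal render; repeats within
// the TTL are served from cache. Timestamps are passed in for clarity.
const cache = new TtlCache(60_000);
cache.set('/news', '<html>…</html>', 0);
const hit = cache.get('/news', 30_000);   // within the 60 s TTL: cached HTML
const stale = cache.get('/news', 90_000); // past the TTL: undefined, re-render
```

The TTL is the knob the next section is about: too short and you re-render constantly, too long and you serve stale content.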
The server side and networking were half the total equation, but now more of the performance burden sits on the browser. So let's look at how that works in a bit more detail. The pie graphs and bar graphs are good high-level views, but not all pages are the same. Caching effectiveness depends on a number of things. One of them is how long you can cache something before you consider that piece of content stale and have to refresh it. That's the expiry cache strategy. And the value you get out of a cached item depends on how frequently it gets used. There'll be parts of your site that get used a lot, usually maybe the top 20% of your content, and parts, the bottom 50%, that don't get used a whole lot. I've got an example here of a site with, say, 500 pages and an average response time from Drupal of 1.8 seconds, and let's say there are 10,000 page views in some time frame (the exact time frame doesn't really matter here, because we're not talking about TTLs yet). You've got a page distribution of 20%, 30%, 35% and 15%, with the page counts in brackets; then the amount of traffic that hits each of those slices; and then the cache hit rate calculated for each one. For the top 20% of your content you might get a 98.57% cache hit rate; for the bottom, 75%; and averaged out it comes to 85.27%. You can check my math if you like. So 14.73% of the traffic takes longer than 1.8 seconds, because 14.73% of the traffic had to incur that Drupal page load with its time to first byte. If we drop the page views from 10,000 down to just 5,000 and play the same algorithm out...
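That hit-rate arithmetic can be played out in code. This is a simplified model of what the slide is doing, assuming each page costs one Drupal render (a miss) per cache lifetime and every repeat view is a hit; the bucket figures below are illustrative, not the exact slide numbers:

```javascript
// Back-of-the-envelope model: within one cache lifetime, each page costs
// one miss (the full Drupal render) and every repeat view is a hit.
function hitRate(buckets) {
  let hits = 0, views = 0;
  for (const b of buckets) {
    hits += Math.max(0, b.views - b.pages); // one miss per page per lifetime
    views += b.views;
  }
  return hits / views;
}

// 500 pages: a hot bucket (top 20% of pages, most of the traffic)
// and a cold long tail. Traffic splits are assumed for the example.
const busy = hitRate([
  { pages: 100, views: 7000 }, // hot pages
  { pages: 400, views: 3000 }, // long tail
]); // (6900 + 2600) / 10000 = 0.95

// Same site, same pages, half the traffic: the misses don't shrink.
const quiet = hitRate([
  { pages: 100, views: 3500 },
  { pages: 400, views: 1500 },
]); // (3400 + 1100) / 5000 = 0.90
```

The model makes the talk's point directly: the miss count is fixed by your page count, so halving the traffic worsens the hit rate. More page views, more performance.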
...now 27% of the total page views have to incur that Drupal page load time. So you actually get worse performance because you've got less traffic going to your site. The cache strategy we end up implementing is: more page views, more performance. It works really well, and it scales really well, for big enterprise sites: news, media, global sporting events, that sort of thing. But if you don't have a lot of traffic to your site, it's less effective. The way we get around that is with longer cache lifetimes. Longer cache lifetimes mean a higher chance of a cache hit and the site staying performant for longer. And the final evolution, which took Drupal essentially 10 years to reach, was cache invalidation. It was kind of there in Drupal 7, the Purge module existed, but it was not very holistic, not very accurate; there were a lot of reasons why you still couldn't have a very long cache lifetime. That got fixed in Drupal 8 with the introduction of cache tags. So now Drupal is able to produce a page and send it into cache, and it could be cached for a week, a month, a year, and if something is going to change on that page, Drupal knows about it. It's able to actively purge that piece of content whenever it's actually due to be updated, because a content event occurred inside the CMS. So now there's direct parity, and we can be performant for as long as we possibly need to, until Drupal needs to spend the time to regenerate that page. It's a really fine-tuned, mature strategy for delivering content. But in the 10 years it took Drupal to get to this point, there have been a lot of paradigm shifts.
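A minimal sketch of how cache tags make that active purging work, with illustrative names rather than Drupal's actual internals:

```javascript
// Drupal 8-style cache tags in miniature: each cached page carries the
// tags of the content it renders, and a content event purges every page
// sharing a tag, no matter how long the TTL was.
class TaggedCache {
  constructor() { this.store = new Map(); }
  set(url, html, tags) { this.store.set(url, { html, tags }); }
  get(url) { return this.store.get(url)?.html; }
  invalidateTag(tag) {
    for (const [url, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(url);
    }
  }
}

const cache = new TaggedCache();
cache.set('/news/42', '<article>…</article>', ['node:42']);
cache.set('/front', '<html>…</html>', ['node:42', 'node:7']); // front page lists article 42
cache.set('/about', '<p>…</p>', ['node:7']);

// Editing node 42 in the CMS purges both pages that render it,
// while the unrelated /about page stays cached.
cache.invalidateTag('node:42');
```

This is the "direct parity" idea: the cache lifetime can be effectively infinite, because the CMS knows exactly which cached pages a content change affects.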
So we've moved to a data-refresh view of the web now; it's no longer about page refresh. We want to personalise the experience for our users a lot more, which means the idea of one cached item that can be served to everybody becomes more questionable: there's more nuance in the data we serve people based on their context. There's also a lot of event-driven data now being pushed at people, rather than them having to request it. JavaScript is quite naturally suited to these demands, and it's become the emerging technology to address and solve all of these things. You have page-less DOM manipulation, so we can change the page a person sees without them having to go back to the server, although they kind of are, they just don't know about it. We can do client-side personalisation, so we can leverage their processing power. And event-driven data is handled with things like WebSockets, keeping a connection open to the back end to receive new information. So how does Drupal stack up against this? There are pros and cons. Pros: it's really good at content management; we've got optimised content delivery down; it does centralised processing, which is good because it's not distributed, we don't have to send the processing over to the client; and it can scale simple content really well, we've nailed that. The cons today: it's monolithic, and I'll go into that a little more later on; it has complexity that's ingrained into it when we start talking about personalisation; and it has very limited front-end leverage, in terms of what it can really control on the front end.
And that slow time to first byte still persists, even though, in a probability sense, it's been dramatically reduced. So localised processing is the next frontier we're now in the midst of pursuing. This is where we start changing up the model of everything dynamic on the server side and everything static on the client side. We start to play around with this and introduce dynamic UX into the client, and to do that we leverage the processing power in people's client-side devices. Of course, since the inception of Drupal, things have changed. Bandwidth's gotten better; I mean, you guys still have the NBN, but bandwidth's gotten better for the rest of the world. Phone processing power is a lot better too. So there's a good argument that we can start to distribute where that dynamic processing takes place. But it also means the loading journey starts to change, because while we now have optimised content delivery from the server side, we start to see client-side complexity increase: things like time to interactive, first meaningful paint and first paint start to take up the time before someone can engage. Still, client-side processing is okay, right? Phones these days are pretty powerful, so they can do lots of stuff: they've got eight cores and eight gigs of RAM, the pixel resolution is better than ever, so there's a good reason why we can start putting our code onto people's devices and getting them to compute. And that certainly seems to be the case if we look at it from an economic standpoint.
So, for the last five years or so, and this is the Octoverse, GitHub's look at the popularity of languages on GitHub, JavaScript has been the most popular language, and PHP has fallen; it's dropped below Python now. In the job market here you can see Laravel, at this point in time, I think this was 2018, still had more demand for jobs than React and Node, but React and Node are directly below it, and PHP is all the way down at number 14. So in the market there is demand from people who want to hire React and Node.js developers, and that influences availability of all kinds of things in the resourcing market as well. This is GitHub's emoji data, again part of the Octoverse, which looks at how people use emoji on GitHub to understand sentiment around a language. At the top there you've got the hearts, and you can see PHP is at 6.9% of heart emoji while JavaScript is at 6.5%. So people in the PHP community heart 0.4% more of the time than JavaScript; they love it just 0.4% more at the moment. However, on the thumbs-up emoji, you can see JavaScript there is at 84.2% while PHP is at 80.3%. So, yeah, maybe PHP lovers love PHP ever so slightly more, but there's more optimism in JavaScript. PHP people party 0.3% more than JavaScript people, and on pessimism, the thumbs-down, PHP is at 2.4% while JavaScript is at 1.1%. Still better than C#, but you can see there's a swing in mood: there's a lot of optimism inside the Node.js community, and a bit more celebration happening with PHP, and I think this speaks to the maturity of each language and where they're at. I'll speak about that a little more.
Now, with dynamic work moving to the client side, people want more common and consistent ways of developing things, and so front-end frameworks emerge. If you're a front-end developer, you'll know pretty much every one of those logos. These teams want their own technology, their own discipline, their own methodology to deploy with, and they don't want to be hindered by the same delivery processes that Drupal is delivered by. So they need a new delivery pipeline as part of the project, and that ultimately means we end up with two back ends. Here we've taken the pre-existing dynamic stack we knew about, and we tack on a file system and a web server that's designed just to serve up client-side apps, pump that through the CDN and out to the client-side JavaScript engine where it gets compiled. And this is all so that we can have a React app deployment pipeline deployed by CI, and a Drupal one that's different: two different teams delivering these two projects, and they want to do it in a polylithic way, not a monolithic way.
Right, because if you decide, no, we're going to roll the JavaScript app into Drupal, it means the JavaScript team has to wait for the Drupal team to be ready to do a deployment before they can release their code, and that team gets really frustrated about that, so they separate it out. Now they're running two different server-side stacks to deliver a single application, and that's becoming pretty much the norm: multiple technologies hosted on the server side, compiled in. So we're starting to see polylithic service delivery become very, very common. But meanwhile, what's happening in the loading journey is that things on the client side are getting bigger and fatter. The scale has gone from 1.8 seconds to four seconds, and that's because all these frameworks have to get loaded in, compiled, built, run and rendered, and they're doing a lot more things. When frameworks emerge, especially on the front end, they want to do more things, they want control of more things, and so they need time to think and to be more logical about things, which sounds kind of familiar, right?
Remember, Drupal had to do all of its thinking before it produced its first byte, and the same thing is now happening, just on the front end instead of the back end. So time to interactive is a real problem. Does everybody know who Addy Osmani is? Raise your hand if you know who Addy Osmani is. One person. Addy Osmani is an engineering lead on the Google Chrome team, and he's got a really great talk, he's done it twice now, on the state of JavaScript, and these are slides I've repurposed from him. This is him looking at the time to interactive of two different websites, which he's blurred out, but I think you can make out what they are, showing that these two sites actually take a very, very long time to load, and the time to interactive, the point where they're actually useful, takes a very, very long time to arrive. And that's all basically because these are fully decoupled apps that take all that time to load in and create an experience.
This is The Cost of JavaScript. The slides will be available somewhere if you guys want to check them out; he's got a Medium blog where he writes this all up in a great post, a presentation of maybe 60 slides, and I think a YouTube video as well. It's well worth the read to understand JavaScript and the cost of it today, but I'll give you a couple more slides from it. This is JavaScript processing for cnn.com profiled against different devices. At the top you can see the iPhone 8 on the A11 processor, and highlighted in blue the Alcatel 1X, which is an under-$100 phone. On the iPhone 8 it takes 1.1 seconds to process cnn.com's JavaScript, while on the Alcatel 1X it takes 32 seconds. Then you've got something middle-of-the-road, the Moto G4 on a Snapdragon, and that takes about 10 seconds. So you can see that with client-side processing, something we knew back in the 90s still holds: it's not the same experience for everybody. One of the points Addy Osmani really calls out in his talks is that developers are often people fortunate enough to have high-end devices, so when they're doing their testing, they're getting the very best experience that can be had, and everything else is degraded from there. And remember this is just looking at device performance; it isn't even factoring in network availability, so 3G and 4G experiences can be very different for people as well, and when you need to load in assets it can be a real big problem. Being on the Chrome team, he did a bunch of analysis and was able to calculate that the median web page today ships about 350 kilobytes of JavaScript, and it takes an average of 15 seconds until it's interactive.
At the beginning of my talk I was talking about how it took us a long time, 1.8 seconds, to get a page out of the server side, and now we're tolerating 15-second page loads with front-end delivery. He goes on to point out that mobile is really a spectrum you have to think about: when you're delivering your apps, it's a good idea to decide which phones you want to target your performance metrics around, and when you've got performance metrics to hit, make sure you're doing some testing on those devices. It's going to be different for different customers and clients; certain types of businesses appeal to people with different average phone specs or different network availability. Equally, from a network standpoint, it's really important to remember that JavaScript is not the same as other assets of the same size. A 200-kilobyte image is not the same as 200 kilobytes of JavaScript, because when that image gets to the browser it just loads; it's a bitmap, it can be rendered straight away. But when 200 kilobytes of JavaScript hits the browser, it needs to be parsed and compiled, expanding into something like two megabytes of code, and then it needs time to execute and render on the main thread, and all these other things. It takes a lot longer to deal with JavaScript, so he uses the size of the JavaScript you ship as an indicator of how long these things will take, and says to be very, very mindful about it. He goes even further and says you should have JavaScript size budgets that you work towards in your projects and won't go over, which also forces you to make sure everything is performing optimally.
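A JavaScript size budget can be as simple as a script in CI that adds up shipped bundle sizes and fails the build when they exceed the agreed number. The bundle names and sizes below are made up for the example:

```javascript
// Sketch of a JavaScript size budget check for CI. In a real pipeline
// the sizes would come from the build output, not a hard-coded list.
function checkBudget(bundles, budgetKb) {
  const totalKb = bundles.reduce((sum, b) => sum + b.kb, 0);
  return { totalKb, overBudget: totalKb > budgetKb };
}

// Hypothetical bundles for a site that has crept past its budget.
const bundles = [
  { name: 'vendor.js', kb: 180 },
  { name: 'app.js', kb: 120 },
  { name: 'analytics.js', kb: 90 },
];

const result = checkBudget(bundles, 350); // 390 KB against a 350 KB budget
if (result.overBudget) {
  console.error(`JS budget exceeded: ${result.totalKb} KB > 350 KB`);
}
```

The budget number itself is the judgment call the talk describes: it should be set against the devices and networks your actual audience uses, not against the developer's own phone.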
So, reinventing the server side: how do we deal with this today? How do we make the web faster, not 15 seconds on average and really slow for other people? Well, the Node folks have a really great idea. They want to take Node, put it on the server side, and create static renders of the pages so they're ready to go from the server side and over to the client side. They do that with things like Gatsby. So they'll pull the content from, say, Drupal headlessly, pull it into Gatsby, create a static render, load that directly on the client side, and then, bam, it's ready to go, and all the frameworks don't have to do their pre-rendering work in the browser, so time to first paint and so on gets a bit faster. And now we've just fixed the problem, right? We've got beast mode on, figuring out how to make all of this really work. You end up with a loading journey that looks like this, back on the 1.8-second scale. And this diagram now starts to look a bit familiar, right? From a loading standpoint, we've gone back to doing exactly what we were doing with Drupal, what Drupal had just achieved. We've started to reinvent the wheel. Looking at the pros and cons, they're the same pros and cons we had with traditional Drupal: content management; optimised content delivery; centralised processing, because it's all being processed on the server side by the Gatsby build; and scaling simple content really well. The cons: it's polylithic. It used to be monolithic; now it's polylithic.
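Here's the Gatsby-style build step in miniature, as a sketch: pull content headlessly, render static HTML once at build time, and serve that. The `renderPage` helper and the node shape are assumptions for illustration; a real build would fetch from Drupal over HTTP and write files to disk:

```javascript
// Static site generation in miniature: render every content node to HTML
// once, at build time, so the client receives markup instead of waiting
// on a client-side framework to assemble the page.
function renderPage(node) {
  return `<article><h1>${node.title}</h1><p>${node.body}</p></article>`;
}

function buildSite(nodes) {
  const pages = new Map();
  for (const node of nodes) {
    // A real generator (e.g. Gatsby) writes these to disk for a web
    // server or CDN to serve; here we just collect them in a Map.
    pages.set(`/node/${node.id}`, renderPage(node));
  }
  return pages;
}

// In a real build this content would come from Drupal's API at build time.
const nodes = [
  { id: 1, title: 'Hello', body: 'Static by default.' },
  { id: 2, title: 'World', body: 'Rendered once, served many times.' },
];
const site = buildSite(nodes);
```

Note what this sketch also exposes: every content change means re-running the build over the affected pages, which is exactly the expiry-era problem the next section calls out.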
The complexity of reaching the exact same goal, the exact same end point, has just increased; we're reinventing the wheel and doing everything the same again. And the complexity got exponential: it went from one technology stack to maintain to two, and Node has a tendency to want multiple stacks for different purposes, to do slightly different things, so it can get exponential really quickly. And they haven't solved cache invalidation yet. So they've actually gone backwards, not forwards: they're dealing with expiry again, and Gatsby has a tendency to re-render your entire site when you need to update the cache, so it can be really slow for sites with lots of content. And what about the slow time to first byte? Well, that shifts a little. But yeah, we've started to reinvent the wheel. This picture is maybe a little unfair about what I think they're trying to do right now, and it's worth noting that I'm not trying to hate on Node that much, but I am saying they're both wheels: PHP looks like this because it's had the time to mature, and Node has not had that same time and exposure. It's not to say Node won't solve these same problems, just that right now this is what they look like. So let's have a look at technology maturity. Hopefully everyone has seen an adoption bell curve diagram. At the very beginning you have the early adopters of a technology. The second sector is the early majority of people who adopt it. Then you hit a peak, the late majority join in, you have the laggards at the end, and eventually the technology ends-of-life itself.
Alongside that, you have how developers flow with this as well, and the technology itself: you produce the POCs, it becomes usable, the early adopters join in. The early majority join and start building the frameworks. I think about being with the Drupal community since 2007, and it was really exciting trying to solve all these problems in Drupal 6 and Drupal 7, being part of a community whose spirit really thrived on building everything up. I think we see that in the Node community today as well. Then you hit a feature-complete point. I don't think anything's ever really complete, but it's complete enough to win the day, and Drupal has done that now with enterprise delivery: when you get to that level, you can answer most of the world's problems in what you're trying to do. Once you hit that place, you stabilise, and we've become a pretty stable project now. Then the problems are solved, they become boring, and developers leave because there's nothing interesting to do anymore. If you look at this from a customer's perspective, or from the perspective of an agency that makes its business from project delivery, this space here is the prime time, because it's the time when it takes the least effort to get all the features and requirements met for the customer. It's also the time when things have stabilised, so from a customer standpoint there's the least amount of risk in taking on a project of that size. That's why you see the late majority often being the enterprise people: they're last to the game because they've got the most to lose, and they want the least amount of risk in the project. So if we map out where these technologies sit on this scale, take a guess: where is Node? Where is PHP?
And I thought I'd throw in another one for scale: where's Java in this? I kind of think they're somewhere like this. I think Drupal's in that prime-time spot right now, I think Java's just popped out of it, and I think Node's still building towards a feature-complete place, but it's got a lot of the excitement, a lot of the early majority behind it, trying to make things really work. When you think about this maturity in the technology, it explains a lot about what's happening, and it should hopefully make you think more about what's the right technology to use and when to use it. When I started thinking about when to use technology and when not to, evidently Kenny Rogers came to mind, so I wrote a quick song. It goes like this: you gotta know when to couple, know when to decouple, know when to do front-end dynamism and when not. You gotta count your node modules when you're sitting at the PR merge; there'll be time enough for rendering when the compiling's done. Thank you. So here are some tips for decoupling and making those decisions better in your projects. I really do want to highlight that there are so many good reasons to use decoupled components, so many legitimate reasons, and let's be really honest about it: Drupal and PHP are not a good choice for a lot of the tasks we want to be doing in today's web. So there are a lot of good reasons to decouple; it's about finding the right reasons and implementing them in sensible ways. First: performance is worse in a fully decoupled architecture. There may be people in this room who have learned that the hard way. Really, seriously think about whether it's worth that performance hit; I use that now as a rule of thumb, and I've got some examples I can share with you around that. Second: start coupled, and validate reasons to decouple.
So my position when I work with customers is that we're doing a coupled build, and whenever I encounter a trigger-happy Node.js developer who says, "I've been really, really wanting to use this fully decoupled thing," I question them about what they want to do, try to talk them off the ledge and back into doing something more sensible with server-side rendering, and then we find the bits where decoupling genuinely makes sense. If it's going to be static and you're only ever going to render it once, don't do it decoupled. Polylithic delivery increases operational expenses exponentially and makes the delivery pipeline a lot more complicated as well, so simplify everything by being very selective about how you use it. Clarify editorial requirements up front. I was on a project where a Node.js team and solutions architect came in and decided they were going to go fully decoupled. Eight months into delivery, the head of UX one day said, "you need to let us control layout from the UI." That completely killed their project and brought Drupal back into the picture, so they really had to redesign everything, and a bunch of the Node.js team left because they could not solve a problem that Node.js today still cannot solve. And do not use DIY APIs. I've come across multiple scenarios where people try to optimise the content coming out of Drupal for the app by making it bespoke, but that creates real problems: if you write a custom API into Drupal, there's a good chance you'll get the cache tags wrong, and then you can't do invalidation, you're back to expiry, you have stale content, and that becomes systemic in your front-end app. It becomes a big problem. Just use JSON:API or GraphQL, where that problem is already solved.
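For reference, JSON:API ships in Drupal core (as of 8.7), and a response from it looks like the hand-written sample below; the parsing helper is just illustrative:

```javascript
// A JSON:API document has a top-level `data` array of resource objects,
// each with `type`, `id`, and an `attributes` map. This helper pulls out
// the titles; it's an illustrative function, not part of any library.
function extractTitles(jsonApiBody) {
  return jsonApiBody.data.map((resource) => resource.attributes.title);
}

// Trimmed-down sample of what GET /jsonapi/node/article returns from a
// Drupal site with the core JSON:API module enabled.
const sample = {
  data: [
    { type: 'node--article', id: 'a1b2', attributes: { title: 'First post' } },
    { type: 'node--article', id: 'c3d4', attributes: { title: 'Second post' } },
  ],
};

const titles = extractTitles(sample);
```

The point of the advice stands out here: because this endpoint is generated by Drupal itself, the cache tags on its responses are correct for free, which a bespoke API rarely gets right.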
I want to talk about a couple of examples that have worked well and a couple that haven't. This is ausopen.com. For the last two years Acquia has run the Australian Open on Drupal 8 with the help of some partners, and this is what the architecture looks like. We've got an AWS data center, with Acquia Cloud sitting there hosting the Drupal site. There's an on-premise editorial team who interact with Acquia Cloud and Brightcove to produce content and video. There's a scoring solution, essentially a bunch of Lambdas and caching layers, that pulls real-time sporting data straight off the courts; that feed is called SMT, and it's the solution that provides all those data points. It comes through thick and fast, because on day one you have something like 30 concurrent games of tennis being played, and every point is propagated out through the scoring solution, so the volume of updates is phenomenal. Those updates get pushed out to Ably. The client requests a page from Acquia Cloud, it's cached at Acquia Cloud, and it gets cached at the edge globally. Then when the page loads, it establishes a WebSocket connection to Ably, and Ably pushes scoring data in real time to the visitor. So when you're in front of the site, you get this sort of thing: you've got this card component here showing a game in play, with the tennis ball showing who's serving, the points over here, and the sets. All of that is delivered over that WebSocket data and fully run by the JavaScript front-end part of the application; Drupal doesn't really have anything to do with that card. And then in the big screenshot of the page over here, you can see those cards at the top.
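The real-time piece of those cards boils down to a small bit of client state fed by the socket. Here's a hedged sketch, with a hypothetical message shape (the actual AO payloads aren't public):

```typescript
// Sketch of the progressively decoupled scorecard: Drupal serves the
// cached page shell once, and a small front-end component applies
// real-time score messages pushed over the WebSocket.
// The ScoreUpdate shape below is an assumption for illustration.

interface ScoreUpdate {
  matchId: string;
  player: 1 | 2;   // which player the points belong to
  points: string;  // tennis scoring: "0", "15", "30", "40", "Ad"
  serving: 1 | 2;  // who is serving now
}

type Scoreboard = Map<string, { p1: string; p2: string; serving: 1 | 2 }>;

// Apply one pushed update to the in-memory scoreboard state that the
// card components render from.
function applyUpdate(board: Scoreboard, msg: ScoreUpdate): void {
  const current = board.get(msg.matchId) ?? { p1: "0", p2: "0", serving: msg.serving };
  const next = { ...current, serving: msg.serving };
  if (msg.player === 1) next.p1 = msg.points;
  else next.p2 = msg.points;
  board.set(msg.matchId, next);
}
```

With Ably's JavaScript client, wiring this up would look something like `ably.channels.get("scores").subscribe(msg => applyUpdate(board, msg.data))`; the key architectural point is that Drupal never participates in these updates, so the page itself stays fully cacheable at the edge.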
So there are a number of games in progress there, and all of that is essentially decoupled delivery. Above it is the header of the website, and below it is a piece of news. So they've integrated real-time sporting data with a content experience, and all of that content is just coupled Drupal, delivered through Twig templating and content types and the normal news editorial experience. This is a really good use case of building a progressively decoupled app. We had some really amazing wins with this strategy. One is a 99.9% cache hit rate, which as a back-end guy is like the holy grail of caching, and 100% uptime. That is significant considering this is a global sporting event with some of the highest traffic loads you get on that type of thing, and it stood up really, really well because of that caching strategy: most of the traffic is just handed off at the edge. That was a really smart architectural decision, and it meant not having to reinvent the things that Drupal does really well out of the box. Now a couple of examples that haven't gone so well. We worked with SGX, a stock exchange. As you can imagine, a stock exchange has a lot of stock data coming through and a lot of graphs to update in real time, and that was a big reason why they decided to go with a fully decoupled solution. On top of that, they had some really sharp JavaScript guys who decided that frameworks like React and Vue weren't good enough for them, so they decided to build one themselves. They used GraphQL out of Drupal 8, built on Drupal Lightning, delivering to multiple users. At the time I produced this slide they were about to launch; they're actually live now.
But they built an entirely fully decoupled site, and they ran into a lot of challenges around just general content delivery: the menu system, the header and footer templating, all that sort of stuff. This is all stuff they wouldn't have had to deal with if they had kept everything coupled except the pieces with the graphs. Systemically, they had challenges with delivery because they were led by people who wanted to do everything in a very decoupled way. I say this a lot: when you're a hammer, everything looks like a nail. If you're a Node.js developer, you want to solve every problem with Node, and that can be a really difficult thing to navigate if you don't know better, faster ways of doing things. This was their architecture: they used Drupal as the way for editors to manage content, with a fully decoupled stack in front, so the end user would never see Drupal per se; it just served up headless content, as you'd expect on a headless site. I think they're now working on the Gatsby thing too, so that's going to be another fun thing for them. Oh wait, the last one was Melco. They're essentially a casino operator in Macau, and similarly, the agency there decided they wanted to do a fully decoupled headless build. They put an API gateway in front of Drupal, backed by Lambdas. Those Lambdas then did something magical, I couldn't tell you what, before making a back-end headless request to Drupal. In doing so they completely removed all of Drupal's caching strategy from the front end. So they would end up with performance issues where Drupal was absolutely fine but the API gateway was in trouble. They were also doing multilingual translations, and you'd switch languages and the next minute the page would 404.
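The core of that gateway problem is that it swallowed Drupal's caching metadata. A proxy in front of Drupal should at minimum pass the cache-relevant headers through untouched; here's a minimal sketch, with an illustrative header list:

```typescript
// Sketch: an API gateway in front of Drupal should forward the
// upstream caching headers instead of dropping them, so the CDN and
// purge tooling keep working. The header list is illustrative:
// "Surrogate-Key" is a CDN-specific assumption, and Drupal only emits
// "X-Drupal-Cache-Tags" when debug cache headers are enabled.

const PASS_THROUGH = ["cache-control", "x-drupal-cache-tags", "surrogate-key", "vary"];

// Given the upstream (Drupal) response headers, return only the
// cache-relevant ones to copy onto the gateway's response.
function forwardCacheHeaders(upstream: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const name of PASS_THROUGH) {
    if (name in upstream) out[name] = upstream[name];
  }
  return out;
}
```

This is a sketch, not a drop-in fix for any particular gateway; the point is that whatever the Lambdas do in between, the caching contract between Drupal and the edge has to survive the hop.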
And that was nothing to do with Drupal, because Drupal wasn't handling the page renders. It took a long time to get cohesion on something that, again, is just stock standard. And the worst part about their decision to go fully decoupled is that it's a static site: there was literally nothing that needed a pageless refresh or any kind of real-time data coming into it. It was purely that those were the resources the agency had, and so that's how they were going to deliver the site, and it led to buggy behavior. So I'm going to wrap it up there. I know I took pretty much all the time, but if there are some questions, we've got some time for those, and that would be really cool. Tonight we've got an event happening at seven o'clock down at the Glass House, so if you'd like to come along, come have a bit more of a chat; that's the Acquia event. But thank you very much for listening to my talk, and maybe there are some people who have thoughts or questions on my observations.

Audience: We're actually running Vue.js, rendering on the server side, so we're running headless Drupal at the moment. We do have some of the issues you're talking about. But in terms of cache invalidation, we're actually using the Purge cache tags out of Drupal, grabbing those on the front end and sending those as headers, and then basically using the Purge module to purge front-end and back-end caches. So you can actually do cache invalidation on headless using tools straight out of Drupal.

Josh: Yes, right, you can. And JSON:API and GraphQL both support cache tags, so you've got invalidation there. The problem is that that works fine only as long as you're rendering on the front end.
The moment you try to do something like Gatsby, where you bring the rendering back to the server side on a Node stack, you start serving static content from the server side to the browser with no cache tag data, and so you can no longer invalidate, or even rebuild, when a new content event happens.

Audience: I don't know about Gatsby, but the way we're doing it with Nuxt, we are actually invalidating the cache on the server side as well as the client side.

Josh: What are you using to do your server-side renders?

Audience: Nuxt. It's the Vue.js equivalent of Next, which is from React. It's quite a different concept to Gatsby, which I think just renders static HTML and then serves that HTML, whereas Nuxt and Next do the same rendering server side and client side; it's just the very first request for a full page that delivers HTML. So it's a very different concept to Gatsby.

Josh: So wait, is it statically stored on the server side, or is it still dynamically rendered on the server side?

Audience: No, it's still dynamically rendered, and then we're storing that essentially in Varnish, which we've got running as a CDN.

Josh: I won't comment on that, but yeah. So a Varnish hit would usually yield, say, a 30 millisecond return. What does a Nuxt page request on the server side return? What time frames are you waiting for?

Audience: I couldn't tell you off the top of my head. At the moment it's a lot slower than I want it to be. Out of Varnish, things are looking really good, but the initial uncached request is probably a lot slower than I want it to be right now. We've got a list of things we're going to do to improve that. The good thing is we can cache a lot on the front end and purge on node save, essentially with the Purge module, so our cache hit rates are going to be huge.

Josh: Cool. Yeah. All right. Thank you very much.
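To make the Nuxt-and-Varnish approach from that exchange concrete: during server-side render, collect the cache tags from every Drupal response used to build the page, emit them as one header on the SSR response, and ban by tag on node save. A sketch, assuming a conventional `X-Cache-Tags` header name and Varnish-style ban expressions (both are assumptions; adjust to your VCL):

```typescript
// Sketch of tag-based invalidation for an SSR front end cached in
// Varnish. During render, merge the tag headers from every Drupal
// API response into one header on the SSR page; on node save, ban
// cached objects matching the invalidated tag.

// Merge space-separated tag headers into one deduplicated header value.
function mergeTagHeaders(tagHeaders: string[]): string {
  const tags = new Set<string>();
  for (const header of tagHeaders) {
    for (const tag of header.split(" ").filter(Boolean)) tags.add(tag);
  }
  return [...tags].sort().join(" ");
}

// Build a Varnish ban expression matching one invalidated tag,
// assuming the SSR response carried the merged tags in X-Cache-Tags.
// (Using obj.http.* keeps the ban lurker-friendly.)
function banExpression(tag: string): string {
  return `obj.http.X-Cache-Tags ~ "(^| )${tag}( |$)"`;
}
```

On the Varnish side you'd issue something like `ban(...)` with that expression from VCL or the management interface when Purge fires on node save; the exact wiring depends on your Purge configuration, but this is the mechanism that keeps the "huge cache hit rate" honest, because you invalidate instead of waiting for expiry.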