That's a lot of people. Thank you for coming. I really, really appreciate it. If we haven't met yet, my name is Chad Carlson. I'm hanging around the Platform.sh booth downstairs. Can you hear me all OK in that back row? I've been told that I can mumble, and I'm wearing a mask, so I don't know, raise a hand and I'll be happy to project and enunciate a little more. This is my second in-person DrupalCon. I did one of the online ones during COVID. Really excited to be back. Today I wanted to talk a little bit about decoupling Drupal. Now, as a preface, I started at Platform.sh about four and a half years ago. My background was not in web development or Drupal, obviously. And in that time, decoupling has been one of those projects that, as a member of the DevRel team, I got assigned to and got to explore for a little bit. So my hope for this talk is not to give you the impression, or have your main takeaway be, that I'm an expert in decoupling. These are just some of the interesting things that I've come across in the last few years. I've shared some of them at DrupalCon in the past, and hopefully some of it will be useful for you all here today. This is the structure I have for my talk: first, the basic language of this decoupled conversation. And again, I will admit that because that experimentation has happened during my time at Platform.sh, that's been my playground, so there will be some Platform.sh stuff. I swear it's not an ad; it's just a little context for how what you're going to see works. Then a little overview of some differences that I've seen in the three main frameworks that I'm going to talk about today. What are the features that distinguish them? And we'll play around a little bit with a live site with some of those differences. Then I'm going to go through the three frameworks that I've spent the most time playing with. Maybe Remix is an exception, as that's probably the newest experiment that our team has been working on, but otherwise Gatsby and Next.js.
And then, if we end up having some time playing with these live sites, maybe a little outro on: should you even do this? So yeah, let's go ahead and get started. As far as basics, if anybody has just walked into this talk because they've heard the word decoupled before and they're curious what all the buzz has been about, the basic idea is that we have a backend Drupal site, and we want to leverage its content model to serve an API that's ingested by some specialized front-end framework, which is going to be some JavaScript something. We can leverage the built-in REST API, but in most cases we're going to use the JSON:API module or a GraphQL module to do that. And the why behind all of that is often some variation of: these front-end frameworks are really specialized for front-end tasks, and with the different deployment targets that cater to those specialized frameworks, it's very easy to spin up a microsite to ingest that content, let it serve its purpose for a short amount of time, and once it's passed its lifespan, destroy it and move on to the next campaign. Or similarly, use the same backend API to serve many, many different sites that look slightly different depending on what their individual purpose may be. And that's really great for doing all that variation. It's also great to be able to separate content teams and development teams into their own separate workflows, and to be able to hire experts in these JavaScript frameworks who can really focus on this microsite pipeline instead of anything to do with the Drupal backend. Alongside that, you get features like dynamic previews of in-progress content as a part of that content pipeline.
Like I said, campaign-specific microsites. And depending on the technique that you use for these front ends, say we're going full static: if for whatever reason that backend API were to go down, our actual domains are not at risk, because they're just pure static files being served somewhere, or they're cached, and we can figure out the backend problem while that content does not stop being served. This is just a little graphic of exactly what I just described. And I only include this separate view to recognize that implied within this is that we expect two deployment targets: one for these front-end sites and one for this backend Drupal API. And like I said, two teams, two sets of responsibilities, and two workflows that have to be kept in sync. And sure, we can use environment variables to tell a particular front end where this data is located, but it's hard-coded somewhere most likely. What I'm going to show you today works slightly differently on our platform, and it does take place in a single deployment target. So hopefully that's interesting to you. It's been the playground I've explored this in, but I guess another preface is that you may be more familiar with decoupled in this sort of two-deployment-target model. That line looks bad. If you haven't seen it, this is how a Drupal site gets configured on our platform. You can specify a type for PHP, if you can read that, in blue. For this, it's PHP 8.1. We use a built-in Composer build, some Drush commands for the deploy hook, and then we build out this cluster of containers to serve that backend with this relationships key. It corresponds to another file that looks like this. So we say, I need MariaDB and I need Redis, and that all gets built together, along with a routes file, when committed to the repo, to give us something that looks like this on our production site.
And when we take that, what I want to show is that we can imagine that the production environment up on top, for main, is exactly what I just showed for that production cluster. We're going to make a few different variations that either add some functionality or add a separate front end to that production Drupal app, just to get an idea of how we bounce back and forth. But if you get the basics of what was being shown here: there's a build, there's a deploy, there's a connection of containers. The only other thing to keep in mind is that this file exists to try and make a build image that can be reused across environments, so we can do experiments like this or promote new features. And the only other thing, as we start adding front ends to this dynamic, going forward with a single deployment target instead of this implication of two, is that we're going to start adding additional containers for the front end that run alongside the Drupal backend, and they're going to end up being deployed to the same environment. But when that happens, they run in parallel. So we kind of have build, build, deploy, deploy, and then subsequent steps after that. So we have to get a little bit creative on the timing of how content gets pulled, but if we can respect a few of these things, we can get out of the domain of hard-coding a backend's URL for the API and instead start pulling things from the specifics of the environment. And this is an example of sort of what that looks like. I have a routes file that says I want all requests to go to Drupal. So that gives me the ability, within some yet-unnamed JavaScript container, to take this built-in routes object, which gets defined from this super simple configuration at the top, and look for which container is Drupal and what's the route associated with it.
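That lookup can be sketched roughly like this. On Platform.sh, the routes object is exposed to every container as the `PLATFORM_ROUTES` environment variable, a base64-encoded JSON object keyed by resolved URL; the app name `drupal` and the URLs below are assumptions for illustration.

```javascript
// Find the URL of the Drupal backend from inside another container on the
// same environment, by decoding the routes object and matching the upstream
// app name. "drupal" is an assumed app name from this example project.
function findBackendUrl(encodedRoutes, upstreamName = "drupal") {
  const routes = JSON.parse(
    Buffer.from(encodedRoutes, "base64").toString("utf8")
  );
  for (const [url, route] of Object.entries(routes)) {
    // Upstreams may be written as "drupal" or "drupal:http"; compare the
    // app-name part only.
    if (
      route.type === "upstream" &&
      route.upstream.split(":")[0] === upstreamName
    ) {
      return url;
    }
  }
  return null;
}

// In a real container this would be called as:
// const backend = findBackendUrl(process.env.PLATFORM_ROUTES);
```

Because the variable is generated per environment, every child environment automatically resolves to its own copy of the backend rather than a hard-coded production URL.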
And once I do that, for any framework I have, I can do the same sort of lift to find out what the current backend location is, and the same behavior works across environments. All right, I think that's all the platform preface. So where do we get into the differences in the kinds of front ends that I've encountered, at least? Like I said, the most common choice is first installing or enabling this JSON:API module. It's built in; you can enable it with Drush or through the console. Most of our work does it through Drush, because our team is really focused on giving people example code bases they can deploy one of these decoupled configurations with, and so we'll have a script in the backend, on the first deployment, that enables this module. From here I can place a request on node articles and get a list of everything that fulfills that node type inside this data field, giving me a long list of all the articles, and we'll see what the examples look like in a second. From there I can use those individual objects within that array to follow up on the ID and get more information from another endpoint that looks like this, in order to narrow down on that specific article's content and other metadata associated with it. So I can leverage that module by itself to start making individual pages in a front end, or I can use something like GraphQL. Here it does require an additional installation, of the Drupal GraphQL module or, with some of the frameworks we'll look at, another module specific to that framework that has it sort of built in. And what this variation does, and it is a requirement for some front-end frameworks, is give you a little bit more flexibility to provide a combination of requests in a single request.
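The two-step JSON:API flow described here, list a content type, then follow up on an individual entity, can be sketched as a couple of small helpers. The base URL is a placeholder; the `/jsonapi/node/article` path shape is Drupal's standard JSON:API routing.

```javascript
// Assumed placeholder for wherever the backend URL comes from (e.g. the
// routes lookup shown earlier).
const DRUPAL_BASE = "https://api.example.com";

// e.g. /jsonapi/node/article lists every article inside a `data` array.
function collectionUrl(entityType, bundle) {
  return `${DRUPAL_BASE}/jsonapi/${entityType}/${bundle}`;
}

// e.g. /jsonapi/node/article/<uuid> narrows down to a single entity and
// its attributes and relationships.
function resourceUrl(entityType, bundle, uuid) {
  return `${DRUPAL_BASE}/jsonapi/${entityType}/${bundle}/${uuid}`;
}

// Pull the UUIDs out of a JSON:API collection response, so each one can
// be followed up on individually.
function extractIds(collectionResponse) {
  return collectionResponse.data.map((resource) => resource.id);
}
```

A front end would fetch `collectionUrl("node", "article")` to build a list page, then `resourceUrl(...)` per UUID for the detail pages.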
So here I'm placing a query that looks for a node, the title associated with it, and the associated node article fields that give me the things I need to know about the body and the image. I can collapse that into featured articles in a similar way. I can also pass things like variables and parameters, which can do the same thing as that subsequent request in JSON:API, but I can also use this parenthetical on the side of the query to do things like filtering. Like I said, either filtering out things that contain a featured tag, or that otherwise have some other attribute that I'm interested in showing on a particular list page. So this is an environment that I have on the exact project I just showed a screenshot of, where I have Drupal in production, and the only difference in this environment is that I've completely copied the production site and just added that GraphQL module and enabled it. What that's going to do is give me an explorer here inside of the admin dashboard, and I can run this query against my articles. Again, this is a fairly simple one: I just want to know the number of nodes, and for those individual entities, what the body looks like, the title, and what I can pull away from the images that have been uploaded to them. I can also adapt this request to include the alt for the image, and we'll see that I now get that for every article. Let me try to remember what the other ones are. Yeah, I can include the sub-path of every article itself alongside the title. And here it's going to fulfill the other things that I have enabled, which is enabling and installing Pathauto, which gives it the pattern of a slug of the title under this subdirectory of blog. And so if I need that data I can include it, but otherwise I can reduce the amount of data that I'm pulling on a single request, reduce some bandwidth, and really just take the bare minimum of what I need from this request to get all articles.
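A query along the lines of the one in the demo might look like the sketch below. The field and argument names (`nodeArticles`, `first`, `body`, `image`) are assumptions for illustration; the real schema depends on how the GraphQL module is configured.

```javascript
// An illustrative GraphQL query asking only for what a list page needs:
// titles, paths, bodies, and image data for the first N articles.
const ARTICLES_QUERY = `
  query Articles($first: Int!) {
    nodeArticles(first: $first) {
      nodes {
        title
        path
        body { processed }
        image { url alt }
      }
    }
  }
`;

// Build the JSON body a client would POST to the backend's /graphql
// endpoint: a query string plus a variables object.
function buildGraphqlBody(query, variables) {
  return JSON.stringify({ query, variables });
}
```

The variables object is where the parenthetical arguments mentioned above get their values, so the same query can be reused for filtered or paginated list pages.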
And that's sort of what it looks like inside of the explorer. Some frameworks come with this same explorer built into them, so you can play around with them in the same way. And yeah, those are the major differences that we'll see in the frameworks I'm going to show. The other difference is how the site is rendered. So if we are, say, looking for a completely stable front end that can withstand the backend totally falling out behind it, we might look at the classic static site generator, where we're just getting pure HTML as built output, and it can withstand the API being gone because it needs a rebuild to take in new content anyway. But if we want something a little more dynamic, where we have a content pipeline, people want previews, and we want content to be updated on the fly on that front-end site, we're going to move toward the right here. Deferred static generation starts from the same idea as a static site generator, and we'll see it inside of things like Gatsby, where the skeleton of the site is effectively built, but the first build of an individual page won't occur until the first request. Server-side rendering is a little more dynamic: it's going to render for the individual request without that specific build step, and that allows for things like revalidating the backend content and updating that individual page. And then reactive site generation, which is sort of this new era of Gatsby, I suppose, is going to do the same thing as server-side rendering, rendering the individual front-end page on request, but it adds things like an on-demand revalidation hook: any time a piece of content is changed, every single page on that front-end site remembers what its data dependencies are and listens for that hook to re-grab all of that data, rebuilding the page by the time the next user request comes through.
And so the options that you have here sort of fall within these categories: does the framework depend on GraphQL in order to do some of these more dynamic things, and then what are the needs of your organization and content pipeline? Do we want something more robust to failure, or do we want something that's very, very dynamic to constant updates in that pipeline? So the first one that I wanted to go through is my newest experiment: Remix. It's currently at version 1.16. It has an HTTP handler, server, and browser framework built in, and compared to some of these other ones it's a lot more lightweight as far as builds. I'm very, very curious to continue playing with it, because it seems like it's a lot faster. In most of the examples I've come across, there's really a reliance on JSON:API. Not that you couldn't do GraphQL, but that's what it's really working with. And it falls more within this server-side rendering category. You can leverage other existing web frameworks like Express if you'd like, with adapters, and the same goes with React view components: you import them and move on from there. And this is sort of what the project structure looks like. We manage our dependencies here in package.json, but otherwise we're going to define individual pages and list pages within this routes subdirectory, and handle data within this models directory and in our main client and server definitions. So within models, I think, is my first example here: this is our server config for handling nodes. In this case I have a generic get-nodes-of-type function, which is going to allow me to get nodes of type article, and it's going to leverage an API endpoint environment variable, in our case much like the environment file I showed you, from the current Drupal environment, and then provide us with this list of nodes for actual page generation.
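A hedged sketch of that get-nodes-of-type helper: the real example lives in a TypeScript `node.server.ts` file, and the names here are assumptions. The fetch implementation is injected so the helper stays testable without a live backend.

```javascript
// Fetch a JSON:API collection of nodes of a given type from the backend.
// `baseUrl` would come from the environment (the routes lookup shown
// earlier); `fetchImpl` defaults to the global fetch in real use.
async function getNodesOfType(type, { baseUrl, fetchImpl = fetch } = {}) {
  const response = await fetchImpl(`${baseUrl}/jsonapi/node/${type}`);
  const json = await response.json();
  return json.data; // the JSON:API collection of nodes
}
```

A Remix route's loader would then call `getNodesOfType("article", { baseUrl })` and hand the result to the list page for rendering.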
What it does with that data is use this index.tsx to find the list page based off of that request, and then it leverages two things to do paths. One: dots equal new subdirectories. And then, if we have anything that has to be dynamically created, like the node type of article, it does so with the dollar sign, like any other variable. So here our route would be node/article, and then the ID of that article would be found at that final path, based off of this structure. And if you do have a chance, I think I'll share this somewhere later; this is sort of what this example is based off of, our Drupal Remix example. So yeah, node.server.ts gets nodes of a type, and then individual nodes follow up with the same environment variable plus the UUID. Then, on the platform side, if we already have Drupal building on one side of the environment, the Remix app is going to look something like this. We have a type of Node 18. This just makes sure that it has enough memory for that build, which becomes a lot more relevant with something like Gatsby. Then we're going to install dependencies, and if you go to the bottom, we're going to use PM2 to start, so that we can have some of the revalidation features of updating content on the front end. And then, interestingly, we're going to have this really delayed step of actually building the site. Like I was saying before, everything runs in parallel on our platform, so for a particular environment we're actually going to install dependencies, and the start command is going to do an initial, sort of zero-content build for this Remix front end, and then it's going to do nothing on the Remix side. But then Drupal is going to go through its deploy phases before finally following up, getting into the server that's already started, and doing one final rebuild to get the current state of the data on the backend. I'm not going to skip right ahead to Gatsby; I'm going to show you what that looks like.
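The dots-and-dollar-signs convention can be sketched as a tiny mapping function. This is simplified for illustration: Remix's real convention has more rules (index routes, pathless layout routes, escaping) that are ignored here.

```javascript
// Map a Remix route filename to the URL pattern it serves:
// dots become path separators, and a leading "$" marks a dynamic segment.
function routeFileToUrlPattern(filename) {
  const withoutExt = filename.replace(/\.tsx?$/, "");
  const segments = withoutExt.split(".").map((segment) =>
    segment.startsWith("$") ? `:${segment.slice(1)}` : segment
  );
  return "/" + segments.join("/");
}
```

So a file like `node.article.$nodeId.tsx` serves `/node/article/:nodeId`, which is exactly the structure described above: the node/article root with the article's ID as the final path segment.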
So inside of our project here, I have, as a child of prod, and then in this set-aside area I have for front-end experiments that's also a clone of prod, this Remix run environment. And so for here, I have a cluster of containers where this is a copy of production, and what I've added is this Remix front end. And with that come two URLs: the Drupal backend, maybe let that load, and then the Remix front end here. Well, this is going to be fun. Here's our data, which in this case is just individual images pulled from NASA's image of the day and their descriptions. And for this demo I'm going to have a path at node/article, and then finally the article's UUID on this path. And what this is pulling from is the current environment. It's not pulling from production data; it's pulling from this current environment, which then allows me to do things like go into this individual article and test the revalidation. So here I'm going to type an update into the title, and I can save, and that's gone ahead and updated it for the current environment. And I like that, because I may decide that I don't need that kind of dynamic behavior, but because I've set it aside in this environment, I can experiment with: okay, how much authentication do I need to put on an update like that? Who has access to making that update? And would I rather use something else that sets aside previews of non-published changes? But that will give me that sort of behavior for Remix. Is there anything else that I can show you on here? The only other configuration that I've done differently is I've set up Drupal to be served from this API subdomain and Remix to be served from the root domain. And I have to remind myself of the JSON file.
And so in this case, I actually did not need GraphQL; it's just there because I've inherited it from the parent environment. Really, the only thing that was required to get this going was Pathauto, and even that wasn't required, because I could have built it up an entirely different way. So Remix is simply leveraging JSON:API in order to build the front end once it has the location of the current backend, and then revalidation just sort of works out of the box. Back to the slides, and Gatsby. Gatsby is probably the first front-end framework I really experimented with in this Drupal context. It is currently at version five, which brings some of those reactive site generation features that I described at the beginning. And it does give you a lot of flexibility to choose between a lot of these variations. In this example, I think I just have a purely static front end. But if you aren't familiar, this is the structure of a Gatsby site, and the important components for Drupal in an example like this are similar to the previous one. I'm going to define pages and paths with the sections at the bottom right here. I'm going to set aside a template for an individual article on the Drupal backend. And I'm going to set up list pages for my homepage and for the list of articles, and build all of this together with the data source using these two config files at the top. These are all sort of relevant, but those two are the most important ones: gatsby-config and gatsby-node. Everything in Gatsby is built with this goal of consuming many, many data sources and putting them into a single GraphQL data layer, as they call it. All of these sources are usually compiled together with source plugins, and this is the one for Drupal. Here I give it an environment variable for the API endpoint. I can add additional parameters for authentication, but there's nothing stopping me from adding a raw GraphQL or JSON data source pulling from a flat JSON file inside of my file system.
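A minimal gatsby-config sketch along those lines: `gatsby-source-drupal` is the real plugin name, while the `DRUPAL_BASE_URL` variable and the fallback URL are assumptions standing in for however the backend location is resolved for the current environment.

```javascript
// gatsby-config.js: wire the Drupal backend into Gatsby's GraphQL data
// layer via the source plugin.
const config = {
  plugins: [
    {
      resolve: "gatsby-source-drupal",
      options: {
        // The backend URL for the current environment.
        baseUrl: process.env.DRUPAL_BASE_URL || "https://api.example.com",
        // basicAuth or other options could go here if the JSON:API
        // endpoints require authentication.
      },
    },
    // Nothing stops you from adding more sources (filesystem JSON files,
    // other APIs) into the same data layer alongside Drupal.
  ],
};

module.exports = config;
```

Once sourced, the Drupal content sits in the same data layer as every other source and is queried with GraphQL regardless of where it came from.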
Inside of this config, I can actually switch over to show you what the full example looks like. So here I'm going to load a few plugins that help me manage my images and their sizes, same with breakpoints and some basic defaults, colors and whatnot. And then there's that gatsby-source-drupal plugin here. And, let me delete this, like I was saying, there's no reason that you couldn't add additional sources for static files or for multiple different APIs that you want to build into the final site. But this is where you would do it, inside of gatsby-config. And so with the data layer defined, and with our individual templates set aside in pages, templates, and components, we then move to this gatsby-node file, which is going to use the createPages method to actually take our data layer and place a similar GraphQL request against it, like I showed before on the Drupal side. So in this case, I want to create an individual page for every single node of type article, with these attributes associated with it, to build this front-end Gatsby site. And here is what that final exported function looks like. I'm going to query all node articles to create individual pages that have a final path defined by my Pathauto alias, which again is blog, then the slug of the title. I'm then going to resolve that final page using the article template, which exists right here. And I think within that template I need to leverage the article ID at some point. Let me look. Okay, so in the creation of the individual page, I place a follow-up request using the article ID to then get more information about the individual image, because I don't have the image on the list page. Or I don't have the alt on the list page. But whatever the case, I follow up with that ID, which I'm passing through inside the definition of those individual nodes right here. So that gets passed through as context for that individual article and then picked back up right here in the GraphQL query.
Then I'm going to use the basic React and Gatsby components to build up an individual article page. The deploy for Gatsby on our platform will look similar to Remix; they're all going to look fairly similar. Again, I'm going to build dependencies at the same time, run an initial build for the current environment, and serve that content for that environment, and then the deploy steps with Drupal will do a final rebuild on the Gatsby side when the deployment is finished for Drupal. If we go back to this project here, if we again get our bearings, this is just production Drupal, but it's been split up into this grouping of experiments to get an environment here that is my Drupal backend and Gatsby front end. So here, Drupal is not going to be all that interesting in this theme, but this is the consequence of that config. So here I have what I define in my index file, my articles file at the articles path, and then the individual article here for this page, using those components and the main article template. But what I think I can show you here is the exporting of the variable that I showed briefly on that slide, which is going to pull out the routes for this current environment, decode that variable to look for Drupal, and output the key. And so any child environment that I make of this Gatsby site is going to use this same file that gets sourced, but it's going to get the current version of it for that environment. And in this case, I don't have any authentication hooked up, so this is the extent of what I need to set it up. And it gets built for the current environment here, but then gets used on that initial and then post-deploy build. And Remix, which I was talking about before, has the exact same file in it. Actually, I'll go to this view. And so in this case, this child environment is going to use the same file, but instead get the URL for its own current backend.
And so that'll be the case for any one of these environments and their children. And then we have Next.js. This is the third framework that we've explored inside of our team. It's currently at major version 13. In this case, a lot of what I've played around with has leveraged the Next Drupal module and a lot of the demos that have come out of Chapter Three, and the demo I'm going to show is their recommended starter. Mostly it's JSON:API, and then recently some GraphQL. And similar to Remix, it's going to be some SSR with revalidation available. And the starter out of the box, again, I don't know where this gets linked, but if you go to the Chapter Three Next Drupal website, their recommended starter comes with revalidation packaged in, so that when you change things on the backend, like I showed with Remix, it will do the same for the front end. As far as the structure of this kind of project, similar to what we've seen before, I have a shared components directory, and I'm going to leverage this lib directory for how I want to connect with Drupal, which I think I have on here. The environment variable is written differently in its documentation, but it's the same basic concept: I'm going to pull the backend URL for the Drupal site, get it for the current environment, and leverage the DrupalClient. And, if you want, set up a preview secret for generating on-demand previews of content changes. Inside of the structure, that was this one; the next.config file is this one. Here there's a separate image domain, which is going to be of this format, if I can get back. So this is the current backend for the Next.js site, and what you'll see inside the documentation is that this is the same environment variable from before that actually hooks up to Drupal, and then this image domain is just that value with the front part removed, which gets exported to the environment here.
All right. And then, similar to both of the other frameworks, I have a template that defines a root path here at the index page to show my list of articles, and then, for dealing with individual pages, this is the placeholder that is based on, again, leveraging Pathauto to make individual pages on the slug of the title. So over here, the index page grabs static props from this DrupalClient, filters for which ones are published or not, gets those basic fields from JSON:API, and then gives them a sort. Not much different here for Next.js, other than, again, it's going to use PM2 as a process manager to allow it to respond a little more quickly to a revalidation request on the front end, and otherwise it's exactly the same as above; actually, this is just copy-paste. Install dependencies in the build, start with an initial build and serve the front end, then deploy and rebuild the front end. If we skip back over to the environment itself, we close this and go to the front end. So this is what the starter, if you go through those docs, will look like, and it has the exact same content that we had in production, NASA images of the day, using those templates to make individual pages at what we have defined with Pathauto here, restricted to this current environment, and any of its children, until we merge. If you go through that documentation, it sets up authentication sort of from the outset, which is really an interesting little problem on our platform, because what you need to do is, I think I have it in this slide. There you go. All right, so it assumes that you're going to set up the Drupal backend and enable Next, JSON:API, Serialization, and a few other modules. Then you need to define a role and a user for what is going to pull content and represent this front-end Next.js site, and then define an OAuth consumer that is associated with that user.
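A hedged sketch of that index page's data fetch, roughly in the shape of the Chapter Three starter: the query parameters are standard JSON:API syntax, but the exact field list is an assumption, and `buildArticleParams` is a helper invented here so the query shape can be checked on its own.

```javascript
// Build the JSON:API query parameters for the article list page:
// published only, a limited field set, newest first.
function buildArticleParams() {
  return {
    "filter[status]": 1, // only published articles
    "fields[node--article]": "title,path,body,created",
    sort: "-created", // newest first
  };
}

// In the real page this would be wired up something like:
//
// import { DrupalClient } from "next-drupal";
// const drupal = new DrupalClient(process.env.NEXT_PUBLIC_DRUPAL_BASE_URL);
//
// export async function getStaticProps() {
//   const nodes = await drupal.getResourceCollection("node--article", {
//     params: buildArticleParams(),
//   });
//   return { props: { nodes } };
// }
```

Keeping the params in one helper means the list page and any follow-up pages can share the same published-only filter instead of repeating it.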
Then register a front-end Next.js site in the module config, and then find a way to share the credentials with the front end. So this would just be on the first deployment. Like I said, with our team, if I want to build an example for how to do Next Drupal, I want somebody to go and click a button and have all of this just happen. And that's fine; we can put in a lot of built-in code that goes through all of these steps on that initial deployment when setting up production. But we also have this sort of environment situation to deal with. If I create a child of this Next.js environment, like I have with Remix, it's no longer going to be pulling from, or I don't want it to pull from, that parent backend. I want it to pull from the current backend. And so if you take a look, I'll have a QR code at the end so you can play around with it. During the deploy phase for Drupal, we've split up these steps into a couple of scripts that install Drupal itself, all the while keeping track of how far along we've made it in this process: enabling Next and JSON:API, creating the role and user, Pathauto, making all that dummy content and nodes from NASA's picture of the day, and at the end, placing something accessible that says we've already done the initial installation of the project. But then, any time we start an environment, whether that's from the outset or every time we create a branch, we need to reconfigure all of these OAuth steps for authentication that I've shown here. And the way we've done that is it'll reset the configuration on a new branch, go through the steps I was just talking about, configuring previews and a consumer and user and registering that Next.js front-end site, and then, any time a new child branch is detected, it will redo all that authentication config for children of Next.js. And ideally, when that is all done correctly, this is another project I have where Next is the parent.
I should be able to isolate previews for this environment. Ignore this glaring message here. So in this case, I'm not going to revalidate anything; I just want a preview of the Next.js site for the current environment, which I can get here. But if I attempt to access that same site as a regular user, I shouldn't see that update, because it's a preview token that gets passed for this current environment, which I have access to as an admin as part of my content path. But otherwise, the experience doesn't change for any of our users. I think that this is, yeah, the extent of what I wanted to show with these three separate frameworks. Like I mentioned in the beginning, a lot of people will do this because they're looking for a separation of responsibilities between front-end and back-end teams, and a dedicated content cycle. And definitely something I've talked about a lot today with people is being able to spin up quick, throwaway, individual front-end sites, this sort of microsite-per-campaign approach: deploy somewhere cheaply, have it for its lifespan, and then get rid of it when it's done. And that's a pretty interesting model. It's different than what I've shown here. But should you do this at all? Obviously, like most things, it depends. Your API may not warrant separating like that. You may not have the front-end talent to sustain individual microsites churning through like this all the time. And the one that glares out to me, which again is definitely colored by the playground I've grown up in, is that maintaining two workflows is a lot. It adds a lot of additional complexity in trying to manage secrets and trying to keep control of who has access to what at any time. So should you do it? I think it's a pretty interesting thing to play around with, but that's my job, to do that. But yeah, I think that that is what I have for today. I might be a little bit early. I'm happy to stick around for any questions or to talk about what I've presented here today.
If you're interested in deploying any of the examples that I've shown, you can scan this QR code to the org that we maintain this code in. But otherwise, again, I really appreciate you all coming out. Thank you very much. Thank you.