Awesome. I think it's time. Before we get started, I wanted to know if any of you got to attend the Decoupled Summit yesterday? Okay, I see some hands. Well, we had a panel discussion on these topics. I'm going to touch on some different things. I'm going to go a little bit more in depth on some topics. Some others I will not, because that panel was an hour and 45 minutes, and we're not going to go that long again. Hello. Welcome. This is the first DrupalCon presentation slot. I love this slot. There's always a lot of energy after the Driesnote. I will ride on that wave and hopefully I will keep that going. I'm Mateu. I've been doing decoupled projects for a while now; I think I started in 2013 and I've been doing them since. I like them, and I'm lucky enough that I was able to get on those projects, and thanks to Lullabot as well, I've been able to contribute a lot of code to the community to help build projects like JSON API or the solutions that we're going to talk about today. They are all backed by Drupal modules that I was able to write because of two things: because Lullabot partially sponsors my time, and because we are a distributed company and I don't have to sit on the bus to get to work, so I can do this stuff. And I guess the community benefits from that. So if you want to show appreciation for that, maybe consider dropping by the booth and talking to us about your next project. That would be awesome. Apart from that, I'm an API-First Initiative coordinator along with Wim Leers. You may have heard about the initiative in the keynote or by reading on Drupal.org. We are trying to make all these cool things that we saw earlier happen. So, yeah, please come and contribute. Apart from that, last DrupalCon we talked about getting a new Drupal distribution with decoupled in mind, and so I built Contenta along with Daniel Wehner and some other contributors to the project. So, all right, with that out of the way, the actual content of the presentation.
So we're going to talk about five different hard problems here. There are more, and yesterday's panel was big proof of that; we're going to mention some of those at the end. But these are the ones that I wanted to talk about, because these are the ones that have been taking my time this last year, and I've been writing some Drupal modules to ease those problems. So we're going to be talking about performance; about schemas, why they are important and how they are massively difficult to generate; routing; and then editorial layouts and authentication at the end. So again, as usual, there is a Drupal module for that, and hopefully we can keep that going. The first hard problem is performance, right? When you start a decoupled project, you may not see it right away. That is because, at least in HTTP 1 or 1.1 projects, you have to make sequential requests. JSON API greatly improves on that, in the sense that you can include entities in the response, so you can save yourself that work. But sometimes it's unavoidable, and sometimes it's not just getting a set of entities and doing some includes. Sometimes it's more like you want to create an article, right? Imagine that you have a React application with a form, and that form is submitting some data, and ultimately what you want to create is an article with a bunch of tags, and that's it. To do that, you create the JSON object that you need to push to the server, and you realize: oh, but I need the tag IDs for this article, right? I want to add some tags to it, so I need to create the tags first to get the IDs. And I also need to get the user ID based on the username, so I can put it in the JSON body first. And that becomes a set of sequential requests: first you get the user ID and the vocabulary ID; when you have the vocabulary ID, you can create the two tags; when you have the two tags, you can fetch their IDs, and then you can actually create the article.
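The sequential choreography described above can be sketched in Python. This is a hypothetical sketch: the endpoint paths and the in-memory FakeApi client are illustrative assumptions, not the actual slide code; the point is the three dependent rounds of requests.

```python
# Sketch of the request choreography a consumer needs WITHOUT Subrequests.
# FakeApi stands in for an HTTP client and records every call in order.
import uuid

class FakeApi:
    def __init__(self):
        self.calls = []
    def get(self, path):
        self.calls.append(("GET", path))
        return [{"id": str(uuid.uuid4())}]
    def post(self, path, body):
        self.calls.append(("POST", path))
        return {"id": str(uuid.uuid4()), **body}

def create_article(api, username, tag_names, title):
    # Round 1 (parallelizable): resolve the author ID and the vocabulary ID.
    user = api.get(f"/jsonapi/user/user?filter[name]={username}")[0]
    vocab = api.get("/jsonapi/taxonomy_vocabulary/taxonomy_vocabulary?filter[vid]=tags")[0]
    # Round 2: create each tag; every POST needs the vocabulary ID from round 1.
    tags = [api.post("/jsonapi/taxonomy_term/tags",
                     {"name": n, "vid": vocab["id"]}) for n in tag_names]
    # Round 3: only now can the article itself be created, with all IDs in hand.
    return api.post("/jsonapi/node/article", {
        "title": title, "uid": user["id"],
        "field_tags": [t["id"] for t in tags],
    })

api = FakeApi()
article = create_article(api, "mateu", ["bread", "baking"], "Sliced bread")
```

Three dependent rounds means at least five round trips, and this coordination code has to be repeated in every consumer.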
So this is very painful. You can do the first two in parallel, then the second two in parallel, and then the fifth. But that's assuming, and this is one of the key concepts that I want you to walk out of the room with, that you are thinking about a particular consumer that can do things in parallel. In decoupled land, sometimes you don't get that luxury. There are technologies that don't have an event loop like the browser does. For instance, if you had a consumer that is a Symfony application, you cannot do things in parallel. So your assumption has to be the lowest common denominator. Also think about all of the requests that you need to coordinate. Don't try to read the code; this is just an example. It highlights the complexity of doing the very simple task of posting an article. You need to coordinate three levels of requests, each one of those has a bunch of parallel requests, and you need to hand-hold all that process, and you need to do it across multiple consumers at the same time. And as we talked about before, each consumer can have its own ideas on how to make requests and whether they can be parallelized or not. So this code needs to be repeated, with variations, in each consumer. Instead, what we want is to do something like this: something very simple, a single request that deals with all of that. Because ultimately what we are doing is going back and forth between the server and the client just to fill in an ID, and that's a very simple task that a machine should be able to do. So that's why I got the idea to build the Subrequests module, to make the server do it. The idea is that you create a JSON document, which is called a blueprint, that contains a description of how to make all those requests. Basically it's just the requests that you would make, minus the IDs, because you don't know them, right?
So you put all those into a JSON document, and wherever you have the ID that you're missing, you put a placeholder. The placeholder says: grab the ID from this previous response. So you're telling the server: do all these requests, then grab information from those and fill it in for me. Don't come back to me just to put an ID in a JSON document. It's a very simple idea, and it goes like this. Again, don't try to read it, you probably won't be able to. But oh, that's sad, it's color coded but it's not showing very well. Well, hopefully you can see that there are five sections in there. Those are the five requests that we're making, and the first two are the ones that you start off with. They don't have any dependency on other requests, so you start making them right away. The second two have a key at the end called waitFor, and they wait for the vocabulary request. What that is expressing is: I have in my request a placeholder that depends on the response to a previous request. So it's going to wait for those and replace the placeholder, and it's all going to happen in the server. So using this format you can specify the requests as you would make them in the consumer, with placeholders, and the server creates the final requests for you. Ultimately what we are aiming for is to make a single request to the server, so there's only one back and forth between the server and the consumer for all these requests that Drupal can do internally. That improves performance greatly, and it has the added benefit that, since it's a single request, it's going to be the same code, or the same principle, across all of the consumers. Because the problem becomes generating this blueprint with the placeholders, and that's going to be the same thing across the board because it's going to be interpreted in the server.
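A blueprint like the one on the slide can be sketched as follows. The key names (requestId, waitFor, action) and the {{...}} replacement-token syntax approximate the Subrequests module's documented format; treat the details as an illustration rather than a verbatim payload.

```python
import json

# Five requests, POSTed to the server in ONE round trip.
blueprint = [
    # No dependencies: the server can run these two right away, in parallel.
    {"requestId": "user", "action": "view",
     "uri": "/jsonapi/user/user?filter[name]=mateu"},
    {"requestId": "vocab", "action": "view",
     "uri": "/jsonapi/taxonomy_vocabulary/taxonomy_vocabulary?filter[vid]=tags"},
    # These wait for "vocab": the placeholder is filled in server side.
    {"requestId": "tag1", "action": "create", "waitFor": ["vocab"],
     "uri": "/jsonapi/taxonomy_term/tags",
     "body": json.dumps({"name": "bread",
                         "vid": "{{vocab.body@$.data[0].id}}"})},
    {"requestId": "tag2", "action": "create", "waitFor": ["vocab"],
     "uri": "/jsonapi/taxonomy_term/tags",
     "body": json.dumps({"name": "baking",
                         "vid": "{{vocab.body@$.data[0].id}}"})},
    # The article waits for everything it references.
    {"requestId": "article", "action": "create",
     "waitFor": ["user", "tag1", "tag2"],
     "uri": "/jsonapi/node/article",
     "body": json.dumps({"title": "Sliced bread",
                         "uid": "{{user.body@$.data[0].id}}",
                         "field_tags": ["{{tag1.body@$.data.id}}",
                                        "{{tag2.body@$.data.id}}"]})},
]
payload = json.dumps(blueprint)  # sent once to the Subrequests endpoint
```

The consumer's only job is to build this document; the server resolves the dependency graph and fills the placeholders.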
So that works really well, especially when these internal requests take advantage of things like page cache; it's really fast. We are resolving requests in under three milliseconds, and three milliseconds is pretty fast. So that's it for the first one, that's performance, and Subrequests will help a lot. And I also want to mention... oh great, the notifications are showing here. Well, hopefully no one will; I definitely turned those off. Do Not Disturb is now on, okay. All right, cool, sorry about that. Schemas, we were talking about schemas. No, actually I was mentioning that the Subrequests module that is implemented in Drupal, I also implemented in Node.js. So you can now have it in Drupal or in Node.js. If you have a proxy Node.js application, which you probably will end up with, you can use it there too. Just try it out and leverage it, and hopefully it will help your performance and maintainability. So let's talk about schemas. I didn't write the Schemata module; I only helped write the JSON API integration for the Schemata module. But why do we want schemas and what are they good for? Schemas are basically a description of the shape of a JSON document. So imagine that a back-end developer and a front-end developer go into a bar, and this is not a joke. Imagine that they go for a coffee, right? And the back-end developer is trying to explain to the front-end developer: I'm working on this API and you're going to get JSON documents. The JSON document is going to have a data property, and inside of that it has an attributes key and a relationships key, and if it's an article, in the attributes key you're going to have a title that's going to be a string that's 255 characters long max, and you're going to have a body key that has a value key with the long text in there. So that description, that exact description, is what the schema is, right?
We are describing the shape of the document using a format that a machine can understand, called JSON Schema. And very importantly, this is a standard, and by a standard I mean that other platforms and other software understand this format and can do cool things with it. So by using a schema, and the JSON API integration with that, what we are empowering is creating documentation, because the analogy that I just made is a back-end developer verbally documenting the shape of the API and how to use it, but by having software do this we can generate beautiful web apps that document your API for free. And let's stop a second to realize that. You download Drupal, you download JSON API, Schemata and the OpenAPI module, and you get, for free, for your content model, a fully fledged API that can do a lot of stuff and is totally documented. And not only that: you create a new field and that gets documented as well, like magically. Well, it's not magic, it's actual software, but it gets documented, it's up to date, it doesn't get stale, you don't have to do anything, and it's accurate, right? So that's one of the benefits of having schemas, but it's not the only one. Because by describing the shape in a standard, in a way that software can understand, we can do things like generating forms. Imagine that you have your Ember app or your iOS app, and you want to create a form for that article from my analogy. The iOS app can download the schema and see: okay, this is a title, so I'm going to put the label "Title"; this is a string, so I'm going to generate a text field; and this is max 255 characters, so I'm going to make it a text field and not a text area. And the same goes for body, et cetera. So you can end up with software that reads the schema and generates a form for it automatically, without anyone having to type the HTML for the form.
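The schema-to-form idea can be sketched like this. The schema fragment mirrors the article example from the talk (title: string, maxLength 255; body: long text); the widget names and the heuristic are invented for illustration.

```python
# A consumer turning a JSON Schema fragment into form-widget descriptors,
# the way an Ember or iOS app could. Purely illustrative.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string", "title": "Title", "maxLength": 255},
        "body": {"type": "string", "title": "Body"},
    },
    "required": ["title"],
}

def widgets_from_schema(schema):
    widgets = []
    for name, prop in schema["properties"].items():
        if prop.get("type") == "string":
            # Short bounded strings become a single-line text field,
            # unbounded ones a multi-line text area.
            kind = "textfield" if prop.get("maxLength", 10**6) <= 255 else "textarea"
        else:
            kind = "generic"
        widgets.append({
            "name": name,
            "label": prop.get("title", name),
            "widget": kind,
            "required": name in schema.get("required", []),
        })
    return widgets
```

When a new field shows up in the downloaded schema, it simply appears in the generated form, with no joint deployment needed.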
And you may think that's pretty cool, because I'm a software developer and I'm basically lazy when it comes to writing code. I want to reuse and keep things DRY, et cetera. So that is very appealing to me, but then you realize that there is another factor that is more important than that. When you create a form without techniques like this, what you end up with is maybe four consumers of your API, all of them with forms, and then you create a new field in your content type, right? So now you need to deploy your API and update the forms in all of the consumers, and have a joint deployment so the API and the forms are all in line. If you do it with schemas, the app just downloads the schema and the new field pops up in there, right? And that is very powerful because it allows you to decouple yourself from the release workflow. You don't have to be in sync for your deployments. Another thing that is good when you have schemas is client-side validation. We talked about the validation of 255 characters; you can do that on the client side, so we avoid having to go to the server to do that validation and throw the error. So that's a better experience for the admin team, and then there's just the global concept of enhancing the user experience with this. So this is a screenshot of Contenta. Out of the box you get this automatic generation of schemas, and that translates into automatic documentation based on the OpenAPI module. And it's very easy to use: you actually just go to that page and start using it. The problem here is that the shape of the API is pretty hard to guess, right? We are talking about auto-generating the schemas, and what we are really talking about is doing the best we can to guess the schema. And I'm going to say why, right after I drink a little bit. That's a cliffhanger.
So the main problem here is that we are using Symfony and the Symfony Serializer component. And what that allows is arbitrary code execution, which is good, right? It's what we wanted, because when you're normalizing Drupal entities, what you want to go from is the node object that you get when you do Node::load(42). It's a typed PHP object that you can execute methods on, that can save data to the database and load from the database. So it lives in the PHP world, but we need to go from the data that represents a node to a JSON object that can be streamed over the wire to a client. So the data, the node 42, lives in two parallel dimensions: one that is PHP-only, and one that is serialized, text-only, in JSON format. The normalization component allows us to go from one to the other. And the problem is that it is too flexible. You can have something like this: a normalization function that says, all right, I'm loading a node, and I'm going to roll a die, and if it's a five I'm going to output a string, and if it's not, I'm going to output an integer. But your schema was saying all along that it was going to be an integer. Since you can override the normalizers that JSON API and core REST provide, you could do something like this, and there is no way that we can guess beforehand, like two months ago when you delivered documentation to the developer, what the result of rolling a die would be. It could be either a string or an integer, and there is nothing in our code base that prevents this. And this can become a problem. Also, when we talked about documentation, we didn't touch on this, but the shape is not enough to provide meaningful documentation. Things like: is this content type public through the API? Or can I delete this?
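The die-roll normalizer from the slide, recast here in Python as a sketch of the problem. Nothing in the system stops a site from registering an override like this, so a schema that promised an integer silently becomes wrong one time in six.

```python
# A normalizer whose output TYPE is nondeterministic. No static schema can
# describe it, which is exactly why guessed schemas can lie.
import random

def normalize_field(value: int):
    if random.randint(1, 6) == 5:
        return str(value)   # the schema said integer, but we emit a string
    return value

# Over many normalizations, both types show up.
samples = {type(normalize_field(42)) for _ in range(1000)}
```

The proposed fix in the talk, type-safe normalizers, amounts to making the declared return shape a contract the code cannot violate.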
Or am I allowed to modify this field? Because there are fields that are generated by Drupal: you can read them, but you cannot modify or create them. Things like these are not available directly from our APIs, and we want to document that, because knowing what you can do and how you have to do it is critical for documentation. It's not only about the shape. So we have some ideas in the API-First Initiative. The first problem is about ensuring that the schema is accurate, and for that I wrote a PHP library that ensures transformations in a type-safe way. So we are going to try to build a prototype that includes that in the JSON API normalizers, and we're going to require everyone writing a normalizer to be type-safe and declare: okay, this is the shape that this is going to output. That contract is what we're going to use to generate the schemas, so we can be sure that the output of our normalizers is what the schemas say. The other problem is a little bit more complex, because we kind of require some coordination with core teams to provide more metadata on what the API can do. And, admittedly, we already have a bunch of this through the access system and the entity and field APIs; they contain some of that information. But sometimes we don't want to conflate the two things: what Drupal can do internally as a system, and what you want to expose to the world. So we need some sort of wrapper on the entity API and field API to declare this in a way that can be reused for core REST, for JSON API, and maybe even GraphQL. Ultimately, what this translates to is that we get into the mindset that we are really API-first, and not just API-compatible, which is kind of the situation we are in currently. All right, so routing, or "rooting" as my British friends like to say, is the next on the list.
So I've been saying that you need to stop thinking about your React site as decoupled Drupal, because that is not all that decoupled Drupal is about, right? It's mostly about omni-channel or multi-channel situations, where you have an API that drives different digital experiences, and that is in many situations what drives the decision to move to decoupled Drupal, and probably why many of you are here. Because you are building, I don't know, experiences that drive a React website but also an iOS app, an Android app, Apple TV, Roku, a smartwatch; there are even smart ovens you can install apps on. So you need to be mindful of this, and of the fact that every decision that you make in the back end affects the front end. However, there are some outstanding challenges with the browser, and the browser is pretty important. I mean, we've been building websites, and now we are building websites and all the other things, right? But we're still building websites, and the browser has a thing, and that is that it's driven by URLs. We run our web apps inside of browsers; we could say the browser is our OS for web apps, and we need to be able to use the URL in an effective manner. For that, Drupal has been very opinionated that content editors should be able to specify the URL for their content and for every web page. And that is fair, because that affects SEO, and SEO is very important for the success of many businesses in the digital world. So we really need this in the decoupled landscape. Let me focus on the browser for a second and break that rule that it's not only about the React app; the rule will come back in the next slides. So it is a need that you be able to control your URLs editorially. You need to let that SEO specialist be opinionated on where your routes will live, and whenever a request comes into your Vue.js app, it's going to take that path, inspect it, and have to make a request to the Drupal back end.
So one strategy that people have been following is: I'm going to create a property in my node called path, or a slug, and then I'm going to filter that content type by that property, and I'm going to find what I'm looking for. And that works well, until you realize you can get into this scenario. Imagine that an editor creates a recipe for a recipe site and puts it at /recipes/sliced-bread. We are all very proud of that recipe, and we share that URL on Facebook, on Twitter, we even put it in a printed magazine. And then comes a change: you drop the /recipes prefix. Now, if you think about the solution we came up with, the path has been updated to /sliced-bread instead of /recipes/sliced-bread. A request comes in, but it comes in with the old path, because someone clicked it on Twitter or on Facebook. You look for the content that has that old path, and you don't find it, because you updated the path and it's no longer there. And that is very sad, and the SEO specialist is not pleased about it, especially when the magazine is printed and sold, because you cannot change that. So the idea is that you use this module called Decoupled Router, and it basically deals with this, because this is an old problem for the Drupal community; we solved it a long time ago. You just use the URL alias, and you download and enable the Redirect module, and whenever there is a change on the path alias, a redirection gets created, and if you land on an old URL, you follow all the redirections through the changes until you land on the node that you're looking for. The concept here is the same. There is a new endpoint when you download and install the Decoupled Router module; you pass the path to it, you execute the request in Drupal, and it comes back to you with the URL that you need to request. And I kind of dropped the hint before, but if you see here, we dropped the /recipes prefix.
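What the router endpoint does for the consumer can be sketched like this. The in-memory tables stand in for Drupal's alias and redirect storage, and the UUID is a fake value; this is an illustration of the behavior, not the module's implementation.

```python
# Sketch of path translation with redirect-chain following, the behavior a
# Decoupled Router style endpoint provides. Tables are illustrative stand-ins.
redirects = {"/recipes/sliced-bread": "/sliced-bread"}   # old alias -> new one
aliases = {"/sliced-bread": {"entity_type": "node", "bundle": "recipe",
                             "uuid": "a1b2c3d4-0000-0000-0000-000000000000"}}

def translate_path(path):
    hops = 0
    while path in redirects and hops < 10:  # follow the redirect chain, bounded
        path = redirects[path]
        hops += 1
    target = aliases.get(path)
    if target is None:
        return {"status": 404}
    return {"status": 200, "resolved": path, "entity": target,
            "redirected": hops > 0}
```

The consumer sends whatever path it received, old or new, and gets back the canonical entity to request, with no knowledge of aliases or redirects on the client side.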
The /recipes prefix was what was telling the Vue.js app: look for a recipe that has this path, sliced-bread. If we drop that, we don't know what sliced-bread is. Do we need to filter on a taxonomy term? Is it a recipe? Is it an article? We would need to look through all of the resources to find that path. With Decoupled Router, however, you just pass a path and it resolves to whatever entity is behind it. You don't even need to know what resource it is. So at this point, hopefully someone in the audience is screaming about the idea of sending a path to Drupal to get the entity that you need to request in the client, and then making another request to actually get the entity. And yes: you just build a blueprint to fix that. Because that's the thing when we work on these hard problems as an abstract concept: whenever we crack one of the problems, we find the old problems again. So we need to keep reusing the same principles. You send a blueprint that has the path resolution through Decoupled Router, and then it requests the entity using the placeholder. So that makes two. And we're going to move on to editorial layouts and in-place editors. This one is special; it's my least favorite of the hard problems, because I feel that we are in a transitional time. We are kind of new to decoupled Drupal, or decoupled strategies, and we are still dragging along some of the feature sets of the old times, I feel. Building layouts in the server is one of those concepts. However, that is something that some clients really need, or think that they really need, and we need to provide solutions for them, right? What I would try first is to explain to them why this is hard and how much of a problem it is going to be, in development time or budget, et cetera. But some of them will still need this, and there are solutions. This is a screenshot of something that the 1xINTERNET people showed at a DrupalCamp this year, and it is what you would expect.
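The "that makes two" trick, resolving the path and fetching the entity in one round trip, can be sketched as a two-request blueprint. The router endpoint path and the JSONPath expression are assumptions for illustration, in the same approximate token syntax as before.

```python
import json

# One POST to the Subrequests endpoint: resolve the path, then fetch the
# entity the router points at, all server side.
blueprint = [
    {"requestId": "router", "action": "view",
     "uri": "/router/translate-path?path=/recipes/sliced-bread"},
    {"requestId": "entity", "action": "view", "waitFor": ["router"],
     # Placeholder filled server side with the resolved JSON API URL.
     "uri": "{{router.body@$.jsonapi.individual}}"},
]
payload = json.dumps(blueprint)
```

Same principle as the article example: the client never round-trips just to copy a value from one response into the next request.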
It's not showing very well, but it has an in-place layout builder. You drop blocks, you select them, and you edit them. And it works really well. And it's both things: it's a layout builder, and it's also the in-context experience that editors really love. And there is a BoF; I'm sure Chris over here can say more about that later, and you can go and watch that. Also, there is going to be another session right after this one in this room about the MoonRace project. That's the weather.com layout builder experience for decoupled projects. So this is a real need, that's what I'm saying. However, it is very hard to generalize from the API-First Initiative perspective, because Drupal is everything to everyone at any given moment. And how do we do that, right? How do we provide an in-context layout builder for every project? That is just not possible. There is also another limitation, and that is, again, that what we were thinking about was the React version of this. Do we need to build an in-context layout builder for all of the different consumers? Do we have to build five layout builder experiences? Do we build one that is not in-context? The answer is going to be different for different projects. And what I really encourage you to do, and this is the second big takeaway of the session, is to leverage the constraints of your actual project to simplify. Because we're trying to build something that is really difficult. It's very impressive, but it's really difficult. We're trying to drive six, seven, ten different consumers, digital experiences if you allow me, with a single API, and we're trying to leverage most of the work that we've done in one for the others. So you need to simplify. And if you have the ability to say, I'm okay, I'm only going to be building a website and I know that's not going to change because X, Y, and Z, then go ahead and build this, if it suits you. But this raises a lot of questions.
Like, how is the layout that I just built with my web front end in mind showing in my smart oven? Or, if I'm going to build different layout builders for different consumers, how do third-party consumers do this? They don't have a hand in the server, and you probably don't want them to. But also, and most importantly, there is no concept of a page in the server in the decoupled world. As Drupal people, we have this cognitive bias to think that a node corresponds to a page, and that is not true. A page is whatever that React app defines as a set of templates, or whatever that iOS app defines as its presentation. It may be that those templates are for a single entity, but there is no guarantee of that; they could be pulling different sets of data and items of data. So defining a layout for a page in the server is very difficult when you don't even have the ability to define what a page is. And again, leverage your constraints. Maybe you can say: for my project, a node is a page. Then build on that. All right. So one of the solutions that we could build is to have some way of defining consumers and assigning configuration to them. That's something that, for instance, Facebook does: they allow you to go to developers.facebook.com and register an app. In Drupal we also have that; there is the Consumers project. I created it for Consumer Image Styles, which allows you to select the image styles for each consumer, so you don't have to load them all in every situation. So you just register your consumer and then assign some configuration to it, maybe with the layout builder, I don't know. But I'm very interested in learning what your experiences are with it, and if someone gets to build this, I'm very happy to help review and maybe even work on that. And we're moving now to user authentication.
Some may be surprised that this is a hard problem, because it's been working forever for us in Drupal, right? It's been a solved problem. Other communities have struggled with this, but we have not; it has been working great. But we've been doing authentication using cookies, which is something that, again, the browser does for us. The browser is a pretty complicated thing that does a lot for us, and one thing it does is slap a cookie on your request depending on the domain. It works across subdomains, it keeps cookies secure, you can share state with them. So there's a lot that we get from that. But when you have an app that runs on a Roku, for instance, you don't have this. So if we want to do authentication that works across consumers, we need to go to OAuth2. This is the specification that the industry is using; there is little discussion about that. There are others, and they also work, but this is the leading one. And the good thing about it is that it solves many of the problems that you have. Again, this is a diagram that I didn't write; please don't try to read it. The idea is that OAuth2 has the concept of grants, and the authentication is based on the server generating a token that the consumer stores. Then, every time it wants to prove that this is for user 43, it just uses that token in the request. The problem becomes how to get that token from the server, because it's going to be different depending on the consumer. If you have an Angular app and you have to authenticate a user, you just do what you're used to, right? I think of this like the GitHub example: you click "Sign in with GitHub" and you get redirected to github.com. You put your password there if you're not already logged in, it asks for approval, and then you get redirected back to the other site with a code in the URL that that site reads, and then it generates a token for you and you get it.
So it's kind of a complicated process, and it requires human interaction. That's called the authorization code grant. And this diagram helps you decide whether you need that one or not, because it could be that you're writing a Java application that is a daemon that runs on a server, and on every cron execution it needs to make authenticated requests to Drupal. For that, you don't have a user to click around and enter their password. You could argue that you don't have a user at all, because it's just a machine. So for that you would use the client credentials grant. This seems very complicated to execute, but since this is a standard, and most importantly a leading standard, there are lots of tools and lots of documentation that go with it. Also, this particular OAuth2 implementation is based on JWT. When I was talking about passing the token back and forth: it's not just a random string, it's a JSON document that contains information about the user, signed with a key pair. So you have a set of keys, you sign that JSON document, and that's the token that you pass around. That means you can do creative things. If you have a Node.js proxy in there, you could share the keys, verify the token, and say: oh, this is actually not a valid token, I'm not even going to bother Drupal with it. Or you could say: I see that this is a valid token for user 77, and user 77 doesn't go to Drupal, it goes to whatever external service. So you can use this underlying JWT technology to do all kinds of interesting things. For instance, and this is something that is already happening, single sign-on solutions. There are two different teams right now that, building on this module, are providing single sign-on solutions. One of those uses another standard called OpenID Connect, which, again, you can leverage to sign in in your iOS app and be signed in in your web app.
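The proxy trick, inspecting a JWT's claims before bothering Drupal, can be sketched like this. A real proxy must also verify the signature with the public key; that step is omitted here, and the token below is hand-built for illustration, not one issued by a real OAuth server.

```python
# Peel the payload off a JWT (header.payload.signature) and read the claims.
# Signature verification is deliberately left out of this sketch.
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_claims(token: str) -> dict:
    header_b64, payload_b64, _signature = token.split(".")
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Hand-built token for illustration only.
token = ".".join([
    b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode()),
    b64url(json.dumps({"sub": "77", "exp": 1893456000}).encode()),
    "fake-signature",
])
claims = decode_claims(token)
```

With the claims in hand (and the signature verified), the proxy can route user 77 to an external service or reject an expired token without a round trip to Drupal.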
So don't be stressed. There is a lot of documentation. I recorded a set of videos to help with the process: how do I use this grant, how do I debug whether my token is being processed correctly, and also the stuff that we didn't comment on, like scopes, which let you limit what the user can do using OAuth, and all that. So go and check that channel and see those videos. You can also install Contenta, and there is a knowledge hub that links to all those videos. And that is pretty much it. But, yeah, there are many other hard problems. For instance, something that you maybe did not expect is that project management gets more complicated. Because instead of having one web team, now you have one iOS team, one Android team, one web app team, and the back-end team to manage, and they all have scrums. So you now have five scrums, and then you have a scrum of scrums to coordinate what they have in common. And you may have three different ticketing systems. It may not be a big issue, but it's an example of how, when you're jumping into a new thing, you're going to find small nuances like this that you need to solve. Your process may be impacted, and you need to figure it out to be productive. Another hard problem, and this may be the hardest one that I've mentioned, is API versioning. I say it's the hardest one because I'm almost convinced that it cannot be implemented in Drupal to a point where it works in every single scenario. So, again, leverage your constraints and try to make it work for you. Imagine that you have version one of the API that contains a content type with a field that you want to remove for version two. So you go ahead and you delete the field from Drupal, and it's gone. And it's gone with all the implications: it's gone from the database, so if a request comes in and tries to load data for that content type, the data is not there, right?
You removed it from version one as well. And there is no good solution. And that is not the only problem with versioning. There are other problems, like the fact that making small changes in the content model can have rippling effects that are very difficult to undo when you maintain backwards compatibility. But there are proposals to make versioning work within some given constraints. So we'll see where that lands. Content preview. We talked about that a little bit yesterday in the Decoupled Summit. It is very hard. The Workspaces content staging project that was mentioned today in the Driesnote, and the fact that you can stage some content together and then see how it works in Drupal, doesn't mean that it's easy to do in six different consumers. It's hard, especially because you need some level of authentication that may be exclusive to the preview system, right? You have a read-only application that all of a sudden needs authentication because you need to preview content. So you need to plan a little bit for that, especially when you're doing estimations and budgeting for the project. Search can also be problematic, because it doesn't follow the same conventions as the rest of the entities. You could write a fake entity that wraps search results and then use JSON API to interact with that. But I would recommend that instead you index your content into something like Elasticsearch or Apache Solr, and then use those APIs to get your search results. Instead of going to Drupal, go directly to the search index. But you lose all the good things that the Search API module does, like faceted searching and all the widgets that enable a good search experience. So you've got to build that, which is not a big deal, because if you're into decoupled, you have to be into building things. And you're going to be building a lot of things from scratch. And that's pretty much it. This presentation took a lot of effort.
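The go-directly-to-the-index idea can be sketched like this. The index name ("articles") and the field names are assumptions for illustration, not something prescribed by Search API or Elasticsearch.

```javascript
// Sketch: build an Elasticsearch query DSL body so a consumer can
// search the index directly instead of routing search through Drupal.
// Index name ("articles") and fields are assumptions for illustration.
function buildSearchQuery(term, page = 0, pageSize = 10) {
  return {
    from: page * pageSize, // offset-based paging
    size: pageSize,
    query: {
      multi_match: {
        query: term,
        fields: ['title^2', 'body'], // boost title matches over body
      },
    },
  };
}

// A consumer would POST this as JSON to /articles/_search on the
// Elasticsearch host, bypassing Drupal entirely.
const body = buildSearchQuery('decoupled drupal', 1);
console.log(body.from); // prints 10 (second page of ten results)
```

The trade-off mentioned above shows up right here: facets, highlighting, and paging widgets all become query DSL that you write and maintain yourself.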
I practiced this presentation twice this morning before I gave it. So all that to say that it would be great if you took a minute to go to the DrupalCon session node and click into the evaluation, and you could even do it now if you want. And yeah, we have a little bit of time for questions. But I wanted to take the chance to say that I'm going to be at the Lullabot booth at noon, answering any other questions that are lingering, or that you want to ask later, or that we don't have time to answer. So with that, do we have any questions? And please walk to the mic and stay in line, because otherwise the questions will not be recorded for the video. I wonder if you can hear me fine. How about now? Okay, perfect. All right. Very nice presentation. I came in slightly late, but I'm wondering if you covered stream wrappers among these hard problems. It seems like stream wrappers are sort of there, but they are not quite general enough. I'm not sure I understand what you mean by stream wrappers. Can you elaborate on that? So there is a notion of handling files in Drupal, right? Typically those stream wrappers are just meant for public and private file systems. Architecturally you can write your own stream wrapper and say, hey, Lullabot is a stream wrapper and it can do magical things, whatever you want it to do, but things don't necessarily work. Things are still tied in core to the paths. They don't convert the URIs to, you know, the paths and things like that. And I'm wondering if you encountered that in the decoupled scenario. What I can say is that handling files in decoupled has been an outstanding problem so far, but some really big improvements have landed in the last versions of Drupal core. The most important is that we can now upload file binaries. You can upload just a single-pixel image or you can upload a file of five gigabytes. And you're not even limited by the PHP memory limit. So that is one improvement.
Another improvement is that files now have the ability to unwrap the stream wrapper and provide a download URL when you normalize the entity. And that works for the stream wrappers that come with core, basically public and private. I'm not sure if those would work with... No, okay. Probably Wim can provide more... So this is going into core? It's going into the CDN module. I'm going to repeat that for the mic. Can you tell me your name, sir? Or maybe Wim can... That's Brad Jones, who's working with me on the CDN module. But the CDN module is kind of a separate problem space, I think. I don't think you were necessarily interacting with CDNs. But it's a lot of the same things, so come and talk to me. Sure, but I think what you're asking is whether stream wrappers other than public and private are actually supported just fine in normalizations of your data. Is that what you're asking, really? In Drupal 8.5, we improved the way that file entities represent URIs. It now automatically exposes a property on the URI field that contains a publicly accessible URL. So an HTTP URL for whatever stream wrapper you have. So as long as your stream wrapper implements the interface correctly, things are going to work fine. And as long as you're not using custom modules that do things wrong, things are going to work fine. In Drupal core, all of this should be working fine already. If that's not the case, then please file a bug report. Because as far as I know, there are no bug reports about this being broken in any way. And I have to admit that in general stream wrappers are not something that is very widely used. Maybe that is why, but that's exactly why we need you to file bug reports if you're encountering problems. I think there's a longer conversation here. Thank you. That was Wim Leers, API-first initiative co-coordinator.
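As a rough illustration of that normalization, a file entity in an API response might carry both the raw stream wrapper URI and the computed public URL. The field shape below is a simplified, hypothetical sketch, not verbatim Drupal output.

```javascript
// Hypothetical shape of a normalized file entity (simplified sketch,
// not exact Drupal 8.5 output). The "uri" field exposes both the
// stream wrapper URI and a computed, publicly accessible URL.
const fileResource = {
  type: 'file--file',
  id: '0b1c2d3e',
  attributes: {
    uri: {
      value: 'public://photos/cat.jpg',           // stream wrapper URI
      url: '/sites/default/files/photos/cat.jpg', // computed public URL
    },
  },
};

// A consumer never needs to understand stream wrappers; it can just
// read the computed URL and fetch the file over HTTP.
const downloadUrl = fileResource.attributes.uri.url;
console.log(downloadUrl); // prints "/sites/default/files/photos/cat.jpg"
```

The point of the computed property is exactly this: any stream wrapper that implements the interface correctly yields an HTTP URL the consumer can use blindly.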
I think, and I don't know if that was captured by the mic, the summary of it is that I was not sure if we supported stream wrappers other than public and private, but it seems that we do, if they are implemented the correct way. Great work on the session. Thank you. You mentioned that versioning of the API is an issue. You said that you were almost convinced that it was unsolvable in Drupal. I'm curious, what is specific about this problem in Drupal that is different from any other API? It seems like people are dealing with versioning APIs and deprecating things from their APIs elsewhere. What's unique in your mind about Drupal in this case? Sometimes, and this statement is going to be very unfair, but sometimes I reduce Drupal to a content modeling tool, which means that you click together very complex content models. That gets translated into a content store, and that can be just a database, or a set of databases that work together, and all that. But we only have one of those, and we don't have a great way to version the store and, at the same time, maintain all the feature sets that we expect from Drupal. The example that I gave was that when you remove a content type for version 2, it's gone. You need to get a little bit more creative in your solution, and you need to stop editing your content model so you can keep the API versioning, but to me that goes against what I think Drupal is good at, which is creating content models. You're freezing the content model at some point, and you cannot touch it anymore, and then doing all the stuff to maintain the API versioning. Is there no way to deprecate a field or content type in a version 2 and then remove it completely in version 3? It seems like I'm asking about the intermediate state, where in your example you're just talking about version 1 and version 3. Is that possible? If you keep the Drupalisms in place, it could be done. You could have a property on a field that says whether it's deprecated or not.
You could even have a configuration entity that records which fields are deprecated, and that could even go into the API response. Let's say this field is deprecated, stop using it because it's going to go away in the next major version. But the thing is that when it goes away, it doesn't go away in the next major version. It goes away for all of the versions, because you removed it from the database. But yeah, you could deprecate things and discourage their use. Thank you. Hello. Thank you so much for your presentation. It was great. My question is about routing. You mentioned the Decoupled Router module, which sounds great. My question is, basically, the route becomes the first query in every request, and the Decoupled Router module just hands the right entity over to me? Yes, that's it. Well, it doesn't hand over the entity, it hands over the URL that you need to request, because Decoupled Router is unopinionated about the API that you're using. So it can give you the entity ID and the JSON API URL, but it can also give you the core REST URL. That's why it doesn't give you the response right away: because it doesn't know which API you're using. Doesn't that have performance implications? Yeah, but you can use sub-requests to bundle the next two together. And this is actually something that was requested as a feature in the issue queue, to just return the response. And maybe we need to do that, because it feels to me like sub-requests should handle that. Thank you. You're welcome. Oh, sorry. I just realized that we are out of time. Well, I didn't realize, they had to tell me. But do you mind taking the question at noon? All right. Again, if you want to continue the Q&A, come to the Lullabot booth at noon. We'll keep this going. Thank you. Did you like it? Yeah, it was a great question. Nice.