get to meet some of the people that I've been working on modules with, like the GraphQL module, for a few years. I'd never met them in person, so that's a really nice perk. Hi John, what are you up to? It's good to see we have the screens now. I was looking at this yesterday morning and I saw the big white screen in front of the windows. I was very happy. Oh really? Yeah. And I was like, but I built everything in Keynote with animation, and I thought it would be difficult to splice in, I guess, to get my animations shown. That actually worked. That was surprising. No audio, though, but that's fine. I really like the Keynote presenter view that I can see right now: how long I've got and what my notes say. I did that for a previous presentation; it has a different energy as well. Yeah. But you always end up fighting the two displays with the same browser, right? So there's going to be some time to come. Thanks. Someone on the front row, yeah. No, no, no, I won't even have time for Q&A now. I'm kidding, a little bit. Don't worry, it's too early for audience participation. Yeah, exactly. First row is great, best view. The seats next to the Q&A laptop would make more sense. Oh, yeah. There are two chairs up front here, if people still want to sit in the second row; I see one in the middle here, one up front. Good morning. Cool. It's about 10:30 and I got the signal that I can start, full room. I hope everyone's been enjoying DrupalCon so far. I also hope everyone is awake this morning, because there's going to be quite a bit of code on the slides. Welcome to this presentation, Building a GraphQL API: Beyond the Basics. We're going to do a bit of a deep dive. My name is Alexander Varwijk. I'm lead frontend engineer at Open Social. That's my title.
I've been doing quite a few other things there: translation workflows, the GraphQL API, and I built a real-time chat two years ago on top of GraphQL and Drupal. I've been programming for 20 years; for 10 of those I've been getting paid for it. Slight coffee addiction. My first DrupalCon was actually nine years ago in Prague, so it's really cool to be back. A little bit about Open Social: we build community engagement platforms as a service. We work with purpose-driven organizations, some of which you see here on the slide. If you like what you see in this talk and you think you'd want to work on this, come chat, because like everyone else at DrupalCon, we're always looking for Drupal developers. As for what is and isn't in this talk: we're going to talk about setting up a modular schema, how to use the GraphQL module, and do a deep dive into how the underlying GraphQL library works. I'm not going to talk too much about schema design or automated testing. You can find the slides for this presentation on my website, alexandervarwijk.com, under talks. There's also a link on that page to a GitHub repository, which will be made public after the talk, containing all the code you see on the slides, as well as some more examples for things I couldn't cover. We're going to be looking at building a real-time chat, or at least the parts of it that we built for Open Social. The first thing we always start with at Open Social is schema design, and design is not just there to be a nice word; it's really there for a reason: we've got to define the goal of what we're building. We're going to look at some of the parts of the chat today, and we can capture that in a user story. As a chat participant, I want to be able to send chat messages and receive chat messages from other participants without refreshing my page, so that I can easily communicate with others. And this captures a few things.
It captures that there are chat participants, so there are going to be users that we want information about. We need to send things, which is the mutation in GraphQL, used to modify data; receive things, which is querying; but also without refreshing, and that's where subscriptions come in. If we look at what this looks like in Open Social: we have an Open Social installation on Drupal, we have two browsers open, we can look up a user, we can send them a message using the input, and you'll see it pop up on the other side. And of course we can reply, and that'll just update without refreshing. If you want to know more about schema design and how we get to the schemas I'm going to show you, I recommend this book, Production Ready GraphQL. I spent about a year trying to figure out all of the GraphQL stuff, looking around the internet for best practices. Then I came across this book, and it basically contained everything I'd figured out on my own. So now we buy this book for all of the Open Social developers and just give it to them to get started, because it covers everything from security, schema design, and schema evolution to testing, rate limiting, all the good stuff. So let's dive in. The GraphQL module, that's what you're here for. There's always a module for that in Drupal, and GraphQL is no exception. There are actually two versions at the moment. They both use the webonyx graphql-php library under the hood; that's the de facto standard GraphQL library in PHP, and it follows the JavaScript reference implementation. All the contributions happen on GitHub, and both 3.x and 4.x are actively used by the community, as you can see here. So why are there two versions? Version 3 basically gives you an entire schema out of the box based on your data structures in Drupal, and version 4 requires you to write those things yourself. You may ask: why would you want to do it yourself?
That's actually what I prefer, because GraphQL is all about making life as easy as possible for the frontend application, so it gets exactly what it needs. And what a frontend application needs is usually going to be different from the way you store your data. So version 4 is also what we'll be looking at today. And if you're currently using version 3 and you're thinking, hey, I want some of that version 4 goodness, but I don't want to do everything myself, that's thankfully where Jesús Manuel Olivas has done some work and created the GraphQL Compose module. This works on top of GraphQL version 4 to give you some of those features from version 3, like automatic schema generation. He has been posting updates about this in the #graphql channel on Slack, and you can find the graphql_compose project on Drupal.org. So let's look at implementing our API. When we have our schema design, the first thing we'll do in the Drupal module is define a base schema. Everything in the GraphQL module works as plugins, and this is no different. Our base schema will contain some of our shared types, and we'll later show how schema extensions can reuse those and build on top of them to extend the functionality. So we do this in a DrupalconBaseSchema.php file, in the Plugin\GraphQL\Schema namespace, and we start out by just creating a class that extends the base plugin and has a Schema annotation. The only things we really need here are an ID, which we'll use to reference it later, and a human-readable name for developers. We only need to implement one function, and that's getResolverRegistry. The resolver registry is something provided to us by the GraphQL module that we'll dive into a bit later. For now we just create it and return it. Once we've created our plugin, we can actually start writing some schema files that the plugin can pick up. Here you see we define our schema with the three operations, as well as empty types for each of those operations.
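A minimal version of such a base schema plugin could look like the sketch below. The module and class names are illustrative; the base class, annotation, and registry come from the GraphQL module's 4.x API:

```php
<?php

namespace Drupal\drupalcon_chat\Plugin\GraphQL\Schema;

use Drupal\graphql\GraphQL\ResolverRegistry;
use Drupal\graphql\Plugin\GraphQL\Schema\SdlSchemaPluginBase;

/**
 * @Schema(
 *   id = "drupalcon",
 *   name = "DrupalCon demo schema"
 * )
 */
class DrupalconBaseSchema extends SdlSchemaPluginBase {

  /**
   * {@inheritdoc}
   */
  public function getResolverRegistry() {
    // The registry maps (type, field) pairs to resolver callables.
    // Later in the talk it gets populated with addFieldResolver() calls.
    return new ResolverRegistry();
  }

}
```

The plugin ID ("drupalcon") is what schema extensions will reference in their annotations, and the class name must match the file name for Drupal's plugin discovery to find it.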
We name the types after the operations, but that's more convention than requirement. On the right-hand side you see we define some shared types. At the top there's a DateTime type, which only has a timestamp. We could return a timestamp directly rather than wrap it in a type, but this gives us the opportunity to expand it in the future with other representations of time if we'd like to. The second thing you see is the Node interface. This is common in the Relay GraphQL client to make refetching of data easier; we'll implement that when we get to chat messages later. And the final four things you see, the Cursor scalar, the Connection interface, the Edge interface, and the PageInfo type, are all for the Relay connection specification, which helps paginate long lists. They work particularly well if you want to do infinite scroll, because they make you resilient against things like insertion and deletion within your list as you're paginating. With our schema defined, our module doesn't actually know yet how to get data for all of those fields when it gets a request. That's where the resolver registry comes in that we defined earlier. So rather than returning it immediately, let's keep it around for a little bit and look at what we can do with it. If we look into the class itself, we'll see that there's actually one function that's very important to us, and that's the addFieldResolver function. As you can see, it takes some inputs and just stores those in an array. So let's look at the inputs. If we add the DateTime example, we can see that it first takes a type, our DateTime type in this case, and the next argument is a field, which is timestamp. The final thing we give the module is a resolver that will do some work for us, but we don't yet know where that comes from. So if we go back up to our schema plugin, we can add the resolver builder provided to us by the GraphQL module, and that's what will help us get these resolvers.
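Reconstructed from the description above, the base schema file (filename and exact nullability are assumptions) might look roughly like this in GraphQL schema definition language:

```graphql
schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}

# Empty operation types; schema extensions add the actual fields.
type Query
type Mutation
type Subscription

# Wrapping the timestamp in a type leaves room for other
# representations of time later.
type DateTime {
  timestamp: Int!
}

# Relay-style global object identification.
interface Node {
  id: ID!
}

# Relay connection specification types for pagination.
scalar Cursor

interface Connection {
  edges: [Edge!]!
  pageInfo: PageInfo!
}

interface Edge {
  cursor: Cursor!
  node: Node!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: Cursor
  endCursor: Cursor
}
```

Defining the operation types empty here is what allows each schema extension to contribute its own fields with `extend type` later.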
If we now call this function, again adding DateTime and timestamp, then in this case we can use the fromParent method to get a resolver from the resolver builder, and fromParent will just pass on the input that it gets. So the input in this case is the output. We can do the same thing for our other types and fields, but using different producers. In this case, we produce the connection_edges data producer. We'll look at the implementation for that a bit later, and we map an input to it. Here we map the connection, which again comes from the parent field. We repeat this for the other fields on Connection, as well as our fields on Edge, and then we return our registry again, which now has these resolvers registered. As I mentioned, let's look at this connection_edges data producer. Everything in the GraphQL module is a plugin, including these data producers, so this goes in the Plugin\GraphQL\DataProducer namespace. Again, it's a class that extends the data producer plugin base, and we add the data producer plugin caching interface so that the module can do field-level caching, and we have our DataProducer annotation. We give it the ID, which we just saw when we were calling the resolver builder's produce method, as well as a human-readable name and description. We give some information about what output it produces and what it consumes, and here you see the connection that we mapped into our data producer with the map function. This plugin only needs one function implemented as well, which is the resolve function, and here you can see that it gets this ConnectionInterface connection, which is the input that we mapped, calls edges on it, and that creates its return value. So this is how you would create custom data producers, which could be a lot more complex than this. We would do the same for all of our other fields, but I won't bore you with that. With some of the base schema types defined, the next thing we do is define a schema extension.
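A sketch of such a data producer plugin follows. The `ConnectionInterface` is the talk's own custom abstraction (not part of the GraphQL module), and the exact interface namespace is from memory of the 4.x module, so treat this as illustrative:

```php
<?php

namespace Drupal\drupalcon_chat\Plugin\GraphQL\DataProducer;

use Drupal\drupalcon_chat\GraphQL\ConnectionInterface;
use Drupal\graphql\Plugin\DataProducerPluginCachingInterface;
use Drupal\graphql\Plugin\GraphQL\DataProducer\DataProducerPluginBase;

/**
 * @DataProducer(
 *   id = "connection_edges",
 *   name = "Connection edges",
 *   description = "Returns the edges of a connection.",
 *   produces = @ContextDefinition("any", label = "Edges"),
 *   consumes = {
 *     "connection" = @ContextDefinition("any", label = "Connection")
 *   }
 * )
 */
class ConnectionEdges extends DataProducerPluginBase implements DataProducerPluginCachingInterface {

  /**
   * The parameter name matches the "connection" key under "consumes",
   * which is how the mapped input reaches this function.
   */
  public function resolve(ConnectionInterface $connection) {
    return $connection->edges();
  }

}
```

The `consumes` keys in the annotation are the same names used with `->map()` in the resolver builder, which is the plumbing the talk returns to near the end.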
This is another plugin, so let's create a plugin file. This goes in the Plugin\GraphQL\SchemaExtension namespace. We pull in some things from the GraphQL module again, and we define our plugin class using the SdlSchemaExtensionPluginBase class. The annotation we add here is almost the same as the one for the base schema. We again add an ID, a name, and a human-readable description, but what we also specify here is the base schema that we're extending. So that's where our drupalcon ID comes back, and that's how the module knows, when loading the drupalcon schema, that it has to include this schema extension as well. With our plugin class and annotation defined, there's again only one function that we actually need to implement, and that's the registerResolvers function. We don't create our own resolver registry here; we actually get the one that we returned from our base schema, and we again make a resolver builder so that we can add our fields to it. Once we've created this class, we can again start defining schema. Where a base schema only has one schema definition language file, our schema extension actually has two. The first one we look at here is .base.graphqls, which defines any new types that the schema extension wants to introduce. So we have our ChatMessage type, which implements Node and has an ID, a sent time, a sender, and some content. The content can either be a message from a user, which is the union that you see: MediaChatMessageContent contains only text. It can also be a user event, for example someone joining or leaving a chat group, or it could be a deleted message. And if we look at the concrete types for those, we see our MediaChatMessageContent with the text field, and the UserEventChatMessageContent, which has an event type enumeration, either conversation created, join, or part, as well as a subject, who it happened to.
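The shell of the schema extension plugin could be sketched like this; the plugin ID and class name are invented for illustration, while the `schema = "drupalcon"` key is what ties it back to the base schema plugin defined earlier:

```php
<?php

namespace Drupal\drupalcon_chat\Plugin\GraphQL\SchemaExtension;

use Drupal\graphql\GraphQL\ResolverBuilder;
use Drupal\graphql\GraphQL\ResolverRegistryInterface;
use Drupal\graphql\Plugin\GraphQL\SchemaExtension\SdlSchemaExtensionPluginBase;

/**
 * @SchemaExtension(
 *   id = "drupalcon_chat",
 *   name = "DrupalCon chat",
 *   description = "Adds real-time chat to the DrupalCon schema.",
 *   schema = "drupalcon"
 * )
 */
class ChatSchemaExtension extends SdlSchemaExtensionPluginBase {

  /**
   * {@inheritdoc}
   */
  public function registerResolvers(ResolverRegistryInterface $registry) {
    // The registry is the one created by the base schema plugin; we
    // only add resolvers for the chat fields to it here.
    $builder = new ResolverBuilder();
    // $registry->addFieldResolver('Query', 'chatMessage', ...); etc.
  }

}
```

Note that unlike the base schema, this plugin receives the registry instead of creating one, which is exactly the reuse the talk describes.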
And our DeletedChatMessageContent is a little bit special, because GraphQL doesn't have empty types, but if a message is deleted, there's nothing there. So we give it an underscore Boolean field that's nullable; it's not a field you'd actually query, but it does give us a concrete type. Similarly, we implement concrete types for pagination, so you see the connection here again, and we create some inputs for our mutation. To send a user chat message, you have to provide a conversation ID as well as the message content, and the message content in this case always has to be text. We don't put text in our chat message input directly, because we want to be able to add fields to it in the future, for example when we start adding media support. Finally, we create a payload type, and this input/payload pattern is something we use a lot at Open Social because it makes mutations very predictable. You may be tempted, when using mutations for the first time, to reuse input and output types, but you'll find quite quickly that they're going to deviate slightly, so it's easier to just accept some duplication and create a bespoke type per mutation. So with these new types defined, we want to hook this into our existing base schema, and that's where the .extension.graphqls file comes in. You can see here we extend the Query type that we defined earlier without any fields, and we now add the chatMessages field that returns our connection, as well as the chatMessage field that allows loading a single chat message. Similarly, we define the mutation and the subscription. The GraphQL module itself doesn't support handling subscriptions, but it does support defining the schema for your subscription, so you can add a service next to it to actually handle WebSockets and real-time data.
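Pieced together from the description, the .extension.graphqls file might look something like the following. The chatMessages and chatMessage fields are named in the talk; the mutation and subscription field names are assumptions:

```graphql
# chat.extension.graphqls (filename assumed): hook the new chat types
# into the empty operation types from the base schema.
extend type Query {
  chatMessages(
    after: Cursor
    before: Cursor
    first: Int
    last: Int
    reverse: Boolean = false
  ): ChatMessageConnection!
  chatMessage(id: ID!): ChatMessage
}

extend type Mutation {
  sendUserChatMessage(input: UserChatMessageInput!): UserChatMessagePayload!
}

# The module only defines the subscription schema; delivery is handled
# by a separate service outside Drupal.
extend type Subscription {
  chatMessageAdded(conversation: ID!): ChatMessage!
}
```

The input/payload pair on the mutation is the pattern described above: a bespoke input type and a bespoke payload type per mutation, so the two can evolve independently.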
The reason we need to split up these files in our schema extension is that when the GraphQL module is loading all of these things for the GraphQL library, it needs to make sure that everything is in the right order, so we don't try to extend a type that doesn't exist yet. This way, all of your new type definitions can come first in your completed document, and extensions come last. Again, with these schema definition language files created, it's time for us to tell the GraphQL module how to actually turn a request for those fields into data, by filling in the resolver registry. I'm not going to show you all of them, but I want to highlight some that show new functionality of the resolver builder that we haven't seen yet. The first one is for our chatMessage field, which takes the ID of a chat message and returns a ChatMessage type that actually maps directly to a chat message entity in Drupal. We can use a built-in data producer from the GraphQL module here, entity_load_by_uuid. It takes two inputs. The first is the type; in this case we use a hard-coded value, chat_message, and we can do that by using the fromValue method on the resolver builder. We also map a UUID, which is actually the id argument that we had on our chatMessage field. The next one to highlight is the chatMessages field on the query. This uses a custom messages data producer that is a little complex. It takes five inputs: after, before, first, last, and reverse. These are all arguments from the field, and showing you this data producer could be a talk on its own. If anyone is interested, you can find the implementation on GitHub. Basically, what we've done is create an EntityConnection class, which contains all the complex logic of figuring out how to do proper pagination, making sure that it respects the right values and never misses anything.
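Inside registerResolvers(), the two resolvers just described could be registered like this. The entity_load_by_uuid producer and the fromValue/fromArgument methods are part of the GraphQL module; the chat_messages producer ID is the talk's custom plugin:

```php
// Load a single chat message entity by the UUID from the id argument.
$registry->addFieldResolver('Query', 'chatMessage',
  $builder->produce('entity_load_by_uuid')
    ->map('type', $builder->fromValue('chat_message'))
    ->map('uuid', $builder->fromArgument('id'))
);

// Paginate chat messages; all five inputs come straight from the
// field's arguments.
$registry->addFieldResolver('Query', 'chatMessages',
  $builder->produce('chat_messages')
    ->map('after', $builder->fromArgument('after'))
    ->map('before', $builder->fromArgument('before'))
    ->map('first', $builder->fromArgument('first'))
    ->map('last', $builder->fromArgument('last'))
    ->map('reverse', $builder->fromArgument('reverse'))
);
```

The keys passed to `->map()` must match the `consumes` keys in the data producer's annotation, which is how the proxy described later wires arguments to the resolve function's parameters.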
And we've created query helper classes that contain the actual logic of what I'm trying to paginate: is it a user, is it a chat message? This allows us to keep some of the difficulty at bay. So if you want to do this in your own project, you can find these implementations on GitHub. The next one: I said we had the DeletedChatMessageContent, which had a field you couldn't query, but just to make sure our linting doesn't complain about unmapped fields, we map our underscore field to a static value of null. And you can provide static values, again, with the resolver builder's fromValue. Finally, we had one union in our schema, which could return one of three different types: our chat message content. And because the GraphQL library needs to know how to resolve fields for the type it got back, it actually needs us to tell it what concrete type that union returned. For that, we call addTypeResolver. This is a method similar to addFieldResolver on our resolver registry; it just keeps an array behind the scenes. We give it the name of our abstract type, the union, as well as a callable, in this case a class with a static method. If we look at that class, it's quite simple and actually fits on a slide. It has a single function that gets the resolved value, the chat message content, simply queries its type, and returns a string with the name of the concrete type. After calling this, the GraphQL library knows what concrete type it needs to continue resolving values for. Finally, the last thing to highlight is mutation resolvers. A mutation has an input and an output that we want to register, and at Open Social we created a little pattern for it. This registerMutationResolver is a custom function, and the implementation is about this: we always know that we want an input type and a payload type, and that we want to split our data producers in two.
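A type resolver along the lines the talk describes could be sketched as below. The `ChatMessageContentInterface` and its `getType()` method are hypothetical stand-ins for however the value object exposes its kind; only the shape (value in, concrete type name out as a string) follows the talk:

```php
<?php

namespace Drupal\drupalcon_chat\GraphQL;

use Drupal\drupalcon_chat\ChatMessageContentInterface;

/**
 * Resolves the concrete GraphQL type of the ChatMessageContent union.
 */
class ChatMessageContentTypeResolver {

  public static function resolve(ChatMessageContentInterface $content): string {
    // Ask the value object what kind of content it holds and translate
    // that into the GraphQL type name.
    return match ($content->getType()) {
      'media' => 'MediaChatMessageContent',
      'user_event' => 'UserEventChatMessageContent',
      'deleted' => 'DeletedChatMessageContent',
    };
  }

}
```

It would be registered with something like `$registry->addTypeResolver('ChatMessageContent', [ChatMessageContentTypeResolver::class, 'resolve']);`, mirroring the addFieldResolver calls.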
You can see here we use the builder's compose method at the bottom to actually chain the inputs and outputs of two data producers together. The first one is the field name with an _input suffix. What that does at Open Social is validate the input, making sure that everything that should be there is there, or return an error early. The second data producer is just named after the field; it takes the validated input and focuses only on the business logic. So this lets us split out some of the hairy input validation work that we needed to do. So, as I said, the GraphQL module doesn't support handling subscriptions, because Drupal doesn't really do async well. We solved that by putting an extra service in front of our Drupal site that got new messages through RabbitMQ, took in a subscription request, and converted it to a query. I did a talk about this at GraphQL Galaxy if you want to know more, and I'll try to include the code for this in the GitHub repository as well, because unfortunately I don't have time to go over it today. But with our schema and our schema extension defined, and the GraphQL module informed of how to map the schema to actual functions that we can execute, we can take a look at what it takes to receive an HTTP request and actually turn it into data. That's the journey of a GraphQL request. GraphQL itself, as a specification, is transport agnostic, but within Drupal you'll pretty much always use HTTP requests. So the first thing that happens is we get an HTTP request in Drupal. It'll figure out that it's routed to the GraphQL controller, and it'll upcast some of the parameters in the request into a class that we can actually use later. Once it knows it's a GraphQL request, it'll call into the request controller's handleRequest function in the GraphQL module, and this will check whether it's a batch request, i.e.
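The compose pattern described above might be sketched like this; the helper's name and the `_input` producer convention come from the talk, but the exact signature is an assumption:

```php
/**
 * Sketch of the Open Social registerMutationResolver() helper:
 * every mutation is a validation producer composed with a business
 * logic producer.
 */
protected function registerMutationResolver(
  ResolverRegistryInterface $registry,
  ResolverBuilder $builder,
  string $field_name
): void {
  $registry->addFieldResolver('Mutation', $field_name,
    $builder->compose(
      // First producer: validate the raw input, or error out early.
      $builder->produce($field_name . '_input')
        ->map('input', $builder->fromArgument('input')),
      // Second producer: receives the validated input as its parent
      // and only has to implement the business logic.
      $builder->produce($field_name)
        ->map('input', $builder->fromParent())
    )
  );
}
```

Because compose feeds each producer's output to the next as its parent, the business logic producer can take its validated input via fromParent(), which is what makes the split clean.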
containing multiple GraphQL queries or operations at the same time, or whether it's a single request. If it's a batch request, it'll just loop through each sub-request and then continue as if it were a single request, but if you were using GraphQL in, for example, a ReactPHP environment, which has an event loop, you could process all of these at the same time. The next thing it does is find the server entity, which I omitted in this presentation, but in the GraphQL module you create a server entity that takes the schema we created, and it'll call executeOperation on that server entity. So we can dive into that function. The first thing it gets is the operation parameters, and those are actually what was created by the route upcaster from the request. The first thing it does is call into the GraphQL library to get the current execution implementation factory, because that is unfortunately some global scope, and replace it with our own. This is actually a Drupal service, so if you ever find that GraphQL doesn't do what you want it to do, this could be a place for you to hook in: just replace the GraphQL executor service with your own. The next thing we do is load the configuration from our server entity, and we'll dive into that function in a little bit, because it's quite interesting. Once we have our configuration, we go back to the GraphQL library and let it do most of the heavy lifting by calling the helper's executeOperation with our newly created config and our operation. We check the result to make sure that it's cacheable. Even if we get back a result that's not cacheable, we make sure that Drupal knows that it's uncacheable, and then we return it. Whatever happens, whether it went right or wrong, we make sure to restore the previous implementation factory.
So as I said, the configuration function on the server entity is actually quite interesting, because it gives us a lot of power, and it's also a good place for you to hook in if you find that something you want to do is missing. It consists of two main parts. The first part is getting the GraphQL schema manager service, which loads the schema that's configured for this server entity, and that is the schema plugin that we configured at the start of this presentation. This gets loaded here. What you see in the last three lines is that it checks whether the schema implements the configurable interface. This is something we didn't do, but there is a composable example in the GraphQL module that uses this interface, and it allows you to actually decide which schema extensions you want to load, rather than just loading all of them that have specified your schema in their annotation. The next thing we do is get our resolver registry. This actually calls that function on our base schema that we started out with, where we created a new resolver registry, and it'll have our populated registry that will be passed to the GraphQL library later. Next up, we use the ServerConfig class from the GraphQL library, and we start populating it. We can control what kind of debug information we want; this is exposed through the UI of the server entity. We can enable or disable query batching if we have that need. We can set validation rules: the GraphQL library provides a large set of validation rules that are enabled by default by the GraphQL module. If you want to change those, then this would be the place to override. You could take the overridden getValidationRules, modify it as needed, and think about rate limiting, complexity limiting, all those fun things. Next up, the persisted query loader. We skipped over this a little bit.
One thing you can do in GraphQL, rather than sending your query every time, is register a query, get back an ID, and just send the ID every time. I think there was some work done on supporting Apollo automatic persisted queries, but if you want a different implementation, this is the place to do it again. Next, we actually set the schema in the configuration object, and here we call the getSchema function on our base schema plugin. What that does in the base plugin class that we extended is go out, fetch all those .graphqls files, parse them, get all the files for our schema extensions, parse those as well, and then hand an abstract syntax tree to the GraphQL library that it can actually use to figure out how to resolve our requests. The next thing we set is the promise adapter. As I mentioned, within a ReactPHP context you could do this asynchronously, but within Drupal we need to do this synchronously, so we use a SyncPromiseAdapter. We also set some context that will be available in all of our resolvers, which in this case contains our plugin as well as the parameters for the entire request. And finally, we set the field resolver, which is the function that the GraphQL library calls to resolve a field; that's a function we'll dive into a little deeper later as well. When all of that's configured, and there's a lot you could change if you wanted, we return the server configuration, and we're back in executeOperation, where we can call this helper's executeOperation function. So, without code, let's look a little bit at the call stack of where that goes. We have our server entity's executeOperation, which calls executeOperation on the helper, and this does a few things for us.
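Stripped of the Drupal specifics, populating the graphql-php ServerConfig as described above looks roughly like this. Only setters I believe exist on the library's ServerConfig are used; `$schema`, `$rules`, `$context`, and `$fieldResolver` stand in for the values the module computes, and the persisted-query setter is omitted because its name has varied between releases:

```php
use GraphQL\Executor\Promise\Adapter\SyncPromiseAdapter;
use GraphQL\Server\ServerConfig;

$config = ServerConfig::create()
  ->setSchema($schema)                          // AST-based schema from the .graphqls files
  ->setQueryBatching(TRUE)                      // allow batched operations if enabled
  ->setValidationRules($rules)                  // library defaults plus any overrides
  ->setPromiseAdapter(new SyncPromiseAdapter()) // Drupal executes synchronously
  ->setContext($context)                        // available inside every resolver
  ->setFieldResolver($fieldResolver);           // callback into the module's registry
```

Each of these setters corresponds to one of the hook-in points the talk lists: swap the validation rules for rate or complexity limiting, swap the field resolver to change how the registry is consulted, and so on.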
It ensures that we've properly set a schema, ensures that the server supports query batching if this is a batched operation, validates the parameters against the validation rules that we set, determines the operation to execute, makes sure that if it's a GET request we only do query operations, because mutations require POST, and any error rendering and handling is applied here as well. Next up, it calls into the GraphQL class's promiseToExecute, which actually applies our validation and calls on to the executor's promiseToExecute. That executor is created by the execution implementation factory that we configured, so this comes back to the GraphQL module. We try to let the GraphQL library do as much as possible, but in some places we come back to the module. This creates an instance of the configured executor, which by default is the one provided by the GraphQL library, because why do everything yourself? The GraphQL module wraps the executor from the GraphQL library a little bit to implement request-level caching: we check if the request is cacheable, and if it is, we execute it cached, trying to find an existing response and returning it if possible. If not, we execute it uncached, and then we go into the reference executor's doExecute and executeOperation. Here we check if it's a mutation, because if it's a mutation, then we have to process every mutation field one by one, since they contain side effects. If the second mutation causes an error, we don't want to execute the third and fourth. If it's not a mutation, so a query or subscription, we can process every top-level field at once and basically go level by level. To resolve a field, the reference executor calls its resolveField function, then resolveFieldValueOrError, and then we have this placeholder, which is the actual resolve function. So that's where we get back to some of our own code, provided by the GraphQL module.
That resolve function was set when we called setFieldResolver, and it looks like a lot, but if I remove some of it, we see that what it returns is actually a function that takes some inputs and makes use of the registry that we had before. We can look at those inputs. The value is the parent of the field that we're resolving, or the root value from our server config. So if you have, for example, a connection with edges, this value would be the connection it could use to get the edges. It has the arguments for the field; for our chatMessage this would be, for example, the ID. The context is what we set in the server config; if you have some advanced use cases, it can give you more information about the query itself, which can let you do some optimization. And finally, info contains information about your schema, like the field definition and any subfields there may be. If we look inside this function, we can see that the first thing we do is take our resolve context and create a new field context. This is used so we can track cacheability information on a field level as well as on a request level. The next thing we do is call the resolveField function on our registry. So this is actually the registry that we created in our base schema, which has been passed all this way. Finally, once that gives a result, we check for any caching metadata that we need to store. So let's dive into that resolveField on our registry. The first thing it does is call getRuntimeFieldResolver with the arguments that we got, and you can see here at the bottom that it just delegates to getFieldResolverWithInheritance with the parent type and the field name. You may already be thinking that this sounds a lot like the array that we were filling before, and if we look at it, you would actually be right. The getFieldResolverWithInheritance function first tries to get the field resolver for the type and field explicitly. If that works, it'll just return that resolver.
If that doesn't work, it checks whether the type we were trying to get a resolver for implements any interfaces. What this allows us to do, for example: we had the message connection for which we didn't actually map any resolvers, but we had mapped our resolvers for the Connection type itself, so this allows us to reuse some of those resolvers. It loops over the interfaces, tries to find a resolver there, and then returns. And if you look at the bottom, at the getFieldResolver function, this is actually the mirror of our addFieldResolver function: it just does an array lookup and returns what we set there. If we go back to the resolveField function: if we got a resolver, we actually call the resolve method on it; otherwise, we call the default field resolver provided by the GraphQL library. It doesn't do a lot, but it knows how to do object access and array access. So if your types are really simple, you could use that, but I would recommend implementing your own things and being more explicit about how you want to resolve your fields. The final thing we're going to do in this presentation is take a look at some of these resolvers that we've been creating with our resolver builder, because that's the last bit of magic I want to demystify. The fromArgument method on the resolver builder actually instantiates an Argument class, and you can see it implements the resolver interface. We gave it one argument when we created it, which is the name of the argument we want to fetch. And you can see that the resolve function is actually quite simple: it gets a lot of data, but the only thing it really looks at is the args object containing the arguments that were provided to the GraphQL field, and it tries to fetch the argument we wanted from that. You may think that the custom data producers that we created with produce are just as simple, but those are slightly more complex.
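The inheritance lookup just described can be condensed into plain PHP. This is an illustration of the logic, not the module's actual code, with the Connection/MessageConnection example from the talk:

```php
// Resolvers are stored per (type, field) pair, exactly like the
// array filled by addFieldResolver().
$resolvers = [
  'Connection' => ['edges' => 'connection_edges_resolver'],
  // MessageConnection has no resolvers of its own.
];
// Which interfaces each type implements, per the schema.
$interfaces = [
  'MessageConnection' => ['Connection'],
];

function getFieldResolver(array $resolvers, string $type, string $field) {
  // The mirror of addFieldResolver: a plain array lookup.
  return $resolvers[$type][$field] ?? NULL;
}

function getFieldResolverWithInheritance(array $resolvers, array $interfaces, string $type, string $field) {
  // First try an explicit mapping for the concrete type.
  if ($resolver = getFieldResolver($resolvers, $type, $field)) {
    return $resolver;
  }
  // Otherwise fall back to any interface the type implements.
  foreach ($interfaces[$type] ?? [] as $interface) {
    if ($resolver = getFieldResolverWithInheritance($resolvers, $interfaces, $interface, $field)) {
      return $resolver;
    }
  }
  return NULL;
}

// Asking for MessageConnection.edges falls through to the resolver
// mapped on the Connection interface.
```

This is why the talk's schema extension never had to register edge resolvers for its concrete connection type.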
We don't actually call our data producer plugins directly; they're wrapped in a data producer proxy. So what you'll see here: we'll call this prepare method, which will actually get the right data producer plugin. When that's done, we'll do some magic with the context. That is actually where you saw this map function, and where in the annotation we defined this consumes connection property; this basically does the plumbing for that. It'll also take care of field-level caching if it's needed: if caching is enabled, it tries to load the cached value for this field, and otherwise resolves the field and stores the cached value. If caching is disabled, it just always resolves it. And when you know all of this, you actually know how you go from the query on the left all the way to the result on the right. I know that's already been a lot, but if you want more, there are things to talk about like testing, directives, schema design, custom validation, and query loader plugins. We hang out in the GraphQL channel on the Drupal Slack. You can find me on alexandervarwijk.com or on Twitter as Kingdutch. Bless you. I was asked to remind you of the contribution opportunities today, tomorrow, and Friday. Also, please fill out the session survey as well as the general conference survey. Thank you. Actually, time for Q&A. Didn't expect it. Hi, I have a question. The database is just a Drupal database, so just a relational database for the actual data you load, right? Yes. And I've seen that you're using UUIDs, this hexadecimal stuff, as an entity ID, but I thought the entity IDs in Drupal are always integers. Because we are running into problems: we are developing against a graph database, with RDF and graph data. And it would be really nice to just use the IRIs or URIs as the entity ID, but Drupal core and a lot of modules always want integers as entity IDs. So how do you solve those problems? Because I saw you just using UUIDs. Yes.
So you're correct that Drupal uses numeric IDs pretty much everywhere, but as of some Drupal 8 version, it also creates UUIDs for every entity. If you look at the entity type manager, there's a method there to load things by UUID, and that is what the entity load by UUID data producer uses under the hood. So if you're doing anything with decoupled applications, I always advise against giving out the Drupal ID; use UUIDs instead, for two reasons. An ID is really easy to confuse: if I have number two, I don't know if that's a user or a chat message or something else entirely, and if you're doing things like delete operations, that's a big deal. If you're using UUIDs, then those should be unique across the board, with the asterisk of collisions, which are very rare; you'd be unlucky to run into one. So if you try to delete a user with a certain UUID that's not actually a user, you'll just get an error and won't be able to do it. So yeah. If you want to use URIs, I guess you could create a custom property on all of your entities and create a custom loader. But can your UUID maybe just be the UUID with some prefix, and then... Yeah, it's a bit of a semantic web thing. You just want to point to one IRI, and it's unique across the whole web. And we... Oh, sorry. Yeah, we just want to use it as a UUID because it's not in our database only; it's about the web stuff and semantic web stuff. So you want one URI for one object, and we want to use this one and not go the way through the Drupal database, mapping entity ID to URI and then going back to the graph database, where the ID is a URI. We want to directly store that data as managed also by Drupal. But I will look up the UUID manager. Yeah, so a good thing to keep in mind is that the ID type in GraphQL is actually defined in the specification, and it's supposed to be an opaque type. So specifically, your client isn't really allowed to touch it, or shouldn't care about its format.
So if you just want to pass URIs around as if they were IDs, you would have to implement your data producers to figure out how to do that. But your clients shouldn't really know any better, because they're just IDs that they give to your GraphQL API and get back. Any more questions? Thank you. Enjoy the rest of the conference.
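For reference, the load-by-UUID path mentioned in this answer looks roughly like this in Drupal 8+ (a sketch; the `chat_message` entity type and the UUID value are just placeholders, and a bootstrapped Drupal site is assumed):

```php
<?php
// Loading an entity by UUID via Drupal's entity.repository service,
// which is what the entity load-by-UUID data producer builds on.
$repository = \Drupal::service('entity.repository');

// Placeholder UUID for illustration only.
$uuid = 'a1b2c3d4-0000-0000-0000-000000000000';

// Returns the entity or NULL. A UUID belonging to a different entity
// type simply won't match, which is one reason UUIDs are safer to
// expose than numeric IDs.
$message = $repository->loadEntityByUuid('chat_message', $uuid);
```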