Welcome, everyone. My phone says it's 8:30, so let's get going. Great to be presenting in person again; a little less great that it's this early in the morning. I hope everyone enjoyed their parties or get-togethers last night. If you need some emergency sugar during the presentation, there are some stroopwafels on the table, manufactured here in Portland but approved by all the Dutch people on the team.

So today we're going to be talking about GraphQL APIs. My name is Alexander Varwijk. I'm the lead front-end engineer at Open Social, and I've been working with Drupal for just over ten years. You can find me on Drupal.org or the Drupal Slack as Kingdutch, same name on Twitter. I've contributed to the webonyx GraphQL library and the GraphQL module as well. Fun fact: today is actually King's Day in the Netherlands, which is fitting with my username. If you want to see the slides for this talk, during the presentation or after the fact, you can find them on my website, alexandervarwijk.com/talks.

I work for a company named Open Social. We build community engagement platforms as a service, and we try to help organizations connect with their members. Some of the organizations that we work with are on the screen. If you want to work on any of the things that I'm talking about today, then go to careers.getopensocial.com, because like everyone at DrupalCon, we're also hiring.

So what is and isn't in this talk? My initial slides for this presentation were about an hour and fifty minutes, but they only gave me fifty minutes, so I had to cut some things. We will be looking at setting up a modular schema, and I'll take you through the GraphQL module and what it takes to get from an HTTP request to a response. I hope that you come away from this with the internals a bit demystified and with some confidence to dive into them yourself. I won't go into depth on schema design.
I'll give you some pointers there, and I won't be able to cover testing, but hopefully I can get that out in an asynchronous format later this year.

A little disclaimer: I've been working with GraphQL in Drupal for about two years now, but while making this presentation I also found some things that I could have improved. So I hope that you learn from this presentation, but don't copy blindly, and if you find things that I could have done better, let me know after the talk or later on Twitter.

I'm going to go through an example based on the work that we've done for a real-time chat at Open Social, and we'll be looking at a little slice of that. I'll only cover the sending and receiving of messages within a single conversation, but those concepts should transfer to extending that with things like conversation management and user management as well.

The first thing that we always do when we start with GraphQL at Open Social is schema design. The "design" is in there for a reason, because with GraphQL we try to start from the purpose of the applications that we're trying to serve, rather than from the data that we already have. One of the things that you may notice when designing a GraphQL schema is that you actually get some data duplication in your API. Usually that will be different representations of the same data, to give clients easy access to those representations without having to transform them themselves, and that's actually fine.

For what we're working on today we can use a user story: as a chat participant, I want to be able to send chat messages and receive chat messages from other participants without refreshing my page, so that I can easily communicate with others. That includes receiving a list of existing chat messages, creating a mutation to send new ones, and also having a subscription to get real-time updates for new messages that are sent.

Here's a little demo of what this looks like within Open Social with the more full-fledged chat. You
can see two windows. I can create a new chat and search for users. When the conversation is started we don't create it immediately; only when the first message is sent do we actually create a conversation, and you can see that it pops up with the other receiver automatically, thanks to GraphQL subscriptions.

If you want to know more about schema design and the schemas that you see in this talk, one of the books that I found really helpful, which unfortunately I only found after three quarters of a year of doing my own research, is Production Ready GraphQL. It covers a lot of things: schema design, GraphQL security, how to run a GraphQL API in production, and also evolving your API in the future and giving yourself room to add new fields. I was so happy with it that I convinced our CTO to buy it for all of our developers, so anyone that joins us and starts working on this gets the book on day one.

So with that out of the way, let's take a look at the GraphQL module itself, because like all things Drupal, there's a module for that. The GraphQL module uses the graphql-php library under the hood. Contribution and the issue tracker for this module happen on GitHub, so that's where you can find it. Currently there are two versions, which sometimes causes a little bit of confusion with people: do I use 3.x or do I use 4.x? You can also see this in the usage statistics. The difference can be explained like this: version 3 takes Drupal's data and models and actually produces the API from that. This gets you going really easily, but it does expose your internal data structures in your API, which with GraphQL is usually undesirable. That changed with the rewrite of the module in version 4, which takes a different approach where you have to define your schema yourself and then wire it up. Today we'll be looking at version 4. It's a little bit more work, but you can get a much cleaner schema that way. There are a lot of people now on version 3 that kind of want to get
Thankfully, there's Jesus Olivas in the community who's been hard at work on creating automated schema generation as a module on top of GraphQL version 4 If you want to see what he's working on the updates on the GraphQL channel and Drupal Slack and Unfortunately in PowerPoint the link I believe it's the GraphQL compose project Where his work has been published that you can find what he's working on and use it yourself So that's really awesome to see So let's take a look at implementing our API for this message exchange The first thing that you'll do when you work with GraphQL version 4 in Drupal is you'll define a base schema in your custom module The base schema is there to help us with some of the common types that we'll use in our application And we'll later extend those with schema extensions in separate modules to build more specialized functionality You define your base schema by creating a plugin This goes in your module under the plugin GraphQL schema namespace and to help you get started. There's the STL schema plugin base Class provided by the GraphQL module like all plugins There's an extension that gives us an ID which is in this case Drupalcon and a human readable name to help your fellow developers To make this class complete There's one function that we need to implement and that's the get resolver registry function And the resolver registry is gonna play an important role in what we're gonna do After we've defined our schema file and it's gonna store The resolvers that will help the module figure out how to actually resolve data We'll dive into that in a little bit. 
But first, let's take a look at what this base schema is. The schema file for your base schema lives in the same module as the plugin, in the graphql folder of your module. The name of the file is the ID that we just used, with a .graphqls extension. In this case we've used the schema definition to declare our three operations, query, mutation and subscription, and we've mapped these to types with the same name. You can give them any name, but this is easy by convention.

On the right-hand side you can see we've defined some base types that we can use, such as DateTime, which we use for time representations. The only field it has is a timestamp, which returns a scalar, just to denote that it's actually a timestamp. We could have returned a timestamp directly, but by using an intermediary type we can later add specific human-readable representations, saving clients from having to add a date-manipulation library if they want to use the data directly from the API.

What you also see is the Node interface. This is not a Drupal node; the Node concept in GraphQL was actually introduced by the Relay client. We'll see this implemented in a lot of our types, and it gives consumers of the API a central way to refetch content that they may already have, regardless of what type it actually is. The only field it requires is an ID.

The last four things that you see are the Cursor scalar, the Connection interface, the Edge interface and the PageInfo type. These are types that are defined by the Relay connection specification, and what this specification allows you to do is implement pagination in a way that is resilient to removal or modification of data. The cursor is a value provided by the server which can help it, in future requests, figure out how to generate the next page of results.
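Put together, that base schema file (graphql/drupalcon.graphqls in this sketch) could look like the following. The Connection, Edge and PageInfo fields follow the Relay connection specification; the placeholder fields on the operation types and the exact field names are assumptions, since the slide itself isn't reproduced here:

```graphql
schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}

# Placeholder fields, since GraphQL does not allow empty types;
# extensions add the real fields with `extend type`.
type Query { _: Boolean }
type Mutation { _: Boolean }
type Subscription { _: Boolean }

scalar Timestamp

type DateTime {
  timestamp: Timestamp!
}

# Relay-style global object identification, not a Drupal node.
interface Node {
  id: ID!
}

scalar Cursor

# Relay connection specification types for resilient pagination.
interface Connection {
  edges: [Edge!]!
  pageInfo: PageInfo!
}

interface Edge {
  cursor: Cursor!
  node: Node!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: Cursor
  endCursor: Cursor
}
```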
This is as opposed to offset-based pagination, which breaks when data changes underneath you. I won't go into the details of the implementation, unfortunately, but since we have that open source, I'll let you know where you can find it and how we have it set up.

With our schema designed, we can return to our schema plugin, and this is the moment where we need to start telling the GraphQL module how to actually turn that schema we've just defined into something that it can fetch data with, and that's where that resolver registry comes in. So rather than returning it directly, we now store it in a variable and return it later.

We can look into the resolver registry to see one of the functions that we'll be using most often, and that's the addFieldResolver function. You can see that it takes three arguments. The first one is a type; this is just a string, the GraphQL type that you're adding the resolver for. The second is the field, the field within that type. And the third is a resolver, which is the resolver interface, and you can see that it just stores this in an internal array.

To actually start adding these resolvers, we use this function. We start out with the DateTime type and its timestamp field, and we introduce a new helper to create these resolvers: the ResolverBuilder. Here we use the simplest resolver there is, which is the fromParent resolver, and that just says: return the previous value that I got for this type. So any field that returns a DateTime hands its value along, and for the timestamp field we can pass that along as-is.

The next field that we want to map is the edges field on our connection, and for this we use a slightly more complex resolver: we use the produce function of our ResolverBuilder to get a data producer. Data producers are a concept in the GraphQL module; these are plugins that contain custom resolver logic, and by wrapping that logic in data producer plugins
We can add things like caching, which we'll see in later slides.

What's important to note here is the map function that you see after the produce. The produce function doesn't actually return your data producer directly; it returns a wrapper class that has some helper methods, and this mapping specifies how your data will go into your data producer. We'll see this connection input in a little bit, but again, we just use the value that we got from our parent field.

So let's take a look at what this connection data producer actually is. You can see that it's in the Plugin\GraphQL\DataProducer namespace; in this case, because it's related to our connection setup, at Open Social we sub-namespace it under Connection. We extend the DataProducerPluginBase class and have our annotation. The most important fields in the annotation are of course the ID, which is what we pass to the produce function, plus a human-readable name and description, again for your fellow developers. The produces definition uses the Typed Data API; unfortunately, at the time when I wrote this I wasn't very skilled with the Typed Data API, so I did the TypeScript thing and just put "any" there, with a human-readable label for what it produces. Then we have our consumes, which defines the input that we just saw in the map: a connection, again with a definition of what it is. We implement the DataProducerPluginCachingInterface, which lets the GraphQL module know that it can cache this value.

There's one function that matters in this data producer, and that's the resolve function. Here we see that we get the connection as input, which we just put in our consumes annotation, and the only thing we do with it is call its edges function and return the value. And that's how we've resolved the value that we wired up in our schema class's getResolverRegistry function.
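In code, the producer and the wiring described above might look roughly like this. It's a sketch: the `connection_edges` ID, the Connection sub-namespace and the `edges()` method are taken from the talk, but their exact spellings in the Open Social codebase may differ:

```php
<?php

namespace Drupal\drupalcon\Plugin\GraphQL\DataProducer\Connection;

use Drupal\graphql\Plugin\GraphQL\DataProducer\DataProducerPluginBase;
use Drupal\graphql\Plugin\GraphQL\DataProducer\DataProducerPluginCachingInterface;

/**
 * Returns the edges of a connection object.
 *
 * @DataProducer(
 *   id = "connection_edges",
 *   name = @Translation("Connection edges"),
 *   description = @Translation("Returns the edges of a connection."),
 *   produces = @ContextDefinition("any", label = @Translation("Edges")),
 *   consumes = {
 *     "connection" = @ContextDefinition("any", label = @Translation("The connection"))
 *   }
 * )
 */
class ConnectionEdges extends DataProducerPluginBase implements DataProducerPluginCachingInterface {

  /**
   * Delegates to the connection object, which knows how to page.
   */
  public function resolve($connection) {
    return $connection->edges();
  }

}

// Inside getResolverRegistry(), the wiring from the slides then reads:
//
//   $registry->addFieldResolver('DateTime', 'timestamp', $builder->fromParent());
//   $registry->addFieldResolver('Connection', 'edges',
//     $builder->produce('connection_edges')
//       ->map('connection', $builder->fromParent())
//   );
```

The map call is what connects the field's parent value to the producer's "connection" input defined in consumes.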
We can repeat this process a few times for other fields. To set up our other connection fields we do the same as for edges, and unfortunately the pageInfo field didn't fit on here, but we would do the same again. That concludes our base schema, with those base types that we can reuse.

To actually add some of the chat-specific functionality, we can create a separate module with a schema extension class in it. What this allows you to do is use Drupal's flexibility and modularity and only have your API available when the functionality is actually enabled, so that you don't end up with an API that doesn't match what your platform is actually doing.

So again we start with a plugin class. In this case it's a schema extension plugin, so we change the namespace slightly and we extend a different base class. We also use a slightly different annotation, @SchemaExtension. Again we give an ID, a name and a description, and important to note here is that we specify the schema that we're extending, which is the drupalcon schema. To complete our class we have to implement one method; this time it's the registerResolvers function, so slightly different from the schema class, and you can see that we actually get the resolver registry that we just created in our base schema class as input. Again, to help us with these data producers and resolvers, we create our ResolverBuilder variable.

The next step is to define our actual schema in this module, and we split this over two files this time. The first file is a base file; this again goes in the graphql folder, and it will be named after your schema extension ID with .base.graphqls appended. What this base file allows us to do is define all the types that are specific to this module. On the next slide we'll add an extension file to that, which is a file that allows us to extend types defined in other modules. So what we have here is quite a lot.
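Before walking through the schema itself, the extension plugin class just described might look like this sketch (the module and class names are illustrative):

```php
<?php

namespace Drupal\drupalcon_chat\Plugin\GraphQL\SchemaExtension;

use Drupal\graphql\GraphQL\ResolverBuilder;
use Drupal\graphql\GraphQL\ResolverRegistryInterface;
use Drupal\graphql\Plugin\GraphQL\SchemaExtension\SdlSchemaExtensionPluginBase;

/**
 * @SchemaExtension(
 *   id = "drupalcon_chat",
 *   name = "Chat",
 *   description = "Adds chat messages to the API.",
 *   schema = "drupalcon"
 * )
 */
class ChatSchemaExtension extends SdlSchemaExtensionPluginBase {

  /**
   * {@inheritdoc}
   */
  public function registerResolvers(ResolverRegistryInterface $registry) {
    $builder = new ResolverBuilder();

    // Field and type resolvers for the chat types are registered here,
    // against the registry created by the base schema plugin.
  }

}
```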
I'll go through it quickly. We have our chat message, which contains some fields that give information about the message that was sent, such as a sender. A pattern we use is to have an actor as the sender, rather than a user directly. This is because in the future we also want to support things like chatbots and automated systems sending messages, so by creating some indirection here, we give ourselves that flexibility.

You can see that the content is actually defined as a union, so our content can be one of three types. The type you'll use most often is the text chat message content, which just contains a string: the message that we're sending. The other option is a user event chat message content, which can have an event, like conversation-created, join or part, to indicate actions that users take. It also has a subject to indicate who the action was taken on, and we can use that together with the sender to create different types of events. So, for example, if I were to remove you from a conversation, I would be the sender of the event but you would be the subject, whereas if you left yourself, you would be both the sender and the subject.

Finally, we have the deleted chat message content. That's for when someone deletes the content of a message: we want to indicate that the message was there, but actually throw away the content so that it doesn't accidentally get loaded by anyone. In GraphQL,
you cannot create empty types, but a workaround that's been sort of standardized is to use an underscore field with a nullable Boolean, and we'll see how we always return null there in case it does get selected.

Furthermore, you can see that we actually implement our connection here, using those interfaces we defined in our base schema module. And finally we implement the input that's used in our mutation, as well as a payload which is returned by our mutation. This is again a pattern that was promoted by the Relay GraphQL client, which says that if you always have an input type, then you can evolve it over time. And using a payload, rather than returning a value directly, also allows us to add other information, such as user errors, or maybe in the future we want to add information about the sender rather than just the chat message in our API.

With those new types defined in our base schema file, we can go to the extension schema file. This is drupalcon_chat.extension.graphqls, and this is where we alter the query, mutation and subscription types that we defined in our base schema module. We add two fields to our query. The first is chatMessages, to get a list of messages using the pagination, and we also define a field to fetch a single chat message. For a mutation, we create a viewer-send-chat-message mutation. The reason we include the viewer here is to indicate to our API consumers that they don't need to specify a sender; we actually use the person accessing the API. And finally we extend our subscription type to say you can subscribe to new messages.
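A sketch of what those two schema files could contain. All the type, field and enum names here are reconstructions from the talk, not copies of the Open Social code, and the base file relies on the DateTime, Node, Cursor, Edge, Connection and PageInfo types from the base schema module:

```graphql
# drupalcon_chat.base.graphqls

type Actor {
  id: ID!
  displayName: String!
}

union ChatMessageContent =
    TextChatMessageContent
  | UserEventChatMessageContent
  | DeletedChatMessageContent

type TextChatMessageContent {
  text: String!
}

enum ChatMessageEvent {
  CONVERSATION_CREATED
  JOIN
  PART
}

type UserEventChatMessageContent {
  event: ChatMessageEvent!
  subject: Actor
}

# Deliberately empty: the message existed, but its content is gone.
type DeletedChatMessageContent {
  _: Boolean
}

type ChatMessage implements Node {
  id: ID!
  sender: Actor!
  sent: DateTime!
  content: ChatMessageContent!
}

type ChatMessageEdge implements Edge {
  cursor: Cursor!
  node: ChatMessage!
}

type ChatMessageConnection implements Connection {
  edges: [ChatMessageEdge!]!
  pageInfo: PageInfo!
}

input ViewerSendChatMessageInput {
  text: String!
}

type ViewerSendChatMessagePayload {
  message: ChatMessage
}

# drupalcon_chat.extension.graphqls

extend type Query {
  chatMessages(first: Int, after: Cursor, last: Int, before: Cursor): ChatMessageConnection!
  chatMessage(id: ID!): ChatMessage
}

extend type Mutation {
  viewerSendChatMessage(input: ViewerSendChatMessageInput!): ViewerSendChatMessagePayload!
}

extend type Subscription {
  chatMessageCreated: ChatMessage!
}
```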
You'll just get a chat message directly. So, with our schemas defined, let's go back to our extension class and actually look at how we define our resolvers for some of these types. I won't go through all of them, but I've picked a few that I think are interesting to show how to use this ResolverBuilder class.

The first one is the chatMessage field, and we can actually use a data producer that is provided by the GraphQL module, which is entity_load_by_uuid, and it takes two arguments: a type and the UUID. The type for us is always the same, it's always chat_message, so we use the fromValue builder, which will always return exactly that string or value. The other is the fromArgument builder. In this case we use the id argument, which was the only argument that we had on our chatMessage field, so this is actually the user input that we're using. The result will be the entity that's loaded, or null in case the entity couldn't be found.

For our pagination, we do the same thing, except we map a few more fields and we use all of the arguments, and here we use a custom data producer.
So this is the messages producer, and this is where we did the pagination implementation for Open Social. Unfortunately that could be a talk in itself, but if you want to take a look at it, it's in the Open Social repository; the base classes are in the GraphQL namespace. What we've done is create a data producer that always returns an entity connection class. The entity connection class contains all the difficult logic of figuring out how to do the pagination, how to do sorting, and figuring out whether there are more results or whether there's a previous page. The only argument the entity connection class takes is a query helper, and this query helper implements the specifics for the entity that we're trying to load, creating for example the entity query. Because if you wanted to load profile data based on user sorting, you would have to set that up differently; this allowed us to separate those two specific things. There's a query helper implementation for topics which, again unfortunately, fell off the bottom of the slide a little bit.

Let's look at the next one. The other interesting one is for the deleted type that we had. Here again we use fromValue to always return a single value. We do have to map all our fields, so we also map the underscore, and we just always return null.

Finally, we cannot only resolve values; we also sometimes need to resolve a type, and this is for the union that we defined, because it can actually be one of three concrete types. So we need a way to tell the GraphQL module and the GraphQL library which type it's actually working with. You can see how we do this: we don't define a field, we only define the type that we're resolving, and we give it a callable. In this case the callable is a static function on our chat type resolver class, and when we look at that, we can see that it's a relatively simple class that only has the one function. As input it gets
It's got them gets the message content And what it returns is the concrete type name in your GraphQL schema So you can see that that's done here with a switch based on some Information that we've defined for our entity implementation In case we get anything else that we don't know then we're gonna throw an error and some developer messed up somewhere that we need to fix Finally we also need to register our mutation and since we Use pattern for this across all of our mutations. We've created a little helper function in open social which we call Register mutation resolver we give it our registry our builder and the field name that we want to resolve for and if you look at the Implementation it can use that to figure out the data producers that it needs to load So for mutations we actually always use two Data producers the first one is for input and the other the next one is for output in our input data Producers we take the raw user input We do validation on it and we actually convert it to a typed class in PHP and that class then gets passed to our second data producer and this is used with done with the compose function from Our resolver builder and you can see that we use from parents here again But this time from parent is not actually the parent value from our previous fields But it is the value from our previous data producer within the compose function The second data producer takes this typed class which is now easier to use checks whether it's value valid And it contains any actual business logic with that input And with those highlights you can basically copy paste it to create Resolver mappings for all your other fields as well So Drupal with that could actually already do querying Amutations, but it can't cover subscriptions. 
So for that, we need to add something else. What we've done is add a separate service, because Drupal itself doesn't really do long-running connections, and we also found that the way Drupal does data loading doesn't lend itself well to running as a long-running process. So we've added a subscription server in between, and what that does is handle the subscriptions from the client. Whenever something changes in Drupal, Drupal will send a message through RabbitMQ, which triggers the subscription server to fetch the updated data using a GraphQL query and then pass that data on to the subscription that it's serving. If you want to know more about the tech that's used within this, I did a talk about it at GraphQL Galaxy, at the link below.

We can look a bit closer at the internals of this. If you look at our schema, you can see that there's a parallel between our subscription field, which returns a chat message, and our query, which also returns a chat message. You may see that the query field is nullable and our subscription is not, which could cause errors, but because we get our ID from Drupal itself, we know that the chat message actually exists, so it's okay to have our subscription non-nullable.

What we do in the subscription server itself is take the subscription query from the client and rewrite it into a query that we can send to Drupal. We start at the top with the reference query, and this just selects the chatMessage field, with ID here as a placeholder field. The next step, which unfortunately doesn't fit in the slides in this presentation, is to take that reference query and our subscription, remap all the fields into our query, and use field aliases to make sure that the structure of the response is the same as what our subscription client expects. The next step is to take the GraphQL client that we set up for this particular subscription (that's actually where we do a bit of user impersonation; you could consider it man-in-the-middle tech), and then we execute our query against Drupal and send the response as a "next" message frame back to the client. The next message class here comes from the graphql-ws PHP library, which you can find on GitHub and Packagist, and it implements the graphql-ws specification, so it actually works with all the major GraphQL clients. And that's how we manage to serve subscriptions while keeping all the data in Drupal itself.

Now that we have everything set up, let's take a look inside the GraphQL module at how we actually go from such an HTTP request to our final data response. This is going to contain some diagrams, and at the important parts we'll zoom into a bit of code. The first thing that happens when an HTTP request comes into Drupal is that it figures out which server entity is associated with that route. There's a route provider in the GraphQL module that will take the server entity configurations that you can make through the GraphQL module and map them to the route that the request is coming in on. It passes this on to the QueryRouteEnhancer, which will load the server and convert the raw GraphQL request to an OperationParams class, which is provided by the GraphQL library and is a lot easier to work with. What that enhancer also
implements is the GraphQL multipart request specification, and that converts any files attached to the request into uploaded files that you can just map as inputs to your data producers. So if you want to know how to do file uploads with GraphQL, then that is what you should be looking at.

The next step is to actually hand over to the request controller in the GraphQL module, which gets the server and the OperationParams, and this only really does one thing: it checks whether the request is a batch request. If it is, it'll map over each operation in the request and delegate to executeOperation; if it's not a batch operation, it'll just call executeOperation directly.

We can look a bit closer at executeOperation, which is already on the server entity. This gets the operation parameters and does a few things. The first thing is that it gets the current implementation factory from the GraphQL library, and that's actually what tells the GraphQL library how to function. We get the one that's currently configured, because we're going to change it and we want to be able to restore it at the end, so that any other code using the library isn't bothered. The next part is that we set the implementation factory.
You can see that this is a Drupal service, so this would be a point where you could hook into the process if you need that level of control, and it'll just call the create method on your service.

The next step is that it gets the configuration from the server entity, and this is actually a configuration class provided by the GraphQL library that controls a lot of things, so we'll take a closer look at that in a few slides. When we have that configuration, we can pass it to the helper provided by the GraphQL library, to its executeOperation function; the helper provided in the GraphQL library is a set of helper functions for HTTP GraphQL server implementations. When we have a result, we want to make sure that there's always caching information available, so in case that's not already part of the result, we'll wrap it in a cacheable execution result class and just tell Drupal that it's not cacheable. This also allows the server to cache the entire request, in case that would be possible for the operation. And finally, no matter whether we were successful or had some horrible error, we restore the previous executor implementation.

Let's dive into that configuration function, because it does a lot of the heavy lifting for us. There are two parts that we can split it into. The first part uses the plugin manager from the GraphQL module to load our base schema plugin. This is configured on the server entity: when you create it through the UI, you choose which schema that server will serve. It instantiates the plugin. What we didn't do in our example, but could do, is implement the configurable interface, which is part of the Drupal plugin system, to actually make our schema configurable. There's an example in the GraphQL module that uses this, which allows the end user of the server entity to configure which extensions are enabled.
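Stripped of the Drupal specifics, what the module delegates to here is graphql-php's server layer. A standalone sketch, assuming graphql-php ^14; the tiny inline schema stands in for the one the module builds from your .graphqls files:

```php
<?php

use GraphQL\Executor\Promise\Adapter\SyncPromiseAdapter;
use GraphQL\Server\Helper;
use GraphQL\Server\OperationParams;
use GraphQL\Server\ServerConfig;
use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;
use GraphQL\Type\Schema;

// A stand-in for the schema the module assembles from plugins + extensions.
$schema = new Schema([
  'query' => new ObjectType([
    'name' => 'Query',
    'fields' => [
      'hello' => ['type' => Type::string(), 'resolve' => fn () => 'world'],
    ],
  ]),
]);

$config = ServerConfig::create()
  ->setSchema($schema)
  ->setQueryBatching(true)
  // ->setValidationRules([...])     // complexity checks, rate limiting, ...
  // ->setPersistentQueryLoader(...) // persisted queries
  // ->setFieldResolver(...)         // the module's registry-backed resolver
  ->setPromiseAdapter(new SyncPromiseAdapter());

// The module hands a config like this to the library's HTTP helper.
$params = OperationParams::create(['query' => '{ hello }']);
$result = (new Helper())->executeOperation($config, $params);
```

The commented-out setters are the ones the next slides walk through; the module fills them from the server entity.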
The second half of the method pulls in the ServerConfig class from the graphql-php library, and it's actually interesting to go through it method by method, because while a lot of the work is done by the graphql-php library, this is where the GraphQL module tells the library how it's going to work.

The first part is setting the debug flag, which just tells the library what kind of debug information to emit, and that comes directly from your server entity. Next up is whether we support query batching or not: if we disable query batching here but get a batched operation, the GraphQL library will throw an error. Next are the validation rules. These currently come from the server entity and are not yet configurable, although there is a to-do in the code, so if you want to contribute, that could be a place to start. By default it will use all the validation rules defined in the graphql-php library; these make sure that the operation is valid and that there are no weird things going on. There are actually quite a few useful validation rules in there, and if you wanted to implement things like complexity checking or rate limiting, this would also be a good place to hook in. Next is the persisted query loader, which also comes from the server entity. Persisted query loaders are actually implemented in the GraphQL module as plugins, so while I don't have an example handy for you here, there is an open pull request in the GraphQL module's GitHub to implement the automatic persisted queries loader for the Apollo specification.

Finally, we actually tell the GraphQL library what our schema is. This calls the getSchema function of our base plugin, and that will actually load that schema file that we defined earlier. This is also the place where any extensions that you've defined, so our DrupalCon chat module,
So our Drupal con chat module We'll get instantiated and loaded The promise adapter is because the GraphQL library works asynchronously So if you were to use the GraphQL library in something like react PHP, then you could actually use promises to do work asynchronously Drupal doesn't support that so we just use the sync promise adapter to make sure that everything keeps running synchronously And then the context is a value that is actually passed into all of our Resolvers so the GraphQL module uses this to collect cashability information for the operation That way it can do caching on the operation level Finally the field resolver is what actually has the contains the implementation in the GraphQL module that loads all these data producer plugins And we'll actually call into our Resolver registry if we go whoops If we go back up a step to our execute operation function Then we can see that now that we have this config We actually call into the helper to call execute operation and that call stack is interesting, but we'll go through it in a diagram So the server execute operation function calls the helper execute operation Function and this will actually validate the schema that you have to make sure that there's a schema Make sure that the server supports query batching depending on our Configuration make sure that the parameters that were in the operation are actually correct and valid If there was a query ID provided in the request for Persisted queries, this is where it would actually invoke the loader that we just configured And then it'll from that find the operation. 
That is the operation it actually needs to execute. In case the request came in as a GET request, it ensures that it's only ever a query, because you're not allowed to do mutations or subscriptions over GET. And finally, if needed, this will apply any error handling. The most interesting function call it makes then is GraphQL's promiseToExecute, which is where the validations that we configured actually get applied. When the validations are successful, it'll call into our executor implementation. So this calls back into the service that we configured, and it'll create an instance of the executor, and that's where we transition back from the GraphQL library into our GraphQL module, where doExecute is called; that's on the next slide. The only thing that the executor in the default implementation actually does is take care of operation caching, so it connects the GraphQL library with the Drupal caching system. If the request is cacheable, it'll check if there's already a response; if one exists, it'll return that response immediately. If it doesn't exist, it'll execute an uncached resolution and try to store the result in the cache. If the request isn't cacheable at all, for example because it's a mutation, it'll just execute an uncached resolution directly. From there we jump back into the ReferenceExecutor, so we're actually back in the GraphQL library and letting it do all the heavy lifting, which calls into executeOperation. executeOperation contains two branches. For everything that's not a mutation it'll call executeFields, and this will go in a loop and do a breadth-first resolution: it'll actually go through all your top-level fields first until it hits a point where it needs to start resolving promises to load data, and then continue in depth. For mutations it also implements a loop, but it does things slightly differently, because the specification requires that mutations are executed one at a time, to make sure that if a mutation fails at any point, subsequent
mutations aren't left in an indeterminate state. You can see at the end that it calls a resolve function, and that's actually the resolver that's back in our GraphQL module, which is interesting to look at. We configured this in our configuration class when we called the setFieldResolver function, and it's actually a callable that's returned by the getFieldResolver function on our server entity. If we look at the function, we can see it takes four arguments and uses our registry. The four arguments are dictated by the GraphQL library (this looked a lot better in Keynote). The first one is the value: this is the value of any parent fields, or in case this is a root field, it would be the value you configured as a root value; today we didn't configure any. The second one is the arguments provided to the field in the query. The third is the context: this is that context value that we created in our configuration with setContext. And the fourth is info, which provides a lot of information about the field that we're resolving, like where it's used and what its child selection is. While we don't currently use this at OpenSocial, you could use it to optimize data loading. If we look at the function itself, you can see that we use the context to create a field context. So while we have this generic context that we bring along for the entire operation, we also create a context for each field, and that allows us to do field-level caching. So we have caching at two levels: both per field and for the entire operation. The next thing we do is actually try to resolve our field, which is where our resolver registry comes back into play.
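A minimal sketch of that resolver shape follows. The `FieldContext` class and the extra fifth argument are hypothetical stand-ins for the per-field cacheability context described above, not the module's exact signature.

```php
<?php
// Sketch of the four-argument resolver shape graphql-php dictates:
// resolve($value, $args, $context, $info), plus a simplified stand-in for
// the per-field cacheability context created alongside the operation-wide one.

final class FieldContext
{
    public array $cacheTags = [];

    public function addCacheTags(array $tags): void
    {
        $this->cacheTags = array_merge($this->cacheTags, $tags);
    }
}

// A hypothetical resolver for, say, Message.author: it reads from the parent
// value and records what the result depends on, for field-level caching.
$resolveAuthor = function (array $value, array $args, array $context, array $info, FieldContext $field): string {
    $field->addCacheTags(['user:' . $value['author_id']]);
    return $value['author_name'];
};

$field = new FieldContext();
$author = $resolveAuthor(
    ['author_id' => 42, 'author_name' => 'Alexander'], // parent value
    [],                                                // query arguments
    ['language' => 'en'],                              // operation-wide context
    ['fieldName' => 'author'],                         // selection info (heavily simplified)
    $field
);
```

The point is that the first four arguments come from the library on every field, while the field context is what lets the module aggregate cacheability per field and then per operation.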
And once the resolution is done, we check whether the result is cacheable, and we add it to our field cacheability information; the aggregate also gets attached to the cacheability information for the entire operation. The resolveField function is actually the interesting function here in our resolver registry. It takes the four values from the GraphQL library plus our own field context. The first thing it tries to do is find the resolver that's been configured for our field, with getRuntimeFieldResolver, and that delegates to getFieldResolverWithInheritance. We can dive into that a little bit, because the inheritance part actually helps us out. First, we try to find the field resolver for the type and field that we have, by doing the simple array lookup that you can see at the bottom. If we don't have a resolver for this specific type and field, we check if the type implements any interfaces, and this is something we use a lot at OpenSocial, for example for our connections. If the type does implement interfaces, we do a resolver lookup for each of the interfaces that are implemented; if none of that yields a resolver, we return null. So if you have some generic functionality, like this Connection class, you can map the resolver only once, on your interface, and it'll resolve the values for all of your implementations. If we go back up to our resolveField function, we can see that in case we do have a resolver, we make sure that it implements the resolver interface and then call its resolve method. In case we don't have a resolver, we call our default field resolver.
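The inheritance lookup just described can be sketched in a self-contained way like this. The type names and the map layout are illustrative, not the module's exact internals.

```php
<?php
// Sketch of a resolver registry lookup with interface inheritance: try the
// concrete type first, then fall back to any interfaces it implements.

final class SketchRegistry
{
    /** @var array<string, array<string, callable>> type => field => resolver */
    private array $fieldResolvers = [];
    /** @var array<string, string[]> type => implemented interfaces */
    private array $interfaces = [];

    public function addFieldResolver(string $type, string $field, callable $resolver): void
    {
        $this->fieldResolvers[$type][$field] = $resolver;
    }

    public function addInterfaces(string $type, array $interfaces): void
    {
        $this->interfaces[$type] = $interfaces;
    }

    public function getFieldResolverWithInheritance(string $type, string $field): ?callable
    {
        // Simple array lookup for the concrete type first.
        if (isset($this->fieldResolvers[$type][$field])) {
            return $this->fieldResolvers[$type][$field];
        }
        // Fall back to each interface the type implements.
        foreach ($this->interfaces[$type] ?? [] as $interface) {
            if (isset($this->fieldResolvers[$interface][$field])) {
                return $this->fieldResolvers[$interface][$field];
            }
        }
        return null;
    }
}

$registry = new SketchRegistry();
// Map the resolver once, on the generic Connection interface.
$registry->addFieldResolver('Connection', 'pageInfo', fn () => ['hasNextPage' => false]);
$registry->addInterfaces('MessageConnection', ['Connection']);

// MessageConnection has no resolver of its own, but inherits Connection's.
$resolver = $registry->getFieldResolverWithInheritance('MessageConnection', 'pageInfo');
```

This is why a generic connection implementation only needs one resolver mapping, no matter how many concrete connection types the schema defines.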
The default field resolver is set to a function from the GraphQL library which knows how to handle arrays and simple objects, where it'll just try to access the property of the value that it got. Let's look at an example of one of these resolver implementations, starting with a simple one: the argument resolver. This is the class that you get back when you call the fromArgument function on your resolver builder, and you can see here that it has stored the name that we initially passed in for our input. What its resolve function does is an array lookup on the arguments that we got from GraphQL. We can also look at a slightly more complex resolve function, for our custom data producers. This is not actually the resolve function of our data producer itself, but the data producer proxy that the GraphQL module uses, and this is where that field-level caching gets implemented. The first thing it does is prepare, which loads the data producer plugin that we defined and told it to produce. When that plugin is loaded, it makes sure that all the mapped contexts (this is where, in our annotation, we told it what we would be consuming) are present if they're required. In case any of the contexts are missing, which could be any previous value that failed to resolve, we're done and we just resolve null. If they are present, we check if our data producer is cacheable at all; if so, we execute a cached resolution, and otherwise we resolve uncached, which basically just calls into your data producer itself and returns the value. When you go through all of that call stack, and do that for all of the fields, that is how you turn the query on the left side into actual data on the right. Now, I realize that's a lot of information.
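As a recap, the data producer proxy step can be condensed into a minimal self-contained sketch: check required contexts (resolving null if any is missing), then pick a cached or uncached path. All names here are hypothetical.

```php
<?php
// Condensed sketch of the data producer proxy logic: required-context check,
// then cached vs. uncached resolution. Illustrative names only.

final class SketchProducerProxy
{
    private array $cache = [];

    public function __construct(
        private array $requiredContexts, // context keys the producer consumes
        private bool $cacheable,
        private \Closure $producer,      // the actual data producer callable
    ) {}

    public function resolve(array $contexts): mixed
    {
        // A missing required context usually means a parent value failed to
        // resolve, so we short-circuit to null.
        foreach ($this->requiredContexts as $key) {
            if (!array_key_exists($key, $contexts)) {
                return null;
            }
        }
        if (!$this->cacheable) {
            return ($this->producer)($contexts);
        }
        // Cached path: compute once per distinct set of context values.
        $cacheKey = serialize($contexts);
        return $this->cache[$cacheKey] ??= ($this->producer)($contexts);
    }
}

$proxy = new SketchProducerProxy(
    requiredContexts: ['id'],
    cacheable: true,
    producer: fn (array $ctx) => "message-{$ctx['id']}",
);

$hit  = $proxy->resolve(['id' => 7]); // produced once, then cached
$miss = $proxy->resolve([]);          // required context absent => null
```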
So I don't expect you to memorize all of this, but I hope that when you're working on it, you have some handles to actually dive in and figure these things out again. Before closing, I wanted to take a short look at the other things we're working on in the GraphQL space. One of them is working together with the Simple OAuth maintainers on a 6.0 version of the Simple OAuth module. We're looking to do a bit of separation of concerns in how scopes are defined, and also to make the module ready for more third-party applications, because we found that the current implementation works very well if you control both the client and the server, but when you no longer control the client yourself, there are some changes we'd like to make. Another thing is schema linting. This is usually quite easy with static analysis, but one of the challenges that we see with Drupal, because we have this great modularity, is that we actually need to figure out whether our schema is valid and doesn't break when some of the modules are disabled. So we have to figure out what module combinations are possible and whether those schemas are all valid. And finally, we want to implement rate limiting. We haven't seen an implementation for this yet with the GraphQL module, so this is going to mean overriding some of those validation rules to look at complexity handling. One of the things we're thinking of borrowing: the Drupal debugger has some really interesting code to track database requests, which we could hook into to figure out the database requests at a field level, to determine the complexity of individual fields and rate-limit requests based on that. That's all I have for you today. If you want to talk about other things related to GraphQL, find me in the GraphQL channel on the Drupal Slack, on Twitter, or just here at DrupalCon. Thank you!