So welcome to GraphQL Meets Drupal. My name is Sebastian. I'll spare you my last name, but you can find me on Twitter as thefubhy and on Drupal.org simply as fubhy. Mind the spelling: it's B-H-Y and not H-B-Y. Before we start, I want to thank a couple of very amazing people who have helped me, inspired me, and supported me during the journey of creating both the PHP library and the Drupal module for GraphQL: Moshe, Preston, Wim, Dries himself, and Acquia for enabling me to work on the module and the library for a considerable amount of time. So thanks to them, they have made this possible for me. What are we going to talk about today? First of all, we are going to talk about GraphQL in general: the specification, the language, the syntax. We are going to talk about the motivation behind GraphQL and why it even exists. And we are going to talk about Drupal and the status quo of the module in Drupal. And if there's time left, we will talk about some bonus features of GraphQL which are not yet supported by the Drupal module itself, but are going to be very interesting once they are. All right. So whenever you are looking at a new technology that's rising, especially on the web, with the web moving extremely fast in terms of technology in the past years, you have to ask yourself whether there is any reason for the technology to exist in the first place. I think that's a very healthy approach towards the technology that we are working with because, as I said, it's moving very quickly. And obviously some people at Facebook thought there was a necessity for a new technology for interacting with data from a server, and they came up with a very nice solution called GraphQL. Before we start talking about GraphQL in general, we'll talk about the limitations of REST and what inspired Facebook to create GraphQL.
So, without any hard feelings about REST (obviously it still has its place in the world), there are some fundamental issues, especially with modern web applications that are increasingly working with and interacting with huge amounts of data. And especially if we're talking about client-side applications, that comes at a cost of performance, and at a cost of usability and developer tooling for the people who are working on these applications. I'm pretty sure that you have seen this list before, the bullet points on this list, but we are going to talk about what exactly they mean. So let's look at a very non-hypothetical, real-world example. We're going to hypothetically create a Star Wars API. We want to fetch some information from a backend, from an API that is a simple REST API. We want to fetch the full name of one of the characters from a specific movie, the list of other movies that the person appeared in, and some additional information about the home world of the character. If we're talking about traditional REST, this is probably what the response is going to look like. And if you want to try that, the API is public. It is a test API for REST interfaces and nicely shows some of the core principles of REST. What you can see here is that we are getting a lot of redundant data, lots of data that we don't require or didn't want for our application. So instead of just receiving the name, the home world name, and the names of the films that the character appeared in, we're also receiving lots of additional information like the hair color or skin color or eye color, which we couldn't care less about. Also, I don't really know how to read the birth data; it's probably also information that we will not need. And that's called overfetching.
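As a sketch, the REST response for this character might look roughly like the following. The field names are modeled on the public Star Wars test API the talk refers to, but treat the exact shape and URLs as assumptions:

```json
{
  "name": "Luke Skywalker",
  "height": "172",
  "mass": "77",
  "hair_color": "blond",
  "skin_color": "fair",
  "eye_color": "blue",
  "birth_year": "19BBY",
  "homeworld": "https://swapi.example/api/planets/1/",
  "films": [
    "https://swapi.example/api/films/1/",
    "https://swapi.example/api/films/2/"
  ]
}
```

Note that the home world and the films come back as URLs, not as the names and titles the application actually wanted.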
And this is especially relevant for Drupal applications because, as we know, Drupal has quite a hefty bootstrap. When we are talking about continuously requesting API resources from a Drupal backend, that comes at a very, very high performance cost. This is probably what it's going to look like: on the left-hand side, that's the API server, and on the right, well, that's you. So there's another issue. As we saw, we're not getting the information from the home world and from the films that we wanted. We actually just wanted to show the names of those, but instead we only get references to the resources where we could fetch that information. In order to get that information, we have to do another round trip to the server instead of receiving it directly, where, again, we're doing a lot of overfetching and receiving additional information that we didn't ask for or care about. These additional round trips, again, come at a high cost for the server because of the bootstrapping that is induced on every single request. And it obviously gets much worse for larger lists of things. As we saw, there are a couple of films that Luke Skywalker appeared in, so we would have to fetch each of those films separately in order to retrieve the information that we want, and that can quickly get out of hand. So in order to fix this, it might be tempting to take the easy way and solve the issue by creating band-aid solutions on top of your existing API. One band-aid solution you might think of is a custom query parameter that allows you to specify specific fields from referenced data, from data that is associated with the data set that you are directly fetching. Another solution could be to create specific ad hoc resources for each of the views that you want to display in your frontend application. It might look like this.
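The two band-aid patterns might look something like this; both URLs are purely hypothetical, just to illustrate the idea:

```
GET /api/people/1?fields=name,homeworld(name),films(title)
GET /api/views/character-detail/1
```

The first bolts a field-selection parameter onto an existing resource; the second creates a dedicated resource per frontend view.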
And you might think, well, okay, now I've solved the problem, let's move on to a different one. But that's really not how it's going to work, and some people might be chuckling because you know what's going to happen. If you keep working on this application, the number of custom resources that you create over the lifespan of that application is going to increase more and more, and with that comes a high maintenance cost. So yes, you'll end up with lots of different resources for specific views. At some point in the life cycle of your application, you might not even know which resources are still in use and which were only used by previous versions of your application; some might be deprecated already, or some might simply break because you don't have full test coverage, and you will never know. And what's even more scary: so far we haven't even talked about API versioning and backwards compatibility. What happens if you want to maintain three different frontend applications, an iOS app, an Android app, and a JavaScript application for the web, and they all receive data from the same API? They all might need different views and they all might need different versions of that API. It really comes at a high and exponentially increasing maintenance cost. Also, with this SQL-esque way of thinking about our data in terms of joined tables, chunking the data into pieces based on the storage model instead of serving it in the way that product developers think about their data, which is graphs, you are really complicating things for the client-side developer. So wouldn't it be great if we could just tell the server to give us the specific information that we need within a single request, and ask it to also return that information in the exact shape that we requested it in? That is indeed possible. So let's look at how that would work.
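The single-request query shown on the next slide could be written roughly like this; the exact field names depend on the schema, so consider them assumptions:

```graphql
{
  person(id: "1") {
    name
    homeworld {
      name
    }
    films {
      title
    }
  }
}
```

One request, and the response mirrors exactly this shape: a name, a home world name, and a list of film titles, nothing more.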
So, given the same example: we have the person Luke Skywalker, who goes by the ID one. We want to fetch his name, the name of his home world, and the name of each of the films that he appeared in. I'm sure you've guessed it by now, but what you're looking at here is actually GraphQL. In fact, the tool you're looking at is called GraphiQL, which is a fun wordplay on the word GraphQL. It's a nice tool that Facebook developed on top of GraphQL, which leverages the schema definitions from GraphQL to give you this nice functional IDE with type-aheads and auto-completion for your schema. So let's go back to the slides. That is nice, everyone's happy. So yes, it's GraphQL, this is the logo of GraphQL, and how does this sorcery even work? At its core, GraphQL is just a language specification. It's powered by different implementations in various languages by now. There's a Ruby implementation. There's even a C parser, which you can use with any service that you can imagine. There's Sangria, a Scala implementation. There's a PHP implementation, which we are using. And there are also Python and JavaScript implementations; for anything that you can imagine, there's probably a library already. At its core, it is a data querying language running on arbitrary code, not on your storage. It is backed by a schema that you define and is based on a type system that makes it fully introspectable. What that exactly means, we'll find out now. So it is very important to know that it's not a query language operating on a graph database. Instead, it runs on arbitrary code, which means it is actually just executing a series of function calls on the server to fetch data based on the graph of your data on the backend. So in the case of Drupal, if you're fetching a node, and the node has fields, and you want to fetch a specific field from that node, it would first fetch the node through the entity API.
Then a second series of function calls extracts the specific fields from that entity and puts them all together in a nice JSON structure. Because of that, it is completely agnostic of your storage layer, and it can therefore potentially run on any architecture. You can make it work in your Drupal environment. You can make it work in any other environment. And what's also very important: you can make it work with multiple different remote APIs at the same time. So imagine you have a Drupal backend that also stores user information like Twitter account names or Facebook account names in the user profile, and you want to query both Facebook and Twitter when fetching information about the user. You could do that in the same schema. You could tell Drupal to retrieve that information on the backend and spit it out to you when you're requesting information about the user from your frontend application, through the same resource. And that's very powerful. So I really like this slide, and I've been copying it from one slide deck to the next since I've been talking about GraphQL, because it communicates a very important message about GraphQL: it evolves and changes the client-server relationship. Instead of the server dictating the response structure and what it returns, and the client having to work with what it gets, the server now publishes its possibilities and the client specifies its requirements. This also shifts some of the responsibility from the server to the client, where now the client is capable of telling the server exactly what it needs, and there's no chance of any of the client-side applications breaking unless you change the schema on the server, which you shouldn't. So there are a couple of resources for learning GraphQL and the syntax. We'll work through the syntax together here, but if you're further interested in GraphQL after this presentation, there are three very important and nice resources.
Well, two of them are interactive, and one is for the brave. So we have the Star Wars API that I just demoed; that's the first resource. Then, for people who are more interested in a guided tour through the GraphQL syntax, there's also a learning series, which is also very interactive and allows you to execute GraphQL queries yourself. And for the brave, there's the actual RFC specification written by Facebook. It's one of the best RFCs I've ever read, but it's still an RFC, so it's very hard to grasp. If you're writing your own GraphQL parser, that last resource is probably where you start. But if you're just interested in the syntax, go for one of the first two, okay? So the features of GraphQL are manifold. First of all, for querying data, for mutating data, or for subscribing to data through WebSockets, you have queries, mutations, and subscriptions. Then, when you are descending into the object graph, into your data graph, to fetch information about specific objects from your backend, we have the ability to use aliasing, fragments, directives, variables, arguments, and sub-selections. You probably have no clue what that means yet, but we'll look at it now, and it's really, really powerful. So let's see the next one. So far we have only worked with the hello world example of GraphQL, really: simply executing a very simple object selection and then fetching sub-fields from that. But we can do much more. So let's begin with a slightly more complex example. Let's fetch all of the Star Wars films that have ever been produced. From all of these films, we want to fetch the title, maybe the director, and a list of producers, and we can also descend into the object graph by fetching related data models. For example, we can fetch all of the species that appeared in that specific film.
It will then return that information for all of the items returned by the species connection. So if we do that, we see that in the movie A New Hope we had humans, droids, Wookiees, and some others. What we can now do is tell it to only give us the first two. And now it becomes apparent that what we're actually dealing with here are remote function calls, because each field in the schema is powered or backed by a resolver function. That makes it so that we can provide arguments to those fields, and those arguments are then also passed to the function which powers that schema entry point on the backend. And that works on every level. So also here, I can say: give me only the first two. That's what I meant by saying it runs on arbitrary code as opposed to directly querying the database. We have function calls, and that is very powerful because it comes with all of the flexibility that you can imagine, and it means that it can run on any backend that you can imagine. So now that we have the possibility to fetch a list of films, there's another feature that is very powerful: we can fetch the ID of an object. If you've worked with the Facebook Graph API before (don't confuse that with the GraphQL API), you'll know that they have generic objects which are identified by a universal ID that you can use to fetch an object without knowing what that object actually is. And this is also the case here. Every GraphQL schema dictates that you should expose an ID field, and this is actually just a base64-encoded string containing the original ID of the object and the type of the object. So what we can do now is, given this ID here, we can query the generic node entry point and say: okay, give me this object. And because we don't know what type of object this is at this point, all we can do is fetch the ID again. But we can also fetch the name of the type.
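The global ID scheme described here can be sketched in a few lines of Python. This is an illustration of the encoding idea, not the actual implementation, and the `"Type:id"` separator is an assumption:

```python
import base64

def encode_global_id(type_name, local_id):
    # A global ID as described in the talk: the type and the original
    # ID combined ("<type>:<id>") and base64-encoded into one string.
    return base64.b64encode(f"{type_name}:{local_id}".encode()).decode()

def decode_global_id(global_id):
    # Reverse the encoding to recover the type name and the original ID.
    type_name, local_id = base64.b64decode(global_id).decode().split(":", 1)
    return type_name, local_id

print(encode_global_id("Film", "1"))   # → RmlsbTox
print(decode_global_id("RmlsbTox"))    # → ('Film', '1')
```

Because the type name travels inside the ID, a generic `node(id: ...)` entry point can load the right object without being told what it is.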
We'll get back to this because this is a feature of introspection; we'll talk about that later. But now, because we know, hey, this object can potentially be a film, we can use fragments. Fragments define that if the object returned by this call is actually a film (now I'm basically type-hinting again), give me the title. And then, because we now know it's going to be a film, we can alias that response and tell it: well, you are a film, so make your JSON key "film". And if we fetch a list of people, this is starting to become useful because now, oops, sorry, we can start to use variables. So, given a query named MyQuery, we're using a variable named ID, which is of type string in this case. And if we then pass that variable here, sorry, it has to be ID. We now know the ID that we have been passing down here is actually a person, not a film anymore, so if we specify, hey, you're a person, give me the name, that works. And why is this so important? Well, variables and fragments are as essential as the querying itself, because if you're talking about an application the size of Facebook, the newsfeed on the front page would take thousands of lines of query code to fetch all the details and information that it requires to render that nicely structured, hierarchical list of data in your newsfeed. And Facebook came up with the idea that instead of sending the whole huge query string, which could be a couple of kilobytes large, to the server every time, they would store the query string on the server, exposing it as a route and only accepting the query ID and the query parameters for the route, which would then invoke the stored and possibly obfuscated query to return the data that you were originally fetching, or trying to fetch, in your frontend application. And this is very handy if you're talking about large client-side applications where the query string can't be obfuscated in your JavaScript, for example.
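Putting named queries, variables, type fragments, and aliases together might look roughly like this; the type and field names are assumptions about the schema:

```graphql
query MyQuery($id: ID!) {
  node(id: $id) {
    id
    __typename
    ... on Film {
      film: title
    }
    ... on Person {
      name
    }
  }
}
```

The variables payload, for example `{"id": "<some-global-id>"}`, is sent alongside the query, and the `film:` alias renames the `title` field in the JSON response.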
So you could add a build task to your tooling chain and make it so that on the backend, in Drupal, it would generate routes where you would simply pass the query parameters, and it would generate the query from that. So, we have heard the name schema a couple of times now. Fundamentally, each schema is simply an arbitrarily nested hierarchy of type definitions, and those type definitions are powered by a very powerful type system utilizing scalars, so simple primitives like string, integer, et cetera, as well as enums for fixed lists of things, for example a list of image formatters for your Drupal file system or a list of text formats for your text fields. Additionally, we have objects, objects being complex types like entity node or entity user. And we have interfaces that combine these in case there are multiple sub-definitions of those types. So if we're talking about nodes in Drupal, we will have articles, we will have pages, and they all might have different fields. Then we have unions. Unions are simply combinations of multiple objects that are all valid inside of a given context. So this is some pseudocode trying to explain how these type definitions are put together. If we're talking about Drupal again, we will have an interface of type entity node, which combines the two types entity node article and entity node page, both implementing entity node in this case. The interface exposes all of the common fields like title, UID, created and updated timestamps, and the node ID itself. And what's important here is that all of these fields can again be complex types. That's what I meant by an arbitrarily nested hierarchy: you can make your schema as complex as you need it to be.
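The kind of schema described here could be sketched in GraphQL's schema definition language like this. All type and field names are illustrative, not the module's exact output, and the `image(style:)` argument shows how a resolver can take arguments such as an image style:

```graphql
enum ImageStyle {
  THUMBNAIL
  MEDIUM
  LARGE
}

interface EntityNode {
  nid: Int!
  title: String
  uid: EntityUser
  created: Int
  changed: Int
}

type EntityNodeArticle implements EntityNode {
  nid: Int!
  title: String
  uid: EntityUser
  created: Int
  changed: Int
  body: String
  image(style: ImageStyle = MEDIUM): String
}

type EntityNodePage implements EntityNode {
  nid: Int!
  title: String
  uid: EntityUser
  created: Int
  changed: Int
}

type EntityUser {
  uid: Int!
  name: String
  mail: String
}
```

Note how `uid` is itself a complex type, so following that reference descends one level deeper into the graph.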
So in this case, UID would be an entity reference field targeting the entity of type user, and thereby, if you are following that reference, it would give you, A, the ID of the user, and B, the full entity object, allowing you to fetch the name, the email address, and other information from that user. So, we have spoken about resolver functions before and how Drupal, or any GraphQL schema implementation, invokes these resolver functions to fetch the nested data. In terms of the entity node article type, we will have the title field and the image field, as you can see here, and other fields, obviously. This simply illustrates that a resolver function receives the parent node from the graph hierarchy as an argument to the function, as well as potentially any number of additional arguments that you specify in your schema. So, for example, if you want to fetch an image, you might want to also specify what image formatter you want to apply to that image before you get the URL to the image in return. So, yeah, if you want to fetch the title, it will simply give you the title property from the entity object. And if you want to fetch the image, it would also expose the potentially available formatters as an enum, which you can then provide in your GraphQL query. It will then be respected in your resolver function and received as an argument. You can see that right here. This is pseudocode, right? This is not PHP. Introspection. This is one of the most interesting features of GraphQL. When you're working with traditional REST APIs, you often have to document the API yourself. There's no formal specification of what REST can and can't do, and there's no formal standard covering all of the REST APIs in the world, because, really, everyone defines REST in a different way. It's up to you to produce the definitions, the descriptions, and the set of available information that you can potentially retrieve from your server.
Introspection is a way for GraphQL to generate additional schema information for your schema. If you remember the movie Inception, this is kind of the same thing, really, because you are inside of your schema. In GraphQL and Drupal, you're defining your schema for all of your Drupal data types, and GraphQL takes care of defining additional information inside of your schema and exposing that information. This is so interesting and so powerful because it allows you to build tooling around your GraphQL implementation. We have looked at the GraphiQL interface before, and the only way it can work is because it retrieves information about all of the available types from your schema. Every GraphQL server has special, underscore-prefixed schema entry points that allow you to descend into the object graph, fetching information about the types of the objects, all of their children, descriptions, the fields that they host, arguments, et cetera. When you execute such a query, you get a whole range of information back from the server that is very useful for your tooling, in this case, for example, for GraphiQL. This is how the IDE works. And it's also useful for generating the documentation that you can simply browse on the side. So GraphQL is more than just a query language. It's actually a whole ecosystem of tooling and things that Facebook has already vowed to create, and I'm really looking forward to seeing more from them. Building a GraphQL server, however, is much simpler than you would expect. At the very top of your server, you have the GraphQL library that supports tokenizing and parsing the GraphQL query string and then turning it into a series of function calls, which simply operate on the application code and can then retrieve information from any source that you can imagine. In terms of Drupal, well, this is a DrupalCon, so we have to talk about Drupal, obviously.
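The underscore-prefixed entry points mentioned above can be queried directly. This is a standard introspection query from the GraphQL specification, asking every type in the schema for its name, description, and fields:

```graphql
{
  __schema {
    types {
      name
      description
      fields {
        name
      }
    }
  }
}
```

Tooling like GraphiQL runs exactly this kind of query on startup to power its auto-completion and documentation browser.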
We have created a module over the past months that does exactly what we have been talking about this whole presentation. We are generating a schema based on the available information that Drupal already exposes from its core, and we are then adding resolver functions which operate on our application layer, on our APIs: the entity API and the entity query API, with nodes, users, and any arbitrary content entity that you might create in the lifetime of your Drupal site. The goal was to not have you create your schema manually through custom code, but instead generate the schema for you based on your fields, your entity fields, your bundle fields, and all of the properties hosted by your entity types. We also wanted to make it possible for you to leverage not only single loading of entities by their ID but also batch loading through the entity query API, exposing all of the queryable fields, not only for fetching the data but also for providing arguments to the query API. And as a foundation for that schema to operate on, and for us to generate that schema, we already have a really good foundation in Drupal core, and that's the Typed Data API. So essentially, all that the module does is translate already existing Typed Data API type definitions into a structure that the GraphQL library can understand. The initial version of that module was released two weeks ago, I think. It supports all of the basic initial goals that I just outlined. We have got support for single loading of entities. We have got support for multi loading of entities, batch loading of entities through the entity query API, and we have support for filtering. There's Views integration, so you can provide custom entry points to your schema; I'll show both of those in a minute. And we have also made sure that access checks are built in, because that was obviously the most fundamental thing to finish after we had done the initial loading.
There are some limitations, though. Ideally, to complement the GraphQL ecosystem, we would also be fully compliant with the Relay specification. I'm not going to go into detail about Relay here, but if you're working with React, you're probably familiar with it. We are currently not compliant with the Relay specification, so you can't use the module with Relay yet; that is one of our next goals. We will also add mutation support, that is, writing to the server, which is currently not possible. And we are also going to add means for you to customize your exposed schema, so that you don't expose entity types that you are not considering fetching at all. We will add config entity support through the configuration schema, which is also available. And in general, I try to make it very, very useful without you having to specify custom views, because right now pagination is not ideal and you would have to create custom views for that. But let's look at that. Now we are looking at an actual Drupal site, and let me increase the size of that again. We have got two Drupal node types, article and basic page, and we have got some content for both of those types. And we have got the GraphiQL interface, which you just saw for the Star Wars API, built into Drupal itself, so under the GraphiQL explorer you can reach it directly.
If you install the module, you already have that. Now, to load entities directly through the ID, sorry, so if I go to node ID 1, I can fetch the node ID again (obviously I have that already) and the title, and here is our entity. We have looked at the type system before, so the body field is not available on all nodes by definition, it's a field. So we will use fragments to say: in case this is of type entity node article, also give me the body field. And from that body field I only want the actual value, not the summary or anything, just the actual HTML output that was written into the body field. We can also create new fields. Because this uses the Typed Data API, as long as your field definition has a Typed Data integration, which it has by default, it will also directly be supported. So the moment that I save this field on the node type... let's just make it a plain text field, save it, go into my node, and write some text into this field. Oh, I didn't save it right, go back into the node, this is the page. And now, because the schema is automatically generated, it will now have... let me reload. Obviously the IDE has to fetch that information from the server first, so I have to reload the interface, and the Wi-Fi is really bad here, one second, I need to reconnect. It's fetching the JavaScript for the GraphiQL interface from a remote CDN and it's failing to do that. Oh no. So, there's our other text field now. And if I fetch that for node ID one... is it one? It was node ID three. There. There's also the possibility for you to fetch lists of nodes. So before, we fetched a specific node by ID; we can also fetch all of the nodes using the entity query API, fetching the title for each of them. Or you can limit the result set based on the available queryable fields exposed by the entity query API. In this case, for example, I only want to fetch nodes of type article. That works. Or page, that works as well. I didn't try it before, so I was really excited.
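Put together, the query built up during this part of the demo looks roughly like this; the type and field names approximate what the module generates, so treat them as assumptions:

```graphql
{
  node(id: 1) {
    nid
    title
    ... on EntityNodeArticle {
      body {
        value
      }
    }
  }
}
```

The `... on EntityNodeArticle` fragment only applies when the loaded node is an article, which is how bundle-specific fields like the body are selected safely.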
So, fragments. When fetching all of the nodes of all of the types that we have available, we also obviously have access to both of these fragments: if it's a node of type page, fetch the body; if it's a node of type article, fetch this additional text field. We can also fetch stuff nested in the object graph. If we want to get information about the author of that node, for example the name, we can just descend further into the object graph by fetching the user object via the UID property on each of the nodes. We can then head further into the actual entity object and pick out fields from the returned entity. This is an entity reference field, so it has two values in the field type definition: first, the target ID, which is the ID of the referenced entity, and second, the entity property itself. As you might know if you're creating custom field types for your Drupal site, you can specify multiple properties for your field types, and in the case of an entity reference field, those are the two properties that are available. So even if your custom field has multiple different properties, they are also supported directly. And we have built in some neat simplifications for the schema, because otherwise everything would be exposed as a list; Typed Data simply makes every field in your Drupal environment a list, since otherwise multi-cardinality fields wouldn't work nicely through the API. We're checking if the field is actually a single value field or if it has multiple deltas. If it does have multiple deltas, we are exposing it as a list; if it doesn't, we are simply exposing it as a single field item. Furthermore, if the field has multiple properties, it's exposed as an object with the properties as sub-selections, so you can fetch the specific properties that you want from the field type. However, if it is a plain value, a single field with a single property, we are directly exposing that, and it makes it possible for you to directly fetch the title without having to define any special sub-selections. Right, do you want to see anything else? Any suggestions?
I did talk about permissions: we have entity level access checks and field level access checks built in. Generally, if the item that is returned from the API is accessible, that is, if it implements the AccessibleInterface that exists in Drupal, we are checking for access. So, as we talked about, there are some limitations. We currently don't have Relay support, but we are going to add that; I'm already working on the second version. Some of these things are currently limited by the fact that the GraphQL PHP library is a little bit behind in terms of feature parity with the original JavaScript implementation, and that's because they are moving very quickly and I'm working on it alone. So I would appreciate it if some people could assist me in working on the module. I'm really looking for help, both for organizing contribution on the module as well as simply writing code. And because we have some time left before we do questions, let's say we have five more minutes, I want to show some additional features that I'm looking into implementing for the module. The first very interesting one is pagination, and pagination in GraphQL together with Relay, and that's the key here, GraphQL and Relay together, makes for a very nice combination; the pagination is very amazing. So again, if you want to fetch a list of films, you go through the allFilms call, and you can then fetch information about pagination: does it have a next page, does it have a previous page, what is the identifier of the first item inside my list, what is the identifier of the last item in that list. Why that is useful, I'll show you in a bit. So now we can go in and fetch specific items and say, okay, let's fetch the title of each of those. And there's also another interesting property on here, which is called cursor. The cursor is the identifier of a specific item inside the context of a given list, and if you're sorting a list by a
specific property, the cursor will be different than if you had sorted by a different property. It is basically a base64-encoded string combining the type of the object, the ID of the object, and the way it was sorted. And that's very simple and also very clever, because of what it allows you to do: if we are limiting the result set to two items, it now tells us, well, there's a next page, this is the cursor of the first item in that result set, and this is the cursor of the last item in the result set, and they match up with these here. So what I can do now, and this is so exciting, I can tell it, okay, give me the first two after this cursor, and it gets me the next two, and I can continue. That just feels much more natural for paginating through a result set than when you have to do page one, page two, page three, because it limits the result set by a range plus a starting point, and potentially an end point. And you can also do it in the reverse order, so you can say, okay, give me the last two before this one. It just works naturally, and this information is also really nice for forward and backward buttons.

All right, mutations: for that we'll have to wait a little bit. I'm looking forward to working with all of you together on that, so if you're willing to volunteer on working on the GraphQL module, please get in touch after the session, and I'll also try to arrange a BoF tomorrow. So, are there any questions?
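The Relay-style connection query just described would look roughly like this. The allFilms call mirrors the public Star Wars GraphQL API; the pageInfo and edges shape comes from the Relay connection specification, and the cursor value is a made-up example:

```graphql
{
  allFilms(first: 2, after: "YXJyYXljb25uZWN0aW9uOjE=") {
    pageInfo {
      hasNextPage
      hasPreviousPage
      startCursor   # cursor of the first item in this page
      endCursor     # cursor of the last item; pass it as `after` to continue
    }
    edges {
      cursor        # identifies this item within this particular ordering
      node {
        title
      }
    }
  }
}
```

Paging backwards works the same way with last and before instead of first and after.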
Right, if you have any questions, please find the microphone.

First, I want to say thank you, this looks fantastic, it addresses every qualm that I've had with REST. Second, I want to ask: this seems to only deal with queries, what about insert and update?

That's what we call mutations, and I've spoken about mutations; I should have made it clearer, I guess. Mutations are updating and creating content, or configuration for that matter. So yes, mutations are going to be one of the next two things that we are working on: first will be Relay, second will be mutations. If you're interested in mutations, we can talk about that during the BoF. The way it works in GraphQL is that you basically have a function call which receives an input object, which is the stuff that you want to write to your backend, and then as the sub-selection, as the fields that you can fetch, you simply get the object that you just created as a context. So what you can do if you create a node: you basically invoke the function call createNode, giving it a set of information for the node that you want to create, and then in return you can fetch the ID of the node that has just been created. That's also very powerful, and it allows you, for example, directly in return to creating the node, to fetch the name of the author, for instance. If you're interested in that, I'll definitely speak about new features at the BoF tomorrow.

Right, so that first depends on whether or not you're using Relay. When you're using Relay, Relay has this nice way of injecting a network layer, so you don't have to worry about fetching or requesting the data yourself at all; you simply tell Relay, okay, this is the HTTP URL to my server, and then you're done. If you're directly putting together the queries yourself and using simple REST, simple HTTP calls to fetch, or to send the query to the server and then work with the response, you have to deal with it yourself, and you just have to put together the base path of the
GraphQL server, so http://yourdrupalenvironment.com/graphql; /graphql is the endpoint inside of the Drupal site. You then send the query that you want to execute either in the GET parameter query or inside of the POST body as query, and there you can also specify a variables POST field.

Right, so there's one small issue with nested data structures. If you're accessing the second dimension, basically, inside of your GraphQL query, right now I'm not optimizing the query so that it does multi-loading of entities for that case. For the first level, yes, not for the second one. So if you're fetching a node, and then users, and then something else inside of the user, like another reference, that's not optimized for, but we're working on that as well, so basically prefetching stuff.

This is really cool, thanks. I just have a quick question: how does the security handling work here? So if I have a user, and I want this specific user to only have access to a certain number of nodes that either he created or he has access to?

You're talking about exposing the full schema of the site, or you're talking about explicitly exposing data?

Exposing part of the data.

All right, so on the server side the GraphQL schema, the resolver functions, all have to deal with access checks and permission checks themselves. But since we are generating the schema for both entities and their fields, and both of these items can potentially have access restrictions on them, we are aware of that. Both of them implement the PHP interface AccessibleInterface, and if they do, we simply call access() with the user that's currently authenticated, and if the user doesn't have access to the node, we don't fail the entire query, we just don't let them access that subset of the query.

And what does this look like in the context of a decoupled application, or a facade in front of many backend data sources? So you want api.example.com, a single endpoint, accessing information from three different systems?

You wouldn't have that. Well, you could, but you
wouldn't want to. The idea is that if you want to interact with many different APIs, you would put a GraphQL middleware in front that actually deals with these APIs through the schema, so that you can abstract the interaction with the other remote APIs within your server schema. You make that one GraphQL server the single source of truth, basically, and you handle the remote API calls within your server structure, and then basically you accumulate the results in one go.

And that one probably needs to provide the data storage?

What do you mean?

So if you're calling one endpoint, so what I think is the facade, that needs to provide the GraphQL logic, and probably the data storage? Wouldn't it be too heavy for it to be interacting with different systems?

So basically you're asking what the GraphQL server does there. This is a really interesting question, and people have tried that. Actually, the Star Wars GraphQL API that you saw there is running on a REST API and simply forwarding calls through the schema. You can download the GraphQL Star Wars API implementation to see that happen, and you see it in the console output in your terminal when you're running it and you're querying allFilms: on the server it then hits the REST API, fetches the information from there, caches it, and gives you what you need from that. So as a first step to upgrading your existing implementation, if you're not using Drupal, because for Drupal we generate the schema for you anyway, but if you're talking about an existing application, Drupal 7 or whatever, you can definitely expose REST, but simply write the GraphQL schema in Node and have it call the REST resources of your Drupal site. That is possible, and that's actually what Facebook recommends.

You're welcome. Chris Claver, software engineer from the Nerdery. I have two questions. A question that was posed to me is that when we have GraphQL, and we're using GraphQL on the Drupal site, do we have to manage the schema in two
different locations: manage the schema inside the Drupal site, and manage the definition and manipulation of the schema with GraphQL? Am I getting that wrong?

You don't have to manage the GraphQL schema yourself, because we're generating it for you on the server completely. We have the typed data API in Drupal, and if you are creating custom entity types, you are always defining the typed data definitions for them. The core entity types are already fully defined, they have their properties fully defined, and all of the field types are also defined through typed data. So we are just traversing those type definitions, and while we do so, we're translating them into GraphQL schema definitions. That means that as soon as you start site-building your environment, all of the stuff, all of the fields that you configure, will be available through GraphQL directly. You don't have to worry about that. If you want to create custom resources, you can do that; the way that works currently is tagged services. So you can create a custom service in Drupal which, for example, calls the Facebook API or the Twitter API, or does some other stuff like calculating the time and date at a certain location in the world. You're completely free to expose any additional information that you want through a custom service that attaches another GraphQL schema or sub-schema to your schema.

Thanks. The other question is: it sounds like this is an awesome thing to be added to the Drupal platform, but I wonder, is there anything that could be changed in the Drupal platform, for example the typed data API, that you would want in order to better adapt to, better work with GraphQL, from your perspective?

Right, so we had to do some workarounds, well, not workarounds, we had to do some simplifications for it to actually work nicely. Typed data is very verbose: it makes every field a list, regardless of it being a single-value field or a multi-value field. That's something that was very annoying, so we can't
just iterate over the typed data API without making some assumptions about it. For example, while we were translating to the GraphQL schema, we had to check, hey, does this field have multiple sub-properties or not? And if it doesn't have multiple sub-properties, we make it directly accessible without you having to specify another sub-selection. So you don't have to go title, first item, value; you just write title and you get the title directly inside of the JSON response. If we simply used the typed data API directly, it would be so that if you have an entity and you wanted to fetch only the title, you would have to go node, 1, title, 0, value. That's the verbosity coming from the typed data API, and we had to work around that. It would definitely be easier for us if typed data worked the way we are exposing it right now, but it doesn't, and there are good reasons for that, so I'm not sure. Thank you.

So yeah, it's not actually coffee time yet, but I wanted to include this. Right, there's a couple of resources; I'll just put the slide up here since it's being recorded. There's a working draft, there's an RFC, there's the reference implementation in JavaScript, which is the most up-to-date version. Actually, not true: the Sangria guy that is working on the Scala implementation is sometimes ahead of the actual Facebook developers, because he's crazy, I don't know. And there's the GraphiQL source code as well on GitHub. We have the Star Wars API example on GitHub; you can check out how the schema is built by checking out that source code. We have the Relay GraphQL specification, the stuff about the pagination and cursors and so on, that's what we're going to work on. There's the Learn GraphQL step-by-step tutorial, and the GraphQL, Relay and Drupal demo repository. So once we have Relay support, this repository is going to host, or it's actually already hosting, a React Relay application, also using Redux, that has a lot of cool stuff in it, like it has server-side
rendering, it has Relay, so it's directly connecting to the GraphQL server, it has Google Accelerated Mobile Pages built in, it has service workers built in. So all of the cool new stuff, all of the fancy things that we're currently talking about on the web, they are already in there, and the only thing that's missing now is the server-side implementation support for Relay. Once we have that, you can use it as a starting point for your completely decoupled client-side applications. And if you want to try it out already, just to check out the service workers, offline support and all this stuff, you can do it; it is currently running on the Star Wars API. Let's go back to this slide. Thanks for listening.

I was on the Wi-Fi before, it didn't work. Thanks for coming. I don't need JavaScript, I need PHP. It would need a lot of work actually, because I'm really new at Amazelib. I don't have a card yet; I would give you mine, but if you give me yours I can send you a message. You also want to get contacted? Thanks. This is what I wanted. You too?
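The mutation shape described in the Q&A above, invoking a function call with an input object and then selecting fields of the object that was just created, would look roughly like this. Mutations are not in the Drupal module yet, so createNode and its field names here are purely illustrative assumptions:

```graphql
mutation {
  # hypothetical mutation: invoke createNode with the data to write...
  createNode(input: {type: "article", title: "Hello GraphQL"}) {
    # ...then the sub-selection runs against the newly created node
    nid
    title
    uid {
      entity {
        name    # e.g. fetch the author's name right after creating the node
      }
    }
  }
}
```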
Me too. I'll tweet all of you tomorrow; I'll try to schedule it now, I don't know when it is going to be.

I want to follow up. I think the Node.js thing that you were talking about is exactly what I need. I missed the first 15 minutes, and then I missed some of what you were referencing; I just want to make sure I know what to look up. So this idea of, the endpoint hits Node.js, and then Node.js has this key part of the logic, and it's hitting potentially a constellation of services, right?

Well, basically, there was a presentation in London by Nick Schrock, one of the developers of GraphQL, and the same question came up: how do I migrate to GraphQL? The problem the guy who asked the question had was that he was working on a very big team, where backend and frontend developers were mostly separated, and it was a gigantic team and a gigantic application, and also some of the DevOps engineers were a little bit concerned about GraphQL's security implications, so they didn't want to jump on the fancy train and directly use GraphQL. So he suggested: why don't you write a JavaScript implementation calling your existing REST APIs as a proof of concept, so you can already use GraphQL with Relay in the frontend, and then when they are ready, and when they see how cool it is and they want to migrate, they can write and move the schema from that middleware JavaScript Node.js implementation into the actual backend. And even if you then have additional REST services that you want to call, like Facebook, Twitter, Google, whatever, you can also move that from the middleware to the server, unless authentication for that has to happen there. But there are other people doing extremely crazy stuff: you could also use GraphQL on the client to query your local storage, so like have Relay query the client, so basically not doing any server-side communication at all and just using it inside of the client application, or for PouchDB, offline support, whatever, completely up to you.

So you said London, Nick Schrock?

Yeah,
great guy. No, it was not in the Facebook headquarters; I forget what it's called, just Google Nick Schrock London React Meetup.

So, other thought for you: I work with the Commonwealth of Massachusetts, the same Massachusetts. We're about to start doing micro-purchasing experiments like 18F, so basically we can skip the government procurement process for anything that's less than $10,000. The idea is to basically open up our backlog and try to hire people to work on tickets, ticket by ticket, not by contract.

So you're crowd-sourcing your workflows?

That's right, so you can still do $100,000 of work, just having to do 10 tickets. I wrote the URL on the card there, bit.ly slash Drupal Micro.

Do you have GraphQL tickets?

Well, so we're talking about how to build out 8CAN.Nats.gov, which would be the facade for all of the publicly available data in the state of Massachusetts.

So that's where you're coming from. I don't think there's any future where we're actually going to centralize it all with a single GraphQL application, but this idea, we were already talking about doing this with REST, like there's a facade layer that talks to all this legacy and provides the REST API. Because it doesn't make a difference, right? I mean, you're doing the REST calls either way, either you're doing it directly or you're doing it on a server.
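For reference, sending a query over plain HTTP as described earlier is just a request against the /graphql endpoint. The host name is a placeholder, and the query/variables parameter names follow the common GraphQL-over-HTTP convention mentioned in the Q&A; the user field is an illustrative assumption:

```
POST /graphql HTTP/1.1
Host: yourdrupalenvironment.com
Content-Type: application/json

{
  "query": "query ($uid: String!) { user(uid: $uid) { name } }",
  "variables": {"uid": "1"}
}
```

The same query can alternatively be sent in the GET parameter query, as mentioned above; a facade or middleware server makes exactly this kind of call on your behalf.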