Chances are that if you came to this room, you came here to listen about advanced web services with JSON API. If that is not the case, please stay, because we're gonna talk about cool things and, you know, you may enjoy that. My name is Mateu, and this is the mandatory spam: I work for Lullabot and I really like decoupled experiences. I'm a decoupling nerd. I was lucky enough to get to work with these technologies early 2014, maybe 2013, so in this time I've developed quite a passion for the topic. You can find me pretty much everywhere as e0ipso. It's a terrible nick, I don't even know how to pronounce it because there's a zero in there, but that's the one I got, I'm stuck with it. So hopefully when you get out of this room and you talk to others, you will tell them that you learned about JSON API: what it is and why you may want to use it. Also what the status of the Drupal module is, because obviously we have a Drupal module for it, otherwise we wouldn't be here. What's the status, how it compares to other alternatives like REST in core, and what are the limitations and the outstanding problems that we still have, not only in JSON API but in the decoupled world. So the tagline of the JSON API specification is that JSON API paints your bike shed. I don't know if any of you have had the opportunity to design a REST API, or a web service in general, but there are opinions, a lot of opinions. People want their payloads to look a certain way, they want their URLs to look a certain way, they want to pass some parameters to do crazy things, and there are bad options, but usually you can come up with several good options. The problem is that you spend a lot of time on something that is not really that important, because you could use a specification that already defines all that, get to work, and start providing value for whatever digital experience you're building. So JSON API helps you in that sense.
What it does is, and don't worry, you can read this later: it defines the transport, the format of the JSON object that you're gonna be getting from the server and pushing to the server, what it looks like and what features you have built in, and also the interaction with the API and the resources; things like special query string parameters that mean something and do certain things. So it controls how you as a consumer relate to the server. And I'm gonna be saying "you" many times; sometimes I will mean you as a consumer, a client developer, a front end developer, and sometimes I will mean you as a backend developer. I will try to be clear, but hopefully you can infer that from context. So this JSON API specification is a Creative Commons text document; it's like open source, but without source code, right? They even have a GitHub repo that you can go read and provide feedback on. If you don't like the wording, you can open a pull request. If you need other features, you can open a pull request, and you can even fix stuff. And they are currently one of the few specifications on this topic that are stable, and that means that there's a lot of people using it already. This specification was born inside of the Ember community as the means of meeting a need that they had: to standardize on some format, because everyone was doing their own thing. So this is strongly driven by front end experts and UX experts. What that means is that this may not be the most academic specification, but it's something that is meant to be used and to be easily understood. And because of that, we can see that JSON API doesn't rank very high when you take into account things like hypermedia factors. You can see in this pyramid that JSON API only has like three little boxes, and then UBER has every box possible. So why JSON API and not other alternatives? Like, there is HAL in REST in core. We also have another contender called GraphQL.
So what's the reasoning for having this thing? This is mainly the reason: there is insane support for the JSON API specification. If you go to the jsonapi.org page, you will see that there are 141 different repos that integrate with this specification, and that is only counting the people that opened a pull request against the jsonapi.org site to include their repo. People like me didn't open that pull request, so there may be more than this number, which is good. Also, these are not just different variations of the same repo; these are across 18 different languages and frameworks. So chances are that if you're working in a crazy technology like Elm, or you're building an iOS consumer, or you are using an Android framework, chances are that you will have support for this. So yeah, that's great. But that is not only for front end development; it's also useful for back end development, because there are also server implementations for this, like, for instance, the Drupal module. And I really hope you use the Drupal module. But you may have other situations where you want to place something like, I don't know, maybe a proxy in between Drupal and the consumers, and you want that to be Node.js. Well, you will have a library that gets you very far using Node.js, and that's open source and you can use it. So this is the main reason why JSON API is so popular these days, and the more popular it gets, the more useful it is to use it. So this is an example that we could see at the last DrupalCon in New Orleans. It shows how a third party library can interact with many CMSs just by dropping a JavaScript file and putting some markup in the page. It can be used as an in-place or in-context editorial experience, using JSON API underneath to communicate. It's almost a drop-in JavaScript library and you can do pretty cool things with that, right? And it's because this JavaScript library expects the transport of the information back and forth to be JSON API.
And if your server and CMS comply with that, then you're done; you can move on to the next thing. All right, we're getting ahead. Another thing that I like about the JSON API specification is that it doesn't try to specify every single thing about your API; it gives you some room for flexibility. For instance, it will say that you need to use the filter query string parameter, and that parameter named filter is used for filtering. And that's all it says. It also adds a note saying that the specification doesn't say anything about your filtering strategy, and that's by design. The good thing about this is that in Drupal, for instance, we can go ahead and implement a filtering strategy that matches entity query, because entity query is amazing. You can do many queries, complicated queries, and do it even across storages, and that's pretty great. So we want our filtering strategy to match entity query, so we go ahead and implement it. They also provide an extension system, which means that once we have implemented this filtering strategy, we can write the specification of what we did: how we structured all the filters, how we specified how an operator works, et cetera. We write that down, we declare that extension, and then other people can use that extension in their implementation. So they can say: hey, I'm using JSON API plus this extension in the filtering parameter that is not specified in the base specification. So you can do a lot of reuse, but you can still be in control of your features, because it gives you this flexibility. Another thing that I like, maybe a little bit too much, is to tell people: hey, please go and read the manual, because jsonapi.org has the specification. It's very clear, and if after that you still can't resolve your question, I'll be glad to answer it.
But you have a place where you can direct questions, and by reusing this from project to project to project, people learn the specification once, and that makes them more productive. So how did I get here? I mentioned before that in 2014 we got a project for The Tonight Show with Jimmy Fallon. That was a decoupled project. I was really excited about it because I kind of wanted to work on one of these projects. And we did what we were used to doing back in those days. We were used to having a process that gets some data from the database and passes it to our templates, which were PHP templates in those days. So we said: we're gonna do the same, get some data and pass it to the templates, but the templates are gonna live in a different system, and we're gonna pass them through HTTP, and we're gonna send back and forth some JSON documents. What ended up happening is that if the consumer, and we only had one, needed a field, we would just add the field in the server for that JSON document and send it over the wire to the front end, and the front end would put it in the templates. And that worked great; it was a fantastic success. But that didn't scale, because on the next project that we had, we didn't have just one consumer, and the server could not contain the presentation for every possible combination of consumers. So in the end, we ended up researching what the available specifications to deal with this are, because I'm pretty sure that we are not the only ones that have these problems, right? Someone else that is smarter than me may have worked on this and may know the solutions to these problems. These problems were that in traditional REST environments you would be doing multiple requests from the front end to the back end. Imagine that you request an article, and when the article comes back you see: okay, the author of the article is user 35. Then you go to the server again and you ask for user 35.
Then it comes back and you see that the picture of the author is picture two, and you go and get file number two, and you see that there is a lot of back and forth, and that is a problem. Another problem is that, I don't know about you, but my entities and my content types seem to have a life of their own: they grow over time and become this massive thing full of fields, and when you turn these into a JSON object and send it over the wire, maybe a JSON object of 200 megabytes is not okay. So we need a way to limit that impact. Another problem is content discovery. I said that you, as a front end developer, go to the server and ask for article 12, but how do you know that you need to request article 12? How do you discover that content? These are typical problems, and they all have known solutions, and the good news is that pretty much every framework that attempts to solve this solves them in a very similar way. So that may mean that it's a good solution. So first, let's talk about the JSON format. You can get a chance to clap or something. Sorry about that, it was so cheap. Okay, so the JSON object is gonna be structured in four different groups. The one that you probably can't read is in blue. It says that we will have information about the resource type and its ID, and there is some supporting glue structure that holds everything together. There's also very important information about the data itself: the attributes, the fields and relationships. And finally, something that is underused many times is the hypermedia, which, to oversimplify, is data that talks about your data. It tells you what the actual data, the green bits, are and what you can do with them. So this is what it looks like. This is an entity, this is article one, and you can see in blue that you have the resource type, which is articles, and the resource ID, which identifies the actual entity or the actual object in the backend.
Then you have in green the relationships and attributes, which are kind of separated, and finally you have the hypermedia in yellow. If we drill down into the attributes, this is really simple: it's the label of the attribute and then the value. You can see here that if we wanted to go back to that image comparing the different specifications, with all the pyramids, we see: okay, JSON API has only three little sad boxes. For it to have more hypermedia boxes, we would need to take, say, title, make that an object with a value property inside, duplicate it, and then add the hypermedia in there and say: okay, this title is whatever. So we are losing some hypermedia capabilities, but we are gaining a lot of clarity. I mean, it doesn't take much to understand, without me saying it, that this is a key and a value, right? Relationships are a little bit more complicated, because a relationship is not just any other attribute; it's a property, or a field, that talks about another resource. So it's gonna have the four colors, because it's kind of a small version of the current document. The important parts here are the green and the blue: the green gives the name of the relationship. For this article number one, the name of the relationship is tags. And then the blue identifies the resource that we're pointing to. So we have the host entity, which is the article, and the target entity, which is the tag. Right, so let's talk a little bit about the resource interaction: how you interact with the server, not only the format that you get from the server or that you push to the server. This specification uses REST, which is another standard, and what that means is that you have a resource, which is the thing that you interact with. Maybe it's a room, maybe in an auction site it's a bid, maybe it's an article; that's your resource. In Drupal, we identify that with our content types. You interact with that using HTTP methods.
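To make those four groups concrete, here is a sketch of a JSON API resource object written as a Python dict. The titles, IDs and paths are made up for illustration; they are not taken from a real site.

```python
# A minimal JSON API resource object for an article. Type and ID identify
# the resource, "attributes" hold the data itself, "relationships" point
# at other resources, and "links" carry the hypermedia.
article = {
    "type": "articles",                 # the resource type (blue)
    "id": "1",                          # the resource ID (blue)
    "attributes": {
        "title": "JSON API paints your bike shed",  # label -> value (green)
    },
    "relationships": {
        "tags": {                       # the name of the relationship (green)
            # the target resource, identified by type + ID only (blue)
            "data": [{"type": "tags", "id": "2"}],
            "links": {"related": "/api/articles/1/tags"},  # hypermedia (yellow)
        },
    },
    "links": {"self": "/api/articles/1"},  # hypermedia about this resource
}

# A relationship entry is a miniature resource identifier: type plus ID.
tag_ref = article["relationships"]["tags"]["data"][0]
print(tag_ref)  # {'type': 'tags', 'id': '2'}
```

Note how the relationship repeats the same four-group shape in miniature, which is the point the talk makes about the "small version of the current document".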
The typical methods are GET, PUT, POST, DELETE, PATCH, OPTIONS, et cetera. So using the combination of the HTTP verb and the resource, you can interact with the server. A typical request looks like this: you do a GET request against /articles, and you specify, using the Accept header, that you want the response to be in JSON API format. And then you get a response and you act on it and you present it in a very cool way, because you're using React or whatever thing you're using. One caveat is that in Drupal we actually don't use the Accept header; we kind of do it in our own way. We use _format=api_json, and there are reasons why we are not using the Accept header. We were using it at some point, and we made the decision to use this _format parameter instead; you can go and check the change record on Drupal.org. So the typical solutions to the problems we mentioned are known. To avoid the multiple requests, we do resource embedding: we put different resources in a single response. To avoid the bloated responses, we do something that is called sparse field sets, which is just declaring the fields that you, as a front-end developer or a consumer, want back from the server. And finally, to solve the content discovery, you add collections and filters. You say: give me all of the articles with this title, and that's your way of discovering the content. So let's think about this extremely simple example. Imagine my friend Paco. He just started a blog and he wants to do it in a decoupled way. So he's evaluating using, for instance, REST in core, and this is the article detail page that he's thinking about. It has the article title, the picture of the author, the name of the author, an image, descriptions, some tags, some comments, and the comments have authors, et cetera. This is an extremely simple example, and I realize that your projects are way more complicated. That means that what you're gonna see is gonna get multiplied in your case.
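The two negotiation styles just described can be sketched like this; the hostname and path are hypothetical.

```python
from urllib.parse import urlencode

base = "https://example.com/api/articles"

# Spec-compliant content negotiation: ask for the JSON API media type
# via the Accept header.
spec_headers = {"Accept": "application/vnd.api+json"}

# Drupal's approach at the time of this talk: a query string parameter
# instead of the Accept header (see the change record on Drupal.org).
drupal_url = base + "?" + urlencode({"_format": "api_json"})
print(drupal_url)  # https://example.com/api/articles?_format=api_json
```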
To do this, Paco would need to make a request to article 12, for instance. When the request comes back, he requests the author, then from the author requests the picture, and every time the consumer has to wait for the previous request to go and come back. So it looks something like this. It's kind of hellish how quickly this escalates, and here the sheer amount of requests is not what is bad. The problem here is the nesting levels. You can see that number nine and number twelve are in the fourth nesting level, because they are requesting the image of the author of a comment of the article, and to get to that you need four requests, and that is going to be a problem. So we solve it by doing resource embedding. With JSON API, what you do is request the same article, article 12, and you provide a special query string parameter called include. And in include you say: hey, give me the author, the image of the author, the tags, and then the comments and the image of the author of the comments, et cetera. With that, the server takes that information, embeds all of the resources that it calculates into a single response, and sends you back all the information. So you have one single response as opposed to many. That is very useful. To avoid the bloated responses, that 200 megabyte JSON payload that we were talking about, we use a different query string parameter, and we say: okay, give me the fields for the articles resource; I only care about the title and the creation date. I don't even care about the body, and the body is gonna be a very big field that's gonna take a lot of characters, right? By doing that, you are basically removing every other property from the article, and you're gonna only send the things that you are interested in. So that's how you solve that problem.
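The two query string parameters just described, include for resource embedding and fields for sparse field sets, can be sketched like this. The /api prefix and field names are illustrative assumptions.

```python
from urllib.parse import urlencode

# One request that embeds the related resources (include) and trims the
# article down to the two fields the consumer cares about (fields[TYPE]),
# dropping the big body field entirely.
params = {
    "include": "author,author.image,tags,comments,comments.author.image",
    "fields[articles]": "title,created",
}
# Keep brackets, dots and commas readable instead of percent-encoding them.
url = "/api/articles/12?" + urlencode(params, safe="[].,")
print(url)
```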
Finally, for the content discovery, it would be kind of reasonable to have an app that said: give me the cover image and the publication year of all of the albums, of all of the bands, having at least one of the members under 35 and living in Murcia. Murcia is a region in Spain. And you could also add: while you're at it, give me some extra information about all these related entities. That will look something like this. You kind of add it to the URL. First, you add a filter, and note that we are requesting bands, because we're doing a GET on the bands resource, but the filter is not even on the bands' properties; we're filtering based on the members' properties. So we do filter, members.city equals Murcia. And then we add another filter on members.age, and we say 35, and it has to be less than 35, so we add an operator. And we get all the bands that have members with city Murcia and with age less than 35. Once you are finished getting all your bands, then you include all your relationships: you include all the albums for the bands, all the cover images for the albums, and all the members. And then you limit the fields that you get back. So a consumer is specifying, every time, what they need, in a single request, and they get exactly that. And another consumer can request different information, which can be radically different or just slightly different, and they always get exactly what they need. And this is the major takeaway: the consumers need to be able to specify their data dependencies, put them in a request, and get just that. At the end, what you're doing is writing queries in the URL, because this stuff kind of sounds almost like a SQL query, right? Because it is a query, in a sense. We just write the query on the URL, and this is not much different from, for instance, what GraphQL does. We're using the URL; GraphQL uses its own format. It's almost like JSON.
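The band query described above can be sketched as query parameters. The shorthand equality filter and the condition-with-operator form shown here approximate the module's filter syntax; treat the exact keys as illustrative.

```python
from urllib.parse import urlencode

# "All the bands having at least one member under 35 and living in Murcia",
# plus the related entities and a trimmed field list, in one request.
params = {
    "filter[members.city]": "Murcia",               # shorthand: equality
    "filter[age][condition][path]": "members.age",  # a named condition...
    "filter[age][condition][operator]": "<",        # ...with an operator
    "filter[age][condition][value]": "35",
    "include": "albums,albums.cover,members",       # embed related entities
    "fields[albums]": "cover,year",                 # only these album fields
}
url = "/api/bands?" + urlencode(params, safe="[].,")
print(url)
```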
It's a document that specifies the fields and relationships that you want to traverse. It's just the same, but expressed in a different way: instead of using the URL, they use that document and they send it. And Falcor uses something similar to what GraphQL is doing. All this to allow every different consumer to have their data, with a single server implementation that allows you to expand to new consumers without having to make any change. So I may start with a website and an iPad app and then add an Android app, and the server implementation should not change. The rules should not change. You shouldn't have to add a special query string parameter that says, hey, I'm an iPad, right? The iPad should just specify: I have these data needs, give them to me, and be done. One last cool thing about the JSON API specification is that for every resource you get four endpoints. The first is, for instance, /bands/1234, which will get you the specific band, the individual band. Then you have /bands, which gives you a collection of all of the bands that are in the Drupal installation. These are the typical ones. But then there are two special endpoints. Number three is the related endpoint, which means that you can traverse relationships not only by doing includes, but also with this URL, which sometimes is just more convenient. You specify the name of the relationship, and in number three you would get all of the albums for that particular band, as if you were making the request to the albums resource. And finally there is the relationships endpoint, which will be something similar, but instead of giving you the album objects it will give you the album relationships. Remember when we were seeing all those four colors, and we were talking about the attributes and relationships, and we said that the relationships have the color blue, and that was important? This is what you get when you hit this relationships endpoint, number four.
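The four endpoints just described can be sketched like this, with hypothetical IDs and an /api prefix.

```python
# The four endpoints a single resource gets.
endpoints = {
    "individual":    "/api/bands/1234",                       # one band
    "collection":    "/api/bands",                            # all bands
    "related":       "/api/bands/1234/albums",                # album objects
    "relationships": "/api/bands/1234/relationships/albums",  # identifiers only
}

# A write against the relationships endpoint carries resource identifiers
# (type + ID) only, no album attributes. That is enough to link an
# existing album to the band.
patch_body = {"data": [{"type": "albums", "id": "9876"}]}

print(endpoints["relationships"])
```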
This is especially cool because it allows you to do PATCH requests against this URL, and that allows you to create links between entities, to just create the relationship, which is very convenient in many situations. One of the promises that I made, or things that I hinted at, is that it's more performant to have one request as opposed to many requests, right? And you shouldn't take the presenter's word on anything, especially not mine, so I'm gonna prove that point by showing some numbers. This is one of the tests that I made, and I realize that by no means is it a performance study; it's just trying to validate a hypothesis. We're going to do a bunch of requests on a JSON API server and on a core REST server. In this case we're gonna request node 2100 and include the author, the image of the author, and then the two tags that node 2100 has. In comparison, we're gonna do the same thing with core REST using HAL JSON: we're gonna request node 2100, then the user, and then, when that comes back, we're gonna request the image, and we're not gonna bother to request the tags, because we know that the third nesting level is gonna be the slowest path. So we're gonna request the tags for JSON API but not for core, because for JSON API they come for free. So these are the results. I was making all these requests against my localhost, and I had page cache turned on, and it was returning from the cache, so it was pretty fast: 21 milliseconds. The important thing is that we have this delay between the requests, so if we compare this to JSON API, we see that it's much faster. It turns out that we have a constant time to go to Drupal, bootstrap Drupal, take an item from the cache and return it. If we do it in one request, we just pay that once; if we do it in multiple requests, we pay it multiple times. This is consistent whether you do anonymous requests, authenticated requests or even uncached requests.
You can see that the performance improvement is that JSON API is three times faster, and not only three times faster: if you have more nesting levels, if you have seven nesting levels, which is not unreasonable for a real world scenario, you will have a seven times faster response, which is something that we should consider. And this is not just because we are avoiding Drupal bootstraps. I mean, here the problem with performance is probably not going to be that we are bootstrapping Drupal too often. The problem is gonna be that you have a connection from a 3G phone that takes 300 milliseconds to go from the phone to your server and back. If you do that once, then you may have a performance problem. If you do that seven times, that is a huge problem. Okay, let's talk about the Drupal implementation. This is the URL of the Drupal project. Apart from the obvious parts, like the entity system and the routing system to create the URLs, we integrated with the authentication provider subsystem, which means that any authentication provider that is in your site (and there are modules that provide authentication providers) is gonna be available for any route created by the JSON API module. That means that you can authenticate your requests using basic auth, cookies, and any other thing that you may add. It also has full integration with cacheability metadata. When you're doing these includes, you're putting data in a response, data from different entities, so if one of the entities changes, it has to invalidate the whole response. Using this cacheability metadata, that is no longer a problem, and you can have fully cached, or heavily cached, authenticated traffic using JSON API. One of the design decisions that we took early in the development cycle is that we wanted to create resources, one resource, for every bundle.
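Back-of-the-envelope math for the point above: a roughly constant per-request cost (Drupal bootstrap plus the network round trip) multiplies with the nesting depth when requests have to wait on each other. The numbers here are illustrative, not measurements from the talk's test.

```python
round_trip_ms = 300   # e.g. one 3G round trip, phone -> server -> phone
nesting_levels = 7    # not unreasonable for a real content model

# Classic REST: one request per nesting level, each waiting on the last.
sequential_ms = round_trip_ms * nesting_levels

# JSON API with ?include=...: one request, everything embedded.
embedded_ms = round_trip_ms

print(sequential_ms, embedded_ms)  # 2100 300: the "seven times faster" factor
```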
Instead of the coarse approximation to the problem, which is to create one resource for every entity type. We took that decision because for an entity type we don't have a schema. If I ask you what fields node 36 has, you don't have a clue. You can only tell me that it's gonna have an nid, a created date, a status, and you can list a bunch, but then you have to ask me: well, is it an article? Is it a page? Because articles have field tags, but pages don't. So, what bundle is it? That is why we are based only on bundles: because we have a schema. And that is important to us, because that creates the expectation, when the consumer is requesting something, of what they're going to get back. For us, it was very important to be able to describe our resources with a schema beforehand. Also, one of the things that we did is that we enable every bundle automatically, and we're working on this; this is probably going to change. The problem here is that if one of the bundles doesn't have a resource, because you disabled it, what do you get when a relationship points to it? So we are working on that, but for now everything is enabled by default and you have to implement a hook to disable stuff. And yeah, it works with config entities, which means that it opens the door for you to do things like having an administration area in your decoupled app, to do things like create content types and administer your site. Because sometimes you want your administrators to access Drupal for some things and access the front end app for other things. So you can create a customized administrator experience in your decoupled app. And that's very empowering, because sometimes you just give someone access to administer Drupal and you don't want to give them access to administer everything, and the permissions don't give you that fine-grained access. So you can use this to create these types of experiences.
Okay, another cool thing that gets me excited, and probably a little bit too excited, is the automatic schema generation. Since we said that we are based on bundles because we can rely on a schema, we may as well generate that schema for you, right? So we went ahead and researched what the best way to describe your data is, and we came up with the solution of implementing JSON Schema. Even if the name JSON Schema is kind of similar to JSON API, they have nothing in common; they don't even treat the same problem. JSON Schema is a way to describe data structures. In our case, for an article, JSON Schema would say: okay, an article will have a field title, it will have this or that integer field, and the field title will be a string, and the string will be of max length 255, and it will be a required field, and on and on and on. You describe the structure of the data. JSON API deals with the data itself: it will tell you that the title is "Drupal 8" or whatever title you want, while JSON Schema describes the properties and the data structure. So why am I excited about this? Because you can do things like, and we're doing this, this is a sub-module inside of the JSON API Drupal module, you can do things like auto-documentation. There is a JavaScript library, actually there are many JavaScript libraries, because again, this is a standard and many people are integrating with it, that take a JSON Schema and print pretty documentation. So what I had to do is create a Drupal page that places that JavaScript file and feeds it the schema that we're generating, and it puts this beautiful documentation in there that you don't even have to maintain. Remember when I told you about the joy of sending someone to read the manual? That is how to work with the API.
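As a sketch of the distinction just made, here is a hand-written approximation of what a generated schema for the article bundle could look like, next to the data JSON API would carry. The constraints and field names are illustrative assumptions, not the module's actual output.

```python
# JSON Schema describes the structure; JSON API carries the data.
article_schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
        "title": {"type": "string", "maxLength": 255},  # a bounded string
        "created": {"type": "integer"},                 # a timestamp
    },
    "required": ["title"],
}

# The data itself, as it would appear in a JSON API attributes object:
attributes = {"title": "Drupal 8", "created": 1496221200}

# A trivial hand-rolled check of the data against the schema, the kind of
# client-side validation the schema makes possible (a real client would
# use a JSON Schema validator library instead).
title_rules = article_schema["properties"]["title"]
valid = (
    isinstance(attributes["title"], str)
    and len(attributes["title"]) <= title_rules["maxLength"]
)
print(valid)  # True
```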
If they want to know about the content, you can have reliable documentation that is automatically built and that works for humans, because you're using a library to generate this documentation. And it's worth noting that if I went and added a property or a field to the article content type, it would be reflected automatically in here. So I just turn this on and don't worry about documentation. Another cool thing that you can do with schemas: you know that an article has a title of type string, and you know how to generate random strings, so you can generate random data. Not only in Drupal, using Devel Generate, but anywhere; this could be an online service. You put in the schema and you get random data for your resources, and that is extremely useful when you're testing an API and you just want to see if your app works in different scenarios. So you can use the schema for that as well. You can use the schema to generate forms automatically for you. There is an Angular library that will take the JSON Schema and generate a form for it, because again, it knows that it has a title of type string and it knows how to generate a text box for a string. If you describe in your schema that the string can only take three different values, it will generate radio buttons. It creates a form for you, magically. And the best thing is that if you add another field, the form updates. So you can have one of the benefits of coupled apps in a decoupled world, using JSON Schema. So there are a lot of uses for JSON Schema; we are not even done. You can validate data. You can have client-side validations using the schema: you pass the schema, and your app can read what the acceptable values for this are, so you can validate it there. You can see here that it's used in an online validation service that's telling me that I have some strings that should be converted to integers. You can even generate code based on the schema.
Things like, remember when I said that you may want to have a Node.js proxy in between your JSON API server in Drupal and your consumer? You can auto-generate the entity definitions for that Node.js server just by passing this schema to a piece of software, and it will generate the code for you and you're done. So the takeaway is that by implementing standards, interoperability and reuse of other people's code increase greatly, and you get many things for free, and you can integrate with cool projects that would take a lot of time if you had to implement them yourself. And sometimes it just takes someone to paint your bike shed for you. So, the limitations of the Drupal module. Multilingual support is not great; that is also true for REST in core. We don't know what the best way to provide multilingual responses is. Do we negotiate a language beforehand? Do we send all the different languages in one response? This is something that is still under discussion, and we haven't reached a solution. File integration needs some work. This is something that I was happy to scratch this morning, because we have been working on this during the sprint and it has improved significantly, so that's no longer a problem. We have poor support for revisioning. I believe that is also true for REST in core, although I'm not 100% sure. But still, we need to be able to support revisioning, because that is a very important topic. Another limitation is that all this is only extensible through code, which means that if you want to tweak the responses, or the things that the server accepts, you need to create a module and implement some hooks and create some services and stuff. So that kind of leaves the site builder not empowered to make these changes. I'm not saying that that is a good idea, but that's the state of things. And finally, it's limited to the entity system. While this is true, I don't think it's a problem at all, because these days everything is an entity.
And to end this presentation, I will say that there are some open challenges nowadays. This may be true for any decoupled site, but for Drupal, versioning the content model is extremely difficult, because in version one of your API you may want to have this set of fields, and then in version two you remove a bunch of fields, and you want a client that requests version one to get certain data and a client on version two to get only some of that data. The thing is that you can only have one set of fields in your content type at a time. If you go to your administration page and check the fields that are there, you only get one set of them. And when you delete fields, the SQL tables may go away, so there's no way that a request for an older version will get that data, because the data is just not there. That is a problem that we're not sure how to solve. Responsive images and image styles are also a problematic topic, mainly because the image styles are declared in the server, declared in Drupal, and they are an opinion on what the image should look like. So we are encoding presentation in the server, instead of having the consumer, as we said, request what they want and how they want it. We would need some way for the consumer to say, hey, I want this image in this way, crop it like this, et cetera. So we're still debating whether it's a good idea to do that in Drupal, or whether it should live somewhere else so we don't have to worry about it. Data processing is also an open challenge, and this is one of the areas, or at least the only one that I know of, where GraphQL is superior to this specification. Things like: you're sending me a timestamp as the creation date of this article; I don't want that timestamp, I want a date string based on the server's time zone. That would be a valid use case, and it's the consumer's job to request this data processing. So we still have some work to do here.
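To show what that data-processing gap means in practice, here is a sketch of the transformation a consumer has to do itself today. It assumes a JSON API-style payload where `attributes.created` is a Unix timestamp; the resource shape follows the spec's `data`/`type`/`id`/`attributes` structure, but the field names are illustrative.

```javascript
// A JSON API-style response for an article, with the creation date
// delivered as a raw Unix timestamp (illustrative payload).
const response = {
  data: {
    type: 'node--article',
    id: 'abc-123',
    attributes: { title: 'Hello', created: 1500000000 },
  },
};

// The consumer has to turn the timestamp into a readable date string
// itself, in whatever time zone it wants, because the spec gives it
// no way to ask the server to do this transformation.
function formatCreated(resource, timeZone) {
  return new Date(resource.attributes.created * 1000)
    .toLocaleString('en-US', { timeZone });
}
```

With something like GraphQL the consumer could ask the server for the already-formatted string; with JSON API this kind of per-field processing currently lives in every client.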
Multiple-operation requests would do the same thing that we're doing for includes, but for write queries. Again, it has many drawbacks, but it is still appealing to many people, so it's in the balance whether we want to do that or not. And finally, aggregated values. How do we deal with requests like "give me a count of the videos for every season of this particular show"? How do we express that in a JSON API-compliant way? If you want to help with any of these topics, or anything else, work on Drupal core, or just sprint, there are sprints this Friday. They are really cool, especially if you haven't been to a sprint before. I really recommend it; I always have a lot of fun there. So if you want to check those out, please go. Also, thank you to all these people, because without them my presentation would be a white slide with black text. So that's it. Make sure that you log in and, if you liked the session, provide some feedback, so the Drupal Association people will know whether you liked the session or not. If you did not like the session, you can just forget about this. That's it, ta-da. We do have a couple of minutes for questions if you have any. Can you step to the mic, please? Otherwise the question won't get recorded, and it will avoid some shouting. Okay, so now we have two kind of similar initiatives in core, that is JSON API and GraphQL. So why do we have both, and how do they compare to each other? Right, yes, I knew that this was coming. I mean, it's a super valid question. When you choose one or the other, it may be a matter of what technology you're using. I made a big point of showing that there is broad support for the JSON API specification, and you can have, say, a React client use JSON API. But if you use technologies that deeply integrate with GraphQL, like Relay, which makes your queries for you really easily, then GraphQL is a better option.
So I guess that my answer is: when you have decided what kind of consumer you want to build, like an iOS app or a JavaScript web app, investigate what the support is like and what gets you the most things for free, because at the end of the day the features are really, really similar. You can do mostly the same things, except that JSON API is a little bit better around some edges and GraphQL is a little bit better around others. Does that answer your question? Yep, thank you. Any more questions? Oh, I forgot to ask: do any of you actually use JSON API? Can you show your hands? See, there's a bunch of people already. So that's why you don't have questions, because you're already using it. Okay, I don't think there are any other questions, so you're free to go.