Hello everyone, my name is Kampra, and today we'll be talking about GraphQL. So the story is, we were going to redesign our user experience, and in the process of this redesign we wanted to surface more data, and all of this was supposed to happen with no performance penalty. We weren't entirely sure how we were going to achieve that, but what we clearly wanted was to fast-forward from the screen on the left-hand side to the one on the right-hand side. Our product helps customers manage social media campaigns and posts. What I mean by that is, across the different social media platforms, be it Twitter, Facebook, or Instagram, our customers can moderate and create posts all under a single roof. So rather than going to Twitter separately or going to Facebook separately, reading your posts there, seeing who has liked a comment, or liking a particular post there, you can do everything in one place. As part of the redesign, we wanted to surface more data. On the left-hand side, as you can see, we just had the post and a label saying that this particular post belongs to Twitter and has now been delivered, that is, published on the platform. We wanted to fast-forward to the right-hand side of the screen, which uses a conversation API so we can show the thread attached to this particular post, with the various comments and the replies to those comments. We wanted to show the number of followers, for example, we wanted to show the profile image of the customer, and various other details. Similarly, to give our customers a richer experience of their content, we wanted to show whether a particular post had been liked by someone or not, and whether it had been read by someone or not.
So in addition to the information we were already displaying for our users, we wanted to show more, and that data was split across several resources. First, we had the content API. It would give us the name of the user, the profile image, the various actions performed on the post, the date, and numbers such as the follower count of the user and the number of tags on the post. Then we had the conversation API, which, as I mentioned before, gives us the entire array of the conversation on a particular post, listing the various comments and the various replies. And third, the analytics API. So over the next thirty minutes I am going to talk about how we needed to get data from three different resources and how we reorganised our infrastructure around that. As you can see on the right-hand side of the screen, we were talking to three services: the content API, the analytics API, and the conversation API. The first problem was that we had to wait for a response from the content API before we could fire the analytics API and the conversation API. The second problem we faced was over-fetching of data. The whole payload was simply dumped on the UI, and it was the duty of the front-end developer to pick out what was needed from this dynamic information. We wanted to move this logic to the back end so that we could improve performance and get rid of that overhead. I have deliberately put this sloth image here because at that time our website was rendering even slower than this sloth is moving. The page load time was around 8 to 9 seconds, and the page size was around 2 MB.
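The waterfall described above can be sketched in client code. This is a minimal reconstruction, not our actual front end, and the endpoint paths and field names are hypothetical:

```javascript
// Sketch of the original client-side waterfall (endpoint names are
// hypothetical). The analytics and conversation calls cannot start
// until the content call has returned, so the latencies add up.
async function loadPostScreen(fetchJson, postId) {
  // First round trip: the content API blocks everything else.
  const content = await fetchJson(`/content/posts/${postId}`);

  // Second and third round trips, fired only after the first returns.
  const [analytics, conversation] = await Promise.all([
    fetchJson(`/analytics/posts/${content.id}`),
    fetchJson(`/conversation/posts/${content.id}`),
  ]);

  // The UI then had to pick the few fields it needed out of the
  // full payloads -- the over-fetching problem.
  return {
    title: content.title,
    likes: analytics.likes,
    comments: conversation.comments,
  };
}
```

With three dependent round trips, total latency is roughly the content call plus the slower of the other two, which is exactly why a slow connection multiplied the page load time.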
And if a user was in an area with really poor internet connectivity, and let's assume that each of these requests took 5 to 6 seconds, the page would render in a whole 18 seconds, which was too much and quite expensive for us. So yes, we definitely had a problem, and we needed solutions for it. Let's go through the options we considered. We could have added a new custom REST endpoint encoding exactly the data we required, but that would quickly lead to an explosion of custom endpoints. So by this time you must be thinking, and trust me, even we felt, that we had no solution to such a problem: we had to make three round trips to the server, we had to massage the data on the front end, and we wanted all of this information in a single round trip. That's when we came across GraphQL. Just so we're on the same page in terms of what it is: despite the name, GraphQL has nothing to do with graph databases. It has no data store associated with it. This is how a sample query looks in GraphQL. We have a getPost query where I ask for exactly the information I need. For example, I ask for the title and the created date, and in that single API call I could also ask for the author's information: the name of the author, the image of the author, the number of followers this particular author has, and the names of the authors who created the comments. GraphQL beautifully supports this nested data fetching, and I can declaratively write out my data needs. On the server side this means creating our schemas and writing a resolver for every field, and that is how GraphQL works: the resolvers are fired in a tree-like fashion. So for example, I have a getActivity resolver here; for each of the fields inside it we have second-level resolvers, and once I get the activity I'll fetch its title, and similarly the cascade goes down the tree. So it's now quite clear that this eliminated multiple round trips to the server. We make just one round trip and get all our data in a single API call, and it solves the problem of under-fetching and over-fetching.
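The query described above might look something like this. The exact field names are assumptions on my part, reconstructed from the fields mentioned in the talk:

```graphql
# Hypothetical shape of the getPost query: one round trip that
# names exactly the fields the screen needs, nothing more.
query GetPost($id: ID!) {
  post(id: $id) {
    title
    createdAt
    author {
      name
      profileImage
      followerCount
    }
    comments {
      text
      author {
        name
      }
    }
  }
}
```

Each nesting level here corresponds to one level of resolvers firing in that tree-like fashion: the post resolver runs first, then the author and comments resolvers, then the comment-author resolvers.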
Whatever I ask for, I get exactly that in the response. So for example, if I just want the name and the title, that is all I put in the query, and if I remove a field from the query, it gets removed from the response. That is what GraphQL provides us. And third, it is easier to aggregate data from multiple resources. Rather than the client talking to each of these three different APIs, we had this complexity handled by the GraphQL layer, and what the client was supposed to do was simply make a call to the GraphQL layer; the GraphQL layer would handle talking to the content API and the others. So now our new architecture diagram looks something like this. Instead of, as I mentioned, the client talking to these three different APIs, the client just talks to the GraphQL server and asks for what it needs, and the server owns the complexity of talking to the three APIs. At that point we were like, okay, we have solved the problem, and we were celebrating. But little did we know that it was a short-lived solution. We saw that our GraphQL queries started failing, even though we had not made any changes to our GraphQL server, and the last time we checked it had all been working very well. After some investigation, we found that the REST APIs our GraphQL server was built on top of, the content API and the others in our case, had changed underneath us: one endpoint had been deleted and another had been renamed, so our schema was out of sync. We were quite sure that patching the schema by hand was not a viable solution when it comes to production and scalability. So what we did was set up a task which, every five minutes, would talk to each of our REST APIs and request their metadata. Luckily for us, our REST APIs were self-describing: each exposes a descriptor listing the particular endpoints which that REST API supports. From that, we could regenerate the GraphQL schema in sync with each and every REST API. This is a sample of the descriptor we got from a REST API.
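The regeneration step can be sketched roughly as follows. The descriptor format here is entirely hypothetical (a real service would expose something richer, such as a Swagger/OpenAPI document), and this is an illustration of the idea, not the actual tooling used:

```javascript
// Minimal sketch of generating GraphQL SDL from a self-describing
// REST API's descriptor. In production this would run on a
// five-minute timer, diffing the generated SDL against the running
// server's schema and reloading it on change.
function buildSdl(descriptor) {
  // One object type per resource, fields copied from the descriptor.
  const fields = Object.entries(descriptor.fields)
    .map(([name, type]) => `  ${name}: ${type}`)
    .join("\n");

  // One Query field per endpoint the REST API says it supports.
  const queries = descriptor.endpoints
    .map((e) => `  ${e.name}${e.args ? `(${e.args})` : ""}: ${e.returns}`)
    .join("\n");

  return (
    `type ${descriptor.type} {\n${fields}\n}\n\n` +
    `type Query {\n${queries}\n}`
  );
}
```

The point of the design is that the GraphQL schema is derived, never hand-maintained, so a renamed or deleted REST endpoint shows up as a schema change within five minutes instead of as a mysterious production failure.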
So for the type Activity we have the id, the title, and so on, and then a Query type which lists the endpoints that the particular REST API supports. For example, I have listed activities, which gets me all the activities, and activityByType, which filters the activities by type. The conversion was now very easy, and the schema stayed in sync automatically. The next thing we looked at was state management on the client. A lot of the code the UI carried around existed only to stitch together the data from these REST APIs; the UI was never really supposed to deal with all this data manipulation, and with GraphQL it was no longer required. For example, waiting for the content API, then calling the server again for the conversation API, was no longer needed, and the state management we required was now easily achievable. Then there was caching. In the REST world we have, let's say, fixed endpoints, so we can easily make use of HTTP caching. In GraphQL everything goes to a single endpoint and the query in the request body keeps changing as per your requirements, so HTTP caching doesn't help, and that was one reason to switch from isomorphic-fetch to Apollo Client. Earlier we were making calls with isomorphic-fetch, as I mentioned; now we switched to Apollo. Apollo provides out-of-the-box support for caching. Just to explain how Apollo's cache works: everything in GraphQL, every data fetch, is in the form of a graph, and whatever result you receive corresponds to a traversal path, which we can think of as a tree, where the path describes how to reach a particular object. When I fire a second query, Apollo sees that, okay, it has already memorised the traversal path for the author object, and rather than fetching the author object again from the back end, it retrieves it from its own cache instead of making the network call. This path-based store beautifully helps us achieve caching.
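The caching idea described here can be reduced to a toy sketch. This is not Apollo's actual implementation (Apollo normalizes results into a flat store keyed by type and id); it only illustrates the principle that an object fetched once is served from memory afterwards:

```javascript
// Toy version of the cache idea: each object is stored under a
// stable key, and a repeat request for the same object is answered
// from memory instead of hitting the network again.
function makeCache(fetchObject) {
  const store = new Map(); // "Type:id" -> cached object
  let networkCalls = 0;
  return {
    async get(typename, id) {
      const key = `${typename}:${id}`;
      if (!store.has(key)) {
        networkCalls += 1; // cache miss: go to the back end once
        store.set(key, await fetchObject(typename, id));
      }
      return store.get(key); // cache hit: no network call
    },
    networkCalls: () => networkCalls,
  };
}
```

Asking for the same author twice results in a single back-end fetch; the second read is served entirely from the store.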
Next, query size. When it comes to GraphQL, as we have seen, the query size is enormous; in production it could go above 10 KB just because of the text of the query we are sending, whereas a REST URL is hardly 50 or 100 characters, and getting your performance degraded purely on the basis of query size was something very bad. So we used automatic persisted queries. With automatic persisted queries, rather than sending the textual, English-like query, we send a hash of it. When the Apollo server receives a hash from the Apollo client, it looks it up in its registry; if it finds a match for that particular hash, it retrieves the query and fires it. In case the hash is not found, the Apollo server says, okay, I don't know what this particular hash is, send me the textual query. The Apollo client then sends the textual query, the server registers it, so that the next time this particular hash comes in, the server will retrieve the query from its registry and there won't be any need to send the text again. The next tool provides beautiful support for batching and caching: DataLoader. The whole purpose is to make fewer API calls to the back end. Think of fetching the names of the authors who have commented, let's say, on this post. These authors can be repetitive, so naively we would be fetching the information about author one and author two repeatedly. What Facebook's DataLoader does is batch these requests: rather than sending requests for one and two again and again, it collects them, sends the unique keys in a single batch, and in return provides a promise for each. Once that batch has been resolved, the results are given to the UI. The load function, as we see here, is also used for caching.
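The batching behaviour described above can be sketched in a few lines. This is a rough illustration of what DataLoader does, not the real library (which also handles per-key caching, errors, and custom scheduling):

```javascript
// Rough sketch of DataLoader-style batching: individual load(id)
// calls made in the same tick are collected, deduplicated, and sent
// to the back end as ONE batch request.
function makeBatcher(batchFetch) {
  let pending = null; // current batch: { ids: Set, promise }
  return function load(id) {
    if (!pending) {
      const ids = new Set();
      // Schedule the batch on a microtask, so every load() call made
      // synchronously in this tick lands in the same batch.
      const promise = Promise.resolve().then(() => {
        pending = null; // the next tick starts a fresh batch
        return batchFetch([...ids]); // one call for all unique ids
      });
      pending = { ids, promise };
    }
    pending.ids.add(id);
    // Each caller gets a promise for just its own result.
    return pending.promise.then((byId) => byId[id]);
  };
}
```

So if the author resolvers ask for authors 1, 2, and 1 again while rendering one response, the back end sees a single request for the unique keys 1 and 2.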
So once I have made some query, the load function remembers it, saying, okay, I don't need to make a call to the back end again, I can retrieve this information directly from the cache. So now our final architecture diagram looks like this, and the page load time also came down from 9 seconds. And this is our sample output for the whole screen we saw at the start: the first activity has exactly the information I require to render that particular tile, including the information from all three APIs, be it the conversation API, the analytics API, or the content API, and exactly the fields that the query asks for, with caching of data, and all the results retrieved in a single call. Also, I would like to add that GraphQL is additive. You don't have to restructure your REST APIs for it; it's just an additive layer, so you can make the transition gradually while everything keeps working. And you don't need to version your APIs for different clients. For example, my web client might ask for just the names of the followers, while, let's say, my Android client asks for the name as well as the id of each follower. With a plain REST API I would have to change query parameters or maintain separate versions, but with GraphQL each client simply declares its own data requirements, which is basically: you get what you ask for. A single endpoint, declarative data fetching, and free caching via