Hello everyone, I'm Shahid. I work at Hasura and I tend to shamelessly plug my Twitter handle everywhere I go. So please follow if you like what you see here today — I'll be ranting about these things all the time. Cool. So I work for a company called Hasura. Most of you might have got Hasura stickers and would be wondering what this is about. I'll come to that towards the end, and I'd like to start the talk by introducing a problem. This is something everybody would be aware of, or you have seen in your life at some point. Before that, can I see how many front-end developers are here? Just raise your hand. Okay, awesome. Back-end developers? Cool. Is there any other category of developers here that I'm not aware of? Full stack, okay. Who is full stack? Okay, GRAND stack, JAMstack — a lot of stacks out there. Okay, any GraphQL users here? Okay, very few. Serverless functions, functions as a service? Awesome, okay. So most of this stuff might be new for a lot of you, but that's the whole point, right? Cool, so let's see how we build a typical application. You have a mobile app or website or something of that sort, and it will be contacting an API layer, which will be talking to a database, which will again be talking to multiple other microservices — or, as always, there's the monolith where everything is inside. I'm not going into that part because hey, we are in 2018, right? Monoliths are what, 1990s maybe? Yeah, anyway. So you have these microservices talking to each other, and these particular microservices that I'm introducing here — we'll be using them throughout the talk and in the demo towards the end, so just keep them in mind. You might have guessed already what this application does, right? It's a food ordering app. So, any thoughts on what kind of issues this architecture can have? I'll give you a clue: two of those issues are in the title of the talk. Any random shout-outs? Yeah, scalability, resiliency — but why?
Why is it not scalable? I'm assuming that it's implemented in a naive way, okay? Of course you can make this scalable — all of us can do this — but I'm talking about the naive way of doing stuff, where you make a request from the application, it hits the API layer, and the API layer does everything that has to be done. In this case, it makes an insert into the database, validates the order, processes the payment and assigns a delivery agent, and after doing everything, it returns back to the client, right? So this is a synchronous action. You are doing something on the client application, it's going through the entire flow on the backend and then coming back to you. Okay, I'm talking about this scenario. This particular setup is not scalable. Do we all agree that synchronous actions are inherently not scalable? You can't do thousands of them at once unless you allocate the required resources, right? Do we agree? Anybody disagree? Okay, no. Either you don't have very strong opinions, or you're trying to be very polite, or you're taking a nice nap after lunch — which one is it? Okay, anyway. So resiliency is something that you build into the application, right? What happens if your interaction with the restaurant microservice fails? Say that microservice has crashed — how do you restart the process from the point where it failed? This is something that you typically build into the API layer; you'd write code to do this. Now, a typical solution to this problem, when we see that most of our issues are due to synchronous things happening in the backend — what is the golden hammer that comes to mind? Let's make it async, right? Let's make it async. So now I'm making it async. Let's see what an asynchronous architecture looks like. You have the API layer again, you have the database, and you make an insert, let's say, into the database.
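To make that naive synchronous flow concrete, here's a minimal sketch. All the service and helper names (`services.db.insert`, `restaurant.validate`, and so on) are hypothetical stand-ins for the steps described above, not a real API:

```javascript
// Naive synchronous order endpoint: every step runs inline, and the
// client waits for the whole chain before getting a response.
// Service names are illustrative placeholders.
function placeOrderSync(order, services) {
  const id = services.db.insert(order);        // 1. insert into the database
  services.restaurant.validate(id);            // 2. restaurant validates the order
  services.payments.charge(id, order.amount);  // 3. process the payment
  services.delivery.assign(id);                // 4. assign a delivery agent
  return { id, status: 'completed' };          // 5. only now return to the client
}
```

If any of steps 2–4 throws, the whole request fails, and the caller is holding a connection open for the entire duration — which is exactly why this shape doesn't scale.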
You have something like an event system — Kafka, RabbitMQ — and you keep inserting events. Other microservices consume these events and process whatever is meant to be done, right? Most of you are familiar with this architecture, right? Yes, no? Yes, okay, awesome. Cool, so this is the typical async architecture that we see: you interact with the database, your state is stored in the database, and you execute certain actions based on events happening on the database, right? Nice, but microservices are, again, like monoliths — so 2000s, right? Or maybe like last year. Today everybody is all about serverless. So let's talk about serverless a little bit. My aim is to replace my microservices with serverless functions so that I don't have to take care of a lot of things, right? I saw there are very few serverless users here, or functions-as-a-service users, so let me just introduce what serverless is and what it lets you do. Typically, when you need to write a web service, you need an API which tells you hello world, okay? So this is my Express code for that. Typically you would write this function, and then you will have something like app.use or app.listen, and you will maybe run this on a typical VM with pm2, node, whatever — just wrap a server around it. Or, if you are fancy enough, make it a container: docker build, docker push, maybe put it on Kubernetes, right? How many of you are Kubernetes users? Okay, awesome. So what serverless functions — serverless frameworks, serverless platforms — let you do is this: you typically have a command-line tool or a UI where you just paste this function, or you execute one command which takes this function, which is in one index.js file, and gives you an endpoint to trigger it. So this will give me a URL, and I can just call that URL and get the response.
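For comparison, here is roughly what that "hello world" looks like as a cloud function: just the handler, no app.listen and no server wiring. The `(req, res)` handler shape matches Express-style platforms such as Google Cloud Functions; treat the details as a sketch:

```javascript
// A serverless "hello world": the platform routes the HTTP request to this
// handler; everything around it (server, routing, scaling) is managed.
// On Google Cloud Functions you would export this as exports.helloWorld.
const helloWorld = (req, res) => {
  res.status(200).send('Hello world');
};
```

That one function, in one index.js file, is the entire deployable unit — which is what makes the "paste it and get a URL" workflow possible.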
So it's scalable by default — scalable by nature — because you are not managing any CPU, RAM or anything. There's no ops: you did not deploy the code to any VM, you did not SSH into anything, you did not create any servers, you did not care about CPU or memory, and it just works, right? There's no VM to manage, no need to worry about scaling, and all serverless platforms are scalable by default. You only need to look at the pricing page here: it says free for a million invocations a month. That itself sounds like scale, right? You can trigger it as many times as you want, concurrently — once, twice or ten times at once. So, scale for free. Most of the cloud platforms — AWS Lambda, Google Cloud Functions, Azure Functions — all of them have very generous free tiers, like a million invocations per month, okay? So this is freedom, right? Most of us here are front-end developers or full-stack developers. I just want an API to be available for my app to work. I know exactly what it does. I don't wanna go through the hassle of creating a VM, deploying it and whatever, right? I know the function, I know how to write that code. I just want this code to be running somewhere so that I can trigger it with an API call. But is it resilient? We talked about scalability — is it resilient? If some serverless function fails, how do I restart the same action? How do I trigger the same action again, right? Serverless functions by themselves are scalable, but not resilient. So what you have to do is throw in some idea of state, because the functions themselves are ephemeral — they don't have any idea of state. So you store your state in the DB, the database. For example, in the earlier case, you say payment done or not done. You store it in the database, and whenever something changes in your state, you trigger events.
You trigger the serverless function on these database events. So when you have this event system and you trigger serverless functions on these events, and add retries to the mix, you get resiliency. Let me explain that with an example. In the earlier example, we have this food ordering app, right? You have this schema — this is what your order table looks like. Whenever you create a new order, this is the state, right? Now there are four steps here, which is very important: one is is_validated, where the restaurant does the validation; the others are is_paid, is_approved and is_agent_assigned. Let's look at the flow using the serverless architecture. A new order is placed. Now this is the database state, right? Everything is false. This particular database state will trigger an event, which will trigger a serverless function, which does whatever logic is required to validate the order. That function does its logic, comes back and updates the database to say validated is true. This particular event — is_validated being updated to true — will trigger another serverless function, which will again come back and update the state, and this continues over all the steps, right? And finally, when everything becomes true, your order is completed. Now, your client just created an order, right? You just place an order from the client, and what happened is, you replaced your earlier synchronous microservices architecture with serverless functions. Your microservices become serverless functions; serverless functions are asynchronous and scalable by nature, and when you trigger them using database events, you get resiliency. But what is the cost, right? You don't get anything for free. You can do all these things, but what is the cost of doing this?
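The step-by-step flow above can be sketched as a tiny simulation: each database update "emits" an event that invokes the next function, until every flag is true. Flag names follow the order table described here; the dispatch loop is a stand-in for a real event system, and each "function" just flips its flag:

```javascript
// The four pipeline flags, in order, as in the order table.
const STEPS = ['is_validated', 'is_paid', 'is_approved', 'is_agent_assigned'];

// A new order row: everything starts out false.
function newOrder() {
  return Object.fromEntries(STEPS.map((s) => [s, false]));
}

// Stand-in for the event system: find the first incomplete step, "invoke"
// the serverless function for it (here: just flip the flag), and let that
// database update trigger the next event, until the order is completed.
function runPipeline(order) {
  const next = STEPS.find((s) => !order[s]);
  if (!next) return order;   // all flags true: order completed
  order[next] = true;        // the function updates the database...
  return runPipeline(order); // ...which fires the next event
}
```

The important property is that the *database row* is the source of truth: if any step fails, re-delivering the event resumes the pipeline from exactly where it stopped.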
Any clues? We talked about an event system and all, like it's something available off the shelf — you can just pick it up and use it, right? What you did was move specific failure-handling logic from your API layer to another system, a generic failure-handling system, which is the event system, right? The cost is that in this asynchronous architecture, you need a generic event system which is able to trigger microservices or serverless functions or webhooks or some logic when database state changes, and you need asynchronous serverless functions. So what we did was take this architecture, take the synchronous portion of it, and make it asynchronous by introducing this new architecture. Again, what's the catch? This is a problem, right? Your client made an order. Earlier, whenever the order was completed, the client got an update back. Now, whenever something is happening on the backend, how does the client know whether the order is validated or the order is placed? You get a response immediately when you place an order — that's what we showed here, right? You make a database insert, you get a response back. So how does the client know what is happening in the backend? How do you send these asynchronous things that are happening in the backend to the front end? Yeah, somebody shouted it: GraphQL. So, GraphQL is like this new kid in town. Very few people here knew GraphQL, so I'll just give a small GraphQL intro. In a typical REST scenario, you will have these particular endpoints to which you do GET, POST, PATCH — all these requests, all these methods — and you get a response back. So if there's a product and you only wanna get the name of the product and nothing else, the backend has to implement something like ?columns=name or something similar, right?
Using GraphQL, the query looks like what is there on the right-hand side. You write something called a query, you say what it is that you want to query, and you get exactly what you asked for. And if you wanna query over multiple relationships in a database, this is how you do it: you nest them inside the same object, and you get that response back. How many of you are React developers? Angular? What about the rest of you? Vue.js, how many? Okay, a couple of you. Still jQuery, huh? It's almost 2019. So React and all these modern UI frameworks have this component-based architecture, right? You write a component, you use the component everywhere — you reuse the component. So when you have a component, let's say a user profile component, you need to know the user's name, a link to a profile picture and, let's say, the user's location. Three things are needed to render this profile component. So you can write a specific GraphQL query to fetch only this information — the name, the image link and the location — and you can wire it up to the component, and that's it. Your library will take care of everything else. You don't have to get the response from your REST API and transform it into what you want it to be to pass to the state of your component. All of that goes away. And whenever you want something else to be done — how many times has it happened that you are working on a UI component, you wanted one more extra field from the backend, and you had to call the backend developer, and they were sleeping, or "I'm on vacation", "I'm on leave, I'll do it as soon as I'm back", and your work gets put on hold because of that, right? All these backend-frontend interactions go away in this scenario, because there's one endpoint which exposes everything — of course, everything that is allowed to be exposed — and then the front end can decide what exactly they wanna see, what exactly they wanna query.
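Conceptually, this is what a GraphQL server does for that profile component: given the whole record, it returns only the selected fields. A toy illustration, not a real resolver — the field names are made up:

```javascript
// Return only the fields the query selected, dropping everything else --
// the "you get exactly what you asked for" behaviour that makes the
// component wiring so direct.
function selectFields(record, fields) {
  return Object.fromEntries(fields.map((f) => [f, record[f]]));
}
```

So a profile query selecting name, avatar_url and location hands the component exactly those three keys, with no transformation step in between.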
So this kind of puts UI first. You decide what your application needs to show on the front end, and depending on that, you do the database modeling, or you write the right query to get the correct information. The amazing thing about GraphQL is subscriptions. We talked about all these things happening in the backend, where the order was being validated, and you need some way of sending this back to the client — that's where we stopped earlier. So you need some way of consuming streaming information from the backend without worrying about too much stuff. Do you know that REST has a watch verb? How many of you have seen this watch verb or used it somewhere? I'm squinting at the Kubernetes users out there. So Kubernetes has this watch method. It's not in a standard spec, I guess, but they have implemented it. It lets you watch a resource: basically, it opens a WebSocket connection and sends you updates over that connection. GraphQL subscriptions are exactly like that. You have a database, and you are able to send information to the client — not just when the client asks for it, which is how REST works, right? Your client asks for it, you give it back. Instead, the client opens a subscription. It's like a WebSocket connection — it is a WebSocket connection — and you are able to send data from the backend to the front end using GraphQL subscriptions. For example, in our food ordering example, this is the order table. You have two fields, payment and dispatch. Over time, both of them become true. You wanna show this on the UI as and when it happens, right? What would you do, all the front-end developers out there, if you had no idea about GraphQL — how would you do this? Polling — the first solution is polling, right? You keep polling the backend, the REST API, at a particular interval. Somebody said sockets.
Now, sockets are not something the front-end developer alone can do, right? Polling, the front-end developer alone can do. Pardon? Real-time DB, yeah. Can you name one real-time DB? Firebase — anything else? CouchDB, RethinkDB — a lot of real-time databases are out there. How many of you are actually using them? Firebase, I agree, many might be using, but the other ones? And Firebase is a NoSQL database; we need to take that into consideration too. So the second solution is to use WebSockets. With WebSockets, there should be a contract between the front end and the backend — both parties need to agree on a common contract to send data across, right? And again, the same problems as with REST APIs come up. What if you wanna get one more field in the WebSocket response? You need to again ping the backend developer. So you do polling, you do WebSockets, custom APIs — but those who have implemented WebSockets will understand that the community is really fragmented. Socket.IO and all are there, but have you tried making a JavaScript client talk to a backend built on different tooling, or the other way around? Integrating different frameworks and different tooling is very difficult, because there's no set standard in the community which says this is how information should be passed around over WebSockets. It's up to the developers to decide: the front-end and backend developers decide this is how we are gonna do it. Or, most of the time, one party decides and the other one just has to go along. With GraphQL, this comes down to a very simple mechanism. The spec is standardized — everything is set in stone, or in one GitHub repo where the spec is written. This is how the client should interact with the server, this is how the server should send information to the client, and all clients implement this spec. So all clients are compatible with all servers. If the server says "I'm GraphQL compliant" and the client says "I'm GraphQL compliant", it works.
This means most of the work has already been done by the open source frameworks and libraries out there, and you just have to use them. So when you have to do the thing that I showed you earlier, it just becomes this: instead of query, you say subscription. That's all. Your React GraphQL library — in React most people use Apollo — will get the update from the backend and refresh the component automatically. So whenever any of these things happen in the backend, you will be able to get that update on the front end. Now, of course, you need a GraphQL server with subscriptions, right? So we took our simpler architecture from earlier and made it this complicated, and we need all these new pieces. Going back for a comparison: this was our traditional architecture, which we said is not scalable because it was synchronous. This is my asynchronous, scalable architecture, and this is my GraphQL and serverless architecture. You have an application, you have something sitting here interacting with the database, and the client is interacting with the server over GraphQL. The server is also triggering serverless functions based on the actions happening on the database, which again update the database, and so on and so forth. So, some fun, right? Lots of theory — some practical stuff has to happen. Cool. I hope this works — I heard there is conference Wi-Fi; I'm connected to it myself. So let's see how it goes. Why don't you go and scan this QR code? You'll be taken to a page which says "order app" or "analytics app". Preferably go to the order app, and we'll come to the other one. So how do I do this? This is an example application built to demonstrate this scenario. Okay, I will also do one thing... somebody is already messing with the demo. Awesome.
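To make the "just change one keyword" point concrete, here's a sketch of the live-status documents for the demo's order table. The where-clause is Hasura-style syntax; treat the exact field names as assumptions:

```javascript
// One-shot fetch of an order's pipeline flags.
const ORDER_QUERY = `query {
  order(where: { id: { _eq: 1 } }) {
    is_validated
    is_paid
    is_approved
    is_agent_assigned
  }
}`;

// The live version: same selection, but the server now pushes every
// change over the WebSocket instead of answering once.
const ORDER_SUBSCRIPTION = ORDER_QUERY.replace(/^query/, 'subscription');
```

The selection set is identical; only the operation type changes, which is why switching a component from fetch-once to live-updating is a one-word edit.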
Okay, so I'll keep this on one side, and we'll play around with other stuff on the other side. So when I place a new order, I'm able to see the status in real time. Whoever is placing these thousands of orders, please make sure you pay for the order, okay? Otherwise it'll just be stuck there — nothing will happen after that. So these are the number of orders being placed, and this bottom graph shows the number of orders being validated, okay? And then the payment happens, and so on and so forth — the validation and the restaurant approval happen. Now, while this is going on, if we can stop ordering and please listen, I would just like to show you what's happening here. This is the code — the React application where you are placing the order, okay? There is this placeOrder JS function where you are just doing this thing called a mutation, which is inserting an order item, right? When you choose the item, it's inserted into the database. All I did here is write this query, and this onClick function is bound to it — we saw earlier in the talk how this happens. So whenever the order is placed, this is what happens. These are the items being loaded: you say query, you return data.item.map, and you render the item name with a checkbox. Compare this to how you would use a REST API: you would use fetch, call an endpoint, get the response, pass that response back into the state, and then render the component, right? Here you have a component called Query that is available, and it just works right away. And this is how you are placing the order: a mutation. The mutation is place order, and the order gets placed right away. Whenever you wanna see the status of the order, this is what happens: you have a subscription — this is a live status, your order keeps getting updated. These are the fields that I wanna show, and these fields are mapped to the component right away.
So here you can see the payment status is shown, the Subscription component is being used, and based on that I'm rendering this template; the status is rendered by this function. Now, when you place that order — oh wow, nice. Guys, please, please pay for these orders, no? Otherwise it won't move forward. Okay, I'm being generous, I'll pay for a few. So this is what is happening in the backend. There's an order table, and whenever you place an order... somebody whose name is "E-L-L-O-O-O" — I wanna see this person. Please raise your hand, the one who is messing with my demo. Reveal yourself, please. No? Okay, we'll find you. Oh my God, poor server. So when you place a new order, what's happening is this serverless function is getting triggered, okay? This function is hosted on Google Cloud, where you can see validate order and the number of invocations for the last one hour. It's going pretty high, right? 60, 70 invocations per second are happening. So whenever you are doing any action here, the corresponding serverless function is getting triggered, and you can see it's climbing slowly. This is partly because Google's cap on invocations per second is quite low, like 60 requests per second, and you can see that as and when it gets capacity, it happens. So let's try to pay for all these orders. This person is gonna... So I'm gonna pay for all the orders. What I'm gonna do is update the order table and set is_paid to true where is_validated is true. Those who are familiar with GraphQL will understand what this is, and once that is done, it tells me how many rows were updated. So I can execute this query — if the internet works, of course — and it'll tell me how many rows were updated. So 18,200 orders have been paid for, okay? You can see this massive jump here, right? This green jump is due to the orders that have been paid just now. So our validate order function is still slow and still getting invoked.
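For reference, the bulk "pay for everything" mutation described above looks roughly like this in Hasura-style syntax (table and column names as used in the demo; treat the exact shape as an approximation):

```javascript
// Mark every validated order as paid, and report how many rows changed.
const PAY_ALL_ORDERS = `mutation {
  update_order(
    where: { is_validated: { _eq: true } },
    _set: { is_paid: true }
  ) {
    affected_rows
  }
}`;
```

The `affected_rows` field in the response is where the "18,200 orders have been paid for" number comes from.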
So I will have to resort to my videos, because of the aforementioned person who is placing infinite orders. We saw how we can place one order, and when we make many orders, this is what the ideal situation would look like. This is what happens, right? When you have a system and you didn't design for the kind of scale that you expect — even if you call it scalable, resilient, whatever — you need to have an anticipation of that scale. Last time I did this demo, I got around 40,000 orders, and I was under the same impression. But yeah, you guys are really awesome — not everyone, that one person. We will find that one person. Okay, whoever you are, come to me after the talk is done. I have some gifts for you. Good gifts. So the idea is — this is the ideal scenario — the validation will go on. I only configured this to handle that kind of scale, so you will be able to see the validations going on while all of the orders are pending payment. When you pay all the orders, suddenly the payment graph jumps, and the next step, restaurant approval, continues to happen. And once that has reached a particular state, the next function, which is the delivery agent assignment, continues triggering. So now imagine building an application which can handle 50,000 orders. If you had to handle this kind of scale the traditional way, you'd have to do I don't even know how much work. This is just front-end hacking: a couple of React lines and one database model, that's it, done. So let's say your network breaks down. I have a demo for that. I can't really show you that on Google's system because there's no way of turning off a function, so this part runs on AWS. I'm making some orders: I have paid for some 1,000 orders, the validation is happening, and the payment has also happened. My restaurant approval function is on AWS Lambda, and there is this button called throttle. I click the throttle button and the function just stops executing.
You cannot execute it anymore. So this many approvals have been done by the time the function is throttled, and this many delivery agent assignments have happened; after that, the graph just flattens — nothing is happening, because this function cannot be triggered anymore. Everything is stuck at restaurant approval. Once I make the function run again, with 1,000 concurrency allocated to it, it starts running — you see that it starts running again, and everything continues as expected. So there was effectively a network failure in between; because of that failure, everything stopped executing, but it picked up again when the network was available. That is how resiliency is achieved. You can see all of these invocation logs: if you go to the events tab, you see the validate order function and its invocation logs. You can see that this particular event was delivered — this was the request and this was the response. Let's see what the order status is. Are we still getting spammed with orders? Looks like it stopped. So keep a watch on this analytics app link after the talk is over too — you will see how the system handles the scale without any intervention. I haven't done any tweaks to handle this particular amount of load; this was something that was working for 1,000, 2,000 orders. See for yourself how the system handles it. Now, it's nice to see all this in one talk, but we have also — formally, or not so formally — put it down in one place. You can go to 3factor.app and read about all these architecture principles that I have just introduced. We use real-time GraphQL for faster iteration, event-driven architecture for resiliency, and async serverless for scalability.
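The resiliency shown here comes from the event system redelivering an event until its handler succeeds. A generic event-delivery loop with retries can be sketched like this (a synchronous toy; a real system would persist the event and back off between attempts):

```javascript
// Deliver an event to a handler, retrying on failure up to maxAttempts.
// This is what lets a throttled or crashed function "pick up where it
// left off": the event is not lost, it is simply delivered again later.
function deliverWithRetry(handler, event, maxAttempts = 5) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { ok: true, result: handler(event), attempts: attempt };
    } catch (err) {
      lastError = err; // a real event system would wait/back off here
    }
  }
  return { ok: false, error: lastError, attempts: maxAttempts };
}
```

For this to be safe, the handler should be idempotent (flipping a boolean flag is), since the same event may be delivered more than once.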
So you can check out 3factor.app and learn more about this architecture. That's it for me. The product that I used just now is the Hasura GraphQL Engine — that is what we do: we build GraphQL on Postgres. Check out the GitHub repo and star it; it's open source. We released it three months ago and have got a huge response. It'll be awesome if all of you star it and show us more love. I mean, did you guys get stars in school? In your notebook, they'd stick stars, right? "Very good", a star and all that. I'm not saying it's the same thing, but that's how we feel. Anyway, "E-W-L-O-O", please meet me after the talk. Any questions, or do we have time for questions? Yeah, let's take a couple of questions. And do star our repo — please show the same enthusiasm you've shown in placing those orders, okay? If you did that, we could get some, I don't know, 200, 300 more stars. Hi, so you have talked about the actual benefits of GraphQL and how it handles so many orders. But what are the cons and drawbacks of using it? What are the disadvantages? So, at every step, when I pointed out the benefits, I also pointed out the cost associated with it. It's not technically a drawback; it's a cost. You need a GraphQL server with subscriptions, and building a GraphQL server with subscriptions is not easy. It doesn't come for free — unless it's the Hasura GraphQL Engine, which is open source and free, but anyway. It's a huge effort; if you want to do this on your own, it's a mammoth task. And if you particularly want to know about the cons: they are the ones that come with any real-time asynchronous system. You need to be able to handle certain race conditions and edge cases which will not appear in any synchronous system, right?
The moment you introduce asynchronous actions or asynchronous patterns into your architecture, you need to be able to handle all the scenarios and problems they introduce. Does that make sense? With GraphQL per se, I can't pick out much — there are issues with the client libraries; any new system has its own issues, but it's getting better because the community is working towards making it better. Any other question? Sorry — yeah, right in front of you. So I had a question. From what I've seen, where you use GraphQL, typically what happens is that the GraphQL query gets sent to your server over an HTTP POST, right? Which sort of makes any type of CDN or any HTTP-based caching completely useless. Have you figured out some way — is there any pattern — so that I can get the best of both worlds, where I get the power of GraphQL while also not losing out on all forms of HTTP caching? So this is, you can say, one of the cons, right? Certain things you will be able to cache on your server. There is one way people have tackled this, which is to send the query in a GET request, in a query parameter — which is a hack, but people have done it. The community itself is working towards better caching mechanisms. But typically you would not need to worry about it unless, yeah, at scale you do need to worry about this. It's a problem that is being actively worked on; there is no solution right now.