Before jumping to gRPC, let's talk about REST APIs. To give some context, we had some legacy apps, which are monoliths: something in Rails, something in PHP. We wanted to move to a more front-end-and-back-end-separated architecture, so we started building services for our own use and separating out the front end. We started with REST APIs only. There are good things about REST APIs: well-defined HTTP verbs, JSON objects that can be easily consumed by any JS library, and it's curl-friendly, so you can access it with any HTTP client. Almost all languages have mature HTTP libraries as well, so you don't have to deal with any of the underlying details.

But when we used that for microservices, we faced some problems, like the fact that not everything is a resource. Consider the example of a calculator service. Maybe the rates can be properly represented in a RESTful way, but consider an endpoint like "calculate premium". These are functionalities; they are not necessarily resources. So our REST APIs effectively become JSON APIs.

And the problem with using multiple languages is that whenever you write a service in one language and want to consume it from another language, you have to create all this stub code. For example, say you have a user service which provides user information, with getUser and createUser. When you want to consume it in Ruby, you have to create classes for the request and response, and you would create a service client which abstracts away the HTTP calls for you. Similarly when you write it in Go (sorry for the rest of you, this example is Go code): the same kind of code has to be written in every language that consumes your internal services. And there are other problems: even for internal services you have to maintain your API documentation, and if something is changing rapidly, you have to keep all those client libraries we created in different languages up to date whenever the service changes.
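To make the duplication concrete, here is a sketch of the kind of hand-written client stub you end up maintaining per language. The service, class names, and URL layout are illustrative assumptions, not the actual service from the talk:

```ruby
require "net/http"
require "json"
require "uri"

# Hand-written request/response classes for a hypothetical user service.
# Without gRPC, every consuming language needs its own copy of these.
CreateUserRequest = Struct.new(:first_name, :last_name, :email)
User = Struct.new(:id, :first_name, :last_name, :email)

class UserServiceClient
  def initialize(base_url)
    @base_url = base_url
  end

  # Every endpoint needs a hand-rolled wrapper around the HTTP call.
  def create_user(req)
    uri = URI("#{@base_url}/users")
    res = Net::HTTP.post(uri, JSON.generate(req.to_h),
                         "Content-Type" => "application/json")
    user_from(JSON.parse(res.body))
  end

  def get_user(id)
    uri = URI("#{@base_url}/users/#{id}")
    user_from(JSON.parse(Net::HTTP.get(uri)))
  end

  private

  def user_from(body)
    User.new(body["id"], body["first_name"], body["last_name"], body["email"])
  end
end
```

The Go consumer then needs the same structs and the same wrapper methods all over again, which is exactly the repetition gRPC's generated stubs remove.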
And on top of that, you have to update the API documentation as well. Also, JSON is text-based, which adds unnecessary performance overhead during serialization and deserialization. With only a couple of services, you may not feel that overhead. But consider a workflow that involves multiple microservices: each time the workflow runs, it makes a sequence of calls, and each time you are marshalling and unmarshalling JSON, you will feel the performance cost.

So gRPC is similar to other RPC systems that existed before, but with some better features. I used to work in Java, so I used EJBs before. gRPC is simple: you have a gRPC server, and you have a couple of clients. Each client has its gRPC stub to interact with the gRPC server. From your client, you just call methods on the stub, which internally makes a request to the gRPC server, receives the response, and gives the output of that method back to you. So it feels like a local method call; instead of an HTTP call, everything is handled for you.

Here are some of gRPC's features. It is a high-performance RPC system developed by Google. It uses HTTP/2 for transport, and it uses protocol buffers as the message format. Where typical REST APIs would use JSON for the request and response, it uses protocol buffers, a binary message format. It supports different communication modes: unary, client-side streaming, server-side streaming, and bidirectional streaming. And you can add middleware layers. As we discussed, it can be difficult to add an authentication layer to your calls, and there's inbuilt functionality to add an authentication system on top of gRPC. If you want to add additional middleware layers, like logging or anything else, you can do that as well.

So, protocol buffers. It's a mechanism for serializing structured data, similar to JSON or XML: another way of representing structured data.
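To see the text-encoding overhead that a binary format avoids, here is a small stdlib-only sketch of the repeated JSON round trips in a multi-service workflow (the payload and hop count are made up for illustration):

```ruby
require "json"

# A payload handed along a workflow that touches several microservices.
payload = { "id" => 1, "first_name" => "Ada", "email" => "ada@example.com" }

# Each hop marshals the payload to text on the way out and parses it
# back on the way in, so the serialization cost multiplies with the
# length of the call chain.
hops = 5
result = payload
hops.times do
  wire   = JSON.generate(result) # marshal to text
  result = JSON.parse(wire)      # unmarshal from text
end
```

With protocol buffers, each hop instead encodes to a compact binary wire format from generated message classes, which is where the performance claim comes from.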
So as I said, it follows a binary message format, and it's used to describe both service interfaces and the message formats. If you used SOAP before, there was a similar thing called WSDL: we used to describe the services and the request and response messages, everything, in the WSDL file. Similarly, here we have a proto file, and it's simpler than WSDL. Here I defined a service, UserData, which has a couple of endpoints: createUser, which accepts the CreateUserRequest and returns the User, and similarly getUser, which accepts the GetUserRequest and returns the User. What are the parameters in my CreateUserRequest? Everything is defined here: it has a first name, last name, and email, all strings. Similarly, for the GetUserRequest, I can pass either the ID or the email. And the response, User, contains the ID, which is an integer, plus string first name, last name, and email. It's a simple file, and all my service-related details are in this single place.

Using protocol buffers has some advantages, like better performance from using the binary format rather than text-based encoding. And it's easy to read and understand the service and message definitions, so it's better than WSDL, and in some ways better than API documentation itself: whatever your service accepts, all its inputs and outputs, everything is defined in a single place. And this is the file used to generate the service as well as the stubs. One of the good things with protocol buffers is that you can generate the stub classes programmatically; currently I think some 10-plus languages are supported. So you don't have to worry about stub generation on the client side for each of your languages.

So, benefits of gRPC. As I mentioned earlier, the proto file acts as service documentation itself. For any developer, reading the proto file is better than going through the documentation.
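The proto file described above would look roughly like this (the package name and exact field numbering are reconstructed from the talk, so treat them as approximate):

```proto
syntax = "proto3";

package userdata;

service UserData {
  rpc CreateUser (CreateUserRequest) returns (User);
  rpc GetUser (GetUserRequest) returns (User);
}

message CreateUserRequest {
  string first_name = 1;
  string last_name = 2;
  string email = 3;
}

message GetUserRequest {
  int32 id = 1;    // pass either the ID...
  string email = 2; // ...or the email
}

message User {
  int32 id = 1;
  string first_name = 2;
  string last_name = 3;
  string email = 4;
}
```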
There's also automatic code generation for stubs using the proto compiler. Each language has its compiler to read the proto file and generate the stubs for that language. For example, if I run the Ruby compiler, it generates the classes for me; if I run the Go compiler, it creates the structs for me. And it provides better performance by using two things: HTTP/2 for the transport, and protocol buffers, the binary format, for the serialization.

Writing a gRPC service is a simple four-step process. First, you start with the service definition in a proto file. Then you generate the client and server code using the proto compilers, which gives you some base classes. You extend the base classes and implement the server. And to use it on the client side, you call the methods on the generated stub; you configure the URL for the gRPC server, and you are good to go.

So let's see a demo. We'll start with the proto file. As I mentioned earlier, it's the same proto file I showed before. I'm using proto3 syntax, I defined the package, and I mentioned the version for this service. We have a service, UserData, with two endpoints, createUser and getUser, and the same request and response messages we defined. When I run the compiler, it generates these two files for me: this one is the stub, which I can use in my client, and this one is the service. And for the request and response, it created the model classes for me.

I used a gem called gruf. This is an existing Rails API, and we added gRPC on top of it. I extended the gruf base controller and bound it to the service class that was generated from the proto file. I defined these two methods, create_user and get_user; create_user basically creates an ActiveRecord record whenever a new user is created, and if there is any error, it raises the error messages.
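The controller I'm describing might look roughly like this. This is a sketch from memory of gruf's conventions, not the exact demo code; the module names, `bind`, `request.message`, and `fail!` arguments are all assumptions that depend on your gruf version and proto package:

```ruby
require "gruf"

class UsersController < Gruf::Controllers::Base
  # Bind to the service class generated from the proto file
  # (the module name depends on the proto package).
  bind ::Userdata::UserData::Service

  def create_user
    msg = request.message  # the deserialized CreateUserRequest
    user = User.create!(first_name: msg.first_name,
                        last_name:  msg.last_name,
                        email:      msg.email)
    Userdata::User.new(id: user.id, first_name: user.first_name,
                       last_name: user.last_name, email: user.email)
  rescue ActiveRecord::RecordInvalid => e
    # Errors raised here travel back to the client, like the
    # "email has already been taken" message in the demo.
    fail!(:invalid_argument, :user_invalid, e.message)
  end
end
```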
get_user is simple: if the request has an email, it uses find_by email, and if it receives the ID, it does a find, and returns the response. Simple stuff, similar to our application controllers.

This is a sample Ruby client. Using the same proto file, I generated the stubs for my Ruby client. In the client, all I have to do is require grpc, require the generated PB files, connect to the gRPC server, construct the request, and call the methods on the stub. So all I have to write is this one file; I don't have to write all the other files, because they are generated automatically. The same goes for Golang: the same proto file is used to generate the Go code. In the client, now that gRPC has generated that code for me, all I have to write is the same thing: I set up a connection to the gRPC server, construct the request, and call the method on the stub. Without gRPC, I would have to implement the same stub code in Ruby as well as in Golang, but gRPC saves that time for us.

So we start the gRPC server. I have two clients here, one in Ruby and one in Golang. Option one hits the createUser endpoint, and option two hits the getUser endpoint. The first time, you can see it returns the response properly, because it created the record over there. The second time, it throws the error directly back here; you can see "email has already been taken". So errors raised in the Rails service come directly back to the client. Similarly, to consume the same service in Golang, I call it for this email ID, it receives that user object back, and I'm able to read it.

Similarly, if I want to add a new endpoint to the proto file, all I have to do is add another RPC endpoint here, and then regenerate the stubs in each of the clients, but only if I need to. Let's say in my Go client I want to use the new endpoint, but in my Ruby client I don't want to use it.
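Stepping back, the minimal Ruby client described a moment ago looks roughly like this. The generated file and module names here are assumptions; they depend on the proto package and what the compiler emits (the `:this_channel_is_insecure` flag is the grpc gem's way of opening an unencrypted channel for local demos):

```ruby
require "grpc"
require_relative "userdata_services_pb"  # generated by the proto compiler

# Connect to the gRPC server over an insecure local channel.
stub = Userdata::UserData::Stub.new("localhost:50051",
                                    :this_channel_is_insecure)

# Construct the request and call the method on the stub;
# it feels like a local method call.
req = Userdata::CreateUserRequest.new(first_name: "Ada",
                                      last_name:  "Lovelace",
                                      email:      "ada@example.com")
user = stub.create_user(req)
puts user.id
```

The Go client is the same shape: open a connection, build the request struct, call the method on the generated stub.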
Since I'm not in need of that newly created endpoint there, I can keep using the existing stubs. But in my Go client, I can generate stubs from the updated, next version of the proto file and still communicate with the server. As long as I don't change the existing message formats, clients can keep working with the older proto versions.

So let's try adding one more endpoint. We're going to create a new endpoint, deleteUser, which accepts the DeleteUserRequest and returns the DeleteResponse. I'm going to define these two messages first: DeleteUserRequest contains the ID alone, and DeleteResponse contains a Boolean flag for whether it's been deleted or not. So what do I have to do? For the service side, I generate the service files again, and in that service file, the new endpoint is added. In the controller, I add that method; the response object is also already created. Now my server-side code is done: I try to destroy the record based on the ID from the incoming request, and if it's deleted, I send status true; otherwise, I send status false. Then I restart my server.

Now I'm going to regenerate with the new proto file changes in the Ruby client alone. First, I copy the proto file, and I generate the stubs again for the Ruby client. The new endpoint is now added on my client side as well. So I can go to my client and directly call those methods on the stubs. I have to construct the request first, so I construct the delete request, and I call that deleteUser method. Let's try this. This has to be converted into a number; I can fix that later. Okay, so it throws the error that it couldn't find a user record with that ID. Let's try some other ID. The response says it's deleted.

For normal web clients, you can use it, but there are still proxies involved to convert it into JSON or other easily consumable formats.
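The deleteUser addition just described is, in proto terms, roughly this fragment added to the existing service and file (field numbers are illustrative):

```proto
// Added inside the existing UserData service:
// rpc DeleteUser (DeleteUserRequest) returns (DeleteResponse);

message DeleteUserRequest {
  int32 id = 1;
}

message DeleteResponse {
  bool status = 1;  // whether the record was actually destroyed
}
```

After regenerating, the Ruby client just constructs a DeleteUserRequest and calls delete_user on the stub, the same pattern as the other endpoints.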
So it's more suitable for your internal services, as well as your mobile clients, because there is support for Objective-C and the Android Java version as well. You can use it for internal services and mobile clients; for web clients, I would still go with JSON services.

There are also not many mature libraries in Ruby yet. Even when I looked, I found only one, BigCommerce's gruf; whereas if you want to build HTTP/JSON endpoints, there are a bunch of libraries available in every language, right? It's still evolving. As for the bad things: we are still exploring, still assessing the performance and other aspects. We have our REST endpoints, and we are slowly adding gRPC alongside them; through live usage, we will learn what the disadvantages are.

On versioning, there are different ways we are thinking about it. One is adding the version to the package itself, or keeping different proto files for different versions if it's a drastic change. For minor versions, we are planning to put version 1.1, 1.2 at the package level itself, so in the same client I can use different versions of the gRPC services.

Yeah, so for your internal services, a developer is your consumer, right? So I would rather read the service definition file and the message formats than go through documentation to understand it. Maybe for the purpose of the services and their usage, you may still need that API documentation, for sure. But for what is my input, what is my output, and what methods am I exposing, I would rather go with the definition file than the API documentation.

No, but consider the number of lines generated in these stub classes, right? Once I write the service definition file, I can directly write my service and client; it created the base classes or base structs I needed.

So for the gRPC server itself, it's production ready; it's already in use.
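On the versioning plan mentioned above, encoding a minor version at the package level might look roughly like this fragment (the package naming scheme is our own convention under consideration, not a gRPC requirement; the messages are defined elsewhere in the file):

```proto
syntax = "proto3";

// Minor versions live in the package name, so one client can load
// stubs for several versions side by side; a drastic change would
// get its own separate proto file instead.
package userdata.v1;

service UserData {
  rpc GetUser (GetUserRequest) returns (User);
}
```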
As for gruf, that's for Rails: if you want to add gRPC on top of an existing Rails API that serves REST, gruf is easy to add, and that's why we are using it. But is it production ready? That's debatable; I don't think gruf is fully production ready yet. You can manually run your server with the gRPC gem directly instead, but at the moment you have to write some of that basic code yourself in Ruby.