Hi everyone, welcome to this KubeCon session on gRPC Communication Patterns. I'm Kasun Indrasiri and my colleague Danesh Kuruppu will join me in conducting this session. In this session we are going to take a closer look at some of the most commonly used gRPC communication patterns, and at how they are implemented internally. I'm sure most of you have heard of gRPC or even use gRPC in production, but if you are new to gRPC, it's a modern inter-process communication technology that allows you to build distributed applications: you can design a microservices-based application using gRPC, and remote consumers can consume it over the network as easily as making a local function call. gRPC is based on a contract-first development approach, so you come up with your service definition using protocol buffers; that's where you define all the business operations of your application. Then you can generate server-side and client-side stub code, so that you can establish the communication over the gRPC channel. Internally, gRPC uses binary messaging with protocol buffers, running on top of HTTP/2. So gRPC is an efficient, strongly typed, polyglot communication protocol that lets you build request-response style synchronous communication, and you can use duplex streaming messaging in gRPC as well. If you look at the applications of gRPC, it is often used alongside other technologies such as RESTful services, GraphQL, and even technologies such as Kafka and NATS in the event-driven communication space. Most internal service communication can be built using gRPC, while it is most common to use REST and GraphQL for external-facing communication. However, it is also possible to expose a gRPC service to your consumers directly using an API gateway. Now let's have a closer look at the RPC flow of gRPC. Here we consider the same application that we discussed earlier.
So here we have the ProductInfo service and the consumer application. Let's have a closer look at how messaging, or remote method invocation, works in this particular use case. We now have the stub code generated on the client side and the server side. From the client application I invoke the remote method; in this case we simply invoke the stub's getProduct method from the client application code. When you invoke that, the stub is responsible for encoding the message and building the outgoing protocol buffer message. In this case we create the message headers: we are sending a POST request to the service application, and as the :path we have the name of the service and the remote method we invoke, and we also have headers such as content-type as part of the message headers. As the message payload we have the encoded message; this is where we use protocol buffers to encode the language-specific data structures into the protocol buffer wire format. Then the message is sent over the HTTP/2 connection, and on the server side the server application looks at the path value and finds the corresponding stub. The message is handed over to that stub, and the stub unpacks the message, converts it into the language-specific data structures, and invokes the actual implementation of the remote function. So the remote function is invoked at this point, and then the response is sent back from the gRPC service. This response follows the same path as the request. Now, if you look at how these things are implemented at the HTTP/2 level: suppose you have a consumer client application and a server application. The client creates a gRPC channel, which means behind the scenes it creates an HTTP/2 connection. Once you have the gRPC channel, you can send one or more RPC requests over the same channel.
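To make that header construction concrete, here is a minimal Python sketch of the outgoing headers described above. Only the ProductInfo service and getProduct method names come from the example; the `ecommerce` package name and the authority value are assumptions for illustration.

```python
# Sketch (not the talk's code) of the HTTP/2 headers a gRPC stub sends
# for the getProduct call. The "ecommerce" package and the authority
# are assumed for illustration.
def build_request_headers(package, service, method, authority):
    """Build the pseudo-headers and headers for an outgoing gRPC call."""
    return {
        ":method": "POST",                          # every gRPC call is an HTTP/2 POST
        ":scheme": "http",
        ":path": f"/{package}.{service}/{method}",  # service name + remote method
        ":authority": authority,
        "content-type": "application/grpc",         # required content-type prefix
        "te": "trailers",                           # gRPC relies on HTTP/2 trailers
    }

headers = build_request_headers("ecommerce", "ProductInfo", "getProduct",
                                "localhost:50051")
print(headers[":path"])  # /ecommerce.ProductInfo/getProduct
```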
In this case these different RPC calls are mapped to streams in HTTP/2. Here we have, for example, RPC calls running on stream 3 and stream 4, and the same applies to the response path as well. When it comes to message frames, we are sending HEADERS and DATA frames: HEADERS is where all the gRPC headers are sent, and DATA is where you send the business-specific payload of the RPC request. Now let's have a closer look at request and response messages in gRPC. If you look at the request message, we have the request headers and a message frame known as the length-prefixed message. This is where you send your business payload; it can be a single message or multiple messages based on the communication pattern you are going to use, and we will be exploring length-prefixed messages further in the upcoming slides. At the end of the request you have to send the end-of-stream flag. This is another DATA frame, similar to the length-prefixed message but empty, and it marks the end of the request flow. If you look at the response message, you have the response headers and length-prefixed messages, the same as in the request path. To mark the end of the response message we use trailing headers; unlike the request path, here we use a HEADERS frame. It contains all the trailing headers, which mark the end of the stream. Now let's try to understand some of the communication patterns and dive into the internal implementation of each pattern. Let's start off with unary, or simple, RPC. As you know, a simple RPC is all about sending a single request to the service and expecting a single response from the service. If you look at the implementation of this, when the client sends a single RPC call it sends a set of headers, one length-prefixed message, and the end-of-stream flag as an empty DATA frame.
In the response path you have the response headers, a single length-prefixed message, and a trailing header. So this is very straightforward. Now if you look at the server streaming scenario: here we have a single request, one RPC invocation, but you get multiple messages as the response. In this case the request path is very similar to simple RPC, but in the response path you can see we are sending headers and multiple length-prefixed messages followed by a trailing header. Client streaming is the same thing reversed: as part of the request we are sending multiple request messages, so we have multiple length-prefixed messages followed by an end-of-stream flag, and as the response you get a single length-prefixed message along with headers and trailers. And if you look at more complicated scenarios such as bidirectional streaming RPC: in bidirectional RPC we send a stream of requests and a stream of responses. You can understand this further by looking at this example. Here we are sending a series of order requests to be processed by the service, and once those orders are processed the server sends back a stream of responses. If you look at the implementation of this, again you can see there are multiple length-prefixed messages; you have headers and an end-of-stream flag, and in the response path you also see headers and multiple length-prefixed messages. When it comes to the implementation on both the client and service side, you can complete your business logic by looking at the end-of-stream flags in both the request path and the response path. Now let's dive deep into request and response headers, and Danesh will take you through the rest of the session.

Thank you, Kasun. In the previous slides we talked about how messages flow in different messaging patterns. In this section we are going to look into the request and response messages. First, let's look at headers.
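The frame sequences for these four patterns can be summarized in a small sketch (Python is used only as notation here; `LPM` stands for length-prefixed message, and the message counts for the streaming cases are arbitrary examples):

```python
# Frame sequences for the four gRPC communication patterns described
# above. "LPM" = length-prefixed message, "EOS" = end-of-stream marker;
# a trailing HEADERS frame ends every response.
def frames(n_request_msgs, n_response_msgs):
    request = ["HEADERS"] + ["LPM"] * n_request_msgs + ["EOS"]
    response = ["HEADERS"] + ["LPM"] * n_response_msgs + ["TRAILERS"]
    return request, response

unary            = frames(1, 1)
server_streaming = frames(1, 3)  # 3 response messages, for example
client_streaming = frames(3, 1)  # 3 request messages, for example
bidi_streaming   = frames(3, 3)

print(unary[0])  # ['HEADERS', 'LPM', 'EOS']
```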
When we talk about headers in gRPC, there are two types: one is called call-definition headers, and the other type is custom metadata. Call-definition headers are predefined headers supported by HTTP/2. If you look at the table, there are a bunch of call-definition headers; some are prefixed with a colon, and those are called reserved headers. One of those headers is :method; in gRPC the HTTP method is always POST. Another is :path, which contains the service name and the remote method, and there are some others such as :authority and :scheme as well. The other type of header is custom metadata: arbitrary key-value pairs defined at the application level. We normally use metadata to share information about the gRPC call, for example authentication headers, etc. You can see there are a couple of headers prefixed with grpc-; those headers are defined by the gRPC core implementation, such as grpc-timeout and grpc-encoding. So if you are defining custom headers, you need to avoid this prefix in your custom metadata. Also, the content-type needs to be prefixed with application/grpc; if it is not, the call results in an error. The next thing we need to talk about is length-prefixed messages. By definition, message framing is an approach we use to structure information such that the intended audience can easily extract it. In gRPC we use a message framing technique called length-prefix framing, an approach that writes the size of the message before writing the message itself. If you look at the diagram on the right side: we generate the encoded byte array, compute its size, and prepend that size. In gRPC, four bytes are allocated to carry the size of the message, and the size is written as a big-endian integer.
You can also see one byte in front of those four size bytes, which represents the compression flag. If it is zero, the message is not compressed; if it is one, the message is compressed, and the compression algorithm is defined and passed in the request headers. The other thing we need to talk about is how gRPC encodes the binary message. By default gRPC uses protocol buffers to encode the message, and protocol buffers encode the message based on the structure defined in the service contract. If you look at the definition, you can see a protobuf definition: in our example we have an OrderID message, and it has one field called id. From that message the binary payload is generated. If you inspect the binary you can see tag-value pairs; in our case we have one OrderID message with one field, which means we have only one tag-value pair, and each tag-value pair maps to a message field. Going deeper into the tag-value pair: the tag is derived from the field index, which is defined in the service contract, and the wire type. The wire type is directly mapped to the field type; in our case it is a string type, and strings map to length-delimited, which means the wire type value is 2. From those two we derive the tag, and the value of the field is encoded using different techniques based on the wire type. In this case it is a string, which means we use UTF-8 encoding to encode the value; if it were an integer we might use varint encoding, depending on the type. The next major thing we need to talk about when it comes to gRPC is error handling. Errors are a first-class concept in gRPC: for every RPC call, the response will be either a payload message or an error. The error includes a status code which is predefined and unified across all languages.
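Putting the framing and encoding together, here is a minimal Python sketch, under the simplifying assumption of single-byte varints (values under 128), of a length-prefixed frame carrying the encoded OrderID message:

```python
import struct

# Length-prefix framing as described above: a 1-byte compression flag,
# a 4-byte big-endian message size, then the encoded message itself.
def frame_message(payload: bytes, compressed: bool = False) -> bytes:
    flag = b"\x01" if compressed else b"\x00"
    return flag + struct.pack(">I", len(payload)) + payload

# Protobuf tag for a string field: tag = (field number << 3) | wire type,
# where wire type 2 means length-delimited (strings, bytes, messages).
# Single-byte varints are assumed here to keep the sketch short.
def encode_string_field(field_number: int, value: str) -> bytes:
    tag = (field_number << 3) | 2
    data = value.encode("utf-8")
    return bytes([tag, len(data)]) + data

payload = encode_string_field(1, "order-1")  # the OrderID message, field 1
frame = frame_message(payload)
print(frame.hex())  # 00 (no compression) + 00000009 (size) + 0a... (tag+value)
```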
It also carries a status message which describes the error, and these errors are sent as trailing headers in the response. In the first table I captured the two headers that come as trailing headers: one is grpc-status and the other is grpc-message. Let's say the request completed successfully; in that case the grpc-status will be 0, which means OK. If an error occurred on the service side, the corresponding code goes out as the grpc-status and the grpc-message describes what the error is. When it comes to error handling, there are a couple of best practices we follow. The first one is that we do not put error details in the response payload in most cases; all the error details should go through the trailing headers. There are some situations where we cannot follow this: say you are using streaming and you need to pass an error detail to the client without terminating the stream. In that case you need to add the error details to the response payload. Otherwise, in most cases you can send the errors via the trailing headers. The other thing is that on the server side, when we have an error, it's better to return the error to the calling client; unless the internal state is compromised, in most cases we can send it to the caller. The other thing I need to emphasize is the deadline. The deadline allows both clients and services to know when to abort an operation. The client is the one who initiates the call, so once the call is initiated the client sets the deadline. A deadline is normally set as an absolute time which specifies when to abort the operation. When the client initiates the call, the deadline information also goes with the request as a header, and when it reaches the service, the service first looks at the deadline value.
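Going back to the trailing headers for a moment, here is a minimal sketch of how a client can turn grpc-status and grpc-message into a success-or-error result. The status names shown come from the canonical gRPC code list; the helper names themselves are made up for illustration.

```python
# Sketch: mapping the two trailing headers into a success/error result.
# Status code 0 is OK; a few of the canonical gRPC codes are shown.
STATUS_NAMES = {0: "OK", 3: "INVALID_ARGUMENT", 4: "DEADLINE_EXCEEDED",
                5: "NOT_FOUND", 13: "INTERNAL", 14: "UNAVAILABLE"}

class GrpcError(Exception):
    def __init__(self, code, message):
        self.code, self.message = code, message
        super().__init__(f"{STATUS_NAMES.get(code, code)}: {message}")

def check_trailers(trailers):
    """Raise if the trailing headers report a non-OK grpc-status."""
    code = int(trailers.get("grpc-status", "0"))
    if code != 0:
        raise GrpcError(code, trailers.get("grpc-message", ""))

check_trailers({"grpc-status": "0"})  # success: no exception raised
try:
    check_trailers({"grpc-status": "5", "grpc-message": "product not found"})
except GrpcError as e:
    print(e)  # NOT_FOUND: product not found
```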
According to the deadline, the service decides whether to proceed with the operation or abort it and send an error to the client. If the service is calling another service, it's important to propagate the deadline information to those services as well. Interceptors are also a main concept in gRPC applications: a mechanism to execute some common logic before and after the execution of the remote function. We can apply them on both the server and client side, and depending on the messaging pattern we use, we need to use different interceptors, such as unary interceptors and streaming interceptors. The main uses of interceptors are logging, authentication, and capturing metrics; for those use cases we use interceptors. When it comes to implementing services, service versioning also plays an important role. Let's say you have a service running and you need to update it; the service should strive to remain backward compatible with old clients, and a good versioning strategy will allow us to introduce breaking changes to the gRPC service safely. In gRPC, service versioning is done using the package name: we append the version number to the package name. How it works is, as we told you earlier, a gRPC call is underneath an HTTP POST request, and the path of the request is derived from the package name, the service name, and the method name. So if you append the version to the package name, then whenever the version number changes, that creates a different request path, and old clients are not affected when we deploy both versions in the same environment. If a client needs to migrate to the new version, it needs to get the correct version of the proto definition and regenerate the stubs. The final thing we need to discuss in this session is extending service definitions.
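The versioning scheme just described comes down to the request path; a quick sketch (the package and service names here are illustrative, not from the talk):

```python
# Sketch: how a version in the protobuf package name changes the request
# path, keeping old and new clients separate. Names are illustrative.
def grpc_path(package, service, method):
    return f"/{package}.{service}/{method}"

v1 = grpc_path("ecommerce.v1", "OrderManagement", "addOrder")
v2 = grpc_path("ecommerce.v2", "OrderManagement", "addOrder")
print(v1)  # /ecommerce.v1.OrderManagement/addOrder
assert v1 != v2  # old clients keep hitting the v1 path, unaffected by v2
```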
There can be situations where, beyond these standard messaging patterns, you need to extend your service definitions and add some custom options. The protocol buffer definition provides the facility to add custom options at different levels of the contract: it can be the service level, the method level, the field level, etc. I captured a few scenarios where we may use this. To talk about one scenario: let's say we have a service that is secured using an external auth provider, and you need to carry the auth provider URL in the service contract itself. There you can define a service option that holds the auth provider URL as a string, and we can use that custom option inside our service definition. Likewise, you can add custom options at the method level and at the field level as well. Okay, that's everything we needed to cover in this session. This session mainly refers to the gRPC: Up and Running book, and all the use cases and the source code are in this GitHub repo. That's all we needed to cover, so thank you for listening.