Hello, everyone. Welcome. My name is Ivy Zhuang. I am a gRPC Java maintainer, and today, as Kevin mentioned, I will walk you through an overview of gRPC as an introduction. First, a bit of a trivia question: does anyone know what gRPC stands for? Actually, "gRPC Remote Procedure Call" is a correct answer. A remote procedure call means you can execute a function that runs in a different process, or even on a different machine over the network. But that is not the only correct answer. gRPC literally stands for anything, as you can discover in the release notes that come out every six weeks. "Golden Retriever Pancakes," named after our mascot, is also a correct answer. But unfortunately, the nice-looking "Google Remote Procedure Call" is not accepted. Okay, now let's have another view of gRPC. gRPC is general purpose. It can run on different operating systems like Android, Linux, and Windows. It can run on different architectures like x86 and ARM. And it can run in the cloud, on the web, or on your mobile devices. It is language agnostic: we now support C++, Go, Java, Python, Ruby, and C#, just to name a few. And the core part is that a client and server can communicate in a mixed-language environment. That is really the essence of gRPC. gRPC has tons of features to make your communication efficient and safe, for example streaming, high performance, security, and stats and tracing. When I think of the most common use cases of gRPC, I think of it as a very powerful modern smartphone. You can do asynchronous unary calls; that is just like email. You can do streaming calls, just like a video chat. And when you send a single message to your mom and she replies with ten messages, that is a server streaming call. gRPC is secure: we have different kinds of credentials to do authentication, just as you can use your fingerprint or Face ID to log in.
And gRPC has strong native support for observability that keeps track of your RPCs, with stats and tracing for them; that is like your smartphone keeping track of your activity, like screen time. The feature list goes on. So you can really rely on gRPC the way you rely on your powerful smartphone. These features make gRPC excellent for building microservices. Now let's have an architecture overview, so that those familiar with these technologies get a refresher before we dive deeper into more technical details. The outermost layer of the framework, closest to the application, is called the stub. The stub is a convenience that gRPC provides to users for building applications: it is quick, easy, and cheap. It is also closely related to the protobuf-generated code; we will talk more about protobuf later on. The next layer, which the stub layer is built on, is the API layer. When you are using stubs, you are indirectly using the API, but you can also use the API layer directly and get rid of stubs. That gives you some power, because some features are only exposed at the API level instead of through stubs. For example, you can do manual flow control at the API level. The channel is probably the most important concept at the API level. A channel is conceptually an endpoint that you can send messages to and receive messages from. It is not a connection, though. Instead, a channel manages multiple connections and multiplexes RPCs on them. Also worth knowing: sometimes at the API level we refer to an RPC as a "call." The next layer of the framework is the core. There are many interesting components in the core that an RPC will experience in its life cycle. The first is the name resolver. The name resolver's job is to find where the backend is and how to connect to it. It is pluggable. The next component is the load balancer.
The load balancer's job is to manage the connections here, which we call subchannels, and also to decide which RPC goes to which subchannel. The load balancer is also pluggable. Buffering and retry are also in this layer. Buffering means that when you start a request, the RPC is not necessarily sent out on the wire immediately. Instead, gRPC might queue it internally because it needs to fetch other information to put things together, and then it will send it out. Retry means that if a previous attempt failed, the RPC will automatically replay your messages on the transport layer, hoping it will succeed the second time; that increases your communication's robustness. Some of the security stuff is also in the core layer. For example, gRPC might fetch some tokens before it is able to establish communication or start an RPC. The next layer is the transport. The transport is kind of invisible to you, but it is important: it does a lot of heavy lifting to put your bytes onto the wire. gRPC has many different transports for different use cases. If you are developing on Android, you might use the OkHttp transport. If you are doing testing, you might find the in-process transport useful. And you are probably using the Netty transport. All of them are compatible with the API and core interfaces. The gRPC transport speaks the HTTP/2 protocol; that is important for making it highly efficient, and we can talk more about that later. And here, interceptors are an interesting concept. They live at the API layer, but they really give you the power to wrap your channel and your calls. Sometimes gRPC exposes its own features through interceptors; OpenCensus is one use case of that. People also use interceptors for logging purposes, et cetera.
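As a minimal sketch of the interceptor idea, assuming the grpc-java library is on the classpath, a client interceptor that logs every outgoing call might look like this:

```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.MethodDescriptor;

// A minimal logging client interceptor. It prints the full method name
// of every outgoing RPC and then delegates to the underlying channel.
public class LoggingInterceptor implements ClientInterceptor {
    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
        System.out.println("Starting call to " + method.getFullMethodName());
        return next.newCall(method, callOptions);
    }
}
```

You would typically install such an interceptor when building the channel (via the builder's `intercept(...)` method), so it wraps every call made on that channel.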
So as you can see, gRPC is really versatile in terms of the building blocks it provides for your application. We just mentioned that gRPC is built on two community standards: the transport layer is built on HTTP/2, and there is also protobuf. HTTP/2 is an IETF standard. It is derived from the earlier experimental SPDY protocol originally developed at Google. Using HTTP/2 makes gRPC compatible with the load balancers and proxies out there on the internet. HTTP/2 reduces the number of TCP connections, it is binary, and it includes header compression. All of these features make gRPC very high performance, reduce latency, and make better use of your resources. Protobuf is an open source project that does data serialization. There are two major parts to protobuf. One is the .proto file, where you write the IDL (interface definition language) as a contract between your client and server. The second part is the protoc compiler. The compiler is written in C++ and generates your code, and it has two major parts: the built-in generators and the plugins. The protoc compiler natively supports generating code for different language runtimes like C++, Java, Go, etc. Plugins are extensions to the protoc compiler that can parse and decorate your generated code. Internally, gRPC has a custom plugin; that is how gRPC turns your .proto file into the stubs that let you easily build your applications. Those library concepts are kind of dry, so let's entertain ourselves by looking at some code. In the first statement here, you see that I create a channel object by providing a target string. The target string is used by the name resolver. I also provide the credentials I need; for simplicity in a test, these are insecure credentials. Then I install an interceptor that will run for every one of your RPCs.
This is an API-level concept. Then I supply this channel to the generated code to get back a stub, and I do a blocking unary call. The blocking unary stub will block your call until you get a response. So this is quite simple. Actually, under the hood, gRPC is always asynchronous, but the stub layer gives you some sugar to do unary-style or streaming-style calls. The next example is a streaming call: this time you create an asynchronous stub, and you provide a request to send out and also a StreamObserver. With the StreamObserver, you provide a callback for gRPC to deliver messages to later on, as it receives them from the server. gRPC will call your onNext when it receives a response, onCompleted when the RPC finishes, and onError if there are exceptions. Now that we have more context, let's break down the client-side component, the managed channel, to get a better view of the gRPC architecture. As you remember from the code snippet, we just constructed a channel from a target string. This target string probably looks very familiar to you. To give you a refresher, the standard URI syntax is a scheme like https, then a colon, double slash, authority, slash, path. gRPC follows this standard; however, it has its own interpretation of what the target URI means. The scheme, for example, actually specifies the name resolver to use. Internally, gRPC keeps a map between the name resolver's name and its provider, and when you create the channel, gRPC plugs that in lazily for you. That is how gRPC makes your name resolver pluggable. Meanwhile, the authority and path parts may have different meanings depending on what the particular name resolver is.
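Putting the two calling styles together, here is a sketch in gRPC Java. It assumes the grpc-java dependency plus stubs generated from the canonical Greeter "hello world" proto (GreeterGrpc, HelloRequest, HelloReply), so it is illustrative rather than standalone:

```java
import io.grpc.Grpc;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.stub.StreamObserver;

public class ClientSketch {
    public static void main(String[] args) {
        // Build a channel from a target string; the scheme (here the
        // implicit default) selects the name resolver. Insecure
        // credentials, for testing only.
        ManagedChannel channel = Grpc.newChannelBuilder(
                "localhost:50051", InsecureChannelCredentials.create()).build();

        // Blocking unary call: the stub blocks until the response arrives.
        HelloReply reply = GreeterGrpc.newBlockingStub(channel)
                .sayHello(HelloRequest.newBuilder().setName("world").build());

        // Async-stub call: gRPC invokes the StreamObserver callbacks
        // as responses arrive from the server.
        GreeterGrpc.newStub(channel).sayHello(
                HelloRequest.newBuilder().setName("world").build(),
                new StreamObserver<HelloReply>() {
                    @Override public void onNext(HelloReply value) { /* a response arrived */ }
                    @Override public void onError(Throwable t) { /* the RPC failed */ }
                    @Override public void onCompleted() { /* the RPC finished */ }
                });
    }
}
```

The same StreamObserver shape carries over to server-streaming methods, where onNext simply fires once per response message.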
For example, with the DNS name resolver, which is the default one, the authority is a DNS server that you communicate with to resolve the host names that you can connect to as backends. And when, for example, the scheme is xds, it is completely different: the authority part can be the control plane. To gRPC, xDS is a gigantic name resolver and load balancer. The next component is the result of name resolution. Traditionally, when you use a URI, you are trying to discover resources, right? In gRPC's view, the target actually resolves to addresses and a service config. The service config can be a very powerful JSON map. Maybe the most important part of the service config is that it specifies which load balancer to use; that is also how gRPC makes the load balancer pluggable. There is an internal map between the name of the load balancer and its instance, and when we receive a service config specifying which load balancer to use, gRPC plugs it in to do client-side load balancing. The load balancer manages connections, which we call subchannels here. To couple the different systems together, the load balancer returns a picker to the channel. Then, when there is an RPC, the channel decides which subchannel to use by calling picker.pick. This is, roughly, the architecture of how things happen. The subchannel is conceptually an HTTP/2 connection, but we will talk more about that. The server side, compared to the client side, is simpler. There are two types of sockets. The listening socket is always waiting for connections, and once one is accepted, it quickly hands over and creates another socket to do the real connection. This connection is what the client-side load balancer creates and manages subchannels to. So, we mentioned that gRPC is built on HTTP/2. That is important.
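To make the picker idea concrete, here is a toy round-robin picker. Plain strings stand in for real subchannels, and the names are illustrative, not gRPC's actual LoadBalancer API:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration of the picker concept: the load balancer hands the
// channel a picker, and for each RPC the channel asks the picker which
// subchannel to use.
public class RoundRobinPicker {
    private final List<String> subchannels;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinPicker(List<String> subchannels) {
        this.subchannels = subchannels;
    }

    // Called once per RPC, cycling through the subchannels in order.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), subchannels.size());
        return subchannels.get(i);
    }

    public static void main(String[] args) {
        RoundRobinPicker picker = new RoundRobinPicker(List.of("backend-1", "backend-2"));
        System.out.println(picker.pick()); // backend-1
        System.out.println(picker.pick()); // backend-2
        System.out.println(picker.pick()); // back to backend-1
    }
}
```

A pick-first policy would instead be a picker that always returns the same (first usable) subchannel, which matches the Q&A discussion at the end of the talk.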
And I think it is worthwhile to have an overview of the mapping between HTTP/2 concepts and gRPC, because it is helpful when you are running your application and debugging RPC-level failures, or when you are adopting new gRPC features. We mentioned that a gRPC channel manages a bunch of subchannels, and a subchannel is conceptually an HTTP/2 connection. When we do RPCs, we actually schedule those RPCs on those connections. And just as a subchannel maps to a connection, an RPC maps to a stream in the HTTP/2 world. In HTTP/2, a connection can have multiple streams, and those streams are delivered as frames. gRPC wraps itself onto those frames. Visually, it is like this: when the gRPC client sends to a server, it has headers and payloads. For simplicity, we only include HEADERS and DATA frames here. gRPC sends its metadata over the HTTP/2 HEADERS and CONTINUATION frames. In the header part, gRPC combines your application's headers with gRPC's own headers and sends them together. For example, the :path header in HTTP/2 corresponds to the service name and method name in gRPC. The payload is delivered in HTTP/2 DATA frames. The DATA frame has its own fields for length, flags, and payload. gRPC wraps itself into these HTTP/2 DATA frames with its own syntax: it has a compressed flag, the message length, and the message itself. To be clear, there is no strict relation between the boundaries of these two framings; gRPC handles that. Finally, when the request ends, we send a DATA frame with the END_STREAM flag set, indicating the stream is over. The server side is similar: we use HTTP/2 frames to convey the responses. One small difference on the response side is that you have response headers and trailing metadata, with the response data in between.
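The message framing just described (a compressed flag, then the message length, then the message itself, carried inside HTTP/2 DATA frames) can be sketched in plain Java:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class GrpcFraming {
    // Wraps a serialized message in the gRPC wire format carried inside
    // HTTP/2 DATA frames: a 1-byte compressed flag, a 4-byte big-endian
    // message length, then the message bytes themselves.
    public static byte[] frame(byte[] message, boolean compressed) {
        ByteBuffer buf = ByteBuffer.allocate(5 + message.length);
        buf.put((byte) (compressed ? 1 : 0)); // compressed flag
        buf.putInt(message.length);           // message length, big-endian
        buf.put(message);                     // the payload itself
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] framed = frame("hello".getBytes(StandardCharsets.UTF_8), false);
        System.out.println(framed.length); // 10: 5-byte prefix + 5-byte payload
        System.out.println(framed[0]);     // 0: uncompressed
        System.out.println(framed[4]);     // 5: low byte of the length
    }
}
```

Because this prefix carries its own length, several gRPC messages can be packed into one DATA frame, or one message can span several frames, which is why the two framings' boundaries need not line up.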
Normally we have both headers and trailers, but trailers-only responses, with metadata only, are also permitted; for example, if you want to immediately close an RPC with only a status code. In the trailing metadata we include the grpc-status, which is the one part that is a must. As for the mapping between HTTP/2's status codes and gRPC's, you can find it at the short link at the bottom. That actually concludes my overview of gRPC. There are some useful links here; I highly recommend the YouTube channel, where there is some interesting recent material. Thank you very much.

Moderator: Does anyone have any questions for Ivy?

Audience: So we have a name resolver. Let's assume the backends are mixed: there are multiple pods, and each pod has been resolved. Now, is the client going to keep channels connected to all of them? Like, is the client keeping all the server connections open on the client side, so that when any request comes in it passes through the load balancer? Or is it making only one connection?

Ivy: Oh, that really depends on your algorithm in the load balancer. In gRPC there are some built-in load balancers, like pick-first: even though there are many backends, it only uses the first usable one. If it is round-robin, it will make connections to all of the backends and then select them one by one.

Audience: All of the servers? So if there are a hundred pods running, it makes connections to all hundred pods?

Ivy: I believe so.

Audience: And it keeps the connections open?

Ivy: Yeah, if that is what your load balancer does. Sometimes it does.

Audience: So with gRPC, in highly scalable systems, there could be thousands of connections open. Is that possible?

Ivy: Yeah.