Good afternoon, everyone. Welcome to this talk, What's New in gRPC. My name is Gina Ye, and I'm a gRPC maintainer on the Go team, and I'm excited to be here to share the latest updates on gRPC. The goal of this talk is to give you an overview of the new features that we launched in gRPC recently. We have a lot to cover, so I won't go into too much detail on each individual feature. Instead, you'll see short links on most of the slides throughout my presentation that will take you to our resources and documentation, so you can learn more after this talk. And as always, you can certainly talk to us during the day if you have any questions or feedback.

All right, let's get started. The Kubernetes Gateway API is a new API that provides an extensible way to manage traffic routing in Kubernetes clusters. It is designed as a revamp of the Ingress resource and gets rid of vendor-specific annotations. The special interest group first identified the most common use cases of the Ingress resource and built those into the Kubernetes Gateway API. We introduced GRPCRoute to integrate gRPC with the Gateway API, so that you can route your gRPC traffic directly, rather than having to do it at the level of HTTP. GRPCRoute is currently in the experimental channel and will be promoted to v1beta1 soon, and it's currently supported by GCP Traffic Director and several other controllers.

The other exciting thing in this space is GAMMA: using the Gateway API to manage not just ingress, but also service mesh use cases. We are deeply engaged in the design process to ensure that gRPC proxyless service mesh will have first-class support in these APIs. So stay tuned for the ability to use the vendor-agnostic Kubernetes Gateway API to manage your gRPC proxyless service mesh. We also have a Birds of a Feather topic around service mesh.
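As a rough illustration of the GRPCRoute resource mentioned above, a minimal manifest might look something like the following. This is a sketch based on the experimental Gateway API schema; the route, gateway, and backend names here are hypothetical, so check the Gateway API documentation for the exact fields supported by your controller.

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: echo-route              # hypothetical route name
spec:
  parentRefs:
  - name: my-gateway            # hypothetical Gateway to attach to
  rules:
  - matches:
    - method:
        service: echo.EchoService   # match on the gRPC service...
        method: Echo                # ...and method, not HTTP paths
    backendRefs:
    - name: echo-backend            # hypothetical backend Service
      port: 8080
```

The point of the resource is that you match on gRPC service and method names directly, instead of encoding them as HTTP path prefixes in an HTTPRoute.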
Richard and I would be happy to have you join us if you are interested in this topic after this talk.

We have been working hard to expand our support in load balancing, and one of those efforts is custom backend metrics. This is a mechanism in the gRPC library that allows you to inject your own custom metrics at the gRPC server, and these metrics can be used with a load balancing policy. We follow the Open Request Cost Aggregation (ORCA) standard, so you can report your custom metrics from your backend in two ways. The first option is to send us the metrics when an RPC finishes. The alternative is to create a dedicated channel and periodically send the metrics out of band. If you want to learn more about the feature, follow the short link at the bottom ending with grpc-cbm, and that will take you to our developer guides with example code.

We recently added the weighted round robin load balancing policy, which was also mentioned in the previous talk, and it can be used with the custom backend metrics that we just covered a few seconds ago. It's really simple to configure your server. If you are using GCP Traffic Director, just set the custom policy to weighted round robin, with the parameters based on your needs. If you prefer to send the metrics out of band, set enable_oob_load_report to true. Additional parameters are also available for you to fine-tune the behavior of the weighted round robin policy that we provide. You can still leverage our weighted round robin policy if you're not using Traffic Director: just set the load balancing config, with the configuration in JSON format, when you're calling the Dial function in your application.

Once you configure the LB policy as weighted round robin, the next step is to send metrics from your backends. Here is the formula for how the gRPC load balancer weights a backend server using the metrics that you send to us, which include CPU utilization, QPS, EPS (errors per second), and an error penalty. Below is the example of using out-of-band reporting.
In the gRPC server code, you create a server metrics recorder with the options that fit your needs, like the minimum reporting interval, then register your recorder and start sending CPU utilization, EPS, QPS, etc. at any place you like. That's all you need to do to enable the weighted round robin policy provided by gRPC. More details can be found at the short link at the bottom ending with grpc-wrr.

The next one: randomized pick-first. We are extending the existing pick-first policy with a flag to shuffle the address order that your name resolver returns. As you may know, pick-first is a simple failover-style policy: just like its name suggests, when the name resolver returns a list of addresses, we try to connect to the first one, and then connect to the second one if the first attempt fails. Pick-first is commonly used with a DNS server shuffling the address order. In cases where your DNS server doesn't support shuffling or randomization, you can simply flip the shuffle-address-list flag in your gRPC config, and we will do that for you.

Next, we are excited to share that stateful session affinity will be available soon. It is a load balancing technique that ensures all the requests from a particular client session are routed to the same backend server. This is extremely helpful for applications that maintain state information for each session, such as shopping carts or user profiles. The most common approach to achieve it is to use cookies. When the first request is sent out, the load balancer routes it to a server based on your existing LB policy. The server returns the response back to the load balancer, which encodes the server information and sets the cookie in the response header, which is then returned to your application. If you want to send subsequent requests to the same server, you store the cookie in your application.
In the following requests, include the cookie, and the load balancer will decode the cookie to retrieve the server information and route the requests to the same backend server. Until the cookie expires, all subsequent requests carrying the cookie are routed to the same backend server. Like I mentioned before, this is extremely helpful for improving the user experience by ensuring all the requests from a particular client session are processed by the same backend server, especially for applications that need to maintain state information like shopping carts or user profiles.

Now let's take a look at how to configure stateful session affinity with the Kubernetes resource GCPSessionAffinityPolicy. In the YAML file, you set the cookie TTL in seconds and a target reference to specify which route or service you want stateful session affinity enabled on. If you want to learn more about the feature, check out the short link below, which will take you to our talk at KubeCon Europe earlier this year.

The next one I'm going to talk about is microservices observability. It was released to public preview for Go and Java last year, and we are excited to announce that it is now generally available across all the languages that gRPC supports. This is a powerful tool for gaining insights into your system's behavior. It helps you quickly identify problems and improve performance and reliability, so that you can make better decisions about how to architect and manage your system. Microservices observability provides three types of data. First of all, logs: for example, what the message payload looks like, the final status, and the error code. Secondly, it has metrics, such as how many RPCs started and how many RPCs completed over time. Last but not least, microservices observability also provides traces, which represent how long RPCs take to complete, also known as round-trip latency.
If you are using a microservices-based architecture, you should definitely enable observability to get all the benefits I just mentioned. And it's really simple: all you need to do is provide an observability config, and gRPC will send logs, metrics, and traces for you to Google Cloud Platform or any third-party service you are currently using. We built a unified plugin that integrates with any platform that supports OpenCensus metrics and traces, and this makes it easy for you to identify and troubleshoot problems regardless of the stack you're using now. Another piece of good news is that we are working on OpenTelemetry support, which was also mentioned in the earlier keynote, and we are trying really hard to get OpenTelemetry support into gRPC. OpenTelemetry is a new open source standard for observability. It will be more extensible and flexible, and many companies are involved in the design. So stay tuned for more updates on OpenTelemetry support in gRPC.

Here I want to show you an example of an observability configuration. For Cloud Logging, you can list the events you are interested in, use a star for all relevant events, or exclude certain events. To enable monitoring, all you have to do is add a Cloud Monitoring object with the value we have on the slides. For Cloud Trace, it could be overwhelming to send all of the trace data because the volume is so huge, so you can specify a sampling rate to fit your needs. In this example, 5% of the trace data is randomly selected and sent to Google Cloud Platform.

After that, you have to add a few lines to your application. In the main function, call observability.Start and pass a context to start the feature, and the gRPC library will start sending logs, metrics, and traces to Google Cloud Platform. And don't forget to call observability.End to flush out the data and clean up resources and memory before shutting down your application.
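To make the configuration just described concrete, a sketch of the observability config might look like the following. The field names follow the gRPC observability documentation as I recall it, and the project ID is a placeholder; double-check the docs for the exact schema your gRPC version expects.

```json
{
  "project_id": "your-gcp-project",
  "cloud_logging": {
    "client_rpc_events": [
      { "methods": ["*"] }
    ],
    "server_rpc_events": [
      { "methods": ["*"] }
    ]
  },
  "cloud_monitoring": {},
  "cloud_trace": {
    "sampling_rate": 0.05
  }
}
```

Here `"*"` enables logging for all RPC events, the empty `cloud_monitoring` object turns metrics on with defaults, and `sampling_rate: 0.05` corresponds to the 5% trace sampling mentioned in the talk.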
And that's all you need to enable microservices observability, which helps you gain insights into your system's performance and identify potential problems. We also have a Birds of a Feather topic on observability, with Jordan Phan leading the session, if you are interested in the topic.

So here come some other features that I'd like to take this chance to quickly go through. Firstly, custom load balancing policies: if our built-in policies don't fit your needs, you can still bring your own custom load balancing policy into gRPC. We also recently added support for the RBAC HTTP filter, for service- and method-scoped client authorization on xDS-enabled gRPC servers. Lastly, gRPC clients currently support both IPv4 and IPv6; however, most implementations don't support individual backends that have both IPv4 and IPv6 addresses, and we are working on that. In the near future, our API will support multiple addresses returned per endpoint, and Happy Eyeballs will be used to determine the address.

Next, we recently added support for Java modules, and the module name for your gRPC jar file is automatically generated for you. Another feature we have in gRPC Java is least-request load balancing. This was contributed by Spotify, and I want to encourage all of you to bring your ideas to gRPC and benefit all gRPC users across the world.

In the latest release of gRPC Python, we have removed all the external dependencies. This means gRPC Python now officially has no dependencies. Besides this, we also added support for the Mac M1 chip, so in the latest release you will find Mac universal dynamic libraries, which can run on both M1 and Intel chips.

In gRPC Core, we are introducing EventEngine, a new public interface for applications to provide custom behavior or implementations for I/O and asynchronous execution.
For example, to drive gRPC from external event loops, you can implement your own EventEngine and override the methods with the behavior you want. Just make sure you set the EventEngine factory to your own class before initializing any gRPC objects. If you're interested in this feature, check out the short link below, ending with grpc-ee.

On the C++ side, we recently upgraded the way we notify you when asynchronous RPC actions are completed, and we are excited to introduce the new callback API. The good news is that you no longer need to manage any threads or keep polling a completion queue; whenever gRPC actions are completed, your code will be called directly by the gRPC library. The callback API provides a set of methods for your application to initiate operations, and your application can also override methods like OnReadDone and OnWriteDone to get notifications when RPC actions are completed. For more information, check out the short link ending with grpc-callback.

For gRPC Go, we are introducing a new channel state called idle as the initial state, and it will transition into ready when connections are made. If there's a period of time without using gRPC, we will temporarily move the channel state back to idle and close all the open connections for you to optimize performance. When new RPCs come in, the connection will be automatically re-established for you. No additional code or effort is needed in your application. This feature has been available in Java and C-core for a while, and we recently added it to gRPC Go. You can also customize the idle timeout with the code on the slides.

So here comes my last slide: developer tooling. I want to share two tools that could be helpful during your gRPC journey.
First, grpcdebug is a command line interface that provides you with lots of debugging information, such as stats about how many RPCs have been sent or failed, the address resolution results, and the xDS configuration. The second one is grpcurl, with one C in the middle. It is also a command line tool that lets you interact with your gRPC server in a curl-like way. So check out the two GitHub repositories. These are super handy tools when you are doing development with gRPC, and if you haven't tried them out, you should definitely give them a try.

All right, that brings me to the end of my talk. Make sure to visit our grpc.io site, which has all the documentation, code snippets, and example code that we have been putting lots of effort into, and subscribe to our YouTube channel to get notified when new videos are available. You can always request a conversation with maintainers if you have any questions, and finally, join our mailing list to get the latest updates. All right, thank you for your time. And now I will hand it back to Kefa for Birds of a Feather. Were there any questions about our new features?

Thank you for telling us about the new features. We have been waiting for the callback-based API for some time, so it's great news. Is it already available, or is it coming soon?

It's already available. Someone can correct me if I'm wrong. The question is about the callback API. I believe it's generally available for everyone and ready for use, and we are actually adding a couple more APIs that will be available soon to make it even easier for you. But you can start using it today.