All right, it is 2:55, so I think we'll go ahead and get started. My name is Kevin Nielsen and I run a bunch of the gRPC team; I manage the Go, Java, and Python teams, and I work at Google, where we use gRPC a ton, and we love having all of you make gRPC a huge success. Speaking with me today are Gina and Richard. As I mentioned, this was a very, very last-minute talk, so you don't see their names in the system, but they are in the right place, they are in the slide deck, and they're prepared and ready.

Cool. So I wanted to share a little bit of an overview of gRPC, to show you the relevance and the growth, before we share a bunch of things that are new and exciting in gRPC. Here you can see the GitHub stars trending over time. It's really exciting for me every year when I look at this, when we're evaluating how we're doing: are we still relevant, are we still growing, are people still loving gRPC? This is one of the things that we look at, and it's really exciting to see that continued growth and continued engagement. All of the questions we get asked on our mailing list, everything that we get out there on GitHub, we absolutely appreciate it. And then you can see this massive number of pull requests across many of the various languages. Another cool metric I want to share is the number of weekly downloads we have in several of the different languages. (You can drink that water, it's mine.) Anyway, with Java for Maven, 4.8 million downloads every week; in Python, 19 million weekly downloads; and in Node for NPM, 9 million weekly downloads. So really, really huge usage. We're really happy with it, I'm thrilled at the success, and we have all of you to thank for that. We want this to continue to grow, and we would love any feedback or requests you have. We're always trying to make this ecosystem grow and continue to evolve, so please keep giving the great feedback that everyone has, and we look forward to more.

One of the pieces of feedback we got as we started doing more and more of these talks as COVID ended, talking to folks at both KubeCon and the gRPC conference that we run, gRPC Conf, was that a lot of our documentation, tools, and blog posts were lacking compared to other similar projects. So we made a really, really big effort at making improvements there, and over the last year and a half we've been heavily focused in these directions; it's one of the efforts I've really spearheaded within the team. You can see that even just in the last few months, since the beginning of this year, we've added six new guides to the grpc.io documentation site and three new code examples. Those were areas where we really have almost plugged all the holes and got everything that we want. Next we're going to go through all the gRFCs and figure out how to map gRFCs to user guide entries where it makes sense, and continue to close gaps here. So we're really excited about this. Another initiative that we kicked off about a year ago is trying to get more and more content on YouTube, more relevant content, more timely content, because one of the things we learned is that all of you enjoy consuming things in that way, as do I.

Again, I want to invite all of you to come and join us in Sunnyvale. You can see the Google Cloud headquarters here.
There's a really nice event space there that is set up for doing events like this. We're going to have three tracks, hopefully some code labs, and this will be a one-day event in Sunnyvale, California. We would love to have all of you there. The call for papers opened up just this week, I think it was two days ago, and the link is on there, so you can see that. We would love to have you make a submission and would love to hear you talk. We're hoping that a lot of the content there, or a majority of the content, is actually from the community rather than from the maintainers; we'd like to give this to the community and make it a community effort and a community-driven event.

One of the things I'm really excited to announce is that we are heavily investing in Rust and want to make it one of our supported languages within gRPC. This is something that we heard loud and clear in Detroit, in Amsterdam, in Chicago, and also in Sunnyvale at gRPC Conf. Every single time, the most frequently asked question we've had across all those events where we're reaching out to you has been: will you be adding support for Rust? Today, Tonic has a great gRPC implementation in Rust that has great adoption. We're working very closely with Tonic to see what we can do together and what makes sense for the future, but it is a decision we've made to come up with an xDS-compliant and fully compliant gRPC implementation in Rust. So this is a pretty big deal, and we'll have a lot more to announce at gRPC Conf later this year. And absolutely, if this is an area that's interesting to you, we would love your help and support making this a big success. We are looking for a handful of people who want to give early feedback, be part of the early design process and some of the decisions we're making, go over designs, things like that, and help be part of the community making Rust happen. So if you're interested, it would be a huge gift to us. There's a QR code here, and a link; please fill out the form and let us know how we can get you engaged and what you're doing with Rust. It's a few simple questions, and we would absolutely love your help on that.

Finally, I just wanted to share a couple of links that are out there. There's our main website, where you can find all the information about gRPC. There's a YouTube channel that I know many of you use; we're just starting to revamp that and add new content, and we're going to continue to add a lot of short five-minute videos as we add features, so I want to encourage everyone to check that out. We've got a Twitter channel, or X, whichever you prefer. And there's a mailing list, which is really our main forum for chat and communication with the maintainers, so it's a great place to get support for issues that you have, ask your questions, and talk directly with the team. And then finally, we have something we call Meet the Maintainers, where, if you would like to come and do a one-hour meet-and-greet with someone from the gRPC team, it's something we love to do: we hear about how you're using gRPC, what you would like to see from it, where gRPC is doing great so we continue to do that, and where it's not doing so great so we can fix those problems. So absolutely sign up for one of the Meet the Maintainers sessions; you can schedule within the calendar there, and you'll meet with someone like Richard, Gina, or myself.
So with that, I'm going to hand it off to Richard and Gina to jump into more of the features. My name is Kevin, and again, we'll be around after the talk. If there is anything you want to share, we would love to help all of you with your applications or any questions, any way we can, and we really appreciate all the engagement we've had from the community. With that, I'll hand it off to Richard.

Thank you, Kevin. All right. Hi, everyone. I am Richard Belleville. I am the tech lead for gRPC Python; I do a bunch of dev work for that, as well as Kubernetes integrations and service mesh, if you saw the last talk relevant to that. The first thing that I want to talk about today is developer tooling: just a few tools that will help you out on your gRPC journey. grpcdebug is a command line interface. It provides you with a range of debug information, such as stats about how many RPCs have been sent or have failed, as well as address resolution results and xDS configuration for service mesh. So reach for this tool whenever you need to troubleshoot gRPC. Then there's grpcurl. Similar to the curl CLI tool that we all know and love, grpcurl enables you to send RPCs from the CLI, either with or without reflection. grpcurl does a great job of letting you inspect the types within your API, so it provides this really satisfying dev loop of querying what an API looks like and then manually calling it based on what you just learned from grpcurl. Next up, a tool that I'm sure many of you already know and love: Postman now has full support for gRPC, allowing you to make RPCs from a GUI, including full support for streaming RPCs. It is packed full of features that make it work great with gRPC. For more details, hop on over to the Postman website or check out the excellent talk that Postman did on the topic at gRPC Conf 2023 on YouTube.

Moving on to something close to home at KubeCon, we've got the Gateway API. If you saw the talk at 2 p.m., this will be a repeat for you. The Kubernetes Gateway API is a recent API that provides a more extensible way to manage traffic routing in Kubernetes clusters. It is designed as a revamp of the Ingress resource to get rid of vendor-specific implementations and annotations. The special interest group behind the Gateway API has identified the most common use cases for annotations on Ingress resources and built them directly into the Gateway API. We have worked with that special interest group to introduce a new resource within the Gateway API called GRPCRoute, so that you can more idiomatically route gRPC traffic rather than routing at the level of HTTP. GRPCRoute is moving from experimental to standard this April. It's currently supported by GCP Traffic Director and a bunch of other controllers. The other really exciting thing in this space is GAMMA: using the Gateway APIs to manage not just ingress, but also service mesh use cases. We have been deeply engaged in the design process there to ensure that gRPC proxyless service mesh has first-class support in the APIs. Now you have the ability to use vendor-agnostic Kubernetes resources to manage your gRPC proxyless service mesh, or whatever service mesh you'd like to use gRPC with.

We are also excited to share that stateful session affinity support is now available in gRPC C++. It is a load balancing technique that ensures all requests from a particular client session are routed to the same backend server.
This is useful for applications that maintain per-session state, such as shopping carts, user profiles, or game sessions. gRPC implements stateful session affinity using cookies. When the first request is sent out, the gRPC client xDS stack routes it to a server as normal, based on the configured LB policies such as round robin or pick first. In this example, request one happens to go to server two. Server two encodes its identity into a cookie and populates the Set-Cookie header in the response, and that cookie defines the session. All subsequent requests in this session need to be populated with this cookie, and the gRPC routing stack will ensure that all requests with that cookie get to server two. So here, the client wants to send request two, also in the same session as request one. It populates request two with the cookie returned in response one, and the gRPC xDS stack routes that to server two based on the cookie. Until cookie expiration, or until server two goes down, all requests with this cookie are routed to server two. As a result, you're guaranteed to always hit a warm cache for that session, significantly speeding up your application. That's a big win for latency-critical applications. And with that, I'll hand it over to Gina.

Thank you, Richard. All right. Hello, everyone. My name is Gina Yeh, and I'm a TLM, a tech lead manager, at Google, leading the gRPC Java and Go teams. In the previous slides, Richard talked about what stateful session affinity is and how it works in general. Now let's take a look at how to enable it on Traffic Director. We have introduced a custom resource called GCPSessionAffinityPolicy. In the YAML file, you set the cookie TTL time in seconds, and the session cookie will expire after the time you provide here. Before it expires, requests with the session cookie are guaranteed to be sent to the same backend. Then you set the target reference to specify which route or service you want to enable stateful session affinity on. And that's all you need to do. If you're interested in this feature or want to learn more details about it, we have a short link at the bottom that will take you to our talk about stateful session affinity at KubeCon last year.

Another new gRPC feature is custom backend metrics for load balancing. This is a mechanism in the gRPC library that allows you to inject your custom metrics at your gRPC server, and these metrics can be used for load balancing. We follow the Open Request Cost Aggregation (ORCA) standard, and you can report your custom metrics in two ways. The first option is that your server attaches the metrics in the trailing metadata when the RPC finishes. The other option is to periodically send the metrics out of band. Custom backend metrics are available in production now, and if you want to learn more details, you can always check the short link that I have below, which will take you to our developer guide, which also includes example code in multiple languages.

We recently added support for the weighted round robin load balancing policy, and it can be used with the custom backend metrics that I just covered in the previous slides. It's really simple to configure if you are using GCP Traffic Director: you just set the custom policy to weighted round robin, with parameters based on your use cases. If you prefer to send the metrics out of band, you set enable OOB load report to true, and we also have additional parameters for you to fine-tune the behavior to match your use cases.
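To make that first reporting option a bit more concrete, here is a minimal Go sketch of a server recording a custom per-call metric that gets attached to the trailing metadata. It is based on the google.golang.org/grpc/orca package; the specific names used (CallMetricsServerOption, CallMetricsRecorderFromContext, SetRequestCost) are assumptions rather than code shown in the talk.

```go
// Hedged sketch of the per-RPC (trailing metadata) reporting option, based on
// the google.golang.org/grpc/orca package. The option and method names below
// are assumptions, not code shown in the talk.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/orca"
)

// recordCallMetrics is a unary interceptor that records an application-defined
// cost for every call. In a real server you would more likely call the
// recorder from individual handlers with real measurements.
func recordCallMetrics(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
	resp, err := handler(ctx, req)
	if r := orca.CallMetricsRecorderFromContext(ctx); r != nil {
		r.SetRequestCost("db_queries", 3) // hypothetical per-call cost metric
	}
	return resp, err
}

func main() {
	s := grpc.NewServer(
		// Enables per-call metric recording; the recorded values are sent back
		// to the client in the RPC trailers (nil: no out-of-band provider).
		orca.CallMetricsServerOption(nil),
		grpc.ChainUnaryInterceptor(recordCallMetrics),
	)
	// Register your application services on s here.

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen failed: %v", err)
	}
	log.Fatal(s.Serve(lis))
}
```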
You can also use our weighted round robin implementation even if you are not using Traffic Director. Here we have example code in Go: you just set the load balancing config, with the configuration in JSON format, when you call the Dial function from your Go application. Once you configure the LB policy as weighted round robin, the next step is to send the metrics from your backend. At the top here is a formula for how the gRPC load balancer weights a backend server using the metrics that you send to us, which include the CPU utilization, the QPS, the EPS, and the error penalty. Below is an example using the out-of-band reporting: in the gRPC Go server code, you create a server metrics recorder with the options that fit your use cases, for example the minimum reporting interval, and then you register your recorder and start sending the metrics, like the CPU utilization, QPS, EPS, etc. And that's all you need to do to enable the weighted round robin provided by gRPC; as always, you can find more details from the short link that I have on the slides.

So next, randomized pick first. We are extending the existing pick first policy with a new flag to shuffle the address order that your name resolver returns. As you may know, pick first is a very straightforward LB policy, just like its name: when the name resolver returns a list of addresses, we try to connect to the first one, and then connect to the second one if the first attempt fails. Pick first is commonly used with the DNS server shuffling the address order, and in cases where the DNS server doesn't support shuffling or randomization, you can simply flip the shuffle address list flag in your gRPC config and we will shuffle the order for you.

And we are excited to announce that gRPC is adding support for OpenTelemetry. This is a powerful tool for you to gain insights into your system's behavior, and it helps you quickly troubleshoot problems and improve the performance and reliability of your gRPC applications, so that you can make better decisions about how to architect or manage your system. From the 1.61 release, you can get these metrics to help you analyze your RPC latency, QPS, error rate, or even payload sizes, and we are adding more metrics and extending the support to other languages. Also, the OpenTelemetry tracing design is almost complete; it's still in review, but almost done, so stay tuned for more updates on that.

Here I want to show you how you can integrate the OpenTelemetry metrics into your gRPC application. You will need to add a few lines of code to your application; here we are showing the code snippets in C++. In the main function, create an OpenTelemetry meter provider and add a Prometheus exporter. Then use the OpenTelemetry plugin builder to set the meter provider that you just created, and register the plugin by calling BuildAndRegisterGlobal. After the registration, all the gRPC operations performed will be monitored, and the stats will be reported through the configured Prometheus exporter.
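As a rough companion to the Go dial-time example described above, here is a minimal sketch of enabling weighted round robin through a JSON service config. The policy and field names ("weighted_round_robin", "enableOobLoadReport") follow the public weighted round robin design and are assumptions, not the exact slide code.

```go
// Hedged sketch: enabling weighted round robin on a Go client via a JSON
// service config passed at dial time. The policy and field names follow the
// public weighted round robin design and are assumptions, not the slide code.
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Blank import so the weighted round robin balancer is linked into the
	// binary (assumption about where the policy is registered).
	_ "google.golang.org/grpc/balancer/weightedroundrobin"
)

func main() {
	serviceConfig := `{
	  "loadBalancingConfig": [
	    {"weighted_round_robin": {"enableOobLoadReport": true}}
	  ]
	}`
	conn, err := grpc.Dial(
		"dns:///backend.example.com:50051", // hypothetical target
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(serviceConfig),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
	// Create stubs on conn and issue RPCs as usual.
}
```

To my understanding, the randomized pick first flag described above is requested the same way, with a config along the lines of {"loadBalancingConfig": [{"pick_first": {"shuffleAddressList": true}}]}.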
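And here is a hedged sketch of the out-of-band reporting path for a gRPC Go server, again based on the google.golang.org/grpc/orca package; the recorder and registration names (NewServerMetricsRecorder, Register, ServiceOptions) are assumptions about that API, and the metric values are placeholders.

```go
// Hedged sketch of the out-of-band reporting described above for a gRPC Go
// server, based on the google.golang.org/grpc/orca package. The recorder and
// registration names are assumptions, and the metric values are placeholders.
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/orca"
)

func main() {
	s := grpc.NewServer()

	// A recorder that holds the server's current utilization metrics.
	smr := orca.NewServerMetricsRecorder()

	// Register the ORCA out-of-band reporting service on the server, with a
	// minimum interval at which clients may receive metric updates.
	if err := orca.Register(s, orca.ServiceOptions{
		ServerMetricsProvider: smr,
		MinReportingInterval:  10 * time.Second,
	}); err != nil {
		log.Fatalf("failed to register ORCA service: %v", err)
	}

	// Update the metrics periodically from real measurements in your
	// application; the numbers below are placeholders.
	go func() {
		for {
			smr.SetCPUUtilization(0.5) // fraction of CPU in use
			smr.SetQPS(120)            // queries per second
			smr.SetEPS(2)              // errors per second
			time.Sleep(10 * time.Second)
		}
	}()

	// Register your application services on s here, then serve.
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen failed: %v", err)
	}
	log.Fatal(s.Serve(lis))
}
```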
A few more advanced features that I'd like to take this chance to talk about. The first one is custom LB policies: if our built-in LB policies don't meet your needs, you can definitely bring your own custom LB policy. We also recently added support for the RBAC HTTP filter, for service- and method-scoped client authorization on xDS-enabled gRPC servers. And lastly, gRPC clients currently support both IPv4 and IPv6; however, most implementations don't have support for individual backends having both IPv4 and IPv6 addresses. We are actually working on it, so in the near future the resolver and LB policy APIs will support multiple addresses per endpoint, and Happy Eyeballs will be used to determine the address.

Next, we recently added support for Java modules, so the module name of the gRPC jar files will be automatically generated. Another feature that we have in gRPC Java is least request load balancing, which distributes incoming requests to the server with the least number of active requests at the time the request is received. It's designed to improve server utilization and response times by ensuring that requests are evenly distributed across the available servers. An interesting fact: the Java implementation of this in gRPC was contributed by Spotify, so we want to encourage all of you to bring your ideas to gRPC and benefit all the gRPC users across the world. And with that, I'm going to hand it over to Richard.

Thank you, Gina. All right. Over the past few releases of gRPC Python, we have removed all external dependencies, so the library is now lighter and easier to install than ever. We've also added support for Apple Silicon, so M1 and M2 chips: in the latest release you'll find Mac universal dynamic libraries, which can be run on either ARM or x86 chips. In gRPC C-core, the basis for C++, Python, Ruby, and PHP, we have introduced EventEngine, a new public interface for applications to provide custom implementations for I/O and asynchronous execution; for example, you can drive gRPC using external event loops such as libuv. You can implement your own EventEngine in C++ and override various methods with the behavior that you want; simply call SetEventEngineFactory to get started using EventEngine. On the C++ side, we have recently upgraded the way we notify you when asynchronous RPC events occur. We're excited to introduce the new callback API: you no longer need to manage threads and regularly poll completion queues, which can be tricky to get right. Instead, the gRPC C++ library will invoke user-provided callbacks when RPC actions complete. The callback API provides a set of methods for your application to initiate operations, and your application can also override methods like OnReadDone and OnWriteDone to get notifications when RPC actions complete. And back over to Gina.

So for gRPC Go, we introduced a new channel state, IDLE, as an initial state, and it will transition into the READY state when the connections are established. Whenever there is a period of time without any RPCs, we will temporarily move the channel state back to IDLE and close the open connections to optimize the performance of your applications. When the next RPC comes in, the connection will be automatically reestablished for you, so no additional effort or implementation is required on your side. This feature has been available for Java and the C-core, and we recently added it to gRPC Go, so you can customize the idle timeout with the code that I have on the slides.
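Since the slide code is not reproduced in this transcript, here is a minimal sketch of what customizing the idle timeout on a Go client can look like. WithIdleTimeout is, to my understanding, the relevant grpc-go dial option, and the five-minute value is just an example.

```go
// Hedged sketch of customizing the client idle timeout in Go. WithIdleTimeout
// is, to my understanding, the relevant grpc-go dial option; the five-minute
// value is just an example.
package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial(
		"dns:///backend.example.com:50051", // hypothetical target
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// After 5 minutes without RPC activity the channel moves back to IDLE
		// and drops its connections; the next RPC reconnects automatically.
		grpc.WithIdleTimeout(5*time.Minute),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
}
```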
Another new feature that was released in gRPC Go is least request load balancing. So if you are interested in using the least request load balancing that I just mentioned earlier, check out the latest release of gRPC Go.

Okay, so that brings us to the end of the talk. As Kevin mentioned earlier, gRPC Conf is currently calling for speakers, so submit your idea with the first URL that I have on the slides. Visit the grpc.io site for documentation and example code; subscribe to our YouTube channel to get notifications when we have new videos available; join our monthly meetup to get the latest updates on gRPC; and you can always request a conversation with the maintainers to help answer any questions that you might have. You also might want to join the gRPC mailing list to get the latest updates, and finally, you can also follow us on X, formerly Twitter. Thanks for joining us, and we have a couple minutes to take questions.

About Rust support: for new projects, is it better to use an existing implementation like Prost or Tonic, or is it better to wait for the announcement?

I mean, I think that's up to you and your application and what makes sense on the timeframe. We are hoping to move really quickly. We haven't quite figured it out; although we've had many meetings with Tonic and we're trying to figure things out, we haven't finalized exactly what we're going to do yet. As far as timing, I think it's hard to guess, but I would think you may end up waiting. We're hoping sometime this year we have something to release, and if that timeline doesn't work for you, then obviously you need to do what's best for your application.

Is gRPC-Web part of the gRPC project, and do you know if there's any plan to support it more fully?

gRPC-Web is part of the gRPC project; we sit pretty close to the people who work on that. There are some plans at the moment to explore extensions to gRPC-Web, formal extensions that will enable all arities. I don't think the specific technology has been nailed down yet, whether that's WebSockets or fetch, but you can definitely expect more updates on that in the near future.

Okay, thank you.

Actually, we apparently do have a video about those updates already; it's on our YouTube channel. Perfect. Check out our YouTube channel; it went up just days ago, fresh off the presses.

Hi, I have two unrelated questions. The first one is, I hear about this reflection API being enabled and whatnot; how do you cope with the security risks of it? What is the expected default for reflection in production: is it enabled or disabled?

I missed the first part; the what API?

The reflection API.

Reflection, sorry. Okay.

So the question was how it's enabled, like in prod, do you enable it in general? Because in your previous talk you noted that you can toggle this boolean on or off, and I was wondering whether, for instance, it could be guarded by some authentication step first or something.

I see. So to enable reflection on your server, it's actually, for each language, generally a separate package that you have to pull in. For Python, there's the grpcio package, which you pip install, but to get reflection you actually have to pip install grpcio-reflection, and then there's a method that you can call to add it to your server. So it is something that you do have to manually include at the moment; maybe we can improve the UX there. If we're talking about the Gateway API, yes, there are security implications, which is why we would probably do that as an opt-in, not an opt-out.
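For comparison with the Python packages just mentioned, here is what the same explicit opt-in looks like in Go, as a minimal sketch using the google.golang.org/grpc/reflection package.

```go
// Minimal sketch of the same explicit opt-in for server reflection in Go,
// using the separate google.golang.org/grpc/reflection package.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	s := grpc.NewServer()
	// Register your application services on s here, then opt in to reflection
	// so tools like grpcurl can list and describe them.
	reflection.Register(s)

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen failed: %v", err)
	}
	log.Fatal(s.Serve(lis))
}
```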
Okay, and the second question was more about Go. I was quite curious when I used, I need to find the word, the service configuration in Dial. It's a really specific question: I was just wondering why it was expressed as JSON, as opposed to, you know, a builder or an options object or something like this.

Yeah, so service config is actually cross-language: the same service config should work whether it's C++, Python, Go, or Node, and that's why we use JSON everywhere. Service config you can supply manually (a couple of Gina's slides had examples of that), and for proxyless service mesh, or xDS configuration, it actually dynamically translates the xDS into service config in order to make that work. So theoretically, any service mesh config you can express via service config.

Thank you.

Hi, so I'm also a gRPC-Web user, and it's not really ideal, right, having to use gRPC-Web. I was wondering if in the future gRPC could be made to run natively in the browser; maybe it's the trailers, right, that are not present, so I don't know, with HTTP/3, or if it could be made to do it. Thanks.

Yeah, so definitely a good direction to look at. The big issue there is trailers within HTTP/2; we've definitely talked to browser implementers about fully implementing the HTTP/2 spec. gRPC-Web currently is the way that we're looking at improving that. The difficulty there is not necessarily with proxies but with the limitations of gRPC-Web: as I understand, you can't have all of the arities, you know, unary, client streaming, server streaming, and the extensions that I mentioned in the earlier answer will enable gRPC-Web to handle a full gRPC API. So anything that you can run backend to backend, you will be able to run browser to backend as well, but probably with a proxy in the middle. To improve the UX of that, in the 2 p.m. Gateway API talk I showed off some potential extensions to the Gateway API that will help in setting up an ingress proxy that will do that translation for you, so you don't have to manually manage that middleware.

Awesome. Two more questions, here we go. I was just curious: gRPC is using HTTP/2 as a transport protocol, and QUIC is seeing more and more use, right? So is there anything happening on the transport side of things to evaluate QUIC and whether it would make sense at all?

I am not aware of active exploration of it. We haven't had any major complaints from people so far; for the most part, the complaints we've heard have been "3 is bigger than 2." We kind of want a more compelling argument before we start looking into that. There have been some prototypes written for HTTP/3; C++ definitely has an alternative transport using Cronet, which is the Chromium implementation, so if you really wanted to try that out, you can compile that yourself with gRPC C++. Definitely a handful of folks have been doing investigation on HTTP/3, and so far we haven't seen that it fits the needs. Obviously, if there's something we're missing, please let us know, but so far that hasn't been the case.

I think we're going to take one last question because we have two minutes, but Gina, Richard, and I will be outside in the hall afterwards, so if you do have more questions, please feel free to join us out there. So, last question.

I just wanted to ask about protobuf; I was quite struck that you never mentioned it once. Could you share some thoughts there?

Do you have a more specific question?

Who do we talk to about protobuf? Who's telling us what's new in protobuf?

I see, how do we get protobuf to work faster with Go, that kind of thing. Yeah, we will definitely have the protobuf team at gRPC Conf; probably not a great answer, but they did speak last time. It is two teams, so Wimbo,
who's my peer (at one point in time he actually managed the protobuf team), but as things grow and we invest bigger and bigger, they are independent teams. We meet with them quite frequently, and the two teams work really, really closely together, but I would definitely say protobuf-specific things would be best to take to them. Feel free to let us know if you want us to help make those connections.

Great. Well, thanks so much, everyone. I really appreciate everybody coming and helping us make gRPC a continued success, both here at the conference and out in our day-to-day lives. I want to thank you all, and again, Gina, Richard, and I will be out in the hall if you have any more questions. Thanks, everyone.