All right, welcome everyone. How's everyone doing? Very good. Has everyone had a good conference? Who stayed out after midnight at least once this week? Anybody stay out after one? Anybody stay out after two? You win, you beat me by a little bit.

Anyway, I want to welcome everybody and thank everybody for coming. My name is Kevin, and I'm here with Gina and Richard, who'll be presenting along with me. I run the Java, Go, and Python teams for gRPC. Most of today's talk is going to be Gina and Richard walking through a bunch of new features and exciting new stuff that all of you can try out. But before we jump into that, I wanted to share some of the growth and developer experience work that the team has been doing recently.

I'll kick things off by showing an update on the stars in our GitHub projects. It's really exciting for me to look and see how we just continue to grow over time. We've been around for quite a while now, and every time I look at this chart I'm a little afraid of what I'm going to see: is it going to dip off?
But as you can see, in every language, with our core repo at the top and the others below, it just continues growing over time, which is really exciting for me. On the right I also have the number of pull requests in several of the languages and libraries we support, and there as well we continue to grow, adding more and more features every day. You're going to see a bunch of those later today.

Here are some of the big success metrics that we keep an eye on. The top one is npm: 7 million weekly downloads for all the Node.js users, which is a really impressive number. gRPC is becoming ubiquitous across the internet, and we're really excited about that. For Python, it's 2.2 million downloads per day, another really impressive, huge number. And finally for Maven, 18 million downloads per month. So if you're considering gRPC, or considering sticking with gRPC, you can really see the large volume of users we have, and it continues to grow every day. The team gets bigger and bigger every day too, and we have all of you to thank for that. So thank you.

Last slide for me. A year ago at KubeCon Detroit we asked the audience how they felt about our documentation compared to other open-source projects, and whether we were meeting their expectations. One of the things we heard from several people was that they wished there was more; they pointed out a few areas where we didn't have documentation. So we went in and added 11 new documentation sections. It's been a huge effort over the last year closing a bunch of those gaps, and we also added 32 new examples across the repos. Those are a direct result of things we heard from all of you at KubeCon Detroit a year ago. And so if you do have
any feedback like that to share with us, please let us know after the talk or during Q&A. We would love to hear it, and we'll do whatever we can to give you the best experience as you use things.

Finally, one of the things we took on was revamping our YouTube presence. In probably the last six months we've put out 23 new videos on YouTube. If you're not aware of it, that's a place where we're really starting to put content out, and what we're hoping to do throughout next year, as we deliver new features, is to deliver a quick five-minute video for each one that goes over it, explains it, and makes it a little bit easier to consume. So definitely tune in to the YouTube channel if you haven't.

I also wanted to remind everybody of the gRPC documentation site and main website, grpc.io, where you can find everything. There are two main channels where we push things out: one is YouTube, which I talked about, and the other is Twitter, or X, so please feel free to follow us there.

Lastly, that last group of links is how you can interact directly with the gRPC team. We have a Google Group mailing list that everyone on the team is very actively engaged in; we answer questions there, so it's a great place to ask a question and get the entire team looking at it. Another thing we have: we're doing a monthly meetup. It's roughly once a month, it's online, and you can join and watch. As part of each meeting we do an office hour, so you can type in a question you have, and almost all of the gRPC maintainers are typically on the call, or at least half of them.
So there will be a dozen or more maintainers in the meeting, and especially if you post your question ahead of time, we can make sure we have a good answer for you, and you can have a dialogue about your question directly with most of the gRPC maintainers. So I definitely encourage everyone to join the meetup; it's a great place to ask your questions and get a live, face-to-face answer.

And the last one: we run something we call Meet a Maintainer, which is a scheduled 30-minute or one-hour deep dive with one of the main gRPC maintainers. It's really two things. One half is for us to learn what you like, what you want, how you're using gRPC, where you're struggling and where you're not, what's working and what isn't. The other half is a great chance for you to talk about your architecture, your app, and what you're trying to build, to get support and help, and to start a deeper relationship with the gRPC team to help you achieve your goals. We would love to speak with you and learn more about what you're doing, so definitely feel free to schedule time using the link here.

With that, I'm going to hand it off to Richard to talk about some developer tools.

Thanks, Kevin. All right, so my name is Richard Belleville. I'm the tech lead for gRPC Python; I do a bunch of dev work there, as well as Kubernetes integrations and service mesh. The first thing I want to spend some time talking about today is developer tooling. Some of these tools you may know about already; some of them will hopefully be a surprise to you. But all of them are going to help you along your gRPC journey. The first one is grpcdebug.
This is a command-line tool that provides a range of debugging information, such as stats about how many RPCs have been sent or failed, as well as address resolution results and xDS configuration, so you can really get down to the nitty-gritty of what is happening within your gRPC stack. You should reach for this whenever you want to troubleshoot gRPC.

Then we've got gRPCurl. This is similar to the curl CLI tool that I'm sure you're all familiar with from making web requests. gRPCurl enables you to send RPCs from the CLI, either with or without server reflection, so you don't necessarily need to have the protos on your file system. gRPCurl does a great job of letting you inspect the types within your API, so it provides this really satisfying dev loop: query what an API looks like, then manually call it using the information you just learned.

And finally, a tool that I'm sure many of you already know and love outside of the gRPC space: Postman now has full support for gRPC, allowing you to make RPCs from a GUI, including streaming RPCs, so it's pretty advanced, full-featured gRPC functionality. For more details, hop on over to the Postman website and check out the excellent talk Postman gave at gRPConf 2023.

All right, moving on to something close to home at KubeCon: the Gateway API. The Kubernetes Gateway API is a recent API that provides a more extensible way to manage traffic routing in Kubernetes clusters. It's designed as a revamp of the Ingress resource to get rid of vendor-specific implementations and annotations. The special interest group behind the Gateway API has identified the most common use cases for annotations on Ingress resources and built them directly into the Gateway API.
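To make that concrete before moving on: with only the generic HTTP routing resources, steering gRPC traffic means matching on gRPC's wire-level HTTP mapping, where every call is an HTTP/2 POST to /&lt;package&gt;.&lt;Service&gt;/&lt;Method&gt;. A minimal sketch of what that looks like (all resource, gateway, and service names here are hypothetical placeholders, not from the talk):

```yaml
# Sketch only: routing gRPC by matching its HTTP path mapping.
# Every gRPC call is an HTTP/2 POST to /<package>.<Service>/<Method>,
# so a plain HTTPRoute can only match on that path prefix.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-grpc-by-path    # hypothetical name
spec:
  parentRefs:
    - name: example-gateway      # hypothetical Gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /example.CheckoutService/   # a gRPC service, spelled as a URL path
      backendRefs:
        - name: checkout-backend
          port: 8080
```

This works, but it forces you to describe services and methods in terms of URL paths rather than in gRPC's own terms.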
So we worked with this special interest group to introduce a new resource within the Gateway API called GRPCRoute, so that you can route gRPC traffic more idiomatically, rather than having to drop down and route it at the level of HTTP. GRPCRoute is currently in the experimental channel of the Gateway API, and we expect it to be promoted to v1 in the coming months, hopefully by KubeCon Paris. It's currently supported by GCP Traffic Director and a bunch of other controllers that you can see listed there.

The other really exciting thing within the Gateway API space is GAMMA. This is about using the Gateway APIs not just for ingress but also for service mesh use cases, so east-west traffic, and we've been really engaged in the design process to ensure that gRPC proxyless service mesh is a first-class citizen in these APIs. So stay tuned for the ability to use vendor-agnostic Kubernetes resources to manage your gRPC proxyless service mesh.

We're also really excited to share that stateful session affinity support is now available in gRPC C++. This is a load balancing technique that ensures all requests from a particular client session are routed to the same backend server.
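As a toy illustration of that guarantee, and emphatically not gRPC's actual implementation, here is the pinning behavior in a few lines of plain Go: a new session gets a backend by round robin, and every later request in the same session goes back to that backend.

```go
package main

import "fmt"

// affinityRouter is a toy sketch of stateful session affinity:
// new sessions are spread round-robin, and each session is then
// pinned to whichever backend served its first request.
type affinityRouter struct {
	backends []string
	next     int               // round-robin cursor for new sessions
	pinned   map[string]string // session ID -> backend
}

func newAffinityRouter(backends []string) *affinityRouter {
	return &affinityRouter{backends: backends, pinned: make(map[string]string)}
}

// route returns the backend for a request carrying the given session ID.
func (r *affinityRouter) route(sessionID string) string {
	if b, ok := r.pinned[sessionID]; ok {
		return b // session already pinned: always the same backend
	}
	b := r.backends[r.next%len(r.backends)]
	r.next++
	r.pinned[sessionID] = b // remember the choice for this session
	return b
}

func main() {
	r := newAffinityRouter([]string{"server-1", "server-2"})
	fmt.Println(r.route("cart-A")) // first request: round robin picks server-1
	fmt.Println(r.route("cart-B")) // new session: round robin picks server-2
	fmt.Println(r.route("cart-A")) // same session: pinned to server-1 again
}
```

In real gRPC the "session ID" is carried in a cookie, as described next, but the routing guarantee is the same.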
This is useful for applications that maintain per-session state, such as shopping carts, user profiles, and game sessions. gRPC implements stateful session affinity using cookies. When the first request is sent out, the gRPC client's xDS stack routes it to a server as normal, based on the configured LB policies: round robin, pick first, the ones you all know and love. In this example, request one happens to go to server two based on that load balancing policy. Server two then encodes its identity into a cookie and populates the set-cookie response header with it; you can see that set-cookie header being attached to response one there. The client receives this set-cookie header in the response and uses the cookie in it to define a session. All subsequent requests in that session need to be populated with the cookie, and the gRPC routing stack will ensure that all requests with that cookie get routed to server two persistently.

So here the client wants to send request two, which is in the same session as request one. It populates request two with the cookie returned in response one, and the gRPC xDS stack makes sure it gets to server two based on the cookie. Until the cookie expires or server two goes down, all requests with this cookie are going to be routed to server two, and as a result you're guaranteed to always hit a warm cache for that session, which will significantly speed up your application. It's a big win for latency-critical applications.

And with that, I will hand it over to Gina.

Thank you, Richard. Hello everyone, my name is Gina.
I'm the tech lead manager of the gRPC Go team at Google. In the previous slides Richard talked about what stateful session affinity is and how it generally works, so now let's take a look at how to enable it on Traffic Director. We have introduced a new custom resource called GCPSessionAffinityPolicy. In the YAML file you set the cookie TTL in seconds, and the session cookie will expire after the time you provide here. Then you set the target reference to specify which route or service you want to enable stateful session affinity on. That's all you need to do. To learn more about stateful session affinity, check out the short link below, which will take you to a deep-dive talk on stateful session affinity from KubeCon Europe earlier this year.

Another new gRPC feature is custom backend metrics for load balancing. This is a mechanism in the gRPC library that allows you, as a gRPC server, to inject your own custom metrics, and these metrics can be used for load balancing decisions. We follow the Open Request Cost Aggregation (ORCA) standard, and you can report your custom metrics in two ways. The first option is for your service to attach the metrics in the trailer data when the RPC finishes. The other option is to periodically send the metrics out of band. Custom backend metrics are available in production now, and if you want to learn more, check out the documentation at the short link below, which will take you to our developer guide with example code across multiple languages.

We recently added support for the weighted round robin load balancing policy, and it can be used with the custom metrics we just talked about a few minutes ago. It is really simple to configure if you are using GCP Traffic Director: you just set the custom policy to weighted round robin, with parameters based on your use case. If you prefer to send the metrics out of band, just set enable_oob_load_report to true, and we have additional parameters for you to fine-tune the behavior of the weighted round robin policy.

You can still leverage weighted round robin even if you are not using GCP Traffic Director. Here we have example code in Go: you can set the load balancing config, with the configuration in JSON format, when calling Dial from your gRPC application. Once you've configured the policy as weighted round robin, the next step is to send metrics from your backends. Here is the formula for how the gRPC load balancer weights a backend using the metrics you send us, which include CPU utilization, QPS (queries per second), EPS (errors per second), and an error penalty.

Below is an example of using out-of-band reporting in gRPC Go server code. You create a server metrics recorder with the options that fit your needs, like the minimum reporting interval, then register your recorder and start sending metrics like CPU utilization, QPS, and EPS from wherever you like. That's all you need to do to enable weighted round robin in gRPC, and more details can be found at the short link below.

Next: randomized pick first. We are extending the existing pick-first policy with a flag to shuffle the address order for you. We all know that pick first is a very straightforward load balancing policy; just like its name says, when the name resolver returns a list of addresses, we try to connect to the first one, and then to the second one if the first attempt failed. Pick first is commonly used when the DNS server shuffles the address order, but in some cases the DNS server doesn't support shuffling or randomization.
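To give a feel for the weighted round robin weight formula mentioned a moment ago, here is a small worked sketch in plain Go. The shape, weight = qps / (utilization + (eps/qps) × error penalty), follows the open-source gRPC weighted round robin design (gRFC A58); treat the exact arrangement of terms as an approximation of what was on the slide:

```go
package main

import "fmt"

// weight sketches the weighted-round-robin backend weight:
//
//	weight = qps / (cpuUtilization + (eps/qps) * errorUtilizationPenalty)
//
// A backend reporting higher CPU utilization or a higher error rate
// (eps = errors per second) gets a lower weight, so the balancer
// sends it proportionally less traffic.
func weight(qps, cpuUtilization, eps, errorUtilizationPenalty float64) float64 {
	if qps == 0 {
		return 0 // no traffic reported, so no basis for a weight
	}
	return qps / (cpuUtilization + (eps/qps)*errorUtilizationPenalty)
}

func main() {
	// Two backends with equal QPS; the second is busier and erroring.
	healthy := weight(100, 0.40, 0, 1.0)     // 100 / 0.40 = 250
	struggling := weight(100, 0.80, 20, 1.0) // 100 / (0.80 + 0.20) = 100
	fmt.Printf("healthy=%.0f struggling=%.0f\n", healthy, struggling)
	// The healthy backend receives 2.5x the share of new requests.
}
```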
When your DNS server doesn't shuffle for you, you can simply set the shuffleAddressList flag in your gRPC code and we will shuffle the order for you.

We are excited to announce that microservices observability is generally available across all the languages gRPC supports. This is a powerful tool for gaining insights into your systems: it helps you quickly troubleshoot problems and improve the performance and reliability of your microservices, so you can make better decisions about how to manage and architect your system. Microservices observability provides three different types of data. First, metrics, such as how many RPCs started or completed over time. Second, traces, which represent how long RPCs take to complete, also known as round-trip latency. Third, logs that you add from your microservices; that could be anything from the message payload to the final status or error code. If you are using a microservices-based architecture, you should definitely enable observability to get all the benefits I just mentioned, and it's really simple: all you have to do is provide an observability config, and gRPC will send metrics, traces, and logs for you to Google Cloud Platform or any other third-party service you are using. We built a unified plugin that integrates with any platform supporting OpenCensus metrics and traces, and this makes it easy to identify and troubleshoot problems regardless of the stack you are using today.

Here I want to show you an example of the observability configuration. To enable cloud monitoring, all you have to do is add the cloud monitoring object with the value we have on the slide. For cloud trace, it could be overwhelming if you send every single trace, because the volume could be huge, so you can specify a sampling rate that fits your needs; in this example only 5% of the trace data is randomly selected and sent to the platform you selected. For cloud logging, you can list out the events that
you are interested in, or use a star (*) to reference all the relevant events, or exclude certain events you are not interested in. After that, you will need to add a few lines of code to your application. Here we are showing the code snippets in Go: in the main function, call the observability Start function, passing in a context. After that, metrics, traces, and logs will be sent to Google Cloud Platform or whichever third-party service you are using. Don't forget to call the observability End function to flush the data and free the memory and resources before shutting down your service. That's all you need to enable microservices observability, which helps you gain insight into your system's performance and identify potential problems.

There are a few more features I'd like to take this chance to talk about. The first one is custom LB policies: if our built-in LB policies don't fit your needs, you can definitely bring your own custom LB policy to gRPC. We also recently added support for the RBAC HTTP filter, for service- and method-scoped client authorization on xDS-enabled gRPC servers.

Next, dual-stack support. gRPC clients currently support both IPv4 and IPv6; however, most language implementations do not support individual backends that have both an IPv4 and an IPv6 address, and we are actively adding support for that. In the near future, the resolver and LB policy APIs will support multiple addresses per endpoint, and Happy Eyeballs will be used to determine the address.

Next, we recently added support for Java modules, so the module name for gRPC Java will be automatically generated. Another new feature in gRPC Java is least request load balancing, which distributes each incoming request to the server with the least number of active requests at the time the request is received. It is designed to improve server utilization and response time by ensuring that requests are evenly distributed
among all the available servers. An interesting fact: the Java implementation of this in gRPC was actually contributed by Spotify, and we want to encourage all of you to bring your cool ideas to gRPC and benefit all the gRPC users across the world. And with that, I will hand it over to Richard.

Thanks, Gina. All right, so over the past few releases of gRPC Python we've actually removed all external Python dependencies, so the library is now lighter and easier to install than ever. We've also added support for Apple silicon, so M1 and M2 chips, arm64, and in the latest release you'll find Mac universal dynamic libraries, which can run on both arm64 and x86 chips.

Then in gRPC C-core, which is our internal term for the library that the C++, Python, Ruby, and PHP gRPC implementations wrap, we've introduced EventEngine, a new public interface for applications to provide custom implementations for I/O and asynchronous execution. So, for example, you can drive gRPC using external event loops such as libuv. You can implement your own event engine in C++ and override various methods with the behavior that you want, and you simply call SetEventEngineFactory to get started using your event engine.

Over on the C++ side, we've recently upgraded the way we notify you when asynchronous RPC events occur: we're excited to introduce the new callback API. You no longer need to manage threads and regularly poll completion queues, which can be tricky to get right. Instead, the gRPC C++ library will invoke user-provided callbacks when RPC actions complete. The callback API provides a set of methods for your application to initiate operations, and your application can also override methods like OnReadDone and OnWriteDone to get notified when those RPC actions complete.

And back over to Gina. So first, gRPC Go: we introduced a new channel state, idle, as the initial state, and the channel will transition into the ready state when the connection is established. If
there is a period of time without any RPC activity, we will temporarily move the channel state to idle and close the open connections, to optimize performance for you. When a new RPC comes in, the connection will be automatically re-established, and no additional effort is required from your application. This feature has been available in Java and C-core, and we recently extended the support to Go. You can customize the idle timeout with the code that I have on the slides.

Another new feature, released a few weeks ago in gRPC Go, is least request load balancing. So if you are interested in least request load balancing in Go, check out our latest release.

So that brings us to the end of the talk. We'd encourage you to: visit the grpc.io site for documentation and example code; subscribe to our YouTube channel to get notifications when new videos are available; join our monthly meetup to get the latest gRPC updates; request a Meet a Maintainer conversation to get answers to any questions you might have; join our gRPC mailing list for the latest updates; and finally, follow us on X, formerly Twitter. Thanks for joining us, and we have a couple more minutes to take questions.

[Audience question, partially inaudible]

Sorry, I don't think I got the first part of the question. I see. I would say that we on the gRPC team are not the experts on tuning Envoy; we've spent a lot of time on the proxyless stack. I do think there are some good recommendations from the Envoy team, which we've worked with directly, and I think we can send you those offline if you want to talk to me in the hall afterwards. Any other questions? Oh, here we have one.
I missed the first part, you might have discussed this, but one of the challenges we have using gRPC in front-end applications is visibility of the payload, maybe with an F12, and just traceability of what's going on. Sometimes it causes some friction; front-end developers honestly aren't fans of gRPC. So, any improvements in that area?

So, we do have microservices observability, which has been available for Java and Go since last year. And for the browser side, that's more about gRPC-Web. Yeah, so on that: we ran a conference at Google a few months ago, gRPConf, and that conference was focused all around gRPC. One of the pieces of feedback we got there was more interest in gRPC-Web, and we've made the decision to make a much larger investment in that, starting very early next year and a little bit at the end of this year. So you should be seeing a lot more improvements and additional work there; that's something we heard loud and clear from our community at our conference recently. Yeah, stay tuned, and hopefully you'll see more of what you're looking for. And I think, not exactly browser related, but things like Postman support are another thing we're excited about there.

Cool, any other questions from anyone? Questions, comments, feature requests? So Richard, Gina, and I will stick around, either in the room or out in the hall, depending on whether there's another talk coming up, so feel free to come and ask us. Again, I want to thank all of you for coming to see our talk. A reminder: we would love to have you join the mailing list, our YouTube channel, and Twitter if you haven't done so already. Enjoy the rest of your time here in Chicago, and thanks again, everyone.