Hey, I just want to say thanks for coming, and I really appreciate you taking the time. I know there are a lot of talks you could have chosen, so thanks for joining us today. So, I'm Eric Anderson, this is Kevin Nielsen, and we'll be talking to you about feature work in gRPC, xDS and not. We'll go ahead and get started with a bit of what's new in gRPC, since we don't have another talk for that and it'll help set the stage. Kevin will do that, and then I'll get into the xDS-and-not portion. Before I jump into things, I did want to call out a few people in the audience: if you are a gRPC maintainer, raise your hand. So we've got a bunch of folks from the team here, and if you have questions afterwards, or want to deep dive or follow up with them, feel free to do that. Raise your hand one more time so people know why they're raising their hands. Cool, so those are our maintainers. I am curious: how many people in the audience have submitted pull requests that got merged in? Any committers? Cool, well, sir, he's a maintainer, so we pay him to do several every week. Anyway, you can see up on the board here one of the things that we're doing, something we call developer facilitation meetings. It's kind of a weird name, but really what it means is we sit down, we've got a bunch of questions, and we'd love to hear about what you're doing, how you're using gRPC, whether you're happy, and whether there's anything we could build to make you even happier. So I do want to encourage everyone to sign up for those at grpc.io/meet.
It's a great opportunity, and it's also a great time, when you come in for one of those, to stay for another 10 minutes, or another 30 minutes, and ask questions. And honestly, if you go through one of these and you want to set up a different time to meet with engineers to talk about things, we're happy to do that: answer questions and help you build amazing gRPC applications, because that's what we want. The other thing is we have a mailing list; I imagine most of you in the room know about that mailing list, but I did want to call it out. Yeah, and as we're going along, the slides are available as a PDF on the schedule app and on the website, so you can get those if you miss a link. Cool, thanks, Eric. So the next thing I wanted to talk about a little bit is the help that we want from all of you, and what you can do to be part of evolving gRPC and making it successful. One of the things that we do within the team is, for each of the individual gRPC languages, we go through and review all the issues and all the pull requests every single week. It's one person per language, and we assign a different person each time, we call it a rotation. That person goes through, reads the issues, tries to triage them and get them assigned to the right person, and then we meet as a team, where they go through these issues and pull requests, talk about them, and get advice from others. What I'm really highlighting is just how seriously we take it and how much time we put in. So please, if there is something you don't like, if there is something you feel we should be doing better, if you think there's a bug, please file it. We would love to fix it and make things better as best we can. Yeah, quick question. Yeah, so let me repeat the question so everybody can hear it.
So the question was: how do we keep parity among the different languages and among all the different releases? I'm going to say there are two types of issues that get raised. One is something like "there's a memory leak in Go," or there's some other problem in Java where it doesn't follow whatever it is. A lot of the issues tend to be very language specific, and a lot of the work that we do is keeping things modern for each particular language, which really doesn't have much to do with the overall underlying architecture. When we do have issues that are feature requests and things like that, we prioritize them and we do them. And I would say we at least make an attempt to launch features together in every language, or at least the languages we call supported languages, but that's a nice-to-have, not a must-have. We would rather get the features out, especially if we know someone using a particular language wants and needs them, and we push them forward like that. We take a ton of pull requests from outside the team; as part of the triage process, we bring those in and merge them. So definitely feel encouraged: if you have forked, don't keep it private. Share it back, because there's a whole team of people here for whom that's what we do. And the last point: we would love to graduate from incubator status, and one of the things that's super, super important to make that happen, but also something we're interested in in general, is having more maintainers and contributors who aren't Google employees. So if you are interested, start submitting some PRs, start submitting some issues and responding on them, and let us know if that's something you're interested in, and we can make that happen. Cool. I wanted to share, at a glance, some of the more recent achievements.
So these are things from roughly the last year, the last six months or so. We've reached 66,000 stars on GitHub, so people are really happy, and they're looking into the internals, trying to learn more about it, trying to follow the releases, and deeply engaged. Otherwise it's not really necessary to star the project on GitHub, although we do appreciate it. Other things: we're investing more and more in the team. I put five members here, but I think it's six or seven in the last six months; I didn't count so carefully, but we definitely have a large number of new maintainers on the team in the last six months, working full-time on the project. Down at the bottom, you can see the release cadence for all the languages: very active, with over a PR a day going into the repo for Java and C++, and the others are still very, very healthy as well. So it's a very active project, doing a whole lot as we go. Then, for our proxyless service mesh product, we have a bunch of different features. We launched observability, which I'll talk about in another two slides, and we're launching custom load balancer policy support; some folks in the room were very instrumental in making the request for that, and we want to make it available. Also the ability for custom metrics, retries, authz, and outlier detection. These are some of the key features that launched in the last couple of months or are about to launch, so really exciting stuff, good momentum across the team. We're super proud that not only were we at KubeCon this week, we were also at API World, where we won the Best in Microservices Infrastructure award. That was a cool little thing that we're happy and proud about, and we appreciate that it happened.
Now I want to share some of the numbers so you can see the scale of adoption and usage; gRPC is becoming a real industry standard across the board. For npm, we have five million weekly downloads of our grpc-js package. For PyPI, it's two million downloads a day, and it's the number 52 most downloaded package on PyPI, which is pretty impressive for what we are. And for Java on Maven, there are 12.5 million downloads a month of grpc-context. So these are really, really impressive numbers. We're really happy with them, and I hope everyone will continue to help us grow those numbers and use gRPC for more and more of the things that they're doing. And this is another chart that's pretty cool: the star history. This is from our gRPC core project, and as you can see, it just continues to grow with really no end in sight. So we're really happy and excited about this, and we want to make sure everyone realizes where gRPC is: it's very stable, very widely used, and continuing to grow, with big investments from Google and from our external developer community. I wanted to talk a little bit about the observability feature that we launched. We launched a public preview on Friday. So, Friday at 5 p.m., when you're not supposed to launch software; we really put in a lot of effort, and I want to thank the team for everything they did to make this possible and get it out there. What it is, it's a public preview, which means we have our first set of core MVP features in place. We really believe the APIs aren't going to change that much, and we think it's ready for folks to use. At the same time, it's a great opportunity to work with us as we work towards GA early next year, and we would love to work with any of you on this product. Currently we're supporting logs in Stackdriver, or Google Cloud Logging.
We have a bunch of analytics dashboards that are available, alerts and notifications, and tracing. One of the most powerful and interesting things here, I think, is the alerts and notifications, where you can set a threshold on, say, a percentage of failures or on response time, and set yourself up for an email, or however you want to be notified, so that you can learn about these things through a private email that comes right away rather than learning about them on Reddit. So that's our new observability feature; please reach out to Eric or me after the talk if you're interested in helping us evolve it, helping us build it, and hopefully helping make it something great for all of you. And with that, I'll hand it off to Eric. So, features in xDS or not: how do these things work together? This is a maintainer talk, but a lot of people may not know xDS, or may not know it by that name. Basically, it's a control plane protocol to configure a mesh, a bunch of different nodes. It came out of Envoy: it was how different control planes could configure Envoy as the sidecar, or as a reverse proxy for that matter. It is basically just a watch-based API to get configuration updates, so most of the interesting parts are in the resources, those configurations, rather than in the protocol itself. And there are a couple of different implementations: Traffic Director would be one, Istio support is in beta I believe, and some people are making their own with go-control-plane. All of those can use xDS to push configuration to gRPC. With gRPC support, you're able to get something like the mesh, where you get the configuration but you don't necessarily need a sidecar, so you can still have normal direct client-server communication without the extra proxy hops. The "x" may make you wonder what's up with that.
So overall, xDS stands for "[x] discovery service," where that variable might be listener, cluster, route, or endpoint. It's not too important to know all of those unless you want to start making your own control plane. One particular xDS would be the listener discovery service, or LDS, and we'll throw around terms like LDS, EDS, and CDS all the time; they refer to listeners, clusters, or whatever resource is related to that. So now we have an idea of what xDS is. Let's shift to features. Here's a sampling of features from the last year. It's not too important exactly which features I've selected; it's meant to be a sampling, although these are a lot of the higher-profile ones. We'll go through them and give a little description of what they are, so we know what we're talking about, and then we'll see how these impact gRPC and how they're done. RPC retries: we had work already underway for retries before any xDS work. The idea is you try an RPC, and if it fails for whatever reason, you can check what the reason is and decide whether or not to just try it again after a delay and see if it works the second time. TLS security is not so much for creating certificates; it's more for configuring where the certificates can be found and what other specialized TLS configuration you need, and that allows a centralized control plane to tell clients what to do and to form a secure mesh. Service authorization really piggybacks well off of the TLS piece: for a particular service, you want to know exactly which other services can contact it, so you can restrict it, say only services A and B can contact service C.
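Stepping back to the protocol itself for a moment: the "watch-based API" described earlier can be pictured with a toy sketch. This is purely illustrative plumbing, not the real ADS stream (which carries protobuf resources over a bidirectional gRPC stream); all names here are invented, and the "control plane" is just a pre-recorded script of updates.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    # Stand-in for an xDS resource (listener, route, cluster, endpoint).
    name: str
    version: str

def watch(name, pushed_updates):
    # Yield every new version of the named resource as the control plane
    # pushes it. Push-based: the client subscribes once and never polls.
    for r in pushed_updates:
        if r.name == name:
            yield r

# A fake control-plane "session": two pushes for the listener, one for a cluster.
script = [
    Resource("listener/inbound", "v1"),
    Resource("cluster/backend", "v1"),
    Resource("listener/inbound", "v2"),  # config changed, so it is pushed again
]
versions = [r.version for r in watch("listener/inbound", script)]
print(versions)  # ['v1', 'v2']
```

The point of the sketch is the shape of the interaction: the interesting complexity lives in what the resources contain, not in the subscribe-and-receive-updates loop itself.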
Outlier detection monitors RPCs, and for each backend it will say, "look, this backend looks a little unhealthy, a lot of RPCs have failed on it compared to other backends," and so it'll decide that this backend may be an outlier and stop sending new RPCs to it. Least request LB: we just have this in Java right now, and it's similar to the round robin load balancer, but instead of just choosing the next server in the list and round-robining over and over, this one looks at how many outstanding requests each backend already has from that client and sends to the server with the least outstanding requests. Custom locality LB is sort of a bring-your-own load balancer: if you want to make your own load balancer, you implement it, and this is some of the plumbing to use it from xDS. And custom backend metrics plays into that. It's a little different from the others, but it ties heavily into the custom LB policy: it allows the client and the server to exchange utilization. Mainly, the server can tell the client, "here's my CPU utilization," and then the load balancer on the client can decide which backends to go to based on that utilization. So we've got this sampling. What impact do we have whenever we're making these features, and if you're interested in some of them but not in that whole xDS set of characters, what does that mean for you? Well, we've been doing xDS for a little while now, and we've gotten pretty good at figuring out, OK, there's this particular set of features in xDS; how will that end up being done in gRPC? This isn't something to memorize, but it gives an idea of the mappings that we generally do.
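Before moving on to those mappings: the least-request selection just described is simple enough to sketch. This is a stdlib-only illustration of the criterion, not the real policy; real implementations typically sample a couple of random backends (power of two choices) rather than scanning the whole list, but the "fewest in-flight RPCs wins" idea is the same.

```python
def pick_least_request(outstanding):
    # 'outstanding' maps backend address -> number of in-flight RPCs
    # from this client. Pick the backend with the fewest.
    return min(outstanding, key=outstanding.get)

# Hypothetical addresses, purely for illustration.
in_flight = {"10.0.0.1:443": 7, "10.0.0.2:443": 2, "10.0.0.3:443": 5}
target = pick_least_request(in_flight)
print(target)  # 10.0.0.2:443
```

Compare round robin, which would pick the next address regardless of how loaded it is; least request adapts when one backend is slow and requests pile up on it.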
So there's this thing in xDS called a listener, and that turns into credentials and configuration for credentials; that's the TLS thing. A route is going to be service config. Clusters: most of that configuration fits pretty well into the load balancer space of gRPC. And then filters are sort of mixed between listener and route, but whenever you've got an HTTP filter, that's pretty much going to be an interceptor. Those last two, the LB policies and the interceptors, are where most new features go, and that's good, because those are the predominant plugin architecture where you can inject code into gRPC. So us needing to do most features in those last two, LB policies and interceptors, means this actually fits in pretty well with gRPC overall. Don't memorize this; this is a diagram from a recent gRFC. gRFC is our change proposal process. The important thing to note is there's a lot of green here. Every green box is an LB policy, and they're constructed into a big tree. The big tree is not actually bad in my mind. To me, this diagram shows two things. One, the plugin system is working: we're able to inject quite a bit of functionality into gRPC through the plugin mechanism, so if you don't want some of this, you're not going to end up using it, but if you want the various features, you can get them. And also, composition is working: we're able to take these various features, put them into little boxes, and then compose those into a much larger system and manage the complexity. So: xDS only. I was saying these are xDS features. What do these particular features look like if you don't care about that xDS thing? To answer that, we need to talk about that mapping. RPC retries end up being service config. We already had some retry work before xDS, so xDS is able to just configure that existing functionality that we were already working on.
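To make "retries end up being service config" concrete, here's roughly what a retry-enabling service config looks like; the retryPolicy shape follows the client-side retries design (gRFC A6), while the service name "example.Echo" is just a placeholder. Wrapped in Python only to show it's plain JSON you could hand to a channel or serve from a control plane.

```python
import json

# Illustrative gRPC service config enabling retries for one service.
service_config = json.loads("""
{
  "methodConfig": [{
    "name": [{"service": "example.Echo"}],
    "retryPolicy": {
      "maxAttempts": 4,
      "initialBackoff": "0.1s",
      "maxBackoff": "1s",
      "backoffMultiplier": 2,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}
""")

policy = service_config["methodConfig"][0]["retryPolicy"]
print(policy["maxAttempts"])  # 4
```

With xDS, the control plane's route configuration is translated into this same service config form under the hood, which is why the retry machinery works the same with or without xDS.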
TLS security, though, is going to be credentials. Authorization is going to be an interceptor. The next three are all load balancers, and that last one, as I mentioned, is a little bit weird; it's more of an API, but it interacts mostly with the load balancers and interceptors. Except for those two toward the top, everything else is actually not tied to xDS; it's pretty xDS independent. Whenever we're implementing something, there will potentially be some xDS glue that we put in, so the actual implementation might have some code specific to xDS, but that's just plumbing. The core of the functionality is available outside of xDS. And you can see some of that in the gRFCs, which I mentioned are our change proposals. The outlier detection one says it is usable by non-xDS gRPC users, and the least request LB one says it can be used without xDS "in the future." In the future; we'll come back to that. So we've got all these things. What about those two at the top? Why were those more xDS specific? Credentials are a little interesting in that they don't have any configuration plumbing in gRPC themselves. Normally you construct them with an API and then you pass them into gRPC. And we generally don't want a generic configuration mechanism for credentials, because you shouldn't be getting that from elsewhere. You don't want to look up how you should connect via TLS from DNS; what would be the point? DNS is not secure; you might as well just be using plain text. So for these sorts of things, instead of tying into exactly the same code flows that we use for xDS, there will be separate APIs, not necessarily a separate implementation, but definitely separate APIs. Sometimes it's a separate implementation, sometimes not. In this case, it'd be the advanced TLS APIs that you'd use. Service authorization:
similarly, interceptors also don't have a generic configuration flow, so these sorts of interceptors are also a separate API that you just use directly. They can share the same implementation underneath, but we expose them in a separate way. This one's a little bit special, though, because the configuration that xDS provides is actually really, really expansive, and we do not want to support all of that for the very long term. So for this API, when we exposed it to gRPC users not using xDS, it became a much simpler API. It still uses the same engine behind the scenes, but it doesn't have absolutely everything. Some parts of it you just don't need: they're for REST things that just aren't very interesting to gRPC, or for a reverse proxy, where who knows what that traffic is; those have a lot of various features. Other things you might look at and say, "oh, that might be a little more interesting, I wish that was added." That could still happen. But taking everything overall, looking at all the features we've been talking about, if you're thinking, "I really wish this feature was available so I could just use it outside of xDS; why isn't it?", the biggest reason, most likely, is that no one's asked for it. We definitely care a lot about what people are asking for. So if you're interested in a feature, whether in xDS or outside of xDS, and you see something only exists on the other side, it's probably just that no one's asked; if people are asking for it, a lot of these are pretty easy to expose in the other realm. If it's not that, then there are two outlier cases. One is that the feature integrates too tightly into xDS in some way: there is quite a lot of configuration, and sometimes features assume all the other parts of the system. xDS has its own architecture, and for some things we just don't have an analogous concept in gRPC.
So clusters, for example: we have those, they work in the xDS space, but there's a bunch of xDS-specific plumbing to make that happen, and it's just not a concept that makes sense when you're not in that space; there would be some different concept you'd probably have to create for that. And, as I said, some things are just really expansive in xDS. It's a very large API, and xDS does have the potential to change some over time. As more people use it, that's obviously got to slow down, and it's not like things are getting deleted every month, but xDS does have the potential to change over time, versus normal gRPC stability, where we take a very long-term view. That also means we're going to have to be more conservative at times, and that might skew our decision-making on exactly what gets exposed. With that in mind, we can go ahead and do some Q&A. I've got the links here again, and remember that you can download the slides. Questions? Yes. Sorry, the mic's coming. Yeah, we'll wait for the mic, because there's lots of people watching online, right? Hundreds. "Thanks for the talk. I don't know much about the plugin system in gRPC, but could you talk about the extensibility? I come from Envoy, and we have filters, we have Wasm, and you can specify in xDS to plug in your own implementations. Does gRPC have any mechanism like that, or are all these xDS features just what you guys implement?" So, to the question: gRPC does have interceptors. Those are very, very similar to the filters that exist in Envoy, and those are very heavily used; that's the number one interception point, or injection point, in gRPC. There are some other things, like load balancing policies, which are eventually maybe going to be pluggable in Envoy as well, but they're definitely pluggable in gRPC, mainly in Go and Java. And name resolvers are another extension point, if you want to look up addresses a different way.
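The interceptor injection point mentioned in that answer can be sketched without the real gRPC API. This is a simplified, stdlib-only model with invented names: each interceptor wraps the call the way an Envoy filter wraps a request, and a chain composes them around the terminal handler.

```python
def logging_interceptor(method, request, continuation):
    # Runs around the RPC, like a filter runs around a request.
    print(f"-> {method}")
    response = continuation(method, request)
    print(f"<- {method}")
    return response

def auth_interceptor(method, request, continuation):
    # A second interceptor, to show chaining: reject calls with no token.
    if not request.get("token"):
        raise PermissionError("missing token")
    return continuation(method, request)

def chain(interceptors, handler):
    # Compose interceptors around the terminal handler, outermost first.
    def wrap(interceptor, next_call):
        return lambda m, r: interceptor(m, r, next_call)
    call = handler
    for interceptor in reversed(interceptors):
        call = wrap(interceptor, call)
    return call

echo = lambda method, request: {"body": request["body"]}
rpc = chain([logging_interceptor, auth_interceptor], echo)
print(rpc("Echo", {"token": "abc", "body": "hi"}))  # {'body': 'hi'}
```

Real gRPC interceptor APIs differ per language (and split client/server, unary/streaming), but the wrap-the-continuation shape is the common idea, and it's why so many xDS HTTP-filter features map onto interceptors.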
But I think we're out of time. Yeah, we are basically out of time. Eric and I will be out in the hall along with all the other gRPC maintainers, so if you do have questions, meet us out in the hall. We'd love to meet you, talk to you, and help answer your questions. So thanks very much, everyone. Thank you for coming. Thank you.