Hello, and welcome to my session on taking full advantage of gRPC. I'm Jimmy Zelinskie. Let's get into who I am and why this all matters.

I'm the co-founder of a company called AuthZed, and AuthZed are the creators of SpiceDB. SpiceDB is an open source permissions database inspired by Google's Zanzibar paper. Effectively, that means we're a database where you store relationships between the objects in your applications — for example, "Amelia is a doctor assigned to this clinic," if you're a healthcare application. Folks can then query those relationships to determine access. Things like: can Amelia treat this patient? Who are all the patients Amelia can treat? Who are all the doctors assigned to this clinic? Those are the types of queries you can make against our database.

We're a slightly non-traditional database in that we don't support SQL — gRPC is our primary query interface. A couple of other databases follow this practice; Google Spanner is one. You might find this surprising, but it actually makes a lot of sense depending on your domain. In ours, access control, when you ask whether someone has permission to do something, you typically do that before they can actually do the thing. Applications are almost always checking the permission before they do any work at all. That puts authorization questions directly in the line of fire as a potential bottleneck for every request in your systems. So we're trying to squeeze out as much performance as possible, both server side and client side, and having an RPC layer like gRPC gives us that control on both sides pretty easily.
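To make that relationship model concrete, here's a tiny sketch in Go — this is not SpiceDB's engine or API, just the shape of the data such a system stores; the schema, object names, and the flat lookup are all made up for illustration:

```go
package main

import "fmt"

// relationship mirrors the kind of tuple a SpiceDB-style database stores:
// a resource, a relation, and a subject.
type relationship struct {
	resource string
	relation string
	subject  string
}

type store []relationship

// check answers "does subject have relation on resource?" with a direct
// lookup. The real system resolves permissions through a schema, walking
// computed relations (e.g. doctor-at-clinic implies can-treat patients
// at that clinic), rather than scanning a flat list like this.
func (s store) check(resource, relation, subject string) bool {
	for _, r := range s {
		if r.resource == resource && r.relation == relation && r.subject == subject {
			return true
		}
	}
	return false
}

func main() {
	db := store{
		{"clinic:downtown", "doctor", "user:amelia"},
		{"patient:pat1", "treating_doctor", "user:amelia"},
	}
	fmt.Println(db.check("patient:pat1", "treating_doctor", "user:amelia")) // true
	fmt.Println(db.check("patient:pat2", "treating_doctor", "user:amelia")) // false
}
```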
To make things slightly more concrete, we can talk about what the SLAs look like for something like SpiceDB: single-digit milliseconds at the 99th percentile is what we're targeting. Even more concretely, if a request has to establish a new connection or do a TLS handshake — which is required for all secure connections — that amount of time alone is enough to blow the SLA. So all connections have to be pooled, and we need a decent amount of sophisticated logic on both the server and the client to reach these SLAs.

Before this, I worked at a little company called CoreOS. CoreOS had a mission similar to AuthZed's: securing the internet. But unlike AuthZed, where we secure the internet by building authorization tooling, CoreOS's goal was to do it through automated updates. They wanted server software to get updates the way cell phone software gets over-the-air updates. Before we could do that, we had to build a whole bunch of things to reach the level of automation that would finally let us automate those updates at the very end. We did that by building lots of systems inspired by Google's internal systems. You might have seen this trend: we built Google-inspired systems at CoreOS, and I'm continuing to do that at my current company, AuthZed.

Fast forward a year into my time at CoreOS, and in 2015 Google open sourced gRPC. At the time, it was shockingly similar to Stubby, which my co-worker and now co-founder, Joey Schorr, worked on while he was at Google. The way the ecosystem thought about it — and Google's logic behind open sourcing the project — was that if folks were going to build systems inspired by Google's internal software, well, they may as well have an RPC system that's also inspired by the one internal to Google.
Fast forward to now, and there are plenty of cloud native projects using gRPC as their RPC layer, and we have a pretty healthy ecosystem around it. I promised that's the end of my history lesson, so let's get to the meat and potatoes of this talk, which — in true BuzzFeed article fashion (maybe one day I'll give this talk at BuzzFeed and come full circle) — is my top eight tips to get more value out of gRPC. Unlike the typical BuzzFeed article, which dares you to scroll further and further to get to the best content, I've actually ordered this from most impactful down to least impactful (but still extremely valuable), with a warning at the very end as a takeaway. So if you plan to fall asleep throughout this talk, all you have to do is stay awake for the next few tips and you'll have gotten the most value out of it.

The number one thing gRPC has that might set it apart is real world usage. You might be thinking: REST, JSON, all that stuff has plenty of real world usage. But the thing I want to stress is that there are really good projects out there that are both modern and mature, that use gRPC, and that are open source — you can go read their code. Two examples I'd point people to are Vitess and my own SpiceDB. These projects are different because gRPC crosses language ecosystems, so you can see best practices and extrapolate those workflows regardless of your domain or your project. That's unlike REST APIs, where, sure, if you're building a web app in Ruby it makes sense to look at what folks in the Rails ecosystem are doing.
But if you're writing, I don't know, database software, it may not be useful to see how REST APIs are implemented in web apps, for example. That's my straw man argument against real world usage in other ecosystems. The super cool thing about gRPC is that you get to see the idioms and patterns used in these mature projects, and you can straight-up copy them. Not only that: because open source is at the core of the gRPC ecosystem, you can actually go into the pull requests and commit messages for the software and read the justifications behind the decisions they've made. Why are they doing particular things? Why have they chosen this? You'll see that something is actually a workaround for some other behavior, or that they're addressing legacy clients, for example. Those can be nice warnings: if you don't have that legacy, maybe you don't need to do that particular thing, right? You get to see what these mature projects' workflows are and what tools they use — for example, deprecating RPCs or doing API versioning. These aren't things you'll find in the gRPC documentation; there's no one way to do them. But if you look at these different mature projects following best practices, you can arrive at what the solution should look like for your use case in a well-informed way that you might otherwise not be able to.

All right, that was big number one. Big number two is Buf. Buf is an extremely fast Protobuf compiler — an alternative to protoc, which, if you're following any of the tutorials or official documentation for gRPC, is the compiler you're using. But the value of Buf isn't so much the speed of the compiler; it's the workflow it provides.
Buf is the spiritual successor of a tool that was internally developed at Uber to manage all of their APIs. The big value Buf gives you is an improvement over cobbling together bash scripts to manage your gRPC workflows and Protobuf definitions. Most powerfully, it has static analysis and linting for your definitions. I think this is so important that I even wrote a blog post about it that's featured on Buf's website. If you look at the last line of text there, the subtitle, I call it "the first day of the rest of your life," because the second you create an API, you're stuck with it. Once people start calling it, you're going to have to maintain it — creating the code is just the first step, and code typically outlives you if you're working on a project that serves customers. You might not always have Protobuf experts available to help with design decisions, and honestly, it's hard to keep up with all the changes. The nice thing about Buf is that once someone learns a best practice and can codify it, they build it into Buf as a lint rule. Then everyone using Buf — whether it's built into your CI or just your local tooling — finds out the second they write the code that they're violating something or missing a best practice.

The really, really cool and most useful thing is that Buf detects breaking API changes. It can tell you whether what you've changed, versus what you had, alters the Protobuf wire format representation enough to break clients. That's incredibly powerful when you're trying to figure out how to move forward, or stay backwards compatible, across new iterations of the same API.
The big reason Buf is number two: if you're maintaining REST APIs, let's use the guiding light, the North Star of the industry — Stripe. Stripe has clients that haven't been touched in 15 years still calling the same APIs, perfectly compatibly. But to do that, they had to hire a whole team to manage their API and write a bunch of custom tools, typically doing integration tests against the APIs — testing the complete end-to-end experience after all the code already exists. A lot of that same logic, when you're using gRPC or any RPC framework with an IDL we can run static analysis on, can be caught the second you write the API definition. We don't have to generate a client, we don't have to test end-to-end in a real system to tell whether there's a problem, and you don't need to hire all the engineers to build and maintain that infrastructure — you just have a static analysis tool running in your editor or your CI that does it for you. It's a huge boost to productivity. If you're using gRPC but not Buf, I highly recommend you look into it.

Speaking of tooling, the next one is a library: googleapis, which is basically a collection of shared types from Google's Protobuf APIs. Google had a whole bunch of services exposing Protobuf to the internet, and they decided to refactor and pull out the common types across those APIs. It turns out the types common across Google's APIs are probably going to be common across your APIs as well.
You'll see general patterns there for error handling, managing times and durations, key-value pairs — well-defined data structures like that. The super nice thing is that, depending on the language you're working in, there may already be a library for these types. So instead of defining your own type for timestamps, for example, Google already has one, and their timestamp library converts between that format and the standard library's time type that's native to your language. You get really easy conversions to and from your native language types, so you can use all the libraries you've written and keep your code native to the language rather than coupled to Protobuf, if you adopt some of the googleapis types.

One warning here: it's tricky to know whether a project has overlooked googleapis or deemed it too much complexity and not worth adopting. The reason a lot of folks traditionally didn't adopt it is that, prior to Buf, there weren't really good workflows for importing libraries into your own Protobuf definitions and code generation. Now that Buf exists, it's really easy to add a dependency, but before that you would typically vendor it at a particular version — which means copying and pasting the code and maintaining it yourself from that point on. That's error prone and clunky, and not a lot of people understand the magical incantation of protoc compiler flags, so a lot of people have simply avoided third party dependencies when it comes to Protobuf and gRPC. That should no longer be the case. If you see useful types in there, I say go for it.

Next, in the same vein of avoiding writing as much code as possible: don't write it if someone else has.
There's a custom plugin — I'll get into custom plugins later, spoilers — called protoc-gen-validate, which basically writes a validation method so you don't have to. In your Protobuf definitions, you can annotate fields: say you have a field in a message, and you annotate it to say this field should never be more than 128 kilobytes, or this string field should only contain strings matching this regular expression. Once you've annotated that, the generated code gives you a Validate method, and calling Validate throws an error if any of the constraints you associated with those types in the Protobuf definition are not met.

This supports a variety of languages — Go, C++, Java, Python. I'm not sure the following exists in all of them, but in Go there's a really nice middleware that you slot into a server, and it returns early with an error if an incoming request isn't valid — that is, if the validation method throws an error. That means you don't even have to manually call Validate in your handlers to know that every single request coming in meets the constraints you've annotated in your Protobuf definitions. Incredibly powerful stuff, and you basically don't have to write the code — way less room for human error in what can be pretty sensitive logic. You don't want to accept corner cases or corrupted forms of RPC requests, right?
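Here's the shape of that middleware idea, sketched in plain Go so it runs standalone. The real Validate method is generated by protoc-gen-validate and the real interceptor comes from the go-grpc-middleware ecosystem; `fakeReq` and its 128-byte limit are made-up stand-ins for a generated message and its annotation:

```go
package main

import (
	"errors"
	"fmt"
)

// validator matches the method set protoc-gen-validate adds to messages.
type validator interface {
	Validate() error
}

// checkRequest mimics what the validation middleware does on the server:
// if the incoming request implements Validate(), reject it before the
// handler ever runs.
func checkRequest(req any) error {
	if v, ok := req.(validator); ok {
		if err := v.Validate(); err != nil {
			return fmt.Errorf("invalid request: %w", err)
		}
	}
	return nil
}

// fakeReq stands in for a generated message with an annotated field.
type fakeReq struct{ Name string }

func (r fakeReq) Validate() error {
	if len(r.Name) > 128 {
		return errors.New("name: must be at most 128 bytes")
	}
	return nil
}

func main() {
	fmt.Println(checkRequest(fakeReq{Name: "ok"}) == nil)                      // true
	fmt.Println(checkRequest(fakeReq{Name: string(make([]byte, 200))}) == nil) // false
}
```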
There's another project up here, gRPC-Gateway, originally written by Johan Brandhorst, and it works very much like protoc-gen-validate in that you annotate your protos — but this time with an HTTP path and an HTTP method — and it generates a reverse proxy that sits in front of your gRPC application. The proxy converts JSON HTTP requests into gRPC requests, talks to your service, takes the response, converts it back into JSON over HTTP, and returns it to the client. That means you can support legacy clients, or environments that cannot use gRPC — maybe they have memory restrictions because they're embedded systems, or anything like that. You can support all those environments without writing code to do it; you just generate that code.

What's super cool is that you can also generate documentation for the HTTP API it produces, and use that same generation tooling to generate clients. This all uses OpenAPI — if you're unfamiliar, Google that, or Google "Swagger," the thing that inspired OpenAPI. At the end of the day it lets you have API documentation and even generate HTTP clients. So you can write one gRPC service definition and generate documentation for gRPC, the gRPC service itself, documentation for HTTP, the HTTP service itself — services and clients for both. Incredibly, incredibly powerful stuff for supporting multiple protocols. It may even be a better way of writing and maintaining REST APIs at the end of the day, even if you choose to never use the gRPC APIs, or your customers or users don't use them much.
There's a really, really cool trick for Go programmers here. Because gRPC-Gateway is written in Go, instead of running the reverse proxy as a separate process, you can run it in the same process so it calls directly into your app, in memory. Even cooler, you can make them share the same port — if you're willing to sacrifice some performance — using a trick where you read the first couple of bytes of a connection, determine whether the request is gRPC or HTTP, and route accordingly inside your application. So you can expose one single port for your Go service and serve both HTTP and gRPC.

All right, I mentioned middleware a little bit, and I think one of the most useful and interesting things about gRPC is that it supports client middleware. When people think of middleware, they almost always think of the server side: adding new behavior like authentication or authorization around the handlers in a server. But gRPC actually has middleware on both sides, and that's less common but extremely powerful — so powerful that I'd argue it alleviates the need for an API gateway a lot of the time. Forget about all the REST stuff I was just talking about; let's get back to why we're using gRPC and take full advantage of it. With a single line of code we can add authentication, compression, modern observability (logging, metrics, and tracing), timeouts, rate limiting, recovery, exponential backoff — and all this is a single-line import into your client. Your client. And you might be wondering: why would I want that in my client?
Google internally believes in a philosophy of "dumb servers, smart clients," and the value of that is it lets you iterate a lot on your design on the client side. You're going to do more work, and it may be a little more complicated, but it keeps you from putting behavior into the server that becomes tech debt forever — you'd be maintaining it forever onwards. So if you're not 100% confident that a behavior belongs server side, first experiment on the client side and make a really, really smart client.

A great example of this is kubectl. For a super long time in the Kubernetes ecosystem, the Kubernetes API server was pretty basic, and when you ran kubectl apply, kubectl did all the logic to figure out what needed to be applied to the actual etcd state inside Kubernetes. Eventually the community concluded that this was core logic that should live in the server, and nowadays we finally have server-side apply in Kubernetes. That's the pattern: make the client really smart until you know something is core behavior, then move it into the server. Smart clients — highly recommended if you're developing a service and you don't yet know exactly what should be in the server.

Now, custom plugins. I've mentioned a couple of plugins so far and how we can generate all these different things. A plugin is the hook that generates code in a Protobuf compiler. For example, when you generate your Protobuf code in a particular language — say Go — that's the Go plugin, and there's a Go gRPC plugin which generates your service definitions in Go. When I talked about protoc-gen-validate generating your validation methods, that's an additional plugin.
When I talked about OpenAPI and the different HTTP artifacts you can generate, those are additional plugins that run off your Protobuf definitions. What's really cool is that we can write our own plugins — we're not beholden to the stock plugins just because they ship with gRPC and Protobuf. If you see a problem, like the other projects I just mentioned did, you can fix that problem, and you can even address problems you find in the foundational plugins, like the Go plugin or the Go gRPC plugin.

For example, the folks over at PlanetScale, while developing Vitess, built a project called vtprotobuf. What they noticed was that the Go code generated for gRPC — or for Protobuf generally — uses runtime type reflection when encoding and decoding to the Protobuf wire format, and that's really slow. They were trying to write a high performance server, and they realized: hey, we actually have all this information ahead of time. We have the definitions, and we're generating the code, so we know statically what the encoded size of a message will be and what all the types are — why aren't we using that information when we encode and decode? So they wrote their own custom plugin that generates code to do exactly that. When you use their MarshalVT and UnmarshalVT methods, you're not doing any reflection, and it's way more performant than the built-in encoding and decoding you get with gRPC. So even when you hit the boundaries of what you can do with the core technology, it gives you a door to sidestep it and solve your problem.

Custom plugins are incredibly powerful. There wasn't really a lot of documentation, or even a specification, around the input and output you consume to write your own, but the Buf folks have done great work making this better known and building out an ecosystem and packaging for this stuff. I predict that the open source ecosystem for custom plugins will grow massively — I know a bunch of companies, including some of the largest gRPC shops, have pretty healthy internal plugins that they share among themselves. What we really want is to build that ecosystem publicly, so that when everyone has an itch, they feel empowered to scratch it.

But that brings me to my final feature: the mystery box, which is actually more of a warning that I'm going to leave you with. While I did mention a lot of super cool things, all done in the community, it's still really hard in the gRPC ecosystem to figure out what the best practice is. You can look at really popular or really useful projects, read the value they promise, and tell yourself: hey, that's perfect, that's exactly what I wanted. But it's really hard to know how they're doing it, whether they're still maintained, or whether they're using current best practices. For example, in the Go ecosystem there's an amazing library called gogo/protobuf. It was incredibly useful for many years. Unfortunately, it's end-of-life, unmaintained, and built on an old version of Protobuf, so you shouldn't use it for new projects. But the functionality it provided for many years was second to none — it was incredibly useful for squeezing out more performance in Protobuf and making various trade-offs depending on your domain. At least nowadays there's a warning on it; they fully admit the project is unmaintained and tell you to look elsewhere. But that may not be obvious if you're not reading the readme, and
you're just looking at the documentation or at someone else's code. For example, etcd — a super well-known, very mature product, critical to all kinds of cloud native systems including Kubernetes itself — uses gogo/protobuf. What's a shame is that there are very mature, critical projects out there that aren't necessarily modern. When etcd adopted gRPC — I think it was API v2, either v2 or v3 — they adopted all the cutting-edge stuff. It looked great; it was modern then. But then they never touched it, which is a shame, because it means that if you take the etcd service definitions and generate code with modern Protobuf tooling, you probably can't actually talk to etcd with it — they haven't kept up and updated the server side, so there are going to be incompatibilities. Which means they're losing a lot of the benefit of the gRPC ecosystem: they can't just have folks take their definitions, generate, and run. What you actually end up with in practice is an etcd-specific client. It's no longer a gRPC client; it's an etcd client, because etcd speaks a particular flavor of gRPC that's old and bespoke. It's really unfortunate. And if you go out there naively thinking, "this is a critical project, they must be doing it right, I'm going to learn from them and copy what they're doing," you might end up adopting the wrong things — unless you do your diligence to make sure that what you're copying is right. So that's my word of warning.

If you have any other questions, you can find me on the social medias — Twitter, Mastodon, and GitHub. My company, AuthZed, has a Discord where we discuss lots of open source technology, considering SpiceDB itself is open source. So if you have questions about how we're using gRPC, or you're interested in tricks that we have, or you find anything on the issue tracker related to that, feel free to join and ask questions there. I'm also on the Kubernetes Slack if you want to ask me questions directly there, or in the gRPC channel itself. So thank you for your time.