The thing that makes gRPC unique among web frameworks is that it leverages HTTP/2. So if you just wanted to use HTTP/1, that's not going to be an option. It also uses something called protocol buffers, which I'll talk about in a little bit, which is a way to serialize your data. Think of something like JSON, except it's a more compact binary representation. Some of the features that come with gRPC: there are four RPC types. They are unary, client streaming, server streaming, and then bidirectional, or "bidi," streaming. Client streaming basically means the client is going to send a stream of messages to the server, and the server will respond with one response. Server streaming means the client will send one request and the server will respond with a stream of responses. Bidirectional streaming goes both ways. And unary is just a fancy word for single request, single response, what you would think of as a normal HTTP request. Another feature is metadata. This is a fancy way of talking about the HTTP/2 headers: gRPC strips out some of the HTTP/2 headers and uses what it calls metadata as a way to send information about the RPC back and forth. One of the things you can use that for is authentication, which is built in, so you can roll your own authentication. It also has built-in support for Google authentication, for obvious reasons. Deadlines and cancellations: either side of the RPC, the client or the server, is able to time out or cancel the request at any point. Compression is supported out of the box, so to make your requests a little more efficient, you can compress things before you send them over the wire. One of the really nice things that's built in that I haven't seen in other HTTP frameworks is load balancing.
It has a built-in load balancer in the client: you can give it a list of servers and it will actually balance between those when it's sending its requests. And then it can generate a lot of the boilerplate for you automatically. It can generate documentation, and it can generate a client so you don't have to use gRPC directly. So there are a lot of really nice built-in things. Coming back to protocol buffers: you'll write a file describing your service and the messages it's going to send back and forth, and save it in a .proto file. This is just an example. The very first line is syntax = "proto3". Protocol buffers are versioned, so this defines which version of the protocol buffers language you're using. Next, the green line with the two slashes is just a comment. Then we define a message type. We're going to create an echo server, where basically whatever the client sends, the server will send the same message back. So we define a message called EchoMessage, and it'll have one field, named value, of type string. All of this is very important when you actually go to serialize your data: it needs to know what you're serializing and what the fields are called so that the other side knows how to unpack it. Next, we define a service. This is going to be an echo service with the four RPC types I talked about before: unary, client streaming, server streaming, and bidirectional streaming. If you look, it says rpc EchoUnary. That's going to be the name of your function call, and then in parentheses is your input. So it's going to take an EchoMessage, and it's going to return an EchoMessage. And you can see for the streaming ones that there's a stream keyword that just gets prepended.
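Reconstructing the slide, the .proto file described here would look roughly like this. The exact identifiers (EchoMessage, EchoService, and the RPC names) are my best guess at the casing from the talk:

```proto
// Which version of the protocol buffers language to use.
syntax = "proto3";

// A message type with a single string field named "value".
message EchoMessage {
  string value = 1;
}

// The echo service, with one RPC per type.
service EchoService {
  // Unary: single request, single response.
  rpc EchoUnary (EchoMessage) returns (EchoMessage);
  // Client streaming: a stream of requests, one response.
  rpc EchoClientStream (stream EchoMessage) returns (EchoMessage);
  // Server streaming: one request, a stream of responses.
  rpc EchoServerStream (EchoMessage) returns (stream EchoMessage);
  // Bidirectional: both sides stream.
  rpc EchoBidiStream (stream EchoMessage) returns (stream EchoMessage);
}
```

Note how switching an RPC between unary and streaming is just a matter of adding or removing the stream keyword on either side.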
And at least for the protocol buffers, it really is that simple to switch between unary and streaming RPCs. So, gRPC in JavaScript. gRPC is supported across a large number of languages. I think Go is probably the biggest one, but there's also a C core that is shared across a number of languages, and you can use gRPC in Node, Go, Java, Python; the list goes on. As far as JavaScript is concerned, there are two primary environments being targeted: the browser and Node.js. The browser has some fundamental limitations when it comes to gRPC. The biggest one is being able to specify that you want to use HTTP/2. I don't think there's really a built-in API for that, and if you're using an old browser, it might not be supported at all. You also have to have more precise control over the HTTP/2 frames you're sending out onto the network, which browsers just don't give you any insight into. Node is more your typical back-end development, along the lines of Go and all the other languages gRPC targets. For the browser, last year, something called gRPC-Web was declared ready for production. The way gRPC-Web works is that you introduce an Envoy server into your stack somewhere, and Envoy serves as a proxy. Your browser communicates over gRPC-Web, which is a slightly different protocol over HTTP/1 using XHR requests; the browser talks to Envoy, and Envoy proxies your messages back to a normal gRPC server. But like I was saying earlier, our team had use cases outside of just the browser, so gRPC-Web wasn't really an option for us, and I don't think it was actually considered production ready at the time. So we looked at the grpc module. It builds on top of the C core, so it's a compiled native add-on.
It actually predates N-API, so it's built using NAN, which is no longer considered the best way to write native add-ons. And native add-ons come with a number of other issues that I'll talk about in a second. But this module, and I have the npm statistics at the bottom as of a couple days ago: I was actually a little surprised to see that grpc had more downloads than, I think, every other Node framework except for Express. I talked to one of the gRPC maintainers about that. It turns out it's bundled with all of the Google client libraries and things like that, so that gives it a bit of an advantage in downloads. One of the things they do to ease the pain of using a compiled add-on is provide prebuilds: precompiled versions of the module for different operating systems, different Node versions, and so on. Last I heard they were shipping over 100 of these, so a significant amount of work goes into that. But there's a big problem with compiled add-ons, and that is they generally don't work very well. Once you get them building, it's pretty easy to use them; they work like most other modules. But this is just a list of different issues that I took off of their issue tracker. The gRPC folks use a monorepo, so there are issues from grpc and from the JavaScript module that I'll talk about a little later mixed in here. From watching the issue tracker, their issues are primarily about getting the compiled add-on to build and run properly on all these different platforms, and the second biggest category is getting TypeScript to cooperate. So we decided that we weren't going to be able to use the compiled grpc add-on. When we were jumping back and forth between Node versions, we were running into all kinds of issues where it wasn't supported on Node 12 yet and things of that nature. Oh, boy. Works on my machine. All right. I guess we're good. So yeah, that's where I was.
So, gRPC in pure JavaScript. We wanted to avoid the issues we were having jumping between Node versions; we wanted to avoid a compiled add-on. There are actually some ecosystems in Node, and I'm primarily talking about the hapi ecosystem, where compiled add-ons are just not allowed because of all the issues they create. And there are other issues with compiled add-ons: you have to keep crossing the JavaScript and C++ boundary, and depending on how chatty your add-on is, that can introduce significant delays into your code. So we decided we were going to look at a pure JavaScript implementation, and they had something called grpc-js, which, by the time they created it, they had moved under the @grpc scope on npm as @grpc/grpc-js. If you look at the download count, it recently passed the number of downloads for grpc; I think Google has been slowly migrating from grpc to @grpc/grpc-js. It's currently a beta release, but Google's using it. I haven't really found issues with it. In my opinion it's more reliable than the grpc module, but it doesn't have all of the same features yet; they're still in the process of adding them. It's API compatible with the grpc module, so you can actually drop it in anywhere you were using grpc before. The exception is if you're using some of the features that aren't supported yet, but all of the basics I listed earlier in the talk are supported. It's built on top of Node's HTTP/2 module, which was itself in beta until Node 10.10, in September of 2018. And whenever you go to require or import @grpc/grpc-js, it'll actually do a version check, so you have to be on a Node version matching that semver string: greater than 8.13 or greater than 10.10. I recommend not using it with Node 8; I've seen issues there. I would only recommend using it with Node 10 and above. Another nice thing is that it has no runtime dependencies other than semver, which is used for that Node version check.
So you don't have to worry too much about people slipping viruses or things like that into your dependencies. An example of a unary client: on the very first line I'm just requiring @grpc/grpc-js, and then creating a client with new EchoService. If you remember back to the proto file I showed earlier, that was the name of the service we created, and it translates into the same name in your JavaScript code. You pass in a host and port that you want to connect to, and then something called credentials. In this case we're not doing secure communication with the server, so we're using credentials.createInsecure(), but there are also secure versions of these things. And then, since RPC stands for Remote Procedure Call, it just looks like a function call. We do client.echoUnary. If you recall from the proto file, that was the name of the RPC we created. You pass in your value, "hello unary". It does still use callbacks; they haven't moved over to async/await yet. I think they're looking for a proposal on the best way to do that, so right now you still have to use callbacks. And it's just a typical Node.js callback: checking for errors, logging, handling the response, and finally closing down the connection. Next, I wanted to show an example of client streaming. The beginning looks the same: we require @grpc/grpc-js and create a client. This time we call client.echoClientStream, and it returns a Node.js stream to us. If you're familiar with the built-in Node streams APIs, it's the exact same API, so it's pretty straightforward to get up and running. And if you look at the very bottom, we have stream.end(). In this case I'm only sending one message to the server, but I could call stream.write() as many times as I want.
And because it's a client stream, the response from the server is just one response, so we get the callback the same as before. A server-streaming client starts off the same way. We call client.echoServerStream, and it takes only one input. Just like the unary case, we pass in our one value, except it returns a readable stream. We can attach the error handlers, data handlers, things like that, and then consume however many response messages the server sends us. Again, it's just Node's built-in streaming, and it's pretty easy to get started with. And as you can imagine, bidirectional streaming is the best of both worlds from the client and server streaming perspective. You call client.echoBidiStream, and it returns a bidirectional stream that you can read from and write to however you want. The next thing I want to talk about is something called @grpc/proto-loader. It's another module, used to actually load .proto files into your application. The original grpc module, the compiled add-on with the C core, supports loading these by default in the module itself. When they moved over to @grpc/grpc-js, they wanted to separate out that functionality. I think the reason was to create a nice interface for proto files that could be versioned independently without messing with the rest of the module. Under the hood it uses a module called protobuf.js. To use it, you npm install @grpc/proto-loader; the second line here shows how you require it into your application. There are asynchronous and synchronous loading capabilities; to keep things simple on the slide I went with the synchronous version, loader.loadSync, assuming our file is called example.proto.
Then there are some options to configure how you want to load things in. keepCase: false means that whatever the case is inside the proto file itself, you don't necessarily have to respect it. It'll do nice things for you like converting from snake_case to camelCase, because that's typically what JavaScript developers use. And for things like longs and enums, which don't have a corresponding type in JavaScript, you can say how you want them parsed. In this case, longs and enums will both be parsed as strings. That matters because if you have a really big number that won't fit into a typical JavaScript number, you might want to encode it as a string and work with it from there. This gives you your package definition, and then all you do is call loadPackageDefinition inside @grpc/grpc-js, and it gives you back a package that you can start using to make your RPCs. So @grpc/grpc-js was great when we started using it. But we also needed a mock server, because remember, the original use case was talking to Go services that weren't really working for us, and at the time @grpc/grpc-js didn't have a server component. It was client only; I guess Google prioritized the client over the server for their own needs. So I wrote this. It is not an official, gRPC-supported module, but it seems to work just fine. It's a server written in pure JavaScript, no TypeScript here. It's also API compatible with the grpc module, so again, you can drop it in alongside the grpc module or the @grpc/grpc-js module, which now has a server component. The only production dependency is @grpc/grpc-js, which is used for some shared data structures: constants like status codes (gRPC uses its own status codes instead of the typical HTTP status codes) and the metadata type that's used for transferring headers around and working with them.
And while I was creating it, I was actually able to find a few bugs and some opportunities to improve performance in the upstream module. An example of what a server looks like: you require grpc-server-js, pull out the Server class, instantiate your server, and call server.addService. You pass in the same thing you got from your proto file earlier, and then you define the implementations for how you want to handle the different RPCs. So if you look here, we have echoUnary, which just takes whatever was passed in the call.request field and sends it back through a callback. And for the different streaming RPCs, I have streaming implementations of the same thing. When I went to start testing this thing, I ported a lot of the tests over from the gRPC repo, because there's no need to reinvent the wheel. But they don't really focus a lot on code coverage, and I was coming from a background working with hapi, where the hapi maintainer beat it into our heads that everything had to have 100% code coverage. I didn't get all the way there; I got to 95% code coverage. That also uncovered some more bugs inside the gRPC client implementation. Just minor bugs, like compression not working at all, and credentials not working. Little things. And like I said earlier, I was able to go back and make some improvements to the upstream module. They were very focused on the client, and they didn't have a server implementation, but they were doing things like using the delete operator all over the place. If you do a lot of JavaScript performance work, you know that V8 doesn't really like the delete operator: it changes the shape of the object under the hood, and that causes your code to slow down. I was also able to find places where they were doing extra looping that they didn't need to be doing.
They also had a dependency on Lodash at the time, for things that are built into the language. So I was able to replace these long functional chains, .map().forEach() and so on, with plain for loops, and we dropped Lodash completely, which is good for performance but also from a security point of view. I mean, Lodash is a popular module, but if you don't need a dependency, it's best not to have it. This led to roughly 15 to 20% improvements in the performance of the server I had been working on. I then presented this work at gRPC Conf last year, talked to one of the maintainers of the project, and we agreed that I could upstream the server to them. So I did a lot of wrestling with converting from JavaScript to TypeScript. It made me want to cry a lot. But it finally got in as of June of this year: the exact same code is now running as a TypeScript version inside @grpc/grpc-js. We did some work around benchmarking just to see what performance would be like, across @grpc/grpc-js with the server I created, the compiled grpc add-on, and also Go and Rust. Unsurprisingly, Go and Rust were faster. The performance difference between the pure JavaScript implementation and the compiled add-on was actually right about where I thought it would be, so in general, even though it was the slowest implementation, I was happy with how it turned out. Along the way, we did run into a number of pain points. One thing we didn't personally encounter, but I have read a lot of reports about, is gRPC incompatibilities with other tools in the ecosystem. You might have a load balancer that doesn't know how to load balance gRPC traffic: you can't just use an L4 load balancer, you need an L7 load balancer that understands gRPC. They do exist out there; it's just something you need to be aware of.
Also benchmarking: because gRPC is its own special snowflake, you can't just take a normal HTTP load generator and throw the same traffic at, say, a hapi or Express server and also at gRPC. So getting an apples-to-apples comparison can be a little rough. The Node.js gRPC community is not very large from what I can tell. There are, like I said before, a lot of downloads, but those downloads are primarily coming from Google itself. The other thing, and this was probably my biggest complaint, is that if you enjoy working on open source, if it's something you want to do in your spare time, this isn't really the project I would recommend contributing to. Even though it is a CNCF project, it's run more like a Google project that just happens to have publicly available source code. Outside contributors are tolerated, but not embraced, in my opinion. Another thing I've actually started working on myself is being able to use gRPC-Web without having to introduce Envoy into the situation. You might not want to use Envoy, or you might just want to do some local testing without setting up an Envoy container on your machine. So I've started working on a Node in-process proxy that can speak gRPC-Web and proxy it out to a normal gRPC server. Some other future work I would like to see: more feature parity between @grpc/grpc-js and the grpc module. They have something called interceptors, which is their version of middleware. There's currently a pull request open to add client-side interceptors to @grpc/grpc-js, and it would be nice to get server-side interceptors as well. The compiled grpc add-on also supports a ton of different server options; it would be nice to support more of those in the JavaScript implementation. And then there's always going to be continued performance and stability work. You can always make things better, and you can always fix more bugs.
Integration with Node.js workers, I think, would be something interesting to play around with. I don't know what kind of performance gains it might lead to, but it's worth investigating. And then just general tooling and Node ecosystem integration: it would be nice if there were a benchmarking tool that could talk HTTP and gRPC at the same time, and a nice way to put gRPC into a hapi server or an Express server. Those types of tools seem to be missing but would be nice to have. And that is all I have. Thank you for coming to my talk.