Who all here came to see Vlad? Anybody know who Vlad is? Vlad is actually on my team, but he's up in the top left picture here in Florida. So he was hit in Tampa, but luckily family is safe. House is OK. But he couldn't make it here to the conference. So I am here for you instead of Vlad. So Vlad's actually an author. He's written this fantastic book on Go. He's actively on Twitter. And he's a part of the code team. Code is an open source initiative at Dell focused on contributing and building ecosystems and community all wrapped around emerging technologies. So my team is really a group of open source engineers and developer advocates that focuses on all kinds of cool technology. I'm a Go advocate. I'm a Go developer. I've written plenty of stuff. I'll tell you that Vlad is the deep dive guy, but I'll do my best for you guys. So if you have questions, I may defer. But let's see how this goes. So we've got three objectives today for the presentation. First of all, let's talk about why gRPC. I've actually got a ton of experience in developing REST and HTTP-based interfaces. So we'll talk a little bit about where we've been and why we're here at gRPC. We'll go through an example of a gRPC service. And then we'll look at some of the extra features that you get natively by way of gRPC. So why gRPC? Like, what are we really trying to do? We're trying to connect two components together. And we want to do this in as simple a way as possible. If you look at the orange depiction from component one to component two, we just want that to be a straight line. We want it to be invisible. That should be as easy as it gets. Because the wiring in between two components isn't where the value is. The value is just having those two things talk to each other. So there's this crooked path that we can walk to get two components to talk to each other.
And that involves having discussions about REST, like building a full REST model around having two things work together, very difficult. It involves existing RPC methodologies that may be native to languages. Maybe it's HTTP/JSON to make a simple interface. There's lots of discussions that happen around just how you connect two things together. So I'm pretty excited about what we're going to talk about today, which is simplifying this and standardizing it as you look at building out this communication between components. So what's the situation today? It's really interesting because technology is changing. The demand for interoperability is increasing day by day. The user experience, when I look at something like this, like my phone. Most of us use our phone almost all day long. And that user experience and responsiveness is really important to us. But we travel through all kinds of different coverage zones through cell sites and stuff like that. And getting the bandwidth, getting the responsiveness, it's just critical to user experience. So we want to be as efficient as we can be when we talk about interoperating. And it may be a service to a service. It may be a phone to a service. We want all that communication to be super efficient. So mobile technology is driving some changes here. The other thing that's happening is software architectures are changing as well. So when we look at a monolith application, it was pretty simple to have different pieces of the application communicate because you're using in-memory processes that can just naturally call functions and stuff like that. But we look at taking these big monoliths and splitting them out to some type of logical microservices. Then you have to think about how these things are going to communicate with each other. Are you going to develop your own JSON API? Or are you going to use something simple to connect the two components together at that point?
So the mobile technology, the changing software architectures in terms of using microservices, these are both driving the need for something like gRPC. So what's the scenario? We've got a currency service, for example. The currency service has a lookup capability that we're trying to expose in some way. If you have a mobile client, maybe it's Android that wants to use this service. So I'm talking about developing a web-based app, for example. Maybe it's an iPhone where I'd need to develop in C-sharp. Maybe it's a back-end process with Java, Python, Go, Rust, something like that. Maybe it's a desktop tool, and it's developed in C-sharp or Java. And maybe it's a website back-end using Node.js. All different languages, but you want to expose that functionality to this lookup service consistently. So the problem is that there's multiple systems. If you're taking this mobile device area, it's all bandwidth constrained in terms of working from a cellular site and getting the bandwidth all the way to the data center. It's something that should be really efficient. You're talking about real-time data access. So maybe you're streaming in terms of sending chunks of data rather than large globs of data. Different languages and platforms, and the need to even version. If you're developing a centralized version of a service, if you want to upgrade that service, then you're going to have to figure out a great versioning ability so that your consuming languages are able to interpret things correctly. And backward compatibility. So these are all a bunch of things to consider as you're building out the service and making it platform-agnostic at the end of the day. So in comes kind of our first thing we'll talk about, which is RPC over HTTP/JSON. This is a pretty good solution. It's simple, it's flexible, it's kind of universally accepted. The point here is when you have an RPC JSON interface, you're doing a lot of the work.
You basically said I've got two components and I'm going to start with languages and I have to build, like, soup-to-nuts everything in that language to make these two things interoperate. And then if I want to build in safeguards, like type checking and stuff like that to make it more consistent and predictable, that's all on me to develop in every single language. So RPC over HTTP/JSON is kind of a simple interface. You'll see it pretty consistently across a lot of different technologies, but it's got its downsides in terms of being predictable and it's got its downsides in terms of adoption in different languages and platforms. I covered the data typing. I think a big thing to cover here as well is the efficiency. I'll give an example in a little bit, but the idea is that I'm going to take a language and I'm going to have it communicate over text, and then I'm going to have serialization and deserialization of that text to try to fit it into object models after that. It's very inefficient. Like it takes a lot of CPU cycles to actually do that translation and send stuff over text. So it's really inefficient when you start looking at the way it's transferring the data. Another important thing here is, we're using HTTP as a transport and it's not a perfect translation between a language and the HTTP protocol. So if I want to use errors, for example, you might think, hey, I can take my errors from my language and kind of encode them in the error response, but that's not really true because errors in HTTP shouldn't include that type of a body. So you can't really use the HTTP protocol to encode language-specific information. It just doesn't work out. It doesn't translate in the right way. So there's lots of work to do, essentially, if you want to take advantage of something like HTTP and throw your own JSON messaging layer on top and accomplish RPC. So the next thing that people look at is, well, how about REST?
So if I say HTTP/JSON and I say REST, they're two completely different things. Do you guys, anybody disagree with that? Does that make sense or not? Let's say, okay. They are completely different. HTTP/JSON is just simply a way of encoding information, and RPC is basically saying, I'm gonna send these remote procedure calls over this transport, essentially. REST actually builds on top of those fundamentals. REST actually leverages something like HTTP/JSON, but it's this idea that I can have a common way for developers to communicate. The idea is I want to be able to have a language point at an endpoint and if it's RESTful, then that language should be able to figure out how to do everything it needs to do with that interface. And so what that means is to be a truly RESTful interface, it needs to be self-discoverable. So it's kind of an interesting point. So what is self-discoverable in human terms? In human terms, it's basically the Google interface. Google is our UI to a search engine. And I go to Google and I enter in whatever I wanna search and I just know that that's how I get done what I wanna get done. I've learned about it. So that's the UI and that is essentially self-discoverable. I've already figured it out. I know how to use it. In the case of computers, how do I get a computer, or an application, to point at some endpoint and know what to do already? And that's the idea of REST, is there's a pattern that can be employed, which is the maze pattern. And the maze pattern basically says, take an application, point it at this endpoint and when it gets there, it's gonna know how to discover exactly what it can do. It's gonna say, okay, here's an interface, it's gonna list a menu, you can go up, you can go down, you can go left, you can go right, you can go enter, and that's gonna define the ability to traverse a hierarchy. So when you look at a REST object model, it's gonna include all kinds of different object types.
It's gonna include all kinds of different actions you can do on these objects and that whole model is gonna be self-discoverable. So that's the big difference between, say, something that's HTTP/JSON, like an RPC interface, where you define all that ahead of time in a language, and a REST interface, which is self-discoverable, meaning you point a language at a REST endpoint and all of a sudden the language knows exactly what it can do. The latter, this whole REST interface, is essentially the nirvana, the holy grail of what's really trying to be accomplished, where it gets simple to leverage any type of endpoint. It's very difficult, and people write books on this stuff. There's conferences that are dedicated to building out truly RESTful services. If you're interested in actually pursuing a REST interface, there's good reasons to still do so, and I highly suggest kind of researching and seeing what that's all about. If you've heard of something like Swagger or Apiary, these are all kind of attempts at building frameworks around helping you build a REST interface. And then what that should end up doing is giving you these API bindings in your languages that you can actually make use of in your applications to solve the same problem that your RPC is trying to solve. So you've got HTTP/JSON, you've got RPC over that, you've got these self-discoverable REST interfaces, which are made easier by way of something like Swagger, and then you've got something like gRPC that we're gonna talk about. So if you look at this thread online, there's a guy, Simon Brown, that I met this last weekend at Software Circus in Europe. He consults with customers about software architecture. And what he said down here is, I've heard advice such as break up your monolith into microservices with REST APIs between them, a number of times this week. And then at the top, he says, I've met a number of organizations who have done just this, and they're now struggling for obvious reasons.
It's basically a bad idea, right? If you're gonna take an application and you're gonna split it up into microservices, and then you're gonna have to take the time to define a REST API between every single service, that's a lot of time and effort, and you're probably not gonna get the value out of that that you need to. So there's gotta be a better way of getting these things to communicate. So here's kind of the summary slide of what I've been talking about in terms of setting up gRPC. On the left side, you've got some of these attempts at building, or making it easier to build, these interfaces. Who here has used SOAP before? Yeah, what do you think about it? For the record, there's a lot of thumbs down. I would say inefficient. Like if you have a large service, the WSDL file is essentially this interface definition, and by the time this WSDL file gets consumed by a language, I mean, the memory footprint is just insane sometimes. So I'd say largely inefficient. I talked about the serialization. XML, although very advanced in terms of the data structures that you can encode inside of XML, is very heavy on the CPU. So very inefficient from a transport perspective. So SOAP was one that's been around for a long time. Another interesting point about it is that it's idiomatic in terms of client server. So for SOAP, there's frameworks that have been written for different languages, whether that's Go, or Python, or anything else. And they should help you actually create these language bindings that are SOAP enabled. So idiomatic-wise, SOAP does have stuff that's out there. But the problem is that it's not part of the SOAP project, it's maintained independently by a bunch of people who care about those languages. So there are frameworks that help you create idiomatic connectivity. Is SOAP curl-friendly? Can you just pop open a bash session and just go open a SOAP interface? Not really, right? So it's a little bit more difficult to troubleshoot.
Yeah, so efficient? I'd say no. Predictable? Somewhat, because you've got your bindings in your languages on a client server basis, and that does provide some predictability. You didn't build all that yourself. There's some data-type validation to it. So the next one down here is RPC over JSON, which is basically where you said, I'm gonna create my own interface, but with a minimal amount of work. At that point, you don't have client server stubs or client server bindings to take advantage of. You gotta write it all from scratch. So lots of work just to make every language compatible if that's what you need to do. It is curl-friendly, right? So if you wanna troubleshoot it, and you wanna do testing on it by way of pulling up curl, then that's available. So that's kind of cool. The frameworks are all custom to help take advantage of it. The serialization is whatever you want it to be, JSON, something like that. It's probably not efficient and definitely not predictable. Custom REST, the difference with custom REST is that you probably took the time to build your own framework. So rather than leveraging Swagger or something like that, you said, hey, Swagger doesn't do exactly what I need, it doesn't fit my use case. I'm gonna actually build Swagger from scratch. Like we've done that and it's not too fun, right? But it does get you to be a little bit more predictable. So that's what custom REST is all about. It's not efficient and it's probably not predictable. Under REST, that's where we cover Swagger. You do get your idiomatic client server stubs. It is curl-friendly at times, but it's not efficient, not predictable. The very bottom one is where we get to gRPC. So why is gRPC so cool? Well, the stubs, the bindings that you use for gRPC are all generated by way of the gRPC project. So as the gRPC standard for protobuf, et cetera, is maintained and moved forward, your bindings at the language level are actually all put out as well.
So they're all in sync. gRPC is not curl-friendly. That's kind of an important point. But they have their own tools for troubleshooting. But is that really important? Like if you're developing your application and I said, hey, the trade-off is that you're either curl-friendly or you're super efficient, what's more important? It's probably that you're super efficient. Because the bandwidth, data center bandwidth, CPU usage, et cetera, I think that that probably trumps the curl-friendly side of what you get by using other types of methods. The frameworks are included for developing your stubs, and the serialization is binary. So super efficient, and it's predictable because everything is generated for you. So let's go in and let's take, I guess, one more point here. So the Linux philosophy applied. So in Linux, the idea with all the different tools that are encompassed in the distributions is that you wanna write focused tools and you want them to do their job, do it well, but you also want them to interoperate with other things. So in Linux, if you have different tools working together, how do they communicate? The Linux pipe, right? So what we're really looking to do is create, essentially, a Linux pipe for wiring distributed components, and that is gRPC. So gRPC is a universal open source RPC framework designed to create efficient and fast polyglot services. So polyglot meaning many languages, any language. With the usage ranging from data center scale, so running backend services in the data center, to bandwidth constrained devices like my iPhone traveling between all kinds of different parts of the world. So that is the focus behind gRPC. Be efficient and be fast. So gRPC is based on protocol buffers, which are really how we can serialize structured data. So if I have an object in a certain language, the protobufs help me take that object, serialize it into a protobuf object essentially, and then on the other end it comes out as a language-specific object.
So the protobufs help me with that translation and they help me define kind of a middle ground. So it's based on what we call an IDL, which is the interface definition language, which is where we can define how two things communicate with each other. This IDL essentially is what creates the client stubs and the server stubs, and that's where you really focus all your time when you're defining your interfaces. So it's got a simple IDL, and it uses HTTP/2 out of the gate. So HTTP/2 is super efficient in terms of reusing HTTP connections. So it's great for mobile devices and just being efficient with data center resources. It's got bi-directional support and streaming. So if you wanna have a client or a server send data to a phone, if I've got a gigabyte in traffic, you can actually stream that in chunks so that the client is able to have a great user experience with that and chunk through the data as it comes in, versus one whole segment of data. And there's other things. So it's not only like the user experience, but that has to do with the memory. If I have to take in a gigabyte as one chunk, I need a gigabyte of RAM. But if I can actually do that in bits and pieces and I can stream it to the device, then the requirements to make sense of that data are much, much smaller. The other thing here is that it's got extensible middleware. So if you're developing your interface, there may be stuff that gRPC doesn't have already. Maybe you wanna add some throttling. Maybe you wanna add some authentication, all kinds of different stuff like that that you can essentially inject in the gRPC work stream, and it can intercept requests and do extra things on the request for you. So plenty of extensibility capabilities is what it comes down to. So what languages are there today? All of these languages. So a lot of the common languages that you would use for not only your front end development for mobile devices but also your back end development in the data center.
How about some performance examples? We're looking here at a dashboard that shows the comparison of languages. So if you just take an IDL, a protobuf IDL, you define it and you generate your client server stubs and you take a look at what the round trip time is for the language itself. On the bottom, you have your baseline, which is your netperf. So what is the lowest expected performance? And then the delta between the netperf and any of the other colors is that language's added overhead for actually having the two processes communicate. So in this case, you've got C++ which is 25 extra nanoseconds, and then you've got Go, C sharp, Java. So super efficient from that perspective. Here's some more practical results. Like when I was talking about XML, JSON as compared to gRPC, XML and JSON are text-based. gRPC is binary-based. So the translation there is much, much less in terms of getting stuff from one language to the other. So here's an example at the very top. You've got the JSON-RPC interface. The total time for the transaction was eight minutes and seven seconds. And then you had gRPC, where the total time was 36 seconds. So that's a pretty significant difference, right? If you actually take gRPC and you scale it out to many, many hosts or many requests, then the difference is even more substantial for accomplishing the aggregate task, right? Because with gRPC, you can stream or you can send it out in different ways. So you get down to seven seconds at that point. If you compare the amount of memory consumed, the nanoseconds per operation was down by 100%. The allocations per operation was down by 23%. And then the memory actually consumed through the operations was down by about 40%. So huge difference. Like going with this human readable text format to make two components communicate together is probably a bad idea from an efficiency perspective.
Using something with a binary format, which gRPC helps you do, is a good idea to keep things very, very efficient. Cool. How does this apply to something more practical? So don't take this as gospel, but just to give you an example: Kubernetes. Like when you look at a cluster and you look at the CPU usage, this is what's been said. Out of the Kubernetes cluster, because it doesn't use gRPC internally for everything right now. Some of it's based on Swagger and other API stuff. But about 47% of the CPU usage is what's been said. I've heard this repeated a couple of times, but I'd love to get more firm details. But 47% is based on translating text to have different things communicate with each other in a Kubernetes cluster. So pretty cool example of why we probably don't want to do that. So, a gRPC service in Go. Let's get practical. So Vlad put together some great examples for you down here at this URL. And this session's obviously gonna be available after the conference through video and as a PowerPoint. But that is a GitHub repo that gives you some examples of how to use some of the gRPC features that we're gonna walk through. But what we're gonna do is something pretty simple. We're gonna define this service contract, which is this IDL, this interface definition. Then we're gonna compile the IDL into service interfaces and essentially source code. And then we're gonna actually implement the methods that we defined to create a real working service. So first of all, let's look at the IDL. So here we have a protobuf file. At the very top, you can see that we're defining the version of protobuf that we'll be leveraging. Next up, you can see the different messages. And we've got three that we're defining here. One is an object, which is the currency with a few fields in it: the code, the name, the number. We're using strings and int32s there. And then we've got a currency list, which is essentially an array of the currency items.
And then we've got the currency request, where we're receiving the ability to have a code and a number. This service that we're creating is about being able to look up currency. So there's a database of currency: the US dollar, maybe index one, the Euro's index two, and we wanna be able to query that currency database. That's what we're kinda generating here. So that's the interface, there's the request. At the very bottom here, we've actually got the currency service. So this is where we declare what the remote methods are that the client's gonna be able to access. So in this case, we've got the getCurrencyList method. And it's unary. So we've got a request object, and then we've got a list being returned from that method. So pretty simple, right? That's your protobuf definition. That is this middle ground that allows you to define the contract between two components. So let's compile that. So we've got our protobuf file. We'll use protoc to actually compile that into generated code. So this is generated code that you don't change, right? This comes out of that IDL. So in this, you can see kind of the one-to-one mapping of the message going into the type currency. On the right side, it's obviously Go, right? But if you use protoc for a different language, it would generate the code in whatever language that you're gonna be using. So from Go's perspective, you've got the fields mapping pretty closely to the types. You've got the array, the currency list array, going into an array: the parameter of items with an array of currencies, or currency pointers. And then you've got the currency request mapping over, and then here you have an interface, which is unary, that includes the context object. So this is essentially working generated code, and the next step is just implementing what I see on the right. So let's implement it. So here's the simple implementation.
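The IDL walked through above might look something like this. This is a sketch consistent with the talk's description, not the repo's actual file; the package name and field numbers are assumed.

```proto
syntax = "proto3";

package currency;

// An individual currency entry.
message Currency {
  string code   = 1;
  string name   = 2;
  int32  number = 3;
}

// A list of currency entries.
message CurrencyList {
  repeated Currency items = 1;
}

// The lookup query: by code and/or number.
message CurrencyRequest {
  string code   = 1;
  int32  number = 2;
}

// The remote methods the client can call.
service CurrencyService {
  rpc GetCurrencyList (CurrencyRequest) returns (CurrencyList);
}
```

Running protoc over a file like this is what produces the generated types and the unary service interface described on the slide.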
We're gonna define a currency service struct, and that's gonna be an array of pointers, which are from the currency object that just came in on the last slide, right? With this new Go object that got generated for us. Our function is gonna be the getCurrencyList function, and what we're gonna do here, this is on the server side, is we're gonna range a CSV. So you can see under the for statement, right here, we're gonna range through C.data, which is a CSV file. If the request that comes in specifies the number and the code, I'm gonna append that item to the list, and then I'm gonna return this item down at the bottom as a currency list. So this is the server side implementing this getCurrencyList function. So here we have what this actually looks like as a runnable program. First of all, you can see the data come in from a CSV file at the very top line. Then you can see that we're starting our listener. Then we're gonna create our new currency service with this data object. We'll create our gRPC server at that point. We're gonna register that server, the gRPC server and the currency service together, in the protobuf, and then we're gonna serve down at the bottom. All right, so we've taken generated code. We've created a pretty simple function, which is this getCurrencyList. Then we packaged it up as a runnable program in Go so that we can actually serve up this gRPC endpoint. So when I call it from the client, this is what it looks like. We've got a runnable program again as a client. At the very top, you've got a dial statement that says what's my endpoint that's hosting this gRPC service. You're gonna create a client by passing the connection in, and then you're gonna pass the client to the printUSD function. So the client gets passed in and then here it starts. So as a client, we're executing that remote getCurrencyList function.
We're gonna come down as we get the results and we'll range through the results, all the items, and we'll spit out the information that comes back. So pretty simple. We had the server side implemented. This is the client side that's using the client stubs, and it's making use of the data, ranging through it and spitting out the results. Pretty cool, pretty basic. I should have used that for you guys, but that's all right. All right, so any questions there? Does that make sense? Generate the bindings, create the server, create the client, and all of a sudden the two things are talking to each other. No REST model generated, no working at an HTTP level. Lots of predictability because there's type safety built into this, so there's a lot less guesswork. You've got a really efficient way to make two things communicate together. So let's get a little bit more complicated with it to show some of the features of gRPC. So streaming, what the heck is this? I mentioned it earlier: if I have a device like this, and I'm gonna receive a dataset, and this dataset is big, I probably want to stream the dataset to the device because then the device can decide how much it's gonna chunk by. So if it's a gigabyte dataset, the device can pull off a megabyte, look at it, show it up on the device, and then it can move on and get the next megabyte, allowing the user to see the data in real time versus having to chunk through the gigabyte to show the results to the user. So streaming is pretty important from many perspectives, but it's great for the user experience, especially on the mobile side. So what does this look like? So if I update my IDL to include streaming, this is what it's gonna look like. At the very top, you have the existing definition of getCurrencyList, and that's a non-streaming example, unchanged here. As we step down, we've got three different streams, essentially. We've got three different stream types.
One is server streaming, and that's the example that I was referring to, where I've got a client device, and it's chunking essentially the results that come in, or the server is chunking it for me. Then I've got client streaming, so the opposite is true of what I defined for the server. So in some cases, you want the server to send in chunks. In other cases, you want the client to send in chunks. Maybe the client is on Periscope, right, and you're recording stuff, and you wanna send chunks at a time upward. You want the server to be ready to receive in the same way that it would be sending. So you can see that the stream is just reversed in that case to have a stream be submitted rather than received. There's also bi-directional streaming, so maybe you want the communication both ways to be streamed, who knows. So all three are possible. Here's an example of actually putting that into effect. So this is a server stream example. If I am on the server side implementing this getCurrencyStream method, then this is what I'm adding. So as the results come in, instead of appending to an array, I'm just gonna send. I'm gonna open up that stream that came into me, and I'm gonna send that result real time back to the client. So instead of the server just appending in memory and creating the dataset and then sending that as a whole, it's gonna send the results every time it gets one as it iterates through the array. On the client side, what does this look like? How's it different? On the client side, we're gonna activate the getCurrencyStream method, and then as we iterate through, we get to this for loop. You're gonna see that we have a stream receive, right? And then we're gonna be printing out results as there's items that come into that stream. So that for loop keeps on going. Essentially, it's a blocking call, and once an item comes into the stream, then it moves on and actually processes it. Cool, questions there?
Good, cool. Next one. How do I secure a gRPC interface? Another example of if you wanted to roll this yourself: getting TLS support across any language, across different frameworks, is difficult. Yep, question. So this example is not bidirectional. This would be the client, well, this is the server side, and I actually don't know the answer to that because I haven't implemented the bidirectional, so we can follow up afterwards and get you in touch with one of the gRPC people, yeah. Okay, so another feature, TLS. How do you guys usually do TLS as you create your own way of getting things to communicate? Not very easily, not very standardly. This is gonna be really easy. So TLS setup for gRPC is this. If you've got certificate files and key files, then you use the credentials.NewServerTLSFromFile function there, or method, and then when you create your gRPC server, you pass in the gRPC creds object, essentially, and that is how you create the server side where it's TLS based at that point. If I'm on the client, it's just as simple, essentially. You use the same exact method for creating this TLS certs object and then you inject it right here with the WithTransportCredentials method, which will return that credential object for the dial. So pretty easy. How about request timeouts? How do you usually deal with backups in requests and all that stuff? Well, this is easy here. So what you're gonna do is you're gonna use Go's context object. The context object is where you can pass around arbitrary data and it's thread safe, essentially, so you can send the context object to a method and then as it gets executed in a thread, it's able to use the information on that context and make use of it. So in this case, we're gonna augment the printUSD function and then we're going to add in the context at the very top.
So at the top line here, we've got context.WithTimeout: we specify the new context and say the timeout is essentially 200 milliseconds. Now I've got a context object carrying that 200 millisecond timeout, and to make use of it, I go down to the very next line and say get the currency list, passing the context. That context has the timeout, so if the 200 milliseconds is exceeded, gRPC is going to error and say the call didn't complete within the deadline. So timeouts are easy. Cool, what about error handling? I think error handling is probably one of the most inconsistent things when you look at how interfaces actually work together. In my opinion, if you want two things to work very predictably, then what happens during errors is one of the most important things. If I've got an error and the client side doesn't know how to recover, or doesn't know what steps to take, then things can get out of sync and all kinds of problems happen. So having well-defined errors is pretty important, and it's actually very easy to do with gRPC. An anti-pattern within gRPC is to send back an error which hasn't been decorated at all. Sending back the raw error is probably not the thing you want to do; you actually want to do some wrapping on it. So what does that look like? In the case of a server error, if I've got some type of invalid input or something like that, I want to return a new error that carries a code, an enum like codes.InvalidArgument. So it should have some type of code established for the kind of error that occurred, and that should actually be defined within my protobuf, essentially, so that it translates back and forth easily. And that error should indicate some extra information so that you can make sense of what's going on as the client tries to report back what the problem was.
On the client side, if the server is actually declaring the error, which it should, and decorating it a bit, then the client should be able to interpret it in different ways. So as this error comes back, I can switch on it and figure out which error it is, and the client can print a special message for that error. It's all easy and very possible with gRPC to do this in a very standard way. And you can get even more complex. If I decided I wanted to add more information to the error that gets generated, I'm able to do that, and then at the very bottom, I can essentially take the object involved and send back an error-based object through the interface. Then on the other side, as this richer error generated by the server comes in, I can make use of it on the client. In this case, what I'm doing is: if I started with an object which may have caused the error, I can actually cast that object, or assert it, as the pb currency object. And if it doesn't assert correctly, I can spit it out at the client and show them: hey, here's the object that came through, take a look for yourself and tell me if there's something in here that was invalid that may have caused the problem. So there are all kinds of capabilities you get by, one, making sure you're using the gRPC errors and defining them in your protobuf, but also by adding more detail and asserting the values, so you can properly report back to the client what was going on on the server side and why there was an error. Other features. So, other really cool stuff with gRPC. We talked about interceptors and taps a little bit. In the case of services, you may want to establish a tap that does some type of flow control.
So if you want to make sure you're not overwhelming a service, or to guard against DoS attacks, you can actually inject some types of middleware that let you establish flow control. If you had extra things you wanted to add for authorization, you could do that. There are all kinds of extra goodies you can inject inside the gRPC workflow to add extra value on top: tracing for more information about the transactions, support for pluggable authorization, and the ability to limit message sizes. So gRPC, I love it, we're big fans as a team. We just worked on one of our first implementations with it as part of the CSI project. Have you guys heard of CSI before? Or these interfaces in general, CNI, CRI, CSI, there are like three big ones right now. Anyway, as the cloud native ecosystem evolves and all these different components are looking to talk with one another, the interfaces and how they communicate are becoming much more important. One of the things we're excited about is what gRPC is bringing to the table, and it is being used across these different interfaces. As container orchestrators like Kubernetes expand out to talk to different components and do more than they do today, gRPC is what's being used. One of the projects we've been working on very closely is the CSI project, the Container Storage Interface project, and that's how a container orchestrator talks to a storage platform. The gRPC interface is used there, and so we had our first implementation with it recently, and I can say it actually went really well. A lot of the challenges the team had run into previously, like creating our own custom REST interface and working in the HTTP/JSON area, shaped some of what you see in this presentation today. And a lot of that stuff went away as we embraced gRPC and worked very closely in the CSI project.
So we're very excited about it, and I think it's a great thing for you guys to look at as you figure out how to get your components to really talk to each other in the data center. So with that, that's pretty much all I had for you. Any questions? Yep, I'm actually not sure of the answer to that, so talk to me after and we'll get you hooked up with someone who can give you all the details you want there. I would encourage you, though, to check out grpc.io; just bring up the website, and they've got some great documentation that goes over all the details of what gRPC does and some of the stuff they're working on. Any other questions? All right, with that, thank you guys very much and enjoy the rest of the conference.