Good afternoon, everyone. Thank you for joining today's Developers Corner. We're going to have a little chat about life and gRPC, and joining me today is our usual cast of suspects. Jeremy and Eric, do you want to introduce yourselves?

Hey, Jeremy Davis. I'm an architect at Red Hat. I've been here for about 13 years and have done a lot of app dev stuff.

And I'm Eric Deandrea. I've been at Red Hat for a while. I used to work with Rob on a different team, and now I've moved over as a dev advocate, and I'm loving life since moving off of Rob's team.

Oh, thanks a lot. Wow. And it's recorded too, so I should have my manager watch this. So, what is the genesis of this discussion, and how did we get here? It's that I personally work a lot with the app dev folks, and we get a lot of questions — especially when people are doing microservices approaches or hexagonal architecture — about what kind of API design they should do. Should it be RESTful? Should it be gRPC? Should it be GraphQL? Should we be using Kafka? All of the above? How does that all fit? So in our last episode we gave some rules of the road for doing REST, and today we're going to focus on gRPC: when to use it and why we would use it. We're going to examine it a little bit from the Java perspective natively, and then we're going to show you how much easier life can be if you're using Quarkus to do it with Java. We will explain things along the way, so if you're not a Java developer, we'll go into some depth and give some best practices near the end, but the main focus here is going to be the Java perspective. So with that — I don't know, Jeremy, do you want to kick off a little bit? Why don't we do a little bit of slides first, right?
And then we'll very quickly go to some code. We've introduced ourselves, so maybe we'll do a really quick overview of the difference between gRPC and some other patterns — other ways of making APIs — and then we'll build some stuff with gRPC. Sure. Can you guys see my screen, or not? The moderator's going to bring it in — there we go. It's a little fuzzy, a little blurry there, Jeremy. Is that any better? Oh, that's much better. Did you wipe the screen down?

Sure. So, last time we talked about REST: what RESTful is, and how it's resource-based. Quite honestly, most of the APIs we're seeing nowadays coming from an external caller into your code are probably going to be RESTful. But that's resource-based, and there are really no standards for it — we'll get to the comparison with REST later. One of the problems there is that even if you're using OpenAPI or Swagger to develop stuff, it's really not a code-first experience. gRPC is the polar opposite. For one, it's about high performance — and we'll get into what that means, so it's not just saying "high performance," we'll give you the definition of that — but it's really about the kinds of mechanisms people are probably familiar with. If you're decomposing some kind of monolithic architecture into components, you're probably doing direct calls today, and as you decompose those components — both from a performance perspective and from the perspective that the component still looks local to me — gRPC is a natural fit there. GraphQL, again, is a query language; we'll go into that comparison maybe in a future episode. But I don't know — Eric or Jeremy, have you got anything to add to that? I will tell you which one I hate the most, and that's the one at the bottom. Yeah — GraphQL. I don't see GraphQL
as really comparable — it's a really different animal, I think, and it makes a lot of sense for fronting services, for sure, but I don't really see it as much of a comparison between REST and gRPC. Yeah, typically what I tell people is: if you're starting to decompose stuff, use REST as your external-facing API and try to use gRPC internally. I think what a lot of people do is they think they're doing microservices, so they have to do REST, and they create all REST endpoints — and then they discover there's a performance hit on that.

Yeah — Shane in the comments is saying WSDL is triggering trauma. Anybody that's done WSDL had to do the XML to define the data, and then you get into all the schema stuff, and then how do you handle exceptions versus errors and all that other stuff? Or maybe you were doing CXF or something like that, doing some magic in Maven in order to compile the WSDL together and do some SOAP stuff. Oh, it's horrifying. I couldn't stand it. But what was SOAP trying to accomplish? Do either of you guys remember the Don Box in Barcelona thing? If you Google "Don Box SOAP Barcelona," you'll find one of the very first SOAP discussions: Don Box from Microsoft — well, I don't think he was at Microsoft at the time — got wheeled out on stage in a bathtub and gave one of the first talks about SOAP, from a bathtub, on stage. It's pretty classic; you should look it up. But what were they trying to achieve? An actual RPC kind of mechanism on the web, right? Because you were moving from development on Windows or something, and now you're doing something on the web, and they wanted something that was RPC-ish. But today, gRPC represents something way beyond what we thought of as RPC. So I don't know — Jeremy and Eric, did you guys use CORBA?
I started back in the EJB days with IIOP — RMI-IIOP, which was around the same time — with thick desktop clients talking to back-end systems. But I wanted to go back a second, though. You mentioned externally-facing REST and internal gRPC — I've seen that pattern a whole lot. I actually worked with — not on the project itself, but on the promotion of it — the DataStax folks, the Apache Cassandra guys. The whole Astra thing, their Cassandra-as-a-service: everything there that's internal is built on top of gRPC.

Yeah, and all the tools — the resume-enhancing activity you might be doing around Kubernetes or something like that — all that internal communication is done via gRPC, and it's so much easier to version, to control, and to work with the performance aspects of it. I don't know — what about you, Jeremy?

It seems to me most of the people I work with who use gRPC do it for performance, right? REST is pretty easy — it's very easy to get going with — but you're going to get significantly better performance out of gRPC, and it's much easier than you think. Or it's much easier than I thought, when I really dug into it.

Oh yeah. So, we want to talk about RPC really quick, for the people who are younger and didn't do the SOAP stuff or RMI. Yeah — so there was SOAP and RMI, and then there was CORBA, and for the Microsoft folks there was COM, DCOM, DCE RPC — and CORBA had its own version. What was happening there was you had something called the Interface Definition Language, or IDL, and you defined your data and your functions there, you implemented those as interfaces, and then you used this mechanism to do exactly what you see on the screen here, which is this client-server model.
So the component could be basically location-transparent to you, but for your code, during execution, it looked local, right? It looked like you were just executing something locally. gRPC provides the same kind of mechanism, but it adds a whole lot of other stuff on top of it that makes it actually usable.

So with that, let's look at some Java code. What would this look like if we were just doing a basic Maven project? In a basic Maven project, we're probably going to want gRPC installed — and we're going to show you that you don't need that when Jeremy gets to the Quarkus part. We're going to define our services in a .proto file — that's Protocol Buffers, and we'll go into what that means and what protobuf is. That's our IDL, like I just described: the interface definition for our data and our services. Then we're going to generate the code, and then create a client and server from it. Today, I think most of the stuff we're going to test with is probably Postman. Yeah, we'll test with Postman. We've got our code right here — a basic Java example. Let's look at this, okay, you guys.
Let's jump to the pom file first, because — yeah, right. So with a basic Maven example, we start with the properties. We have to know what target versions we're going after for gRPC and for protobuf, which is the IDL mechanism — we'll show that to you in a minute. I have a Mac, an M1, and if I do `brew install grpc`, it installs gRPC and the Protocol Buffers at the versions you're seeing there. If you scroll down, you'll see a whole bunch of other stuff we have to include in order to actually get it to work. Keep going down, and eventually we get to the part where we build it: it takes the contents of this Protocol Buffers definition file — the .proto file — and creates some generated sources from it. Keep going down — and the antrun part just takes the generated sources and makes them available to my code. So if we go up and look at the proto file —

Here's an interesting question while you're working. I see a bunch of versions and stuff there. Do clients and servers have to be on the same version of protobuf or gRPC?

The protobuf versions need to be compatible — that's a great question. I don't think the gRPC version matters as much, but the protobuf versions need to be compatible. I have used newer with older, and there's a bunch of stuff about breaking changes across versions — we'll talk about versioning in the best practices when we get to that part at the end. That's a great question; I've gotten asked that before, and I don't really have a great answer for it. I wasn't just trying to prompt you. That's okay.

So, with regard to the proto file, this is all it is.
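The kind of Maven wiring being scrolled through here looks roughly like the following sketch. Note the choices are illustrative: the project on screen pairs a locally installed (brew) protoc with an antrun step, while this sketch uses the protobuf-maven-plugin, which downloads protoc itself — either way, the `.proto` file gets compiled into generated sources.

```xml
<properties>
  <!-- Illustrative versions; align them with what you actually install -->
  <grpc.version>1.47.0</grpc.version>
  <protobuf.version>3.21.1</protobuf.version>
</properties>

<dependencies>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-netty-shaded</artifactId>
    <version>${grpc.version}</version>
  </dependency>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-protobuf</artifactId>
    <version>${grpc.version}</version>
  </dependency>
  <dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>${grpc.version}</version>
  </dependency>
</dependencies>

<build>
  <extensions>
    <!-- Detects the OS/arch so the right protoc binary is fetched -->
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.7.0</version>
    </extension>
  </extensions>
  <plugins>
    <!-- Compiles src/main/proto/*.proto into generated sources -->
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.6.1</version>
      <configuration>
        <protocArtifact>com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
        <pluginId>grpc-java</pluginId>
        <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc.version}:exe:${os.detected.classifier}</pluginArtifact>
      </configuration>
      <executions>
        <execution>
          <goals><goal>compile</goal><goal>compile-custom</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```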
You just have a service definition. Unlike some other approaches, what we have here is a little best practice: we separate out the request and the response, instead of using the same message as both the request and the response. Back in the day of CORBA and COM, those were challenged by having stuff compiled into C++, which created an address, a memory location — and it's kind of similar here: when you add elements to the message, you put them in an order and give them a number in that order, and that's going to help us later on when we talk about versioning and compiled code.

So the only thing Maven has done so far is set us up with a bunch of dependencies, and then we generated the generated sources. This `package` declaration is for protobuf, not for Java — so don't rely on it for Java. If you're using Java, you want to set your own Java package up here; those other two options you can probably live without, but you do need to set your Java package.

So what we're going to run is `mvn clean compile`. You'll notice our protobuf plugins being invoked over here, and look at what we've got in target: generated sources — proto, protobuf. As it built, it created these messages as objects, so we can see what a HelloReply looks like — HelloReply, HelloRequest — and you've got some methods too: builder methods and such. Anyway, all of our Java objects are created.

So let's talk a little bit about how we call this. Watch — here's how we implement the server, right? Yeah — I can't see the whole area. So this is just basic. You're going to notice it looks a little unnatural, in that we have a void return when we write it, and both the request and the response are actually in the function declaration. To me that looks a little unnatural, because I'd like to know that it returns a response.
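The proto file being described is essentially the canonical gRPC hello-world definition; a representative version (the package and option values here are illustrative, not necessarily what was on screen) would be:

```proto
syntax = "proto3";

package helloworld;                        // protobuf package, not the Java one

option java_package = "com.example.grpc";  // set your own Java package explicitly
option java_multiple_files = true;         // optional, often convenient

// Separate request and response messages -- a small best practice,
// even when they currently carry the same fields.
message HelloRequest {
  string name = 1;   // field numbers drive the wire format and versioning
}

message HelloReply {
  string message = 1;
}

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}
```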
So when we get to Quarkus, it's going to look a little more natural; but for all intents and purposes, all we're doing is saying: take the StreamObserver, add to it whatever we want, and when we complete it, we jump out. And that's it — that's about as basic as you can get for a unidirectional call. And this is about as basic as you can get for setting up the main. We could make this much more complicated — read from a config file, have the port number dynamically assigned, etc. — but for our purposes we're doing it all right here in main, with our arbitrary port number.

So here, if Jeremy runs this — all right, he started the server. Postman has some really cool features for gRPC. He's probably already got a collection going — I can't see it — and he can import the protobuf file. I'll start from scratch just in case. Jeremy, can you make the text a little bigger? Command-plus — there you go. Okay, so what we do is click New, and then gRPC. Yep. We'll take the service definition and import our protobuf file. Whoops — make sure it's the right one: the non-Quarkus Java one, src/main/proto. We'll click Next. I should have given it a name — it was just calling it "New API," which is kind of bad, but in any case, we'll say localhost:899, and it's got our methods already. Postman also does this nice little thing where it builds an example message for you — although that doesn't really look much like a name, does it? We'll say Eric is our guest, so let's say hello to Eric. We invoke that, and down here we see our message: "Hello Eric," which is what we were expecting. This morning, Eric, I said something silly like: what did we ever do before Postman?
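Put together, the plain-Java server side looks roughly like this sketch. It assumes the `GreeterGrpc` base class and message types that protoc generates from a Greeter service like the one above; the port number is arbitrary, as mentioned on the stream.

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

// Extends the base class that protoc generated from the Greeter service.
public class HelloServer extends GreeterGrpc.GreeterImplBase {

    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
        // Build the reply, push it onto the stream, then complete the call.
        HelloReply reply = HelloReply.newBuilder()
                .setMessage("Hello " + request.getName())
                .build();
        responseObserver.onNext(reply);
        responseObserver.onCompleted();   // unary call: one response, then done
    }

    public static void main(String[] args) throws Exception {
        // Hard-coded port for demo purposes; real code would read config.
        Server server = ServerBuilder.forPort(899)
                .addService(new HelloServer())
                .build()
                .start();
        server.awaitTermination();
    }
}
```

Note the shape being discussed: a void method with both the request and a response observer in the signature, which is exactly what feels "unnatural" compared to returning a response directly.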
And somebody said SoapUI. Shane has mentioned Kreya — have you ever used Kreya? No, I've only ever used Postman. Okay, I don't know Kreya. So if you want to change this, we regenerate, recompile, and we can make changes. Hey Shane, if you want to put a link in there for us, that would be good — I don't think I've ever used Kreya. I don't think attendees can put links. Oh, they can't? Moderators, then — Kreya. We'll figure out what Kreya is. Oh — kreya.app. There you go.

That's a basic, basic API. Cool. All right, so that's about as simple as it gets. Now, what we haven't done yet is talk about streaming and bidirectional — multi-directional — communication, but we're going to get into that with Quarkus and show you how it makes life a little easier.

So what makes protobuf so much faster than REST? That's a great setup, but I wish you had asked that in a couple of minutes, because we're going to go into it. All right — rewind that and record over it, right? Basically: we did `brew install grpc`, we created a protobuf file, we generated server code, and we started our server. What we're going to do next is the Quarkus way of doing this, and there won't be so much configuring of the Maven project. We're still going to create a project, using either the Quarkus CLI or the project generator, still create a protobuf file — actually, I'll grab the same protobuf file — and then we just start the app.

So do you want to go to the code and actually show where we generate the Quarkus code from? Start with that — we can do code.quarkus.io. And here, actually, we can show — in case you want to dig into more of this, there are links at the bottom of these slides — the documentation for both protobuf and gRPC.
It's grpc.io, I think, right? It's really good — excellent documentation for all this stuff. You can come in, clone projects, and get started pretty quickly. Same with Quarkus. I think what we're going to do is use the default one at the top — the RESTEasy one — but there are options for gRPC also. Yeah, there's all kinds of stuff we can use. I'll create from the command line. And also, if you go to the Quarkus documentation and look in the guides, if you just search for gRPC, there's really good stuff there: getting started, implementing a service, consuming a service, and more. Really nice documentation.

So let's come over here — I'll go to this tab. He's in a gRPC directory. I'll say `quarkus create app` with our group and app name. So what Jeremy is using is the Quarkus CLI. This is another way to create the app: if you'd done it through the browser, you'd download a zip file and unzip it in whatever directory you want, or push it straight to your repo. And I did not add the Quarkus gRPC extension, but I can add my extensions afterwards: `quarkus ext add` — I need resteasy-reactive and grpc. Cool.

All right, so we've got our stuff right here — let's open up IntelliJ. I've got IntelliJ open; I'll just go ahead and import it as a module into this project. File, New Module from Existing Sources, the developers-corner gRPC directory — open that up and import it from Maven. All right, so now let's look at this file — and go into presentation mode too. The first thing we'll notice is that there's a lot less in the pom file, right?
If you remember all the stuff we had in there before — this just looks like a regular Quarkus app. We define the Quarkus platform BOM; there's really nothing special going on there. And of all the dependencies we need to bring in, there's really nothing that isn't just quarkus-grpc, right? I haven't installed gRPC on my system via brew — it's all handled by Quarkus here.

So we come up here — we are going to need to create our proto file. So create the src/main/proto directory. You know what, let's just copy the one we already have. Copy, and paste it here. And I'm going to change this package to match — right, so that's our path. Let's get these example things out of the way.

All right. The first thing we need to do is implement our server, like we did before. New Java class here, and I'm going to call it MutinyServer. The reason I'm calling it that is that under the covers, Quarkus is reactive. You can do either imperative or reactive programming with it; the imperative style is going to look essentially exactly like what you just saw with the plain Java example. So this time I'm going to use reactive, which I think looks a little bit nicer — there is a bit of a learning curve to the reactive stuff. Actually, I'm going to need to build this first. I'll run `quarkus build`, and this is going to generate all those classes for me. You notice there's generate-code going on here — generate code, right there.
So we look over here in target: we have generated-sources, grpc, and Quarkus made a lot of decisions about this code. The first time you generate code you need to tell your IDE about it; it picks it up on subsequent changes, but the first time, you need to tell it. The core entry point into what it generates is the Greeter interface, and it's got a couple of classes for you already: GreeterGrpc — that's the basic Java one, and you can extend that class — or MutinyGreeterGrpc. We're more interested in the Mutiny one; we're going to implement our own Greeter.

So let's come here, and we say `implements Greeter`. And it knows something's wrong, right? It says: implement methods. So let's implement our method. What we need to do here is return a Uni. I mentioned that we're using SmallRye Mutiny, a reactive library. SmallRye Mutiny uses two concepts: Uni, meaning a single result, and Multi, meaning multiple results. So you create from an item — I'm going to say...

Just to interject a little: Multi isn't necessarily about multiple items. It's more about being an unbounded stream — an unknown or indeterminate number of items — which is a bit different from other frameworks out there, where you'd usually use something like singles and collections, but the behavior is a little different.

Yep. We need our HelloRequest, right? So I take HelloRequest, and it has a method called newBuilder; we set the name from our request's getName, then call build on that. And since I'm returning a HelloReply, I can return a HelloReply with the string — why isn't that working right now — sayHello, right there. Okay. Yep, we need one more closing parenthesis. Is it getting mad at me? I need a Uni, and this is a Uni<HelloReply>.
So — you've injected the same class into itself; I'm not sure what you're trying to do there. I'm going to cheat for one second, because I messed up my code here. Rather than make you guys watch me struggle with this...

All right, so what we're doing: we're creating from an item — creating a single result from an item — and that item is a call to the message builder: newBuilder, set the message from the request, and build the reply, right? And the reply we're going to build says "from Mutiny," so we can tell it's different from the last one. All right, so let's come up here and start this up — `quarkus dev`.

You're injecting the same class as a sub-service; you've got to get rid of that. Oh yeah — thank you; you did tell me that a second ago. And so we are running, and we can come over here — let's just swap this out to localhost:9000 — SayHello: "Hello Eric from Mutiny." So you can all see that; now we've got "Hello Eric from Mutiny." If I want to change this, we can say "Mutiny says" plus the name, come back, and it runs really quickly, right? Same as you're used to if you've used Quarkus dev mode before: we make changes, it recompiles, and it's running pretty much immediately.

So, there's an interesting question from Nani — I'm sorry if I mispronounce it: what's the difference between gRPC and Apache Kafka, in order to, I would think, achieve scalability and durability? So I think gRPC is the protocol — the communication mechanism — and protobuf is the format of the messages, which you can also use in Kafka: you can use protobuf as a message format in Kafka.
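The finished Quarkus service being typed out here looks approximately like this sketch. It assumes the Mutiny-flavored `Greeter` interface and message types that Quarkus generated from the same proto file; `@GrpcService` is the Quarkus annotation that registers the implementation.

```java
import io.quarkus.grpc.GrpcService;
import io.smallrye.mutiny.Uni;

// Implements the Mutiny-flavored interface Quarkus generated from Greeter.
@GrpcService
public class MutinyServer implements Greeter {

    @Override
    public Uni<HelloReply> sayHello(HelloRequest request) {
        // Uni = a single (lazy) result; note there's no StreamObserver
        // and no void return -- the reply is the return value.
        return Uni.createFrom().item(() ->
                HelloReply.newBuilder()
                        .setMessage("Hello " + request.getName() + " from mutiny")
                        .build());
    }
}
```

Compared with the plain-Java version, the request comes in as a parameter and the response goes out as the return type, which is the "more natural" shape mentioned earlier.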
Yeah — what I think the significant difference is: in a previous episode we talked about loosely coupling things, and Kafka gives us the ability to more loosely couple our services between microservices. So if we want to do design patterns like BFF, or things of that nature, and more loosely couple our services — well, you are going to have coupling with gRPC. gRPC is for where you want performance and you're not so concerned about the coupling between things, because the client and the server are acting like they're calling each other natively, even though they're not — they're distributed. With Kafka, I put a message someplace, other consumers read that message and act on it, and they may put a message in return for me, which I might have to wait on, on another channel.

Yeah. So, in our example there are two things talking directly: the client machine is Postman, right, and the server is Quarkus — or that other Java app we built — and they talk directly to each other. Postman makes a call to Quarkus the same way it would to a web server, and Quarkus sends back the response. So it's similar to how a web server works — like calling Quarkus or Tomcat or Spring Boot — except that you're using a different protocol besides HTTP.
Actually, yeah — and it's also a binary protocol. But the difference is between using messaging versus an RPC mechanism, which is basically a direct call with an open channel. If we were using messaging, you'd have a third thing down here, which would be Kafka: this guy would write to Kafka and this guy would read from Kafka, and then this guy would write to Kafka and this guy would read. Shane, online, said it more eloquently and in fewer words than all of us: it's more point-to-point, right? You're swapping out the underlying transport mechanism, but you're still doing point-to-point communication. Yeah — point-to-point.

Let's see — if you wanted to do a consumer too, we could put a REST API on top of it, right? We can say something like `@Path("/grpc")`, public String sayHello... And while you're typing, another question: how do gRPC APIs work with front-end UI frameworks like Next.js or Angular?
I would say: don't. Now, you can generate gRPC code from proto files for JavaScript — but think of it on the back end. All our browsers support HTTP/2, but you're not going to be able to make gRPC calls from some front end running in the browser. What it would have to be is back-end, server-side JavaScript code — more like a Node.js kind of thing — not stuff that's actually running in the browser. Now, where there might be some differentiation is: could I open a channel, say, from WebAssembly with gRPC? That's a different beast altogether that we can talk about later.

And Jeremy — you've got to transform the HelloReply into a String. Right. So what we're doing here: I injected my gRPC service — this is where I needed to inject it, not in the actual service. So I inject my gRPC service. This is just a REST API, and what I'm doing is getting one response — a String — and I call this method, sayHello. I grab a query param called name, and then I call the server, right — call that method, sayHello. Instead of calling a URL, we're calling the actual method on the server, because I can see the server, and I'm sending over the actual object built from the protobuf message — this is just HelloRequest.newBuilder().setName(...).build(), the syntax it uses to create the request. And then — if you haven't seen reactive stuff before — I'm calling onItem, meaning: when I get that HelloReply, I'm going to transform it, and the transformation is just pulling the message out of it, so I return the String directly. And my server is already running.
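That REST-over-gRPC resource looks roughly like the sketch below. It assumes the generated Mutiny `Greeter` stub; `@GrpcClient` is the Quarkus injection point, and the path and parameter names here are illustrative (jakarta imports shown — older Quarkus versions use javax).

```java
import io.quarkus.grpc.GrpcClient;
import io.smallrye.mutiny.Uni;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

@Path("/hello-grpc")
public class GreeterResource {

    // Injects a Mutiny client stub for the Greeter service; host/port
    // are configured in application.properties rather than in code.
    @GrpcClient
    Greeter grpcService;

    @GET
    public Uni<String> sayHello(@QueryParam("name") String name) {
        // Call the remote method as if it were local, then pull the
        // message String out of the HelloReply when it arrives.
        return grpcService
                .sayHello(HelloRequest.newBuilder().setName(name).build())
                .onItem().transform(HelloReply::getMessage);
    }
}
```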
So to try this, we can say: new HTTP request, localhost, /hello-grpc, and... let's see what happens here. And what you get is "mutiny says hello rob." So that's what we've got from our message — it's coming from the Mutiny server, right? And we can just change that to "Hello rob," and it picks it up really quickly — "Hello rob," right? So we changed that very quickly. So this is how you'd write a request, and in a REST framework this would be the same — we'd just be calling an endpoint with a REST client. But we're not using a REST client; we're using a gRPC client, right? We have our gRPC client right here — the gRPC service — and we're using that to connect to the server. Let's go back to our slides now; we'll come back and look at some more code in a moment.

"Motion detected at the front door." Oops — let me get the package. Sorry — we got the package, at my house. You never know what's going to trigger your Alexa to start talking. Yeah, exactly. Order some toilet paper for Eric.

All right — I guess we want to talk about gRPC again. You asked that question earlier, Eric: what makes gRPC more special than plain RPC? Let's start with HTTP/2. HTTP/2 is supported by all the browsers, and it came out in 2015; it's probably more popular than HTTP/1.1 right now — HTTP/1.1 being what we mostly think of for REST. But what makes it different? You can start with the binary framing layer. With HTTP/2, a request/response is divided into small messages framed in a binary format, and that makes the messaging more efficient.
So, binary framing allows request/response multiplexing without blocking network resources. Maybe go on to the next one — I'm going to zoom through some of these so we can get back to the code. Okay, and one of the things that enables is header compression, which is also part of HTTP/2. Everything in HTTP/2, including headers, is encoded before sending, which significantly improves overall performance, and it uses a compression method called HPACK. And HTTP/2 only actually sends the header values that differ from previous header frames. So what this demonstrates is: if I change something, it only sends the changed thing, and that also improves performance.

Okay — something we kind of skipped over, but that's okay: there are other HTTP/2 concepts like streaming, flow control, and prioritization. We don't need to get into all of those, but they also enable performance mechanisms, and can control things like buffers and sizes for in-flight messages, etc.

But this thing, Protocol Buffers, which we've been using, is really the IDL — the description language. It controls serialization and deserialization, and it enables us to define services and then auto-generate the client libraries. As we saw, everything is defined in that proto file, and protobuf provides a compiler called protoc — "pro-toc" or "proto-see," depending on how you want to pronounce it — which compiles everything in that proto file into code we can use at runtime, and lets us serialize and deserialize into this binary format. And parsing protobuf requires fewer CPU resources, because it's not converting text the way we do with JSON and such.
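To make "binary format, fewer CPU resources" concrete: protobuf encodes integers as base-128 varints, which are compact on the wire and cheap to parse. This is a small self-contained re-implementation of that encoding rule for illustration (the real work is done inside the protobuf runtime, not code you'd write yourself):

```java
import java.io.ByteArrayOutputStream;

public class ProtoVarint {
    // Encode an unsigned value as a protobuf base-128 varint:
    // little-endian groups of 7 bits, MSB set on all but the last byte.
    static byte[] encodeVarint(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80));  // continuation bit: more bytes follow
            value >>>= 7;
        }
        out.write((int) value);                        // final byte, MSB clear
        return out.toByteArray();
    }

    // Decode a varint back into a long by reassembling the 7-bit groups.
    static long decodeVarint(byte[] bytes) {
        long result = 0;
        int shift = 0;
        for (byte b : bytes) {
            result |= (long) (b & 0x7F) << shift;
            shift += 7;
        }
        return result;
    }

    public static void main(String[] args) {
        // 300 costs two bytes on the wire, versus the field name, quotes,
        // and digit characters it would cost as JSON text.
        byte[] encoded = encodeVarint(300);
        System.out.printf("300 -> %d bytes: 0x%02X 0x%02X%n",
                encoded.length, encoded[0], encoded[1]);
    }
}
```

Running this prints `300 -> 2 bytes: 0xAC 0x02` — the same two-byte example the protobuf encoding documentation uses.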
It's actually a binary format on the wire. So if you want to go on — let's go through the next couple and then we can jump back into the code. So, jump down. Actually, go back one — the one with the advantages. I don't know how you're running the show there, Jeremy, sorry. Which one? The advantages slide. You had it a minute ago — keep going, keep going — there you go. Okay, let's talk about that for a minute. Rather than show you a ton of slides, we're just going to talk about it on one slide. Eric asked about performance, and I just told you a bunch of reasons why it's more performant — but what can we expect? Results may vary, but generally, over a RESTful-plus-JSON call, you're probably going to get something close to ten times the performance out of it, Eric. Just add together all the things we talked about: protobuf serializes the messages on both the server and client sides, so you get smaller, more compact payloads; we get multiplexing, eliminating head-of-line blocking; we get header compression and faster loading. But it is a binary format, and some people don't like that because they like to see their JSON. Streaming is another thing this enables. Jeremy's going to get into some examples here, but we have unary, which we already demonstrated — that's actually no streaming at all.
Then there's client-to-server streaming, server-to-client streaming, and bidirectional streaming. Code generation — a lot comes along with that. And interoperability comes as part of the code generation, meaning we can support Java, JavaScript, Ruby, Python, Go, Dart, Objective-C, C#, Rust... I've recently done some Rust with it. I knew you were going to throw Rust in there at some point. I had to throw Rust in — I've moved on from mentioning Camel in every conversation to mentioning Rust, Eric. And Ruby this year, apparently. Yeah, Rust and WebAssembly are my new favorites. Don't invite me to that talk. Oh, you don't want to come to that one? Okay. So let's go over the comparison with REST really quick and then go back to code, because we really want to get back to code. So what are the differences here? Standardization versus no standardization. With REST, you can say you're doing something standard because you're using OpenAPI or Swagger, but really that's not part of any standard — there is no standard. REST is just a style built around resources; I might create conventions within my organization and use certain tools to help me with that, but at the end of the day there's no real standard, whereas gRPC brings standardization to this. Paradigms — we've already talked about RPC versus resource-based. Service modes — well, as I just said, gRPC can do client streaming, server-to-client streaming, and bidirectional streaming, while REST is really unary only. As for other requirements: REST is HTTP/1.1, and typically we use JSON over it. I guess you could use XML, for those people coming from a SOAP background or something — I'm sure there are quite a few people still doing both SOAP and JSON for some reason. Let's see — from a design perspective, as an architect.
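Those four service modes map directly onto `rpc` definitions in the proto file. As a sketch — the method names here are illustrative, modeled on the hello-world Greeter service the demo has been using:

```proto
syntax = "proto3";

service Greeter {
  // unary: one request, one reply (no streaming)
  rpc SayHello (HelloRequest) returns (HelloReply);
  // server streaming: one request, many replies
  rpc SayHelloSeveralTimes (HelloRequest) returns (stream HelloReply);
  // client streaming: many requests, one reply
  rpc SayHelloToEveryone (stream HelloRequest) returns (HelloReply);
  // bidirectional streaming: many requests, many replies
  rpc SayHelloBackAndForth (stream HelloRequest) returns (stream HelloReply);
}

message HelloRequest { string name = 1; }
message HelloReply   { string message = 1; }
```

The only difference between the four is where the `stream` keyword appears — the generated client and server stubs change shape accordingly.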
It should be design-first, but really REST tends to be a code-first kind of experience. I know you might be able to argue with that because you're using certain tools, but in practicality, with gRPC you have to do design-first. Anything I'm missing? Yeah — I don't know that I necessarily agree that API design in REST is always code-first, because you can do OpenAPI, and there are plenty of tools out there across various frameworks that can generate your REST endpoints. But it doesn't require it — REST itself doesn't require you to use those tools. Enforcement within your organization has to require it, and there's no standard for what that looks like. Take the verbs — POST, GET, all of them. You could use those wrongly; we talked in our last session about how we think you should use them properly, but people use them incorrectly all the time. Look at the variability with versioning: you could put the version in the URI, or you could put it in the media type — and if you don't put it in the media type, that's going to close you off from doing HATEOAS, right?
So there are consequences to that, and there's no standard governing it — that's what I'm getting at. Okay, and I'm arguing with you because I want you to argue back — that's kind of why I brought it up. Like you said, with REST you could do code-first or you could do contract-first, but there's nothing in REST that says which way is better; it's completely up to you. Whereas in gRPC you're absolutely going to do contract-first. Yeah — you have no real choice but to do contract-first in gRPC. Anyway, let's jump back into some code and actually see some of the things we just talked about. All right, so I'm going to add a new method — sayHelloSeveralTimes. While he's doing that: I don't want to imply that gRPC is the cat's meow versus REST for everything. What I'm getting at is, use the right tool for the right reason. REST is still the public-facing API for most things, and internally you're going to want to do event-driven things to decouple your code, especially in a microservices environment. But gRPC is a natural way to go, especially when you're doing that decomposition — separating things out into components. So now I have this sayHelloSeveralTimes. I'm sending over one request — we'll do the same thing, we'll send over a name — and I may get multiple replies back based on it. So I'm going to return a stream of HelloReply. And if we come over here — you've got to regenerate — did I turn the server off? There we go.
So it's successfully finished generating from the proto files. One thing about Quarkus in dev mode: it watches your code, notices the code has changed, and notices I'm now no longer correctly implementing Greeter. So — implement the Greeter methods — and now I have to implement sayHelloSeveralTimes, right? Yeah, and if we were in that other plain Maven project, we would have had to stop the server, recompile, and then start it again after we coded everything up. Yes. So we create this with Multi — Multi is the Mutiny type for multiple responses — and one of the things you can do is create it from an iterable. So, from an iterable: what I'm going to do is take the HelloRequest and build up an Arrays.asList of replies. While you're typing — I think one interesting follow-up webinar, whether or not you've already done this, would be practices and tools for sharing these, because your proto file is essentially a contract. Everybody who's talking to that service needs to have it, so the tools, techniques, and practices for sharing and versioning it would be worth covering. Yeah — and when Eric says sharing, it's not just sharing a single proto file between two projects like we just did. Proto files can import other proto files, and we might have other definitions we want in there. So — a question came in: does 3scale support gRPC APIs as a backend, and if yes, does it show up in the developer portal? I have no idea. Wow — okay, we'd have to find that one out; I'm not that familiar with 3scale. That's the API management tool, not to be confused with an API registry. At Red Hat, anyway, we also have an API registry based on Apicurio.
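On Jeremy's point that proto files can import other proto files — the contract can be split up and shared across services rather than copied. A sketch, with hypothetical file and package names:

```proto
// greeter.proto -- reuses messages published in a shared common.proto,
// which defines demo.common.HelloRequest and demo.common.HelloReply.
syntax = "proto3";
package demo.greeter;

import "common.proto";

service Greeter {
  rpc SayHello (demo.common.HelloRequest) returns (demo.common.HelloReply);
}
```

Publishing the shared file once (for example, through a schema registry like Apicurio) and importing it everywhere is one common answer to the sharing-and-versioning question Eric raised.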
So if you go to the Apicurio website, you can author that OpenAPI kind of thing from that perspective, and there's also a registry you can run. The registry supports gRPC, GraphQL, REST, and JSON schemas, and a couple of other things — doesn't it, Eric? I'm pretty sure it does. Yeah, I know it does gRPC and REST, and it does Kafka schemas too, right? To me, internally, that's where you're going to want to do some service discovery. So maybe we should do something that incorporates that in one of our subsequent sessions, because we can run that locally in Docker, I think. Yeah — between Apicurio and Microcks and whatnot, there's a bunch of decent tools. Yeah, exactly. ...Sorry, I'm making a mess of this today. Wow — and you're in London, where it's later in the day, so it's almost beer time over there, right? Maybe that's the problem. Beer time has already passed! All right — so what we're doing is: I just created a list of compliments that we'll add to our response. I'm streaming those — this is just a regular Java stream, right? — and I'm mapping each of them to a new HelloReply, taking the string and mapping it to a new reply, because I want to send a bunch of responses, multiple HelloReplys, back over the wire. So it's HelloReply.newBuilder() — we saw this a minute ago; it's the same thing we have up here — and the message is "Hello" plus request.getName() plus the compliment string. Build that, and then we collect into a list, right?
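Stripped of the gRPC machinery, the mapping Jeremy is describing is just a plain `java.util.stream` pipeline: one name in, one reply per compliment out. A minimal stdlib-only sketch — the real service wraps the result in a Mutiny `Multi` and builds `HelloReply` objects rather than plain strings; the compliments are the ones from the demo:

```java
import java.util.List;
import java.util.stream.Collectors;

public class SeveralReplies {
    static final List<String> COMPLIMENTS = List.of(
            "your wit and wisdom are apparent",
            "your experience is awe-inspiring",
            "your skin tone is fabulous");

    // One request name in, one reply per compliment out --
    // the server-streaming shape, modeled with plain strings.
    static List<String> replies(String name) {
        return COMPLIMENTS.stream()
                .map(c -> "Hello " + name + ", " + c)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        replies("Eric").forEach(System.out::println);
    }
}
```

In the Quarkus version, the list of reply objects built this way is what gets handed to `Multi` so each element is streamed back as a separate gRPC message.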
And so what I'm returning is a Multi, because you can create a Multi from an iterable, and a list is an iterable. So let's look over here at what we've got. I need a new gRPC request, and I'm going to need the proto file — we're in src/main/proto. We can pick the new method — sayHelloSeveralTimes — and have it generate an example message for us. We'll say hello to Eric. So what happened here? You can see at the bottom: we sent one message — a request to localhost:9000 with name "Eric" — and we received three separate responses. So we get "Hello Eric, your wit and wisdom are apparent", "Hello Eric, your experience is awe-inspiring", and "Hello Eric, your skin tone is fabulous". So we have multiple messages back with some nice little compliments. The reason we're getting this shape here is that we're returning HelloReply. If we wanted to change this to return a String — actually, no, we have to return HelloReply, because it's strongly typed. But if we call this from our consumer, we can then turn it into a String. So let's come back over here and take a look at our REST API: if you wanted a REST API, we can return the String, like we do with the Uni here — we'd just transform each of those replies and return strings. So — there's a question: have you observed any issues with the Quarkus native build of gRPC API code versus the uber-jar build? No, but to be honest with you, I don't have much experience using uber jars at all. Yeah — I know there are folks out there;
I think they're doing mostly native, but I could be wrong, and I want to say it was Logicdrop — I could be mistaken, but I think there's a Logicdrop post on the quarkus.io blog, and I believe they're doing a lot of gRPC with native as well. Yeah. So what we'll do here is the other direction: we'll take multiple requests and send one reply — let's add that in here too, do them both at the same time — a sayHello where we stream the requests in. All right. While he's doing that, let me talk a little bit about MicroProfile. One of the things with MicroProfile is adding, you know, OpenTelemetry kinds of things: on the classes Jeremy has here, we could simply add some of the annotations for OpenTelemetry and get tracing information from them. Eric, anything to add to that? We're about out of time here, so — all right. So the way this say-hello-one-time method works is: we take the stream of requests, and when we get each item, we call request.getName() and collect those into a list. So we've collected all of the strings as a list, and then we call String.join — we're going to join all the names that we get. So what it's going to do is open up a connection, listen for multiple requests, and once the connection is closed it stops and returns this one message, right? So let's look at what that looks like. And thanks, Eric — I know you've got to drop, but thank you very much for joining us today. Yeah, of course — I do have a hard stop at the top of the hour. All right — anything that goes bad after you drop,
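The reduction Jeremy just described — collect every name from the incoming stream, then join them into one combined greeting — is, at its core, just a `List` plus `String.join`. A stdlib-only sketch of that logic (in the real service the names arrive as a `Multi<HelloRequest>` and the result goes back as a single `HelloReply`):

```java
import java.util.List;

public class OneReply {
    // Many names in, one combined reply out -- the client-streaming shape.
    static String reply(List<String> names) {
        return "Hello " + String.join(", ", names);
    }

    public static void main(String[] args) {
        System.out.println(reply(List.of("rob", "jeremy", "caroline")));
        // prints "Hello rob, jeremy, caroline"
    }
}
```

The key behavioral point is the same as in the demo: nothing is returned until the client ends its stream, because the join can only happen once all the names have arrived.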
we're definitely going to blame on you. Of course — we wouldn't want it any other way. Thanks, Eric. So — say hello, once. Eric just left, so we won't say hello to him, right? So we send that first message, and you notice the stream is still open — that tells us our connection is still alive. Who do we have listening online? Our producer, Caroline. All right — then we end the stream, and we get back one message: "Hello rob, jeremy, caroline". So that's what it does: it opens up the connection, waits for all these messages, does some processing in a batch, and then sends the result back to us. And if we implement this last one, we'll take a quick look at sayHelloBackAndForth. What we're implementing here is streaming on both ends — bidirectional streaming. We're going to get lots of requests and we're going to send back lots of replies. What we're doing is the same thing: we get each request, call getName() on it, transform that to "Hello" plus the name, and when we get each message we create the HelloReply and send it straight back — back and forth. So let's come over here and see what this does. We'll say — whoops — sayHelloStreaming... I named it something different this time, so I changed the service definition, which means you need to re-import the proto file. You don't need to click that — yep, just re-import. And now we can invoke it — sayHelloBackAndForth — invoke, send. And notice it came back immediately, right?
But it still shows "end streaming", so we still have a connection going, back and forth. Effectively we're writing a chat bot here, right? My connection stays in place until I click "end streaming", and then it calls completed. There's really nice support for this inside Postman — the more I use Postman, the more I like Postman. All right — your screen is a little blurry again, Jeremy. Sorry. Still bad? Want to try sharing it to me? No, that's okay. We're going to go through this pretty quick. The goal here is just to give some best practices, and then we'll close it out in a couple of minutes. If you don't take anything else away: use the style guide. We'll post a link to the presentation so you'll get the links, but definitely look up the protobuf style guide — there are some excellent linter plugins for IntelliJ and VS Code — and definitely follow it for naming conventions and such. Keep going. With regard to this one: definitely separate your request and your response messages. The reason is, if you try to use the same message type for both, sooner or later you'll end up in a problem state where you want to add something to the request, or add something to the reply, or remove something — and then they don't match anymore. You're going to get in trouble if you do it like the first line on the create-order example there. Next, enums: because protobuf is cross-language, you also have to take into account that other languages are going to treat enums differently.
C#, for instance, does some wacky things like removing the prefixes, and C++ doesn't scope enums the same way. So definitely make sure your enum values don't have name clashes across enums: if I had two enums that both contained a MEDIUM value, that might clash in certain languages in the generated code. So try to make them unique. Right. Regarding well-known types: protobuf comes with a bunch of well-known types out of the box — Timestamp is one, Any is another, Empty is another — and there are also some pretty interesting community proto definitions for things like money and other abstract types. So what I would recommend is: use the things that come out of the box instead of reinventing the wheel. Keep going. Versioning — this one's really interesting, because protobuf goes to great lengths to try to make sure backward compatibility is maintained. There are rules for everything. There are non-breaking changes — you can even delete old fields, though that can still create backward-compatibility problems — but binary-breaking changes are a little different. Renaming a message, or nesting and un-nesting messages, can have consequences in particular languages — things behave differently in .NET with C# versus Java or C++. So you do have to be aware of what counts as a breaking change, and the documentation Jeremy pointed to at the very beginning covers versioning really well — I would definitely go there. Optional fields — the really cool thing here is the line at the bottom about field presence tracking. What that means is:
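Pulling those last few recommendations together — separate request/response messages, enum values prefixed so they can't clash across enums, and the bundled well-known types — a proto sketch might look like this (all the message, field, and enum names are illustrative):

```proto
syntax = "proto3";

import "google/protobuf/timestamp.proto";  // well-known type, ships with protobuf

// Separate request and reply messages, even if they start out similar,
// so each side can evolve independently without breaking the other.
message CreateOrderRequest  { string sku = 1; int32 quantity = 2; }
message CreateOrderResponse { string order_id = 1;
                              google.protobuf.Timestamp created_at = 2; }

// Prefix enum values: SIZE_MEDIUM and PRIORITY_MEDIUM can coexist,
// where a bare MEDIUM in both enums could clash in some generated languages.
enum Size     { SIZE_UNSPECIFIED = 0; SIZE_SMALL = 1; SIZE_MEDIUM = 2; }
enum Priority { PRIORITY_UNSPECIFIED = 0; PRIORITY_LOW = 1; PRIORITY_MEDIUM = 2; }
```

The `_UNSPECIFIED = 0` entries follow the style-guide convention that the zero value of an enum should mean "not set", since zero is what proto3 defaults to.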
if a field is optional, does it fall back to a default value or not? And that, again, is going to differ across generated languages. What we want is the same default behavior everywhere — so definitely look into optional fields, because not everything is nullable the same way in every generated language. Keep going. Oneof is exactly what it sounds like: when I put a oneof in there, only one of those fields can be set. Keep going. Large messages — now we're into gRPC proper. I would avoid large messages, especially in garbage-collected languages, and stick with the default maximum message size, which is four megabytes. If you have really large payloads, consider streaming, splitting the payload up, and so on. Also, from a load-balancing perspective: gRPC load balancing operates at layer 7, not layer 3 — it's not down at the networking-stack level — and that allows some really cool things like client-side load balancing, where the client retrieves the list of known endpoints and then balances across them itself. Definitely reuse channels. Channels are expensive to create, so keep them open as long as possible. HTTP/2 has a keep-alive period: after a certain period of inactivity it'll shut down a channel, and you're going to have to explicitly keep those channels open. So if you're not using something like Quarkus, you may have to actually insert code that keeps those channels open yourself. And max concurrent streams — this is just like a web server with a thread pool on the back end that says it can handle a certain number of requests.
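Earlier on this slide we touched on `optional` and `oneof`; as a tiny proto sketch of both (field names illustrative):

```proto
syntax = "proto3";

message OrderQuery {
  // presence tracked: the generated code can tell "never set" from ""
  optional string note = 1;

  // mutually exclusive: setting one of these clears the other
  oneof id {
    string order_number = 2;
    string tracking_code = 3;
  }
}
```

Marking a proto3 field `optional` is what turns on the explicit field-presence tracking Jeremy mentioned, so you get consistent "was this set?" semantics regardless of the generated language.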
The max concurrent streams setting out of the box is one hundred. Everything we just talked about is configurable, but I like to stick with what you get out of the box, so you don't have to deal with all of that. And that's about it. So when we put out these slides as a PDF, there will be a bunch of links to some really good resources from other folks here at Red Hat who've written about gRPC — Eric D'Andrea, who joined us earlier, has written a couple of them, and he and a colleague built a really nice demo around them. All right, that's it. Thank you, everyone, for joining us today — sorry we went a little bit over — and I hope everyone has a great rest of their week. Thanks, Jeremy. Thank you. Thanks, everybody, for joining us.