So I assume we're live here. I'm Jeremy Davis, a principal architect at Red Hat; I've been around for 13 years or so. And presenting with me today is Rob. Hi, Rob Sidor, I'm a chief architect doing application development. All right. So we're going to present a microservices transport shootout. We're not going to do perf stuff, right? So if you're looking for which transport is faster, beyond general discussion, that's not what this talk is about. We're going to look at REST, gRPC, GraphQL, and Kafka: how you interact with them, how you build out APIs and services and how you consume them using Quarkus, and talk about some of the design considerations each of those forces onto, or brings to, your application. Maybe gRPC is probably fastest, right? I think we can say that right out of the gate, so we're not going to talk about raw speed too much anyway. We'll talk about the basics of each of these technologies, a little about synchronous versus asynchronous development, some about streaming, and then we'll look at client and server, building these from both ends: how do you consume these services and how do you build them? All right. So Rob, you want to take this for an intro, a little bit about API design, which we've talked about before? Sure. Jeremy and I have done a couple of sessions on API design, mostly around REST and gRPC. REST is resource-based, so everything should be designed around actually consuming the resource, with things like HATEOAS for RESTful design. gRPC, which we're going to talk about, is really an RPC mechanism, so it's tightly coupled. GraphQL, which Jeremy's used, is really cool because it's over HTTP, but it lets you actually query something so you only bring back the data you want.
We listed WebSockets and webhooks in here because most people are using these for something, usually a WebSocket to connect to something and bring data back, like an asynchronous RESTful call or an AsyncAPI kind of thing. We threw SOAP in here because Jeremy and I work with a lot of folks out there who are still using SOAP because they can't get rid of their legacy, and that carries a really heavy overhead in both the development process and the communications. And then Kafka, primarily because we see a lot of event-driven microservices. We've run event-driven microservices architectures, and we really wanted to compare REST, gRPC, GraphQL, and Kafka because those are the things people are primarily using in their microservices architectures today. All right, so first things first: REST. When microservices first popped up, with the Netflix toolkit, at least most of the examples I saw were done in REST, spinning up REST servers, often Tomcat stuff. But REST is older than that, right? Some of the characteristics: it's HTTP-based, and it matches HTTP very well. It's stateless and it's synchronous, so when I make a REST call, I block and wait until I get something back. Now, you can get around that with reactive programming, and that was one of the reasons RxJava and the Rx frameworks popped up for lots of different languages; RxJava was written by a couple of folks at Netflix. We're using Quarkus for our examples, and Quarkus comes with SmallRye Mutiny built in. It's a much smaller learning curve, I think, than you would find with RxJava, at least it was for me. But to be fair, I learned RxJava first, so that might have something to do with it. So you can do some non-blocking calls, but ultimately the protocol itself is blocking and synchronous. Anything I missed there?
Anything I missed on that? No, I don't think so. Maybe with REST, though, there isn't any defined structure on the way you need to do things; there are recommendations, right? So even though we use Swagger, it's not a well-defined RPC mechanism. You don't really know how to consume something unless you get somebody's OpenAPI or Swagger document. And then there are rules of the road: there are three different ways to do pagination, there are multiple ways to do versioning, and if you don't do media-type versioning, for instance, you can't use RESTful features like HATEOAS. So there's a bunch of rules that are implicit, but you can really do whatever you want with it. That's part of the drawback to REST. But I would say everyone can consume REST, so mostly REST is used for externally facing APIs right now. I don't know if you agree with that, Jeremy. Yeah, I do think it's a great thing for externally facing APIs. What I liked about it, and this was the original copy of Roy Thomas Fielding's doctoral dissertation, the inventor of REST, which I remember reading way, way long ago: I've done tons of web stuff, and REST marries with GET, PUT, POST, PATCH, and DELETE. It marries with the web. I'm old enough to remember JSF, and one thing I didn't like about JSF is it struck me as fighting the web: it was bolting another programming paradigm onto request-response web stuff. What I liked about REST is it just said, no, let's utilize what's there, let's marry to what's there. That makes a lot of sense to me, and that's the thing I liked most about it. Also, like you mentioned, there are a lot of well-established conventions; when you hire somebody, they understand how REST works. And that's some of the advantages. Another big advantage I really like is JSON.
That might sound kind of funny, but building APIs in JSON, I can add a feature and I don't break my clients, because a client can safely ignore something that's added to JSON; unless you make breaking changes, obviously. You don't have to have the exact version to still have clients that work, and I think that's really nice. Also, because it's stateless, it's really easy to scale RESTful architectures, in my book. Yeah, let's jump over to the disadvantages first. So even though it scales, it's synchronous; you're really going to need something like an async API, and Jeremy is going to talk a little bit about that in a minute when we show you some code. All your calls have to be synchronous, so if I need additional data, I need to pass an href or some kind of link in order to get it. Some of the disadvantages here are going to show up as advantages in some of the other protocols we're going to talk about. But like Jeremy said, everyone knows how to do REST. For those of us that had to do SOAP a long time ago, that was clunky and horrible and I can't say enough bad things about it. But at the same time, we were just talking to somebody the other day who's actually passing XML over REST because they just converted it from SOAP. So you can pass whatever you want, you can gzip it, but you have to control everything yourself. Oh yeah, "great memories" in the chat. Yep. More like horrifying. Let's look a little bit at REST endpoints. So with Quarkus REST, is my server running? I think my server is running. Let's just do a REST endpoint. What we decided to do here is a little coffee shop, and I'm going to just POST, a regular REST verb. This is the Postman tool; I think most people are familiar with Postman. If you're not familiar with Postman, it's a really great tool.
And we can send a POST there. I'm not running. "Post the demo, Jeremy." So yeah, there we go, already messing up the demo. That's great. RESTful endpoints are pretty easy to understand. We have a path; in Quarkus we tell it what path we're using, we say we're going to consume and produce JSON, and then I'm using injection to get something that's going to actually do my work. So here's the POST. I can return an object, and I have an object coming in; it handles all the marshaling of the JSON under the covers. That's one of the things that's really nice about JSON: like I said, I can add extra stuff and it's not going to break my request. Let's see if it's going to start up now. Yeah, start it up now. Let's send to the port. Okay, let's see what I've got wrong. All right, this is a smashing start to our demo, which we're not even running. Try it from here, and we'll start it up here. So anyway, we can do this, and then we can return our record. But this is blocking, right? I'm going to make a call to this, it's going to do some kind of work, and I'm blocking. Yeah. Something has grabbed my server, my gRPC server. That's really great. Why does it have to grab Java? Okay, we have a lot of Java things running. What else do I have here that might be running? That's running. It looks like it's already running. Okay. So Rob, why don't you talk about something for a minute while I'm here? Well, Valentina and I can talk about SOAP. Yeah. So Jeremy, you're using the dev environment though, right? Maybe it's the dev environment. What else am I missing here? "I still remember those SOAP web services. They were all difficult." Well, I remember having to put together all the XML files, everything that goes with it. All right. Now I should have no more Java running on my machine other than IntelliJ. Now I'll be able to boot up.
Okay, this is always a great way to start a demo, right? Well, I'm pulling this; I have dependencies here. So let's let this guy boot up. All right, looks like we're booted up. So now we're not found. Yeah. All right, let's just look at the code here for a second. We can also use the other advantages of REST: we can do a POST method, and we have an API for Response inside of JAX-RS, so I can send a Response; I don't have to return objects. I have a lot of flexibility. You can also do PUTs. These are well-established conventions for using REST. Now, we mentioned synchronous and asynchronous. This is all synchronous, but you can do asynchronous development, or kind of asynchronous, where we mock out something that's sort of asynchronous: instead of blocking and waiting to return the actual object after the processing work is done, we can return a 202 Accepted status, then go about our work and eventually update the application in a number of ways. Actually, Rob, talk a bit about how we can update, how we can become eventually consistent if we model things out this way and why you'd do that, while I stop sharing and monkey with my demo again for a second. Sure. So with REST we can do a long poll, we can do a polling exercise, we can use AsyncAPI. But more or less you're going to need some kind of event-driven API. With Quarkus, we're using Vert.x behind the covers, so we have an event-driven, reactive approach to do that. But there's really no actual mechanism in REST itself for an event-driven API. So, Jeremy, did you figure it out? Yeah, we've got the REST running. Okay, let's show that real quick and then we'll jump back to the other part. All right, we'll just go to gRPC. For everybody who knows it, REST is out of the way. All right. So, gRPC. This is fun.
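The 202 Accepted pattern described above can be sketched without any framework using the JDK's built-in HTTP server. This is an illustrative stand-in, not the demo's Quarkus code: the paths, the fixed order id, and the in-memory status map are all assumptions for the sake of a runnable example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;

// Async-over-REST sketch: POST returns 202 immediately, the real work happens
// on another thread, and the client polls a status endpoint for the result.
public class AcceptedDemo {

    static final Map<String, String> status = new ConcurrentHashMap<>();

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        var pool = Executors.newCachedThreadPool();

        // POST /order: accept the order, kick off the work, answer 202 right away
        server.createContext("/order", exchange -> {
            String id = "order-1";                      // illustrative fixed id
            status.put(id, "IN_PROGRESS");
            pool.submit(() -> {                         // do the real work later
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
                status.put(id, "READY");
            });
            byte[] body = id.getBytes();
            exchange.sendResponseHeaders(202, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });

        // GET /order/status: the client polls here until the work is done
        server.createContext("/order/status", exchange -> {
            byte[] body = status.getOrDefault("order-1", "UNKNOWN").getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });

        server.start();
        return server;
    }
}
```

The eventual consistency shows up on the client side: it gets the 202 immediately, then only sees READY after the worker thread finishes.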
CORBA, DCOM, RMI, right? Remote method invocation, remote procedure calls. I never did this stuff before, so this was new to me. All right, characteristics: it's RPC calls, which means we're going to make method calls; we're not going to adhere to a strict API convention like REST does. It uses a technology called protocol buffers, which are well-defined. It works over HTTP/2. You can stream events, and there are also interceptors, so you can intercept functionality that's happening along the way. It's a really interesting way of building out applications. I had not really used RMI, I'd never used CORBA, and it seemed really old-fashioned, but having used gRPC, I do like it. The way it works is you define your application, your objects, in a .proto file. Those get compiled to whatever language you want, and then you pass serialized data over the wire. Some of the advantages: one, it's very performant. It's strongly typed; you've got a schema. It works across languages, so you can have a Java client talk to a Node.js service or a C# service, and the code generation happens for you; your library, your SDK, will build all that code out for you. The disadvantage, the thing that I don't like about it, is tight coupling. I mentioned that if you're building REST and JSON, you can add something to JSON and it's not going to break the client. Well, if you change your proto file, it can absolutely break your client. So you have to version your proto file, you have to version your objects, which are called messages, and you have to version that contract. You have very tight coupling between your services. The trade-off is you're going to get a lot better performance. Another negative is that it has a bit of a learning curve.
As you'll see in the code, we use reactive code. You obviously don't have to, but I prefer using the reactive code inside of Quarkus, and reactive code can have a bit of a learning curve; it takes a while to wrap your head around. And it's not something that I would use for customer-facing web applications. I would use it for internal microservice-to-microservice calls. Does that jibe with your take, Rob? Yeah, I think HTTP/2 is very fast, and you can do HTTP/3 with it also now; HTTP/3 support started coming out just a year or two ago. It's very fast. Just to give you an idea of the speed: if you're using Kubernetes or Docker today, the internal communication under the covers is actually gRPC. And it's easy to version, versus the options that you have with REST APIs. There are very well-defined rules on what breaks the interface versus what doesn't, unlike what you see with a schema in REST. Those are actual advantages when you're working with other groups internally. But again, just like Jeremy said, the normal practice is probably REST for externally facing APIs and gRPC for internal microservice-to-microservice communication. All right, this is what it's going to look like when you're building it out. This is your proto file; this is your contract, the schema you're going to have. Some of the stuff at the top: the syntax, the outer class name, and it's going to generate a bunch of classes. This package is not related to your Java package name, so don't conflate it with your Java package name. And then you define your service here. Hopefully everybody can see this. I've got three RPC methods here: a place order, all orders, and in-progress orders. You'll notice in-progress orders returns a stream.
So, as I mentioned, you can stream over gRPC, and that's what we're doing here. You can do bidirectional streaming; I don't have that coded up, but you can do bidirectional streaming with this protocol. So I define a place order. My Java object, my value object, is called PlaceOrder, so I called this one PlaceOrderProto just to make imports less of a headache; everything is named with Proto on the end. Some of this is a little odd, at least a bit of a learning curve. If I want a void method, well, there are no void methods, so I need to pass in a message that I just named Empty, because that made the most sense to me; I could have called it Void. Then you define your PlaceOrderProto, and what you do is define a message. So everything's a message instead of a class, and you tell it what fields it has. It's strongly typed, as I mentioned, so we have strings, and you have to tell it what position of the message each field is. And there are other types in here; you can see int32, strings, so it's strongly typed. Then you define these other messages. My MenuItemProto down here is an enum, and this is one thing that's a little bit wonky with Java: the enum is defined by its ordinal number only. It's not going to give me back "small coffee" or "espresso", so I need to convert that inside my Java code when I get it back. These various enums that I have are going to use their ordinals. But once you get your head around some of these things, it becomes pretty easy to do. Now, there are other toolkits, but I mentioned we're using Quarkus, so when we do a Quarkus compile, all I have to do inside of the Quarkus world is add in gRPC, which is just a Quarkus extension.
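A minimal sketch of a proto file along the lines described above. The service, message, and enum names here are illustrative stand-ins, not the demo's exact contract:

```proto
syntax = "proto3";

option java_package = "org.acme.coffeeshop";  // hypothetical Java package
option java_outer_classname = "CoffeeShop";

// proto package; not related to the Java package name
package coffeeshop;

service OrderService {
  rpc PlaceOrder (PlaceOrderProto) returns (OrderRecordProto);
  rpc AllOrders (Empty) returns (AllOrdersProto);
  rpc InProgressOrders (Empty) returns (stream OrderRecordProto);
}

// stand-in for a void parameter: there are no void methods
message Empty {}

message PlaceOrderProto {
  string name = 1;              // field numbers give the wire position
  MenuItemProto menu_item = 2;
}

message OrderRecordProto {
  string id = 1;
  string name = 2;
  MenuItemProto menu_item = 3;
  int32 status = 4;
}

message AllOrdersProto {
  repeated OrderRecordProto orders = 1;
}

// enums travel as their ordinal number only
enum MenuItemProto {
  SMALL_COFFEE = 0;
  MEDIUM_COFFEE = 1;
  LARGE_COFFEE = 2;
  ESPRESSO = 3;
}
```

Note how `stream` on InProgressOrders is what enables the server-side streaming shown in the demo, and how the enum carries no names over the wire, only numbers.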
And by adding that in, whenever I compile the project, I get a bunch of extra stuff in my target directory, under generated-sources, grpc. So let's zoom in and take a look at some of this. You'll notice it creates a whole bunch of stuff inside of my generated-sources folder. You don't want to monkey with this; you just pull it into your Java code and use it from there. You don't really have to interact with these files directly. Let's see what this looks like to implement a server; let's look at the gRPC resource. In the Quarkus world, I just annotate this with @GrpcService. So where the REST endpoint had an @Path, this has @GrpcService, and then I implement that generated gRPC service. If you look in here, I'm doing a couple of things that are kind of funny. I mentioned Mutiny; this is asynchronous code. This is SmallRye Mutiny, and I have my place order method, and it's reactive. So what we're going to do is create this order, and when that comes back, I'm going to transform it: I'm going to take it and create the order record proto from my existing object. So let's see what this looks like from a client. Because this is strongly typed, the client is using reflection from the server, so I can see the various methods that exist here: my place order, all orders, and in-progress orders. And I can invoke this order and I get back my cappuccino; it puts a status and stuff on it. But notice I'm ordering with an ordinal value here: I'm putting a 1 there, and the value comes back and gets translated to a medium coffee. So Rob, you want a large coffee? I always want a large coffee.
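Since the enum crosses the wire as a bare ordinal, the Java side has to map that number back onto a real enum. A minimal sketch of that conversion; the MenuItem values and their ordering here are illustrative, not the demo's exact types:

```java
// Maps a protobuf-style ordinal back onto a Java enum.
public class MenuItems {

    // Ordering is an assumption: 1 maps to MEDIUM_COFFEE, as in the demo.
    public enum MenuItem { SMALL_COFFEE, MEDIUM_COFFEE, LARGE_COFFEE, ESPRESSO }

    // Convert the wire ordinal into a MenuItem, guarding against bad values.
    public static MenuItem fromOrdinal(int ordinal) {
        MenuItem[] values = MenuItem.values();
        if (ordinal < 0 || ordinal >= values.length) {
            throw new IllegalArgumentException("Unknown menu item ordinal: " + ordinal);
        }
        return values[ordinal];
    }

    public static void main(String[] args) {
        System.out.println(fromOrdinal(1)); // MEDIUM_COFFEE
    }
}
```

The bounds check matters: an out-of-range ordinal from a newer proto version would otherwise throw an ArrayIndexOutOfBoundsException deep inside the service.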
Yes, I always want a large coffee. Now, when we call all orders, this is going to get all of the orders, including the ones we just put in. And then this is kind of nice: this is my in-progress orders. Let's call this. What it's going to do is watch, and we'll be able to stream back anything that comes in. Let's order something different now, to prove we're doing new things. What? Cancel? All right, invoke. All right, there we go. Now we'll get this going. And we're going to crash again; we're going to run all of our demos. So in-progress orders will stream back through as they come in, when the demo works. They will, anyway. All right, so let's look at the way I'm streaming this back through. For all orders, what I'm going to do is pull them; this one just runs on a virtual thread. I'm using reactive code elsewhere, but not in this method, so I just add this annotation and that's no problem. What I do is call out and get all this stuff back. You'll notice, as I mentioned, I have to call the payment status ordinal because I want to set that value; I marshal this into the object I want, send over my request, and that returns all the existing orders. Now, the way I'm streaming, this is a little bit tricky because it's a Quarkus-specific thing. Quarkus is built on top of Vert.x, and Vert.x has the concept of an event bus. It's an in-memory event bus; it's not like the message brokers you may have used before. So I've injected the event bus here, and what I do as part of my reactive pipeline is publish the order onto that event bus, and down here I'm consuming that order and then just streaming it out. And so this is an empty request.
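The pattern here, publishing each new order onto an in-memory bus and streaming it out to whoever subscribed, can be sketched with plain JDK Flow types instead of the Vert.x event bus. This is a stand-in under stated assumptions: orders are plain strings and the class names are illustrative.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

// In-memory publish/subscribe stream, standing in for the Vert.x event bus.
public class OrderStream {

    // Publishes orders and returns what a streaming subscriber saw.
    public static List<String> streamOrders(List<String> orders) throws InterruptedException {
        SubmissionPublisher<String> bus = new SubmissionPublisher<>();
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch latch = new CountDownLatch(orders.size());

        // "in-progress orders" side: stream out whatever is published
        bus.subscribe(new Flow.Subscriber<String>() {
            public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
            public void onNext(String order) { received.add(order); latch.countDown(); }
            public void onError(Throwable t) { }
            public void onComplete() { }
        });

        // "place order" side: publish each new order onto the bus
        orders.forEach(bus::submit);

        latch.await(5, TimeUnit.SECONDS);
        bus.close();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(streamOrders(List.of("large coffee", "espresso")));
    }
}
```

A single subscriber sees the orders in publication order, which is the behavior the gRPC stream in the demo relies on.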
It's defined inside of my proto file, and in gRPC it's just going to publish this out and it'll stream back over. So I don't know why my request is crapping out; it works right now and then stops working when I turn this on. Let's just try doing something different and see if maybe the connection works better this way, or if I'm still going to get a problem here. Nope, no problem. So to sum this up: gRPC is really fast. Can you hear me? Yeah. Yeah, gRPC is really fast, but it's tightly coupled. That's the main disadvantage, the tight coupling. I think when we get to the event-driven stuff, that's probably quite different. The other thing is this is really a one-to-one relationship; with Kafka, on the other side, we're going to talk a little bit about many-to-many relationships. All right, so we'll go on to GraphQL. GraphQL is probably the most interesting of these, maybe the one people are least likely to have seen, or maybe that's gRPC. GraphQL is based on the notion of queries. It was started by Facebook, since most people were accessing Facebook through mobile applications, and REST APIs get extremely chatty. That's the concept of overfetch and underfetch: I can get back way more information than I want from a REST call, or sometimes I don't get back enough, because one of the conventions in REST is that if I create a new object, I return the URI pointing to that object. So I might just return a 201 Created status and give you a URI, and then you have to make two different calls. That gets to be a bit of a pain for a mobile client, so Facebook came up with GraphQL. It has the notion of queries and mutations. Mutations are essentially posts, the notion of mutating the data: either creating something new or changing existing data.
The thing that's really nice and most interesting to me is the notion that there's a single endpoint, and you can dynamically query that endpoint. You don't have to write separate queries, and you write very, very little code from a client standpoint. You can also stream things through what are called subscriptions; similar to the way you can stream over gRPC, you can stream through GraphQL. It also has a nice built-in UI. This is the Quarkus UI for it, and there's a film application that's kind of the default GraphQL example, a Star Wars API. This gives you an example of how you build up queries: give me the film with ID 2, and I want back these values, even though there can be more values in there. So let's take a look at a bit of that. All right, so this is our orders, and we have a schema. We can get it using introspection from the server, and you'll notice in here we've got a query for all orders, a mutation called place order, and a subscription called in-progress orders. So let's place an order. The API is nice because we can look at this: a large coffee, and that's a mutation, so we can place an order for a large coffee. The data we get back at our client looks a lot like JSON, or is JSON, so we can deal with it really easily. On the server side, though, it doesn't look the same as generating JSON; it looks really different. So let's look at what place order looks like over here. We annotate it as a mutation, we tell Quarkus it's a mutation, we give it a little description that's going to come up in your API, and then we call place order with a place order command. I create that and then I also stream it: it goes over a broadcast processor, and then I return the order record.
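The three operation kinds described above look roughly like this on the wire. These are illustrative shapes only; the demo's exact field and argument names may differ:

```graphql
# Mutation: place an order for a large coffee
mutation {
  placeOrder(order: { name: "Rob", menuItem: LARGE_COFFEE }) {
    id
    name
    menuItem
    orderStatus
  }
}

# Query: only Rob's orders, and only the fields we care about
query {
  allOrders(name: "Rob") {
    name
    menuItem
  }
}

# Subscription: stream orders as they move through the system
subscription {
  inProgressOrders {
    id
    orderStatus
  }
}
```

The selection set inside the braces is the whole trick: the client, not the server, decides which fields come back, which is what eliminates the overfetch problem.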
So this is the order record that I got back right here. Now, if I want to see all orders, I can come over here and send this query over, and I get back all the orders that I've been putting in during this demo. That's nice, but the real power here: one, I can put in some arguments. So let's say I query on the name Rob; now I only have Rob's orders. That's pretty nice, because I didn't have to write any special code to do that. Let's take a look at what this query looks like. This is my all orders query, and all I had to do to implement it is give it the potential parameters: I said we can query on name, menu item, order status, and payment status, and I can take one of these off or add extra things to match whatever I want to allow somebody to query on. That's all I have to do; I just call the order service with a query and pass in this params builder, a little fluent API to build out the query. Actually, I built that fluent API just to make it a little bit easier, but nonetheless, you just pass over the parameters for your query. I can do any of these things. So if I say menu item, I could say espresso: how many espressos do I have? I've got two espressos. Another nice thing: instead of querying everything, let's say I don't actually want all these things back; I just want the name and the menu item. I can unselect all the other fields, and I can tailor the data I'm getting back based on my needs, which is really nice. This is pretty slick. As for what this looks like from a client perspective, there are different ways we can do it. Quarkus has both a dynamic client and a type-safe client. The type-safe client is an interface that you implement, so I can write my own methods in here.
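That "params builder" idea, a small fluent API that assembles the GraphQL arguments and field selection for you, can be sketched like this. The builder and its field names are hypothetical, not the talk's actual helper:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical fluent builder that assembles an allOrders GraphQL query:
// optional arguments plus an explicit field selection.
public class OrderQueryBuilder {

    private final Map<String, String> args = new LinkedHashMap<>();
    private List<String> fields = List.of("name", "menuItem");

    public OrderQueryBuilder name(String name) {
        args.put("name", "\"" + name + "\"");   // string arguments are quoted
        return this;
    }

    public OrderQueryBuilder menuItem(String item) {
        args.put("menuItem", item);             // enum values go unquoted
        return this;
    }

    public OrderQueryBuilder select(String... fieldNames) {
        fields = List.of(fieldNames);           // tailor what comes back
        return this;
    }

    public String build() {
        StringBuilder sb = new StringBuilder("query { allOrders");
        if (!args.isEmpty()) {
            sb.append("(").append(String.join(", ",
                args.entrySet().stream()
                    .map(e -> e.getKey() + ": " + e.getValue())
                    .toList())).append(")");
        }
        sb.append(" { ").append(String.join(" ", fields)).append(" } }");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(new OrderQueryBuilder().name("Rob")
            .select("name", "menuItem").build());
        // query { allOrders(name: "Rob") { name menuItem } }
    }
}
```

The resulting string is what a dynamic GraphQL client would execute against the single endpoint.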
I said, you know, list all orders, but I can implement other methods in here, and then the dynamic one we build up dynamically: I can say give me all orders with name and menu item, add other fields, and then just call the dynamic client's execute. So this allows consumers of your API, consumers of your microservice, to query things in the manner they're most comfortable with. We can also stream; let's do our subscription here. So I really like this a lot versus the RESTful HATEOAS pattern. Let me talk about the HATEOAS pattern for a second, just to compare. Basically, in REST, if I want to dynamically bring something back, say I have the orders that Jeremy just had, I would have to create an href link and bring that back so that you could then query it again, and that creates a lot of chatty behavior if we did it with REST. If I wanted all the Rob orders versus all the orders, I'd pass back an href that would give you a link to get just the portion of the orders you wanted. For each of the things that Jeremy just selected or deselected, I would have to pass back another link and then do some other kind of query. Jeremy is doing it dynamically with GraphQL, and that really reduces the round trips I'd have to make to do that same kind of process with REST. I think it's pretty nice as well. I like this ability to stream these orders; we're streaming these orders as you're doing them. So that's how you do queries, how you do mutations, and how you stream your orders. I guess we could look at the Quarkus UI; it's built into Quarkus, like most of these things are. Apollo is the Node.js client that's most common. Where is our services extension? Where is GraphQL? There we go: the GraphQL UI. Here you go. You can write some queries.
Let me just cheat and paste in one of the queries that I have written here and run it. That didn't take; let me clean that up a little bit. I'm going to do this wrong; I should have another all orders in there. Anyway, the UI exists right here if you're more confident than me at writing your queries in real time. We don't have that long left, so let's jump ahead. Your domain gets exposed as a graph that you can pretty easily consume over a single endpoint. It's a great choice for implementing any kind of API where people are querying your domain. In terms of performance, although GraphQL may not be as fast as, say, gRPC, it certainly helps that you're tailoring the amount of data coming back over the wire. Disadvantages: there's a learning curve. A couple of comments: it's bringing back JSON, so I think you should consider gzipping that via content encoding, because it's really going to be more performant. And just be careful of creating chatty behavior by allowing so many types of queries. Now we'll get to Kafka and event-driven architecture. Kafka is a little bit different. It's typically known for streaming applications, and it marries really nicely with event-driven architectures. Essentially, it's an append-only log, and it's known for its performance and scalability. It is super, super fast, and it's easy to scale up massively. Now, there are some things you have to take into account to do that, but it certainly can. I don't know why we have underfetch up there; that shouldn't be on the slide, I don't believe. Some of the things that are good: you don't have to use a schema. You can use a schema, but I like just using JSON; that's what I've got in the example. It's extremely performant, which I already mentioned. Another big advantage is that because it's essentially an append-only log, everything that gets sent to Kafka is persisted in that Kafka log, and you can go back and replay it.
If I want to change my business logic, I can store that off, go back, make changes to my business logic, deploy that, and run the exact same messages back through the system in the same time frame as before. I can see if my changes exhibit the characteristics that I expect: if I'm changing my business logic to get a different outcome, I can verify that using actual data, the exact same data. You can also use that for debugging. For the people I work with who use Kafka, this is one of the biggest things they like about it. Anything to add, Rob? Yeah, I just like the fact that it's many-to-many. I can pass different messages through it, and it opens me up to more types of event-driven architectures, like CQRS, the outbox pattern, and things of that nature, where I can actually do some more advanced microservices architecture. Now, that's also part of one of my disadvantages: it's asynchronous, and it means that your application will be eventually consistent. When we're using REST, we're going to post something over and get something back pretty quickly. We talked about how you can do that asynchronously, but it's still mostly synchronous. The same thing is true with GraphQL or with gRPC. But with Kafka we can't do that. We're going to pass a message, then something's going to happen somewhere else, and then it's going to have to let us know that it finished doing that work. As Rob mentioned, you see CQRS: let's say my service sends something over to the barista who's actually going to make the drink. The barista might just write to a database, and there are tools like Debezium that can watch that database and pop a message back onto a Kafka topic to notify the application. Or the application that does the processing can just pop something back onto a different Kafka topic. There are different ways you can do that, right?
But you have to build decoupled applications using Kafka. That's both a pro and a con; overall, I think it's more of a pro, but it's not going to be as fast as doing request-response. Now, the way this looks in code is a pretty simple thing. This would be a REST endpoint with the path of kafka, and it's got an emitter: if you post a message to place order, it's just going to send it out onto a Kafka topic, which is effectively a Kafka queue. Then it's going to listen on another one, consume that, and when it's ready it's just going to say it came back. I also have another service over here that's also running, which is my drink station that's going to actually make my drink. It's got a Kafka resource as well, and it's going to be listening and consuming; I added this blocking annotation because it's running on an event loop. What I'm going to do here is make the thread sleep for some random number of seconds to make it seem like we're waiting for a drink to be made. When it's done, it's going to send it out to a different Kafka topic. So one microservice gets a message and pops it onto a Kafka topic, and the other microservice does some processing and pops another message back onto a different topic. And that is also why it marries so nicely with an event-driven architecture: these are events, right? An order has been placed; an order is now ready. It's easy to model those as events. So let's send this out.
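The shape of that exchange, one service emitting onto an orders topic and a drink station consuming, working, and emitting onto a ready topic, can be sketched without a broker using in-memory queues. This is a stand-in for the real Kafka topics and the Quarkus @Incoming/@Outgoing wiring; the topic and class names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two in-memory "topics" stand in for Kafka; the drink station thread is the
// decoupled consumer that eventually reports each order as ready.
public class CoffeeShopSim {

    static final BlockingQueue<String> ordersTopic = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> readyTopic = new LinkedBlockingQueue<>();

    // Drink station: consume an order, "make" the drink, emit onto the ready topic.
    static Thread startDrinkStation(int ordersToServe) {
        Thread station = new Thread(() -> {
            try {
                for (int i = 0; i < ordersToServe; i++) {
                    String order = ordersTopic.take();   // like @Incoming("orders")
                    Thread.sleep(50);                    // pretend to make the drink
                    readyTopic.put(order + " ready");    // like @Outgoing("orders-ready")
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        station.start();
        return station;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread station = startDrinkStation(2);

        // Order service side: emit orders, then eventually see them come back.
        ordersTopic.put("large coffee");
        ordersTopic.put("espresso");

        List<String> ready = new ArrayList<>();
        ready.add(readyTopic.take());
        ready.add(readyTopic.take());
        station.join();

        System.out.println(ready); // [large coffee ready, espresso ready]
    }
}
```

Eventual consistency is visible in the structure: the order service never calls the drink station, it only sees a message arrive later on a different topic.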