Okay, we should start. Thank you for coming, and welcome to my session on MicroProfile Reactive specifications. My name is Martin Stefanko. I work at Red Hat as a senior software engineer, mostly on middleware technologies like WildFly and JBoss EAP, and more recently on Quarkus and SmallRye. I have also been a MicroProfile committer since last year, and I'm a big microservices enthusiast. If you want to catch me on Twitter, it's Stefanko. What we are going to talk about today is reactive programming, or reactive systems in general. I always like to start this talk with a short history of how we got where we are and why we made the decisions we made along the way. It all started in 2014 with a document called the Reactive Manifesto, a one-page document available at reactivemanifesto.org, which was put together by a group of, I think, five or six independent developers who identified a set of properties that they commonly found in modern, scalable enterprise systems. They defined systems with these properties as reactive systems and published the document on the Internet for people to see, and to try to achieve what they considered useful in modern applications. So they defined a reactive system as a system that has these four properties, and we will take them from the bottom up. At the bottom we have message driven, which should really be read as asynchronous message passing. It means that we are not sending messages directly to some precise address, as we do with URLs or something similar; we are sending them to named channels, or data pipes. We just send the message to a dumb pipe, and we don't really care about that message anymore. This asynchronicity basically means that we send a message to the pipe and we don't know whether it's going to be received, or when it's going to be received. Optionally there are acknowledgments on the receiving side, if you are familiar with any messaging at all. This is not a new idea.
What this in turn gives us (and I'll switch slides) is elasticity and resilience. Elasticity is the ability of your application to scale up and down on an as-needed basis: if you have many requests, you scale up; if the requests decline, you scale back down again. This follows directly from the named-channels idea, because if you are sending to a named channel, you don't care whether there is a load balancer behind it, how many instances there are, or whether they are going down and up; you are sending to a channel and you know the message will eventually be processed. This leads us naturally to resilience, which is the ability to handle failures. If you scale down to zero, you have no services, but usually the pipes are clever enough (Kafka, for instance) to save the messages for you, and when the service comes back up, it will start processing them again. And if you take these three properties together and put them into a system, you get the most important property, which is responsiveness. Responsiveness is the ability of your application to process a request in a timely manner. It doesn't matter whether it's successful or there is an error; users just hate it when they click something and have to wait three or four seconds until something happens. Then they start clicking repeatedly, and you are basically denial-of-servicing yourself, and then it's a different kind of story for your users. From reactive systems and these properties, people quickly realized that this is not something new. We were already doing this, and people call it reactive programming. That's a programming model in which you are not specifying a computation as a sequence of steps, one operation after another; instead, you are reacting to some stimuli. The stimulus can be anything: a user click, or a change in an Excel spreadsheet.
You have one cell that depends on another, so when you change something in one cell, the dependent cell needs to react to it. So we are already using this in our everyday work. What all of these reactive programming models have in common is that they try to be non-blocking: non-blocking in the sense that you are not scaling threads per request, and you are not blocking individual computations. I don't think I need to say anything more to that. With reactive programming and non-blocking operations, we came up with something called Reactive Streams. This is an API, or a specification, which defines asynchronous data flows (those named pipes) with non-blocking backpressure. What does that mean? Again, we have these named channels to which we are sending messages, and backpressure works like this: you have a publisher and a subscriber, a consumer which is consuming messages. The consumer can become overwhelmed by the messages; if you are sending too many, the consumer may not have a way to process them all. So backpressure is a way for the consumer to tell the publisher: please stop for a moment, I need to finish processing what I have already received; when I am ready again I will tell you, and you can continue sending messages. As I said, this is really an API consisting of four interfaces: Publisher, something which produces messages; Subscriber, something which consumes them; Processor, an object which is both a Publisher and a Subscriber, so you consume from one channel and produce to another; and Subscription, a class which maps to the link between a Publisher and a Subscriber, so it represents that relationship. Since JDK 9 these interfaces are actually included in the JDK itself, under java.util.concurrent.Flow. So how does it look when you try to use this API? You have a publisher and a subscriber.
The subscriber calls the subscribe method on the publisher, passing itself as an argument, and in turn the publisher must invoke the onSubscribe callback on the subscriber with the Subscription object that was created. When the subscriber then wants to get some data from the publisher, it calls the request method on the subscription with a number representing how many messages it is able to receive. This directly implements the backpressure I was talking about: if I am not able to process more than two messages, I can request only one or two. Then the publisher invokes the onNext callback on the subscriber with individual values from the data pipe, or stream, so the subscriber can consume them and do whatever it needs with them. This is repeated as many times as needed, or until there are no more values; in that case the publisher is required to invoke the onComplete or onError callback on the subscriber, signaling that the stream has completed or that some error happened (and the error is passed along). So this is pretty straightforward, actually. All four interfaces together have, I think, only seven methods, so they look easy to implement; but it turns out it's not that easy, and many people think these interfaces shouldn't have been included in the JDK itself, because they think of them more as an SPI, a service provider interface, than an API. There is actually a TCK, a technology compatibility kit, for this, around 38 tests I think, and it's not easy to get them all passing. You really need to think about many edge cases, which all of the existing implementations rely on. If you are interested in learning more, there is a really interesting talk from Devoxx Poland last year where Jacek Kunicki tries to implement this live; in 30 or 40 minutes he is able to pass 16 out of the 38 tests, and he doesn't even get all the way through.
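The protocol just described (subscribe, onSubscribe, request, onNext, onComplete/onError) can be tried out with nothing but the JDK, since these are exactly the interfaces shipped in java.util.concurrent.Flow since JDK 9. This is a minimal sketch of my own, using SubmissionPublisher as the publisher; the item values are illustrative:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

        publisher.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);            // backpressure: ask for one item at a time
            }
            @Override public void onNext(String item) {
                System.out.println("received: " + item);
                subscription.request(1); // request the next item
            }
            @Override public void onError(Throwable t) { t.printStackTrace(); }
            @Override public void onComplete() {
                System.out.println("completed");
                done.countDown();
            }
        });

        List.of("Luke", "Leia", "Han").forEach(publisher::submit);
        publisher.close();               // signals onComplete once buffered items are delivered
        done.await();                    // wait for the asynchronous delivery to finish
    }
}
```

Note that delivery happens asynchronously on a ForkJoinPool thread, which is why the latch is needed before the JVM exits.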
So we have these four interfaces of Reactive Streams, but that is only a basic publisher/subscriber model; we are just consuming values. Usually, if you are using JDK 8 streams, you want to do some operations on them: map something, filter something, these kinds of operations. For that reason a set of libraries was created which are commonly called reactive extensions; the most popular ones are RxJava (or Rx-whatever-language the reactive extensions are available for) and Project Reactor, which is used in Spring. They both implement Reactive Streams themselves, and the Reactive Streams API actually serves as a bridge between the different implementations: you can create a publisher with Reactor (with Flux, for instance) and consume it with RxJava. And here is exactly where MicroProfile comes in. Are you familiar with MicroProfile? Everyone is. So this is what we are going to talk about today: MicroProfile Reactive. MicroProfile Reactive is in fact two separate specifications right now. The first one is MicroProfile Reactive Streams Operators, which provides a single class, ReactiveStreams, which serves as a builder for Reactive Streams, plus a set of operators like map, flatMap, et cetera. The other one is called MicroProfile Reactive Messaging, which is the main specification for which we created MicroProfile Reactive, and it provides a mapping of Reactive Streams onto the CDI model. So you are able to create CDI beans and define, with a couple of annotations, the processing of Reactive Streams. You are not touching the API directly; you are just saying that you want to consume or produce messages. And with that I will get to my first demo. Hopefully everything works; please bear with me if I make some mistakes, and feel free to shout if you see what an error is. For my demos today I will be using Quarkus. Quarkus is a runtime from Red Hat.
There are several talks on it at this conference. If you haven't heard about it yet... oh, I was too fast and I forgot to delete this; yes, I was trying this out earlier. So if you haven't heard about it before, definitely check it out at quarkus.io: really fast, based on GraalVM, you are able to compile to native, and what I like the most is actually this, the Quarkus dev mode, or live reload mode. So if I now start Quarkus in this live reload mode (is it big enough for everybody? great), I can go into my service, which I have here, and just open something which is usually generated for you, but my custom script generated it for me. I have a single JAX-RS resource which is returning "hello", so I can try it out here and I will get back "hello". What live reload means is that I can go into my IDE and just change the string, say to "Devoxx", save it, go back to my application (I haven't stopped it, nothing), just repeat the same call, and the change is compiled in directly. This is something very powerful, and nowadays when I'm trying to show something or try some really small piece of code, this is actually faster for me than compiling a main class. If you want to use basically anything with Quarkus, you add it as an extension, and there is a command in the Quarkus Maven plugin called list-extensions which gives you back a list of all the extensions currently available. These are being updated on basically a monthly basis; the community is always building new projects into Quarkus. This is mainly because there are some kinds of hacks you need to do if you want to compile your application to a native executable, and the extensions are doing this for you so you don't need to do it yourself. So, I was saying that I am going to use Reactive Streams Operators. Here is a command that I can just copy, mvn quarkus:add-extension. Really pretty easy: I will just copy-paste this, find the extension that I want, and copy it (copy, come on). Copy the extension, let's say. And hopefully I have it there, yes.
I will just provide it in a single command and run it, and it will install the extension; really what it does is add a new Maven dependency, but I can do it in a fancy way. So now we can actually start working with Reactive Streams Operators, and for that I will create a new resource. I will call it just RS1, and we will do void rs1(). I told you that Reactive Streams Operators is basically all about a single class, which is called ReactiveStreams, and ReactiveStreams is actually a builder. We will be using the builder, but for now we just want to create a custom data flow with only a few values, so I will create a few values here. I'm a big Star Wars fan, so sorry. Then you already have the operations that you usually use on streams, so I can map something, maybe do uppercase; I can filter only the ones starting with L; and then I can push them directly to a Reactive Streams subscriber, or to something else that comes from Reactive Streams, but we don't need to care about that now. You can also collect them directly to a list, and that is what I'm going to use right now. Sorry? Preferences. Ah yes, thank you. So basically now we have something called a CompletionRunner, and you need to run it yourself. This is not something you are going to use when developing with the messaging specification, because there it lives somewhere inside the framework providing the implementation, and it runs the stream for you. So hopefully, if I typed everything right... I already have a new terminal here, and if I now invoke that rs1 endpoint, my application is recompiled, and I didn't print anything, right? Sorry about that. The run method returns a CompletionStage, so we can just do thenAccept(System.out::println), run it again, it will recompile again, and we get our result back. This is nothing fancy, but it is already a way to deal with Reactive Streams, and we will use it later.
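Reconstructed from the demo narration, the RS1 resource might look roughly like this. This is a sketch under my own assumptions (the MicroProfile Reactive Streams Operators dependency on the classpath, and my choice of path and values); the spec's engine implementation runs the stream when run() is called:

```java
import java.util.List;
import java.util.concurrent.CompletionStage;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.reactive.streams.operators.ReactiveStreams;

@Path("/rs1")
public class RS1Resource {

    @GET
    public void rs1() {
        // Build a small custom data flow, transform it, and collect to a List.
        CompletionStage<List<String>> result = ReactiveStreams
                .of("Luke", "Leia", "Han", "Chewbacca")
                .map(String::toUpperCase)          // map operator, as on JDK streams
                .filter(s -> s.startsWith("L"))    // keep only names starting with L
                .toList()                          // terminal stage producing a List
                .run();                            // CompletionRunner: you run it yourself

        // run() gives back a CompletionStage, so print asynchronously
        result.thenAccept(System.out::println);
    }
}
```

The point of the example is the shape of the API: the builder mirrors java.util.stream, but the terminal operation hands back a CompletionStage instead of a value.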
So one more example that I want to show you is how to use raw Reactive Streams with Reactive Streams Operators. I will do just RS2, and here we again use ReactiveStreams, but now we will use the method fromPublisher, which takes a publisher, a Reactive Streams Publisher. Then we will push it through a processor, a Reactive Streams Processor, and at the end we will... it doesn't work like this. We will push it to a subscriber; I just need to quiet IDEA's help. We will push it to a subscriber, and of course at the end we need to run it, but IDEA is not playing along right now. So let's create the publisher, processor and subscriber really quickly. This will be a Publisher of, let's say, Long, and as I was saying, there are multiple implementations of Reactive Streams (come on, yes, thank you). The implementation that we are using, which is SmallRye Reactive Messaging, comes with an RxJava dependency, so I can use Flowable right here and just do interval every half a second, and that's everything I need to do: I already have a publisher which will produce values every half a second. I will just limit it to 10 values, because otherwise it would go on forever.
For the processor it's a little bit more complicated: we need to specify two type parameters, the one we are consuming, which will be Long, and the one we are producing, Strings. Now we again start with ReactiveStreams, but this time I will use the builder directly, and I want to use map here, for instance. Here Java's type system comes up a little short: it thinks the value is an Object if you don't help it, so I say that I actually know this will be a Long, and now it gives me a Long. Let's just concatenate this long with the word "iteration", and let's also map it to uppercase (sorry, toUpperCase). Then all we need to do is call buildRs, and ReactiveStreams already knows this is a processor, so it will build a Processor for you. There are a lot of overloads of these methods inside the ReactiveStreams class; you can check them online in the documentation. The last thing we need to take care of is the subscriber, and this will actually be a Subscriber of String, not something that extends it. And here I do something that I told you not to do, so don't try this at home, because I will implement the Subscriber myself. I will write just a basic subscriber which will definitely not work in any normal use case, but for demo purposes it's enough. The first thing we need to do is save the subscription (if I can type it right), so we now have our subscription when we subscribe to any stream. Then we need to use that subscription to actually request some data, and I will request only one value. The onNext callback will be invoked for every value produced by our publisher, so I will just print the value, and then I need to request another value, otherwise nothing more happens. In onError I will again just print the error, and in onComplete I will just print that we completed. With that, if I close this, we should be able to compile, and if I typed everything right (sorry), we should see something now, hopefully, and we see that our subscriber is now
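Put together, the RS2 demo just narrated might look like the sketch below. This is my reconstruction: it assumes the RxJava dependency that ships with SmallRye Reactive Messaging for the Flowable, and it repeats the speaker's warning that a hand-written Subscriber like this ignores most of the Reactive Streams rules and is for demo purposes only:

```java
import java.util.concurrent.TimeUnit;
import io.reactivex.Flowable;
import org.eclipse.microprofile.reactive.streams.operators.ReactiveStreams;
import org.reactivestreams.Processor;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public class RS2Demo {
    public static void main(String[] args) throws InterruptedException {
        // Publisher: one Long every 500 ms, limited to 10 values (RxJava Flowable)
        Publisher<Long> publisher = Flowable.interval(500, TimeUnit.MILLISECONDS).take(10);

        // Processor: Long -> String, built with the ReactiveStreams builder
        Processor<Long, String> processor = ReactiveStreams.<Long>builder()
                .map(l -> ("iteration " + l).toUpperCase())
                .buildRs();

        // Subscriber: naive hand-written implementation; would fail the TCK
        Subscriber<String> subscriber = new Subscriber<String>() {
            private Subscription subscription;
            public void onSubscribe(Subscription s) { subscription = s; s.request(1); }
            public void onNext(String value) { System.out.println(value); subscription.request(1); }
            public void onError(Throwable t) { t.printStackTrace(); }
            public void onComplete() { System.out.println("completed"); }
        };

        // Wire the three stages together and run the flow
        ReactiveStreams.fromPublisher(publisher)
                .via(processor)
                .to(subscriber)
                .run();

        Thread.sleep(6000); // keep the JVM alive while values arrive every half second
    }
}
```

The fromPublisher / via / to chain is exactly the bridging role the API plays: the publisher comes from RxJava, the processor from the MicroProfile builder, and the subscriber is plain Reactive Streams.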
consuming a value every half a second, with our transformation applied to the stream on the fly. With that I will close this, and we'll go back to the slides for a little while. If there are any questions, please ask as I go, or save them for the end; it doesn't really matter. So the other, and the main, specification of MicroProfile Reactive is Reactive Messaging, and that CDI model I was talking about. If you are a Java enterprise developer, you probably know that nowadays MicroProfile is really based on the CDI model, so we encourage people to put everything into beans and use it that way. In MicroProfile Reactive Messaging we expect that the stream is created in some CDI bean, then it's passed to a different bean which can, for instance, map it to something, map it again to something else, filter something, and in the end you just consume it somehow, to the console or somewhere else, or you push it onward. There is also a notion of how you can connect your streams to different systems, and this is called connectors. Basically a connector is a plug-in, an extension that you can put into your application. If we take the application as that big rectangle, we have several beans; in those beans you are doing your own processing, wiring the flows as you like, but you use a connector at the edge of the service, at the API of your service, to actually connect to a different system, and this is provided for you by the implementation. So you can for instance say: I want to consume this stream from Kafka. You just say somewhere that you want to use the Kafka connector, and the stream will be filled for you from the Kafka topic; you can consume it in your CDI bean, put it through a chain of CDI beans as you like (you don't even need to keep it in one bean), and again on the other side you can plug in a different connector which is connected to something else. Of course these values can also be produced by users themselves, so you
will just push the messages from users onto a new stream, similarly to what I did with the Flowable earlier. What Reactive Messaging really is, from the user perspective, is currently two annotations (there will be a third soon). The first one is @Outgoing, which just takes the name of the channel to which you are producing messages, and the second one is @Incoming, again with the name of the channel from which you are consuming messages. If you combine both of them, you are basically creating a Reactive Streams processor: you are consuming from one stream and pushing values to another. And with that I will get to my last demo, in which we will try to rewrite an HTTP microservices deployment to use Reactive Messaging. What we start with is this architecture: we have our users, who can request a coffee over HTTP from our frontend, the coffee shop, and the coffee shop in turn, again over HTTP, asks the barista service to prepare the coffee. There is a random sleep time in the barista which simulates the preparation of the coffee, and when it's finished, the chain returns back to the coffee shop, and the coffee shop passes the information that the coffee is prepared to the user. You would say that this is how we usually order coffee and people are happy with it, but usually they are not, because if you order the coffee this way, you need to wait for the duration of the whole chain, including that random sleep time, and until it is finished you are not able to do anything; you are just waiting, blocking the thread. And when, for instance, the barista service fails, because of the network or something similar, and you are not able to contact it, the coffee shop will just keep trying to contact the barista service; but the service is down, so it will just propagate the error back to the user. What will the user do with the error? Again, nothing. This is usually how you order coffee at stands like the ones here: you come there, you pay the money, and you
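The two annotations just described combine into a processor bean like this minimal sketch (channel names and the transformation are illustrative, not from the demo):

```java
// A minimal Reactive Messaging processor bean: consumes each payload from one
// channel, transforms it, and produces the result to another channel.
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class Shouter {

    @Incoming("lowercase")     // consume from the "lowercase" channel
    @Outgoing("uppercase")     // publish results to the "uppercase" channel
    public String shout(String word) {
        return word.toUpperCase();
    }
}
```

Note that the method body never touches Publisher or Subscriber directly; the implementation builds the Reactive Streams processor around it.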
will stand there until you get the coffee, and you cannot do other stuff while you are waiting for it. So what we are going to do is actually move to asynchronous message passing, to that reactive system. We will still get an HTTP request from our user to our coffee shop frontend, but from then on, no more HTTP: we will send messages to two queues, actually Kafka topics; one is named queue, the other one orders. We will directly return a response to the user saying: yes, we got your order, and your order will be prepared somewhere in the future. From the queue topic there is a board service (which actually lives in the same service as the coffee shop, but for the architecture it's better shown this way) which will just display the messages to the user, so we will have a kind of table saying: yes, your order is in the queue, and when it is finished it switches to ready. From orders we have another microservice, which is again the barista, taking orders and producing "beverage ready" messages to the queue topic; again this is displayed on the board, and the user can read from the board that the coffee is ready, go take it, and go away. If you are a fan of Starbucks, this is more like the Starbucks model: you get a ticket, you go sit somewhere and do your stuff, and when your coffee is ready they will call your name, or some variation of your name, and you come for the coffee; in the meantime you are doing whatever else you need to do. So, okay, I will try to type it all correctly now; I still have plenty of time, so hopefully we will finish it. I have the architecture already prepared here: my coffee shop, my barista, and my client. I will just start the services (sorry), and now they are running on the HTTP architecture. So if I go to localhost:8080, hopefully it is running, and I request a new coffee; we see that it is in progress, and when it is finished, hopefully we get that it is ready. But what I mean by blocking is that when I
click this order button, I cannot click it again; I can order only one coffee at a time. It is actually shown nicely here: if I do just HTTP, you can see that for the duration of the coffee preparation I am basically blocked, I cannot do anything; when the coffee arrives I can continue my processing, but I am blocked for the duration of the preparation, which is a random time. So hopefully it won't take us long to rewrite this application to use Kafka in the background with Reactive Messaging. The first thing, which I always forget, is to actually start Kafka, and we need to create the topics we are going to use, because we want to distribute the orders between different baristas at the same time. Now we should be good to go, if Kafka is running; hopefully, and it is. We can start by adding the dependencies to our services, because currently they are not there. All I need to type here is mvn quarkus:add-extension again, and we are adding reactive messaging and the SmallRye Kafka connector, because we want to connect to Kafka. I need to run this command for the coffee shop (you see that it was restarted), and I need to run it also for the barista; now the barista is restarted too, I can close this, and we see that the extensions have been installed. We should now be able to rewrite our service to use Kafka instead of HTTP. We start with this Beverage class, which represents the JSON that is sent to the board resource. I will actually copy the one from our coffee shop service, because it has one more field, and that's the preparation state, which says whether the coffee is still in the queue or already prepared. This is necessary for us now; it was unnecessary for HTTP, because there you were just returning the coffee (sorry), but right now it's better, because now the barista will actually say when the coffee is ready. So I will just copy-paste this here and replace it; it really just adds this preparation state and
adds a new "queued" factory method, which will just mark the beverage as queued (we will not even use this one here), plus the getters we need for JSON-B, et cetera. So in our barista resource we can actually start retyping. I will close this. Retyping this JAX-RS resource: we'll just fix this error first; it's only saying that one required parameter is now missing, so we are just saying that when the coffee is prepared, it's in state ready. So this is our JAX-RS barista resource: a single POST method on /barista, which just sleeps for a random time and prepares a coffee. What do we need to do to transform this into a Reactive Messaging CDI bean? Actually, in Quarkus all JAX-RS classes are also CDI beans, but for our use case I only need it to be a CDI bean; it doesn't need to be a JAX-RS resource anymore. I can just rewrite this @POST annotation to @Incoming, which we were talking about, and it will be consuming from the orders channel (sorry), and add @Outgoing, so we will be producing to queue, the same way as we had in our architecture diagram. And I think that's it; this is everything you need to do to transform an HTTP service to be based on Reactive Messaging: three lines. Okay, there is one more thing I need to add, and that's a codec, because now we are going to consume an Order, which in our case is JSON, from Kafka, and we need to give a hint to the SmallRye implementation that this is actually an Order, so that JSON-B is able to deserialize it. For that we need to create a single class, which will live in a codecs package; it will be OrderDeserializer, which only needs to extend the JSON-B deserializer with Order, and we need to give it a no-arg constructor which just calls super with the Order class. We are working on a way to detect this automatically, but right now you need to add this single dummy class just to tell the implementation which JSON-B class to deserialize into. So this should be it; the only thing left is to actually configure the Kafka
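Reconstructed from the narration, the rewritten barista and the codec class might look like this. Class names, the `ready` factory, and the base-class package for the JSON-B deserializer are my assumptions; the deserializer's package has moved between SmallRye/Quarkus versions:

```java
// Barista: the HTTP endpoint rewritten as a Reactive Messaging bean ("three lines")
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
class Barista {

    @Incoming("orders")        // consume orders from the "orders" channel
    @Outgoing("queue")         // publish prepared beverages to the "queue" channel
    public Beverage process(Order order) throws InterruptedException {
        Thread.sleep((long) (Math.random() * 5000));  // simulate preparation time
        return Beverage.ready(order);                 // hypothetical factory: state READY
    }
}

// codecs/OrderDeserializer: dummy class hinting the JSON-B target type for Kafka
// (base class location is a guess; check your SmallRye/Quarkus version)
class OrderDeserializer extends io.quarkus.kafka.client.serialization.JsonbDeserializer<Order> {
    public OrderDeserializer() {
        super(Order.class);    // tell JSON-B which class to deserialize into
    }
}
```

The deserializer carries no logic of its own; its only job is the `Order.class` argument passed to the superclass constructor.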
connector, and for that I need to help myself, because this is a bunch of properties that you will never remember (which is why I don't remember them), but I can explain what they do. Basically, each MicroProfile Reactive Messaging config property has this shape: always mp.messaging at the beginning, then incoming or outgoing, depending on the channel you want to configure (you can have the same channel name for incoming and outgoing messages), then the name of the channel, orders or queue, and then the individual properties that are passed to the configuration. So we are just saying that for our orders channel, which here is incoming, we want to use the connector we added through the command line; as the value deserializer we use that class I created a while back; and here are a few properties for Kafka itself, which just configure it for an example I want to show later. And similarly for the queue topic: it's outgoing, the name of the channel is queue, we are going to use Kafka, and we can use the default serialization here. So that should be everything we need to do for the barista service. The only thing left for me to do is actually restart it, because I have it in dev mode, and for that I need to make an HTTP request, so I will just restart it by calling the health endpoint; but this is a side note. With that, I think we can move to our coffee shop service, and we will start in the CoffeeShop class. For now we have two endpoints. One is the HTTP one you saw right now, which is just doing a blocking HTTP call; here we have an async version which is just returning a CompletionStage. Just to save time: this behaves exactly like the HTTP version. What default JAX-RS implementations do when you return a CompletionStage is substitute the method invocation onto a different thread, but as an
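The barista configuration just described would follow the mp.messaging property pattern roughly like this (a reconstruction; the connector name, key names, and deserializer values are illustrative and may differ by SmallRye version):

```properties
# Barista service, application.properties
# Pattern: mp.messaging.[incoming|outgoing].<channel-name>.<attribute>

# incoming "orders" channel: read from Kafka, deserialize JSON into Order
mp.messaging.incoming.orders.connector=smallrye-kafka
mp.messaging.incoming.orders.topic=orders
mp.messaging.incoming.orders.value.deserializer=codecs.OrderDeserializer

# outgoing "queue" channel: write to Kafka with default serialization
mp.messaging.outgoing.queue.connector=smallrye-kafka
mp.messaging.outgoing.queue.topic=queue
```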
external caller making the HTTP call, you are still blocked; it just executes on a different thread on the server, so the server can handle more requests, but you as an end user are still doing the same synchronous HTTP. So I will add here another POST method, but this time on /messaging, and what we will be returning here is the Order right away, because this will be async: it will directly return just some value. We can call the method messaging, and it will also consume an Order from our user, and we will start by just setting an order ID; I have a helper method for that. What we need to do now, if we check our architecture: we are now typing in this coffee shop service, and we basically need to send two messages to Kafka, one to queue and the other to the topic orders. So let's just do that. Since this is inside the method, this is the meeting point of the imperative and reactive worlds: the invocation of this POST method will still be done in a blocking way, a normal HTTP call, but now I want to switch to a reactive stream to create the flow. For that we actually have an option to inject a channel directly from the implementation, like this; we need to give it a name, and this first channel will be queue. What we inject here is actually an Emitter, a class which, as you can see, currently comes from SmallRye itself, but there is an open PR to move it to the spec, because it's very useful. We just need to say what we will be emitting, and what we will be emitting to our queue is Beverage, and we will call the field queue. We also need to inject the original orders channel; this will also be an Emitter, but this time we will be sending Orders, and this is the topic consumed by our barista service, so let's call it orders, sorry. To actually send a message with an emitter, it's really easy, because there is a send method; we just need to send a Beverage there, and for this case we have that
static factory to create a queued beverage object based on the order, and to our orders emitter we just send the order, and then we return the order, now with an ID. So we really just send two messages and return back to the user; the user gets back a confirmation that the order was created, and sometime in the future he or she will find on the board that it's prepared. That should be all we need to do here. So let's now actually create the board resource, because right now the frontend has no access to our application. I will create a new JAX-RS resource; let's call it the board resource. This will actually be the only JAX-RS resource consumed by our frontend (we have live reload in the frontend as well). What I need to do here is again to inject that channel, but this time we are going to consume from the queue, and for that we also use @Channel. We can call the field whatever we like, so we can for instance say beverages here, if I type it correctly. This time we are consuming, and this is slightly confusing: when we are consuming, we inject a Publisher, because we are actually going to publish from this queue to the frontend, but this is the way it needs to be done. We can just consume the String here directly; even though I know it is going to be JSON in there, I would need to serialize it back to a string anyway when sending it on to the client. Here we need just a single GET method which produces the JAX-RS media type SERVER_SENT_EVENTS; if you are not familiar with server-sent events, it's really useful stuff, but I don't have time to go into it. We just return our publisher here, getQueue returning the queue, and that should be everything that needs to be done, I think. Hopefully. The last thing left for us to do is again to put a bunch of
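Reconstructed from the narration, the coffee shop endpoint and the board resource might look like the sketch below. The @Channel and Emitter annotation packages, the helper method, and the field names are my assumptions (at the time of the talk these types lived in SmallRye, not yet in the spec):

```java
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.reactivestreams.Publisher;
import io.smallrye.reactive.messaging.annotations.Channel;
import io.smallrye.reactive.messaging.annotations.Emitter;

// CoffeeShop: bridging the imperative JAX-RS call into Reactive Messaging
@Path("/coffee-shop")
class CoffeeShop {

    @Inject @Channel("queue")  Emitter<Beverage> queue;   // messages for the board
    @Inject @Channel("orders") Emitter<Order>    orders;  // messages for the barista

    @POST
    @Path("/messaging")
    public Order messaging(Order order) {
        order.setId(generateId());              // hypothetical ID helper
        queue.send(Beverage.queued(order));     // "your order is in the queue"
        orders.send(order);                     // hand the order to the barista
        return order;                           // respond to the user immediately
    }

    private String generateId() { return java.util.UUID.randomUUID().toString(); }
}

// BoardResource: streaming the consumed channel to the browser via SSE
@Path("/board")
class BoardResource {

    // consuming side: we inject a Publisher, even though we "publish" to the frontend
    @Inject @Channel("beverages") Publisher<String> beverages;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public Publisher<String> getQueue() {
        return beverages;   // JAX-RS streams each value as a server-sent event
    }
}
```

The design point is that the POST method stays a normal blocking invocation; only the two Emitter.send calls cross into the reactive world, which is why the user gets the confirmation back immediately.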
properties to our application configuration. Sorry, I need to copy all of them, but I will go through all of them again. What we already have here: quarkus.http.port, the default port that the Quarkus application is going to run on; in our barista service we have 8081, so they don't conflict. This one is actually coming from a different MicroProfile specification, which is called the REST Client, and it's basically what we are using to call the barista service through HTTP. So this is again a different MicroProfile specification; it's just an easier way to structure your outgoing JAX-RS calls. And now we can get to our MicroProfile Messaging properties, so again the same structure, mp.messaging and so on. We have orders, which is outgoing, because in our coffee shop service we are actually emitting values to our orders channel, so we need to say outgoing, then the name of the channel, which is orders; again we are sending to Kafka, and we are serializing with the default JSON-B serializer. For our queue we are also using Kafka, this is the same one that is injected here, we are also using the Kafka serializer, and we are just telling Kafka that we want to broadcast this value. The last one to be configured is this one, which is actually incoming, because we are consuming beverages from the Kafka queue. So we need to say incoming, then the name of the channel; this needs to match the string which is passed to @Channel, or to the @Incoming or @Outgoing annotation. Then we are connecting to Kafka; the topic name in this case is not the same as the channel name, so we need to specify that it's queue. And again a bunch of properties for Kafka; we are just saying here that we want to use the default String deserializer from Kafka. And that's it. This is really not something that you would learn by heart; it's available in the Quarkus documentation, you can find it on the internet, there are a lot of examples and really nice guides. So if you need to configure something, usually you can find how to configure it; it's not something that I would type here on stage.

And with that I should now be able to call my messaging endpoint, which hopefully restarts the application, and I will get my coffee back right away. If you check the barista service here, you will see that the coffee is prepared after some length of time. So I am already finished here, and the coffee takes a random time, so unfortunately I need to try it several times. You see that I can do other stuff, I am not blocked, and when the coffee is prepared, the message is pushed back to the queue. So if I show you this in the front end, I will just refresh the front end and we will switch to messaging, and I place some order here. We see that it's in the queue, and then it's ready, but this time I can do several orders, as many as I want, and they will all be pushed to that queue, and eventually, when the barista is ready, it can pull the order from the queue, prepare the coffee, push the new message to the topic, and this will be displayed in our front-end service.

So what this actually allows us to do is to get exactly the properties of the reactive systems that I was talking about, because now we have these named channels and we are pushing messages somewhere. So if there are multiple messages in the queue and the barista service dies, what will happen? I can actually go and kill the barista for a while, and now I will go back to my front-end service and make a few orders. Now we can wait for a while, but nothing will happen, because there is no barista to actually process these requests. When the barista comes back from the break, he or she will see, because there is random name generation, that there are some, so it's a he, it's George, he will see that there are some orders in the queue which are not prepared, start pulling from the queue, and finish all the orders, hopefully. Again, thank you. So this would be the resiliency: if the service comes back again, it starts pulling messages from the queue again.
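The configuration walked through above can be sketched roughly like this. This is a non-authoritative sketch: the channel and topic names follow the demo, the serializer class names are the ones Quarkus ships for Kafka, and the REST Client line is only indicated, since the actual interface name was not shown.

```properties
# Sketch of the application configuration described above.
quarkus.http.port=8080

# MicroProfile REST Client: base URL of the barista service (HTTP variant);
# the key is <fully.qualified.RestClientInterface>/mp-rest/url=http://localhost:8081

# Outgoing channel "orders" -> Kafka, serialized with the JSON-B serializer
mp.messaging.outgoing.orders.connector=smallrye-kafka
mp.messaging.outgoing.orders.value.serializer=io.quarkus.kafka.client.serialization.JsonbSerializer

# Outgoing channel "queue" -> Kafka, broadcast to all subscribers
mp.messaging.outgoing.queue.connector=smallrye-kafka
mp.messaging.outgoing.queue.value.serializer=io.quarkus.kafka.client.serialization.JsonbSerializer
mp.messaging.outgoing.queue.broadcast=true

# Incoming channel "beverages" <- Kafka topic "queue"
# (here the channel name and the topic name differ, so the topic is set explicitly)
mp.messaging.incoming.beverages.connector=smallrye-kafka
mp.messaging.incoming.beverages.topic=queue
mp.messaging.incoming.beverages.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```

As he says, nobody memorizes these keys; the Quarkus guides list them per connector.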
If we have too many orders, we need to bring in another barista: we need that elasticity, we need to scale up, and they will just start taking messages from the queue on an as-needed basis; if the second one finishes sooner, it will just pull another message. For that I will actually stop this one and open another terminal, and I will package the application, because it's easier this way, so just a little while. If I now start this application manually, make this bigger, and start it on the other port, 8082, we should have two baristas ready to process messages. And if we go to our front-end application and I place new orders in the queue, hopefully there will be two different baristas taking the orders from the queue. And all I needed to change was basically three lines in the barista, switching it from blocking HTTP to actually using Kafka in the background, plus a bunch of configuration. So that would be everything that I had prepared for the coding demonstrations. Again, when the number of orders decreases, we just kill one of the barista services and we scale back down. Usually you are not doing this in a terminal like I am right now; you would just be sending some commands to scale up and scale down.

So if you found this interesting, this is the dependency for MicroProfile Reactive Streams Operators, currently in version 1.0.1, and this is MicroProfile Reactive Messaging. This is only the API; MicroProfile specifications themselves are APIs, so if you want to use this somewhere, there is currently SmallRye as an available implementation, and Lightbend has something as well. So SmallRye Reactive Messaging and Reactive Streams Operators should be everything that you need. Thank you for your attention, I hope that you liked what you saw, and if there are any questions, I can take them now.

Q: What is the implementation like, is this based on Netty or something?

A: Vert.x. Vert.x, at least in SmallRye.

Q: If you send a message in one form or format, can you take care of getting it out in another shape somewhere else? How does that work?

A: Well, basically that's exactly what I did, because you get back whatever the message is that you are consuming. If it's Kafka, it's a string; it's parsed with JSON-B back into an object, I can map it to something, change some values, and I can again serialize it to some different channel in a different form. So exactly what I was doing manually with the reactive streams and the publishing processor, you can do with the @Incoming and @Outgoing annotations. If I have @Incoming and @Outgoing on a method going from String to Long, and I do the transformation in the method, that method will be invoked for every value, and it will produce one value to the outgoing channel. In that way you can change it any way you want. But if you are using the connectors to Kafka, you need to take care; you basically need to tell Kafka, or rather the SmallRye implementation, that this is actually a JSON object, which is why I needed to add the serializer myself. It isn't hard, but it's unnecessary, so we are trying to figure out how we can derive this directly from the structure of the code, so that you just say somewhere that you want to use JSON-B and it switches everything automatically for you, like JAX-RS is doing already. So hopefully that answers it.

Q: Are those SmallRye extensions GA, or are they not ready to use?

A: They are already GA; I was using a Final version here. I wouldn't say supported, because Quarkus is still a community version; we are working on creating a product, and in that product they should be supported, and the product should be out in several months.

Q: And the reactive drivers for databases, are they also GA now?

A: I don't know that, sorry, but if you ask on the Quarkus mailing list, that should be the best place to ask. Anything else? If not, thank you for your attention.
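The transformation described in that answer, a method annotated with both @Incoming and @Outgoing that is invoked once per value, can be sketched like this. The channel names and the String-to-Long mapping are illustrative, matching the example he gives, not taken from the demo code.

```java
// Sketch of a processor bean: the method is invoked for every value arriving
// on the incoming channel, and each return value is sent to the outgoing one.
// Channel names here are illustrative.
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class LengthProcessor {

    @Incoming("strings-in")   // consume values from this channel
    @Outgoing("lengths-out")  // publish the transformed values to this channel
    public long transform(String value) {
        return value.length(); // one value in, one value out
    }
}
```

This is the declarative counterpart of wiring up a processing stage by hand with the Reactive Streams Operators builders.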