Hello my DevNation friends from everywhere in the world, thank you for coming to another DevNation Tech Talk. Today we have a frequent guest, because we welcome Mr. Reactive, Mr. All Things Reactive. So, directly from his cozy house in France, let me introduce you to Clement Escoffier. Clement, thank you for coming, and the stage is yours.

Hello, thank you Edson. I'm very happy to be here, thank you for having me. Let me share my screen. Can you confirm you see my screen? Yes, I can see it.

Alright, so again, thank you. My name is Clement Escoffier. I'm working at Red Hat, doing plenty of reactive stuff, including Quarkus, Vert.x, Mutiny, SmallRye Reactive Messaging, and so on. We're not going to cover all of them today, but we're going to see quite a few of them. Initially this talk was named "Take the Highway with Reactive Routes", but we got some kind of mutiny around here and we had to change the content of the talk a little bit. But don't worry: what I'm going to present at the end of this talk is amazing. It's the next generation for Quarkus, and it's going to really blow your mind. So today we are actually taking a pirate's journey with reactive, and we are going to explain how Quarkus became reactive and why we did it. We are still going to speak about reactive routes, because they are a big part of our reactive journey, but you will see other reactive parts of Quarkus, and you may even discover how reactive works under the hood.

But first, of course, what's Quarkus? Quarkus is a stack to write Java applications that has been tailored for cloud-native and Kubernetes. It provides everything you need for microservices and serverless, but not only that: you can write command-line tools, and with a set of extensions that never stops growing, you can write almost anything with Quarkus right now. So what makes Quarkus different from the other frameworks?
Well, if you use another framework, you generally develop, then go to your terminal and run mvn clean package, skipping the tests. You get a fat jar, a war, or something, and then you start it: java -jar and so on. At runtime, it will first locate its configuration, then do classpath scanning to find the annotated classes, then each framework builds its model, and once each framework has everything it needs to start managing things, it will start threads, pools, the HTTP server, and so on. The main idea behind Quarkus is to do all of this at build time. So at build time, we are going to parse the configuration, do the classpath scanning, and build everything the frameworks need; again, at build time. And because we know everything, we know how your application is going to work and what it needs, we can actually augment, or decorate, the package (the jars) with exactly the right set of bytecode instructions required to start all the frameworks your application depends on. Which means that at runtime, we are really ready to kick in: we don't need to do anything other than executing this bytecode, starting the servers, and so on. Thanks to this, we obviously have a faster boot time, because all these tasks take a lot of time and we don't have to do them anymore. The second thing is that it reduces memory consumption because, well, we don't have to do class loading and classpath scanning, we don't have to bring an XML parser at runtime, and so on. By reducing the number of classes, we get way better memory usage. But Quarkus is not only this. It's also an amazing developer experience, and you will see it: I will have an application already running, I'm going to develop it, and I don't have to restart anything. Supersonic subatomic Java; we've already seen that with the build pipeline. You may have heard about the Quarkus capability to build native executables.
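The build-time idea described above can be caricatured in a few lines of plain Java. This is a sketch with invented names, not Quarkus APIs: the expensive discovery work runs once in an "augmentation" step that produces a precomputed list of startup actions, and runtime merely replays that list.

```java
import java.util.ArrayList;
import java.util.List;

// Caricature of Quarkus augmentation: the expensive discovery work runs once,
// producing a plain list of startup steps; "runtime" only replays the list.
public class BuildTimeInit {
    // What runtime receives: no scanning, no parsing, just ordered steps.
    record StartupPlan(List<String> steps) {
        void execute() { steps.forEach(s -> System.out.println("start: " + s)); }
    }

    // Stands in for the build-time augmentation phase (simulated, not a Quarkus API).
    static StartupPlan augment(List<String> discoveredComponents) {
        List<String> steps = new ArrayList<>();
        steps.add("config");                 // configuration already parsed
        steps.addAll(discoveredComponents);  // classpath scanning already done
        steps.add("http-server");
        return new StartupPlan(List.copyOf(steps));
    }

    public static void main(String[] args) {
        StartupPlan plan = augment(List.of("thread-pools", "datasource"));
        plan.execute(); // the only work left at "runtime"
    }
}
```

In real Quarkus the plan is not a list of strings but generated bytecode, which is why startup is so cheap.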
This is actually a side effect of the build pipeline, because during build time we can collect enough metadata about your application to configure exactly how the native executable needs to be built. A lot of people believe the only thing you need is declaring reflection and resources. No, you've got it wrong: it's much more fine-grained than this, and you need to tweak and enable a lot of flags to get the right native executable. Best of breed libraries and standards: Quarkus relies on many well-known standards and libraries, but we're also innovating, and we're going to see how we innovate in the reactive space, because that is the topic of today: how did we make Quarkus reactive?

But first, why reactive? A lot of people say it's because I'm drinking too much. It's not only this, actually, even if it's also true. Most of the applications we're building on the cloud, on Kubernetes or whatever, are distributed systems. And there is one thing we know about distributed systems: they are going to fail. They are going to collapse; something is going to break. You can use whatever framework, whatever technology: it's going to break. You may think that Kubernetes or the cloud makes building distributed systems simpler or easier. Well, it makes them more affordable, but that's only half of the story. Your application on top of that still needs to understand the distributed nature and all the challenges of distributed systems. And actually, being able to create resources dynamically adds another requirement to your application: the dynamic availability of these resources. Your application needs to track them and be prepared that they may not be reachable, or may not be there at all. Reactive, thanks to reactive systems, can be summarized as distributed systems done right: using asynchronous message passing as the main way to interact between components, to provide elasticity and resilience.
Elasticity, because we are sending messages to virtual addresses, so we don't know how many consumers we have. Maybe none, maybe ten, and this number can evolve over time, at runtime, without needing to do anything. Resilience comes from the ability of the message middleware to do acknowledgements, but also, if one subscriber or one consumer crashes, to dispatch these messages to another one. If you are able to handle the messages, handle the requests, under load and when facing failure, you become responsive. You build a better distributed system, a system with a better user experience, and that's a key element, an essential characteristic of your system. Reactive is not limited to that, and one of the reasons we made Quarkus reactive is that there is a new class of applications out there, in the cloud and elsewhere. Over the past 5 to 10 years, we have been seeing a lot more event-driven applications. Why? Well, the first thing is Kafka. Kafka has greatly popularized event-driven processing. Also, things like IoT or mobile are domains where messages are very common: IoT devices emit a lot of messages, and you see a lot of notifications on your phone. Machine learning, business intelligence, and data analytics are also domains where we exchange a lot of messages. So reactive is going to help us shape and develop these kinds of systems, because we are going to react to messages and send other messages, other reactions, in response. That's why it's reactive. The last point is about deployment density. As I said, Quarkus already helps in terms of deployment density by reducing memory usage, but we can go a lot further thanks to reactive. Reactive is going to reduce the number of threads used by the application, and with this we are going to save a lot of memory and a lot of CPU, which means that on a given set of resources you can deploy more applications that will handle more requests and more messages.
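The elasticity point above, sending to virtual addresses rather than to concrete consumers, can be sketched with plain Java. This is a hypothetical in-memory bus, not the Vert.x event bus: consumers subscribe to an address at runtime, and the sender never changes when consumers appear or disappear.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

// Minimal in-memory message bus: senders target a virtual address,
// not a concrete consumer, so consumers can come and go at runtime.
public class TinyBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String address, Consumer<String> handler) {
        subscribers.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void unsubscribe(String address, Consumer<String> handler) {
        List<Consumer<String>> list = subscribers.get(address);
        if (list != null) list.remove(handler);
    }

    // Broadcast for simplicity: every currently subscribed handler gets the message.
    public void publish(String address, String message) {
        subscribers.getOrDefault(address, List.of()).forEach(h -> h.accept(message));
    }

    public static void main(String[] args) {
        TinyBus bus = new TinyBus();
        List<String> received = new ArrayList<>();

        Consumer<String> first = m -> received.add("first:" + m);
        bus.subscribe("greetings", first);
        bus.publish("greetings", "ahoy");        // one consumer

        bus.subscribe("greetings", m -> received.add("second:" + m));
        bus.publish("greetings", "yo-ho");       // now two consumers, no sender change

        bus.unsubscribe("greetings", first);     // first one "crashes"; the address survives
        bus.publish("greetings", "avast");

        System.out.println(received);
        // [first:ahoy, first:yo-ho, second:yo-ho, second:avast]
    }
}
```

A real middleware would add acknowledgements and redelivery on top of this, which is where the resilience comes from.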
That's absolutely amazing, because you are going to save money thanks to this. The fact that reactive is able to use resources efficiently is essential today because, yeah, well, the cloud is expensive. So how does that work? Let's see. Our reactive frameworks are built on top of three layers. We first have non-blocking I/O, which leverages operating system facilities to let a single thread handle multiple concurrent I/Os. If we take epoll, but there are other mechanisms like kqueue, io_uring, or select, basically we have a set of file descriptors that represent our interactions, and one thread that traverses this set of file descriptors to know when things are done, when we have a response, and so on. On top of that, we have a reactive framework that makes all these low-level aspects, generally implemented with Netty in the Java world, a little bit more consumable by the application code. But there is one thing, and that's why there is a bottle of rum here, because we generally need that for this small, small issue: the non-blocking I/O layer calls the reactive framework using the I/O thread, the epoll thread. That means the application code is also going to be called on that thread, which means that you cannot block anymore, because if you block, then the whole model collapses: you cannot handle any more requests. Reactive is a different concurrency model that comes from the fact that we are leveraging non-blocking I/O, but you need to write non-blocking code. So how does that work? Because when you realize that, you say, oh, son of a biscuit eater (you can check the pirate dictionary to know what that means), and you realize that something is going to be quite difficult. How do we handle this in Quarkus? We really simplify it.
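To make the epoll idea above concrete, here is a deliberately simplified, pure-Java sketch (no real sockets; the names are made up): a single "I/O thread" polls a readiness queue, standing in for epoll_wait, and invokes the handler registered for each ready "file descriptor". Every connection is served by that one thread, which is also why a blocking handler would stall all of them.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

// One thread, many "connections": a single loop thread polls a readiness
// queue (standing in for epoll_wait) and dispatches to per-descriptor handlers.
public class MiniEventLoop {
    record Event(int fd, String data) {}

    private final Map<Integer, Consumer<String>> handlers = new HashMap<>();
    private final BlockingQueue<Event> ready = new LinkedBlockingQueue<>();

    public void register(int fd, Consumer<String> handler) { handlers.put(fd, handler); }
    public void signalReady(int fd, String data) { ready.add(new Event(fd, data)); }

    // Dispatch `count` readiness events, all on the calling thread.
    public void loop(int count) throws InterruptedException {
        for (int i = 0; i < count; i++) {
            Event ev = ready.take();
            handlers.get(ev.fd()).accept(ev.data());
        }
    }

    public static void main(String[] args) throws Exception {
        MiniEventLoop loop = new MiniEventLoop();
        List<String> log = new ArrayList<>();
        loop.register(4, d -> log.add("fd4:" + d + "@" + Thread.currentThread().getName()));
        loop.register(7, d -> log.add("fd7:" + d + "@" + Thread.currentThread().getName()));

        loop.signalReady(4, "req-a");
        loop.signalReady(7, "req-b");
        loop.loop(2); // both "connections" served by this one thread

        System.out.println(log); // both entries report the same thread name
    }
}
```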
We still have our non-blocking I/O based on Vert.x and Netty, then our Quarkus extensions, and then we have a routing layer. The routing layer decides whether it can stay on the I/O thread or whether it needs to switch to a worker thread. If it sees a reactive endpoint, no problem: we can stay on the I/O thread, and we avoid the switch. If we are calling something that can block, or may block, then we switch to a worker thread. We lose a little bit of the ability of reactive, but we can write simpler code. What's the advantage of staying on the I/O thread? Well, let me show you the latest results of the TechEmpower benchmark. Go to techempower.com and you can check it yourself. This is round 19, which was run in May or so, earlier this year. The approach I just explained is the approach used by Vert.x, and that's what makes Vert.x one of the prominent competitors in this benchmark. Whatever category you check, you will see Vert.x and a few variants of Vert.x (we have Vert.x Web and pure Vert.x core): really, really fast, able to handle a lot of concurrency. Again, handling a lot of concurrency means having an application that can handle a lot of load, which saves resources for the other applications. Let's go back to the slides and see how we did that. Let's have a look at the Quarkus story. The Quarkus project started a long time ago, but I'm going to look at the last two years, or maybe three. The journey to become a prominent reactive actor has not been that simple, because initially, in the first proofs of concept around Quarkus, which was named Protean at that time, it was absolutely not reactive; it was focusing on CRUD.
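That routing decision can be sketched in plain Java. This is a hypothetical model, not the actual Quarkus routing layer: non-blocking handlers run inline on the caller's (event-loop) thread, while handlers flagged as blocking are offloaded to a worker pool.

```java
import java.util.concurrent.*;

// Routing-layer sketch: stay on the calling ("I/O") thread for non-blocking
// handlers, dispatch to a worker pool for blocking ones.
public class RoutingLayer {
    private final ExecutorService workers = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r, "worker");
        t.setDaemon(true);
        return t;
    });

    public String dispatch(boolean blocking, Callable<String> handler) throws Exception {
        if (!blocking) {
            return handler.call();            // inline: no thread switch
        }
        return workers.submit(handler).get(); // offloaded to a worker thread
    }

    public static void main(String[] args) throws Exception {
        RoutingLayer router = new RoutingLayer();
        String reactive = router.dispatch(false, () -> "on " + Thread.currentThread().getName());
        String imperative = router.dispatch(true, () -> "on " + Thread.currentThread().getName());
        System.out.println(reactive);   // runs on the calling thread
        System.out.println(imperative); // runs on a "worker" thread
    }
}
```

The thread switch is exactly what is saved when an endpoint is reactive, which is where the benchmark numbers come from.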
So what we were able to do there, well, what we can still do today, is to have a JAX-RS endpoint like this, and I can call it; let me see, I should be able to call it. Yes, here we are, we get our response, and what's important here is that we are called on a worker thread, because, yeah, JAX-RS is imperative. Quarkus supports a Spring compatibility layer, so you can also write Spring code like this; if you are using Spring, that should not surprise you. And if I call this, well, we still get called on a worker thread. So that's great; it's not reactive, but at least it was a start. So how did things evolve after that? Well, after that I started getting involved in the project, and because of my Vert.x background I said, well, it would be great if we could run Vert.x on top of this new framework. So we reached the Vert.x Cape, and beware, there are various wrecks around the Vert.x Cape, so it can be a dangerous area. What happened here? Well, the first thing we were able to do was to have a Vert.x extension that was running, and it's still the case: there is a managed Vert.x instance in Quarkus that you can inject this way. Then you can write, well, Vert.x code. For example, I will create an HTTP server, and when it gets a request, I will write a response, like "Ahoy Vert.x", and I'm going to write the name of the thread, to know on which thread it's called. And obviously I need to listen; I'm going to listen on port 8082. So if I have this and I go back here, oh, wrong one, sorry, come on, and I go to 8082: boom!
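The demo narrated above can be approximated with only the JDK, for readers who want something runnable without Vert.x. This is an analogy, not the Vert.x API: an HTTP server whose handler reports which thread served the request, like the demo's "Ahoy" endpoint does.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// JDK-only analogy of the narrated Vert.x demo: an HTTP server whose
// handler reports which thread served the request. (Vert.x is not used here.)
public class AhoyServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0); // port 0: pick any free port
        server.createContext("/", exchange -> {
            byte[] body = ("Ahoy from " + Thread.currentThread().getName()).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();

        int port = server.getAddress().getPort();
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/")).build(),
            HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // prints the serving thread's name

        server.stop(0);
    }
}
```

With Vert.x, the thread printed would be one of the event loops, which is the whole point of the next part of the demo.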
I get an "Ahoy Vert.x", and here I'm called on one of the event loops. So, see: here we were called on a worker thread; here we are called on an event loop, which is one of these I/O threads. You may ask why I got 13 of them; I actually have even more than this, because the number of event loops is two times the number of CPU cores of your machine, so that explains it. So that's great, but, well, that's a second server, and that is not really what we want here; we don't want a Quarkus application with two HTTP servers. So that's what we did after this: we went to the HTTP Island, where we rebased everything on top of our Vert.x layer, on top of our Vert.x extension. Which means that we now have Vert.x, then our extensions, and the routing layer. Let me demonstrate that. It's not only Vert.x; it's actually Vert.x and Vert.x Web, and one of the main components of Vert.x Web is a router, on which you can register routes. And because it's running underneath, you can add routes, and you can also use it programmatically. So I get my router and I will register a GET route: I get the request, and I will respond with "Ahoy" again, plus the thread name. So if I do this, I go back here, and I call it; yes, I should get this. So now, on the same server, the same server that was serving the Spring endpoint and the JAX-RS endpoint, we got something called on the I/O thread. So it starts to be reactive, and things start to get a little bit more interesting here. But I need to prove to you that this code is actually executed even when we use JAX-RS and Spring. So let me do something a little bit different here: I'm going to register a filter. With a router, a filter is just a handler that will get the request. Let's say that here I want to put in a header; I will call it "ship", and we will give it the name of one of the most famous ships we know. And then, because it's a filter, I need to call next, or nothing else is going to be called. So if I go there and I call my Spring endpoint, for
example, we see that we now have our new header here, and that's also the case for a JAX-RS endpoint. So what's important here is that, thanks to this, you can very easily add reactive filters to your JAX-RS, Spring, or reactive route endpoints. Speaking about reactive routes: yes, something we may not really like about this is that, well, we want something a little bit more declarative than this. Even if programmatic router models are getting a lot of traction these days, we want some annotations, some declarative things. So that's where we are going here. I have another annotation, a reactive route, where I say, OK, I want to handle GET requests on this path. And when I get this, I handle it, taking a RoutingContext as parameter, and here it's going to be the same code: I will end the response with "Where's the booty?", and, same thing, I am going to display the thread name. Alright, so if I now go here and do, yes, this one, I should get our "Where's the booty?" response, called on one of the I/O threads. So that's great; that's already one thing, but let's go back to the slides and see where we are. So we crossed the HTTP Island, we have seen this, and we had a long adventure around the Kafka monsters that I am not going to cover today, because, well, this is a big topic; Edson can re-invite me later this year or next year, and I can go in depth on our fight against the Kafka monster. But it's the same idea, in the sense that you can consume or write to Kafka in an imperative or a reactive way: reactive at the bottom, and then a routing layer in between. But what I want to talk about today is what happens on any pirate ship: we got some kind of mutiny. Because, well, it's nice to be able to write reactive and asynchronous code, but it's hard to write, and there are plenty of reactive libraries out there. Unless you want to be lost in a monad hell, it's not something that you want to read six months after you've written it, and
most of the time you don't maintain your own code; you delegate that to someone else, or to yourself in six months, and when you read all those chained calls you say, what was that? So we decided that we needed a better reactive experience here, a better reactive programming library, and we built Mutiny. And the way Mutiny has been integrated is absolutely awesome, in the sense that everything in Quarkus actually has a Mutiny API. Obviously, one of the things that has a Mutiny API is our reactive routes. So, let me copy this: if I have something like this "shanty" route (a shanty is a song sung by pirates), I can return a Uni, which is one of the types provided by Mutiny, a Uni<String> for example. OK, so then I need to return a Uni. A Uni is like a kind of future; it has some differences, but basically it's a reactive type that gets a value, an "item" in the Mutiny lingo, asynchronously, probably later. And here, what I'm going to do is call this quote gRPC service that we are going to see in a few minutes. So I'm calling this with a quote request, newBuilder().build(), and when the gRPC service sends me the response, I get an item: that's my response. And I'm going to transform this response, which is a quote reply, by just extracting the quote and returning it. And now, if I do this, I should get one of the famous pirate quotes, "the code is more like guidelines, really". If you know Jack Sparrow, you can recognize this quote, even if it's not Jack Sparrow that said it, I guess. So one thing we can do here, too, is the same thing but with a stream. This time I want to return a set of quotes, let's say three quotes. So I will use a Multi; there are several ways of doing this, but here I create a range from 0 to 3, and for each item I will ignore the value and just call getQuote. And I need to decide if I want to merge or concatenate; I don't care in this case, so I'll just do this. And if I do this, I should get my three quotes. You can recognize the
first one, "you can always trust a dishonest man to be dishonest"; OK, that's a long one, and so on and so on. But you see, they are a little bit mixed up, because we just say, OK, these are my three quotes, send them; there is no separator, nothing. Of course that's useful when you have a byte array and you want to send chunks of bytes, but here we want something a little bit better. So, for example, I can say ReactiveRoutes.asJsonArray, and here I will get them using a JSON array syntax, which will be: first item, second item ("stop blowing holes in my ship", which is a good idea), and so on. We can also say that we want this as an event stream, which in that case will create SSE (server-sent events) from the same thing: data, this is the first quote, id: 0, then id: 1, 2, and so on. It's very nice to have this integrated, to have Mutiny integrated with reactive routes, because that makes the model a lot closer to JAX-RS. But yeah, it's a different model, and we will come back to that later. One thing that is great with Mutiny is that Mutiny really is infused everywhere in Quarkus, which means that if I go back to my JAX-RS endpoint here, I can actually return a Uni from a JAX-RS endpoint. So here I want to get a pirate, and I will get a pirate from my database; we will see how that's done in a few minutes. Yeah, that's a pirate, and here I should say pirate. So now we have a way to return an asynchronous result from a JAX-RS endpoint using this homogeneous API, the Mutiny API. And if I go back here and I call /pirate, I should get a famous pirate, which is Alexander Dalzeel. Not so famous; at least I don't know him, but he's one I found on Wikipedia, and I trust everything from Wikipedia, so definitely a pirate. We can do the same thing with SSE, where I'm going to return a stream of pirates, something like this. When you do SSE with JAX-RS, it's a little bit different than with reactive routes, in the sense that you need to say that you
produce SSE, and how every item of your stream is going to be serialized. So if you do this, and now I do stream: there we go, every second we get one, famous or not that famous; oh, William Burke, I know this one. Pirates! OK, so we've seen this, but one thing that we have reached in Quarkus recently is what I call the Reactive Bay. We have already seen that Mutiny is the homogeneous layer everywhere in Quarkus: you can use it in reactive routes, you can use it in JAX-RS. But to be useful, we need Mutiny APIs everywhere, and that's what we did. We have already seen that you can consume gRPC services using Mutiny, which is great because gRPC is asynchronous by default, so you can use Uni and Multi: a Uni for a single response, and a Multi for streams. That works very well. Implementing the service can also be done using the Mutiny implementation base; you just return Uni, Multi, or whatever is required. But it's not only this. We also got Hibernate Reactive recently, which is an implementation of Hibernate, so the ORM you know and use, but on top of the non-blocking reactive drivers coming from the Vert.x ecosystem. It's blazing fast because of these drivers, which are used in the TechEmpower benchmarks I've shown you, and it already provides a Mutiny API. And here, because I'm not too familiar with Hibernate myself, while I could use a named query and so on, I can also use Hibernate Reactive with Panache, because we have it already. Hibernate Reactive with Panache is the Panache we were using before, but on top of Hibernate Reactive, which means that all my methods are asynchronous, running on the event loop, totally non-blocking; boom, no need to switch threads anymore. So that's really great. And the last thing I want to show you, which is going to conclude my presentation, is something we have been cooking for a few months. It's something that is not yet in Quarkus, and that's what I wanted to show you today, because it is amazing.
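Since a Uni behaves like a future with a transformation pipeline, the quote-extraction demo above can be mimicked with the JDK's CompletableFuture. This is an analogy, not the Mutiny API; the service call and the QuoteReply type here are made up stand-ins for the gRPC pieces from the demo.

```java
import java.util.concurrent.CompletableFuture;

// Future-style analogy for Uni: an async result transformed before returning.
public class UniAnalogy {
    // Stand-in for the gRPC reply type from the demo (hypothetical).
    record QuoteReply(String quote, String author) {}

    // Stand-in for the async gRPC call: completes later with a reply.
    static CompletableFuture<QuoteReply> getQuote() {
        return CompletableFuture.supplyAsync(
            () -> new QuoteReply("The code is more like guidelines, really", "Barbossa"));
    }

    public static void main(String[] args) {
        // Analogous to uni.onItem().transform(reply -> reply.getQuote()) in Mutiny:
        String quote = getQuote()
            .thenApply(QuoteReply::quote) // extract only the text from the reply
            .join();                      // for the demo only; real reactive code never blocks
        System.out.println(quote);
    }
}
```

The key difference is that a Uni is lazy (nothing happens until it is subscribed to, typically by the framework), whereas a CompletableFuture starts eagerly.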
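What the JSON-array and event-stream modes shown with reactive routes do, conceptually, is add framing around the streamed items. A simplified stdlib illustration follows; the helper names are invented, and only the output shapes mirror what the demo displayed.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Framing sketch: the same items rendered as a JSON array vs. an SSE stream.
public class StreamFraming {
    static String asJsonArray(List<String> items) {
        return items.stream()
            .map(i -> "\"" + i + "\"")
            .collect(Collectors.joining(",", "[", "]"));
    }

    static String asEventStream(List<String> items) {
        return IntStream.range(0, items.size())
            .mapToObj(i -> "data: " + items.get(i) + "\nid: " + i + "\n")
            .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        List<String> quotes = List.of("Stop blowing holes in my ship", "Savvy?");
        System.out.println(asJsonArray(quotes));
        System.out.println(asEventStream(quotes));
    }
}
```

Without framing, the items would just be concatenated on the wire, which is exactly the "mixed up" output seen in the demo before switching to the JSON-array mode.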
So let me switch to another, oops, no, to another IntelliJ, this one. Recently we had the chance to integrate something called RESTEasy Reactive, and RESTEasy Reactive is an implementation of JAX-RS, but really, really reactive. Because right now, when we use JAX-RS, well, JAX-RS is a little bit outdated: when it was created, reactive did not exist. Here, we can actually run JAX-RS on top of the event loop. So what we have with RESTEasy Reactive is that you can decide if you want to be called on a worker thread, by using the @Blocking annotation, or if you want to be called on the I/O thread, with the @NonBlocking annotation, and then our routing layer is going to detect that and decide if it needs to switch or not. Let me demonstrate this very quickly. First I need to add the annotations here. To get this working, it's a little bit more tricky right now, because you need to build a specific branch, a specific pull request, but that is going to be merged soon. So now, if I'm here, I can call "matey", which is my blocking one, and I'm called on a worker thread. And inside the same resource I can have a non-blocking one, which is called on the I/O thread. That changes everything, because now you don't need to know two different models: you have Mutiny as the glue, and you have the same set of annotations. And look at this: in dev mode we have some diagnostics that tell you the efficiency of your endpoint. Like the first one, "matey", the blocking one: it says, OK, you have a score of 66 over 100, so it's good, but not that great. And then look at my non-blocking one: 83 over 100, so it's not too bad; it's efficient enough. With this, you can decide if you want to continue with imperative or switch to reactive, and you will have details about all these steps when it is merged in Quarkus. So that concludes my presentation. That's what I call the Reactive Place, where we are right now: Quarkus is now a prominent player in the reactive landscape. We
have an end-to-end reactive story, going from the lower layers up to the development model. We have plenty of things like Hibernate Reactive, but we also have access to Cassandra, MongoDB, Redis, and so on. We have the complete Vert.x ecosystem available, and we have the Mutiny API for all of that. We also have Camel reactive, which is great if you need to integrate with anything, I would say, legacy. And this is all because of the architecture and the ability we have in Quarkus to do all this reactive stuff. How does that work? Well, as I said, we have a reactive core, and on top of that you can decide if you want to go reactive or imperative. You can switch, you can mix, and thanks to the new diagnostics we have seen in the last demo, you can decide if you want to stay on the imperative side or go to the reactive side. That's all I have. If you want to see these demos (except the last one, because you need to build a branch; it will be available as soon as it's merged), go to this GitHub repository and you will find all the code I've shown today. If you want to know more, go to the quarkus.io website, or code.quarkus.io, and start coding immediately. You can interact with the team, you can follow us on Twitter, and if you want to contribute, or if you just liked what I said today, go to our repository and give us a small star; it's totally useless, but it makes us, the Quarkus team, quite happy. Thank you very much, and, yeah, do we have time for questions?

Oh, we are already a bit over time, but we have a lot of questions; let's see if we can cover them. First, Andy is asking: what is the performance of Quarkus running Vert.x compared to pure Vert.x, regarding performance and memory footprint?
That's an interesting question. Of course, Quarkus is adding some overhead, but this overhead is actually quite small; if I'm not mistaken, the last measurement we did showed less than 5%. So unless you're really hardcore, you won't see a big difference. That's because we optimize a lot of things to actually generate the Vert.x code you would have written yourself. Even if it's a managed Vert.x instance, it's a Vert.x instance we have tuned and configured correctly. And, well, we have the Vert.x team involved in Quarkus, so we work with them to get everything blazing fast.

Awesome. And Steven is asking: can I use Reactor with Quarkus? You can use Reactor with Quarkus. Our APIs are not exposed using Reactor, and there are a few caveats: first, you may have issues in native mode, because Reactor uses some constructs that behave a little bit differently in native. So before going to production, really double-check that it behaves as you want it to. But except for that, yes, you can use Reactor.

Awesome. And does Quarkus support RabbitMQ? Do we have an abstraction for that?
So that's part of the Kafka monster. We have a reactive messaging abstraction, which supports Kafka, AMQP, and RabbitMQ (by using the AMQP plugin). We also have support for MQTT and so on. So we have one abstraction for that, and, as I explained, it's all based on our reactive core, and then you decide if you want to consume and send messages in an imperative manner or in a reactive manner. You can also do JMS, but these days that's not really a popular option; we have better options in Quarkus. Everything I said works in native, by the way; even communicating with RabbitMQ works in native, if you want to try.

Awesome. And Bruno, our friend from Portugal, is asking about the reactive route that you showed: is that compatible with MicroProfile JWT, or any MicroProfile APIs? So yes, OpenAPI is supported, and JWT is supported too. It's just a bean, so everything from MicroProfile is going to be supported, except GraphQL; we are still doing some work around that. And I've seen some examples combining that with fault tolerance, so I know it's working, even if, when you are reactive, you do fault tolerance a little bit differently. So yes, normally it should work out of the box. We have some tests; we don't cover all the cases, all the combinations, because obviously that would be quite large, so if you have any issues, just come to us and we will fix them. And also, with RESTEasy Reactive, which is coming, which is going to be merged in Quarkus, you won't change your model: you will decide what needs to be reactive and so on, and everything else in Quarkus is going to be supported out of the box.

Awesome. And Steven is asking again: can I create an event bus in Quarkus? We have the Vert.x event bus built in, so just use it; it's there, yes. And Bruno is asking too: you've shown the reactive REST part, the last thing that you showed, and you showed the efficiency number; how is this efficiency calculated? Oh, that's a good question. So actually, if I go back
to this here: it's actually going to look at all the interceptors that are required to handle your request. And remember, this is computed at build time. Here I'm in dev mode, so there is no big difference between build time and runtime, but this is computed at build time, and everything is going to be inlined at build time, which means that at runtime it's just calling the functions in the right order. So it's all generated, but for each of them we know their efficiency cost: whether they need to switch threads, whether they need to do synchronization, and so on. That's what helps us give a score to each of them and compute this final score. Typically, needing a worker thread dispatch: boom, that is quite bad; a single resource instance for all requests: boom, that is quite good. So that's how it's computed. We're still tuning all these computations a little bit; it's pretty new, but, yeah, I'm very happy; I find these diagnostics very, very cool. And something I didn't mention: we will soon have a better developer experience, already better than what we have right now, with a UI where you will find this kind of diagnostics, not in your console, but using some, well, HTML rendering that people can read without going blind.

Awesome. Well, Clement, thank you very much for presenting this amazing content and answering the questions. I'm pretty sure everybody is excited about this reactive stuff, and we'll certainly invite you to talk more about the other reactive stuff at other opportunities, maybe next year. And if you're watching this, I'd just like to point out that next week we have Thanksgiving here in the US, so we won't have a tech talk, but we'll be back the week after Thanksgiving. Thank you very much for watching, and stay safe. See you soon. Thank you for having me. Bye.