Hello everybody, welcome to the next session. The next talk is building reactive microservices with MicroProfile, and it will be presented by Martin Stefanko. The floor is yours — please unmute your mic. Thank you. Thank you. So welcome everybody to my session, which was already introduced. Shortly about myself: my name is Martin Stefanko. I have worked as a Senior Software Engineer at Red Hat for the past seven years, I think. I am also a MicroProfile committer, which means that I can merge things into MicroProfile. I am also a big microservices fan, so if you are interested in these kinds of architectures, please catch me on Twitter. Okay, so what we are going to be talking about today is basically the reactive story, or how to do reactive microservices. I like to start at the beginning, because understanding the overall decision-making process — how we got where we are — is very important, in my opinion, to understand why you would want to switch to reactive programming. It all started in 2014 with a one-page document called the Reactive Manifesto, which is available at this link; you should have the link to the slides in the session, and there will also be a link at the end. In this document, I think four architects came together with a set of properties that they identified in really scalable, fast, reliable, modern distributed systems, and they wrote those properties down. This became, not a standard, but the de facto description of the properties that modern application deployments strive for. The document defines systems which have these properties as reactive systems, and we will go over these properties now, starting from the bottom up. At the bottom we have message driven.
What this means is that instead of making the requests — the communication between your services and between the clients — in a synchronous manner, you switch to asynchronous data pipes: some form of data channels which have a name, and you are just sending messages to these channels. In this sense, you are not actively waiting; you are not blocked waiting to receive a response directly when you send a message. You just send a message to the pipe, and optionally you can receive another message as an acknowledgement later, but this is not necessary. What this in turn gives us is the next two properties, because when you are sending just to some address, some named channel, you don't really know where the message you are sending will end up. There might be one instance of a service at the end of the pipe, there might be multiple — you don't really care. So this directly gives you elasticity, because you are able to scale your services up and down on an as-needed basis. And this in turn translates into resiliency, because your services can also scale down to zero, or fail. In that case you are still sending messages, and nowadays, if you are using something like Kafka, which I will be showing later today, the data pipe is clever enough to remember that some messages were not yet received by any consumer. The pipe will remember those messages, and once you scale up to some instances again, those instances will start processing the not-yet-processed messages. Putting all three of these together, we get responsiveness, which basically means that you are able to always respond to any request right away. Why is this important?
Because nowadays, in the era of the internet, everybody is accustomed to a certain reliability from any site they visit: you usually expect that when you click on something or access something, you will get the response back in a number of milliseconds. If users need to wait for a number of seconds — even two or three — you usually get lower customer counts, because after some threshold, and I believe that in some companies this is as low as 200 milliseconds, people just don't wait for your resource to respond; they switch to something else. In that sense you don't want to make your users wait, because otherwise they move on to something else. So responsiveness is something which is really strived for in modern application systems, and if you are not providing a responsive system, that is a different kind of story for your users. Okay, so from reactive systems, where we get this responsiveness, we move on to something which is called reactive programming. And we realize that this is not something new — reactive programming has been here since forever, because if you have ever used an Excel spreadsheet, you know that you can "code" in Excel so that if you type something into A1, the cell A2 changes with it: it computes some value based on some different cells, so it directly reacts dynamically to some form of event. This can be done with many different models, but what is important to take away is that you always have some form of event — some asynchronous thing that just happens out of nowhere — and you are able to react to it. And this is how we work every day: if you hit your toe on something, you feel pain.
So, reactive programming. One important thing that you must always do when you are doing reactive programming is to never block the main thread, usually called the event loop thread, because this thread is always responsible for processing user requests. If you block this thread, then you are moving back to a blocking manner, so this is not allowed in reactive systems. If we put this together, we get something which is called Reactive Streams, and hopefully everyone has already heard about Reactive Streams, because since JDK 9 they are actually part of the JDK itself. Reactive Streams again started as a separate specification, based on the Reactive Manifesto, which came up with only four interfaces that describe how to write asynchronous data flows with integrated back pressure — and we will get into back pressure in the later slides. The four interfaces are: Publisher, something which is able to produce messages; Subscriber, something which is able to consume messages; Processor, something which is both a publisher and a subscriber, so you can transform your data pipe; and the Subscription interface, which corresponds to the link between a publisher and a subscriber. As I already mentioned, since Java 9 they are available in java.util.concurrent.Flow. So how does this work, and what is the back pressure I talked about? The first thing that needs to happen is that you call the subscribe method on the publisher with an instance of a subscriber. What the publisher in turn must do is dictated by the specification: the publisher must call the onSubscribe callback on the subscriber with a subscription object. The subscription object itself contains, I think, only two methods: one is request and the second one is cancel. With request, you pass the number of messages that the subscriber is able to process right now.
So now I am basically saying: I am able to process only two messages. In turn, the publisher is required to call the onNext callback on the subscriber with values taken directly from the stream — it will call it two times, because the subscriber requested two values. And this is exactly what I mean by back pressure. Back pressure is a concept where slower consumers can somehow instruct publishers to produce messages more slowly, because the consumers can be overwhelmed. What happens if the publisher just keeps calling onNext forever and the subscriber is not able to process the messages? You get into some form of overflow, and then you must decide: either you throw the overflowing messages away, or you need some buffer on the subscriber side to save these messages — which can again overflow, so eventually you get into similar problems. So this is directly the implementation of back pressure, because the subscriber will never request more values than it is able to process. We can again request only one value, or we can request more, for instance four, and the publisher will call onNext until there are no more values in the stream or some error is thrown. At that point the publisher must call either onComplete, if the stream finished successfully, or onError, with the throwable which corresponds to the error. Okay, so we have an API — and maybe I should mention that originally, before Java 9, this lived in the org.reactivestreams project. So I guess we can implement that, right? Well, it turns out this is not so easy to do. There is also a TCK, a Technology Compatibility Kit, which is a set of tests you can run against your implementation of those four interfaces to verify that it corresponds to the correct usage of these interfaces. And if you are interested in why it is not so easy to implement...
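The onSubscribe/request/onNext handshake described above can be sketched with the JDK's own Flow API. This is a minimal example of my own (the class name, item values, and one-at-a-time policy are illustrative, not from the talk's demo) where the subscriber stores its subscription and never requests more than a single element — which is exactly the back pressure mechanism:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class BackPressureDemo {

    // Collects items one at a time: the subscriber never requests
    // more than a single element from the publisher.
    static List<String> consumeOneByOne(List<String> items) {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;        // save the subscription for later
                    subscription.request(1); // ask for exactly one item
                }
                @Override public void onNext(String item) {
                    received.add(item);
                    subscription.request(1); // ready for the next one
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit);
        } // close() triggers onComplete once all submitted items are delivered
        try {
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(consumeOneByOne(List.of("lake", "oak", "lemon")));
    }
}
```

SubmissionPublisher handles the subscription bookkeeping for us; the TCK difficulty the talk mentions comes from implementing that bookkeeping yourself, correctly, under concurrency.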
...there is a great talk by Jacek Kunicki — it's a little older now, but its points are still valid — where he tries to implement these interfaces live and gets only about 20 or 30 tests passing in one hour. So it's not that easy, and he explains why. Okay, moving on: we now have Reactive Streams. What is the problem with Reactive Streams? By design, these four interfaces are really simple; they are only able to produce values and to implement back pressure. However, when you work with streams — and hopefully everybody is already using, for instance, the JDK Streams API — you know that you usually want to do some operations on the stream: map, filter, flatMap, et cetera. Because Reactive Streams doesn't provide this functionality, several libraries were created in different spheres of the Java community which provide so-called reactive extensions, which are basically these operations on a stream. You have probably heard about RxJava — there are several Rx projects for different languages — Project Reactor from Spring, or SmallRye Mutiny, which is a relatively new reactive extensions library provided and used in Quarkus, which we will be using today; I will show it in code later. And from that, we finally get to MicroProfile and why there are MicroProfile specifications around the reactive space: all of these libraries — and there are more, I didn't list every one of them — use a slightly different API for these extensions. Reactive Streams is already an API, or rather an SPI, a service provider interface, to which they all conform, so you can plug, for instance, a publisher from RxJava into a subscriber from Mutiny — but the operations have different APIs. MicroProfile tries to unify these in specifications. Okay, so getting to MicroProfile: there are actually two MicroProfile reactive specifications.
The first one is Reactive Streams Operators, which contains a single class, ReactiveStreams, which acts as a builder on which you can call these kinds of operations and plug in the publishers, subscribers and processors. The second one is MicroProfile Reactive Messaging, which is basically data streaming with CDI. CDI means Contexts and Dependency Injection — the dependency injection implementation in Java EE; hopefully everyone has heard about that already. So moving on, we will finally move into the IDE, and hopefully everything will work. If you see something not working and you see what I'm doing wrong, please just shout in the chat so you can help me dynamically. I will create a new Quarkus project. If you are not familiar with Quarkus, we had a session about it yesterday and also a workshop: Quarkus is a now roughly three-year-old project coming from Red Hat which slightly reinvents how Java should work, and you will see in a while what I mean. So I created a new Quarkus project with my custom wrapper around the main Quarkus plugin — it doesn't matter. I will open it in the IDE, and now what I can do is start Quarkus in something which we call dev mode, or development mode, with mvn quarkus:dev; you will see in a while what this is. I will open another terminal for my client invocations. What Quarkus generates for us is actually a really simple core project which has access to RESTEasy, our JAX-RS implementation, and our CDI implementation, which is called ArC. It generates a simple example resource, which is basically the REST implementation that we use, and you can see here that on the path /ping I can invoke a GET method to get "hello RESTEasy". So if I go into the terminal and invoke /ping on my localhost 8080 — oh, sorry, I'm already running something on port 8080.
Let's do 8082, because I have my second demo already running on 8080, sorry. So now, on port 8082, I will get "hello RESTEasy". And what the live reload mode allows you to do is to change your code: I just save the file, repeat the call, and you can see that Quarkus is automatically restarted in a number of milliseconds and you get back your recompiled, changed code. So when you are developing with Quarkus, you can just bring up this development mode and you don't need to stop it for the whole duration of the development. This is exactly what I'm going to use now. Going back into the IDE — and my keyboard decided not to work today, sorry about that — I can actually delete this method because I will not need it anymore, and we will create a new GET method called simply reactive streams one. One thing that I forgot to do first: Quarkus by itself works on the concept of extensions. You see here in the installed features that what I have by default doesn't have any reactive stuff in it, because Quarkus tends to be as small as possible, so you need to manually say that you want to use reactive streams. For that you can use another Quarkus Maven plugin goal, which is called add-extension. Actually, maybe I can first show that with this Quarkus plugin you can call list-extensions to get the list of all available extensions for your project. You see that there are a lot of extensions you can use, but I already know which one I want, so I can directly call quarkus:add-extension with the name of the extension — smallrye-reactive-streams-operators, the SmallRye implementation of the MicroProfile Reactive Streams Operators specification. And you should see that my Quarkus application is automatically recompiled.
And now I should see SmallRye Reactive Streams Operators here in the installed features — that is correct. This is nothing fancy; what it actually does is only add another Maven dependency into your pom.xml, which is managed by Quarkus. But what this allows me to do, once I import the project, is to use Reactive Streams in my application. So now I can call those ReactiveStreams methods that I mentioned. We see here that we directly have this builder class, but for the time being I will just use a predefined static flow, which I will manually create right now. So basically I've created a stream with three values, which are statically listed here. This doesn't differ that much from the normal Streams API, but it makes the example easier to demonstrate. And once I have a stream, you see that I already get access to these operations like map, filter, flatMap, for instance, so I can use them directly. This ReactiveStreams object is already something which is a publisher, so I can pass it to a processor, or to a subscriber as the end of my pipe. But for the time being, let's just do something simple: map to uppercase and filter the values which start with "L". Now I have a transformed pipe which I can directly do something with, but if I want to consume it, I need to pass it into something as a subscriber at the end. There are two methods for this: the first one takes a subscriber directly, the second one takes a SubscriberBuilder, which is something coming from SmallRye Reactive Streams that eventually ends up as a subscriber. But there is also this toList method, which I will use right now, so I will get a static list. By default, if you create any form of pipe — any reactive streams publisher — it will not start producing values unless some subscriber actually subscribes to it.
For this to happen, I need to call the run method on this, because what I have is only a CompletionRunner; this is not yet a finalized result, and if I don't call run, nothing will happen. If I run this, I get back a CompletionStage, and on the CompletionStage I can then just print the result to System.out. So hopefully, if I typed everything correctly — because I am in the demo, let me just repeat the call now to /rs1 on the other port — you see that Quarkus was restarted and we have our transformed stream printed to System.out. This is not that interesting, so let's do something more complicated; I'll just call it rs2, and it can also be void. What I would like to do now is to create a more dynamic stream, so I can show you that we can process values dynamically as they come in. For that, I will again start with the ReactiveStreams class, but this time I will create it from a publisher, which I will create later, and I will eventually link it to some subscriber; we will also pass it through a processor. And again, I must not forget to actually run it at the end. Okay, let's go step by step. The first thing we need to create is the publisher. Because SmallRye Reactive Messaging and Reactive Streams Operators still need to function with JDK 8 — we still have customers on JDK 8 who have not yet adopted the JDK 9 java.util.concurrent.Flow — we still need to use the org.reactivestreams version, but it is a one-to-one mapping, so it doesn't matter right now. And we know that we will actually be publishing Longs. Okay, so to create a publisher I can do several things, and as I was showing you on the slides, you have different implementations that can create a reactive streams publisher. I actually know that SmallRye Reactive Streams Operators comes with RxJava.
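The first pipeline from the demo might be sketched like this — a rough reconstruction assuming the MicroProfile Reactive Streams Operators API; the concrete values and the method name are mine, not the demo's exact code:

```java
import java.util.List;
import java.util.concurrent.CompletionStage;
import org.eclipse.microprofile.reactive.streams.operators.ReactiveStreams;

public class Rs1Demo {

    public static CompletionStage<List<String>> pipeline() {
        // of(...) creates a finite publisher; map/filter are the operators
        // that the bare Reactive Streams interfaces do not provide
        return ReactiveStreams.of("linux", "lemon", "mac")
                .map(String::toUpperCase)
                .filter(s -> s.startsWith("L"))
                .toList()   // terminal stage: a CompletionRunner collecting a List
                .run();     // nothing flows until run() actually subscribes
    }
}
```

Without the final run() call, the pipeline is only a description; this is the lazy-subscription behavior the talk describes.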
So I could use Flowable, for instance, but it is encouraged — and I actually prefer — to use the Mutiny project, which comes from SmallRye directly. For that we have the Multi and Uni objects, which are probably familiar if you know Flux or Flowable from other libraries. I will use Multi here, which represents a stream of zero or more values; Uni represents zero or one value. So I will start with Multi, and I will show you why I like this API in particular: it's really fluent and easy to start with. I have only two methods on Multi, which are createFrom and createBy. I can choose, for instance, createFrom — even if I haven't seen this before, I can see here that I can directly create it from a publisher, a CompletionStage, et cetera. I know what I want to use right now: here I have ticks, with every, which takes a duration — let's do every half second. Then I can skip, transform or filter, map, et cetera — the operations that I already talked about. I will try select, because I want to take only the first 10 values. So you see that it's very fluent and easy to move through just by auto-completion in your IDE, which I think is very good if you are not familiar with an API — I actually haven't had the need to read the Javadoc so far; usually this auto-completion is enough for me to move along. So what I created is a publisher which will produce a value every half a second, and I just said that it will stop after 10 produced values, because otherwise it would go on forever. Okay, moving on — and if you have any questions, please ask in the chat directly; we don't need to wait for the end. For the processor, I know that this will be a Processor — again from org.reactivestreams — which will be transforming our Long stream to Strings. And for this, I can directly use that ReactiveStreams class again.
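The ticking publisher built by auto-completion above might look roughly like this — a sketch assuming the SmallRye Mutiny API; the mapping to a string is my own addition for illustration:

```java
import java.time.Duration;
import io.smallrye.mutiny.Multi;

public class TickDemo {

    public static Multi<String> ticks() {
        return Multi.createFrom()
                .ticks().every(Duration.ofMillis(500)) // emits 0, 1, 2, ... every half second
                .select().first(10)                    // complete after ten items
                .map(tick -> "tick " + tick);          // transform Long -> String
    }
}
```

Like every reactive streams publisher, this Multi emits nothing until something subscribes to it.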
And now we will use the builder, because I want to build a transformation pipe that will somehow transform my stream. Because this is Java, if you want to use the ReactiveStreams builder, you need to help the Java type system, because by default it cannot know that I am producing Longs — there is no way to infer that. So I need to say here that I know my stream is coming from Longs. Then what I can do here is take the long — I will just call it l — and transform it to a String by concatenating it with a string. We can also map it to uppercase too, just to show that we can do more things. Then, to actually get an implementation of the org.reactivestreams Processor, I need to call this buildRs method, which transforms my specification of the pipe into an org.reactivestreams Processor. Okay. And the last thing is the subscriber. I know that this will consume Strings. There are different ways to consume values directly with the reactive extensions, but just for the purposes of the example, I will actually implement a subscriber right here — you don't need to do this — to show you that there are really only these four methods that I had in my slides. As I was already saying, onSubscribe is required to be called when this subscriber — this anonymous subscriber — subscribes to the stream; the publisher will invoke the onSubscribe method with a subscription object. What you should do in onSubscribe is save the subscription, because you will need it later on, so I will just save the subscription to a field. And if I want to start consuming the stream in my subscriber, I need to actually request some values on that subscription — I can show that there are really only these request and cancel methods, as I mentioned — because otherwise onNext, or for that matter onComplete, will never get called if you don't request any values.
This is that integrated back pressure. Okay, so if I request one value, what should happen is that the publisher calls my onNext callback with the first value. What I can do here is just print the value to System.out, and then again I need to call the request method on my subscription with the number of elements that I want to request. In onError, I will just print the error, and in onComplete, I will just print that we completed — I'm missing a semicolon, let me fix this. So this is basically the very simplest subscriber that I can write: it just prints the values as they come, always requesting only one value. You can see directly that you can run into problems here — this can possibly turn into infinite callbacks if the publisher is not careful, because from onNext you always call request, and the request can in turn result in another onNext call. So you need to be careful with this kind of invocation, and all of this is tested in the TCK. Hopefully, if I typed everything correctly, I can just save the file, go back to the terminal and call /rs2 now — and hopefully, every half a second, our publisher produces a new value. Maybe I can also print in onSubscribe that we are subscribed: you see, the first call we receive is onSubscribe, then we have the requested values. So the producer produces a new value every half a second, and eventually, after 10 values — because I limited the publisher — onComplete is called, because no error happened and we ran out of values. So this is basically the Reactive Streams API in a very simple way. If there are any questions, please ask as we go.
Moving on from that, we have the second specification, which is basically the main one. Reactive Streams Operators was created as a prerequisite specification for Reactive Messaging, because Reactive Messaging is the main way we intend reactive data pipes to be used in MicroProfile. So, moving to the second demo — or maybe I will go to the slides first to explain what this is. MicroProfile Reactive Messaging, as was already mentioned, works over CDI beans. You have a bunch of CDI beans with annotated methods which process your data pipe sequentially as the values come in. In the beginning, you somehow receive a data pipe from some publisher, which is then passed to some CDI bean, which can do, for instance, a map operation, then again some map operation, a filter operation, et cetera. These can be in the same bean or in different CDI beans; it doesn't matter. Eventually, your last CDI bean somehow produces an output stream. This green arrow means that the CDI bean can either create the first stream by itself, or it can receive the stream — the events — dynamically from somewhere else. And this is done by a concept which is called connectors. If we take our application as this boundary, with several CDI beans that do the processing, we can plug different connectors on both the consuming and the producing end, in such a way that the connector says the values which are going to be produced to the first CDI bean are coming from some external system. You can imagine that we have a Kafka connector, an AMQP connector, an in-memory connector for that matter, et cetera. So I can, for instance, consume values from Kafka, do my computation — whatever operations I need to do — and at the end push the results to an AMQP queue, for instance. Okay. And how is this done? As with most things in MicroProfile: with annotations.
So you have this concept of named channels that I was talking about in the beginning, and you have two annotations. The first one is @Outgoing, which is used for reactive streams publishers — basically, you are producing new values from the annotated method. You also have @Incoming, which is basically a subscriber — it only consumes values in a method. And if you combine both of these, you are creating a reactive streams processor, because you are consuming from one channel and producing to another channel. Okay, and with that we can move to the second demo — we don't have that much time, but hopefully I will be able to finish. Our second demo — actually, let me do one more slide — is a coffee shop. What we currently have is a coffee shop which is based on HTTP, as you would write most services today: in a blocking request-reply manner. Our users can request a coffee over HTTP; our front-end coffee shop service makes another internal HTTP call to a second service, called Barista, which is actually responsible for preparing the coffee, and when the coffee is prepared, the response goes back to the user. What this means is that the user is waiting for the whole duration of this call, because with plain HTTP you cannot do anything else — you don't have another way to respond to the user from the coffee shop. You would say that this makes users happy because they get their coffee, but they're actually not that happy. For instance, what happens if the Barista takes a break — so it fails? You call the coffee shop again, but the internal HTTP call to the Barista fails, because there is no one to process it. What you can do is return some dummy response to the user, or propagate the error — and neither of those is something you're looking for. So what we are going to do is transform this service. Maybe I can show it first, because I should have it running here.
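In code, the two annotations combine roughly like this — a sketch of my own with illustrative channel names and payload types, not the demo's actual classes (and depending on the MicroProfile version, the annotations may come from the javax or jakarta namespace):

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class ChannelExamples {

    // Processor: consumes from the "orders" channel
    // and publishes the result to the "queue" channel
    @Incoming("orders")
    @Outgoing("queue")
    public String process(String order) {
        return order.toUpperCase();
    }

    // Subscriber only: end of the pipe, just consumes
    @Incoming("queue")
    public void sink(String item) {
        System.out.println("received " + item);
    }
}
```

The method bodies stay plain Java; the runtime wires each annotated method into the reactive streams pipeline for you.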
So here is my coffee shop front end. If I order a coffee here over HTTP — for DevConf — I should be able to see, after some time, that the coffee is processed by my Barista. You can see here that this is the Barista, this is the coffee shop, and here I have the client. However, if the preparation takes longer than three seconds, I think, then we will fail. So let's get some prepared — this one will also fail. Yeah, you see that if it takes less than three seconds, the coffee is ready and you get the name of the Barista that prepared it. But as you can see, while I order a coffee, I cannot order more coffees: I am effectively blocked for the whole duration of the preparation of the coffee — I'm trying to click here; I don't know if it's visible. Maybe I can show it also here: if I call the HTTP endpoint manually in my terminal and I happen to get a slightly longer call, you'll see that for the duration of the call I am effectively blocked. And what happens if I kill the Barista now? I'm still able to call the coffee shop, but as you can see, I will always get the failed state. This is because in my coffee shop service I have a manual fallback: if the call takes more than three seconds, I recover with a predefined failed coffee — but if I were to remove this, it would just fail. Okay, so moving back: what we are going to do is rewrite this — in a very fast way, because we don't have that much time — to use Kafka instead. We will have two topics in Kafka, two channels. The coffee shop will push a message to the orders topic on each order, and we will directly return a response to the user saying that the order will be processed somewhere in the future. From the queue, we will also be pushing to some board resource which will display the state in our UI.
And from the orders topic, the Baristas will pull the orders as they are able to, and push the prepared coffees back to the queue topic. After the coffee is prepared, we just update the state on the board, and the user can check the board to see that the coffee is prepared. So this is something like Starbucks, if you've ever been in a Starbucks. Moving to the IDE — I have this started already. The first thing that I need to do is to add the reactive messaging extension to both services, so I will now add smallrye-reactive-messaging-kafka to my coffee shop service and, the same way, to the Barista service. Okay. Now, in the IDE, I will start with the Barista service. The first thing I actually need to do is update the Beverage JSON class that we will be sending to Kafka, because this Beverage doesn't have one field that I will need. So I will just copy-paste the one from the coffee shop service, because that one has this preparation state, which we will need to update the state in the UI. Then, to actually transform my Barista JAX-RS resource, all I need to do is this: it no longer needs to be a JAX-RS resource, it only needs to be a CDI bean, so I will use @ApplicationScoped. And instead of @POST, we will use @Incoming — we will be taking from orders — and @Outgoing, pushing to queue. So I am pulling from orders and outgoing to queue, and this is everything I need to do. Actually, one more thing: the constructor now also takes the state — it is the READY state here, because this is an already prepared coffee; the constructor changed when I copy-pasted the class. So this is everything I need to do to transform the class from a JAX-RS resource. Now this call method will be called for every order pulled from Kafka — from orders — and it will produce a beverage to queue.
The last thing that I need to do is the configuration, which I will go through really fast because we don't have that much time. The properties always start with mp.messaging. Then you have either incoming or outgoing, depending on which direction of the channel you are configuring, then the name of the channel, and then the key. The connector is the one mandatory value that you need to set: here we are saying that we are using the SmallRye Kafka connector; there is also SmallRye AMQP, et cetera. After that, you can configure basically any values for the connector itself. These values are specific to Kafka, like the serializer, which just says that we will be using JSON-B to serialize our messages. I don't have time to go into much more detail, so let's quickly jump to the coffee shop service to show you how this works. In my coffee shop service I have this POST method using HTTP, which we were using so far. I will create a new method and call it messaging. This one will also return an Order, because I will not have the result ready when I return from this method. It will also take an order. The first thing I need to do is set the order ID, for which I have a helper, and at the end I directly return the order, now with its ID set. And what I need to do here is send the asynchronous messages to my queue and orders channels. To do that, I can inject the channels directly into my bean: this is a JAX-RS resource, which is also a CDI bean, so I can inject with the @Channel annotation. The first channel will be orders; I will use something called an Emitter, also coming from the specification, which will be emitting orders to the orders topic. The second channel is called queue, and this one will be emitting beverages, which are the JSONs that are going to be displayed on the board.
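The mp.messaging naming convention described above could look roughly like this in an application.properties file. The channel and topic names come from the demo; the serializer and deserializer class names (Quarkus's Kafka serialization helpers and an `org.acme` package) are assumptions.

```properties
# --- coffee shop service ---
# channel "orders": outgoing to the Kafka topic "orders"
mp.messaging.outgoing.orders.connector=smallrye-kafka
mp.messaging.outgoing.orders.value.serializer=io.quarkus.kafka.client.serialization.JsonbSerializer

# channel "queue": outgoing to the Kafka topic "queue"
mp.messaging.outgoing.queue.connector=smallrye-kafka
mp.messaging.outgoing.queue.value.serializer=io.quarkus.kafka.client.serialization.JsonbSerializer

# channel "beverages": incoming, explicitly mapped to the "queue" topic
mp.messaging.incoming.beverages.connector=smallrye-kafka
mp.messaging.incoming.beverages.topic=queue
mp.messaging.incoming.beverages.value.deserializer=org.acme.BeverageDeserializer

# --- barista service ---
mp.messaging.incoming.orders.connector=smallrye-kafka
mp.messaging.incoming.orders.value.deserializer=org.acme.OrderDeserializer
mp.messaging.outgoing.queue.connector=smallrye-kafka
mp.messaging.outgoing.queue.value.serializer=io.quarkus.kafka.client.serialization.JsonbSerializer
```

The pattern is always `mp.messaging.[incoming|outgoing].<channel-name>.<key>`, with `connector` mandatory and everything else passed through to the connector.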
So here all I need to say is orders.send(order) and queue.send(beverage), and I have a static factory helper in my Beverage class which, for the order, creates a beverage in the IN_QUEUE state. This is everything that I need to do. One more thing that I'm missing, and I will go fast here, is something that will push events to the UI. So I will create a new JAX-RS resource, the board resource, and because we don't have time, I will create it really fast. Basically, this will just consume messages from the beverages channel into a Multi. So now I'm not pushing, I'm consuming, and I will use something called server-sent events to push my messages dynamically to the UI. I'm sorry, I don't have time to cover this in more detail. So again, a bunch of configuration: we need to separately configure orders for pushing, queue for pushing, and queue for consuming into beverages. In my board resource I use the channel name beverages, and you can see here that I need to specify that the Kafka topic it maps to is actually queue; otherwise, it would default to the channel name. This is just to show you how that works. One last thing that I need to do is create a deserializer, because I need to say how the strings from Kafka are transformed into the JSON objects that I injected. For that, I create a new Java class which will just extend ObjectMapperDeserializer<Beverage>; it basically just does the JSON deserialization. I need to provide a constructor without parameters which calls super with the Beverage class. And hopefully, if I typed everything correctly, I can just restart the services really fast. I already have Kafka running in the background in a Docker container, so if I typed everything correctly, the services should be able to connect to it. And now I can say here, DevConf, switch to messaging, and I should be able to place the order.
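Putting the pieces just described together, the coffee shop side could be sketched as below. This is an illustration, not the exact demo code: `generateId`, `Beverage.inQueue`, and the paths are assumed names; the `@SseElementType` annotation is RESTEasy's, and `ObjectMapperDeserializer` is Quarkus's Kafka helper.

```java
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import org.jboss.resteasy.annotations.SseElementType;
import io.quarkus.kafka.client.serialization.ObjectMapperDeserializer;
import io.smallrye.mutiny.Multi;

@Path("/coffee")
public class CoffeeShopResource {

    @Inject @Channel("orders") Emitter<Order> orders;
    @Inject @Channel("queue")  Emitter<Beverage> queue;

    // Fire-and-forget: push the order to Kafka and return immediately;
    // the caller gets back just the order with its ID set.
    @POST
    @Path("/messaging")
    public Order messaging(Order order) {
        order.setId(generateId());            // hypothetical ID helper
        orders.send(order);
        queue.send(Beverage.inQueue(order));  // board shows the IN_QUEUE state
        return order;
    }
}

// In its own file: streams beverage updates to the UI via server-sent events.
@Path("/board")
public class BoardResource {

    // Consumes the "beverages" channel (mapped to the Kafka topic "queue").
    @Inject @Channel("beverages") Multi<Beverage> beverages;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType(MediaType.APPLICATION_JSON)
    public Multi<Beverage> stream() {
        return beverages;
    }
}

// In its own file: tells Kafka how to turn the topic's JSON back into Beverages.
public class BeverageDeserializer extends ObjectMapperDeserializer<Beverage> {
    public BeverageDeserializer() {
        super(Beverage.class);
    }
}
```

Note how the HTTP layer never waits on the Barista anymore: responsiveness comes from returning right away and letting the board stream state changes as they happen.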
You see that it's in the queue, and after the Barista processes it, it is ready. But what I can do now is queue multiple orders, and as they become ready, they are processed in my application; you can see this happening multiple times, at random moments. Okay, so if there are any questions, please ask; I still have one more thing that I want to show. Please write your questions into the chat if you have any. In the meantime, I will very quickly compile the Barista in the background and run it manually with java -jar. Hopefully it will still work, but now we should get a different barista name, because one is picked every time it is restarted. What I can do now is stop the Barista, to show you that I can still place orders; they will just not get processed. But if I restart the Barista again, the messages are now saved in the Kafka topic, so when it comes up, you will see that it starts processing the pending messages. And one last thing that I want to show you is starting another instance of the Barista in the background, which will connect to the same queue. What I can hopefully show you now is that Oliver and Mia will be pulling messages separately, depending on their availability, from the same queue. So in this way, we have the elasticity and resiliency that I was talking about. And for time reasons, I will just skip through the slides to the end. Here you have links to everything that I talked about: the first specification, the second specification, the SmallRye Reactive Messaging implementation, and SmallRye Mutiny. And with that, I think I still have one minute for questions. So thank you for your attention, and if you have any questions, please write them into the chat. Thank you very much, Martin, for your presentation. I don't see any questions in the chat or the Q&A section. So if that's it, thank you again. And if you want to talk or discuss anything, feel free to go to WorkAdventure; you can discuss anything relating to your topics there.
It's a virtual platform, so it's a great place to interact with each other. So I encourage you to go there. Thank you again. Thank you too. Talk to you later. Bye.