And we're back here to talk about more Quarkus, but now reactive programming. So yes, at this point you already know that Quarkus is a stack to write cloud-native microservice and serverless applications. And some of the Quarkus benefits are, as we've seen before, developer joy, supersonic subatomic Java. But right now we're interested in this part: Quarkus unifies imperative and reactive styles, so you can do both imperative and reactive in your project. And let's try to discuss what this reactive thing is. When we're talking about reactive, first, there are different definitions of what reactive is. The one that we're going to use comes from the Reactive Manifesto. And the Reactive Manifesto has some requirements. For example, responsiveness, which doesn't mean you have hard real-time constraints or a strict SLA in which you need to reply to a request, but yes, you need to respond to all requests in a timely manner. And since we're talking about reactive, we can say something about reactive systems. Reactive systems need to be responsive. These systems also need to be resilient, which means they keep working even if one of the endpoints goes down. They also need to be elastic, which means they have to be able to scale as the demand goes up. And to be able to do all of that, they need to communicate between each one of the endpoints through messages. These messages can be propagated, for example, through a message broker, or through some other sort of internal networking, even though it's much more common to have a broker. And if we have reactive systems, we have reactive programming. Reactive programming is your ability to react to events. In traditional, imperative programming, you are used to saying: this is going to happen, then the next step is going to happen. You break your program into statements, and they are executed in sequence.
When we're talking about reactive programming, first, everything that we do in reactive is asynchronous. What is the difference between synchronous and asynchronous? Well, when we have a synchronous command, if this operation takes a lot of time to process, then we have to wait for this operation to finish before we can execute the next statement. In asynchronous programming, no. All of the statements that we execute return immediately. If it's a long-running operation, it still returns immediately, and once the operation is finished we get a notification saying: well, the operation is finished, you can go and get the result. That is the difference between traditional, imperative, synchronous programming and asynchronous programming. And once we have asynchronous programming, we can start to think about reactive programming. Reactive programming is your ability to react to events that happen in our environment. So how can we translate that into code? Suppose that in imperative programming you say that variable A receives the result of B plus C. That's traditional imperative programming. And if you want the value of A to be updated again to B plus C, because, for example, the value of B changes, you have to run this command again: A receives B plus C. In reactive programming, you just say: A will receive the value of B plus C. And whenever B or C changes, the value of A will be automatically updated. That's one of the properties of reactive programming. And one of the ways for you to be able to program this way, where A is updated whenever B or C changes, is that whenever B or C changes, you generate an event. This event is propagated through a data flow, a stream, and then you react to the values that are presented on the stream.
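The "A = B + C" idea can be sketched in a few lines of plain Java. This is not a Quarkus or RxJava API, just a minimal illustration of the principle: a cell fires an event when it changes, and a derived cell recomputes itself whenever one of its inputs fires.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntSupplier;

// A minimal sketch (illustrative names, not a framework API): a Cell notifies
// listeners when its value changes, and a derived cell recomputes itself
// whenever one of its inputs fires a change event.
class Cell {
    private int value;
    private final List<Runnable> listeners = new ArrayList<>();

    Cell(int value) { this.value = value; }

    int get() { return value; }

    void set(int newValue) {
        this.value = newValue;
        listeners.forEach(Runnable::run);   // propagate the change event
    }

    void onChange(Runnable listener) { listeners.add(listener); }

    // A cell whose value is recomputed from its inputs on every change.
    static Cell derived(IntSupplier formula, Cell... inputs) {
        Cell result = new Cell(formula.getAsInt());
        for (Cell input : inputs) {
            input.onChange(() -> result.set(formula.getAsInt()));
        }
        return result;
    }
}

public class ReactiveCells {
    public static void main(String[] args) {
        Cell b = new Cell(1);
        Cell c = new Cell(2);
        Cell a = Cell.derived(() -> b.get() + c.get(), b, c);
        System.out.println(a.get()); // 3
        b.set(10);                   // A is updated automatically
        System.out.println(a.get()); // 12
    }
}
```

In imperative code you would have to re-run `a = b + c` yourself; here the change event does it for you, which is exactly the spreadsheet behavior discussed next.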
That's how reactive programming works, and that's what I'm going to show you. One of the very first examples of reactive programming that we have is the spreadsheet. In a spreadsheet, you just say: this cell is going to be the result of the sum of all of these other cells. Whenever one of them is updated, the result of this one is updated too. So who would say that Microsoft, creating Excel, was doing reactive programming many years ago? And there are other interesting things in the realm of reactive programming. For example, you can do reactive programming using events. You can do reactive programming using the actor model. You can also do reactive programming using fibers, which is a feature that we don't have yet in the Java language, but is soon to be added. If you follow Project Loom, we'll be able to use fibers in Java soon. I don't know, Java 14 or 15, but soon enough. Certainly before the next LTS release, which will be Java 17. And think about fibers as lightweight threads. You can do asynchronous programming as if you were coding imperative programming, so it will be a very nice transition between the imperative world and the asynchronous world. Lightweight threads: that's a nice way for us to explain fibers. And since we're talking about reactive programming, asynchronous programming, non-blocking, yes, we need to talk about streams. As I said before, one of the ways that we can be notified about changes in some values is through streams. And we also have the Reactive Streams specification. Reactive Streams says: well, we have data flowing from one point to the other, so whenever something changes, we have a notification on this data flow. And have in mind that Reactive Streams is a very low-level specification. Most programmers don't want to program at the level of Reactive Streams.
It's better if you can use an extension to do this kind of programming. But before we reach the extensions, I have to say that Reactive Streams is also specified in MicroProfile. Since Quarkus implements MicroProfile, yes, we already have Reactive Streams. Reactive Streams also specifies a protocol for back pressure. What is back pressure? Back pressure states that if you have a producer and a consumer of messages, and the producer is sending messages too frequently while the consumer is kind of slow, so there are too many messages and the consumer is going to be overwhelmed by them, the consumer has a way to tell the producer: slow down, because I'm not able to consume all of the messages that you're sending. And since the system needs to be responsive, you know that all of the messages very likely need to be processed in a timely manner, so overwhelming a consumer doesn't help your system. So we have Reactive Streams, but to code our application we usually use an extension. Two of the most popular extensions are RxJava and RxJava 2. We also have Project Reactor from Pivotal, which is used in Spring. Both of them are roughly equivalent. For example, the Netflix OSS stack is coded with RxJava or RxJava 2. So if you ever used observables in your code, you were very likely using RxJava. If you're using Spring reactive, you're very likely using Reactor. Now, I don't have a PhD in reactive, but one of my engineering colleagues, Clement Escoffier, has a PhD in reactive systems and reactive programming. And for his particular taste, he prefers RxJava, and that's why Quarkus uses RxJava as the basis of its reactive model. We're going to code reactive using some Java features and some RxJava features, but I don't intend to give you a full live coding example of reactive programming.
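The back pressure protocol described above can be seen in the JDK's own copy of the Reactive Streams interfaces, `java.util.concurrent.Flow` (Java 9+). In this sketch the subscriber controls the pace: it requests one item, processes it, and only then requests the next, so a fast producer can never overwhelm it. It also shows why the spec is called low-level: you implement four callbacks and manage demand by hand.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Back pressure with the JDK Flow API: the subscriber asks for ONE item at a
// time via Subscription.request(1), pulling at its own speed.
public class BackPressureDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    this.subscription = s;
                    s.request(1);              // initial demand: one item
                }
                public void onNext(Integer item) {
                    received.add(item);        // "slow" processing would go here
                    subscription.request(1);   // ready for the next one
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete()          { done.countDown(); }
            });

            for (int i = 1; i <= 5; i++) publisher.submit(i);
        } // closing the publisher signals onComplete

        done.await();
        System.out.println(received);  // [1, 2, 3, 4, 5]
    }
}
```

Extensions like RxJava or Reactor implement this same protocol underneath, which is why most programmers prefer their higher-level operators to writing subscribers by hand.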
I still believe that 90% of the code will be written using imperative programming for most business use cases, but for some pieces of your system where you really need a high throughput of messages, where you need low latency for responses, maybe that piece will use reactive programming. So where is reactive being used? Well, one of the most popular reactive frameworks out there is Vert.x. And Vert.x is known to be one of the fastest application frameworks available. We have a lot of companies in Europe, for example, using Vert.x for their front-facing applications. So if you have a high workload where you need to respond to a lot of requests, and at the same time you want very low memory consumption and very low CPU consumption, you probably want to use something like Vert.x. And I didn't know this, but last week I was in Brazil talking to some engineers. Brazil has approximately 250 million people. I know, it's small compared to India. But most of them have to pay taxes, and their tax refunds need to be processed. And the system that processes these tax refunds is implemented in Vert.x. So it was a very nice use case; I didn't know that, and apparently it can handle the load. And what is another property of reactive systems or reactive programming? One of the differences becomes clear when we're trying to program, and that's why most of the endpoints that use reactive are network endpoints. When you have a public endpoint that needs to receive a lot of requests, traditional programming will use synchronous programming with threads. And when you use threads, it's very natural: you have 100 threads, and when you receive 100 requests and all of them are being processed, request number 101 will be blocked. It has to wait, or, depending on your policy, it might even be dropped, okay? But it won't be processed.
If you're using reactive programming, if you're using a reactive framework like Vert.x or Quarkus, you don't use one thread per request. You just use plain asynchronous programming. So you will be able to handle requests no matter how many you are receiving. You don't have a limit like 100 threads; you'll be able to process a lot more requests. The more load you get, the slower your system might get, but all of these requests will be processed. Okay? I don't know how many of you have ever coded in Node.js. Node.js uses a similar model: it implements the reactor pattern. And one of the issues with Node.js, which is very well known, is that Node.js uses a single thread to process everything in your system. Which is very nice, but of course, if you have a server with 32 cores and you have an application that uses one single thread, maybe there are some resources that you're not using well. So what Vert.x does is implement the multi-reactor pattern, which gives you one event loop per core. Node.js has one single event loop; Vert.x implements the same event loop, but one per core. So if you have 32 cores, you will have 32 event loops, which increases the scalability a lot. Now, we said a lot about reactive, but we didn't define what reactive is. If you go to the Oxford dictionary, the definition of reactive is: showing a response to a stimulus, like whenever something happens, do the other thing; or: acting in response to a situation rather than creating or controlling it. This is very important, because in imperative programming you are controlling the flow of your code. You know exactly what's going to happen next, because you know the order of execution of your statements. When you're talking about reactive, you don't know the order, or when your statements are going to be executed. You are going to respond to events, which is a much looser kind of control.
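The reactor pattern discussed above can be reduced to a toy sketch: one thread drains a queue of events and runs their handlers. All names here are illustrative, not the Vert.x API; the point is that handlers must never block, because every other event waits behind them. Vert.x's multi-reactor is conceptually this loop, replicated once per CPU core.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A toy event loop (illustrative, not Vert.x): ONE thread takes events off a
// queue and runs their handlers in order. Blocking inside a handler would
// stall every event behind it, which is why handlers must be non-blocking.
public class ToyEventLoop {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    // Called from any thread: enqueue work for the loop to process.
    public void dispatch(Runnable handler) { events.add(handler); }

    // Enqueue a poison-pill event that stops the loop.
    public void stop() { dispatch(() -> running = false); }

    // The single event-loop thread: take the next event, run its handler.
    public void run() throws InterruptedException {
        while (running) {
            events.take().run();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ToyEventLoop loop = new ToyEventLoop();
        loop.dispatch(() -> System.out.println("request 1 handled"));
        loop.dispatch(() -> System.out.println("request 2 handled"));
        loop.stop();
        loop.run();   // runs on the current thread until stop() is processed
    }
}
```

A multi-reactor version would simply start `Runtime.getRuntime().availableProcessors()` of these loops, one pinned per core, which is the Vert.x design described above.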
Reactive means an application reacting to stimuli, such as user inputs, messages, and failures. And in the particular case of reactive programming that we have right now, we will be responding to messages, which very likely are going to be propagated through an event bus. And this event bus implementation could be, for example, an in-memory channel, if you're just processing messages locally, or this event bus could be implemented using a broker, which could be a traditional message broker like ActiveMQ, or another kind of broker, for example Kafka. So you have these options, and if you're using Reactive Streams, it really doesn't matter which implementation you are using, because your code will always be the same, okay? So, depending on your requirements: suppose that today it's very nice to use an in-memory event bus implementation, but later you decide that you want to distribute these invocations through your network. And you decide: well, today I think that a traditional message broker is a better implementation, because it's already running in my network; it's just another topic that I have to create in my message broker. Or maybe later you decide that there are some specific use cases that need Kafka, so you're going to switch to Kafka. You don't change a single line of code if you're using Reactive Streams. You just go there, change the configuration, point to a different broker, and it's already running. That's one of the beautiful things that you can do if you're using something like Quarkus, Vert.x, and Reactive Streams. So, is reactive event-driven? Well, reactive systems and reactive programming, yes, they have concepts that are reused in, for example, event-driven architecture, which is the subject of the last talk of the day. There I'll be talking just about event-driven architecture.
So yes, event-driven architecture doesn't require you to be reactive, but depending on your case, your life will be much easier if you're using reactive programming or reactive systems to achieve an event-driven architecture, okay? So when we're talking here about reactive, we have events, we have messages and failures, and we are responding to those. And when we're talking about event-driven applications or architectures, that also implies that we have concurrent applications. And for building concurrent applications, here's an example of a stack. These are some of the building blocks if you want to build a reactive system. First, you have to use non-blocking, asynchronous statements in your code. You can't be waiting for something to complete, or else you won't be able to be reactive. Once you have those non-blocking, asynchronous invocations in your code, you can use Reactive Streams to pass messages between each one of your endpoints. And once you have Reactive Streams, then you can use reactive programming to respond to the data flows, to the events that are being propagated on these streams. So these are the building blocks to be able to get to reactive systems. And at the lower levels, if you want to use non-blocking asynchronous operations, IO for example, you will be using Netty, which is the most performant library these days to perform IO in Java. On top of that, you'll use a reactive framework to provide you the Reactive Streams; in our case, we want to use Quarkus and Vert.x. And on top of that, we'll have our application code, where we will use our programming model. If you're very old school, you're using callbacks, with all the dangers of callback hell. If you've ever used JavaScript, you know what I'm talking about: callbacks calling callbacks calling callbacks, which means you have a lot of indentation in your code and it is very hard to follow.
But if you're using a modern library, very likely you'll be using RxJava or Reactor or something else, okay? And also, that's the example I gave before: if you have a blocking framework, it's using multithreading, which means that if you have 100 threads, you have 100 requests being worked on at the same time. If you have a reactive framework using an event loop, you can have technically an unbounded number of requests being processed at the same time. Your system will get slower, but it will still be able to handle the input it is taking. And you might be thinking: well, then we have two very separate worlds that are very different, and how do I bridge these worlds? Depending on the framework that you're using it might be harder, but if you're using Quarkus and Vert.x, for example, they bridge very well together. So if you want to have a reactive endpoint facing the network, the public endpoint of your system, but you're still using a JDBC driver that is blocking, you can. What do you do? We have a compatibility layer, so you can be reactive in almost everything, but for this piece of code that is still blocking, you create an adapter, and this adapter will use a thread pool. So: a thread pool for blocking operations, reactive for everything else on the event loop. Vert.x was a pioneer of reactive database drivers, which have been available there for more than five years, and other projects have been catching up on how to create reactive database drivers. For example, the first implementation was a reactive driver for PostgreSQL, and we're trying to implement reactive drivers for most open source databases. Of course, there are some databases that don't provide their drivers as open source, so we can't do anything about that. Oracle, for example: the driver is not open source, and there is nothing we can do about it.
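The adapter idea above can be sketched with plain `CompletableFuture`: the event loop stays non-blocking, and any legacy blocking call (a classic JDBC query, say) is pushed onto a dedicated worker pool. All names here are illustrative, not the actual Quarkus or Vert.x compatibility API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of bridging the blocking and reactive worlds: blocking work runs on
// a reserved worker pool, and the caller gets a future back immediately.
public class BlockingAdapter {
    // Worker pool reserved for blocking operations only; the event loop
    // threads never execute blockingQuery directly.
    private static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Stand-in for a blocking JDBC call that takes a while.
    static String blockingQuery(String sql) {
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        return "result of: " + sql;
    }

    // The reactive-facing wrapper: returns immediately with a future that
    // completes once the worker pool finishes the blocking call.
    static CompletableFuture<String> queryAsync(String sql) {
        return CompletableFuture.supplyAsync(() -> blockingQuery(sql), workers);
    }

    public static void main(String[] args) {
        queryAsync("SELECT * FROM coffee")
            .thenAccept(System.out::println)  // reacts when the result is ready
            .join();  // join only for the demo; event-loop code would not block
        workers.shutdown();
    }
}
```

A truly reactive database driver removes even the worker pool, because the driver itself never blocks; the adapter is the fallback when you are stuck with a blocking one.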
But for the other ones, we're helping. And who would have said that, for example, Microsoft has an open source JDBC driver for SQL Server? So there we can help; for the others, we can't. Now, from event-driven applications to event-driven systems, from reactive applications to reactive systems: because these days everybody's trying to do microservices, we need to expand these capabilities from single applications to reactive systems, so we need to keep being responsive, message-driven, elastic, and resilient as we go from one application to multiple applications. So far this was about concurrent applications: one single node, one single endpoint, using all of this. When you apply these concepts to systems, composing a system from multiple separate endpoints, from multiple separate microservices, you can build responsive distributed systems that use elasticity, resilience, and, as I said before, messages. If you're using streams to exchange messages within a single reactive application, you can use an in-memory channel. When you have multiple distributed endpoints, in-memory is not an option anymore; you will use a message broker. So reactive implies being responsive, and the key to being responsive in this reactive world, of course the other properties are important too, but one of the requirements for achieving them, is being message-driven with asynchronous message passing. And why do I say asynchronous message passing? Because messages can be synchronous too. You can send a message to a server and wait for the reply. If you're doing that, you're blocking, and it doesn't work. Traditional message brokers have one way of working which is request-reply: you send a message and you wait for the reply on the same channel. And of course it doesn't scale. I don't know how many of you have ever implemented this pattern, but it only works for very simple situations.
If you really want to scale out, everything needs to be asynchronous. Like: I've sent a message, and I'll be notified when this message has been processed, or I'll get notified about the response, and I never wait for anything that is happening. One good example of what being reactive is or isn't: suppose that you have an endpoint, and this endpoint is going to generate a report for me, and this report is built from a very complex SQL query. I'm giving this example because I did that many times in the past. So suppose that this SQL report takes five minutes, and the request times out while the report is still being generated, so nobody is going to get the response from that request. Now the user issues the same request again, asking to generate the same report, which is going to be processed again. Now it's not going to take five minutes, it's going to take eight or ten minutes, because two of them are being processed at the same time. And maybe this time you get the response. So what would be the reactive way of implementing the same use case? Well, the user asks for the report. It sends a message and it returns: your report is being generated, right? And when the report is done, the user receives a notification: report X is ready, click here to view it. Then it opens, and we see the HTML or the PDF on screen. That's a reactive implementation that I guarantee scales much better than the synchronous blocking operation, because before joining Red Hat I used to be a consultant, and I solved many scalability issues just by implementing a reactive approach, for example for reports. Because it's very common for customers, at least it used to be, and I suppose it's still common today: people complain, my system is slow, you go there, the database is the bottleneck, and what's happening is that people are issuing multiple requests for reports all the time.
And well, maybe if you don't process everything all the time, if you have fewer requests, you'll be much better off. One of the benefits of this approach is that, if it's very common for users to request reports with the same information, you can even cache the replies for quite some time, and alleviate the bottleneck on your database even further. So reactive has a lot of benefits. Again, your system will perform much better, and you can even put some caching on your reports, which will improve the performance of your system even further. It's a very nice example of creating a reactive system which doesn't necessarily use reactive programming. That's another important concept to explain. Reactive systems have these properties: they are elastic, responsive, resilient, and message-driven. I'm not saying which technology you should be using; I'm saying nothing about the way you should be programming. Usually it's much easier to achieve a reactive system if you're using reactive programming, okay? But it's not a requirement. That's what I said about this report use case: I did it in traditional Java EE. I didn't use any reactive programming, and we achieved a reactive system, at least in this particular use case, without using reactive programming. But luckily that was many years ago; in 2019 we have a lot of options for creating reactive systems using reactive programming. Async programming is different from multithreading, as in the example that I gave before about the number of requests. Async means that you're never going to block for any operation. If it's long-running, I don't wait for it: I return immediately and I check the result of the processing later. That's the basics of being asynchronous. The reactive part is receiving the notification that the result is ready and responding to that. And now, from HTTP to messaging.
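The asynchronous report pattern described above can be sketched with `CompletableFuture` and a correlation ID. All names here are illustrative: the request returns an ID immediately, the report is generated in the background, and the caller is notified (or polls) when it is done, instead of holding an HTTP connection open for five minutes.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the reactive report use case: submit returns at once with a
// correlation ID; the result is delivered asynchronously when it is ready.
public class ReportService {
    private final Map<String, CompletableFuture<String>> jobs = new ConcurrentHashMap<>();

    // Stand-in for the five-minute complex SQL report.
    private String generateReport(String query) {
        return "report for: " + query;
    }

    // Returns a correlation ID immediately; the work happens in the background.
    public String requestReport(String query) {
        String correlationId = UUID.randomUUID().toString();
        jobs.put(correlationId, CompletableFuture.supplyAsync(() -> generateReport(query)));
        return correlationId;
    }

    // "Report X is ready, click here to view it": completes when done.
    public CompletableFuture<String> reportReady(String correlationId) {
        return jobs.get(correlationId);
    }

    public static void main(String[] args) {
        ReportService service = new ReportService();
        String id = service.requestReport("sales by region");  // returns at once
        service.reportReady(id)
               .thenAccept(report -> System.out.println("ready: " + report))
               .join();  // demo only; a real system would push a notification
    }
}
```

Caching would slot in naturally here too: keyed by the query, completed futures can be handed out to repeated identical requests without touching the database again.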
So when we were discussing reactive applications, I think it was a simpler discussion. But when we're talking about reactive systems, which are composed of multiple endpoints, I think the very first image that comes to your mind is: well, now that we're creating microservices, I need to distribute the data somehow, and whenever I need information that is available in another endpoint, what do I do? I create an HTTP endpoint. I expose that information over REST. And whenever I need it, I issue a remote invocation, fetch the information, and process it. What is the problem with this approach? First, it's strong coupling: I'm tied to that particular endpoint, I need to know where the information is. So there's strong coupling between the two. And another problem is that it's not only strong coupling; it is uptime coupling. If you've studied distributed systems theory, you've seen that there's something called temporal coupling. Temporal coupling means that your system only works if both parts are executing at the same time. With uptime coupling, your system only works if both endpoints are available at the same time. And when I say availability, you have some SLAs: both need to have the same performance level at the same time to be able to process the request in a reasonable manner. So what happens? I need to process the information, but I'm using REST over HTTP. I invoke the other endpoint, but the other endpoint is overloaded. I won't be able to process my request, because my reply is going to take forever, if I get a reply at all. The same happens if the other endpoint is down: I can't process my request, which means that I'll have to report an error because the other endpoint is down. And I'm giving an example with just a single endpoint. Suppose that the information you need to process your request is distributed across 10 different endpoints.
If any of them is down, or if any of them is slow, you won't be able to respond to your request in time. That's the problem when you're using HTTP, for example, for communication. That's the problem when you're using synchronous programming, because HTTP is inherently synchronous. It can be asynchronous, but most of the time we call it in a synchronous way. And even if you are using asynchronous HTTP, the problem is that we still have uptime coupling: if the other endpoint is down, our endpoint has to be down. To solve these problems, for example cascading failures, where one endpoint brings the entire system down because it failed, and failed, and failed, we implemented some smart strategies called, for example, circuit breakers. And circuit breaking, depending on your use case, might be useful; depending on your use case, it's not. Why am I saying that? Because in most examples that you see on the internet, oh, I need a circuit breaker, and the fallback returns a default message for you. And talking about default messages, I would say that these days, at least for me, there is a clear separation between two types of systems that we're developing. We have enterprise information systems, where 90% of the developers worldwide are working. For these systems, the business model is very complex: you have a lot of data, and you have a lot of correlations between your data, a lot of relations between your data. That's why we have very complex reporting, and that's why SQL is still king in the business domain model use case. For these systems, the business model is super complex, and the infrastructure is simple compared to the business model; you don't have many scalability issues in the infrastructure. For internet companies, for startups, like Google, Netflix, Amazon, Facebook, or your new fintech which is a unicorn, for these companies the business model is simpler, but the infrastructure is very complex. So you have these two scenarios.
And in the past five years, we've been trying to apply these internet company solutions to enterprise business applications. But we've seen that there is no one-size-fits-all solution, and circuit breaking is one example. For Netflix, it works perfectly. Because at Netflix, I don't know if you ever noticed this: sometimes you open the Netflix application and you see that the first row is the recommended movies for you. You look at these movies and think, there is no way I'm going to watch this. And why is that? Well, maybe the recommendation service was down, and they're just providing a fallback, which is a default list of recommendations. You see, for Netflix, providing a fallback in this use case is perfectly fine. On the other hand, I don't know how many of you have noticed this: for example, when you're watching a movie or a series and you've asked for subtitles, sometimes you realize that for that particular scene you didn't have subtitles, but in the next scene you have them again. And when that happens you think, oh, it was a problem with my internet connection. Well, maybe it was the subtitles service that was feeding you that information, and oops, it had a glitch. And what is the fallback? Well, instead of showing a "no subtitles" error message, you just don't show anything. Most people won't even notice that they didn't have subtitles for that particular scene. So that's how fallbacks work in that business domain. But suppose you need to compute something. You need to generate a report. You need to add a certain amount to an order. You need the order information, you need the account information. How do you provide a fallback for that? You see, in the enterprise information systems world, it's much more complicated to use circuit breaking. And one way for us to solve this problem, the best solution for this, I'm going to discuss in the last talk, so I won't dig into it now.
But to solve the problems of HTTP and synchronous communication, we need to use messaging for communication between your endpoints. In the traditional HTTP synchronous world, we need to fetch the information: we go and pull the information that we need to process our requests. In a message-driven world, in an asynchronous world, in a reactive systems world, we don't ask for information. We receive the information. And why is that? Well, remember what I said about reactive programming and reactive systems: I want to be notified every time, for example, that data changes. So if I need the customer information to process my request, I'm not going to fetch the customer information, because that's imperative programming, that's a synchronous system. When I'm building a reactive system, I have, for example, a local copy of the customer information, but the endpoint responsible for the true data, the canonical source of information, is the customer microservice. So what happens in our reactive system? Whenever the data changes there, there will be a message going through a data stream. And of course, since I'm interested in the customer information, I will plug into this stream and I'll be notified whenever the customer information changes. So you will see: oops, the customer information changed, I need to update my local database so I can process this information. This is just part of the solution; I'll be talking more about it in the last talk. But just so you know: the solution for creating resilient, responsive, and elastic systems in a reactive way is using messaging. So instead of having data at REST, which means using REST for everything, maybe the data needs to move. Maybe we need to process everything as events, so whenever something changes, it just notifies everybody, and everybody that is interested in the changed information will get it. And what does this have to do with Quarkus?
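The "data in motion" idea above can be sketched with a toy in-memory event bus: instead of fetching customer data over HTTP, an interested service subscribes to a channel and keeps its local copy up to date as change events arrive. The bus here is a stand-in for a real broker like Kafka or ActiveMQ, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// A toy publish/subscribe bus (illustrative, not a broker API).
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    void subscribe(String channel, Consumer<String> handler) {
        subscribers.computeIfAbsent(channel, k -> new ArrayList<>()).add(handler);
    }

    void publish(String channel, String message) {
        subscribers.getOrDefault(channel, List.of()).forEach(h -> h.accept(message));
    }
}

public class CustomerSync {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        Map<String, String> localCustomerCopy = new ConcurrentHashMap<>();

        // The order service keeps its local copy fresh by reacting to events,
        // instead of issuing an HTTP call every time it needs customer data.
        bus.subscribe("customer-changed",
                      msg -> localCustomerCopy.put("customer-42", msg));

        // The customer microservice, the canonical source, announces a change;
        // nobody had to ask for it.
        bus.publish("customer-changed", "customer-42 moved to a new address");

        System.out.println(localCustomerCopy.get("customer-42"));
    }
}
```

Swapping this toy bus for a real broker changes the transport, not the shape of the code: the subscriber still just reacts to messages on a channel, which is the promise of the Reactive Streams abstraction discussed earlier.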
Well, Quarkus is supersonic subatomic Java, and it is also reactive. Quarkus will allow you to use async HTTP? Yes. You want to use synchronous HTTP? Yes. And if you want to use messaging, yes, Quarkus will allow you to do that too. As I said before, we can use many different broker implementations on top of Quarkus. And since we also like standards, we also implement MicroProfile Reactive Messaging. And here is a very important message for you. When people think about standards, they think: oh, standards are slow, standards are always behind. And that's partially true because of the nature of standards. Red Hat has always been committed to standards, and it's no different with Quarkus. But with Quarkus, we had to make a choice, and we decided that between following standards and providing innovation to developers, we will always choose innovation first. So if the standard is slow, we'll do it ourselves and then later try to push the changes to the standard. We are not waiting for a committee to specify something before we implement it. So even before the Reactive Messaging specification was available, Quarkus already had reactive messaging, and with the results of our research on top of that, we were contributing these changes back to the Reactive Messaging specification. Once it was ready, we just changed the API to follow those guidelines. And this is the way we're doing things for almost everything these days. So, the demo that I have right now is available on GitHub, as mentioned here in the comments. Before lunch, you are going to receive an email with all of these slides and also the links to all the source code used in the presentations. So don't worry about that. But this slide in particular points to this demo, which was implemented, by the way, by Clement Escoffier.
My friend is a reactive expert, and he tried to show a simple example of how we can model, for example, a coffee shop with synchronous HTTP requests and with an asynchronous reactive system. So what's the use case? If you have a synchronous coffee shop, if you're using HTTP for your coffee shop, the use case is like a very simple coffee shop where the same person is the cashier and the barista. So you have one person, which means I only have one thread, and you have five customers in the line. This person will go to the cashier. Oh, what do you want? I want a cappuccino. Okay, take the order. You go to the coffee machine. You prepare the cappuccino and return the cappuccino. Finished? Yes. Next customer. And you can tell that this approach doesn't scale that well. So we have very limited scalability. The consequences of that: usually your coffee is cold, or you don't get a coffee at all under peak loads, or something like that. Or if the person gets sick or needs a restroom break, no coffee at all, okay? Or if the barista is kind of overwhelmed, processing a lot of requests at the same time, well, the real world doesn't work exactly this way, but suppose that the person is your CPU and is processing a lot of things at the same time; by the time it gets the coffee and delivers it to the user, it's going to be cold already, okay? Not the best analogy, but you get the point, okay? And how do I change this synchronous communication to async and later reactive? Well, I have two examples here. If I'm serving a beverage in a synchronous REST HTTP endpoint and I want to be non-blocking, if I want to be asynchronous, not even reactive yet, just asynchronous, I just change the return type from Beverage to CompletionStage<Beverage>. And if you look at the source code, the two endpoints are implemented this way.
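Just as a sketch of that difference in signatures, here are the two styles side by side. The class and method names are illustrative, not the exact demo code; `CompletionStage` is the standard `java.util.concurrent` type that such endpoints can return:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class CoffeeEndpoints {

    // Blocking style: the caller's thread waits until the beverage is ready.
    static String orderBlocking() {
        return brew(); // blocks for the full brew time
    }

    // Asynchronous style: returns immediately; the beverage arrives later
    // and the caller is notified when the stage completes.
    static CompletionStage<String> orderAsync() {
        return CompletableFuture.supplyAsync(CoffeeEndpoints::brew);
    }

    static String brew() {
        try {
            Thread.sleep(100); // simulate the coffee machine taking time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "cappuccino";
    }

    public static void main(String[] args) {
        System.out.println(orderBlocking());          // waits, then prints
        orderAsync().thenAccept(System.out::println)  // registers a callback instead
                    .toCompletableFuture().join();    // only so the JVM doesn't exit early
    }
}
```

The key point is that `orderAsync` hands back the `CompletionStage` right away; the caller attaches a callback with `thenAccept` rather than sitting on a blocked thread.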
And if I want to process that later: if my endpoint is HTTP, client.order in this case is a synchronous blocking operation, and it will return my beverage, which in this case is my coffee. If I'm using asynchronous programming, then this method is going to return immediately, and the beverage comes later; my customer gets notified once it's completed. That's how it should work. So let's try to go to the demo. Instead of showing the code, let's see, and hopefully my demo works. So if I go here, localhost, I forgot the name of the endpoint, let's go here to my Quarkus coffee shop demo, coffee shop service, I don't know. Well, nothing that a refresh can't solve. So if I just refresh here, I have the dashboards of my application. This is going to be the Starbucks demo, because at Starbucks, you know, you usually have one person taking the orders and you have many baristas able to deliver your coffee. That's the reactive way of doing that. And just to make the analogy work even better, let's see what happens when you reach the Starbucks. You go to the Starbucks, you place an order, the cashier will get a cup of coffee, write your wrong name on the cup, and then put your coffee cup in the queue. Okay, then you're good, you're free, because you already paid for the order. You're just waiting for it to be ready. Once the barista takes your cup, produces your coffee, and the coffee is ready, you will receive a notification. The barista will call your wrong name, and hopefully you'll be able to get your cup of coffee. That's how Starbucks works. Starbucks is reactive. So what is the thing? Well, you just paid, and you received a correlation ID. Let's use the proper name: you receive a correlation ID. The correlation ID in this case is your wrong name on the cup of coffee. So you'll know when your coffee is ready.
And what is the reactive part? You could be doing anything else, but when your coffee is ready, you will receive a notification and you then get your coffee. And how do you know that it's your coffee? Because you know your correlation ID, and your correlation ID is your wrong name, okay? So how does the demo work? If I'm using the HTTP one, let me copy-paste my commands here. So I have two endpoints in the same service. I have the HTTP one, slash http, which is synchronous. And I have the async one, which, well, is asynchronous; it's using the strengths of the reactive system. So let's try to invoke this HTTP service through the command line. How do I do that? If I just copy the right lines, it will be like this. I am going to order three different coffees, and I go here to my terminal and I send them. You will note that one request needs to wait for the other request to finish. So I send one request, then when it's finished, I'm able to send another request, and so on. That's how sync HTTP works. And I'm doing a very simple, extreme example because I only have one barista. Because if I had more threads, then all of them would be processed at the same time, and it would be harder to overwhelm. So I'm making very sure I only have one single barista to process everything, because it's a demo. On the other hand, if I want to process it differently, here is the dashboard of my Starbucks coffee shop. And now I can try to overwhelm my endpoint. So I go back here and I go back to my terminal. Luckily I have a script which will try to overload it, but since it's reactive, it will never be overloaded, because everything is asynchronous, on a notification basis. So everybody will get their coffee, and I can place as many orders as I wish. So I'm going to order five coffees, and as soon as I send them, it's already finished. It's much more responsive.
And if I go back to my dashboard, you can see that I still only have one barista, but everybody already has their orders put into the queue, and I'm just waiting for them to be ready. Okay, so the responsiveness of the system is much better. So I sent five orders, they're all in the queue, and as soon as they're ready, somebody calls my name, and how do I know that it's my coffee? Because I have this order ID, which in this case is the correlation ID of the business message. What happens if I send even more messages? For example, I could send like 15 orders at a time, and my system keeps responding. You can see that, well, I was adding more orders and the first coffee was already ready, so you can keep processing at your own pace. Your system never gets overwhelmed if you're using the reactive system style of coding. To be able to do that, what do we have? We have a Kafka bus, in particular in this scenario. We're doing async HTTP. So whenever you ask for a coffee, you immediately get your reply with your correlation ID, and then you're adding the coffee orders to a Kafka queue. You have another endpoint, which is reactive, which is receiving the notifications from this Kafka queue. So whenever a new coffee order comes in, I receive a notification, I'm going to process this coffee, and whenever it's ready, I'm going to put the reply, which says the coffee is ready, into another queue. And this queue is going back here to my browser, which is getting these notifications: in queue, ready, and so on, okay? I could even go further to a third example, because I only have one barista. If I add more baristas to my application, the coffees will be produced twice as fast. So it can scale much better too, because it's a separate microservice. I could scale that and process my requests twice as fast, but the responsiveness of the system would still be the same. Nobody will be waiting to get a reply from the server.
That's a typical use case of a reactive system, and in this particular case I'm using all of the concepts that I mentioned before. It's a reactive system. I'm using asynchronous programming, because all of the HTTP endpoints are asynchronous, and it's reactive because I'm using data flows, reactive streams, to communicate these notifications and trigger something to be executed on the other endpoint, okay? My question is: let's say I have implemented some application, and some mobile clients are there consuming the HTTP requests. So they're asking for user information, whatever, a shopping cart kind of application where I'm asking, give me the list of categories, give me the items in the category, and this is happening on the HTTP server. So when I change my HTTP server to be the async one, the reactive one, in that case I need to change my client application also, because now the application is not going to get the response immediately. So both have to be changed, right? Okay, in the example where you want to check the content of your shopping cart, you can change the HTTP endpoints to be async. You can use a reactive framework like Vert.x to do that. It won't change the synchronous nature of your request: I want the shopping cart and I need it to be displayed now. Okay, so this behavior is still synchronous, but if you choose async and reactive in the backend, you will be able to scale much more, okay? Now let's try a change: how do I architect my application to be a reactive system? The shopping cart view is not a good example, so let's try something different. Suppose that you go to the shopping cart and check out, and when you're checking out, you put in your payment method and all sorts of things. And once it's paid, you have to ship it. A synchronous approach, a non-reactive approach, would be: well, you click on the checkout button and you have to wait for the credit card processing server to return to you.
Then you have to wait for the shipping service to return to you saying that everything's okay, or else there's a fault. This is a synchronous process and definitely non-reactive. What would be reactive? You click on the checkout button and that's it. That's the Amazon experience. Actually, it's the general e-commerce experience. You click on the checkout button, and you get an immediate reply: your order is being processed, okay? And you check back later if everything went well. You know, some e-commerce applications, for example, force you to validate the credit card at that moment. You check out, and it's going to communicate with the credit card gateway, and it's going to take forever to give you a reply, and it asks you, please don't refresh your browser, okay? That's a typical synchronous, non-reactive application. A reactive system wouldn't do that, okay? Check out, your order is being processed, and you added a message to a queue. The credit card payment service is going to be notified about that and it's going to process it. This processing can take like one minute. You don't care, right? Because you already received the reply. And hopefully, well, you can get the result through your browser, but it's much more common that you receive a notification in another way. Usually you receive the notification through email, or, if you're using some sort of client that allows notifications, very likely you get a notification on your mobile phone, on your app or something like that. So, oh, the payment was processed successfully: you receive a notification on your mobile app. And the same thing for the shipping. Next step, payment successful: your system is reactive, so you're going to add a message to the queue telling the shipping service, you can ship this order. And you know that until they package your thing, put the stamp on it, and the carrier goes and picks up your package, it can take some time, but you're not waiting on this operation.
The system is processing your request. Once it gets shipped, you get a notification by email or something like that. But on your browser interface, you won't get that, because of the nature of your application, okay? Just to correlate with what is happening: what I noticed in the Amazon India app recently is that instead of telling you that your order is successful, they say your order is processed, and we are done. And in parallel, if you check email, you have an email saying that the payment is done. So there is already a change in the response, and it is faster. Oh yeah, no, they're very good at taking your money. So you just click the checkout button, and almost immediately: oh, your order is being processed; oh, payment successful; and then shipping takes more time, okay? But yes, that's a very good case of efficiency. But imagine the same case where the payment service wasn't that efficient, and it would take like one day for you to get an answer. Then maybe: it's taking forever, I'll just go to another vendor and get the thing that I want. So being reactive helps with that, and responsiveness is also one of the requirements of having a reactive system, because you could be doing all of that, but if the payment service doesn't scale very well, well, it would be asynchronous, but it wouldn't be reactive. You're welcome. So, this is the basics of the demo, and just to finish my slides, you'll get access to this deck. Basically, we're trying to explain and show the code for each one of the pieces: what the reactive streams are and how I'm consuming them. So if I want to publish something to a data stream, for example, I use the publisher, oh no, actually this is the consumer. If I want to publish something to a stream, a data stream, I use the Emitter interface. So whenever I do an orders.send(), it's asynchronous, it returns immediately, and a message has gone through the channel.
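The demo itself does this with the MicroProfile Reactive Messaging Emitter. As a self-contained analogy using only the JDK's java.util.concurrent.Flow API (the class name and the order strings here are made up for illustration), the imperative-to-reactive bridge looks like this: the imperative side pushes values in and returns immediately, and the reactive side reacts to each value as it arrives on the stream.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class OrderStream {

    // Pushes two orders into a stream and collects what the subscriber saw.
    static List<String> run() throws InterruptedException {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<String> orders = new SubmissionPublisher<>()) {
            // Reactive side: reacts to each order as it arrives on the stream.
            orders.subscribe(new Flow.Subscriber<String>() {
                public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
                public void onNext(String order) { received.add("brewing " + order); }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });

            // Imperative side: submit() returns immediately, like an emitter's send().
            orders.submit("cappuccino");
            orders.submit("espresso");
        } // closing the publisher signals onComplete to the subscriber

        done.await(5, TimeUnit.SECONDS);
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The caller never blocks waiting for a coffee to be brewed; the subscriber is notified of each order, which is the same shape as the barista endpoint being notified by the Kafka queue.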
And as I said before, you don't care about the channel implementation, because you can change that anytime through configuration. For the processing, you have an intermediate stage which is processing the orders, which in this case is creating the coffee, but in the real world, in the e-commerce example, for instance, this could be the method that is processing your credit card payments. So your credit card payment method is receiving something from the data stream as its input, and the return of your method is going to generate something that is going to be put in another queue. It's very simple: you just add two annotations, I'm receiving something from the orders channel and I'm sending something to the queue channel. That's how you program using reactive streams in Quarkus. And the final one: in this case I'm using a web interface to be notified of the changes. So if you want to publish using server-sent events, which is the technology used here, and some people ask me, why not WebSockets? Well, it could be; Clement just used server-sent events. And you can use a Publisher, so the publisher will be publishing to your server-sent events channel, okay? And that's how you configure your channels, to map your channels to Kafka topics. So they don't need to have the same name. The orders channel could be writing to and reading from my topic in the Kafka bus. This is a Kafka example, but it could be, as I said before, AMQP; it could be other types of implementations. Right now we have support for Kafka and AMQP. Okay, and we were just explaining how it works. And just as a summary, Quarkus supports HTTP, messaging, and streaming; it has everything for you to create your reactive systems. These are basically the interfaces and annotations that you'll be using to code with reactive streams, which is the reactive part of your application. And what is the bridge between reactive and imperative?
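As a sketch of what that channel-to-topic mapping looks like in configuration, following the MicroProfile Reactive Messaging conventions with the SmallRye Kafka connector (the channel and topic names here are hypothetical, not the exact demo configuration):

```properties
# Outgoing channel "orders" writes to the Kafka topic "coffee-orders"
mp.messaging.outgoing.orders.connector=smallrye-kafka
mp.messaging.outgoing.orders.topic=coffee-orders
mp.messaging.outgoing.orders.value.serializer=org.apache.kafka.common.serialization.StringSerializer

# Incoming channel "queue" reads from the Kafka topic "coffee-ready"
mp.messaging.incoming.queue.connector=smallrye-kafka
mp.messaging.incoming.queue.topic=coffee-ready
mp.messaging.incoming.queue.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```

Because the broker binding lives entirely in configuration, swapping Kafka for AMQP is a matter of changing the connector properties, not the annotated code.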
Well, once it's on the stream, everything is reactive. But usually you have imperative code that you want to publish on the channel. And the bridges are the Emitter and Publisher interfaces, which are in the source code too. Okay, and that's what I had to share about reactive. For now, thank you very much. Does the consumer need to persist all of the data of its interest locally? Like, in the case of the billing service in the e-commerce example, does it need to persist the customer address locally, instead of calling the customer service to get the address at the time of billing? So the question is: does a service need to persist data that does not belong to it locally? Like the billing service persisting the customer address instead of calling the customer service to get the data? Okay, yes. The customer service is a publisher, the billing service is a consumer. Right. I can answer this question, but it will be much easier if you attend the last talk, because then I have the full example. Sure. Okay, so, yeah. Event-driven architecture with patterns like the outbox, CQRS, and event sourcing, and these kinds of events: that's the answer to this question, which is a very nice question. But since the full answer is there, and it's gonna take some minutes... My question is very general, and I want to know: is reactive programming necessary for a real-time system, except for the case of time-critical data? It's not required. And as I said in the beginning, I believe that 90% of the code will still be imperative programming. Reactive programming is not a requirement for reactive systems, but for some use cases it's much easier to achieve a reactive system using reactive programming. And I think the key use case is scalability.
If you need an endpoint to scale very well, then it is a good candidate for reactive programming. That's why I don't recommend, oh, let's make everything reactive. You have an endpoint that doesn't need to scale, that receives a message every 10 minutes? Why does it need to be reactive? Yeah, but assuming that your data is not time-critical at all, we can still use the Netflix OSS circuit breaker, all those things, right? In a synchronous manner. You can use what? I mean, generally, the circuit breaker works in a synchronous manner, right? From Netflix OSS. And, I mean, assuming your backend data is not time-critical, just like in a banking application where no update is being performed, we can still go ahead with this kind of mechanism, breaking the circuit and going through the fallbacks, or caching the previous response in memory. Yes, you can use synchronous programming and circuit breaking, but in the last talk I'll try to explain why circuit breaking is maybe not the best answer, and a better way to solve that. Okay, yes, you can use circuit breaking, but there are some requirements, and you will be using messaging to solve the problem efficiently too. And once you're doing that, you start thinking, well, if I'm already doing that, why don't I change to asynchronous and eliminate the need for circuit breaking? Because circuit breaking, if you're using synchronous HTTP, is a requirement. If you change the architecture to an event-driven architecture on top of message-oriented middleware and you use asynchronous requests, you don't need circuit breaking anymore. So you're solving the same problem in a different way. You eliminate the requirement through a different architecture. And I'm not saying that the other way is invalid. I'm just saying that this is one of the approaches. I consider it to be one of the best approaches. There's a nice talk from Neal Ford on YouTube, one of his recent ones.
And I like this quote, which I usually use in the last talk: you can't consider yourself a software architect if your answers don't start with "it depends". So if someone asks, how do I solve this problem, and the answer just starts with "you need to solve the problem this way", well, maybe it depends: it depends on the restrictions, on the context, and everything else. In software architecture, one of the things I learned is that there's no such thing as the best solution. It always depends on the context in which you need to build the solution. Which makes my life much easier, because as a consultant before, I used to say "it depends" a lot, because it usually saves your skin too. But, well, it's a good solution for the context. Maybe it's not the best. I don't even know if there is a best one, but it solves the problem. Maybe it's okay. Yeah, because 70 to 80% of use cases are handled by a circuit breaker. And some time back, Spring stopped this project, Hystrix. They are moving again toward reactive programming. Yeah, and Hystrix, for example, is deprecated. The world is changing to Resilience4j. Yeah, yeah. Which they claim to be a better implementation of Hystrix. Yeah, Hystrix, they are not giving any patches now. So they have stopped giving any enhancements or anything. And, you know, OpenShift, that is a wrapper on top of Kubernetes. So Hystrix and all, I mean, that has been covered inside this, you know, Kiali and this OpenShift. Oh, you mean using Istio? Hystrix will be off very soon. Oh yeah, yeah. Netflix is giving up on Hystrix, which means that they're not actually maintaining it anymore, but it's an open source project. So if anybody wants to patch it, for example, for security vulnerabilities, it's okay. Officially, they are saying that, you know, it's not getting patched by anyone now. So now, I mean.
Oh yeah, Netflix is not doing that, but... So my question to you now is, and I'm sorry if I've not followed it properly in the complete session: the way we use REST APIs, see, Spring REST gives you that facility. You can use REST with Jersey, you can use a REST API with JavaScript, et cetera. I had used that in OpenIDM for identity management. So like that, I just wanted to know: React, the famous stuff is ReactJS. So, REST is an architecture. No, no, yeah, I'm asking you: this Quarkus, does it provide you some wrapper on top of ReactJS to work with React? To work with ReactJS? Yeah, is it providing any wrapper at all? No. No? No. But I don't know if it was related to something else that you wanted to ask. With circuit breaking, you can solve this problem, and different companies solve the problem in different ways. Netflix implemented the circuit breaker pattern with a library inside your code, at the application level. Spotify, for example, solved the problem using Linkerd, which is a node proxy. So it solves the circuit breaking problem at the network level. And on the other hand, these days we have Istio; Istio uses Envoy as the proxy, which solves the circuit breaking problem at the network level too. So these are three different approaches to the same problem: I need my services to keep responding, even though I have some unavailable or slow endpoints in my system. But yes, we don't provide a wrapper for ReactJS. And, though nobody asked me: Quarkus supports Java and Kotlin.