Welcome to my session about distributed transactions. My name is Martin Stefanko. I work as a senior software engineer at Red Hat, mostly on middleware technologies like WildFly, EAP, Quarkus, and SmallRye, if you have heard about these projects. Since last year, I'm also a MicroProfile committer. I particularly like working on the specifications, and I'm also starting with Jakarta EE right now. I'm a big microservices enthusiast, and if you are curious about the stuff I work on, you can find me on Twitter under this handle. So what we are going to talk about today are transactions. Before we start, I would like to play a little guessing game with you. Can you guess how many Bitcoin transactions were performed yesterday? It's a funny example: there are always around 300,000 Bitcoin transactions per day, and this number was taken yesterday from the site. However, you came here today to hear about something different, and that something different is called Saga. We will get to what it is, but I would rather show you the implementation that we are working on in MicroProfile, which is called long-running actions, or LRA in short. However, if I start just typing, it's really hard to comprehend, so I will first do about 15 minutes of theory, and then we will jump to the IDE. You are probably familiar with typical ACID transactions, so I will try to compare what you know with what the Saga is. During my talk, I will use this simple example: we have a transaction in which we are going somewhere on a business trip, and we need to book a flight, book a hotel, and book a car. We want all three of these, otherwise we are not going. So what does ACID stand for? You probably already know this. Atomicity, the all-or-nothing property: I want all the things in the transaction to happen, or none of them to happen. 
Consistency: if we start the transaction in a consistent system, we need to end in a consistent system, so we cannot end in a state where we have the flight but we didn't book a car. Isolation: if we have multiple transactions performed in parallel, they cannot influence each other. Basically, if we have transactions A and B, then to A it must look like B hasn't started yet or has already finished. And durability just says that when the transaction finishes, successfully or not so successfully, the outcome needs to be somehow persistent. The only way we achieve ACID transactions today is by the use of consensus protocols, and the best known one, which you are familiar with from university, is the two-phase commit protocol (2PC). Actually, this is still the most used protocol for transaction processing done today. There are others, but they are slightly more complex than 2PC, and people tend to avoid them, even though some of them solve some of the problems of 2PC. For simplicity, 2PC is more than suitable, so I will stick with it for today's talk. We probably know how 2PC works, but just to really quickly go through the two phases: in the first phase, we have this two-phase commit coordinator, which is a standalone service, and since I'm already talking microservices, we have an airline microservice, a hotel microservice, and a car rental microservice. The transaction is passed into this system, and the coordinator asks for the individual resources in the particular microservices. These resources are located in the individual services, and some form of locks are taken. So you are not actually buying the ticket for the flight yet; you are just locking it so that someone else cannot take it. 
If a service is able to lock all its resources, it just sends an OK message back to the coordinator. In the second phase, if everyone responded successfully, so the transaction can actually be performed, the coordinator starts phase two of the two-phase commit protocol and tells every service: yes, go ahead and actually perform the operation that was requested. The locks are released and the tickets are actually bought; you are paying the money at this particular time. If everything is successful, again only a confirmation is sent, and we can tell the caller that the transaction was successful: we are holding all three resources and we are good to go. If something doesn't go so well, for instance we cannot reserve the car because there are no more cars left or something, we need to abort all the resources because the whole transaction cannot be completed. So now the coordinator sends an abort message, the locks are released, and the resources are just forgotten; we are not doing anything. An OK message is sent back and we finish with a failure, so the caller can now retry the transaction at a later time or take some other action. 
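The two phases just described can be sketched in a few lines of plain Java. This is only an illustration of the coordinator's decision logic, not a real network protocol; the participant names and the `BookingService` class are made up for this example.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the two-phase commit decision logic described above.
public class TwoPhaseCommitSketch {

    interface Participant {
        boolean prepare();   // phase 1: acquire a lock, nothing permanent happens yet
        void commit();       // phase 2: make the change durable, release the lock
        void abort();        // phase 2: release the lock, forget the reservation
    }

    static class BookingService implements Participant {
        final String name;
        final boolean hasCapacity;
        String state = "idle";

        BookingService(String name, boolean hasCapacity) {
            this.name = name;
            this.hasCapacity = hasCapacity;
        }

        public boolean prepare() { state = hasCapacity ? "locked" : "idle"; return hasCapacity; }
        public void commit()     { state = "committed"; }
        public void abort()      { state = "aborted"; }
    }

    // The coordinator: commit only if every participant voted yes in phase 1.
    static boolean run(List<? extends Participant> participants) {
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (p.prepare()) {
                prepared.add(p);
            } else {
                // one "no" vote aborts everyone that already holds a lock
                prepared.forEach(Participant::abort);
                return false;
            }
        }
        prepared.forEach(Participant::commit);
        return true;
    }
}
```

The key property to notice is that between `prepare()` and `commit()` every participant sits on a lock, which is exactly the window the talk is about.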
However, the biggest problem with 2PC, and with consensus protocols in general, especially when we move onto the network where these are really standalone services communicating over the network, is that things on the network die a lot. If the coordinator fails, or it cannot be contacted after the first phase, we are now in a state where all three of these services are holding locks on these resources. There may be other services, other customers, requesting these resources, and they may keep coming and coming, but you cannot allow them to actually get to the resource. So if this is the last ticket for this flight, there may be four people waiting for it; nobody has ever paid for this ticket yet, but you cannot give it to anybody, so you are losing money. Or even worse, you will leave all the other requests waiting, because you cannot say if you will ever release this lock, so you will just keep them waiting. I had a talk yesterday about reactive systems, and for many people the most important property of today's modern enterprise applications is responsiveness: if the user clicks something and has to wait for three or five seconds, they get nervous. You really don't want to get into these situations, and this is where Saga comes in. I would like to say that this is a new idea, but it was actually first published in 1987 by Hector Garcia-Molina and Kenneth Salem, who described this pattern for long-running database transactions, because this taking of locks, even in databases, where you needed to lock a whole table for periods of days, was not acceptable, even in 1987. So what actually is a Saga? It's still a transaction, but it is not ACID anymore. 
A Saga is basically again a set of operations, but it allows the individual operations to interleave with each other, which is not possible with two-phase commit, because there we are either all committing at the same time or all aborting at the same time. So now, in my example, I will actually go and call the airline service and pay the money for the ticket. At this point I have a ticket for a flight, but I still haven't called the car rental and hotel services yet. I am in an inconsistent system; isolation is broken right away. The way the Saga deals with this kind of situation is by compensation actions, or compensations. A compensation is simply a reverse action, or a semantic undo, of the originally performed operation. That may be whatever you like. If your original operation is to add a row to a database, the compensation is to just delete that row. However, your operation may be something more complex, something that is not reversible by an opposite action, like sending an email: I cannot undo sending an email. This is where the semantic undo comes in. It means that you are defining what the undo action should be. So if I originally sent an email in my transaction, I can send a follow-up email saying that the previous email is cancelled, or something similar. And this is totally up to you: you are defining your services, you are defining your compensations, so you know what you are doing in your original operation and you know how to cancel it. If you are interested in how this pattern can really be put into practice, there is a really interesting talk by Caitie McCaffrey, where she describes how they used the Saga pattern in the Halo game, in the multiplayer statistics service. It was really interesting to see that this can actually be used, and that people are not always in need of full ACID. We will get to it. 
So we again take our original transaction and we will try to put it into a Saga. Now we only have these three services; we lose the coordinator for now. We just send the Saga definition, which can be a JSON, to the first service. The first service will allocate the resource, find the ticket, and actually go ahead and pay the money. So now we already have the flight ticket; we can mark it in the JSON, or in the Saga that is passed along. And now we are in an inconsistent state, basically already — this is already breaking isolation. Then you send the Saga to the next service for the same kind of operation: again you pay the money, this time for the hotel room. And again you pass it on to the car rental service, you pay the money for the car, and you are done. Basically, if everything was successful, you went through the whole chain of operations and all operations succeeded, you are finished; you are again in a consistent system, everything is good, you are happy, your customers are happy. The interesting part is when something fails. So say we are in the same phase: we have already paid the money for the flight and the hotel, but we cannot book the car because, again, reasons. Now we have a problem, because we already paid some money and we cannot actually complete the business trip. How does the Saga deal with it? It starts calling the compensation actions, but in reverse order, because your individual operations can depend on each other. So we send a cancel-Saga message back to the hotel service, and the semantic undo for a hotel room is to cancel the booking. So we just cancel the booking; you are refunded, probably the full price. And again the same thing for the flight: we cancel everything, we have our money back, and we can say that the Saga was unsuccessful. It can be retried later or something similar. However, we are again back in a consistent system. 
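The chain of operations with reverse-order compensation can be sketched like this — a plain-Java illustration of the flow described above, where the steps and their names are invented for the example (a real Saga would make remote calls between services):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Minimal sketch of the Saga flow: run each step in order, and on the
// first failure call the compensations of the completed steps in reverse.
public class SagaSketch {

    static class Step {
        final String name;
        final Supplier<Boolean> action;   // e.g. pay for the flight
        final Runnable compensation;      // e.g. cancel the booking, refund

        Step(String name, Supplier<Boolean> action, Runnable compensation) {
            this.name = name;
            this.action = action;
            this.compensation = compensation;
        }
    }

    // Records what happened, so the ordering is visible.
    static final List<String> trace = new ArrayList<>();

    static boolean run(List<Step> steps) {
        List<Step> done = new ArrayList<>();
        for (Step s : steps) {
            if (s.action.get()) {
                done.add(s);
            } else {
                // compensate in reverse order, newest completed step first
                for (int i = done.size() - 1; i >= 0; i--) {
                    done.get(i).compensation.run();
                }
                return false;
            }
        }
        return true;
    }
}
```

Note that unlike 2PC there is no lock held between the steps — each step commits its money immediately, and consistency is restored only afterwards by the compensations.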
So there was a state where we had paid some money and we were holding the resources for some brief period of time, but eventually we got back to a consistent system and got our money back. You can see that this is quite different from traditional ACID transactions, but it turns out that in many situations, holding resources for just a while like this is acceptable for many use cases. Because this can be quite hard to follow if you are seeing it for the first time, I just want to repeat that failure scenario, writing down the individual operations. So again: we are booking the flight, sending the Saga on, booking the hotel, sending the Saga on, booking the car — and this fails. So now we send the Saga cancel message, and we call the compensation action in the same service, defined by the same service. They know how to compensate; we just provide them with the ID of the room, let's say. And again the same thing for the flight, and then we are finished. So you see that we hold resources for a certain time, but eventually the compensation is called and we get back to a consistent system. So, ACID versus BASE. As I was saying previously, we directly lose isolation, because other transactions running in parallel can see that you are already holding one ticket for a flight even though you may not manage to book anything else, and we lose consistency, because we pass through inconsistent states. What the Saga actually tries to utilize is a different transactional model called BASE, and that starts with basically available. Are you familiar with the CAP theorem? Okay, the CAP theorem is basically an idea from an older paper that a distributed system, in which you have components connected over a network, can have at most two of three things: consistency, availability, and partition tolerance. 
Since you cannot make the network reliable, at least today, you need to choose between consistency and availability. ACID chooses consistency; BASE chooses availability. Soft state means that, because part of the individual operations may be performed and part not, you cannot really say which state your Saga, or your system, is in, because eventually it can get into a different state. It can even be in a state where messages are already sent but not yet received. And the most important one is eventual consistency. The Saga guarantees you that at some point in the future the state of the system will become consistent, but we don't know when. Either all of the operations are performed, or the compensations, the semantic undos, are called for all previously performed operations. So eventually, if you don't start any new Sagas, your system will become consistent. We'll just wait for the photos. Should I smile? There are two different approaches that you can take when you are developing Sagas, and those are choreography and orchestration. Choreography is basically what I have shown you so far: you are passing the Saga, through some definition that can be whatever, through the different services. And then there is orchestration, where you again have some coordinator. Now it's a Saga coordinator, and this coordinator is responsible for actually calling the individual operations and compensations on your services. So you can either pass the Saga directly to the coordinator, or pass it to the individual services, but then the services need to enlist with the coordinator: they tell the coordinator, please call this if there is eventually a compensation. So again, every service needs to enlist with the coordinator, and the coordinator is responsible for making the decision to actually compensate or do something else. 
And with that we are getting to the main point of this talk, and that's MicroProfile LRA, long-running actions, which is basically the translation of this Saga pattern into the Java world, into a MicroProfile specification. We are currently in RC1, but RC2 is coming this week hopefully, and GA next month or so. So it's still not finished, but I will show it to you in just a minute, and hopefully it's stable enough already. And with that, if there are any questions, please shout them out as I'm going, if something is not clear. I will now jump to a terminal and I will spend probably the rest of the talk there. One question about that: most systems have some auditing, some logging attached to the services, and I presume that using this Saga pattern I should also somehow reverse the auditing on those systems. Not necessarily, because at a point in time you actually were holding the resource, so you can log it somewhere and then log that you cancelled it. So I don't really see a reason — but if you have a use case for this, sure. And as I will show you in a minute, these are going to be basically REST invocations, so what you do in your REST invocation is up to you. Yeah, sure. Even bigger? Okay, because I will have quite a lot of terminals open in a while. What I'm going to use right now is the implementation that is available in the Narayana transaction manager, which is used in WildFly. There is also an implementation currently in Payara, and we expect more in the future. So I will just run this coordinator — it's already a Quarkus service — and move the window away. This coordinator runs on port 8080, and we are able to query all active LRAs that this coordinator knows of with a simple REST call. Currently there is no LRA started, so we have an empty array, but I will put this into a watch so you can see, while we are developing the different services, that we are actually starting something. 
And with that I will finally, sorry, create the actual microservice that we are going to use. I am using Quarkus, but with my own script which builds my local Quarkus, because we have a pull request open for an LRA extension since just last week. You can use LRA directly if you add the dependencies, but I want to use Quarkus because it's just faster and easier. At the end there is a link to a full tutorial — sorry, I forgot to remove the slide from yesterday — a tutorial which uses Thorntail as the runtime, where you use the normal dependencies and not an extension. But for demonstration purposes I will show you this way, and repeat the link at the end. So just let me create the LRA service. Again, if you are not familiar with Quarkus, it basically works on an extension pattern, so we should already have here somewhere — and I can make it bigger — a Narayana LRA extension, if I am able to find it. If someone sees it, it would be nice, because I don't see it. Yeah, here it is: Narayana LRA. Currently, if you just download Quarkus or run the latest version, it won't be there, but hopefully the pull request will be merged soon. To actually add an extension, here is a command which I need to copy, and the search mechanism is really nice, so I can just type LRA here and it should add my LRA extension, and basically that's it. Now I can compile this project and run it in the Quarkus live reload mode — let me make it smaller again — and, what? Now it's not Java; I probably have something running on that port, because I always have like 20 different workspaces open and — yeah, now I know what the problem is. I started the LRA coordinator on port 8080, and I need to start this service on a different port, of course. So I set the Quarkus HTTP port to 8081 and just repeat this watch. Yeah, live reload demo, and you forget the basic stuff — that you already started something on 8080 like two minutes ago. 
Okay, just because I know that this is usually hard to follow, I prepared labels this time. So we will have the LRA service somewhere — I hope that you can read these. We can see here that this is the LRA coordinator, and I will open here a client. So I will just copy-paste this again. Oh, sorry, where did I put it? A different one. And one last thing that I forgot to do is to actually open this in the IDE. And we will move this into seven. I'm starting the projects from scratch, so it can sometimes make some issues. And I will put this into presentation mode, and this should be hopefully big enough for you to see. What this did is basically only call the Maven Quarkus create goal in the background, so it created for us a single JAX-RS endpoint. I can just demonstrate, sorry, that it's running, with HTTP on 8081 — and yes, it is running, so we can actually start working on an LRA. What that Maven Quarkus add-extension command did is add a Maven dependency, which you can do manually, but if you don't know what the name of the extension is, it's usually better to start with the list-extensions command that I wanted to show you. So with that, let me just create an LRAResource, and this will be at /lra, and we can start creating our operation that is actually going to be performed. I will just call it perform — a method returning a JAX-RS core Response, called perform. And here we will log nicely that we are performing the operation, and then I will just return Response.ok().build(). I don't want to do anything critical — this would be your business action, the work that you want to execute inside the transaction. So I will just create some really simple logging, so we can see in the terminal that we are actually doing something: I will just print the parameter with a System.out at the end, and the operation that we are going to perform will be only a Thread.sleep. 
So we can see on the coordinator that it actually started. So this is the resource that you probably already have, performing your operations — orders, booking the flight, booking the hotel room. To actually put it into an LRA, all you need to do is to add a single annotation, @LRA. I will just save it, and because I am running Quarkus in dev mode, I should already be able to call this at /lra/perform. When I call it, you were probably able to see that the transaction is started on the coordinator, and after two seconds it's finished. There are several transaction types with LRA, again in a similar way to JTA, if you are familiar with it. So I can say here type — I need LRA.Type — MANDATORY, NESTED, NEVER, NOT_SUPPORTED, SUPPORTS, sorry, et cetera. You are probably familiar with these if you are familiar with JTA. The default one is REQUIRED, which will start a new transaction when the method starts and finish it when the method ends, if no transaction was received. I will get to what that means at a later point, but for now we are good to go with the default. So now we already have our transaction, which is started when I enter this perform method and completed when I finish it, but nothing is actually ever enlisted inside this transaction, because we don't have the compensating action. So let's add it. To create a compensation action — as you can see, this is mostly built on top of JAX-RS, but that's not required for everything, and I will get to that at the end; for now I will stick to JAX-RS resources. So I will have my compensate method. The compensation is, surprisingly, annotated with @Compensate, and this will again return a Response, and here I will just log nicely that we are compensating, and I can return Response.ok().build(). So this is my compensation. Again, just a JAX-RS endpoint which I annotated with another annotation, @Compensate. That's it. 
So if I now rerun this, nothing will happen, because we are actually closing the Saga, or the LRA, successfully. The compensation will only be invoked if something goes wrong. The way we can actually make an LRA fail is by returning a different HTTP status code from the @LRA-annotated method, and which status codes make the LRA cancel is defined, sorry, by these two attributes, cancelOn and cancelOnFamily. These are basically just HTTP status codes, as in Response.Status, where you can specify that I want to cancel on 412, 404, et cetera. By default we are cancelling on 4XX and 5XX, so I don't need to type it here, because I know that the default is cancelOnFamily. If I just change this 200 to a 500, the LRA will now be, instead of closed, cancelled, and we will get our compensation called. Easy as that. How does it know that it should call this exact method? This is exactly what the specification is for — that annotation, @Compensate. When you call a method annotated with @LRA, our implementation actually scans the class for that @Compensate annotation, and we take this endpoint and register it with the coordinator, and that is what is called when the compensation happens. Can you have multiple @LRA methods and multiple @Compensate methods in the same class? You can, but then an arbitrary one is chosen. There is no point: you can have multiple @LRA methods, and I will use that later because it makes sense, but having multiple compensations inside one participant of the transaction doesn't make sense, so we will just pick one. Can't I use some name parameters, like a name on @LRA and a name on @Compensate, to actually join them together — this is the compensate for this LRA, that is the compensate for that LRA? You can't. Those are two different concerns: @LRA is starting a new Saga, while @Compensate is enlisting a participant, and a participant is something which needs to be granular to one Java class, one JAX-RS resource. 
So you can start multiple LRAs inside one JAX-RS resource, but one JAX-RS resource can be only one participant. Otherwise it would be harder for us to really find out what we should call. I will show you in a while, but what you are asking for is to just create a similar class with another @Compensate method. So I can create multiple @LRA resources, and this would work — just not in one class. But what I wanted to continue with is to actually turn this back into an OK and show you that there is also another endpoint. Basically, this @Compensate is required to enlist the resource as a participant, but there is also a similar endpoint which you can define, called complete, and that is denoted with the @Complete annotation. This is the callback which will be called when the LRA is closed successfully and you are enlisted with a @Compensate action. What can this be used for? We can do a similar thing — sorry, I will finish the typing and then I will start talking. So, Response.ok().build(). What this can be used for: imagine that this LRA is actually performing some order, or the booking of the flight. You probably need to remember that for this particular LRA I booked flight 66, so that when the compensation happens, you can match that this LRA was compensated and you will cancel flight 66. So you need to remember this flight ID somewhere. And when the LRA has closed successfully, there is no point in remembering it any longer. So you have the option to define this optional @Complete callback to actually perform any cleanup that you would like to do — here you can forget about that flight ID. Sure. You are using the PUT method on the compensate endpoint; per the specification, PUT should provide the full resource data. Does the coordinator store it, or...? 
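Putting the pieces so far together, the resource being typed in the demo looks roughly like the sketch below. This is a hedged reconstruction against the MicroProfile LRA API as it stood around this talk (the `javax` namespace; newer releases use `jakarta`), so annotation details may differ in the final release; the paths and log messages are illustrative.

```java
import java.net.URI;
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

@ApplicationScoped
@Path("/lra")
public class LRAResource {

    // REQUIRED (the default): start a new LRA when the method is entered,
    // close it when the method returns, unless an LRA context was received.
    // Returning 4xx/5xx (the default cancelOnFamily) cancels the LRA instead.
    @LRA(LRA.Type.REQUIRED)
    @GET
    @Path("/perform")
    public Response perform(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        System.out.println("performing in " + lraId);   // your business action goes here
        return Response.ok().build();                   // 2xx -> LRA closes, @Complete runs
    }

    // Called by the coordinator when the LRA is cancelled.
    @PUT
    @Path("/compensate")
    @Compensate
    public Response compensate(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        System.out.println("compensating " + lraId);    // semantic undo, e.g. cancel flight 66
        return Response.ok().build();
    }

    // Optional: called when the LRA closes successfully — a good place to
    // clean up remembered state such as the stored flight ID.
    @PUT
    @Path("/complete")
    @Complete
    public Response complete(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        System.out.println("completing " + lraId);
        return Response.ok().build();
    }
}
```

The `LRA_HTTP_CONTEXT_HEADER` parameter is what lets each callback correlate the invocation with the particular LRA, which is discussed in more detail below.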
There is actually a feature request for adding this option of some data that can be passed to the coordinator, but we decided not to do it for 1.0, because there are a lot of issues — it's a transactional framework that we need to deal with. But yeah, this is on the roadmap; we want to cover it. For now it's up to you to actually save the data inside your services. So I can do here some normal ACID transaction, for instance, and save something to a database, and here I can take it out. What I wanted to show you is that this complete will now be called instead of the compensate. So if I just repeat this call, after two seconds we should have our complete callback called. Easy as that. Good. I meant, because the PUT method should have a body — what does the request look like here? From the coordinator we are actually passing some compensation data, but it's only something that is internal to the Saga itself. I can show it to you, but really, I personally don't agree with using JAX-RS for this at all, and we already have support in the specification for going without it; the implementations just haven't caught up yet. So I will actually show you the specification itself. This is the whole specification, the MicroProfile LRA API. We have here only an annotation package, with this ws.rs subpackage. And you can see here that @LRA and @Leave are the only two annotations which are right now required to be on JAX-RS resources. All of the other ones that I am showing to you — @Compensate, @Complete — don't need to be: we will expose a JAX-RS endpoint for you and basically call any CDI bean method for you. But it doesn't work right now with Quarkus; we are getting to it because we just need to finish our implementation in Narayana. But yeah, this is a really good point: if you want to really follow REST principles, this doesn't do it. So okay, this would be the basic usage of LRA. 
I told you that you need to somehow associate the invocations. Right now we are starting only a single LRA and enlisting a single resource, so we know it is the same LRA and the same resource. But you can call this method several times with different IDs, so you can have this single resource enlisted in different LRAs at the same time. So you need some way to know that this particular LRA was compensated, and the way we do this in the specification is by header parameters defined in the LRA API itself. The most important one is this LRA_HTTP_CONTEXT_HEADER, or Long-Running-Action, which is always a URI — if I can import it, yeah. This is basically our LRA ID, and I can just add it here. This will actually be passed to every invocation of every method that you ever use an LRA annotation on, so I can put it into my complete and compensate as well. If I now repeat the same call, we will see that we get some URI — and I didn't put it into the log, sorry. But you see that the URI is actually a URL inside of Narayana; we are actually also using it for recovery of the transaction, but I don't think I will have space to show that today. And if I now repeat the transaction, we can see that the newly started transaction ended with 23, and we were compensating the same transaction. So in this sense, you can match an order ID with the particular LRA that is being compensated. There is also one more — I'm sorry. What if the compensate call itself fails? If the invocation fails — well, basically, we will repeat the call until it succeeds. This is the transactional guarantee: there is that coordinator, which will start recovery if it cannot contact all the compensating actions, and after some timeout, which is by default two minutes, I think, it will call it again. And if it fails, again and again, until we can reach a decision. 
There is also a possibility — because you understand that we are now going to invoke that compensation multiple times, and your compensation action is probably not going to be idempotent, so you don't want to invoke it several times. There is an option in the LRA specification to define a status method, but I will not have time to show it now. If you have in the same JAX-RS resource also this @Status method, then if the compensate invocation fails, we will instead call this status method. So you can check your state in a different method, which can be idempotent, and if you respond that, yes, I am now already compensated, we will finish the transaction, but the compensate will have been called only once. So the coordinator doesn't store the state? Compensate will only pass you this LRA ID, which I showed you, and also a recovery ID, which I'm going to show you in a minute. But the state — we don't know what the state is; this is up to you, and that relates to your earlier question. For 1.1 we would like to have an option for you to store some state in the framework itself, something small, some string or similar, and this would be passed with the PUT invocation. But for 1.0 we decided to rather skip it, because it looks simple, but the implementation is not. So okay, I was talking about that participant ID, or recovery ID — where did I lose the cursor? That's again another header param which we take from the LRA API, and that's this LRA_HTTP_RECOVERY_HEADER, or Long-Running-Action-Recovery. This is basically an enlistment ID: it allows me to distinguish this particular participant within multiple LRAs. This will again be passed to every invocation of compensate and complete — it's again just a different URI, so I don't want to print it. But this will allow me to start multiple LRAs and enlist this resource in multiple LRAs. 
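The reasoning behind the status method — the coordinator retries the compensate callback until it gets an answer, so the participant must not redo the undo on a retry — can be illustrated with a tiny plain-Java sketch. The class and method names here are invented; it only models the guard a real participant would implement (or delegate to an @Status method).

```java
// Sketch: a participant whose compensation is safe to invoke repeatedly.
// The coordinator may retry the compensate call after a timeout, so the
// participant remembers its state and performs the real undo only once.
public class IdempotentParticipant {

    private String status = "Active";   // analogous to a ParticipantStatus value
    private int refundsIssued = 0;      // the non-idempotent side effect

    // What a retried compensate callback must tolerate: the first
    // invocation does the work, later ones just report the known outcome.
    public String compensate() {
        if (!"Compensated".equals(status)) {
            refundsIssued++;            // the real undo happens exactly once
            status = "Compensated";
        }
        return status;                  // analogous to an @Status response
    }

    public int refundsIssued() { return refundsIssued; }
}
```

With this guard, three retried deliveries of the compensate call still issue only one refund, which is exactly what the separate status endpoint buys you in the specification.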
So: the LRA_HTTP_CONTEXT_HEADER, Long-Running-Action, is the LRA ID, and this Long-Running-Action-Recovery is the recovery ID, or participant ID, within a particular LRA. So one resource can even be enlisted multiple times within the same LRA, but you can have only one compensation. So okay, I will just not do this right now, just to really save some space in the terminals. With that, we have a single resource that is starting the LRA, joining it, and finishing it — so it's not really distributed. What I'm going to do right now is to actually start a new service. Sorry, I will make it bigger, and start it basically just by copying the same service into LRA Service 2, and I will open that in the IDE. Again, I will put it into the Maven Quarkus dev mode, but this time I will take care to use a free port. Hopefully this starts — yes, it does. We have here our LRA Service 2, which is right now the same service as the first one. And here I can start explaining what this @LRA annotation can be configured with. I already showed you the value parameter, which is of LRA.Type, and here we have the typical JTA-like types, if you're familiar with them. I'm not going to go through all of them — they are really nicely documented in the LRA specification. What I'm going to use now is LRA.Type.MANDATORY. This just says that an LRA ID must be received when this method is invoked, and if it is not, we will return 412 Precondition Failed. And what it means to receive an LRA ID is basically to receive this HTTP context header — this is the way you can propagate the ID yourself. So now I will switch back to LRA Service 1, and now I need to perform an HTTP request to LRA Service 2. For that — because Quarkus doesn't really like the plain REST client — I actually need to add a new extension, and that's rest-client, I think. I will just type it here: REST client, into my LRA Service 1. 
You see that Quarkus is even clever enough that I can change the pom.xml and it will restart — I can add a dependency on the fly; I haven't stopped this service since I started typing, sorry. So now I should have my REST client here. We can verify that — yes, it is. I will quickly show you a different MicroProfile specification and create an LRA Service 2 API REST client. To create a REST client, all you need to do is add @RegisterRestClient and then write a normal JAX-RS resource — it just needs to be an interface. Here I will declare the same thing I have in LRA Service 2: a GET at /lra/perform. It can even be a void call. And that should be it. I can do one more step and set the base URI directly, since I know I'm only going to invoke my localhost: this will be localhost:8082. And with that I have created a REST client. For this particular example it's not that important; Quarkus just doesn't really want to play with the client builder right now. To use it in my LRA resource, I will make the resource application scoped — I should have done this in the beginning, because by default it is request scoped, which is unnecessary for my use case — and I will inject the REST client, my LRA Service 2 API. And I should now be able to make the call here, just like this. This will make an HTTP call to 8082, /lra/perform, and hopefully, if I typed everything correctly, when I now repeat the call to LRA Service 1, it should call LRA Service 2, and both of them should be — yes — enlisted within the same LRA. You can see that the completion was called on both services. So now we have a truly distributed transaction running on my localhost, propagating the LRA ID and calling the completion of both services.
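The REST client interface sketched from the description above, assuming the MicroProfile REST Client API; the interface name and base URI follow the demo setup and are otherwise illustrative:

```java
// Type-safe client for LRA Service 2. The LRA integration adds the
// Long-Running-Action propagation headers to outgoing calls.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient(baseUri = "http://localhost:8082")
@Path("/lra")
public interface LraService2Api {

    @GET
    @Path("/perform")
    void perform();
}
```

In the calling resource the client would then be injected with `@Inject @RestClient LraService2Api lraService2;` and invoked as `lraService2.perform();`.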
Again, if I go back to LRA Service 2 and now fail this transaction — I set the status to 500, save it, repeat the call — after two seconds the compensation is called. Please ignore this error; it's from the REST client, which is telling me that the call responded with 500, but I caused that on purpose, so I know it will respond with 500. Let's put this back. What I want to show you — you probably noticed there was a log message in LRA Service 1 when we were completing — is basically saying that the perform method in LRA Service 1 ended and we tried to close the LRA, but it was not found on the coordinator anymore. That's because, by default, the LRA annotation ends the LRA when the method ends. So from LRA Service 1 we were invoking the MANDATORY endpoint in LRA Service 2, and when that method in LRA Service 2 ends, the transaction is closed. So when we return from the call in LRA Service 1 and the transaction is tried to be closed again, there is no transaction anymore. If I want to get rid of that message, I type end = false here, which basically says: even when this LRA method finishes, don't close the LRA. If I just rerun this again, I should get rid of that last message, and we are now closing successfully in LRA Service 1 again. There are two more things that I want to show you. Are there any questions? Someone objects: "Yeah, I have one, because that feels extremely hacky. Why? Because the coordinator should know that this is an embedded transaction — the second transaction, in this particular situation, is nested inside the other transaction." Not necessarily like that.
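The end = false change on the downstream endpoint, as a sketch under the same assumptions (MicroProfile LRA API; names illustrative):

```java
// end = false keeps the LRA open when this method returns, so only the
// outermost caller (LRA Service 1 in the demo) actually closes it.
import java.net.URI;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

@Path("/lra")
public class LraService2Resource {

    @LRA(value = LRA.Type.MANDATORY, end = false)
    @GET
    @Path("/perform")
    public Response perform(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok().build();
    }
}
```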
Not necessarily, because — and I will get to this in my last example — you can start the LRA in some utility service, let's say, then propagate it wherever you want. Take my example with the business trip: we start the transaction in the airplane service, propagate it through the hotel, and the car needs to compensate. Why would you return through this whole chain back to the airplane service, and then have the coordinator call the services, if you already know in the car service that you need to compensate? The attendee continues: "I think the client should do it. It would be the same as with regular transactions, where you have the—" Yes, this was a design decision. With a regular transaction, there is a client calling this method, and it's just Java: we call the perform method from Java, the invocation is intercepted, and a transaction is started. Any transaction inside there will just continue the already running transaction — "Yeah, I got it." — and will end in the correct way. That's the basics of transaction management. And now we are getting into an issue that I have been running into since I started with this specification: people still think about this as ACID transactions, as JTA. It is not. We are innovating something new. It just looks like a transaction, but it's not JTA. This was a design decision we made because we want to save network calls — that is really the main reasoning, and we are sticking by it. Okay, I have five minutes left and I would like to show you two more things, so I will go really fast and skip my last slides.
There is also an option to add another endpoint here, which we will call "after". There is another annotation called @AfterLRA, which is basically a callback that is invoked every time any transaction that passed through this particular resource ends — the resource doesn't even need to be a participant to get invoked. So you can write @AfterLRA listeners: you can watch for LRAs, log them for instance, or do something else, and you don't need to join them if you don't want to — no need to create empty compensations or anything similar. This will really just return a Response again; let's call it "after". What I want to show you here is that, since we expect the use case for this method will often be to start a new LRA — when some LRA is closed, I want to start a new one — we actually added a new header here: Long-Running-Action-Ended, carrying the ID of the LRA which ended. And here you finally get a payload, which is an LRA status. This is a valid PUT invocation. We can just log "after" plus the ended LRA ID plus the status, and return Response.ok().build(). Nothing fancy — it really just shows that you also have the option to define something that looks like a participant but isn't, if you have a use case for it. You can see that after the LRA was finished, we got our after invocation with the state of the LRA. So we can do this kind of auditing and decide dynamically whether to start a new LRA or not. And if you start a new LRA inside this method, you can then use the LRA HTTP context header to get the newly started LRA inside that method. With that, the last thing I want to show you is to not close this LRA at all. So I will put end = false in LRA Service 1 as well — this is the use case that I wanted to show you — and I will create a new LRA endpoint, which will be just /end, with another @LRA method.
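The after-listener described above, sketched under the same assumptions (MicroProfile LRA API; the header constant follows the spec's LRA class, other names are illustrative):

```java
// @AfterLRA listener: notified when any LRA that passed through this resource
// ends, whether or not the resource was enlisted as a participant.
import java.net.URI;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.AfterLRA;
import org.eclipse.microprofile.lra.annotation.LRAStatus;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_ENDED_CONTEXT_HEADER;

@Path("/lra")
public class AuditResource {

    // A valid PUT invocation: the header carries the ID of the LRA that ended
    // and the body carries its final LRAStatus (Closed, Cancelled, ...).
    @AfterLRA
    @PUT
    @Path("/after")
    public Response after(@HeaderParam(LRA_HTTP_ENDED_CONTEXT_HEADER) URI endedLraId,
                          LRAStatus status) {
        System.out.println("after " + endedLraId + " " + status);
        return Response.ok().build();
    }
}
```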
This really transforms this resource — which was originally a participant doing something useful in the perform method — into more of an LRA utility class: it is responsible for starting and stopping the LRA. Just start it and propagate it somewhere. We can actually even return that LRA ID — I will need it here as a String — to the user. So I can call this service, get an ID, then propagate it to different services, and when I decide to close it, I can close it manually. This is one of the use cases you were asking about. So I will just return a Response here, close. And I will actually make this one MANDATORY too, because I want to know which LRA I am closing, and I can explicitly type end = true here, even if it's not required. I will also inject the context header — the LRA ID which I am closing — to log it nicely, and return that I want to finish successfully. So what happens now if I run this: all our operations are performed — you see that perform was called in LRA Service 1 and in LRA Service 2 — but we are already finished with our client invocation, and the LRA is still active. In this way it will stay active until somebody really closes it. And I returned the LRA ID to myself here. So to actually close the LRA, I call the endpoint that I defined — oh, sorry, /lra/end — and I need to pass the LRA ID manually, so I just copy and paste it here. And when I call this method, all the compensations — completions — are called, the @AfterLRA callbacks are called, and we see that the LRA is finished. With that, I will really quickly go back to the slides. I should — five minutes, yeah, 25, that's great. So what I showed you here: the state model is precisely defined in the specification document. When you start the LRA, it is in the Active state — that's before completions or compensations start to be called. When you close it, it's Closed.
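The "utility" resource described above, as a sketch under the same assumptions (MicroProfile LRA API; paths and names are illustrative):

```java
// One endpoint starts an LRA and leaves it open; another closes it explicitly.
import java.net.URI;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

@Path("/lra")
public class LraUtilityResource {

    // Start an LRA but keep it active after the method returns (end = false);
    // hand the ID back to the caller so it can be propagated manually.
    @LRA(value = LRA.Type.REQUIRES_NEW, end = false)
    @GET
    @Path("/start")
    public Response start(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok(lraId.toString()).build();
    }

    // MANDATORY so we know exactly which LRA we are closing; end = true is
    // the default, written explicitly here. Returning successfully closes it.
    @LRA(value = LRA.Type.MANDATORY, end = true)
    @PUT
    @Path("/end")
    public Response end(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        System.out.println("closing " + lraId);
        return Response.ok().build();
    }
}
```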
When you cancel, it's Cancelled. But there is also the possibility for a participant to state explicitly that it is unable to complete or compensate, by returning a specific payload back — a participant status. If you state that you failed to complete or failed to compensate, we are basically stuck: we don't know what that means for your particular application. So from the specification's point of view we are only required to log it somewhere, and probably some manual intervention is needed. But, as somebody here was already asking, there are also the intermediate states Closing and Cancelling, in which we are calling the completion or compensation methods. In compensate, if you return HTTP status 202 Accepted, you are basically saying that you are not able to compensate right now, but you want to be called again later. So you don't actually need to fail the invocation to get your compensation or your status method invoked again — all you need to do is return 202 from complete or compensate, and that method (or the status method) will be invoked again. So the failed-to states should really be reserved for situations where a human needs to look at the outcome and manual intervention is required. I'm sorry that I'm not able to show you this; it's already a slightly advanced concept in the specification. If you want to use this in your project, this is the API dependency. We are working on RC2, which should hopefully be done this week, and the Narayana version I was using is this one. However, the Quarkus extension has an open PR, so hopefully it will be merged soon, in one of the next Quarkus releases. And with that, this is everything from my side. You can find me on social media, and thank you for your attention.
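The 202-retry behaviour described above can be sketched in plain Java, with no framework: a participant answering 202 Accepted is simply invoked again until it reports a final status. Everything here (class names, the attempt counter, the retry loop) is an illustrative model, not the coordinator's actual implementation.

```java
// Plain-Java model of the coordinator's retry loop for a participant that
// returns 202 Accepted until its compensation can actually run.
public class CompensateRetryDemo {
    static final int ACCEPTED = 202; // "call me again later"
    static final int OK = 200;       // compensation finished

    /** A participant whose compensation only succeeds on the third attempt. */
    static class Participant {
        private int attempts;

        int compensate() {
            attempts++;
            return attempts < 3 ? ACCEPTED : OK;
        }
    }

    /** The LRA stays in the Cancelling state while 202 keeps coming back. */
    static int totalCompensateCalls(Participant p) {
        int calls = 1;
        while (p.compensate() == ACCEPTED) {
            calls++;
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(totalCompensateCalls(new Participant())); // prints 3
    }
}
```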