Welcome to Implementing Resilience with Micronaut. We are very glad to have Nareesha with us. Nareesha, thanks a lot for doing this.

Welcome to this presentation on Implementing Resilience with Micronaut. It combines resilience patterns from architecture with Micronaut itself: we will see how to implement these patterns using Micronaut, and what Micronaut offers beyond the frameworks that already exist. My name is Nareesha, and I help teams get better at their technical practices. Let's get started.

People often talk about microservices architecture, and the patterns presented here are very useful if you are using one. But even if you are not, it is rare for a monolith to exist in isolation: you usually have to talk to other applications in your enterprise, or to third-party applications. So communicating with external systems is an integral part of most systems today.

Let's take this example. We have two services: a conference service and a presentation service. The conference service invokes the presentation service and gets some data in response. Imagine these two live inside one monolithic application and everything is going really well. Then, for some reason, you decide these two services have to be independently deployable, so you deploy them as two different services. Now you have a whole new set of problems to think about. When you split them into independent services, what do you think is the first thing you want to take care of? Take a moment and think about that. All these new concerns come in because the services now talk over a network: there is latency, there are bandwidth limits, and all sorts of reliability issues can appear. To make your application resilient, you have to consider all of these things.
This brings us to the eight fallacies of distributed computing, which most of you are probably already familiar with.

The first thing we typically take care of is the timeout. Even if you do nothing else, you cannot assume you will always get a response. If you don't get a response, should you keep waiting, and for how long? Without a timeout, the calling thread blocks waiting forever, application performance degrades, and eventually the application is not responsive at all. So that's the first thing you want to take care of.

How do you typically handle timeouts in Micronaut? Whenever you want to make an HTTP call to an external system, you represent that call with a Java interface. In this case I have a PresentationClient, and within it any GET, POST, or other method can be represented. I can name the method anything, say getPresentationInfo; I pass a value and get the response bound back into this object. This is very declarative in nature: you define the interface, and an implementation is generated for you. This aspect will already be familiar if you have used something like Hystrix, but the salient feature of Micronaut is that the implementation is generated at compile time, not at runtime. At compile time, Micronaut runs an annotation processor and generates the proxy, so it uses a compile-time proxy, not a runtime proxy.

For an HTTP client, the read timeout defaults to something like ten seconds, but you can override it, say by setting micronaut.http.client.read-timeout to five seconds. If you want to customize it per client, you can also use a service id instead of giving the URL directly.
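For reference, a sketch of what those settings might look like in an application.yml. The global micronaut.http.client.read-timeout key is Micronaut's standard client setting; the per-service block under micronaut.http.services corresponds to the id-based client style, with a placeholder service id and URL rather than values from the talk:

```yaml
micronaut:
  http:
    client:
      # Global read timeout applied to all declarative HTTP clients
      # (the default is around 10 seconds)
      read-timeout: 5s
    services:
      # Per-service configuration, referenced from @Client("presentation-service");
      # the URL below is a local placeholder
      presentation-service:
        urls:
          - http://localhost:8081
        read-timeout: 3s
```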
Using a service id works very handily with discovery services like Eureka or Consul, but I'm not using one here for the sake of simplicity; instead, the details of the URL can be configured directly. I name the service presentation-service, and in the configuration I can give multiple URLs and specifically set the read timeout for that service. So at the HTTP client level you can state what the timeout is. That's the simplest thing you can do. If you run this, you'll see something like: it waits for three seconds and then times out, and you see a read-timeout error.

Let's go further. The next simplest option you can add is retry. If the system was not available for some reason, say it was down and is just coming back up for a few seconds, then retrying is a better option than failing the whole request altogether. Let's see what retry looks like. Micronaut predominantly uses annotations, so you have @Retryable, and along with it you can supply how many times you want to retry, what the delay between retries should be, and a multiplier to keep increasing the delay. You can also say for which exceptions you want to retry and for which exceptions you don't. Those are the options you have, and the code looks something like this. Let me switch to the code. There are two ways you can implement retry: either you annotate an individual method with @Retryable, in which case it applies only to that particular call, or you put the values directly at the client level, as I have shown here.
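To make the retry semantics concrete, here is a plain-Java sketch of what the generated advice does behind an annotation like @Retryable(attempts = "3", delay = "1s", multiplier = "2"). This is an illustration of the pattern, not Micronaut's actual implementation; the parameter names simply mirror the annotation's attributes:

```java
import java.util.concurrent.Callable;

public class RetrySketch {

    // Retry a call up to `attempts` times, multiplying the delay after
    // each failure (exponential backoff). Assumes attempts >= 1.
    static <T> T retry(Callable<T> call, int attempts, long delayMillis, double multiplier)
            throws Exception {
        long delay = delayMillis;
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (i < attempts - 1) {
                    Thread.sleep(delay);
                    delay = (long) (delay * multiplier); // back off before the next try
                }
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky service: fails twice, then succeeds on the third attempt.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("service unavailable");
            return "ok";
        }, 3, 10, 2.0);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The key point the talk makes still applies to this sketch: the retry logic wraps the call like a decorator, which is why stacking @Retryable on both the method and the client multiplies the retries.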
Putting it directly at the client level means that any invocation, even if you have multiple methods in the client, will be retried with the configured settings. But make sure you have it in only one place, not both, because it acts like a decorator or proxy applied at compile time: if you put it twice, the retries will be multiplied. To check it, you can make this method fail randomly, and you'll see it retrying. That's the very declarative approach to retry.

The next evolution, the next level of sophistication, is the circuit breaker. A circuit breaker is a more sophisticated version of retry. The problem with plain retry is that, in our case, the conference service is calling the presentation service, and the presentation service may already be loaded, already processing a lot of requests. Bombarding it with more requests is not going to solve any problem. For example, when you're browsing a website that isn't responsive, you often end up hitting the refresh button, but that doesn't solve the problem: if a service can't handle x requests, it obviously can't handle 3x or 5x requests. In this case we can improve responsiveness by not sending any requests to the struggling service at all. That's what a circuit breaker does.

It has three states. By default you have the closed state, where requests flow through to the service. If failures are observed, it trips to the open state and fails fast without calling the service. After waiting some amount of time, it moves to the half-open state, where it selectively allows one request through to see if it goes through; if it does, it comes back to the closed state.
Otherwise, if that trial request fails, it goes back to the open state. The benefit we get is responsiveness. So if I try the circuit breaker here, you'll see it take a while and then trip; after the reset period it goes to half-open. But while it is open, if you fire more requests, they fail immediately: the caller did not wait for the service to respond. That's the benefit you get.

Let's see the circuit breaker in action. You can use the same technique; in this case I have put @CircuitBreaker directly on the client itself. Here I have said it should reset, meaning move to the half-open state, after 20 seconds, and how many attempts to allow is configurable. Essentially it's an enhanced version of retry, with state. Make sure you don't have both @Retryable and @CircuitBreaker: it's recommended to have only one of them present. Also, since a circuit breaker has to maintain additional state, there is a trade-off you can weigh: whether you want a circuit breaker or whether retry will suffice. The configuration options are very similar to retry, apart from the reset value.

The last option I'm going to discuss here is the fallback. With all these options, if you're not getting the original response, you can do something like return a cached response or some default response, whatever is possible in your domain, and return that to the user. You've already seen here that I'm getting a "TBD" title as the fallback response I configured. Configuring it is simple: you take the same interface, provide an implementation, and annotate it with @Fallback. That's how you implement it.
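The closed/open/half-open state machine described above, together with a fallback value, can be sketched in plain Java. This is an illustration of the logic, not Micronaut's actual @CircuitBreaker/@Fallback advice, and the threshold and reset period are arbitrary placeholders:

```java
import java.util.concurrent.Callable;

public class CircuitBreakerSketch {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private final int failureThreshold; // failures before the circuit trips
    private final long resetMillis;     // how long to stay OPEN before probing again
    private long openedAt;

    CircuitBreakerSketch(int failureThreshold, long resetMillis) {
        this.failureThreshold = failureThreshold;
        this.resetMillis = resetMillis;
    }

    // Run the remote call through the breaker; return the fallback instead
    // of waiting on the service whenever the circuit is open or the call fails.
    <T> T call(Callable<T> remote, T fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < resetMillis) {
                return fallback;              // fail fast, don't touch the service
            }
            state = State.HALF_OPEN;          // reset period elapsed: allow one probe
        }
        try {
            T result = remote.call();
            state = State.CLOSED;             // probe (or normal call) succeeded
            failures = 0;
            return result;
        } catch (Exception e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= failureThreshold) {
                state = State.OPEN;           // trip (or re-trip) the circuit
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }

    State state() { return state; }
}
```

While open, the breaker answers immediately with the fallback (the "TBD" title in the demo) instead of blocking a thread on a service that is already struggling, which is exactly the responsiveness benefit described above.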
So that was a quick demo of four essential patterns for handling resilience, and of how you can use Micronaut to implement them. Of course there are a lot of other patterns, but since this is a short demo I thought I'd stick to these four basic ones. The code base is available in my GitHub repo. With that, let's wind this up.

Thanks a lot, Naresh, for doing this. Thanks. Hope it was useful.