Welcome to the session on Ratpack. The idea here is to give some food for thought. We have all been building HTTP applications — by that I mean both traditional web applications and the REST services they use. We have been building these for quite some time, but can it be done in a better way? Can there be a better programming model for this on the JVM? I'm sure all of you have learned a lot about functional programming and may have been inspired by it, but if your day job is developing on the JVM, how can some of those good ideas be brought onto the JVM and put to use? Ratpack is one such effort. You could place it anywhere between a library and a framework: it is not as opinionated as many frameworks, but it is a good tool for developing lightweight microservices, a gateway to microservices, or a REST API for mobile devices. That is where Ratpack stands.

Ratpack started about three years back as a Groovy version of Sinatra, but later people found that much more could be done by changing the programming model. As of today, the Ratpack core is entirely a Java 8 library. On top of that there is a small Groovy layer, which makes it very easy to write concise DSLs rather than full-fledged methods. Java 8 does have lambdas, which make life a little easier, but if I were to do everything in Java 8 we could probably look at only a few examples, and it would take quite a bit of time to explain how everything is wired. That kind of readability is what the Groovy DSL brings. I don't have any more slides, so I'll get straight into the code and explain all the concepts of the programming model I want to highlight through code.

Before jumping into that: the traditional programming model is thread-per-request. The server will have already created a few hundred threads, ready to serve requests.
Whenever a request comes in, a thread is allocated to it, and throughout the processing of that request the thread belongs to that request alone. But serving a request involves several I/O operations: reading a file, making database calls, or calling some web service and waiting for the response. During all of that time, the thread is blocked. This leads to non-optimal utilization of resources, and if the app is really meant to be used by several hundred people at the same time, you will definitely have a lot of threads just waiting. Scalability suffers.

What Ratpack changes here — I'm sure people are familiar with Node.js — is that it follows a similar programming model. There is an event loop, and requests get assigned to it. Whenever there is an I/O job, it goes to a separate thread pool, gets processed there, and then comes back to run on the event loop. Unlike Node.js, the typical number of threads in the event loop is the number of cores multiplied by two. That's the default; you can change it, but it is few. With that, feel free to ask questions at any time. Let's get into the code.

One interesting aspect is that you can run Ratpack as a simple single-file Groovy script. But I'm going to create a project here and use that. To bootstrap a project I'm using a utility called Lazybones, which just creates a project template for you; you can create templates for anything. So I say create, I want the Ratpack template, and I mention the version: RC3, which is the latest. The GA is supposed to be released on the 13th of this month, and RC3 currently has only one bug filed against it. For running it I'll use Gradle — by default this also creates the Gradle wrapper — so I can run `gradlew idea`, which generates the IDEA project files, which is what I use.
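The single-file option mentioned above can be sketched like this. This is a hedged sketch, not the demo's actual file: the `@Grab` coordinates and the `1.0.0-rc-3` version are assumptions based on the RC3 mentioned in the talk.

```groovy
// ratpack.groovy — a minimal single-file Ratpack app (sketch).
// With Groovy installed it can be run directly: `groovy ratpack.groovy`
// (Grape resolves the dependency).
@Grab('io.ratpack:ratpack-groovy:1.0.0-rc-3')
import static ratpack.groovy.Groovy.ratpack

ratpack {
    handlers {
        get {
            render "Hello from Ratpack"   // responds to GET /
        }
    }
}
```

The same `ratpack { handlers { ... } }` shape is what the Lazybones-generated `ratpack.groovy` contains, just inside a Gradle project.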
That has created the IDEA project, so I can go and launch it. This is the structure: under the `ratpack` directory you have a `ratpack.groovy`, the default script that gets created. Let's start with the handlers — this is where you specify the URL patterns. By default I'll add a handler for GET. I'll run Gradle in continuous build mode, so whenever there's a change in a file it automatically gets rebuilt.

Let's start building a simple REST API. Typically an FMCG sales rep goes to the retail stores and collects a few orders for products, so we'll build some of that here. The first thing he wants to know at the beginning of the day is which outlets to visit, so let's have a web service for outlets. I'll just put some hard-coded data there: an Outlet class and a few instances. To serve that, typically I would call `getOutlets()` and then render the result as a JSON response.

The problem to note is this: here it's hard-coded data, which is fine, but in the usual case you would make a database call, get the response, and then render it. During the database call, your processing thread gets blocked — and doing that on the event loop is a very bad thing in Ratpack. It should not be done there. The way to delegate the task to a blocking thread, to take it away from the event loop, is `Blocking.get`. Inside it I place my call to get the outlets. Let me put some data here so it compiles, assign the result to a variable, and see what it returns. As you can see, it has returned a Promise.
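The handler just described can be sketched like this. `getOutlets()` is a hypothetical stand-in for the demo's hard-coded data (in real code it would be a database call):

```groovy
import static ratpack.jackson.Jackson.json
import ratpack.exec.Blocking

// Hypothetical data-access method standing in for a real database call.
def getOutlets() {
    [[id: 1, name: 'Outlet A'], [id: 2, name: 'Outlet B']]
}

// Inside the handlers block of ratpack.groovy:
get('outlets') {
    Blocking.get {
        getOutlets()          // runs on the blocking thread pool, off the event loop
    }.then { outlets ->       // runs back on a compute (event-loop) thread
        render json(outlets)
    }
}
```

`Blocking.get` returns a `Promise`, which is the key to what follows.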
What happens here is that when you make this call, the call is issued and the event loop immediately continues processing the next statement; it doesn't block. Ultimately, when the processing is done, we want to take that data and render it back to the client. So the earlier `render` doesn't make sense now, because you don't know when the response will be ready. Instead you attach a `then` block: when the response is ready, it gets called. The closure's implicit parameter is `it` by default, or you can give it a name.

That leads to an important question: what is the execution sequence? What happens if I put print statements in? Say I print "before blocking" before the call, "inside blocking" within it, and "after then" after the whole expression, plus one inside the `then` block — four parts. What is the execution sequence? The first is obviously "before blocking". What comes next? Does anybody have a different opinion?

In the usual case it would depend on whether the separately running thread happened to run first, so the result would be unpredictable. Print statements may be okay, but what if the code here throws an exception? Does the blocking thread have to finish before the exception propagates? That leads to unpredictable results. Instead, Ratpack makes it very predictable, and you can see that: "before blocking", "after then", "inside blocking", then the `then` part. Before any blocking operation executes, whatever is outside the blocking operation gets executed first.
That's why "after then" always comes before "inside blocking". The dependency is that only once "inside blocking" is done can the `then` part execute, and we don't know when that will happen. To see this clearly, let's add a bit more detail: I'll print the current thread in each place, just to see where each part executes — because if you understand this, you pretty much understand the Ratpack programming model.

As you can see, "before blocking" executed on a compute thread, which is the event loop. "After then" also executed on a compute thread. "Inside blocking" executed on a separate thread — you can see it's a ratpack-blocking thread. Then the `then` part, whatever runs after the blocking, executed on a compute thread again.

While programming, what you have to ensure is that all computation happens on the compute threads, and whatever blocks — the I/O operations — happens on the blocking threads. For example, after getting the results I could have converted them to JSON on the blocking thread and just rendered in the `then` part, but converting to JSON is not an I/O operation; it's a CPU-bound operation, so it's not good practice to put it in the blocking block. Blocking should be I/O only.

Any questions? Okay, let's go ahead. I have a few more web services already built — the other microservices this application will use — which I'm running on port 5051. One gives product information: product ID and product name. Another gives the price information. What we'll do is call the first and get its data, then call the price service, compose the two together, and return a single object.
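The experiment above, as a sketch. The order in the trailing comments is the one observed in the demo:

```groovy
import ratpack.exec.Blocking

// Inside the handlers block:
get('order') {
    println "before blocking: ${Thread.currentThread().name}"
    Blocking.get {
        println "inside blocking: ${Thread.currentThread().name}"
        'some data'
    }.then {
        println "then           : ${Thread.currentThread().name}"
        render it
    }
    println "after then     : ${Thread.currentThread().name}"
}
// Order observed in the demo:
//   before blocking  — ratpack-compute thread (the event loop)
//   after then       — ratpack-compute thread (rest of the handler runs first)
//   inside blocking  — ratpack-blocking thread (separate pool)
//   then             — ratpack-compute thread (back on the event loop)
```

The `.then { }` call only *registers* the continuation; the handler body runs to completion before the blocking work starts, which is what makes the ordering deterministic.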
Let's see how we can use them. Let me copy in some boilerplate — mainly the data structures and the conversion from the response back to objects. We'll define `get('product/:id')`, passing the ID in the path. Now I want to call that web service and get the response. To make the call, Ratpack already provides an `HttpClient`; that's what I use here. The `HttpClient` is available by default, so I just declare it as a closure parameter and it gets injected automatically, because Ratpack is invoking this handler. The idiomatic way would be to create a service class, make the `HttpClient` an instance field there, inject the `HttpClient` into the service and the service into all the handlers — but since I have multiple requests and want to keep it simple, I won't do that here. The ID string I convert to a long.

I'm making a web service call, which is again an I/O request. Do you think it should be executed in blocking or on the compute thread? It should be blocking, right? But notice I'm not doing anything — I have not written `Blocking.get` or anything. Let me run it and see: it returned a Promise. Note that the `HttpClient` Ratpack gives you returns a Promise by default. You don't have to explicitly say "take this off the event loop"; it does that for you automatically.

Now I want to get the result when it is available, convert it to the `ProductMaster` class I have here, and display it. For that I use `map`. `map` takes a promise and returns a promise: the input to the closure is the response received from the service, which I transform into a `ProductMaster`.
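A hedged sketch of the handler so far. `ProductMaster`, the parsing logic, and the `localhost:5051` path are stand-ins for the demo's own classes and services:

```groovy
import static ratpack.jackson.Jackson.json
import groovy.json.JsonSlurper
import ratpack.http.client.HttpClient

// Stand-in for the demo's data class.
class ProductMaster {
    long id
    String name
}

// Inside the handlers block; the HttpClient is injected via the closure parameter.
get('product/:id') { HttpClient httpClient ->
    long id = pathTokens.id as long
    httpClient.get("http://localhost:5051/product/$id".toURI())
        .map { response ->
            // map unwraps the promised response, transforms it,
            // and re-wraps the result in a new Promise
            def data = new JsonSlurper().parse(response.body.bytes)
            new ProductMaster(id: data.id, name: data.name)
        }
        .then { product -> render json(product) }
}
```

Note there is no mention of `Promise` inside the `map` closure: the unwrapping and re-wrapping is handled by `map` itself.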
I take the body, get the bytes, and using a `JsonSlurper` convert it to JSON. Then I take the ID and the name and create a new object from them. Then I map again, convert it back to JSON, and render.

A question about whether a method expects a promise: methods have a contract about whether they accept a promise or an ordinary type. `Promise` is the API provided by Ratpack, and it has several methods on it — `map`, for example. You have to know it from the API docs. When you write your own classes, you may decide to return a promise, or you may use one of the wrappers available. For example, within the transform closure here there is no mention of a promise anywhere: the promise is already unwrapped, and when the result is created it's wrapped inside a promise again and returned. The `map` function takes care of that.

Whenever you call a web service, you may face limitations. If you're dealing with a third party, they may say you can make only, say, ten requests at a time. If it's the Google reverse-geocoding API and all of a sudden your application is called by 50,000 users, that doesn't mean you can make 50,000 calls to the Google API — it will not allow that, and many of the calls will simply time out. There is a way to throttle this: you can say throttle of size 20 to make sure only 20 calls happen at a time, and that's available out of the box; you don't have to write any custom solution. Caching you will have to write explicitly, because Ratpack can't know your semantics; you can put a transformation function in between and decide.

Right now I've fetched just one of the services. I'll copy this and write a fetch for the price, which again takes the ID.
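The throttling just mentioned can be sketched like this; a single `Throttle` instance shared across requests caps the number of in-flight calls:

```groovy
import ratpack.exec.Throttle
import ratpack.http.client.HttpClient

// One shared Throttle: at most 20 promises routed through it
// will be in flight at any moment; the rest queue up.
def throttle = Throttle.ofSize(20)

// Inside the handlers block:
get('product/:id') { HttpClient httpClient ->
    httpClient.get("http://localhost:5051/product/${pathTokens.id}".toURI())
        .throttled(throttle)      // queued until a slot is free
        .map { it.body.text }
        .then { render it }
}
```

The sharing matters: a throttle created per-request would never limit anything, since each request would get its own 20 slots.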
Here I have received the `ProductMaster`. From its ID, I want to make a call to the next service. I can't use `map` here, because the next call — invoking that web service — is again an asynchronous one. Instead you have to use `flatMap`. The input to the closure is the `ProductMaster` I received from the previous `map`. Here I call the price fetch with `productMaster.id`, get a response, and convert it to the `ProductPrice` class I already have. Now I want to combine both pieces of information and create a new object; that's what the transform does — it takes a product master and a price and clubs them together into one `Product`. So the `then` block now receives a `Product` object.

This is fine, but it applies to only one product. It's not a good idea for the client to make a call for each product; it's much better to take a list of product IDs and fetch them together. Let's write one for that. First I need product IDs — say one and two. Ratpack implements Reactive Streams, so those options are available. I convert my list of product IDs into a stream, which is done by the `publish` function, then call `flatMap` and do the same operations: each ID gets passed to the `flatMap` closure one by one, so the parameter itself is the value. Then once each result comes, I do the same thing — you have to wrap it, and it's processed one by one.
Whatever is inside — even if you have two blocking operations, two I/Os inside a blocking block — Ratpack is not going to run them concurrently; they run serially. I'll need one more `flatMap`; ideally I would put it in a separate method and call it from both places, and that should work well. Previously I had a promise; now I have a stream, not an ordinary promise, so I have to convert it back to a list and then render. That's how you wrap it into a stream.

Streams have several other methods with which you can compose them very well. Again, it's Reactive Streams compliant, so if you look at Reactive Streams you'll find several projects that support it. And if you're not happy with Reactive Streams and want much more powerful composability, you can use RxJava. Ratpack doesn't follow a plugin system; instead it follows a module system, and there is a Ratpack Rx module that helps you adapt a promise to observables and observables back to promises. That is something you could use.

Finally, let's do a POST example. Since this is a POST, the body will contain the data, which I can get from the request. If you look, it's again a promise. Say you have a very big payload and the authorization fails — then there's no point reading the body and parsing it and all that. By default, Ratpack is lazy in reading the body. So I take the body from the request and just get the text content from it, and you'll see the same thing echoed back.
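The two-call composition and the list version can be sketched together. As suggested above, the per-product logic sits in one method used by both handlers. `ProductMaster`, `Product`, the `toProductMaster`/`toPrice` parsers, and the service URLs are hypothetical stand-ins:

```groovy
import static ratpack.jackson.Jackson.json
import static ratpack.stream.Streams.publish
import ratpack.exec.Promise
import ratpack.http.client.HttpClient

// Fetch master data, then price, and combine. flatMap (not map) is used
// because the second call is itself asynchronous and returns a Promise.
Promise<Product> fetchProduct(HttpClient httpClient, long id) {
    httpClient.get("http://localhost:5051/product/$id".toURI())
        .map { toProductMaster(it) }                       // hypothetical parser
        .flatMap { master ->
            httpClient.get("http://localhost:5051/price/${master.id}".toURI())
                .map { resp -> new Product(master, toPrice(resp)) }
        }
}

// Inside the handlers block:
get('products') { HttpClient httpClient ->
    publish([1L, 2L])                                      // list -> reactive stream
        .flatMap { id -> fetchProduct(httpClient, id) }    // one async call per id, serially
        .toList()                                          // stream -> Promise<List<Product>>
        .then { products -> render json(products) }
}
```

`publish` turns an `Iterable` into a Reactive Streams publisher, and `toList` collapses the stream back into a single promise once every element has been produced.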
You can do any other processing before deciding whether you really need the body. Here too it's a promise: ultimately, to get the content, the promise has to be resolved, and `map` does that automatically. I did a very simple thing here, but you can pass any function to `map`: it takes the promised `T` as a parameter and returns something. It's highly composable — otherwise, beyond a point, things stop composing.

And it's lazy everywhere. For example, here too, if I don't have a `then` block, the `Blocking.get` will never be called. If I remove the `then` — and if I don't render something it will complain — you'll see it did not print anything. That's how lazy it is. Laziness and composition are pretty much what has been borrowed from functional programming. That, at a high level, is what I wanted to cover — in the spirit of this functional programming conference, the great ideas of FP used here. The key focus is asynchronous and non-blocking: with that you have fewer threads waiting, and your application can scale very well. It also has templating mechanisms through which you can generate the UI.

On the question of types: that depends on what is inside; if you remove the type declaration it may not work, and you'll have to add the type back. Groovy has both options — you can mark code for static compilation and have types checked at compile time, or it can be dynamic; both are available. The library itself is statically compiled, and because of that I get all the auto-completion.
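The lazy POST handling described above, as a sketch:

```groovy
// Inside the handlers block. request.body is a Promise<TypedData>:
// nothing is read or parsed until the promise is subscribed to,
// so an early failure (e.g. failed authorization) can skip the work
// on a large payload entirely.
post('echo') {
    request.body
        .map { it.text }              // TypedData -> String, once resolved
        .then { text -> render text } // echo the body back
}
```

Without the `then`, nothing is ever read from the wire — the same laziness shown with `Blocking.get` above.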
If I type here, the IDE shows me all the handler methods and so on. For a very big application it may not be a good idea to put everything in one file. Whatever I put here — the handler parts — can be moved into a separate class implementing the `Handler` interface, or if you have two or three related handlers together, into a class implementing a handler chain. That way it can be further split into separate modules. Yes, `ratpack.groovy` is a script file, and you may have separate classes alongside it.

There are quite a few modules available, as you can see. Core is the one we used, and Groovy is the layer on top. You could write everything as Java classes instead of the Groovy script, but that would be much uglier to read. There is a Guice module by default. Then there are a lot of others: HikariCP connection pooling is available, and if you're using MongoDB you can use GMongo, which makes calls very easy. Making calls to other REST APIs is much easier with the built-in HTTP client. If you're creating your own components, you can create a Guice module, put it there, and at the beginning just specify which modules are required. For security it has a pac4j integration. There are also Rx and session modules. And it supports Spring Boot as well: if you want to run it on top of Spring Boot, you can use whatever is available there — maybe if you're going for slightly heavier lifting with database connections and so on.

By default it uses Netty; that's how the asynchronous part works. The build output is a jar. And you're right — it's not concurrency, exactly; it's an effective segregation of blocking and non-blocking work.
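Splitting a route out of `ratpack.groovy`, as just described, can look like this sketch:

```groovy
import ratpack.handling.Context
import ratpack.handling.Handler

// A route's logic moved into its own class.
class OutletsHandler implements Handler {
    @Override
    void handle(Context ctx) throws Exception {
        ctx.render('outlets response here')   // placeholder body
    }
}

// ratpack.groovy then only wires it up:
// handlers {
//     get('outlets', new OutletsHandler())
// }
```

Because `Handler` is a plain interface, these classes can be statically compiled, unit-tested in isolation, and grouped into chains as the application grows.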
For concurrency, there is nothing available out of the box as of now. You can use what is provided by Java or any Java implementation — an executor service, parallel streams, all of those work out of the box, since it's the JVM. When you do a Gradle build, it generates a jar file you can run on the JVM. It requires Java 8; older versions of Java will not work. And yes, any Java library can be integrated here. As for other JVM languages, I'm not very sure, but it should work, I believe, because anything that compiles to JVM bytecode can run alongside it. As for the template question: what I mean is that I already created the template, and that's how I set it up — then it should work.

The documentation is not very exhaustive as of now, so the better way I found is to read the tests to understand how to do things. And the API has been changing quite a bit: the blocking call was initially called background, then it became blocking, and now it's `Blocking.get`. There is also Dropwizard Metrics support.

That would be the best way to use it, I believe: instead of developing a very heavy application, you make more of a microservice. HTTP is the default option for talking between services, but you could use any messaging — RabbitMQ or another message broker would be another way — though it's still at the network level. And yes, you would consider them two logical databases, though physically they may be one schema. HTTP looks like the preferred option these days, but message passing is still a valid option; it depends on the use case.
Again, in terms of development effort, comparing monolithic versus microservices, microservices take slightly more effort — but you're trading that off for a benefit. Any other questions? I'll be around tomorrow as well, so if you want to discuss anything, feel free to. Thanks for attending. Have a nice day.

Very soon — the 13th is when 1.0 is supposed to be released — it will be production-ready. Ratpack's promises are one implementation; if you still need more, you can integrate RxJava into it as well. Yes, it initially started as just a Groovy flavor of Sinatra about three years ago, but later they brought in Netty and the whole asynchronous model. The idea is that the Grails framework is planning to offer a profile with Ratpack as the base; Ratpack is one of the profiles they want to add, along with others for microservices, and the community can expand on that. You could also use Spring Boot as a base. pac4j is a separate project with its own library; there's just an integration module here — I haven't tried it out. For the database, it's a readily available module: you just declare the module and the data source connection, and connection pooling is already available. Well, thank you.