Hello there, and welcome to another DevNation. We have a great show for you today. We are going to deep dive into a technology related to business rules and business processes, and of course how that relates to Quarkus. You've seen a lot from us over the last few weeks and months about Quarkus, our supersonic, subatomic Java, and I encourage you to go back and review the recordings we have out there on YouTube, just to get more context. But you haven't yet seen exactly how we've taken an existing project into this new world. I'm very excited to have Mario with us. Mario, of course, is our Drools project lead. He's been a big-time committer in all sorts of open source over the last several years, and he is our guru and expert for all things business rules and business processes. So at this point, let me turn it over to Mario. Mario. So, hi guys, and thank you, Burr. Yes, I joined the Drools project years ago, and I've been the project lead for a year now. We wanted to move Drools to the cloud, to make it cloud-ready and to join the Quarkus revolution. And this is why we started this new project, which is not just a rebranding, but a repackaging and a big refactoring of our business automation suite. So basically, what is Kogito? Kogito is a new platform that, of course, is based on Drools and jBPM. Of course, we are not rewriting a full-fledged rule engine or workflow engine from scratch, but we did a lot of work to adapt them, to make them cloud-ready, and to integrate them with Quarkus. So what we want to offer is a comprehensive business automation suite that can run on OpenShift, that is cloud-ready, and that offers automatic integration, including automatically generated REST APIs, to fully integrate a web application with a decision server or a workflow engine.
As I said, I'm the lead on the rules side, so today I will mainly speak about the rule engine part. And there will be another presentation in September from Maciej Swiderski, who is my great friend and the project lead of jBPM; I will give you more details about it toward the end of the presentation. So this is what Kogito is supposed to be, but a super small demo is worth more than a thousand words, so let me give a very quick demo. Let me share my screen. OK, so what I wanted to show is that I have a very simple rule, and I want to expose this rule as a web service using all the Quarkus goodies. Moreover, I want to also use the native capabilities of Quarkus, so I created a native image of this web app, which is very simple. I'm not creating the native image now because, as you can see, it takes almost two minutes on my machine, but once you have it, it's super fast. It's up in, as you see, 3 milliseconds. So it's extremely fast, and this is what we were really looking for; I will clarify this in a minute. So what I have here is a super simple demo rule. I just have a result and two persons: I'm looking for a person named Mark, and I'm looking for a person with a different name who is older than Mark. So I defined this rule, and then I created a very simple web service, which is basically a hello world that calls my service. In this service, I'm injecting the rule session, which is configured here by name. So here I have a configuration file that says how this specific session is configured, and I'm injecting this session into my service in a very simple way, basically with plain CDI. And yes, I'm inserting three persons: Mark, a person who is younger, and a person who is older, and I'm expecting, given the rule, that the engine will find this second person. And of course, if I invoke the service, you see it's super fast.
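The logic of the demo rule can be made concrete with a plain-Java sketch (this is illustrative only, not the actual DRL or the code Kogito generates; the `Person` shape, names, and ages are assumptions):

```java
import java.util.List;
import java.util.Optional;

// Illustrative sketch of the demo rule's logic in plain Java:
// find a person named "Mark", then a differently named person older than Mark.
public class OlderThanMarkSketch {
    record Person(String name, int age) {}

    static Optional<Person> findOlderThanMark(List<Person> persons) {
        return persons.stream()
                .filter(p -> p.name().equals("Mark"))
                .findFirst()
                .flatMap(mark -> persons.stream()
                        .filter(p -> !p.name().equals("Mark") && p.age() > mark.age())
                        .findFirst());
    }

    public static void main(String[] args) {
        // Mirror the demo: Mark, plus one younger and one older person.
        List<Person> persons = List.of(
                new Person("Mark", 37),
                new Person("Edson", 35),
                new Person("Mario", 40));
        System.out.println(findOlderThanMark(persons).map(Person::name).orElse("none"));
    }
}
```

In the real demo the matching is done by the rule engine, and the result is returned by the injected session from the REST endpoint; the sketch only shows what the engine is being asked to find.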
It prints the expected result, which is the output of the rule. Of course, this is a very simple, let me say, very dumb rule, but I just wanted to show how it works. And this is where I really started when I began trying to integrate our stuff with Quarkus and when we started this Kogito initiative. So I will go further with this demo toward the end of this talk, but I wanted to show you the target, the destination of this effort. We are all developers, and we all know that often the journey is at least as important as, and probably twice as interesting as, the destination. So the idea of this talk is also to demonstrate how we got there. The agenda, then: as I said, this has to be compilable into a native image, and to do this, we first had to understand what GraalVM is, which features and limitations it has, and how this impacted Drools a lot, so how we had to change Drools to fit into this picture. And then we integrated Kogito with Quarkus by developing an extension and also implementing the auto-reload, which is the last thing I will show at the end. So again, the idea of the talk is to explain how we got there. Why? Because I told you that I joined the Drools project eight years ago, and in reality Drools is even older; it's like 13 years old. So it's quite an old project, and of course, when it was designed and developed, we didn't have the cloud in mind. It is totally well established, a very robust and bulletproof rule engine, but it is a traditional Java framework. We didn't have the cloud in mind when we developed it, and we had to change some things. First of all, we had to change some things to make it fit with Graal, because again, we wanted the native compilation provided by Graal. So what is GraalVM? GraalVM is a polyglot virtual machine that works with bytecode, and therefore with all the languages supported by the Java virtual machine, but also with other languages like Ruby or Python.
And it is able to do all the optimizations that the normal just-in-time compiler does, but even better, it is also able to do this across languages. So for instance, if you have a Java method that calls a Python function, and the Python function is small enough, then the Python function can be inlined into the Java method invocation. So it provides the same goodies as the just-in-time compiler, but also cross-language. And the other very important thing, which is the most important thing for us, is that it allows you to create a native image of a Java program. This is what we wanted to do: we wanted to be able to create a native image of a Drools-based project. The important thing about Graal is that it does a lot of static analysis upfront. And as I said, it works in a similar way to the just-in-time compiler, but it does a lot of optimization not just in time, meaning while the program is running, but ahead of time, meaning during the compilation phase. And this allows you to have a smaller program and, of course, a super fast startup, because you are not launching the Java virtual machine anymore; you're launching a native image. The problem with this, at least for us, is that Graal comes with some limitations, which are understandable. The biggest part of these limitations, and I didn't mention this yet, is that Graal is based on what is called the closed-world assumption. So it assumes that it knows everything about the classes that will be used, and that there will be no specific Java virtual machine tricks like dynamic class loading and reflection, because otherwise GraalVM is not able to resolve these tricks, since it does its class loading at compile time. In Drools, we did a lot of dynamic class loading, because, as you saw in a typical rule, we have a few constraints and then a consequence, which is basically translated into a Java method. And then we put all this stuff into classes, and we do a lot of dynamic class loading. And this is totally unsupported in Graal.
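GraalVM's escape hatch for the closed-world assumption is a JSON configuration file that declares, per class, which reflective accesses must stay available in the native image. A sketch of what such a `reflect-config.json` looks like (the class name here is a hypothetical example):

```json
[
  {
    "name": "org.example.Person",
    "allDeclaredFields": true,
    "allDeclaredMethods": true,
    "allDeclaredConstructors": true
  }
]
```

The file is passed to the native-image compiler, e.g. with `-H:ReflectionConfigurationFiles=reflect-config.json`. As explained next, for Drools this approach would have meant listing every model class, which is why the team chose to remove reflection instead.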
Graal has some other limitations that in reality didn't affect us: it cannot run some native virtual machine interfaces, and it doesn't allow you to have a security manager. It doesn't support finalizers, which anyway are deprecated, and things like that. These really didn't affect us. What did affect us a lot is that you cannot have reflection, okay? Or better, you can have reflection if you follow the closed-world assumption again, meaning that you have to tell Graal: look, I want to use reflection on this specific class; and then you can configure that class in a JSON file and give the JSON file to the Graal compiler. But in reality we didn't want to do this, so we had to get rid of all uses of reflection in Drools, which again was a lot, because all the constraints were evaluated, of course, using reflection, okay? So what we did is introduce what we call the executable model, okay? So what is that? It's a model, a pure Java model, of a rule base. So here I have the same rule that I showed you in my demo, and now we have something that translates this rule into a Java model using this DSL. Don't worry, you don't have to write this DSL by hand; it is automatically generated for you: our parser parses the rule and generates this Java method. And you can see now that all the constraints have been translated into lambda expressions, and the consequence, the right-hand side of the rule, became a lambda expression as well, okay? Moreover, in the executable model there is stuff that we had to make explicit, again to avoid reflection. There is stuff that Drools did for you, like setting up some indexing and using a feature called property reactivity. All this stuff was inferred from the DRL, from the rule language, but now it's written explicitly, again generated for you, inside the executable model, okay?
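The shift the executable model makes can be illustrated with a small, self-contained Java sketch: the same constraint evaluated reflectively (the old, native-image-hostile way) and as a lambda (the executable-model way). This is a conceptual illustration, not the actual Drools model DSL; all names here are assumptions:

```java
import java.lang.reflect.Method;
import java.util.function.Predicate;

// Conceptual sketch: one constraint ("age > 37") evaluated two ways.
public class ConstraintSketch {
    public static class Person {
        private final int age;
        public Person(int age) { this.age = age; }
        public int getAge() { return age; }
    }

    // Reflection-based evaluation: the call target is only known at runtime,
    // so Graal's static analysis cannot see it.
    static boolean evalReflectively(Object fact, String getter, int threshold) throws Exception {
        Method m = fact.getClass().getMethod(getter);
        return ((Integer) m.invoke(fact)) > threshold;
    }

    // Lambda-based evaluation: a plain, statically analyzable method call,
    // which is what makes the executable model native-image friendly.
    static final Predicate<Person> OLDER_THAN_37 = p -> p.getAge() > 37;

    public static void main(String[] args) throws Exception {
        Person mario = new Person(40);
        System.out.println(evalReflectively(mario, "getAge", 37)); // true
        System.out.println(OLDER_THAN_37.test(mario));             // true
    }
}
```

Both evaluations return the same answer; the difference is only in what the ahead-of-time compiler can prove about them.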
So the executable model allowed us to have a Java DSL representation of our rules, and it has a set of good features, but again, the most important one for this talk is the last one in this bullet list: it allowed us to no longer use any reflection or any dynamic class loading, and therefore allowed our rules to be natively compilable with GraalVM. So we don't need a dynamic class loader anymore. Now our internal class loader just throws an exception in the defineClass method, because we don't need it anymore; but you have to change it, because otherwise Graal will bump into that dynamic class definition during the code analysis and will refuse to create the native image. So we had to have a specific class loader for these cases that doesn't do any dynamic class definition. Then, as I quickly showed you during my initial demo, we have an XML file where we define the properties of the rule bases and sessions that we have in our project. We cannot parse that file anymore, because parsing it requires a lot of reflection as well. So what we did, again, is apply the lesson we learned from the executable model: we now also generate a Java file that carries the same information that you type into the XML configuration file. And the same thing with the wiring of our services: we cannot use reflection, as I said, and we had a lot of wiring of our services based on property files, so what we had to do is hard-code the names of these services in a Java file. Why? Because, and this is interesting, you can use Class.forName with a constant, as you see on the right side of this example, without breaking the closed-world assumption. If you replace that constant with a variable, Graal doesn't know anymore what the class is, and it doesn't work anymore. So this is how we had to change the internal wiring of our services to make this work. And why did we do this?
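The service-wiring trick can be shown with a tiny sketch: `Class.forName` with a compile-time constant keeps the closed-world assumption intact, while the same call with a variable argument does not. The class name below is just a JDK class used for illustration, not an actual Drools service:

```java
public class WiringSketch {
    // A static final field initialized with a literal is a compile-time constant,
    // so Graal's static analysis can resolve the class at build time.
    static final String SERVICE_CLASS = "java.util.ArrayList";

    public static Object loadService() throws Exception {
        // Fine for a native image: the argument is a constant.
        return Class.forName(SERVICE_CLASS).getDeclaredConstructor().newInstance();
    }

    public static Object loadService(String className) throws Exception {
        // Not resolvable at build time: Graal cannot know what className will be,
        // so in a native image this fails unless the class is declared
        // in the reflection configuration.
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadService().getClass().getName()); // java.util.ArrayList
    }
}
```

On a regular JVM both variants behave identically; the difference only appears when the native-image compiler tries to close the world at build time.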
Because if you run a very simple example like the one I showed you before by launching a Java virtual machine, as you see from this screenshot, it took 73 milliseconds and more than 100 megs of RAM. If you do exactly the same by launching the native image, you have a sub-millisecond execution, and it takes only 16 megs of RAM. So it's almost 100 times faster with almost 10 times less memory occupation, and this is what we wanted, because we wanted the rule engine to work as a function as a service, as a decision service: a function that can take decisions for you, so that when you invoke it, it starts, runs the rules, takes some decisions, and returns them. So this is why we did all this work to allow native compilation. In reality, we had started looking at Graal before knowing about the existence of the Quarkus project, and then we were lucky with the timing, because they announced this Quarkus initiative and it was really what we wanted, because of course having a simple Java application is not enough. You want to leverage all the other technologies that you are used to using. So as I showed you, I'm exposing my services with a REST API and I'm using CDI, and I may want to integrate my rule engine with a Camel route, or monitor it with Prometheus and Grafana, or feed it with Kafka Streams. I really want to integrate all the cool, largely used Java technologies with this stuff, and Quarkus offered us the possibility to do this. So with Quarkus you can have this native executable, which is the thing I ran before, if you do a native image, based of course on GraalVM; or you can run it with a plain Java virtual machine, which is what you mostly do when you are in development mode. So then we needed to integrate our stuff with Quarkus, so we created a Kogito extension for Quarkus. And what did we do in this extension?
You have a build step, which is just a method annotated with the @BuildStep annotation. In our case it was very simple, because we already had that code generation bit, so we just wired it in: we are invoking the generation of the code of what we call the executable model, and we also started generating some automatic REST endpoints. What we do then is just register these Java classes, created on the fly, inside Quarkus. This is basically what we did to integrate Kogito with Quarkus. The next bit was that we needed to also enable the auto-reload. Quarkus is very cool, if you have started playing with it, in that you can launch it in development mode and then change some Java classes while it is running, and you don't have to bring it down and restart it: you have auto-reload of these Java classes. And of course we wanted exactly the same thing for our rule files, decision tables, and processes. So we also added the possibility to have auto-reload for our DRL files. Of course, Quarkus wasn't ready for our stuff, it only supported auto-reloading of Java classes, but we worked a bit with the Quarkus team: we sent some pull requests that were discussed a bit and then merged, and this allowed us to have our auto-reload feature. It was also a very nice experience, because all the Quarkus guys working on the core platform are very open to discussing improvements and new use cases that they simply hadn't thought about. And then, yes, I showed you a very first example of a very simple REST API that you can implement with Kogito. Let me show you another example, which is the example from these slides. So let me share my screen again. I will bring down the server, and this time I will restart it in development mode. So now, for our case, our extension is recreating all those classes that I mentioned before. So if you go into the target folder, you can see the generated classes for our project.
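What a build step like this does, conceptually, is generate Java source from a rule description and compile it ahead of running the application. A self-contained sketch using the JDK's own compiler API can illustrate the idea; the real extension hooks into Quarkus' @BuildStep mechanism instead, and all names here are illustrative assumptions:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Conceptual sketch of build-time code generation: emit Java source
// (standing in for code derived from a DRL file), compile it, and load it.
public class CodeGenSketch {
    public static Class<?> generateAndCompile(Path workDir) throws Exception {
        // "Generated" source, a stand-in for the executable model classes.
        String source = "public class GeneratedRule {\n"
                + "    public static String fire() { return \"rule fired\"; }\n"
                + "}\n";
        Path file = workDir.resolve("GeneratedRule.java");
        Files.writeString(file, source);

        // Compile the generated source with the JDK's built-in compiler.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler.run(null, null, null, file.toString()) != 0) {
            throw new IllegalStateException("compilation failed");
        }

        // Load the freshly compiled class. Doing this at build time rather
        // than at runtime is what keeps the native image's closed world intact.
        try (URLClassLoader cl = new URLClassLoader(new URL[]{workDir.toUri().toURL()})) {
            return cl.loadClass("GeneratedRule");
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> generated = generateAndCompile(Files.createTempDirectory("codegen"));
        System.out.println(generated.getMethod("fire").invoke(null));
    }
}
```

In the actual extension, the generated sources are handed to Quarkus as build items, so they are compiled and registered together with the rest of the application.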
So this, for instance, is one of the generated rules of the executable model. Now I'm invoking another service, which simply tells me whether or not I'm old enough to drink. So I'm exposing this REST API, and if I pass it a Mario person, who is old enough, it will tell me that of course I can drink. But I launched this in dev mode, so this means that I can change the rule. This time I coded my rule with a decision table, so I guess I will need to share it one second. Okay, I'm sorry, but I'm not able to share the Excel file containing the decision table. Okay, so let me try. Sorry about this. Let me try with the other rule. So, for instance, this one: again, it is saying this now. I can, I guess, do this. And as you can see, the auto-reload is re-evaluating the new rule without me having to restart anything; under the hood, Quarkus did a hot replacement of my rule files. Okay, so this is basically all. What's next is that we are designing a new modularization of rules based on a concept called the rule unit. We are better integrating rules with jBPM, because we want these units of work, these units of rules, to be orchestrated by jBPM. We are, as I said, developing new integrations, for instance with Camel routes, Kafka Streams, and things like that. And the other thing, something that has started just now, is that we want the generation of some automatic REST endpoints. So if, for instance, you write a query in your DRL, Kogito will generate a REST endpoint for you that you can call, and when you call it, it will populate the rule session, invoke that query, and return you the result of that query automatically, basically without writing any line of code. So this is all on my side. As I said, Maciej Swiderski, who is the jBPM lead, will do a follow-up of this talk on September the 5th. He will show you the jBPM side of this, and hopefully also part of this new implementation that I mentioned today.
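The "old enough to drink" decision from this second demo boils down to a simple check, which can be sketched in plain Java (the threshold of 18 and the `Person` shape are assumptions; the real demo encodes this in a DRL rule or a decision-table row and exposes it over the generated REST endpoint):

```java
// Illustrative plain-Java sketch of the "can drink" decision from the demo.
public class CanDrinkSketch {
    record Person(String name, int age) {}

    static String decide(Person p) {
        // The rule (or decision-table row) reduces to a single age comparison;
        // the assumed legal drinking age here is 18.
        return p.age() >= 18
                ? p.name() + " can drink"
                : p.name() + " cannot drink";
    }

    public static void main(String[] args) {
        System.out.println(decide(new Person("Mario", 45)));
        System.out.println(decide(new Person("Kid", 10)));
    }
}
```

The point of the demo is not the decision itself but that editing the rule file while the app runs in dev mode immediately changes this behavior, with no restart.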
And that's all from me; I'm open for questions now. Thanks. Any questions? Maybe now I'm live. Okay, we actually are out of time, Mario, so we have to finish this call rather rapidly, but I did try to answer the questions in real time, and I provided links in the chat for people to get more information. There was one question related to a REPL, so, like an interactive shell. Is there an interactive shell somewhere in there for maybe editing business rules or something like that? And then the other one: does the change to the business rule, the compilation of that business rule, is that a blocking call? Sorry, I didn't get the question. Do you have an interactive shell? Yes, it's the console of IntelliJ, yes. Okay. You mean an interactive shell to change the rules? Right. No, you just change them inside your IDE as plain source files, yes. And if you have decision tables, they are just plain Excel files, so you change them with Excel. Okay. And edit, save, refresh. So for those folks who are new to Quarkus, remember: edit, save, refresh. Edit, save, refresh. That's the model we live in now. All right. Well, thank you so much for that, Mario. We're out of time, and thank you all for hanging on the call today. But if you have further questions, certainly find myself or Mario out on Twitter. Thank you so much. Thanks a lot. Bye. Thank you.