Hi, I'm Iaku Parota and I'm a Senior Software Engineer at Red Hat. Today I'm going to talk about trustworthy decision management and how you can increase the trustworthiness of your decision models using TrustyAI capabilities. I'll get to TrustyAI in a few minutes, but first let me quickly introduce business automation, for those who are not familiar with it. Business automation was born because businesses simply needed to automate processes, with the goal of increasing efficiency and controlling costs across the organization. Think about it: if you have a decision or a process that follows a standard workflow and that you can automate, you would like to focus only on the problem itself and not on the technology. The business value is in the decision, not in the runtime, the development, or the maintenance of the decision model. For that reason, over the years standards have been developed so that you can focus on the model definition rather than on the technology that runs it. In the industry we have several standards: for processes we have BPMN, for decisions we have DMN, and for rules we have DRL. In this talk I'm going to focus on DMN only. Red Hat strongly believes in open source and open standards, and its business automation products have been built on top of them. The idea behind these products is to provide a generic runtime for business automation resources: BPMN, DMN, and DRL. Those products are tightly integrated, and their combination enables a lot of new scenarios. In particular, I'm going to focus on DMN and PMML. PMML is a standard for the serialization of machine learning models. You can start with your data, train your machine learning model with the language and tool that you prefer, and simply export it into a PMML file. You don't have to be an expert in PMML: it's just how you store the model.
It doesn't impact the training of the model at all. Once you have exported your machine learning model, you can import it again with the same tool, but also with other tools written in other languages. Your model has become portable, and the training is now completely decoupled from the runtime: you can train your model with Python, with R, with whatever language, and execute it in production with another language, because the service will simply import the PMML file and execute it. Once you have exported your PMML model, it can simply be called by a DMN decision. This integration between DMN and PMML enables a lot of scenarios, because it combines the capabilities of both standards. It also supports a soft connection between departments within the same company: the communication channel is now the standard itself, and neither side needs to depend on the other's technology, because the business analyst who receives a PMML model doesn't really need to know how that model was trained, in which language, or on which data; it is a portable model at the end of the day. Red Hat started with business automation 20 years ago, so it has a lot of experience. A few years ago, Red Hat started a brand new project called Kogito to build business automation microservices. Kogito is the next-generation, cloud-native business automation solution provided by Red Hat, and it's built on top of Quarkus, or Spring Boot if you prefer to use that. Roughly speaking, Kogito is the runtime for your business models: BPMN, DMN, and DRL. It's super easy to use and to consume: you create a new Kogito project, which is a Java project at the end of the day, you put your business models inside the project, you compile it, and Kogito generates the microservice for you, ready to be consumed.
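To make the portability argument concrete: a PMML file is plain XML, so any runtime in any language can discover the model and its fields without knowing anything about the tool that trained it. Below is a minimal sketch using only Python's standard library; the PMML fragment is a hand-written illustration, not the actual model from this talk's demo.

```python
import xml.etree.ElementTree as ET

# Hypothetical, hand-written PMML fragment for illustration only;
# a real file exported by a training tool would be much richer.
PMML_XML = """\
<PMML xmlns="http://www.dmg.org/PMML-4_4" version="4.4">
  <DataDictionary numberOfFields="2">
    <DataField name="age" optype="continuous" dataType="double"/>
    <DataField name="riskScore" optype="continuous" dataType="double"/>
  </DataDictionary>
  <TreeModel modelName="RiskModel" functionName="regression">
    <MiningSchema>
      <MiningField name="age"/>
      <MiningField name="riskScore" usageType="target"/>
    </MiningSchema>
  </TreeModel>
</PMML>
"""

NS = {"pmml": "http://www.dmg.org/PMML-4_4"}
root = ET.fromstring(PMML_XML)

# Any consumer can read the model name and declared fields straight
# from the XML, independently of the training language or library.
model = root.find("pmml:TreeModel", NS)
fields = [f.attrib["name"] for f in root.iterfind(".//pmml:DataField", NS)]
print(model.attrib["modelName"])
print(fields)
```

This is exactly the "soft connection" described above: the file itself is the contract between the data scientist and the runtime.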
It is already integrated with all the top-notch technologies for microservices: Knative, Kubernetes, OpenShift, Kafka, Grafana and Prometheus for monitoring, and many others. But now we get to the point: can you trust your decision? Because the standards alone are not enough anymore. In recent years, many companies have faced reputational damage due to machine learning usage. To give just one example, an Amazon hiring tool was biased against women, and there are plenty of examples like this that have affected big companies, but also small ones. On top of that, regulations like GDPR impose constraints that make production environments even more complicated. Since we are dealing with decision models, and in particular with machine learning for business automation, it is now necessary that decision services are monitorable, auditable, and explainable. Usually there are different personas involved in business automation. For example, we might have data scientists who have very good technical knowledge but limited knowledge of the business domain. On the other side, we have the business analyst, who knows the use case very well, but may not have deep knowledge of the machine learning model that has been used, or deep technical knowledge in general. Of course this is not always true, but it is the common case, so take it as such. And as a matter of fact, it's easier to explain simple models than complex ones. Complex models can solve complex problems with very high performance, but they are black boxes and they are hard to explain. So we have a trade-off between the explainability of a complex model and its performance. I worked for some years at an insurance company, where we were trying to find new variables to improve the risk score calculation.
We used very complex machine learning models, but when we talked to our colleagues who were building the traditional risk score, they said they had already tried complex machine learning models, and the main problem was the regulator. The regulator is a person from a government agency whose job is to check that insurance companies are not discriminating against anybody, that they follow strict rules, and, for example, that the margin of the risk model is not biased against the interests of the policyholders. They had used GLMs (generalized linear models) for 30 years, because a GLM is really simple to explain, and it was not possible to use a more complex model precisely because it could not be explained. If we improve the explainability of such models, they are likely to be more widely adopted in the industry. TrustyAI is an initiative from Red Hat that started two years ago, and its goal is to add trust-related capabilities to business automation services. TrustyAI lives in the Kogito ecosystem, together with some other services. The features that TrustyAI adds to decision services are monitoring, tracing and accountability, and explainability. Let's quickly go through them. The first one is business monitoring. Based on your model, we provide metrics that are specific to your decisions. For example, if you think about the risk score calculation, a possible metric would be the average requested amount, and another might be the average age of the applicants. Those are metrics that really depend on your model, and, this is the nice thing, they are automatically generated: we provide a generic runtime, but also a generic analysis of your model. We also have another kind of metrics, the operational metrics, which are more DevOps-oriented: they are used to check that the application is running fine and is healthy.
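The decision-dependent business metrics described above boil down to aggregations over the inputs of recorded decision requests. This is an illustrative sketch of the idea, not Kogito's actual implementation; the field names and sample values are made up.

```python
# Hypothetical log of decision requests, each with its input fields.
requests = [
    {"age": 20, "amount": 100_000},
    {"age": 35, "amount": 80_000},
    {"age": 50, "amount": 60_000},
]

def average(field, reqs):
    """Average of one input field over all recorded requests."""
    return sum(r[field] for r in reqs) / len(reqs)

avg_amount = average("amount", requests)  # average requested amount
avg_age = average("age", requests)        # average applicant age
print(avg_amount, avg_age)
```

Because the runtime knows the model's input schema, metrics like these can be generated automatically for any model, which is the point being made above.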
For example, that the response time of the requests stays within a particular range, or the number of requests over a one-minute average, and so on. The second feature is accountability and tracing: we keep track of all the executions of your model, and for each execution it is possible to drill down into each specific input and output. For each execution we provide the inputs, the outputs, the intermediate decisions, the model that was used to evaluate that particular decision, and the explanation for that execution. We provide two kinds of explanations. The first one is LIME, which stands for Local Interpretable Model-agnostic Explanations. It can be applied to any model, so it fits our purposes perfectly. It provides a feature importance chart, and the explanation is calculated as follows: a data set is built starting from the original execution, perturbing some inputs while keeping others fixed; all those executions and their results are stored, and a linear classifier is trained on the labels to understand which features most influenced the original decision. For example, in this slide we see that the number of children was a very important feature for the original execution, while the age was not. The second type of explanation is the counterfactual, and this kind of explanation aims to provide explainability by example. Let's take the mortgage approval use case: a customer requests a mortgage, and the mortgage is rejected. We might wonder: what do we have to change to get the mortgage approved? So we set the desired outcome, which is that the mortgage is approved, and we also set some constraints on the inputs. For example, I cannot change my age, of course, so that one must stay fixed. But I can change the total amount of money that I request, because maybe I cannot request $100,000, but I can request, I don't know, $80,000.
And with $80,000 it gets approved. So I let the total amount of money vary, and we see whether a counterfactual is found. Now we get to the demo, and our use case is the mortgage approval: the customer requests a mortgage, and the mortgage is rejected. We put ourselves in the shoes of the case worker, we do some analysis, and we try to understand what happened. We have to start with the DMN model, of course. We have some inputs here: the age, the monthly salary, the total assets, the total amount of money requested, and the number of installments. Those are the inputs of the risk score decision, which basically calls the PMML model we are using here, a random forest. This random forest returns a number between 0 and 100: 0 if the risk is very low, 100 if it is very high. You can see that this risk score is then the input of the mortgage approval decision, which is basically a decision table: if the risk score is below 40, the mortgage is approved; otherwise it is not. I've already exported this DMN model, I've already created a new Kogito application, and I've also deployed it within the TrustyAI infrastructure. If you would like to know more about how I created this DMN model, how I created the PMML file, how I trained the model, how I created the Kogito application, and how I deployed the TrustyAI infrastructure, I will provide all the resources in these slides. Very briefly, if you would like to reproduce this particular demonstration, just go to my GitHub account under the trusty AI or the SC West repository, and you will find all the steps. And if you would like to try to build the Kogito application by yourself, there is another repository under my GitHub account called FromDataToKogitoDemo.
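The decision structure just described, a risk score feeding a threshold-40 decision table, can be sketched in plain Python. The `risk_score` formula below is a hypothetical stand-in for the random forest in the PMML file, not the actual model from the demo; only the threshold-40 decision table is taken from the talk.

```python
def risk_score(age, monthly_salary, total_assets, amount, installments):
    """Hypothetical stand-in for the PMML random forest: 0 (low) to 100 (high)."""
    monthly_payment = amount / installments
    burden = monthly_payment / monthly_salary   # payment vs. income
    coverage = amount / total_assets            # loan vs. assets
    score = 50 * burden + 25 * coverage         # made-up weighting
    return max(0, min(100, score))

def mortgage_approval(score):
    """The DMN decision table: approved only if the risk score is below 40."""
    return score < 40

# The inputs used in the demo execution:
score = risk_score(age=20, monthly_salary=2_000, total_assets=50_000,
                   amount=100_000, installments=150)
print(round(score), mortgage_approval(score))
```

The shape matters more than the numbers here: one decision (the score) becomes the input of the next (the approval), which is exactly what the TrustyAI audit view will later show as intermediate decisions.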
And if you would like to do it step by step, I have a 45-minute presentation on KIE Live, which is our community channel. So if you'd like to have a look, and if you need any help, feel free to write in the comments or to reach out to us directly: we have an open chat on Zulip. Okay, now we can try to execute a decision using this DMN model. The Kogito application is running on localhost:8080, and here we have the Swagger UI that we can use to interact with the service. You can see that Kogito has created an endpoint called MyMortgage, because the DMN model was called MyMortgage, and we can try it out. For example, we can evaluate the DMN model using these inputs: the age is 20, the monthly salary is 2,000, the total assets are 50,000, the total amount of money is 100,000, and the number of installments is 150. We execute it, and let's have a look at the response: the risk score is 43, and the mortgage was not approved, because the risk score was above 40. Now we can go to the TrustyAI UI, which is running on localhost:1338. Let's reload the page, because of course no executions were recorded before I executed this one. You can see that we have one execution of the MyMortgage DMN model, dated one minute ago, and the execution status is "completed", because it was evaluated successfully from an operational perspective. We can drill down into the execution and look at the outcomes, the inputs, the models, and so on. Remember that we had two decisions, the risk score and the mortgage approval, so we have two outcomes, and we can see both results. Then we can drill down into the details of each outcome, selecting it with this menu. Let's look first at the risk score: we see that these were the influencing inputs.
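Influence scores like the ones shown in the TrustyAI UI come from the LIME procedure described earlier. Here is a heavily simplified sketch of the idea: perturb each input around the original execution while keeping the others fixed, and measure how much the black-box output moves. Real LIME fits a weighted linear surrogate over joint perturbations; the `black_box` model below is a hypothetical stand-in, not the demo's random forest.

```python
import random

random.seed(0)  # deterministic perturbations for reproducibility

def black_box(children, age):
    # Hypothetical model: depends strongly on 'children', barely on 'age'.
    return 10.0 * children + 0.1 * age

original = {"children": 2, "age": 30}  # the execution being explained

def influence(feature, n_samples=200, scale=1.0):
    """Average absolute output change when one feature is perturbed."""
    base = black_box(**original)
    total = 0.0
    for _ in range(n_samples):
        sample = dict(original)
        sample[feature] += random.uniform(-scale, scale)
        total += abs(black_box(**sample) - base)
    return total / n_samples

scores = {f: influence(f) for f in original}
print(scores)
```

With this stand-in model, `children` dominates `age`, mirroring the slide mentioned earlier where the number of children was a very important feature and the age was not.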
All the inputs were relevant for this decision, and that makes sense: I created the PMML model, so I know that all the features were relevant for the risk score calculation. Here we also see the inputs of this decision. If we go to the mortgage approval, you can see that the mortgage was not approved, and the input for this particular decision was the risk score; so we can see all the intermediate inputs for all the decisions, which is really interesting. We can look again at the input data of the model, and we can do a model lookup: for example, you can see here the model that was used to evaluate the decision. Now we go to the counterfactual analysis. As I said, we can set up the desired outcome. Let's leave the risk score to be adjusted automatically, because if I keep the risk score fixed, I can never get the mortgage approved: if the risk score stays equal to 43, the outcome will never change. So I leave it to be adjusted automatically with this checkbox, and since I would like to have my mortgage approved, I set the mortgage approval to true and configure it. Now I can set some constraints on the inputs. For example, I cannot change my age, so I leave it as it is, meaning that the counterfactual must have age equal to 20. Then, for the total amount of money that I request, I can set a constraint: the minimum value is zero, and the maximum value is the same as the original request, so 100,000. I apply it, and then I simply run the counterfactual search. As you can see, it runs for one minute, and it reports the best result it can find within that minute. We just wait a little bit.
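A counterfactual search of this kind can be sketched in a few lines: keep the age fixed, let the requested amount vary within its constraints, and look for the candidate closest to the original request that flips the decision. The `risk_score` formula below is a toy stand-in, not the demo's PMML model, so its flip point differs from the demo's.

```python
def risk_score(age, amount):
    return amount / 2_000 - age / 4      # toy formula, not the demo model

def approved(age, amount):
    return risk_score(age, amount) < 40  # the threshold-40 decision table

original_age, original_amount = 20, 100_000
assert not approved(original_age, original_amount)  # the rejected request

# Brute-force search downward from the original amount in $1,000 steps;
# the first approved amount found is automatically the closest one.
counterfactual = None
for amount in range(original_amount, -1, -1_000):
    if approved(original_age, amount):
        counterfactual = amount
        break

print(counterfactual)
```

TrustyAI's real search is an anytime optimization over all unconstrained inputs rather than a grid scan, which is why it reports intermediate results and keeps improving them until the timeout, as the demo shows next.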
And you see here that a new intermediate result was found, but the search will keep trying to find better results if possible. At this very moment, it tells me that if I request roughly $96,000, my mortgage will be approved. Let's see if there is a better result, although I don't think so; we just wait until the timeout, and that's it. An intermediate counterfactual of $96,000 was found first, but then $97,000 was found, which is better because it is closer to the original request. So this is the best result the search could find to get the mortgage approved. Let's try it: we go back to the Swagger UI and try with $97,000, and you can see that now the mortgage is approved. So, this was the counterfactual analysis. On this slide you can find all the resources, with some new videos, all the TrustyAI introductions, and the rest. Thank you very much for attending; I'm looking forward to your questions.