that we do in life is a process. Making a cup of tea is a process. Driving to work is a process. Brexit is a process, or at least that's what Boris Johnson tells us. I'm not sure I believe him, though. All processes, of course, can be improved. Not too hard to think how in the case of Brexit. But the question is, how can we improve other processes? And more importantly, how much is it going to cost to actually implement those improvements? Well, our next speakers believe that new, user-friendly AI solutions can help organizations understand their operations and improve their performance. Welcome, Alberto Fernandez and Marta Ranz, first-party data director and senior data scientist at MINSight. Welcome to the two of you. How are you doing this morning? Thank you very much. We're doing very well. Fantastic. Marta, are you okay? Yes, thank you. Great. Nice to see you. So whenever you're ready, take it away. Let's go. Thank you. Okay, so thank you very much, everyone, for coming. My name is Alberto Fernandez, and here with me is my colleague Marta. We are both part of the Data and Intelligence Digital Practice at MINSight. Today we would like to give you an overview of a very innovative technology for maximizing your processes and leading your organizations to operational excellence: process mining. To frame the concept, we could say that, yes, we live in a world of processes. Whether you simply follow them in your daily routine or only encounter them in specific situations, what we can all agree on is that they can always be improved. So the question is: how much is it going to cost us? We'll now look at process mining from a MINSight perspective and see where we can help with the task of improving business processes. So first of all, we would like to introduce ourselves as a company and explain what we do at MINSight.
So MINSight combines business consulting, advanced digital technologies, and cybersecurity to tackle every project from beginning to end, covering the full range of needs from strategy and ideation through to execution. And there you have some figures: we are more than 3,000 professionals and still growing. The area specifically involved in process mining projects is our artificial intelligence division. There we have a specific methodology that goes from the ideation of the use cases for a project to their prioritization in a kind of backlog, so we have a specific roadmap for each initiative, always keeping data at the center of the methodology. That's the way we approach artificial intelligence projects, and it's also the way we approach process mining projects, which is the model you are going to see today. So, talking specifically about process mining, we can explain what it is with a practical example. This is how a business process looks. In this example, we see a client thinking of buying something and then purchasing it. A purchase order is created, we send that purchase order to the vendor, the client receives the goods, and the customer pays the invoice. Okay, that's fine. But how do we know whether this is just the theory of how the process should work while the reality is quite different? Or how do we know if there are any friction points or bottlenecks in this process that could lead, for example, to a late delivery? The classical approach to answering those questions is to hold a huge number of sessions and interviews with the agents and people involved in the process. That means many people involved, slow identification of the inefficiencies, a complicated way to measure the real impact on the process, and, of course, a very subjective opinion of what is happening in the process. So what can we do?
Apply process mining. Process mining tries to solve all those issues by doing a data-oriented analysis of the process that is 100% objective: it is based on data, not on opinions or a subjective point of view. While classical approaches stay in the first or second layers that you see there, the storytelling and defining KPIs, process mining goes to the nucleus by analyzing the process data through its digital trace. You can start evaluating patterns and sequences from the data, not from opinions. Wil van der Aalst, whose name I hope I pronounced correctly, is one of the fathers of process mining, and he said that process mining bridges the gap between classical process model analysis and data-oriented analysis, because it focuses on processes while at the same time using real data. That's a good definition, but in a more technical view, at MINSight we define it as a set of analytical techniques that identify bottlenecks, inefficiencies, and improvement opportunities in business processes from the digital trace of the systems that support them. So why do we need process mining? Let's say that we decided to create a process mining division years ago because, despite all the investment organizations make in their processes, those processes are still supported by rigid technologies, fragmented organizations, and rapidly changing markets. That results in execution gaps: a huge difference between the aspirational or theoretical way the process should work and the reality of it. So, in the end, what do we need to implement a process mining project? The first thing is to define the perimeter, that is, the business process that is going to be improved. The second is to collect all the traces that we register along that business process. Then we need to break it into sub-processes and define the typology of those sub-processes, and finally we start analyzing them.
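To make the "analysis through the digital trace" idea concrete: the simplest process-discovery step is counting how often one activity directly follows another across all cases. A minimal sketch in plain Python, using an invented toy event log (the activity names are illustrative, not taken from any real system):

```python
from collections import Counter

# Toy event log: each trace is the ordered list of activities for one case.
# Activity names are invented for illustration.
event_log = [
    ["Create PO", "Send PO", "Receive Goods", "Pay Invoice"],
    ["Create PO", "Send PO", "Receive Goods", "Pay Invoice"],
    ["Create PO", "Send PO", "Stockout", "Send PO", "Receive Goods", "Pay Invoice"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

dfg = directly_follows(event_log)
print(dfg[("Create PO", "Send PO")])  # 3
print(dfg[("Stockout", "Send PO")])   # 1
```

The resulting counts are the raw material of a process map: frequent pairs become the thick "happy path" edges, while rare pairs like the stockout loop surface as deviations worth investigating.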
So let's focus on each phase in turn. The first one is selecting the perimeter. How do we choose the perimeter to isolate a business process? There is no scientific way to select one, but there are some drivers we can use to prefer one over another. The first is complexity: choose a process that we know is very complex, with a lot of stages, and not easy to follow. The second is velocity: try to choose a slow process, because these are the ones with more room for improvement. And, of course, an extremely expensive process, or one especially relevant to our business, is also a very good candidate for process mining techniques. The second element, and the most important, is the trace. The digital trace is the fuel of a process mining project: the more detailed the trace, the richer the output of our projects. Depending on the complexity level of the trace, we can go from a basic trace up to a business trace. What we strictly need is the basic one. That is, first, a unique identifier representing the primary key of the event happening in the business process, say the case ID or the event ID; and second, the activity or stage that the case ID is in at each specific time, along with the timestamp recording when that case ID arrived at that activity. And you may ask, like most of our clients: what would happen if we don't have a digital trace? There are two different situations here. The first: I don't have a structured trace. That's not a problem; we can use specific data treatment algorithms to make the trace understandable, for example natural language processing techniques. Okay.
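A minimal sketch of the basic trace just described, one row per event with case ID, activity, and timestamp (all values invented), showing how the ordered activity sequence of each case can be reconstructed from the raw events:

```python
from collections import defaultdict
from datetime import datetime

# Minimal "basic trace": one row per event with the three mandatory fields,
# case ID, activity, timestamp. All values are invented for illustration.
trace = [
    ("PO-001", "Order Placed",    datetime(2023, 5, 2, 9, 15)),
    ("PO-001", "Order Shipped",   datetime(2023, 5, 3, 14, 0)),
    ("PO-001", "Order Delivered", datetime(2023, 5, 6, 11, 30)),
    ("PO-002", "Order Placed",    datetime(2023, 5, 2, 10, 5)),
]

# Reconstruct the ordered activity sequence of each case from the raw events.
cases = defaultdict(list)
for case_id, activity, ts in sorted(trace, key=lambda e: e[2]):
    cases[case_id].append(activity)

print(cases["PO-001"])  # ['Order Placed', 'Order Shipped', 'Order Delivered']
```

Everything process mining does downstream, variants, bottlenecks, KPIs, is derived from exactly this shape of data, which is why the speakers call the trace "the fuel of the project".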
And if we don't have any trace at all, we can apply task mining techniques. That is a preliminary stage where we connect a monitoring system on the devices of the agents who operate the process, to start collecting digital traces through the clicks and actions they perform while operating it. Okay. The next step is typing the sub-processes. Once we have selected our business process and gathered its digital traces, we need to establish the sequence in which those sub-processes happen within the whole process. We can have assisted, non-assisted, or automatic sub-processes, with a huge variety of frequencies depending on whether they happen always, sometimes, or only exceptionally. This is very important to know because, in our experience, the most painful sub-processes are the assisted ones, the ones that need specific decisions from humans, because these are the ones we can optimize the most. Okay. And the last phase is the analysis. At MINSight we consider that the deliverables of a process mining project contain three different dimensions, as we call them. The first is the redesign dimension, which has to do with the need to redefine the process due to inefficiencies in time or a lack of compliance with the theoretical model, the happy-path model. The happy-path model is simply the way the process should execute in theory. The second is the resource dimension, which includes the modifications to be made to the process in terms of the management of human or virtual resources. And finally, the recommendation dimension, where the analytical power of the tools is exploited to offer recommended solutions to the lack of decisions, or the wrong decision-making, at critical points of the process. For us, these are the different scenarios in which our clients need to apply this technology.
The first is a specific process optimization project, where we choose a process because we need to maximize it and run it efficiently. Scenario two is seizing the opportunity of a technological migration: we want to migrate only what works properly and fix everything that wasn't working. For that, we apply a process mining project before the migration to see which are the pain points of the process, what we are doing well and what we are doing badly, so that we migrate only the good parts of the process. In scenario three, we set up a process mining project on the back of a strategic project, for example a business process whose scope has grown by acquiring another area of the company, or even the acquisition of a new company. Those are strategic projects, and they need a process mining project beforehand just to know whether you have to redefine your processes before taking that strategic decision. For all of these scenarios, what MINSight proposes is a specific methodology based on three steps. The first is always to measure: we gather all the traces of the process that we have been discussing and load that data into the specific tools. The second is to understand: once we have the data, we analyze the digital traces and try to understand the process gaps and all the inefficiencies we can discover. And the third, and most important, is to act: we eliminate those gaps and pain points to finally unlock the full execution capacity of the process. So, how can we do all of this? Of course, by supporting our analysis with tools like the one my colleague Marta is going to show you right now. Okay. Thank you, Marta. I cannot share my screen until you stop. Can you stop sharing, and I will share? Okay.
Thank you. So, I'm guessing that you loved what Alberto just told you and you're wondering how you can implement it. Well, we have partnerships with the main players in process mining: Celonis, with its Execution Management System, which integrates not only process mining but a whole universe around it; Minit together with UiPath, which between them cover process mining and task mining; myInvenio, which by itself covers process mining and task mining with a very user-friendly interface; and ProcessGold and Timeline PI. Although they are all good options, and that's why we have partnerships with them, we have a special focus on Celonis. And you might ask why. The Execution Management System, as I mentioned before, builds an ecosystem around process mining with, let's say, satellites around it. Some of the things Celonis can do are real-time data ingestion, thanks to its multiple pre-built connectors; process mining and task mining; planning and simulation of scenarios, so with Celonis we can simulate how the process is going to look once certain changes are made; of course, visualization; and action flows. That last one is one of the main differences from the rest of the solutions, because with the same connectors that let us read data in real time, we can execute actions within the systems in order to prevent the inefficiencies discovered during the measuring and understanding phases. This is just an example of what we are going to see in a minute, but we wanted to show you this slide so you could get used to the dashboards, and to remind you once again of the steps or phases we are going to cover. The first one is to measure, to get to know the process; then to understand the root causes of why it's not going the way we wish it would; and finally to act, to prevent these inefficiencies from happening.
And before we move to the demo, I would like to introduce the use case we are going to see. We've chosen order-to-cash because it's a very intuitive use case that I believe most of us are familiar with, whether from the customer side or from both sides. This process covers everything from the moment we, as customers, place a purchase because we want to buy something, until the moment we receive the goods, the invoice is paid, and the delivery takes place. You might think this is not that complex, but when you start to take into account the logistics and managing the stock, the complexity of the process grows exponentially. And once again, I would like to remind you of the three phases: measure, understand, and act. So, this is Celonis, the solution I mentioned before, and the first thing we are going to do is understand the use case. In this case, we count 30 million orders and four billion dollars of total order amount. What we see here are all the steps each order follows: as I mentioned before, the customer places the order, the order document and the invoice are generated, the order is sent to the logistics hub, it gets to the customer, we clear the invoice, and the delivery is completed. The first thing that should catch your attention is that we have 209 variants. What does that mean? Well, Celonis has detected these variants automatically. Just imagine the effort and time this detection would have cost if we had to do it manually, the way it was done before. If we want to see how the process looks across all the variants, we can add more and more of them, and what we see is that we no longer have the straight, perfect line we saw at first; instead, we have a branched process, and it's not that simple anymore.
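The automatic variant detection just described boils down to a simple idea: a variant is a distinct end-to-end activity sequence, and counting cases per sequence yields the variant list. A toy sketch (case IDs and activities are invented, not Celonis output):

```python
from collections import Counter

# Each case reduced to its ordered activity sequence (tuples, so they hash).
cases = {
    "A1": ("Place Order", "Ship", "Deliver", "Clear Invoice"),
    "A2": ("Place Order", "Ship", "Deliver", "Clear Invoice"),
    "A3": ("Place Order", "Stockout", "Ship", "Deliver", "Clear Invoice"),
    "A4": ("Place Order", "Cancel"),
}

# A "variant" is one distinct end-to-end path; count cases per variant.
variants = Counter(cases.values())
print(len(variants))                     # 3 distinct variants
happy_path, freq = variants.most_common(1)[0]
print(freq)                              # 2 cases follow the most common path
```

Doing this over 30 million orders and surfacing 209 variants is exactly the kind of grouping a tool automates, which is why the manual alternative, interviewing everyone involved, compares so poorly.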
And I would like to focus on these four variants because they illustrate some of the things we are going to see later in the demo. In the second variant, for instance, what we see is that after the customer placed the order, we detected a stockout. And what is the consequence of this stockout? Well, in the first variant the process took just four days, and in the second one it took seven. So, basically, we doubled the time just because of this stockout. Also, in the fourth variant, we see that after the order was placed, because the product was no longer available, we had to cancel the order. This is something we only see because we have the objective end-to-end process. But maybe we don't want this level of detail and we just want a higher-level view of the process. That's why we define KPIs. In this case, we are focusing on on-time delivery, reducing returns and cancellations, and increasing productivity, because what we see from our KPIs is that we are quite far from where we should be. The first one we are going to look at is the on-time delivery rate. Here we see that we are quite far off, and the most concerning thing is that in the last month we suffered a sharp drop. So, we want to analyze why. These are the dashboards we saw in the slide: on the left side we have the process, and on the right side we have the analysis focused on the specific KPI we are trying to study, so we can see how the analysis maps onto the process. In this case, we are going to focus on the late orders, and what we see is that the variants and the process have changed, because we have selected one group out of the whole population. What we see now is that the first variant suffers from a stockout. So, this is something we should pay attention to.
But if we want to look at it in a more general way, we can also analyze this on-time delivery rate along different dimensions, for instance product category and client channel. All the dimensions and all the granularity we have in the data allow us to dive deep into both the data and the process. In this case, we're going to focus on two specific categories, equipment and t-shirts, because they have rather different on-time delivery rates. We might wonder: what's the difference between these two products? We can make this comparison thanks to a benchmarking dashboard that allows us to contrast two different products or dimensions. The first thing we see is that for the t-shirts category, the first variant, let's say the most common one, is the nice straight line we saw at first. But for the equipment category, the first variant, with 19% of occurrences, suffers from a cancellation. We may want to exclude cancellations from this specific analysis, so we simply filter that group out and don't take it into account. And what we see, once we exclude that population, is that while the t-shirts keep the same first variant, for the equipment category the most frequent variant now suffers from a stockout. That shouldn't surprise you at this point of the demo, but it is something we have to keep an eye on. Another thing we should consider is the cancellation rate. If you remember, the executive dashboard also showed the cancellation rate and the return rate, and if we focus on the cancellation rate, we see that it's almost double what it should be. So again, we are going to zoom into this KPI. Here we have the same layout: on the left side we have the process, and on the right side we have the analysis.
And what we see as the root cause of cancellation is, first, stock issues, which is something we already knew. But if we consider the second main cause, we see that the customer doesn't need the product anymore. We've chosen this second root cause because it has a direct impact on our financial operations: if we look at the process, we see that after the delivery date change, we get a cancellation, but the order was already being sent. So we have a double consequence. The first is the money we're not making because of the cancellation; but also, as the process, the product, sorry, was already on its way, we've already invested our money. So it has a direct impact on our accounts, and we should take action. And these are the action flows I mentioned before. Thanks to these action flows, we can implement automations inside the systems. Based on the cancellation risk, which we can calculate thanks to a prediction that draws on additional contextual data, we can split the process according to this risk. If the risk is higher than a threshold, let's say 30%, we will update the order priority within SAP and then send a message through Teams. If it's lower, we will update the delivery date, send an email, and simply inform our manager. So, what have we done up to this point? We've understood the end-to-end process, we've identified the root causes, and we've acted to prevent this from happening again. Now imagine that you've implemented all these actions and we look at the results six months later: our on-time delivery rate has increased, the cancellation rate has decreased, and the first-time-right rate, or perfect order rate, has also increased. Well, I hope that you liked what we have shared with you and that you find it as interesting as we do.
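The branching action flow described here, routing each order by predicted cancellation risk against a 30% threshold, can be sketched as plain Python. The SAP, Teams, and email calls are stubbed as strings, since the real connectors are tool-specific:

```python
def action_flow(order_id, cancellation_risk, threshold=0.30):
    """Route an order by predicted cancellation risk (a 0..1 probability).
    High risk: raise its priority in the source system and alert the team.
    Low risk: push the new delivery date and notify the manager by email.
    The system calls are stubbed with strings for illustration."""
    if cancellation_risk > threshold:
        return [f"SAP: raise priority of {order_id}",
                f"Teams: alert owner of {order_id}"]
    return [f"SAP: update delivery date of {order_id}",
            f"Email: inform manager about {order_id}"]

print(action_flow("PO-42", 0.45))  # high risk: priority bump + Teams alert
print(action_flow("PO-43", 0.10))  # low risk: new date + manager email
```

The point of the "act" phase is that this logic runs over the same connectors used to ingest the trace, so the remediation happens inside the operational systems rather than in a report.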
And the last thing we would like to ask you is: how can we help? Thank you very much. Thank you. Alberto, Marta, thanks for that presentation. I've certainly enjoyed the process of listening to it. I thought it was pretty clear. We have a little bit of time for some questions, so let me fire away. Are you prepared? Are you ready? You look nervous. Don't worry, they should be fairly pain-free. So, we have a question from Ignacio, who asks: how do you deal with the visualization and analysis of processes that have a very wide range of possible steps, where the process path is very long? Who wants to take that one? Well, in our experience, we try to see all the steps in a hierarchical way, so we can zoom in and zoom out, going deep when we need to find something specific while keeping a more general view of the process for the rest. And you also have to take into account that all the dashboards you have seen are very flexible: you can do whatever you want with your processes, drilling down and zooming as Marta said, and building whatever KPIs and dashboards you really want to see, the ones you are really worried about. Okay, great. We have another question here from Maria, coming back to what you were talking about near the beginning, Alberto, about choosing a perimeter. It seems like that could take forever. How do you know that you've chosen a perimeter well? It sounds a little bit stressful. Give us a bit more guidance here, I think, is what Maria wants. Yeah, maybe it's stressful. But in the end, what you need to think is that wherever you have a process that you believe is complex or relevant for your business, just try some quick analysis. You don't have to cover the whole range of the process and take every single trace; just do a quick, let's say, assessment.
Okay, so just try to go to the points you think are pain points, of course, you won't confirm that until you do process mining, but focus on a specific area, gather some traces of those specific sub-processes, and do just a small, one- or two-week analysis and see: okay, there are a lot of pain points in this one narrow focus, so let's look at the whole process. Okay, so this is a discovery process as well. Sorry if it's stressful for you. It helps, I guess, that you just have to start somewhere. Yeah. So you said at the beginning, Alberto, that everything in life is a process. Really? Everything? Surely some things are impossible to measure and quantify, or do we really believe we can break everything down into something measurable? Of course not everything, okay, but we firmly believe that every single business process in most companies can be measured, and if it's measurable, if it's supported by data, then you can apply process mining. Fantastic. Just as a final question to you both, then: how do you think process mining will evolve in the next few years? Where do you think we could get to with this technology? Let me say that I think process mining is just the center of the technology, okay? In the end, what you can do is see the process from the very beginning to the very end and then apply a lot of automation to your processes. What Marta has shown you in the demo is just a part of what you can do; in the end, you can control the whole process and also optimize it. So you can obtain what we at MINSight call hyper-optimization: having the full perspective not only of the inefficiencies of the process, but also of the optimizations you can apply to it.
So, in the end, you can control your whole process, from discovery all the way to specific actions that run the process in a touchless way, let's say. Marta, do you have anything you wish to add? Yeah, just to complement that: what we are seeing is that all the solutions are more and more integrated. So it's no longer just process mining or task mining, but a whole solution, and that's the power of this kind of tool. Thank you very much to both of you. That's been a fascinating talk. I hope you enjoy the rest of the conference, and thank you so much for your presentation. Thanks very much, guys.