If you wanted to load in a workflow, the only thing that would change is that the resource type would be DRF for a flow. So we've got a generic API that allows you to load and build lots of different types of knowledge. It's a simple way to do the validation, and then you just build up the knowledge base. The knowledge base is the compiled, executable form of all this knowledge. One of the things that's different about us is that you do not choose the engine you execute on. You build your knowledge base and then execute on top of that. So whether you execute with the flow engine or the rule engine, it's all taken care of invisibly. This is the Spring integration. This is the XML equivalent of what you just saw there. What this is doing is saying you have resources of a rule flow type from that particular URL. It's also adding in a decision table. It can also add in a rule flow. So it builds up a knowledge base of all these different types of knowledge, all compiled into one executable form. We then build sessions from these. The sessions are the short-term way to operate on that. You can think of it in process terms: the process definition and the process instance. In rules terms, it's the rule base and the working memory. We use the word "session" as a very generic way of working with this. Once you have everything within Spring, you can then get Camel integration for out-of-the-box services. Camel allows you to chain together what they call processors. You can connect those up to web services, HTTP, and then you can channel those into your ksession-1. It allows you to get declarative sessions out of the box — rules, workflow, whatever. You just specify the type of the incoming data transformation. It's that simple. I'm running behind time already, so let's go on to a bit more. The example you saw was how you do the definitions, and then you build the definitions into something executable. This is a stateless session. Stateless means it is something that executes once and is thrown away.
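As a rough sketch of the Spring wiring described above — the element and attribute names here are from memory of the Drools 5 drools-spring schema, so treat them as illustrative rather than exact:

```xml
<!-- Illustrative only: a knowledge base built from several resource types -->
<drools:kbase id="kbase1">
  <drools:resources>
    <drools:resource type="DRL"    source="classpath:rules/validation.drl"/>
    <drools:resource type="DTABLE" source="classpath:rules/pricing.xls"/>
    <drools:resource type="DRF"    source="classpath:flows/approval.rf"/>
  </drools:resources>
</drools:kbase>

<!-- Sessions are the short-term way to operate on the compiled knowledge -->
<drools:ksession id="ksession1" type="stateless" kbase="kbase1"/>
```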
It's like a function. You give it the data, it executes, and when it's finished executing, it returns. So it's very simple. You get the session, you get your data. We're just doing a little bit of setting up here, just checking the data, before we execute it on our session — be that a working memory or a process instance. We execute it on the session, that returns, and now we can check the values. This is a very simple way — quite common in validation, mortgage applications, insurance — of processing your data. So let's go on to something a bit more complex now. You have a room, you've got sprinklers, you have fires and alarms. This is for building your definitions for a fire control system. So you have the first rule: when there is a fire, turn the sprinklers on. So there's a fire. These are bindings, by the way — they give you a reference to a field, a reference to the object. And we make a join between the fire and the sprinkler. I'll come on to more of this later on. And on is false because the sprinkler's not on. We turn the sprinkler on, with a little bit of notification. When the fire is gone, turn off the sprinkler. So what we have now is the difference there: on is false versus on is true. And we're checking here, with a not, that the fire does not exist — then we turn the sprinkler off. I'll come on to a bit more for those who are a bit confused about the bindings and stuff in a minute. So, not only do we want to turn sprinklers on and off for a room, we want to be able to raise alarms. Now, we don't want to have an alarm for every room there's a fire in. So the same way you saw a not to say when there's no fire, we have exists. So when there is one or more fires, raise an alarm — but only do it the first time there's a fire. And the same the other way: when there are no more fires in the building, turn the alarm off, retract it. So it's a way of being able to work with collections. It's called first-order logic, for anyone into the academic terms.
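A sketch of the four rules just described, in DRL — the fact classes (Room, Fire, Sprinkler, Alarm) and their fields are assumed for illustration:

```drl
rule "When there is a fire turn on the sprinkler"
when
    Fire( $room : room )                              // binding $room to the fire's room
    $sprinkler : Sprinkler( room == $room, on == false )   // the join with the sprinkler
then
    modify( $sprinkler ) { setOn( true ) }
    System.out.println( "Turn on the sprinkler for room " + $room.getName() );
end

rule "When the fire is gone turn off the sprinkler"
when
    $room : Room()
    $sprinkler : Sprinkler( room == $room, on == true )
    not Fire( room == $room )                         // the fire does not exist
then
    modify( $sprinkler ) { setOn( false ) }
    System.out.println( "Turn off the sprinkler for room " + $room.getName() );
end

rule "Raise the alarm when we have one or more fires"
when
    exists Fire()                                     // fires only once, however many fires
then
    insert( new Alarm() );
end

rule "Cancel the alarm when all the fires have gone"
when
    not Fire()
    $alarm : Alarm()
then
    retract( $alarm );
end
```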
And finally, when there are no alarms and no sprinkler is on, say everything is okay. This would just, basically, for these different rooms, put them into the working memory. I will put these slides online; I don't have time to go through this line by line. So it basically puts all the data into the session, into the working memory, and it calls fireAllRules. That's key: when you put stuff into the engine, especially on the rule side, the calculation is done there, but none of the consequences, none of the actions, are executed until you tell it to, here. It basically runs through and says everything is okay. If we then create fires — we basically tell it there is a fire in the kitchen and a fire in the office, call fireAllRules — it raises the alarm, turns the sprinkler on for the kitchen, turns the sprinkler on for the office. If we then remove the fire from the kitchen, remove the fire from the office and tell it to go again: turn the sprinkler off for the office, turn the sprinkler off for the kitchen, cancel the alarm, everything is okay. So what you have there is, first of all, an example of a validation system, as in mortgage or insurance type applications. Then you have a monitoring type system based on the alarms. So let's go a little bit deeper. Account, cash flow, accounting period. I told you this was going to be quick, by the way, so you're going to have to concentrate — there's a lot to do. What we're going to do is apply debits and credits for an accounting period for a given account. These are all simple examples. I'm going to start with SQL because it'll help you understand how the engine actually works. So: select star from account, cash flow and accounting period. These are your tables, this is the data. We have a join where the account number of the cash flow is the account number of the account. This is standard SQL; everyone here should be able to understand this.
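Staying with the fire example for a moment, the run-through just described might look like this in Java against the Drools 5 API — a sketch only, with fact classes like Room, Fire and Sprinkler assumed:

```java
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

// Put the rooms and their sprinklers into the working memory
for ( Room room : rooms ) {
    ksession.insert( room );
    ksession.insert( new Sprinkler( room ) );
}
ksession.fireAllRules();            // nothing burning: "Everything is ok"

// Calculation happens on insert, but no consequence runs until fireAllRules()
FactHandle kitchenFire = ksession.insert( new Fire( kitchen ) );
FactHandle officeFire  = ksession.insert( new Fire( office ) );
ksession.fireAllRules();            // raise the alarm, both sprinklers on

ksession.retract( kitchenFire );    // the fires go out...
ksession.retract( officeFire );
ksession.fireAllRules();            // sprinklers off, alarm cancelled, ok again
```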
We're checking for when this is a credit, and we've got a date range to correlate the select to a given quarter. What we're doing here is creating a view — this is effectively a view. We're going to create a view that gives us the cash flow credits for that quarter, and a corresponding view for the cash flow debits for that quarter. As you know with views, rows materialise in the view based upon the data that's in the tables. So if we had these tables populated with these rows of data, these views would have two rows there and one row there — two credits and one debit. If we were to have triggers on the views, then each materialised row is going to execute its trigger. This one is just going to increase the balance; this one is just going to decrease the balance. We end up with a balance of minus 25. Actually, you couldn't do this on a database, because databases have a problem called mutating tables: you can't change tables you're selecting from. But in this imaginary world, this would work. So what is a rule? You've already seen the format. You have rule and a name — quotes optional; if you don't have spaces, you don't need quotes. Attributes, which control the rule's execution behaviour. You have the left-hand side, which is the when — when this happens — and you have the then side, which is the right-hand side: when this happens, then do this, do these actions. So there's the left-hand side, where the person's name equals Mark, and there's the right-hand side: print "hello Mark". What's the difference between this and a method? Methods must be called directly; you pass them specific instances. It is imperative — do this now, do this now with this data. It is a command. Rules can never be called directly. With a view, you cannot put data into the view — you cannot say, I want this data in this view, in this row. You have to put it into the table, and it materialises into the view. This is exactly the same thing.
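The imaginary SQL version of the accounting example might look something like this — table and column names are assumed, and, as noted above, the triggers-on-views part would not actually work on a real database because of the mutating-table problem:

```sql
-- A view materialising the cash flow credits for the quarter
CREATE VIEW credit_view AS
SELECT *
FROM   account acc, cash_flow cf, accounting_period ap
WHERE  cf.account_no = acc.account_no          -- the join
AND    cf.type = 'CREDIT'
AND    cf.date >= ap.start_date
AND    cf.date <= ap.end_date;                 -- the date range for the quarter

-- The corresponding view for the debits
CREATE VIEW debit_view AS
SELECT *
FROM   account acc, cash_flow cf, accounting_period ap
WHERE  cf.account_no = acc.account_no
AND    cf.type = 'DEBIT'
AND    cf.date >= ap.start_date
AND    cf.date <= ap.end_date;

-- Imaginary triggers: for each row that materialises in a view,
-- increase the balance (credit row) or decrease it (debit row).
```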
You put data into the working memory and it materialises into the view, into the rule. So a rule is like a view: you can't pass it specific instances — you put data into the working memory and it materialises into the rule. So we have a thing called patterns. A pattern is on an object type. You have tables in SQL, you have classes in Java; in rule engines, you have object types. This is a very simple one. You have the object type, which forms a pattern. It is made of one or more field constraints, and a field constraint consists of a field name and a restriction on its value. This is a very simple pattern, but that is the foundation of all this. So now we're going to take this simple SQL and convert it into a rule. Everyone who understands how databases work, how views work, how views create cross products when you have joins — standard SQL theory — should be able to understand the rule engine. So this is doing the credits. This is saying select star from accounting period, select star from account. The difference is that rule engines are a superset of SQL — they are more powerful than SQL. Some of that power comes from bindings: ap, acc. If you were to select a row from your table, a binding lets you create a variable which points to that row, or to a column in that row. That is what this is doing. It says select star from accounting period and binds ap to it. Then it says select star from account, and it creates a binding both on the whole object and on a field. Then it's a select star from cash flow. It has a literal constraint: it is of type credit. And there is our first join: the cash flow's account number equals — see, it says $accountNumber, so we can go up there; that account number is bound on this field. So that is like saying where — it is exactly the same as this here. The execution semantics are exactly the same as SQL.
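To make the terminology concrete, here is a minimal pattern sketch (names assumed): the object type forms the pattern, each field constraint is a field name plus a restriction, and the bindings give you variables you can join on later.

```drl
// Object type Person forms the pattern; name == "Mark" is a field
// constraint (field name + restriction). $p binds the whole object,
// $age binds a single field.
$p : Person( name == "Mark", $age : age )
```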
There is our date range: where date is greater than ap.start and date is less than ap.end — the same way of doing those two things there. So here we have our credit rule and our debit rule. We have the data; it produces two rows there, one row there. In a rule engine, we have a name for the materialised row: we call it an activation. An activation is the rule plus the matching data. So here we have two activations, here we have one activation. Those activations will fire, each executing its consequence, which increases the balance or decreases the balance. Balance minus 25 — all simple stuff. I'm just going to show you a few more patterns. I'm not going to go over these in detail, but this is a literal constraint, a variable constraint, multiple restrictions — and this is just combining the two. It gets a little more complicated because you can start to do expressions. These will be on the slides; you can come back to them later. And it gets further complex still. Believe it or not, this is something we're quite proud of, because whether it's Jess, or CLIPS, or ILOG — no one has a rule engine that is as expressive as this, that can have all these ands and ors working with nested accessors, maps and arrays. For someone who has worked with Java or JavaScript, those systems are still back in the 1990s. One of the things we're doing with Drools is dragging these systems into something that Java developers can work with, that Groovy developers can work with. It does not feel like it's built on Lisp, like the other systems are. Very quickly: a rule engine is not necessarily a production rule system — you can have a little JavaScript engine that does validation and call that a rule engine. To classify us: we are an expert system. You have different types of expert systems; we are a production rule expert system. We're actually working on becoming a hybrid one.
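Putting the pieces together, the credit rule just walked through might be sketched like this in DRL — class and field names are assumed; the debit rule is identical except for type == CashFlowType.DEBIT and a subtraction in the consequence:

```drl
rule "increase balance for credits"
when
    ap  : AccountPeriod()                          // binding ap to the period
    acc : Account( $accountNo : accountNo )        // bind the object and a field
    CashFlow( type == CashFlowType.CREDIT,         // literal constraint
              accountNo == $accountNo,             // the join, as in the SQL where clause
              date >= ap.start && date <= ap.end,  // the date range for the quarter
              $amount : amount )
then
    acc.balance += $amount;                        // the "trigger" on the materialised row
end
```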
We'd have both a production rule system and a backward chaining system, Prolog-like — but that is our research side. You have a production memory — that is your rules, your views. You have your working memory — that is like your tables. You insert, update, retract into your tables. Then there is a process that takes the tables and rules and combines them together to create an agenda, which I'll show you in a second. That is basically how a view would work to materialise the rows in the views from the tables. So: table, table, table; object type, object type, object type. Two views, two rules. This is the bit that makes us a little bit different. Imagine you could have one view that aggregates all your other views — a list of the rows of all my views. That is what this does. It basically says: I've got two rows in my credits rule, one row in my debits rule — two activations, one activation. The agenda will have three activations on it, in the order in which they fire. So here — remember I said before that attributes can control the execution behaviour — I've set the salience to minus 50. Salience is a form of priority; the default is zero. So all the default rules for credits and debits are on zero. They are what's called in conflict: because they are all at the same level of priority, they can be executed in an arbitrary order. With rules, the more arbitrary you can allow execution to be, the better — you do not want imperative controls in your code, you want to try and limit that. Anyway, this one here — we clearly can't do this until the calculations are done, so we have to give it some control. By giving it minus 50 we say: do those three rules first, I don't care in which order those three rules happen, just as long as those three happen together; then you can print the balance. So that's conflict resolution with salience.
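The salience part can be sketched as follows — the credit and debit rules stay at the default salience of zero and may fire in any order; the print rule gets minus 50 so it only fires once the calculations are done (names assumed):

```drl
// Default salience is 0, so the calculation rules are "in conflict"
// with each other and fire in arbitrary order — which is fine.

rule "print balance for AccountPeriod"
    salience -50          // lower priority: fires after all the 0-salience rules
when
    ap  : AccountPeriod()
    acc : Account()
then
    System.out.println( acc.accountNo + " : " + acc.balance );
end
```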
Very quickly: to get over the mutating tables issue, we have a two-phase system. I'm not going to go into this too much, but it means that from Java land you populate the tables — you insert, modify, retract — and that builds up the agenda of all your possible activations. Then it goes to the agenda, and based upon the conflict resolution strategy you just saw, pops the first one off and goes into the consequence to evaluate it. While it's in the consequence, the right-hand side, it's back into the working memory action phase, so it can insert, modify and retract again. That means if I was to put some data in and it created 100 activations, and the first one popped off the agenda was evaluated and changed some data such that 99 of those other activations were no longer true, they'd actually be taken off the agenda. That's what a rule engine does to get over the mutating table issue and to make sure that no rule fires when it's no longer true. So just because something is true and activates doesn't mean it necessarily fires — it has to still be true when it attempts to fire. OK. So: rule flow. This is another way of controlling execution. Quite simply, it's a way of saying when a rule is allowed to fire — a declarative format to give procedural control over when rules are allowed to fire. A rule on its own says: when this is true, do this. It has no idea of when, or now, or order. So you can think of this as rule orchestration. We add a new attribute, ruleflow-group. That means this rule might be true but it still can't fire — until the process engine gets to this point here, it can't fire. When the process gets there, it allows it to fire. So it allows the workflow to control the rules. We have many more ways in which rules and processes interact — just too much for today.
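A sketch of the ruleflow-group attribute just described — the rules stay declarative, but each group only becomes eligible to fire when the process reaches the rule-flow group node of the same name (names assumed):

```drl
rule "increase balance for credits"
    ruleflow-group "calculation"     // can't fire until the flow reaches "calculation"
when
    acc : Account( $accountNo : accountNo )
    CashFlow( type == CashFlowType.CREDIT,
              accountNo == $accountNo, $amount : amount )
then
    acc.balance += $amount;
end

rule "print balance"
    ruleflow-group "report"          // the flow sequences "calculation" before "report"
when
    acc : Account()
then
    System.out.println( acc.accountNo + " : " + acc.balance );
end
```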
So, just to highlight, in there there's a rule flow group. Here's another example that combines processes with rules. And you can see, side by side, processes and rules there — because we actually now have rules controlling the processes as well. And we have all sorts of stuff we won't even get to in the time we have. One of the things we say with Drools is that we're not a rule engine anymore, or a process engine, because rules and processes and event processing and semantic ontologies can't live on their own. Each vendor obviously starts from a base and tries to grow from that, and everything else ends up quite weak. So, about three years ago we stepped back and said: no, we're going to have rules, processes, event processing, all as first-class citizens. We need to define a generic API to make these work together, so that one isn't perceived as stronger than the others. And we want to make sure that we can work with a range of modelling techniques. So you have the typical SOA approach, which is decision services, where your rule engine and your process engine are completely decoupled and one calls out to the other. Typically the rule engine there is in the stateless form you saw earlier on: validation, calculation. The reason SOA decision services are such a big thing is that people couldn't get the more complex modelling working. That doesn't mean there isn't value in the complex modelling; it just means the systems haven't been good enough to make it possible. So Drools allows you to go right back to the other end as well, to work with very tightly coupled rules and processes — and we allow you to work at any point on that spectrum. This is what we call behavioural modelling. It's about taking an application, looking at the behaviour, and modelling it. We do not make you go process-orientated or rules-orientated. You use the software to solve the problem in the way in which you want to solve it.
We recognise that when you're working with rules, processes, event processing, they all follow the same life cycle. You design, you simulate, you test, you integrate, you collaborate, you deploy, you execute, you audit, you have human task interaction management. Human task interaction is not just a process thing — it's also a rules thing. So you have to start thinking about things in a different way. At the moment, because BPM is the big baby, everyone tends to just push everything to BPM, and rules are an afterthought. You have to think differently to get the best out of these systems. We have fully integrated debugging and auditing. What that means is that while the rules and the processes are executing, we capture all the events in the system — everything is instrumented to emit events. When you start a process, when you enter a node, when you insert data into the working memory, when you fire a rule — if you collect all of those events, you can create a correlated log of causality. Causality means what caused something to happen: this rule does something and it causes this process to start; this process executes and it causes these rules to fire. If you're working with Sarbanes-Oxley — if you're working with anything where companies have dedicated people just to correlate reports, and I've had some of those jobs — what we're saying is: if you use our software, you get this out of the box, you can fire that guy, and he's gone. Or you can make him do something useful, which would be nicer. So here you can see the rules — I think I've got a little zoom in, there we go. It's basically showing the correlation as well. Here the rule is activated — the row is materialised — and here it's fired. And because it's part of a rule flow group, the rule flow group has to activate in order for the rule to fire.
Once you start to see the correlation between the rules and the processes — one of the big things we push is domain-specific processes. Because Drools is all about declarative modelling, all about behavioural modelling, and BPMN — even BPMN2 — does not cut it. One of our areas, one of our growth areas, is the medical domain. We need to build processes that have a language, semantics, dialogues that the technician, the skilled person, understands. That means the icons on the left-hand side need to be icons they know and words they know. Likewise, when they drag one across and open a dialogue box, it has to contain things that they know. So Drools makes it trivial to design domain-specific processes. Literally, you can be up and going in 50 minutes: it's a META-INF file, you drop it in, and it just says this is the icon, this is the dialogue box, and it appears in Eclipse. It's trivial. We have a number of what we call work items that come with Drools as examples. So here we're taking a series of Apache Commons stuff for automation. This will find files on a disk; it does a for-each composite node; for each file it finds, it will log it, then it runs some rules to check — it might be checking dates or whatever it needs to do to decide what happens. It then uses an archive work item, so the result of all this goes into an archive; we then copy it, we email it, and we end. And if you were to click on one of these, the associated dialogue box would pop up. This is incredibly important — and it's good for you developers as well, because it keeps you in a job. It means you're going to work with a business analyst and you're going to say: how would you like to capture your knowledge, your business know-how? What is the codification of the business knowledge of your company? What do you want your processes to say?
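For flavour, a domain-specific work item in Drools 5 is declared in a small MVEL file dropped under META-INF. The shape below is from memory, so treat the structure and names as illustrative rather than exact:

```mvel
import org.drools.process.core.datatype.impl.type.StringDataType;
[
  [
    "name"        : "Email",               // the work item's identifier
    "displayName" : "Email",               // the word the analyst sees
    "icon"        : "icons/email.gif",     // the icon they are trained on
    "parameters"  : [                      // fields shown in the dialogue box
      "To"      : new StringDataType(),
      "Subject" : new StringDataType(),
      "Body"    : new StringDataType()
    ]
  ]
]
```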
It's really important, because it means we now give you the tools to work hand in hand, to tailor your process development environment for your business analyst. And it means they get a tool which allows them to work in a way they can understand — because even BPMN2 is so low level, and you'll understand when it comes to the next slide. If you are a medical technician, you'll understand: I need to take a blood pressure, I need to go and administer some medication, I need to notify the GP, I need to do a follow-up — and you'll understand the order and the orchestration of these. Most of these workflows are not complex, and the technicians can understand the procedural nature of them. But they can't start with gateways and splits and all this really complex stuff, so they need icons that they are trained on. They will tell the developer: this is what I want my workflow to look like, this is what you need to develop so I can get my job done. And now they can do something, and they can do it quicker, they can do it more efficiently, because it's something they're trained on. If you develop these domain-specific processes, it allows you to create intelligent methodologies, which means when you have a new person come on, they don't have to learn BPMN2 — they just learn the domain-specific workflows themselves.
It's an incredibly powerful tool, and it's not something that anyone addresses in the market at the moment, and I'd say it's a big growth area for us. Big organisations do their own workflow tools because they can't use the commercial ones — they can't pay for something that's this flexible — and when they build this themselves it costs a lot of money. So to have something that's generic, that's powerful, that comes with the rules integration and everything else we do — they absolutely love it. And we've got an announcement due in October, where the US Navy healthcare people saw this and they were just like, we have to have this, and they invested massively into the project, allowing us to hire 10 people to work on this side of things. So it's incredibly important. Anyway, back into the hard stuff — I only wanted to touch on processes. So, you saw not earlier on. These are called conditional elements: you have the pattern, and then you have this little node here; we just call them conditional elements. Not: when there are no red buses in my system, in my session, in my working memory. Exists: when there's one or more red buses. Forall: when all my buses are red, or when all my buses that have two floors are red. So they're just a way to look at sets of data, to make decisions based on sets of data. Typically, rule engines, coming from their Lisp lineage, do not work well with Java classes. If you work with Jess, if you work with ILOG, what they do is they actually have an internal representation — basically lists in rows — and you map your classes onto those, and it makes it almost impossible to work with nested objects. So with those you work with strings and, you know, numbers; you can't start to work with more complex object graphs, as they're called. So Drools introduces this thing called from. From allows you to say that a pattern's data comes from somewhere else — it's a type of filter; that's what it's
doing: it's filtering data coming through it, so the data doesn't have to come from the working memory — it can come from anywhere. So why can't I just say: I've got a person in the working memory, match him, I've got a binding to him, and now I can filter the results of this expression? This is an MVEL expression; we just evaluate it, it returns a collection, and the results of that collection are iterated through here. It's a bit like a correlated subquery — in Oracle you can actually do joins with stuff that's not in your current database. It's a bit like that: you're doing joins with stuff that's not in the working memory. And this gets more powerful, because whereas before we were joining with something that was in the working memory, we also have what we call globals. A global is something that's not in the working memory — it's like a service variable that's available. Here we have a Hibernate session that's just globally available. And it means: I have zip codes in my system; I can't put all my people in my system, it's too big, but I can put the zip codes in. So what I want to do is go through the system zip code by zip code, and for each zip code pull out the people and process them locally within that rule, just for that zip code. So it's a Hibernate named query; it sets the parameters — see that dot on the zip code there, that's joined from there. So let's say select star from zip code: for each row that comes back into the materialised view, it's then going to do a correlated, nested query, and we're just going to filter all the people. Very, very simple, but more powerful. So this is collect: it will collect everything it can find based on the pattern you give it. Here I'm saying collect my red buses; it returns a list, and you can then evaluate the size of that list. Then you have accumulate. An accumulate allows you to do aggregations. This is fully pluggable — we come out of the box with some, average and total, and we have a number of these, but these are fully
pluggable. So FedEx are using this to do geospatial analysis. They have a system — you can get it in my demos — which basically analyses all of their environments and uses aggregations to reason about their vehicles: is the rate of climb of the vehicle too much, because certain things are very sensitive to their environment and can't go up too fast; is the temperature too hot. So they use these aggregations to continually summarise the environment, and they hook this into other subsystems to do those continuous calculations. This is part of Drools Fusion, which I'll come on to in a minute. So this is saying: over all my red buses, get a reference to the takings and put them into this function. The result of this function is a derived value; return it. It's a number in this case, but it doesn't have to be a number. So it allows me to look at a set of data and perform calculations on it. This starts to take us into the territory of functional programming — actually, this from keyword allows us to take a production rule system and extend it towards functional programming, allowing these conditional elements to be nested and chained. So this is quite a complex example showing how we can combine stuff that's in the working memory with stuff that's from a Hibernate named query, and an accumulation, using a more functional approach to programming. I won't go into the details now — I'm just saying it's there. So, one of the things that often happens is that "when this is true, do this" is not enough. When this is true, do this — but I want it to do it every hour, because it's an alarm; or I might want it to do it every month. So we introduced this concept called timers. Drools had this thing called duration: duration was a way of delaying the firing. It's saying, when this is true, wait 30 seconds before actually doing it — and if it stops being true before it gets to those 30 seconds, then stop what you're doing. So what
we're saying here is: when the light is on, after a minute and 30 seconds, turn the light off. So it's a timer that will turn the light off after a minute and 30 seconds if the light is still on. The interval-based semantics are based on the JDK Timer detail: you have the initial delay and then the iterations after that. So this one is just the iteration, and this one is the equivalent if you want an initial delay before the continual iteration starts. So this basically gives you a way to make a rule fire continuously, based on delays and iteration periods. Of course, the next thing is: if I can do that, why can't I combine this with cron scheduling — when this rule is true, fire based upon a given cron definition? I told you I like all this stuff. Does everyone know what cron is? Cron is just a way of defining scheduling — it's incredibly complex, very difficult, but very powerful. So we can now include crons: when this rule is true, based upon the cron definition in here, it will fire at regular intervals. We can combine this further. If I allow a rule to fire at regular intervals, we also need to be able to say when it can fire at all. Just because I've got an alarm and I want this alarm to fire every hour — really, that's only true on weekdays. So this is a thing that belongs in the rule engine, not just the process engines. This is allowing me to say: only on weekdays. And these calendars are pluggable, based upon Quartz calendars — you give a Quartz calendar to Drools. And it's pretty impressive, because alongside that rule I can have an opposite rule that's mutually exclusive — mutually exclusive because its calendar is weekends — with a different timer: the interval there is now four hours. So you have two rules that are mutually exclusive based upon the calendar definitions. You can use the standard Quartz ones
So now you have timers, both interval- and cron-based, and calendaring, all within the rule engine. And because you have a process engine fully integrated, you can also use these on event wait states. Then, unlike the typical systems, which are process-oriented, not only are you saying "do this at the weekend", you're giving it conditional information about when to do it as well. So it creates a much more powerful environment. Is everyone's brain hurting? I said this would be intense. How are we doing? OK, we should just make it; we're going to knock it up a notch now. I'm going to try and teach a bit more about rule design. You have a thing called truth maintenance, TMS, and inference. Inference is a scary word, and you have companies like Corticon going around saying you don't need inference, that it's too scary and useless, because their tools don't do it, and they try to use really complicated examples to show why inference is something you don't need. Actually, inference is very, very simple and very, very useful, and if you explain it right, it becomes quite obvious and it's not a scary word anymore. So let's go through an example about issuing bus passes. Children have bus passes and adults have bus passes. What we're saying here is: when I have a person whose age is under 16, create a child bus pass for them; when I have a person who is 16 or older, give them an adult bus pass. This couples the logic. Imagine I've got a company and the company is split in two: one department chooses the policy of who is a child and who is not, and another department issues the bus passes. The people that issue the bus passes don't necessarily care whether a person is 15 or 16; they're just told: issue bus passes for children, issue bus passes for adults. So the decision making sits in two different departments, and what I've done here is that the department which issues the bus passes is now tightly coupled to the decision
making of when someone is 16 or over. Not only is it tightly coupled; it means they are exposed to information they should not have to care about, and it makes the change process complex. The department that decides the policy, 16 or over versus under 16, has to issue change requests; those go to the other department, which then has to apply them. It makes for a very brittle process for passing knowledge through your company and for changing policy decisions. And what happens when a child stops being under 16? So it's monolithic, because it brings all your logic into one place. It is leaky, because the person making one decision sees information they should not need to care about. It is brittle, because, again, what happens when a person stops being under 16? So how do we get over this? Truth maintenance has a thing called logical insertions. A logical insertion is a way of saying: not only are we going to create something and put it into the working memory, its lifetime will depend upon a condition; only while that condition is true will it exist. So only while this person is less than 16 will we have this thing that says he is a child. This "child" is what we call an inference. It is a fact that represents the result of a decision; it is a fact that represents "this person is under 16". It means we have an object which encapsulates some decision-making process, both by giving it semantic intent (he's a child) and by giving it encapsulation, because you don't have to know what makes a child; you just know that this person is a child. So it gives encapsulation, it gives decoupling, and with logical insertions this inference we've made, that this person is a child, will only exist while the person is under 16. As soon as that person stops being under 16, that fact is
automatically deleted, removed from the system. Then we have a rule that is mutually exclusive (otherwise it can get a bit weird) that says: when the person is 16 or over, we logically insert that he is an adult. So we create an object which encapsulates a decision, that creates an inference, and then we use logical insertion to maintain the truth of that inference. This bit of information that we've discovered, that we've decided on, we always know is true, based upon some other logic. So now we have one department saying when someone is a child and when someone is an adult; that's their responsibility, and they publish the rules which create these inferences. We have another department responsible for issuing bus passes. Now this looks much better: they say, when a person is a child, give them a child bus pass. We've got decoupling, we've got encapsulation, we've removed the leakiness. Much nicer. Same here: when a person is an adult, issue an adult bus pass. We can combine this further, because when we give out the child bus pass, we obviously want to request it back eventually. So now we can say: when there is no longer a child bus pass for this person, issue a request for its return. This all happens automatically. You logically insert a child bus pass; that child bus pass is based upon the inference, made by another department's rules, that the person is less than 16. When the person turns 16, the inference is automatically retracted, and because that inference is automatically retracted, the child bus pass is automatically retracted as well. These inferences form a chain of truth, and if that truth breaks at any point, everything below that point is automatically retracted in a cascade. I know this is a little bit complicated; if you just get a bit of it, that's great. You've all done very well; we're almost there. So anyway, this is basically saying that when the retraction cascades back to this specific point, issue the request. It's all done automatically by the system.
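A sketch of the bus-pass example in DRL, with illustrative fact types (Person, IsChild, ChildBusPass); insertLogical is what ties each fact's lifetime to the condition that created it:

```drl
// Policy department: publishes who counts as a child.
// The IsChild fact exists only while the condition holds;
// when the person turns 16 it is retracted automatically.
rule "Is a child"
when
    $p : Person( age < 16 )
then
    insertLogical( new IsChild( $p ) );
end

// Bus-pass department: depends only on the inference,
// not on the policy behind it.
rule "Issue child bus pass"
when
    IsChild( $p : person )
then
    insertLogical( new ChildBusPass( $p ) );
end

// When the pass is logically retracted (the person is no
// longer a child), request its return.
rule "Request pass back"
when
    $p : Person( )
    not ChildBusPass( person == $p )
then
    requestPassReturn( $p );   // illustrative helper
end
```

Because the bus pass was itself logically inserted on top of IsChild, the retraction cascades: person turns 16, IsChild goes, ChildBusPass goes, and the return request fires, all automatically.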
So truth maintenance and inference help with knowledge responsibilities. They encapsulate your knowledge, and they provide semantic abstractions for those encapsulations, because if a person is 16, or less than 16, what does that mean? I don't know what that means; it means different things to different people. If it was the age of consent, that's different in different countries. So it allows you to encapsulate and to give things semantic meaning, which is quite important: it makes things readable, it makes them more maintainable, and the truth maintenance helps add integrity and robustness as well. I hope that's a little smattering of rule-based theory. This is the last bit: Drools Fusion. Drools Fusion extends our existing rule language with event processing capabilities, or complex event processing capabilities. There are a number of different engines that have evolved for this, called query-based engines. Query-based engines are typically based upon SQL; they look for changes in a stream of data, and they use a design pattern called event-condition-action: basically, they emit an event based upon something they've found. For me this is actually quite limited, and I do another talk, when I have more time, where I show comparisons between Drools and Esper. Not only do you have to learn two different languages for doing the same thing, where one language is very much a subset of the other; these query-based systems do not support side effects (what happens if the information you're querying changes? rule engines handle that), and they don't have all the powerful features that rule engines do, like truth maintenance. And not only that, you have to learn two different APIs, two different ways of building things. It means that you have to learn everything two or three times: if you were to use a process engine, a rule engine and an event processing engine, you have to learn the API for each, you have to learn
the language for each, the annoyances of each one. And as you know, you can learn one thing quite efficiently; by the time you get to the next one, you are less efficient. So you've got to spend a week learning how to redo the same thing, how to build something, how to deploy something, how to check the errors, in all three different systems. And as you get older: in your 20s you don't mind doing it; when you get to your 30s you get sick and tired of it. I don't want to deal with this; it's not my job, it's not interesting, it's not fun anymore. It's like when I was 24 I would install a different Linux distribution every week; now I'd hate that. (And no, I don't use Windows; I use Linux on this. This is not a Windows desktop.) Anyway, we haven't much time. Drools uses a full rule-based approach to complex event processing; TIBCO do as well. Many of the vendors who are query-based put up a lot of FUD about why you need their special system and what their special system can do; many of them try to put down rule engines. It's all just bollocks. It's simply that the existing rule engines had not yet been designed to do that; it does not mean they can't do it. If you take Jess and start trying to do complex event processing on it, of course it's going to blow up. That's not because Jess could never do this; there's no reason why these systems can't be extended. So we've taken many of the things that are needed to address complex event processing and made them possible in a rule engine. OK, so first of all, a rule engine has a single point of insertion, which creates a bottleneck: if you've got 10 different queues of streaming data coming in, what you don't want is to have to funnel them all through one point. The other problem, they'll say, is that a working memory sees everything, and if everything has to be evaluated, it can be quite slow and cumbersome. So Drools creates these things called entry points. An entry point is a way to partition a knowledge
base, and these are named. Typically, each stream of data you have will become an entry point. So here we have the home broker stream; that will be connected up to JMS or to HTTP or whatever you want, and it'll have a little producer-consumer which performs the insert into this named entry point. That's the API side. On the language side, we can now have a pattern from an entry point. What that means is that this pattern doesn't filter everything in the working memory; it only filters what is on this stream. And because of the way Drools works, these entry points can naturally be correlated with other entry points. We also try to make sure that each entry point does as much as possible in its own thread: each entry point is automatically on its own thread, and we take care of the synchronisation of the joins between threads, to get throughput efficiency. So that takes care of that. Next: rule engines did not have temporal comparators to do comparisons in time. Drools is now extended with all 13 temporal operators, which, as I understand it, allows you to model any relationship in time. You use them by naming the comparator, and different comparators take different arguments: you can say "one second after this", or "between one second and ten seconds after this". So this is saying: when I have a buy-acknowledgement event that occurs between one second and ten seconds after a buy-order event. To help you understand, the slide shows the visual connotation of each operator ("A before B" and so on); we support all 13 of them, which basically means you can express any relationship in time. But it's not just important to know when something happens; it's often more important to know when something does not happen. What does that mean? If I have a buy-order event, what I really want to know is when the buy-acknowledgement does not happen.
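In DRL, the stream pattern and the temporal operator look roughly like this (the event types BuyOrder/BuyAck, their fields, and the stream name are illustrative):

```drl
declare BuyOrder  @role( event ) end
declare BuyAck    @role( event ) end

rule "Correlate acknowledgement with order"
when
    $o : BuyOrder( ) from entry-point "Home Broker Stream"
    // the acknowledgement must occur between 1 and 10 seconds
    // after the order event
    $a : BuyAck( orderId == $o.id, this after[1s, 10s] $o )
         from entry-point "Home Broker Stream"
then
    confirmTrade( $o, $a );   // illustrative helper
end
```

On the API side, the producer-consumer feeding the stream inserts via the named entry point, something like session.getEntryPoint("Home Broker Stream").insert(event) (the exact method name varies across Drools versions).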
So what I'm saying here is: when I have a buy-order event, and a buy-acknowledgement event does not happen between one and ten seconds after it, then I need to do something. This works with three different areas: I've got the working memory, which is actually the default entry point, I have my home broker stream, and I have my stock trade stream. It's correlating all three of these, checking for absences of events. How much code would you have to write to do this in a query-based system? A lot, and you'd have to maintain it. I showed this to Deutsche Bank. Their bank is all about time; everything they do is about time, and they write complex systems to do aggregations over time, and they have to try and test all of it. When you have data and rules that work over time, it's incredibly brittle and very hard. When you take someone who spends day in, day out dealing with wait states, timers, synchronisation and all the low-level code to make these things happen, and you show them this, they fall instantly in love. It gets even better. We can take these patterns and create windows, and say: when this pattern happens over a period of time. So we can say when this pattern happens over a time window of five seconds, or we can go for counts: when this stock tick happens over the last thousand ticks. Because what we do is orthogonal, you learn the rule language and then only have to learn a few keywords to extend it with event processing capabilities. You've already learnt accumulate, you've already learnt from, you've already learnt all your pattern language when you were learning your rules; you don't have to go and learn a whole new thing for CEP. It's just the over keyword plus the 13 operators, and that's it: now you've got full CEP capabilities with your existing knowledge, allowing you to do more by learning less. Think about it: doing more by learning less is great.
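The absence check and the sliding windows can be sketched as follows (same illustrative event types as before):

```drl
// Fire when no acknowledgement arrives between 1 and 10
// seconds after a buy order: detecting what did NOT happen.
rule "Order not acknowledged"
when
    $o : BuyOrder( ) from entry-point "Home Broker Stream"
    not BuyAck( orderId == $o.id, this after[1s, 10s] $o )
        from entry-point "Home Broker Stream"
then
    escalate( $o );   // illustrative helper
end

// Sliding windows with the over keyword: only events from the
// last five seconds of the stream match this pattern.
// (over window:length( 1000 ) would give the last 1000 events.)
rule "Recent expensive tick"
when
    $t : StockTick( price > 100 ) over window:time( 5s )
         from entry-point "Stock Stream"
then
    flagTick( $t );   // illustrative helper
end
```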
This is going to produce the average stock price over a time window of five seconds, and fire if the average stock price over that five-second window is more than 100. As I said, these functions are fully pluggable: there's a Java interface, you register your implementation, and you can do anything you want in there, so it will work with any subsystem you like that does statistical analysis. Now, to show a little of how this works with processes: when you have something with built-in event capabilities, you want to start designing for an evented architecture. What does that mean? It means you want to make sure that everything in your system emits an event. Do everything with events: everything that happens, every state change, whether or not someone is there to consume those events. Within a rule engine, that's every time you insert something, every time you start a process. If this was a business application, you would emit events for everything: someone is fired, someone is hired, you buy a stock; all of these are events. If you can model everything that happens, every state change in your system, and you just emit these events, it allows you to create systems which are far less brittle, and to do things with them which you might not necessarily have intended. So you design your system with a good event model up front, and then you build things with correlations. Think about a process: what is a process? A process is just a sequential correlation of events; as it goes from node to node, each node is an event, and each state change is also an event. If you can emit events from these, the complex event processing side can suck all this in and analyse it. So here is a very, very simple one: I have a process-started event. I have an order process, and every time the order process starts, this event is emitted automatically by the system; I then say, over a time window of one hour, do a count aggregation, and if over a period of an hour this process has started more than a thousand times, then do something.
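Both aggregations mentioned here, the five-second average price and the hourly count of process starts, use the same accumulate-over-window shape. A sketch, where the fact types, process id and thresholds are all illustrative:

```drl
// Average stock price over a sliding five-second window;
// fire when the average exceeds 100.
rule "High average price"
when
    $avg : Number( doubleValue > 100 ) from accumulate(
        StockTick( $p : price ) over window:time( 5s )
            from entry-point "Stock Stream",
        average( $p ) )
then
    notifyTrader( $avg );   // illustrative helper
end

// Count order-process start events over the last hour;
// fire when there were more than a thousand.
rule "Order process surge"
when
    $n : Number( intValue > 1000 ) from accumulate(
        ProcessStartedEvent( processId == "com.example.order" )
            over window:time( 1h ),
        count( 1 ) )
then
    raiseCapacityAlert( $n );   // illustrative helper
end
```

average and count are built-in accumulate functions; the talk's point is that custom functions plug in through a Java interface in exactly the same position.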
One of the things about where systems are going (I'm pretty much at the end now; I think we've got five minutes left): systems are moving more towards what we call dynamic and adaptive. Dynamic means the system can change on the fly. Rule engines have always been dynamic: you can add rules, you can remove rules, in a stateful system, while it's running; you don't have to take it down and put it back up again. The same is true of processes in our system: as with rules, you can add processes and remove processes. Remember, the knowledge base is a composite of knowledge, and you can change any part of it. Not only that, our processes allow you to change sub-parts of a process; we call this dynamic fragment orchestration. So that's the dynamic side. Now, if you have a system that is monitoring itself, you get a system that is adaptive, because it can monitor what it is doing and then change itself. You start to get a system that is both dynamic and adaptive. The systems of the future will be like that: dynamic, adaptive, self-monitoring. Questions? Thank you so much for your patience. If anyone has a headache tonight, I am very, very sorry; there is a hell of a lot to go through, and there is so much more. It takes about three or four hours to do this justice. I hope you understood at least a small amount of it, and I hope people have learnt something today. Questions?