So, I wanted some guidance about our voting system. There have been requests from various people who want to use the basic engine for tasks which are widely different from what we use it for. Even if I say so myself, the voting system is fairly good at running independent votes and detailed elections. Most of the results you see are already automated, as are the progress reports, and it is a fairly open election and voting process. So I can either walk you through a whole bunch of talk about what the voting system is, where it came from, and how it currently works, or we can jump to the meat of the matter and look at where you guys want to take it. History: we used to have a secretary, Darren Benham, who ran the votes, and around 2000 he kind of disappeared, and there was nobody running them. Ian Jackson stepped in in 2001 and asked Raul Miller to run the vote, because there was no secretary during the DPL election. Raul ran the vote, and forgot that DPL elections were supposed to be secret, so that is the only DPL election for which we have fully published ballots. Well, he was not used to running it; he was called upon to run the vote and had to learn the system as he went. And the system, as it was then, was one giant script. Mail would come in; the script would open the database, which was a text file; it would lock it; it would go through and do the parsing, check the GPG signature, parse the contents, make sure what the vote was, figure out what the votes would be, and write the database back. So if two votes came in at the same time, you had problems.
And with this whole thing, if there was ever anything wrong with LDAP, then that vote was lost, because there were no records kept: either it went through and wrote the DB, or it didn't. It scared the hell out of me. Now the votes are saved first, but back then, if LDAP was down, you would lose all the votes that came in. So I was tapped in 2002 to see if I would take on the secretary job, and in a moment of weakness I agreed. But then I asked Raul what we had to do to run the vote, and there was this giant script. No way am I taking on responsibility for that. So devotee was actually written, the first draft, during the DPL election. When the election started, all I had was something that would take the mail and store it in a folder; the rest of devotee was written while the election was being run. The design goals I had were these. First, it should be idempotent: that means if you have votes coming in and you rerun the thing, you should get the same results; every action you take is idempotent. Second, the first thing you do is save all the data: no matter what happens to devotee, even if something crashes in the middle, you should not lose any data at any intermediate stage. And the third thing was to make it not scary to me: each stage should ideally be independent, separate scripts, stuff that I can understand in one sitting. I kind of slipped a little bit on that one. So I was told by many people, AJ and Bdale and everybody, to fix the old system, not rewrite it from scratch. AJ was kind enough to tell me that I shouldn't be doing this, because writing mail handling systems, especially ones that handle GPG, is not easy. He wanted to save me, because not everybody can write mail handling systems; otherwise everybody would have. I am stubborn as well, so I decided to do it anyway. Devotee, the current implementation, divides the task of running the vote into, I think, about nine stages. The first stage is the spooler.
All it does is take the mail, put it in the spool directory with locking, and increment a counter. It's kind of like maildir; I wrote my own implementation because I didn't know the maildir implementations very well, and I had 24 hours before the vote was supposed to start. So I was under the gun: I can use locking, I can implement the numbering, I can do this. So that's stage one: you save the mail. Nobody ever touches anything in that directory; none of the rest of devotee does. Stage two copies any mail that has not already been copied from the spool directory to the working directory. So the spool directory always stays exactly as it was, whatever happens; the working directory is where everything else operates. Apart from the spooler, every other devotee script runs in sequence out of cron. Each one of them locks the top-level directory, so at any given time only one of the other devotee scripts will be running; the spooler is the only thing that runs alongside. After the copy, the next step is MIME handling: we need to separate out the body, decode base64 or whatever the encoding is. In parallel to this, there's the GPG handling. People sign their votes, and we do the GPG verification there. Sometimes they sign the whole mail; sometimes, when you have a MIME-signed mail, GPG/MIME, PGP/MIME, you have two parts. If it is not encrypted, just signed, some mailers unhelpfully wrap things one way, and some mailers another. There are also the line endings: the RFC says that all the line endings should be backslash-r backslash-n, but Evolution, for example, doesn't care about that and gives you just backslash-n or whatever; it doesn't keep to the spec. So a lot of care has to be taken to cater to all these various forms of signed and encrypted messages that come to you, because otherwise people scream at you: it works for me, I sent it to myself, I can verify it, why can't you?
So now we have the GPG check done, we have done the MIME separation, and we've got the vote sitting in a nice, clean format. If the GPG step succeeds, then it goes to the LDAP check: we make sure the person who signed is actually a developer. Then you parse the vote and tally it. At every stage, whatever happens is recorded in the log directory. All of this is being done in subdirectories of the top-level voting directory: every script reads from one directory and writes to the next directory, and so on. So the scripts are chained, and their input and output move from one directory to another. Then, if the vote succeeds, whether it's a secret ballot or not, you send the acknowledgement: you have voted, and this is what your ballot said, so if your mailer mangled something you can see what was actually recorded. These are the steps that happen, and whether a ballot succeeds or fails at GPG or at LDAP, all of it is recorded in the log directory. The thing is, if at any stage things fail, like if the mail was not signed and the GPG authentication step fails, the file is dropped in a dead directory, and from that point on everybody else will ignore that message. And it is idempotent: if you run the GPG stage again, it will see that the output directory already contains something, and if the output file already exists, it doesn't do it again. So it never does any more work than it has to. Is it based on which files exist? Yes, it's based on which files exist in the directories. You could run the same thing from make, because it is a sequence, but I have something called dvt-cron, which is the sequencer. It runs from cron, because now I have a mostly automated method of setting up a new vote.
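The staged, directory-to-directory processing described above can be sketched as follows. This is a minimal illustration, not devotee's actual code: the function name, the `transform` callback, and the directory layout are assumptions; the real stages do MIME, GPG, and LDAP work.

```python
import os

def run_stage(src_dir, dst_dir, dead_dir, transform):
    """Run one devotee-style stage: read each message file from src_dir,
    write the transformed result under the same file name in dst_dir.
    Idempotent: if the output file already exists, the message is skipped.
    On failure, the message is dropped in dead_dir and ignored thereafter."""
    os.makedirs(dst_dir, exist_ok=True)
    os.makedirs(dead_dir, exist_ok=True)
    for fname in sorted(os.listdir(src_dir)):
        out_path = os.path.join(dst_dir, fname)
        dead_path = os.path.join(dead_dir, fname)
        if os.path.exists(out_path) or os.path.exists(dead_path):
            continue  # already processed (or already rejected): do no extra work
        with open(os.path.join(src_dir, fname)) as f:
            data = f.read()
        try:
            result = transform(data)  # e.g. MIME decode, signature check, ...
        except ValueError:
            # failed stage: record it in the dead directory; later stages ignore it
            with open(dead_path, "w") as f:
                f.write(data)
            continue
        with open(out_path, "w") as f:
            f.write(result)
```

Because each stage only looks at which files exist, rerunning the whole sequence from cron reproduces the same results, and a crash mid-stage loses nothing: the input files are still sitting in the previous directory.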
I go and run this script, and it sets up the cron job, it sets up when the vote starts accepting ballots and when it stops accepting them, and I'm working on something that will send off a signed message with the results to the announcement list as soon as the results are there. So the human effort is mostly in making sure that the candidate or option names are correct when configuring devotee, and from that point on it mostly just runs: it starts and stops the vote on its own. The problem with the current process is, as I said, it is all chained: each script reads from the output directory of the previous script and writes into the input directory of the next script. So it is not easy right now to say, let's not do the GPG checking, let's just do LDAP; or forget LDAP, we'll do something else; or to insert another authentication method. So it works, but it's kind of baked together, and I apologize for that. I wouldn't have done it this way if I hadn't been under the gun, with people telling me it had to get done; I had three weeks to write all this stuff, because that's how long voting lasts. So I would like to change this, because various people have asked me: they want to use this mechanism, they want to use parts of it. For example, if you're doing web-based voting, you don't need the MIME-handling front end; if you're doing web-based voting, you already have HTTP access with password checking, so you don't need the GPG or LDAP checking. SPI probably doesn't need the first three steps: no MIME, no GPG, no LDAP; authentication has been taken care of. But for the rest, it would be nice to have a standard mechanism. Currently the only tally method we have is Debian's variation of Condorcet, but there are various kinds of voting algorithms out there, you know, first past the post, all kinds of things. SPI also doesn't do the same thing that Debian does.
So, now that I have time to read and think about design, instead of going, oh my god, how will I ever get it done: it seems to me that this is a blackboard pattern. Every vote that comes in is a job that is placed on the blackboard. Every job has certain requirements. The default requirements state whether you need authentication or not; which set of authentication methods needs to be applied; what kind of processing needs to be done, and whether you want more than one tally method, so you could have multiple parallel results; communication back to the people who voted, whether their vote succeeded or didn't; secret ballot, non-secret ballot. All of these can be attached to jobs. So, the blackboard pattern, for people who are not familiar with it: the blackboard pattern is one where jobs are placed on a blackboard and there are processors that look at it. Like a human, you go look at the blackboard, find something that you can do, and you take the job off the blackboard. You go off, you do whatever part of the job you are able to do, and then you come back and place the job back on the blackboard. This way, the work organizes itself. It's flexible enough that it will be easy to run a vote, and it will also be flexible enough that you can throw away bits and pieces. Think of it as a plug-in, modular framework: you can plug in various authentication methods, and you can plug in various input methods. There is somebody who wanted me to start sending acknowledgements back over AOL Instant Messenger, which is quite interesting, but I have told them no; for Debian, I don't think I am quite ready to support AOL instant messaging when you send in your vote. But it is not a bad request: I would like to be able to support other messaging protocols. Why is there no Jabber module, and why can't you send your vote in by Jabber? It's an XML protocol.
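The take-a-job, do-your-part, put-it-back loop just described can be sketched very compactly. This is a toy model under stated assumptions: the `Job` and `Processor` classes and their field names are invented for illustration, not part of devotee.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """A vote (or any task) sitting on the blackboard."""
    data: str
    requirements: set = field(default_factory=set)  # work still to be done
    satisfied: set = field(default_factory=set)     # work already done

class Processor:
    """A processing unit: handles one requirement, possibly with prerequisites."""
    def __init__(self, provides, needs=frozenset(), action=lambda d: d):
        self.provides, self.needs, self.action = provides, frozenset(needs), action

    def can_take(self, job):
        # I can take a job if it needs my capability and my prerequisites are met
        return self.provides in job.requirements and self.needs <= job.satisfied

def run_blackboard(jobs, processors):
    """Keep handing jobs to willing processors until nobody can make progress."""
    progress = True
    while progress:
        progress = False
        for job in jobs:
            for proc in processors:
                if proc.can_take(job):
                    job.data = proc.action(job.data)         # do this unit of work
                    job.requirements.discard(proc.provides)  # mark it done...
                    job.satisfied.add(proc.provides)         # ...and record it
                    progress = True
```

Note that no pipeline is configured anywhere: if the LDAP processor declares it needs the GPG check first, the ordering falls out of the prerequisites by itself.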
There are XML libraries for the Jabber protocol, so why can't you send a Jabber message? All I need is a way to read your Jabber message and say, OK, this is a valid ballot, and it was signed by the right key. Any time you want to redo a vote, you just remove every other directory except for the original spool directory, and then you can rerun the whole vote. I mean, I do that: when I have made changes, I just turn off the acknowledgements first, so you don't get multiple notices for the same ballot. So it is rerunnable, but not flexible. It's not writing the code in devotee that's the problem; it's that configuration becomes harder, because you have to be able to configure each script, and right now I have one configuration for all of devotee. So I had this idea: you might want to run many different scripts, and they need to run in a particular order, so you prefix them with numbers. What you could do is also prefix the subdirectories of the working directory with numbers, and a script's output would go to the directory with the next number. So you don't hard-code the directory names the way it is right now; you just say that you are script number one, so you read from directory number zero and write to directory number one. You could just have a directory of symlinks to those scripts. Isn't that familiar? It's just like the rc.d init directories.
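The numbered-directory convention, where the order comes entirely from rc.d-style numeric prefixes rather than hard-coded paths, could look like this sketch (the function name and the `scripts` mapping are hypothetical):

```python
import os
import re

def run_numbered_scripts(workdir, scripts):
    """scripts maps rc.d-style names like '01-mime' to transform functions.
    Script number N reads every file from the directory numbered N-1 and
    writes its output to the directory numbered N, so the processing order
    is derived purely from the numeric prefixes."""
    for name in sorted(scripts):
        num = int(re.match(r"(\d+)", name).group(1))
        src = os.path.join(workdir, "%02d" % (num - 1))
        dst = os.path.join(workdir, "%02d" % num)
        os.makedirs(dst, exist_ok=True)
        for fname in os.listdir(src):
            with open(os.path.join(src, fname)) as f:
                body = f.read()
            with open(os.path.join(dst, fname), "w") as f:
                f.write(scripts[name](body))  # same file name, new contents
```

Inserting a stage then means dropping in one more numbered symlink and renumbering nothing else around it, exactly as with init scripts.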
The file names stay the same the whole way through: message number such-and-such keeps its name as it is passed along; the file name is the same, only the contents change from stage to stage. I can imagine some use cases where you do not have a serialized process. You can imagine a vote where you could vote by mail, or by web, or by chat, so you would have more than one sequence of processing, and I'm not sure that storing the data in files works for that. Basically, what we have here is a state machine: currently it has just one string of serial operations, but you can imagine multiple inputs, for multiple voting methods, and other stages, maybe even parallel; there are some parts that can be done in parallel. The point being, with the directories the dependencies are fixed. I was thinking that it's a problem that could be solved by using a database; it would also give you other things. There are problems with that: running a database on master could be hard, and a remote database brings new failure modes. Right now everything is dead simple: I can remove files, I can get things to work again. You can have a more complex structure and still be dealing with files, but then I can't use vi on things any more. You can still have multiple parallel stages if the scripts share the same output format. Well, actually, I would rather not go into implementation. I'm going to rewrite devotee, and I'm going to rewrite it in Ruby, because I need to learn the language; we all need to learn the language. Right now I think the blackboard pattern will handle everything I want. I want to stick at the level of algorithms and patterns at the moment, because we can get lost in the implementation details, and I'm hungry; I don't want to stay beyond the start of dinner. Can you just make it configured by a shell script which will list the backends in the right order? We could, but then you are back to a static ordering.
On the web side we could go to a database, we could do more complicated data structures; the file system is, after all, a database. But what I need, more than the implementation details, which we can always discuss, is brainstorming on the design. The idea that the same vote can have multiple means of sending in ballots, that is a new idea that I hadn't even considered so far. So how would we handle that with the blackboard pattern? When a new job is injected onto the blackboard, its requirements come with it. So the input processing that is done depends on how it came in: if it came in by mail, then you have to do MIME processing; if it came in by Jabber, you have to do some kind of XML processing in order to extract the raw message that GPG can check, and so on; and similarly, on the web, you can say we bypass the whole checking thing, because the web CGI has already done some authentication. Each input method has an input format which needs to be read; we don't feed Jabber messages into the MIME processor. We have to have validation and a specification of the output format from each module; the output formats are fixed. That makes a lot of sense. There are two different patterns at work here: you're talking about pipelines, but there the processing order has been set beforehand, and you cannot change it. The generality of the system is reduced by using the pipeline pattern as opposed to the blackboard pattern, because in the blackboard pattern there is no preset pipeline. But we can do everything the pipeline pattern does using the blackboard pattern, because each job has a set of requirements, each requirement can have prerequisites, and therefore there are dependencies between the requirements. Each processing unit comes in, looks at the job, and sees: I have a certain capability, and the job has certain needs; my capability also has a prerequisite; if the job's prerequisite processing has already been done, I can take the job and do it. This exploits any parallelism
that is in the system, and I can say that these are the subset of jobs on the blackboard that I can work on; you don't really have to search. It means you can add a new processing unit, and a job with a new requirement, at any time, without ever having to change any existing code, because your pipelines are derived dynamically, based on the requirements of the job and the processors that are present. You can plug in more processing units; every task, even calculating results or sending things back, is a processing unit. I can plug in a new authentication mechanism, or a communication mechanism, and when a job comes in, the requirement will automatically be met; I never have to set up a pipeline. The advantage is that this is a framework which can create these dynamic pipelines, and I can give it to people to do whatever they want with, so it can be used beyond devotee. I did consider the pipeline pattern, but then I decided that the blackboard one was better, because you can remove processing units and you don't have to construct pipelines. It seems like it's not even voting-specific; it seems like a general blackboard kind of framework, which you can then apply. So what I wanted to ask is: what do you guys want to use this for? What kind of processing units do I need to implement first, beyond what I already have? One of the things I was looking at, to get to that point, was the ability to set it up for ad-hoc votes: the ability for arbitrary people to say, hey, I want to run a vote, these are the options I want. I want people who have a GPG key in this keyring; or, stronger, I could say any mail from people who are in this arbitrary group; or anybody who can get a shell on my machine; any way you can separate out the eligible people; and then run the vote. Suddenly you can do it with one form or something and say: hey, run the vote, with the start time here and the end time here, results to wherever; run it.
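The ad-hoc eligibility idea, letting whoever sets up the vote pick whatever predicate separates voters from non-voters, might be configured along these lines. All names here are hypothetical illustrations, not devotee code:

```python
def keyring_check(keyring):
    """Eligible if the voter's key fingerprint is in the given keyring set."""
    return lambda voter: voter.get("fingerprint") in keyring

def group_check(members):
    """Eligible if the voter's login is in an arbitrary membership list
    (e.g. anybody with a shell account on my machine)."""
    return lambda voter: voter.get("login") in members

def make_vote(options, eligible, start, end):
    """Bundle everything an ad-hoc vote needs: the options, an eligibility
    predicate chosen by the vote's organizer, and the voting period."""
    return {"options": options, "eligible": eligible, "start": start, "end": end}

def accept_ballot(vote, voter, now):
    # a ballot is accepted only inside the voting period, from an eligible voter
    return vote["start"] <= now <= vote["end"] and vote["eligible"](voter)
```

The point of the closure-based predicates is that a new eligibility scheme is just one more small function; nothing else in the engine has to change.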
So, for example, if you have a small organization and you want to run a vote, it makes it easy: you set the options, everybody can send in mails, and the complete tally runs; this is cool. Essentially we are talking about modular voting: if the modules are available, you pick your input modules, you pick your authentication modules, you pick your communication modules, how the acknowledgements go out, you pick your result-calculating module, and then you pick how the results are published; and all of this could be done on the web. So what are the other processing modules? Jabber was one of them. Command line is another: I want to share the machine with a multi-user system. If it's an organization like an old-style BBS, or just a shared server run by some group of people, and they all log into the server, they may want to cast their ballots from the command line, unassisted. And that can be extended to the ability to just SSH to the vote machine, to an account whose sole purpose is running the vote system. Yeah, something that injects the vote; I could provide that. Does anybody have a piece of paper? I forgot mine; I need to note these things down. You have either the local login scenario or SSH to a special account; in either case the authentication is already taken care of. With SSH to a local account, it should be up to the person who is running the vote to decide whether you still need authentication or not. If you can log into master... well, if you can log into master, I don't know what other kinds of accounts there are on master; it's possible; maybe a local admin can decide. There are non-human role accounts: yes, like the Apache one, www-data. So the point is, anybody can write their own module. That takes me out of the loop, so I don't have to work so hard; I think, however, it means I have to write a lot more documentation. People tell me that Ruby doesn't need documentation, it's such a nice language.
Well, if you do it right, you can make it language-neutral enough that people can use the modules and framework from elsewhere, because if you document the design well enough, you get a protocol. I'm not really sure I can commit to designing an interface that is language-neutral, since writing the framework is hard work already. What is the file format? Presumably you want to have a paper trail on disk for everything that is done. You probably want to stick with plain files; there might be a reasonable case for XML, I don't know; speed could become an issue. The problem with the current scheme is that the coupling of the scripts is not good, and there are various symptoms of that. The point is that I want to move to a general solution, I want to move to a blackboard pattern, because I have received requests, for example from a farmers' co-op in France, who want to use devotee, and they want to use Condorcet. What I would do is stick with the current scheme of working directories and modify the workers so that they accept the input directory as an argument, and make the whole thing configurable: a chain script that says, this worker reads from here and writes to there. This would allow you to set up the pipeline, and it would also allow you to have several input methods, where you just have two workers writing into the same directory. And this sounds pretty natural. The hard thing about it is that it requires people to know what is going on; it might become too complex. I have been asked to package devotee, and if I package devotee, you know the kind of people that would be using it: they would want it to work out of the box, so maybe there could be some standard setup that is ready-made. I would suggest also that you make the blackboard framework more general than voting, so that it could be used for other things too, and I would recommend that you then give the name devotee to your set of voting modules for this framework.
The framework itself would be more general: you want to separate out the framework and package it separately, and devotee is just one user of the framework. I think that is actually what you are proposing; whether you separate it out or not, the framework itself has no dependency on the notion of voting. That is true. It would probably be pretty specific to the idea of storing jobs as flat files, and that is why I do not want to talk about implementation right now: that is one way to do it, and I would probably stick with it, but there is no reason we couldn't change the modules so that the messages go through something else. What is that Ruby editor called? There is a special editor that the Ruby folks have been trying to get me to use instead of Emacs, and it is based on something similar: it is not a blackboard mechanism, but they have a pluggable internal storage, and all the modules write to it; it looks like a pseudo-filesystem inside, but it is stored in various ways. So we could change our storage; we are not welded to the files. I definitely think that, whatever storage system you use, the file format itself should not be tied to any specific language. To give an obvious example, it should not be Java serialized objects. Whether it is a flat file with a documented format, or XML, or something else like that, the framework should not dictate the file format anyway, because devotee would like to keep using email for the email input. And for the intermediate form, all we need is the means of authentication, the login ID or fingerprint, and what the vote was. I imagine each job on the blackboard would have two parts: you have the data block, which would be in some format specific to whatever stage the job is at, and then you also have metadata about what stage the job is at, and that would be in a second file, and the second file would be in a well-defined format.
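The two-part job layout just described, an opaque stage-specific data block plus a small, language-neutral metadata file, could be as simple as this sketch. The field names and the JSON choice are illustrative assumptions; the transcript only requires "a well-defined format":

```python
import json
import os

def write_job(jobdir, job_id, data, stage, satisfied):
    """Store a blackboard job as two files: <id>.data holds whatever the
    current stage produced (raw mail, decoded text, parsed ballot, ...),
    and <id>.meta is a small, documented record of where the job stands."""
    os.makedirs(jobdir, exist_ok=True)
    with open(os.path.join(jobdir, job_id + ".data"), "w") as f:
        f.write(data)
    meta = {"id": job_id, "stage": stage, "satisfied": sorted(satisfied)}
    with open(os.path.join(jobdir, job_id + ".meta"), "w") as f:
        json.dump(meta, f)  # plain text; any language can parse it

def read_job(jobdir, job_id):
    """Return (metadata, data block) for one job on the blackboard."""
    with open(os.path.join(jobdir, job_id + ".meta")) as f:
        meta = json.load(f)
    with open(os.path.join(jobdir, job_id + ".data")) as f:
        return meta, f.read()
```

Because only the metadata file has a fixed schema, the data block stays opaque to the framework, exactly the separation the discussion calls for.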
I think the blackboard metadata should have some kind of best-practices format, and I think it would be reasonable and easy to document; it would be simple enough that anybody can parse it and do whatever with the data, and the framework itself would be very lightweight. The blackboard pattern really does make sense here. The fact is, one of the reasons is personal: the current system was confusing at first, even to me. You cannot have a set of scripts that people can't come in, understand, and see what it does; and this new thing, the blackboard, should be even easier to configure than what I have now. That's good. Configuration is mainly a matter of defining which modules you want to use, plus some way of feeding data to your input module. Do you have a few global parameters? A few, yes, but things like where to get the keyring, or how to log in to LDAP, would be part of the config of that particular module. So you just have a list of modules, a few global settings, and maybe some place where the modules keep their own config files. You don't even need any central master of the modules; you just need to make sure that, before you start feeding data in, you have all the necessary module programs running. Each module would monitor the directory, monitor the blackboard, for new entries; it could be with FAM or whatever, and it would just notice when there's something there. So you have all the components running before any data is fed in, and then it all gets handled. There is one thing that will be necessary for this, which is the prerequisites. For example, currently the way the LDAP check works is that it looks at the fingerprint of the person who has signed the ballot, and it uses that fingerprint to query the LDAP database to make sure they are in the keyring and are a developer.
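A module program watching the blackboard directory for new entries, whether with FAM, inotify, or plain polling, boils down to a loop like this (a polling sketch; the function names and `should_stop` hook are hypothetical):

```python
import os
import time

def watch_blackboard(board_dir, handle, should_stop, interval=1.0):
    """Poll the blackboard directory and call handle(path) once for each
    new entry. A real module might use FAM or inotify instead of polling,
    but the shape of the loop is the same."""
    seen = set()
    while not should_stop():
        for fname in sorted(os.listdir(board_dir)):
            if fname not in seen:
                seen.add(fname)          # never hand out the same entry twice
                handle(os.path.join(board_dir, fname))
        time.sleep(interval)
```

Since every module runs this same loop independently, starting all the module programs before feeding in data is the only coordination the design needs.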
That prerequisite will show up in the metadata for each job. But there might be some other way of using LDAP: there is an LDAP module which, based on its configuration, queries LDAP in some fashion. I could say: just take the user name in the From header, see if they belong in my database, forget about GPG, we don't have GPG here. So now, even though it is the same LDAP module, what it is using as input has changed, and that is the configuration of the LDAP module. So the dependencies of the LDAP module might change from vote to vote, depending on whether you want to chain GPG into LDAP or not. As for this metadata about which modules depend on what: the job metadata would just have keywords saying which prerequisites have been satisfied. Which ones a particular module requires in order to continue, as opposed to just ignoring the job as it does at present, is really up to the individual module: each module would read its own configuration file and then decide dynamically which prerequisites it really cares about and which ones it doesn't. And looking at this, it is still devotee: we will always have certain steps that we need because our constitution says so. It has to be GPG checked; it has to be LDAP checked; we have to extract how the person voted; we have to send an acknowledgement back, and that is not optional; and finally, the results have to be tallied every time a vote comes in. So that is the list of tasks that need to be done for that particular job; these are the requirements for the job. You can either think of them as global parameters, or you can say that, by default, every job comes in with a certain set of requirements, and these are those. A job coming in from the mail input then also has MIME parsing, which comes first; so each front end might want to add its own
pre-processing stage. So, just as we have now, there needs to be some ordering that doesn't depend on the module. The first thing we do for votes is the GPG check; now, depending on how the ballot is injected, some other processing is done first, MIME processing or XML translation, but the GPG module has no notion of pre-processing; it doesn't care. What it cares about is that it has got either ciphertext, or plain text plus signature blocks somewhere, that it can run over. You can think of the pre-processing question as: is the message yet in a format that GPG can deal with? So there is some previous module that puts the message into the correct format for GPG, but in any case, you now have a specific set of flags which might change from job to job, representing what pre-processing has been done; I call them the flags. So, does this sound like a good idea? Is this something people would want to have? It's much more flexible. And it's about time; I'm still only talking about the design questions here. If you have these sequences of modules, and they converge on the acknowledgement module, then I think you should be able to have two instances, two different instances of the same module, for sending back messages: because if you vote by chat, you want the acknowledgement by chat, and if you vote by mail, you don't need the acknowledgement by chat. The way this is handled in a blackboard type of system is that every processor that processes a job takes it off the blackboard, and that includes adding other requirements, which I think is easier to understand with an example from transcription. This is a task in which a voice file comes in at one end; somebody listens to the voice file and types in the text; and then there is somebody who checks it, proofreading. I know this because my last project was working at a hospital, trying to set this up for the doctors, with all kinds of requirements. One other thing that happened
was that, while listening to the voice file, you realized that the doctor you thought was talking about one patient is actually talking about some other patient, and now you won't have their address. So the person listening to the job can say: wait a minute, we need an address check; or, wait a minute, I could not understand part of this, could a proofreader take a look at the job? So the processing is changed: in a blackboard pattern, the processor can add more requirements to the job. That's why it's more flexible than a hard-coded pipeline, where the job goes down either one path or the other. So, Jabber: if the vote comes in through Jabber, the Jabber translation module will add something, like you said, saying that an acknowledgement needs to be sent by Jabber to such and such a place. Now the requirement travels along with the job. And this acknowledgement has prerequisites too: you need to have all the checks done first. I guess you could, when the job is injected, start it out with some requirements: say, if it comes through Jabber, it is injected with the Jabber requirements plus the bunch of requirements belonging to the application, like the voting ones. And you would only record a job as finished when all the requirements have been satisfied, so that you can even verify that everything has been done properly. That's a requirement as well: the publish module only runs when everything else has been done. It could be interesting to have a script or program that can prove that every message that comes in ends up out the other side. Actually, what I was thinking of there was static verification. What I was also thinking was this: right now, when messages come in, the original emails are all discarded as soon as I have done my processing, because I don't need to carry them through; in fact they would confuse the rest of the modules.
What I was thinking is this: instead of the input and output formats that I create, think in terms of objects. There is a job object; the various processing steps change the job object, and you attach things to the object itself. When I say attach something to the object, all I mean is that you add a pointer to the metadata: the previous form is in such-and-such a directory; now the next step has been done, and the metadata points to a separate file in another directory; or it can be a database reference. The backend is implementation-defined. The essential concept is that the job object carries metadata pointing to all the processing that has been done, and to what has not been done yet. Hey, we could even check this object into svn, and every time we process it, do an svn commit, so we use svn as a backend. I wouldn't do that for voting, but if you are doing document processing with the same framework, and various things are being done to the document, then I can see that using svn as a backend is reasonable.

Before writing this, it would be worth doing some research on whether such a framework already exists. I have been working on this medical transcription system, and one vendor says they are willing to sell us a blackboard-pattern module for one million US dollars, per seat, per year.

Does that sound like a workflow engine with queues, that kind of thing? Well, it is kind of a workflow model; the blackboard pattern is a workflow model. But "queue" implies ordering. It is like a queue in that you just drop stuff in, the queue stores it, and the processors come along and say, OK, is there something in the queue for me, and take it out. In that sense the queue is the blackboard.
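The job-object idea above, where metadata just holds pointers to each stage's output and the storage backend is implementation-defined, might look like this sketch (class and method names are mine, purely illustrative):

```python
# Hypothetical sketch: stages never rewrite the job in place; each stage
# stores its output through a backend and the job records only a reference.
# The backend could be a directory tree, a database, or even "svn commit".

class Job:
    def __init__(self, job_id, backend):
        self.job_id = job_id
        self.backend = backend      # anything with store(key, data) -> reference
        self.stages = {}            # stage name -> backend reference

    def record(self, stage, data):
        self.stages[stage] = self.backend.store(f"{self.job_id}/{stage}", data)

class DictBackend:
    """Toy in-memory backend; a real one might write files or commit to svn."""
    def __init__(self):
        self.blobs = {}
    def store(self, key, data):
        self.blobs[key] = data
        return key                  # here the reference is simply the key

backend = DictBackend()
job = Job("vote-001", backend)
job.record("raw-mail", b"original message")
job.record("gpg-output", b"good signature")
print(sorted(job.stages))           # every stage done so far is on record
```

Because earlier forms are kept rather than overwritten, no intermediate stage can lose data, which was one of the original design goals.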
But it is not ordered in that sense: you have this kind of communication device, and they call it a queue, but it is really more like a lake; and that, simply, is the blackboard. So the idea is that you have a single queue, but about a dozen different queue runners, some of which simply ignore half the messages; each one watches the queue and picks up only the messages relevant to it.

How does that work when one unit requires a tally? The tally sheet is a separate object in this scheme: the tally sheet changes when you run the tally module on a job. Technically your mail could arrive in any order; you can't rely on email being an ordered service. That could be part of the metadata for the job. Well, there are two things. Right now, if you send two emails to devotee, devotee just takes the order in which the jobs were received. If something happens in transit and your first mail arrives later than your second, you get the wrong result. So the assumption is that the order in which you send the ballots is the order in which they are processed, unless you wait for the acknowledgement in between; once you get the ack, you can safely send the next mail, because the ack means the earlier one has been processed. That's what you have to do.

The issue you might have is this: right now, if something gets stuck, you know which queue it is and which process it is, and you can go straight to the script and say, this message doesn't work. But with a blackboard you have a common spool area, and when things get stuck in there it's going to be really difficult to tell. Well, every job that comes in can have a sequential ID, so I can see it if I look at the output; we could even create a timestamp-based ID scheme. You can always have a common semaphore which governs what ID is given to your job.
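The ID scheme being discussed, a single "semaphore" guarding the counter so that simultaneous arrivals from different channels still get unique, ordered IDs, could be sketched like this (a plain lock stands in for the semaphore; all names are illustrative):

```python
# Sketch: one lock guards the counter, so jobs arriving concurrently over
# different channels (mail, Jabber, ...) still get unique increasing IDs.
import threading

class IdAllocator:
    def __init__(self, start=1):
        self._next = start
        self._lock = threading.Lock()

    def allocate(self):
        with self._lock:            # the "common semaphore"
            n = self._next
            self._next += 1
            return n

alloc = IdAllocator(start=91)
ids = []
threads = [threading.Thread(target=lambda: ids.append(alloc.allocate()))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(ids))    # 91..98, no duplicates despite concurrent arrivals
```

With unique sequential IDs, a gap in the processed output (91-94 done, 96-98 done, 95 missing) immediately identifies the stuck job, which is exactly the monitoring trick described next.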
We probably will have to; otherwise there will be confusion. If there were two jobs with the same ID, it would confuse the heck out of everything. So we need a semaphore of some kind, so that no matter which way messages come in, ones arriving at the same time still get distinct IDs in a consistent order, no matter which system they come through. Then we can still check: you look at the output, and you see that 91, 92, 93 and 94 have been done, but then you've got 96, 97 and 98, and you notice something went wrong. That is how I check right now.

So at the end you are enforcing a pipeline on the basis of the ID? I am not saying I am enforcing a pipeline; I am just saying that I notice there is one thing stuck, and then I come back an hour later, and if I still find it stuck, I investigate. Is it easy to find the culprit in that case? Well, it will be easy, because you will know exactly which job it is and what stage of processing it was in.

Then maybe you have a job with a requirement nobody satisfies: for some reason somebody put a requirement on it and no module fits, and you have to figure out why nobody takes it. That is very easily checked at injection time, when you put something on the blackboard. Every processing module has to register itself, just like the plug-in thing: every plug-in goes in and registers itself with the framework. If you get a requirement and there is nothing registered that looks for it, you flag it: we have got an unknown requirement. And I guess the framework would then catch that case. Could a plug-in also tell the framework a list of prerequisites that it is going to expect of a job, so the framework can check that all those prerequisites are possible to satisfy within the current configuration? Yes; and for this application it would actually be a little more static: you do all this checking before you start accepting jobs. The blackboard also allows you to dynamically add requirements based on how you process a given job.
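The injection-time check just described, plug-ins register the requirements they can satisfy, and the framework flags any job carrying a requirement nobody registered for, could be sketched like this (the `Framework` class and its methods are hypothetical):

```python
# Sketch: plug-ins register which requirements they handle; at injection
# time, a job carrying an unregistered requirement is flagged rather than
# left to sit on the board forever.

class Framework:
    def __init__(self):
        self.handlers = {}          # requirement -> plug-in name

    def register(self, requirement, plugin_name):
        self.handlers[requirement] = plugin_name

    def inject(self, requirements):
        unknown = [r for r in sorted(requirements) if r not in self.handlers]
        if unknown:
            # in the system described above, this would page the operator
            raise ValueError(f"no plug-in registered for: {unknown}")
        return {"requirements": set(requirements)}

fw = Framework()
fw.register("gpg-check", "gpg_plugin")
fw.register("tally", "tally_plugin")
fw.inject({"gpg-check", "tally"})           # accepted
try:
    fw.inject({"gpg-check", "notify-jabber"})
except ValueError as e:
    print(e)                                # unknown requirement is flagged
```

The same `handlers` table can also serve the more static pre-flight check mentioned above: validating, before any jobs are accepted, that every requirement the configuration can ever generate has a registered handler.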
Does the module adding a requirement know whether it can be satisfied by something registered before? No, the module wouldn't know what had been registered. The module adds a requirement, it goes to the blackboard, and the blackboard says: whoa, whoa, whoa, you have just added a requirement that I don't know anything about. Then it fires a message out to my pager or something, wakes me up, and I go investigate what the heck happened. You say: oh, I forgot to enable this other module; or, oh, there is a bug in this new framework; somebody screwed up.

What about multiple jobs that run in parallel, like multiple notification mechanisms? It's not just that you can get a message by Jabber; you might want a notification by Jabber and at the same time by email, because the Jabber message might be lost for some reason. We do it the same way we do now: if you send two different votes, the last one counts. No, I mean one vote, but multiple notification mechanisms. Because at some point you have to ask: does that turn it into two jobs? How do you know they are separate jobs? Today you drop a ballot in by email; you've got only one job, and you want two notifications. Two notifications, or three, or whatever; that is configuration for your blackboard, and it lives in the metadata. Presumably you've either got all the information in the message, or you've got some way of getting it. Like, I get an email from you, and I need to know what your Jabber ID is. If I've got a module that can query an external database and say, OK, such-and-such a login has such-and-such Jabber IDs, then when I'm putting the job on the board I can say: if we have Jabber information, also add the requirement that a notification has to be sent by Jabber. This is what I was talking about: you add some default processing when you're setting up the board.
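The default-processing idea just described, an injection hook that consults an external directory and adds a Jabber-notification requirement only when a Jabber ID is actually known, might look like this sketch (the directory and all names are stand-ins, not a real devotee or LDAP interface):

```python
# Sketch: at injection time, look up the sender in a directory; add a
# jabber-notification requirement only if we actually know a Jabber ID.

JABBER_DIRECTORY = {"alice": "alice@jabber.example"}   # stand-in for LDAP etc.

def inject(login, requirements):
    job = {"login": login, "requirements": set(requirements), "meta": {}}
    jid = JABBER_DIRECTORY.get(login)
    if jid:
        job["meta"]["jabber_id"] = jid
        job["requirements"].add("notify-jabber")
    else:
        # no Jabber ID on file: record that no Jabber notification is possible
        job["meta"]["note"] = "no jabber ID; no jabber notification sent"
    return job

print(inject("alice", {"notify-email"})["requirements"])
print(inject("bob", {"notify-email"})["meta"]["note"])
```

One job, several notification requirements: the extra channels are configuration added at injection, not separate jobs.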
And if we don't have a Jabber ID for you, it logs that: sorry, no notification could be sent. The advantage of all this is that, if I do the plug-in architecture right, you can have all kinds of modules in the pipeline that check whether things succeeded, whether such-and-such happened; you can basically say, send a Jabber notification to everybody.

Another thing: would this be flexible enough to make it reasonable to have different error paths? With a pipeline there are many different places where a job can leave the system if something goes wrong; with the current architecture there are lots of different things that can go wrong, every check could fail, and if it does, what happens to the job? Right now, the first time a check fails, we log it, and as soon as we have written the error log for a job, no more processing is ever done on it. With the blackboard pattern, I imagine each module would have a way to take a job off, decide that it failed, and put it over into a different pile. Instead, you put it in the same pile: you modify the metadata and say that this job has failed. No other module will pick up a failed job, so nothing else touches it. Oh, no, there is one thing that will touch it: an error module will go in and handle it. You can just remove the other requirements except the one to go to an error module; the other requirements will disappear, but either way you
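The failure-handling idea sketched here, a failed job stays in the same pile with its metadata marked failed, ordinary modules skip it, and only an error module picks it up, could look like this (all function names are hypothetical):

```python
# Sketch: failed jobs stay on the board; the metadata flag makes ordinary
# modules skip them, and only the error module processes them.

def mark_failed(job, reason):
    job["failed"] = reason
    job["requirements"] = {"error-handling"}   # all other requirements vanish

def ordinary_module(job):
    if job.get("failed"):
        return "skipped"        # ordinary modules never touch a failed job
    return "processed"

def error_module(job):
    if job.get("failed"):
        return f"handled failure: {job['failed']}"
    return "nothing to do"

job = {"requirements": {"tally"}}
mark_failed(job, "bad GPG signature")
print(ordinary_module(job))     # skipped
print(error_module(job))
```

Keeping failed jobs in the common pile means the same ID-gap monitoring described earlier still works: a failed job is visible on the board until the error module disposes of it.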