Okay, please welcome Ralf Treinen, who will be giving us an introduction on the process towards the formal verification of maintainer scripts.

Okay, thanks. So I would like to present to you a research project which we recently started. I will start by explaining what this title actually means, then tell you a short story about how this project came into being, tell you a little bit about what we intend to do during the next years, and finally I have a few questions for you. In fact, you might also have some questions for me, but I also have questions for you.

So, let's start with the title: towards the formal verification of maintainer scripts. The first important part of the title is the word "towards", and I start with a disclaimer. We just began. Well, officially we began last year, but we mostly began work during the spring of this year. This is very new; there is nothing to show yet. I don't have any results yet, I don't have any tools yet. I hope to have something in a few years, but we are just beginning, and I am just talking about what we are going to do in the future. This is a collaborative research project over at least, probably over, five years, and I am just telling you about our plans for the moment. Hopefully I will be able to tell you more concrete things at the DebConfs to come.

Okay, maintainer scripts. In case you don't know what they are (probably most of you will), in a binary package, a deb package, you have basically two things; this is from the policy. You have an archive of files which are simply placed on your file system when you install the package, and then you have a second set of files, which the policy somewhat confusingly calls "control information files". Among these control information files you have the maintainer scripts, which are executed during installation, removal and upgrade of your packages.
Among these maintainer scripts there are four different kinds which might be included in your package. They are not mandatory, they might be missing, but many packages do include at least some of them: preinst, postinst, prerm and postrm. Roughly, it means the following. In your binary package you have the files which will be placed on your file system when you install the package. The preinst is executed before you start placing these files on your file system, before you start unpacking the archive. The postinst is executed after you have installed the files on your file system, and usually what this script does is configure your package. And then you have the reverse, the opposite: the prerm is executed before files are removed from your file system, and the postrm after the files have been removed from your file system.

This is just a first approximation: it is what happens when you freshly install a package on your system, or when you remove it from your system. It gets a little more complicated when you do an upgrade, because then you get a combination of maintainer scripts coming from the old version of the package and maintainer scripts coming from the new version of the package. And it gets even more involved when you look at failure conditions. It might happen that during installation of a package one of these scripts fails, and when you look it up in chapter 6 of the policy, you find a detailed description of what is going to happen. If that happens, some of these scripts are executed with a special argument, something like "abort-install", which tries to roll back the failed attempt to install the package, with the goal of undoing everything that was done during the failed attempt to install your package.
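To make this concrete, here is a hypothetical, minimal pair of such scripts for an imaginary package called "demo". This is entirely my own sketch, not from the talk; the PREFIX variable is only there so the sketch can run in a sandbox instead of the real root file system.

```shell
#!/bin/sh
# Hypothetical maintainer scripts for an imaginary package "demo".
# PREFIX exists only so the sketch can run in a sandbox instead of /.
set -e
PREFIX="${PREFIX:-/tmp/demo-root}"

demo_postinst() {
    # "configure" step: set up the package's state directory
    mkdir -p "$PREFIX/var/lib/demo"
    echo configured > "$PREFIX/var/lib/demo/state"
}

demo_postrm() {
    # on purge: undo everything the postinst did
    rm -f "$PREFIX/var/lib/demo/state"
    rmdir "$PREFIX/var/lib/demo" 2>/dev/null || true
}

# dpkg passes the maintainer script an argument naming the action
case "$1" in
    configure) demo_postinst ;;
    purge)     demo_postrm ;;
esac
```

A real postinst is invoked by dpkg with arguments such as "configure"; the case statement mirrors that calling convention.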
Now, the third part of the title is verification, and what do I mean by that? Well, I even call it formal verification, to make it clearer what I mean. It is about proofs: proofs of correctness of programs. This is a very strong thing to have, if you succeed in constructing such a proof. In order to be able to construct a proof of correctness of a script, of a program in general, you need several things. The first thing you need, of course, is a formal model: a very precise definition of what it means to execute your program. That includes, on the one hand, an exact description of the execution model of your programming language, in this case of a scripting language; and on the other hand, you also need a very precise description of the data that your program is going to manipulate. In our case, the data that you manipulate by executing such a script is, of course, the state of the current installation of the machine, which includes the file system and even more. This has to be described precisely, in a formal way, in order to be able to do proofs about it.

Okay, that is the first thing. The second thing is: if you want to prove something, you must know what exactly you want to prove. In our case this means that what you need is a precise statement of what the program is supposed to do. There are several ways of doing that, and the usual way to express such a correctness statement, which you would like to prove after having stated it, is by a pre-condition and a post-condition. In our case the pre-condition and the post-condition would express properties of the file system, or more generally of the state of your system.
Such an assertion expresses the following: whenever you are in an initial state which satisfies the given pre-condition (imagine the pre-condition expresses that your file system conforms to the Filesystem Hierarchy Standard, for instance, something like that), so whenever you are in some arbitrary initial state which satisfies your pre-condition and you execute your program, then at the end you arrive in a new state which again satisfies the post-condition, the terminal condition which you have also expressed in your correctness statement. To be precise, there is a subtle difference depending on whether you actually require that your program terminates or not. But this goes maybe a little too much into the details of program verification; in any case, maintainer scripts are usually expected to terminate quite quickly.

So you have an initial condition and a terminal condition which you would like to have verified for your program. And this is not testing; this is important to understand. What you get from a formal proof is something much stronger than testing. A formal proof gives you a certification that whenever (and "whenever" is the operative word) you are in any arbitrary situation which satisfies your pre-condition, you are guaranteed to obtain at the end a state which satisfies your post-condition. You won't get this with testing, at least not easily, because with testing all you can check are the particular test cases which you have designed. All testing tells you is that when you run this particular test in a particular initial configuration, your program passes the test or does not pass it. Testing always gives you a correctness assertion for one particular situation only; it is not a general assertion for any arbitrary starting condition you might have in the beginning.
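As a toy illustration of the pre-condition/post-condition idea (my own example, not from the talk): a one-line script together with the Hoare-style statement one might try to prove about it.

```shell
# Toy correctness statement {P} S {Q} (my own illustration):
#   S:  mkdir -p "$d/cache"
#   P:  "$d" exists and is a directory
#   Q:  "$d/cache" exists and is a directory
# The statement says: started in ANY state satisfying P, running S
# always ends in a state satisfying Q.
d=$(mktemp -d)          # establish the pre-condition P for one concrete run
mkdir -p "$d/cache"     # execute the script S
[ -d "$d/cache" ]       # Q holds in this run; a proof would cover every $d
```

Running the script once only checks Q for this one concrete directory; the point of a proof is precisely that it covers every state satisfying P at once.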
However, there still is a connection between formal verification on the one hand and testing on the other, and I will come back to that a little later. About the properties: here I have stated something like pre-condition, execution, post-condition. In fact, when we look at maintainer scripts, the kinds of conditions that you would like to verify about the scripts are even more interesting, and I will come to this in a few slides.

But first let me tell you a little story. It is a story about a little package called cmigrep. This package received a bug report, in fact a critical bug report, and in the bug report it was written, well, it started like this: the emacsen install script is overzealous; it inappropriately attempts to compile all .el files in the site-lisp directory. That is already not good, but it is not a catastrophe either, for now. But then it continues: it compounds the problem by removing .el files which may belong to different packages. And this, of course, is very, very bad, and this absolutely should not happen. Critical severity was certainly appropriate for this kind of problem, and in fact the reporter of the bug was still quite kind in formulating it in these words and not being bolder about what was happening.

Okay, so now let's try to understand how this could happen. First I should explain what is going on with this emacsen stuff. You probably know that Emacs is programmable in Emacs Lisp, files with the extension .el, and Emacs Lisp can be compiled to Emacs bytecode. However, in general we have different versions of Emacs in Debian. At the moment we only have an old XEmacs 21 and we have Emacs 24, but at different points in time we have had different versions of Emacs, like Emacs 22, 23 and 24, in the archive, and all of these have different bytecode formats.
Now, Debian decided that when we install Emacs-related packages, the Emacs Lisp files should be compiled to bytecode. However, since we have different bytecode formats for different versions of Emacs, this might mean that we should ship, in the binary package, compiled bytecode for all the different Emacs versions. It was decided not to do it that way, but instead to compile the Emacs Lisp code while you are installing the package. So this is what happens with this emacsen stuff: if you install a package which contains Emacs Lisp code, then during installation of the package this code is compiled, for every version of Emacs that you currently have installed on your system, into bytecode for that particular version of Emacs.

To be more precise, this is not really part of the postinst script itself; it is part of the emacsen install script, which in that case is included in your package. But that script is called indirectly by your postinst: the postinst calls a script which belongs to the emacsen-common infrastructure, and that script in turn calls the emacsen install script particular to your package. So it is only indirectly part of the postinst. Anyway, this is what happens: for all the flavors, the versions, of Emacs which you have installed on your machine, your Lisp code is compiled to bytecode and placed in a directory which is proper to that flavor of Emacs, and it does that for all the versions of Emacs installed on your machine. And of course something similar happens when you install a new version of Emacs: then you also recompile all the installed packages which contain Lisp code. So this is what happens on installation of the package, in the postinst, and when you remove the package you of course do the reverse and remove all the bytecode of the package.
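The per-flavor compilation described above can be pictured roughly like this. This is a schematic sketch of my own, not the actual emacsen-common code; the paths, flavor names and the stand-in compiler function are all illustrative.

```shell
# Schematic per-flavor byte-compilation (illustrative paths and flavor
# names; the real work is done by emacsen-common and per-package scripts).
ROOT=$(mktemp -d)
FLAVORS="emacs24 xemacs21"              # flavors "installed" in this sandbox

compile_el() {
    # Stand-in for the real byte-compiler; an actual script would run
    # something like: $flavor -batch -f batch-byte-compile "$1"
    touch "${1%.el}.elc"
}

# the package ships a single Emacs Lisp source file
mkdir -p "$ROOT/usr/share/emacs/site-lisp/demo"
touch "$ROOT/usr/share/emacs/site-lisp/demo/demo.el"

# on installation: compile the source once per installed flavor,
# into a directory proper to that flavor
for flavor in $FLAVORS; do
    dir="$ROOT/usr/share/$flavor/site-lisp/demo"
    mkdir -p "$dir"
    cp "$ROOT/usr/share/emacs/site-lisp/demo/demo.el" "$dir/"
    compile_el "$dir/demo.el"
done
```

On removal the reverse happens: the per-flavor .elc files and directories are deleted again.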
Now, when we come back down to our small cmigrep package, this is what was in the emacsen install script initially, in the earlier version. There was a variable defined which contained a template for the directory that was going to hold the compiled bytecode. Then the directory was created, and then there was an invocation of Emacs which compiled the code and placed the compiled bytecode into these directories which had been created here, and which initially were proper to this package. That was at installation. At removal you of course had the reverse: you defined the same variable, then removed the .elc files, the compiled bytecode, and finally you removed the directory. And it was written in this way because the removal would fail if the directory contained more than .elc files, and this was of course on purpose: if that happened, then something was wrong, and in that case the script should fail, must fail.

Okay, however, at some point the maintainer noticed that there was only one Emacs Lisp file in this package ("Emacs Lisp", that's the difficult word), so he decided it was overkill to create a directory only for this package. He decided that was too much and he wanted to change it, and he changed it, but he got it wrong. What he did was: he removed the package part, the dollar-package part, of the path (he kept the mkdir -p, that doesn't matter), so on installation the bytecode files were now created in the directory one level up. And on removal, well, he kept the definition of the variable, and he also kept the line which removed all the .elc files in this directory. And this, of course, is what went wrong. This explains what happened and why the package maintainer received this critical bug report.
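Based on the description above, the faulty change can be reconstructed roughly as follows. This is my own reconstruction, not the actual cmigrep script; the directory layout and names are illustrative, and ROOT stands in for the real root file system so the sketch can run in a sandbox.

```shell
# Reconstruction of the faulty change (illustrative paths, NOT the actual
# cmigrep script). ROOT stands in for / so this can run in a sandbox.
ROOT=$(mktemp -d)
FLAVOR=emacs24

# Before the change: a directory proper to the package.
#   install:  mkdir -p "$elcdir"; compile the .el files into it
#   removal:  rm -f "$elcdir"/*.elc
#             rmdir "$elcdir"   # fails if anything else is in there; on purpose
elcdir="$ROOT/usr/share/$FLAVOR/site-lisp/cmigrep"

# After the change: the package part of the path was dropped, so the
# variable now points at the shared site-lisp directory itself.
elcdir="$ROOT/usr/share/$FLAVOR/site-lisp"
mkdir -p "$elcdir"

# Another package's bytecode already lives in the shared directory:
touch "$elcdir/other-package.elc"

# The removal line that was kept unchanged now does this:
rm -f "$elcdir"/*.elc    # deletes bytecode belonging to OTHER packages

[ -e "$elcdir/other-package.elc" ] || echo "other-package.elc is gone"
```

The rm line is harmless as long as the variable points at a directory owned by the package alone; keeping it verbatim after widening the directory is exactly the mistake described in the bug report.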
Okay, so what can we learn from this little story? Well, what the maintainer did was really stupid; he should not make such a mistake, and frankly there is no excuse for the maintainer to have overlooked this error in the modification which he made to the maintainer script. And I am allowed to say this here in public, because I was myself, of course, the maintainer of this package who made this stupid mistake. But when you make a mistake you can learn from it, and this got me thinking: maybe this is something where we can improve the process that we have in Debian, where we can try to analyze and find these kinds of errors and detect them in the archive.

What is also interesting about this is the question whether testing would have detected this error. Well, it depends, because if your testing is too naive, you would not detect it. Too naive means: if you test in something like a minimal chroot, where the site-lisp directory in which bytecode is placed is not populated, then when you test installation and removal, everything works fine. You just install your package into an empty directory, you remove it, and you come back to the initial configuration. So that test case would not detect this kind of error. There are other test cases which would detect it: if you test installation and removal of the package in a situation where your system is already populated with some other Emacs stuff, then you would detect it, because in that case some files not owned by this package would be removed.
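The two test scenarios just described can be sketched as follows. This is my own illustration: buggy_remove stands for the faulty removal fragment from the story, and install_pkg for installing the package's own bytecode.

```shell
# Sketch of the naive vs. the revealing test case (illustrative only).
sitelisp=$(mktemp -d)

buggy_remove() {             # stands for the faulty removal fragment
    rm -f "$sitelisp"/*.elc
}

install_pkg() {              # stands for installing our package's bytecode
    touch "$sitelisp/cmigrep.elc"
}

# Naive test: empty site-lisp; install, remove, back to the start. Passes.
install_pkg; buggy_remove
naive_ok=$([ -z "$(ls -A "$sitelisp")" ] && echo yes || echo no)

# Revealing test: site-lisp already populated by another package.
touch "$sitelisp/other.elc"
install_pkg; buggy_remove
revealing_ok=$([ -e "$sitelisp/other.elc" ] && echo yes || echo no)

echo "naive test passes: $naive_ok; revealing test passes: $revealing_ok"
# -> naive test passes: yes; revealing test passes: no
```

The naive case cannot see the bug because, starting from an empty directory, removing everything and removing only your own files are indistinguishable.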
Okay, so testing does not always detect this kind of error easily. What we would like to have, and this is the scope of this research program, is an assertion that maintainer scripts do not do any of this nasty stuff in whatever initial situation you execute them, in whatever situation you install and remove packages. You have also seen on this example that what we had here was a stupid mistake, an easy mistake. It is a mistake of the kind where, when you look at it, or when I show it to you, you say: well, of course, it's obvious, how could I miss that? And what you can get with formal methods and automatic tools are often analyses which can find exactly this kind of stupid mistake. This might seem trivial to do, but you must not forget that we now have around 50,000 packages in main, most of them containing maintainer scripts, and no one is able to look at and audit all the maintainer scripts that we have in the binary packages. So what we really need are automatic tools which are able to do this kind of quality-assurance analysis, run over the whole archive, and find all these stupid mistakes which might exist in the maintainer scripts, and which maybe are only triggered in particular situations.
Okay, this is another slide, one which I usually use when I talk to people who do not know Debian. The root of the problem, the reason we have these kinds of problems, is that we have infrastructure that is shared between packages. Things would be very, very easy if we had isolated silos where each package installs its own stuff without touching anything belonging to the other packages. But that is not possible; Debian does not work that way, and this is the case on various levels: it is the case with shared libraries, and it is equally the case with the file system. In general we need infrastructure which is shared, and we must have packages which, for instance, install add-ons or plug-ins, or however you would like to call them, which interact with other packages and share parts of their infrastructure. Think of TeX, or of Emacs in our case, and there are some others. This makes the problem of course much more difficult, and interesting. There are in fact some trivial packaging systems which do not have this, but with those packaging systems you cannot package anything interesting; in fact you can only package toys.

All right, I have already talked for 20 minutes, so: what are we going to do? Well, formal verification, and again a disclaimer. The disclaimer is: in principle it is impossible to do; it is much too hard to do perfectly. You can never expect that we will develop a tool, even in 10 or 15 years, into which you can feed all the maintainer scripts, push a button, and obtain either a list of all errors or a certificate which says: this script is perfect, it contains no error at all. There are two reasons for that. The first reason is that the execution model is much too rich. It contains a lot of stuff: it already contains the whole programming language, the scripting language, and to be perfect it would also have to contain a
complete description of the state of the Unix operating system, which is of course quite impossible to do. That is one reason. The second reason is that even if you restrict yourself to a very, very simple programming model, say while-programs with if-then-else which manipulate integers, the problem is already what we call in theory undecidable. That is, it is not possible, in principle, to write a program which gives you in finite time an assertion of correctness of an arbitrary program. This is similar to the halting problem, if you have ever heard of that. The basic reason is: if such a program existed, you could apply it to itself, and from that you could derive a contradiction. So even in theory it is not possible to do. However, the fact that it is not possible in theory should not stop us from trying to do it anyway. Okay, this might sound strange, but in fact when you have a theoretical result that something is impossible, or is too expensive to do, then often this does not hold in practice, because in practice the worst case often does not occur, or occurs only in very particular situations. So you can often still do something which gives you a result in particular situations.

It is also interesting to compare this to what we did in the EDOS and Mancoosi projects, previous projects of ours, where we analyzed package dependencies, conflicts and so on; you have probably heard of that. That was a different kind of difficulty, because in that case we knew from the beginning that it is possible to do, at least in principle. For those projects the challenge was to find a solution which gives you a result in reasonable time, and we managed to do that. But in this case it is much more difficult, because there is even an in-principle problem of finding a way to solve the problem at all.

Okay, so now I have to speed up. Let me just say something about maintainer scripts. Most of them, let's say, are POSIX shell scripts. In fact you may have other stuff
and in maintainer scripts you might even have ELF executables, there is at least one in the archive, but in most cases they are POSIX shell scripts, with some extensions mandated by policy, which I have written here: echo and echo -n, test must be supported when it is a built-in, local variable scopes, and things like that. But basically it is POSIX shell.

Now, how do we look at these POSIX shell scripts? The good thing is that maintainer scripts are typically quite easy, quite small programs, and we look at them simply as a transformation of the file system tree. This means we ignore everything else: starting of services, stopping of services, things like that. We just look at the file system tree that you have at the beginning of the execution of your script and the one you have at the end of the script. From a mathematical point of view this is already much more tractable: you have a tree which you transform into a different tree. But when I say that, it is of course an abstraction. First of all, you all know that the file system structure is not really a tree, because you have hard links to regular files, you also have symbolic links, things like that. And it also means that we ignore, of course, anything else which might be going on.

Okay, so let's look at the scripts. The first problem with the scripts is that when you look at the POSIX standard, well, the POSIX standard is not the nicest text to read in your leisure time, and the shell language is not the best-designed programming language in existence. This concerns both syntax and semantics. There are several things in it which are very, very hard to analyze formally, and this is usually an indication that it is also quite hard to program correctly in shell: when you have a problem in theory, it usually indicates that there is also a problem in practice. However, what is good is that we usually do not have the full complexity of POSIX shell in what you find in maintainer scripts. In
particular, recursive functions, which are of course allowed in theory, do not occur in maintainer scripts. Exit codes: maintainer scripts only look at whether an exit code is zero or not, so fatal error or not. Loops are only used in quite restricted ways, mostly for-loops, which are quite easy to handle; well, they are easier to handle than general while-loops. And the while-loops we do have in maintainer scripts are often in fact hidden for-loops, because quite often you have the construction where something is piped into a "while read", so in practice it behaves more like a for-loop. We also, luckily, have the big dpkg lock, which means that it is not possible to install several packages in parallel, so we do not have to care about concurrency problems in the execution of scripts. And there are some other simplifications which we can use.

Okay, I have to speed up a lot, I think. So what are we going to do? Well, we will not work directly on shell scripts, on POSIX shell. We will use a domain-specific language which we call CoLiS, which is also the name of our project, into which we will rewrite the scripts. At the beginning we will rewrite the scripts by hand into this language before feeding them to the formal tools; at the end, what we want to have is a compiler which takes a shell script and translates it into our language, which is saner and much more tractable for formal tools. We are designing this language right at the moment. It is still quite close to the execution model of POSIX shell, in particular concerning control flow and the usage of exit codes, so it is probably not the right language to use if you wanted to replace shell as a programming language for maintainer scripts, but it is something which is very useful for us when we want to do verification.

Okay, now probably the most interesting part: what are the properties of the maintainer scripts that we want to analyze and
verify? Well, first of all, you have properties which you can express with this pre-condition, execution, post-condition paradigm. What you can expect here is something like: when you have a maintainer script and you execute it in a reasonable initial condition, then you don't get an error; it executes without an error. The difficulty in this is, of course: what does "reasonable" mean? Frankly, I have no idea, no precise idea, at the moment what "reasonable" means. It should mean the file system conforms to the Filesystem Hierarchy Standard, it fulfills everything which we have written in the policy, possibly something else. So this is something which one might want to have, but for the moment it is not exactly clear what it means. We might also have properties like what we call an invariant in verification, that is: when you have some condition at the beginning, you also have the same condition at the end. Here, for instance, you could have properties like compliance with the FHS or with the Debian policy.

More interesting are relations between scripts, and this is also interesting from a theoretical point of view for the people who are working on verification, because to my knowledge no one has yet tried to verify these kinds of properties of scripts. So what do we have here? First of all we have this strange symbol: this is functional composition. As I told you, we look at a script as a function transforming one tree into another tree. So this means: first executing a script, something I call I, and then executing R, should be the same as the identity; or in other words, R would just be the inverse of this I. Why is this interesting for us? Well, you might expect that when you install a package on your machine and then de-install it, you come back to the same situation that you had in the beginning. Now, if you expand this into the individual scripts, it would read like this, and here I use this
notation from functional programming which means piping. So you have an initial file system F and you pipe it through the scripts. The first three steps are what happens in the installation phase: first you run your preinst script, then you unpack your files, then you run the postinst. And then you remove the package: you run your prerm, you remove the files, and you run the postrm. At the end you should obtain the same F that you had in the beginning. And this is something which you would expect to hold for any file system F, for any, and that is important; this is really an important word here, that is why it is written here in red. Besides, it is not really true as I have written it: when you remove a package, this holds only up to log files and configuration files, as it is written in the policy; however, when you purge the package, this of course should hold.

And now, when you think again of our small cmigrep package, this in fact did not hold. If you choose for F a file system where the site-lisp directory is not populated, then you have this equality; however, when you choose for F a file system where some Emacs Lisp stuff is already installed, then it does not hold. So it is very important that you have here this "for any file system F", and that you really try to prove something which holds for any initial situation.

Okay, that is interesting, but I am not sure whether this is sufficient. Maybe we want something like this: here I also have an S, and this strange symbol means "for any S". This would mean: take any other maintainer script S, which comes, for instance, from installing other stuff. Then we might also expect that if we install our package, then install some other stuff, which would trigger execution of S, and then remove our package, this has exactly the same effect as only installing the other stuff, without installing our package before and removing it afterwards. This
seems to be a different condition, a different property, from what I have shown you before. I am not completely sure, but it looks a little bit different, and in any case it is something which we would certainly like to have for maintainer scripts.

Commutation of scripts means that it does not matter in which order you execute them. This might also be interesting. You may know that postinst scripts are executed in an order which is compliant with the dependencies of the packages, which of course poses a problem when you have loops, dependency loops, between packages, because in this case it is no longer deterministic in which order the scripts are executed. So if you know that the order in which you execute the scripts does not matter, then in fact you know that you can execute them in any order. So this would be important for the installation of packages with dependency loops on the one hand, and I think there are also some other cases where installation scripts are reordered. I am not sure, but I think that Emdebian also does this for efficiency reasons and sometimes executes scripts in a different order. So this is also something which is interesting to do.

And then we have idempotency; that is my favorite. Debian policy, section 6.1, says that maintainer scripts must be idempotent. Now, when I talk to a mathematician, he says: well, that's nice, I know what that means; idempotency means that executing a function twice gives the same result as executing it only once. However, this is not what the Debian policy means when it talks about idempotency. The policy says: if the first call failed, or aborted halfway through for some reason, the second call should merely do the things that were left undone the first time, if any, and exit with a success status if everything is okay. That is much more involved, because it talks about the fact that the first run of the script might fail, and then you execute the script again, and for any possible way in which the first
execution of the script failed, the second execution should in the end give you exactly the same effect as only executing it once with success. Frankly, this is quite interesting, but I don't know what it means. I don't know what it means, because: what would it mean that your package installation, your script, fails the first time and, in exactly the same situation, succeeds the second time? It doesn't make any sense. Well, it might happen that you plug in one heater too many and the power fails, and you shut your system off for an external reason; that would be an explanation. But I think what the policy means here is something more, because you might, for instance, make a first attempt to execute the script and have a reason for failure which is internal to the system, like your file system being full. Then you would remove stuff, and then you would execute the script again, and then it would succeed. But this is a different property, because it means that between the first execution of the script and the second execution there is some action, and I don't really know what these actions are which you could execute between the first and the second invocation of the script. Yes, you have a remark?

[An audience member suggests complicating the property: say that the maintainer script should be a series of atomic, idempotent steps, where every step is idempotent in the first, mathematical sense; stated in those words, the script will then be idempotent in the second sense.]

Yeah, but scripts are not atomic; if a script fails halfway through, it is not atomic in itself.

[The audience member replies: of course the script is not atomic, because it can fail in the middle. But this second definition means that it consists of a series of atomic steps; every step can either succeed or fail, and every step is idempotent in the first meaning. That's basically it: if it fails in the middle and you run it again, every step is idempotent.]
Well, this is only true when the different steps are independent of each other. Only in that case can you conclude, from the fact that the individual steps are idempotent, that the script is also idempotent. However, if the different steps your script takes depend on the order in which you execute them, that is, if you have steps which depend on each other, then you don't get idempotency of the script itself from idempotency of the atomic steps. Not necessarily. Okay, maybe we can discuss this later if you're interested; anyway, I find it quite interesting to look at these properties and to try to understand what they mean.

Even if I don't have much time, it's important for me to say something when I say there is something I do not understand in policy. For me, maybe one of the best points of Debian is that we have policy. If you do any kind of work in this direction of analyzing what we have in the packages and in the scripts, policy is absolutely priceless. Without policy we could never do anything like this, and I think there is no other software distribution that has anything even close to what we have here in Debian policy, in the description of what is supposed to happen in packages. It is very important to have this; without it we would be completely lost.

Okay, now I unfortunately don't have time to look into the particular techniques we intend to apply. Let me just say that there are two different kinds of techniques we are planning to apply to this problem. One of them is so-called deductive verification, and for this we have tools which already exist in Debian, some of them free software. We have a platform for deductive verification which is DFSG-free, it is called Why3, and we have it in Debian. It uses so-called SMT solvers, which are very important for the last step of this verification attempt. Even if I don't have the time to explain what they do, SMT
solvers have become very, very strong over the last decade, and people in verification talk about the "SMT revolution"; it is probably the most significant advance in formal methods that we have had over the last decade. Some of them are free (not all of them, but some are), some of them we already have in Debian, and we intend to use them.

So in the end the project will look a little bit like this. We will have maintainer scripts, shell scripts (some of them, not all of them), which we will be able to compile into our domain-specific language. Then we will have as a front end a verification platform, which might be interactive but can also be used in batch mode, which knows about file systems and such things as the execution model of the programming language, which understands the programming constructs, generates from these the proof obligations, and passes them on to specialized solvers.

So this is a research project over four to five years: we have four years for the moment, with the possibility of having one year more. We have sponsoring, so thanks to our sponsor on this occasion, the French national research funding agency, which gives us quite some money to hire PhD students and postdocs, and gives us travel money, for instance for me to come to DebConf here and talk about this stuff. We have three academic sites which are specialists in the various aspects of this project and who will work together, something like 12 to 15 people on this project.

Now, what are the questions that I have for you? I have shown you a little story about one of my packages; if you have any interesting stories about faulty maintainer scripts, and I am sure some of you have, I would certainly like to hear about them. The properties: I found some properties which are interesting to verify and important for the project; maybe there are others you can think of that are interesting, and I would like to hear about them too. And finally, maintainer
scripts: what is your stand on maintainer scripts? I know there are some people; Nils, unfortunately, isn't here, so I know Nils wants to get rid of maintainer scripts as far as possible. Oh, you are here? Okay, very good. So, should we get rid of them? Can we get rid of them? Can we improve the situation of maintainer scripts? What do you think about maintainer scripts? Okay, thanks for your attention.

We don't have a lot of time for questions, but maybe we can squeeze in one or two anyway. Any comments or questions?

"You said you started last year. Is this currently not even working, or do you have a prototype that does something?" There is no prototype yet; there is nothing yet. Officially we started last year, and most of the people working on this are, like me, teachers, so we don't have so much time to do our research. We hired a PhD student who is going to do a good part of this work, and he arrived a few months ago, so we are just starting. We are currently defining this more abstract language; we don't have a concrete syntax yet, only an abstract syntax. We are defining its semantics, and we are already thinking about verification. But there are no tools yet; nothing is working for the moment.

"Of course there is always time to add complexity to your ideas. While you were describing the view you have on file systems, how you will treat it as a tree: you could also, although I think it is much more error-prone, look at the process tree, say, whatever happens inside the process group, where of course time plays a very important role. But you said that you are explicitly not accounting for process spawning or killing. I think it would be a very interesting addition." So, this would add a lot of complexity, because on the one hand you have to consider concurrent execution of stuff. On the other hand, concurrency is of course one of the places where formal tools can really add a lot and give you a lot of benefit, because when you
have interleaving of processes, then there are so many possibilities that it is really impossible, well, very, very hard, to do it by hand, in your own brain, and this is where formal tools can really help you to analyze the situation. However, this concurrency stuff becomes much, much more complicated, and we would really like to abstract away from it and just look at sequential transformations of the file system. Thank you very much.
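To give a flavor of the pipeline sketched in the talk, where programs are compiled to a domain-specific language and proof obligations are discharged by SMT solvers, here is a toy obligation in SMT-LIB syntax, the standard input format of solvers such as Z3. Everything about the encoding is hypothetical, since the project's language and semantics are still being defined: the whole file-system state is collapsed into a single Boolean, "the directory exists", and the obligation asks whether `mkdir -p` applied twice can ever differ from `mkdir -p` applied once. An `unsat` answer would prove that step idempotent.

```shell
#!/bin/sh
# Emit a toy proof obligation in SMT-LIB syntax. The encoding is hypothetical:
# the file-system state is one Boolean ("the directory exists"), and `mkdir -p`
# is modelled as the function that makes it true.
OBL="${TMPDIR:-/tmp}/obligation.smt2"
cat >"$OBL" <<'EOF'
(set-logic QF_UF)
(declare-const dir_exists Bool)                    ; arbitrary initial state
(define-fun mkdir_p ((s Bool)) Bool true)          ; after mkdir -p, the dir exists
(assert (distinct (mkdir_p (mkdir_p dir_exists))   ; "running it twice ...
                  (mkdir_p dir_exists)))           ;  ... differs from running it once"
(check-sat)                                        ; expected answer: unsat
EOF
cat "$OBL"
# Feeding this to a solver, e.g. `z3 "$OBL"`, should answer `unsat`: no initial
# state makes the two runs differ, so the modelled step is idempotent.
```

The real proof obligations would of course quantify over file-system trees rather than a single Boolean, but the shape is the same: assert the negation of the property and ask the solver to show it is unsatisfiable.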