So, good morning and welcome to the second day of the conference. Some of you, I hope, had an invigorating trip to the hill, but those of us who are over the hill — I don't think we should get there. Anyway, it's a pleasure to have Alexandra Silva give the invited talk, on an algebraic framework to reason about concurrency. One of the great things about FSTTCS is that we do get in good young researchers who are at the top of their game, and Alexandra is one such person. She is a professor at University College London and got her PhD from Radboud University in 2010, working with Jan Rutten and Marcello Bonsangue. It's about twenty years since I first heard Jan Rutten talking about coalgebra, and Alexandra's thesis on Kleene coalgebras is something which is worth reading. She won the Presburger Award, given by the EATCS, in 2017; the BCS gave her the Roger Needham Award in 2018; and she received the Royal Society's Wolfson Award earlier this year. So, welcome.

Thank you. Thanks very much for the invitation; it is a great pleasure to be at FSTTCS. I was due to come here in 2010 — I had a paper — but this was a week before my PhD defense, and my supervisor at the time was very scared of me traveling to India a week before I had to defend. So he forbade me from coming, and it has been one of my biggest regrets not to have been able to come to India and present that paper. Sadly, the co-author who came and presented the paper became very ill in the months after he visited India, so my supervisor will always stand by that decision, pointing to that. Anyway, it's a great pleasure to be here today, and I'd like to spend the next hour telling you a little bit about some work I've been doing over the last three years. It is a summary of a collection of papers, but also a summary of some of the things I want to do in the coming years, and of the vision I have for reasoning about concurrency.
This is joint work with two of my students and some other collaborators in London and in the Netherlands. Today, for those of you who follow the news in the UK, is a very important day: it is election day, probably one of the most important elections we have had in the last few years, and it's all about Europe. Funnily enough, this project was developed entirely since my move to the UK, but it was also funded entirely by European funds, so I think it is quite telling that today we are deciding on a new prime minister and whether to leave the EU. So, like Ranko, I decided to include the European flag on my talk, and I hope for the best at the end of today for the UK.

We have a project website with a lot of the things I'm going to talk about. Some of these people are finishing their PhDs and are on the market: my PhD student Tobias Kappé, my other student Jana Wagemaker, and Paul Brunet, who is currently a postdoc in London but is definitely on the market. So if you want to hire some good people, this is a little advertisement for them as well.

The broad context of this work is program verification. We have programs, programs have bugs, and we want to find these bugs, because ultimately we would like to have programs with no bugs. These bugs can come from many sources. Sometimes we have bugs because the language in which we're programming does not have a very precise semantics; those of you who have programmed in C a little bit might have come across this phenomenon, in which you program something, you're convinced it's doing A, but it's actually doing B, and it's very subtle to figure out why the program is not doing what you're looking for. Sometimes the error is introduced by the compiler: you have a program, the language might even have a very nice semantics, but then you have a compiler, and whoever programmed the compiler might have made a mistake; a
compiler is a program, so it might have bugs as well. Then it might be that the hardware on which you're running the program is faulty: you're expecting a certain instruction to be executed in a certain way, but there's a problem with the hardware, and therefore the result you get out of your program is different from what you expected. The fact that you have all these sources of errors, and the fact that programs in general can get very complex very quickly, makes program verification very hard. It's a bit like an iceberg: you start looking for a bug and you go deeper and deeper; it's a never-ending task.

Things get even harder if you look at how the program executes in context, because nowadays a lot of these programs run in the cloud, with some distributed computation going on; or maybe a program is installed on one device but communicates with another device, data is transferred around, the result of your program depends on this data, and synchronizations are missing, and so on. The layers of complexity are never-ending when it comes to verification. And yet we would like to be able to assert properties about programs, and to give assurances that certain programs do what they are supposed to do.

So what you see as a trend in program verification is that you start with a program, or a collection of programs, in which you want to find a bug, but you don't analyze that program in its entirety: you abstract away what the program is doing, and this abstraction can take very different forms. You can abstract by saying: I'm going to look at my program without data values, so I'm actually just going to look at the control flow; maybe for the property you're looking for, that's enough. Maybe you don't care
about the data, or maybe you do care about the data, but only about certain intervals, so you can cluster your data: you say anything between 0 and 1 million is the same for me, anything above 1 million is different, and so you coarsen your data and end up with an input set or an output set that is much smaller, because you partitioned it. Or maybe you don't want to look at the bodies of your functions, and you just want to analyze how function calls work, so you mostly delete or abstract away what the program is doing and keep only the function calls with their variables. Or maybe you have a very specific property in mind, and that property talks about only one variable; then you can do what is called slicing, which is to go through your program and ignore anything that operates on other variables, so you build a sort of dependency graph for that variable, you get the collection of things that matter, and all the rest you can throw away. So there are all kinds of techniques you can use to make this task of finding bugs, or of analyzing programs, a bit easier, or at least feasible up to a certain point.

Let me start with a very naive example of what I mean by abstraction; it's mostly the type of abstraction I will be using in this talk. Imagine that you give the following task to a student: write a program that prints n smileys separated by a space. I don't tell you what n is, I just tell you it's some positive n, and you have this smiley character and a space character. My students came up with two different programs. The first program says: I initialize some counter i at one, and while i is smaller than n, I print the smiley and then I print the space, and I increase the counter; after the loop, I print a final smiley. The other
one initializes the counter at one and prints a first smiley, because I said that n is greater than zero, and then, while the counter has not reached n, it prints a space and then a smiley, and so on. So they give me these two programs, and a first sanity check would be to ask: are these two programs equivalent? Are they really doing the same thing? Who thinks they are doing the same thing? If you want to check whether they are doing the same thing, you could stare at them for a while, because they are small enough to stare at. But another way to check they are equivalent is to abstract away some of the details, and that's what I'm going to do, using this triangle. You can think of programs — especially sequential programs — as a sequence of actions: you have an action, then another action, maybe a choice between two actions, maybe a repetition of actions, as with this while loop. In essence, you can write this type of program almost as a regular expression, if you abstract away enough, and I'll show in a moment how. If one exploits this side of the triangle, in which programs can be written as regular expressions, then regular expressions and regular languages have a long tradition and a lot of results for doing algebraic reasoning, and also coalgebraic reasoning using automata. So you can do equivalence checking at this level: if you want to ask whether two programs are equivalent, you can check whether their abstractions are equivalent, and then go back and say something about the programs themselves. Of course, you cannot say everything about the programs, because you performed some abstraction to get the regular expression out of the program, but you can transfer some of what you learned in your algebraic reasoning at the lower level. So, for
instance, for these two programs, one could reason as follows. In the first program, the first character I print is a smiley, inside a while loop that keeps executing, and I don't know for how many iterations, since n is arbitrary; so I can abstract away the whole while loop using a star, saying: I print a smiley and then a space, and I do this a finite number of times, for an arbitrary n, and at the very end I print another smiley. The other program starts by printing a smiley at the beginning and then does the same thing in the while loop, but with the space and the smiley reversed. So now the question is: are these two regular expressions the same? For those of you who know a little bit about regular expressions and their syntax, you will see that these two regular expressions are almost the same, except that the brackets have somehow slid to the right. This is an instance of an axiom called the sliding rule, so in fact these two regular expressions are the same, and hence these programs — or their abstractions, at least — are the same. If I think of a regular expression and its language as denoting the traces a program can produce, this tells me that the traces of the two programs are the same. The abstraction here basically ignores instructions like print and replaces the while loop by the star; there's a little bit of work there, but once that is done, we can just look at the expressions. Another way of checking that they are equivalent is to translate the expressions to finite automata — non-deterministic finite automata, in this case — that accept the languages of those expressions, and then use these automata to run a bisimulation game showing that they accept the same language.
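As a sanity check, the two student programs and the sliding-rule argument can be replayed in a few lines of code. This is a sketch of my own, not from the talk; I write the smiley as `:)` and use the letters `S` and `P` for smiley and space in the language-level check.

```python
# The two student programs from the talk, for "print n smileys
# separated by spaces" (returning the string instead of printing).

def program_1(n):
    out, i = "", 1
    while i < n:          # loop body: smiley, then space
        out += ":) "
        i += 1
    return out + ":)"     # final smiley after the loop

def program_2(n):
    out, i = ":)", 1      # first smiley printed up front
    while i < n:          # loop body: space, then smiley
        out += " :)"
        i += 1
    return out

# The two programs agree for every n > 0.
assert all(program_1(n) == program_2(n) for n in range(1, 50))

# The abstractions are the regular expressions (SP)*S and S(PS)*,
# with S = smiley and P = space.  The sliding rule says they denote
# the same language; we check this by enumerating words.
lang_1 = {"SP" * k + "S" for k in range(50)}   # (SP)*S
lang_2 = {"S" + "PS" * k for k in range(50)}   # S(PS)*
assert lang_1 == lang_2
```

The brute-force language check is of course no substitute for the axiomatic proof, but it makes the sliding rule (ab)\*a = a(ba)\* tangible.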
At the level of regular languages, regular expressions and automata, there has been a great deal of work on equivalence. For sequential programs, then, we know roughly what to do when we want to abstract away and analyze behavior. Moving on to concurrent programs: this is an example taken from a web page on weak memory models. It's not super important, but basically it's a program with two threads, T0 and T1, operating on variables x and y, and each thread has a local register: thread 0 has register r1 and thread 1 has register r2. I execute these two threads in parallel, I know that when I start running the program x is 0 and y is 0, and I would like to assert at the end that the two registers cannot both be 0. The notation used here is a bit confusing, but what I want to assert is the negation of this conjunction, which I wrote here. Basically, I'm saying the following. This is a concurrent program, and I don't know in which order the instructions of these threads will be executed: it could be line a, line b, line c, line d; it could be a, c, b, d; it could be a, c, d, b — any order of these lines, where I assume each line is atomic. But given that the lines are atomic, even knowing nothing about the order, I know that if I execute either a or c first, then either x or y will be 1, so in the next steps eventually either r1 or r2 has to become 1; hence at the end of the execution it is impossible that both registers are 0. When can r1 be 0? For r1 to be 0, it means that c hadn't executed yet, because if c executes then y is 1. And if I made it to the end with r1 = 0, it also means that a has already executed, because within each thread
execution is sequentially consistent: each thread executes its lines in program order. So if I made it to the end with r1 = 0, then x is 1; but if x is 1, then whenever line d executes, r2 has to become 1. And the other way around: if I made it to the end with x being 0 and r2 being 0, then y is 1, hence when line b executes, r1 must become 1. So this two-line program takes about five minutes to explain, which is slightly puzzling, but I want to illustrate how things get much more interesting once you add concurrency. There are a lot of examples like this, especially in this weak memory model world: programs of two threads with two lines each, with simple conditions like the ones on this slide, and yet reasoning about them is really hard. Techniques to reason about them have popped up in the literature — all kinds of things in the last few years — and we have yet to find an automated way to fully reason about such programs. So this is the motivation for the work I want to show you: I would like to be able to reason about these programs in the same way as we did before for sequential programs. I have a dream, a wish, that regular expressions enriched with special constructs should enable us to reason algebraically about these programs, and hopefully should also guide us to produce the right type of automata to capture the control flow of these programs, and to reason about control and data flow. For this program, for instance, I would like to be able to write an expression of this kind: if I start with x = 0 and y = 0, and I compose that with the two threads T0 and T1 in parallel, then this whole execution should imply that at the end I do not have r1 = 0 and r2 = 0. And then I would like to verify this property by some form of equivalence checking, as I did before for regular languages.
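Under the sequential-consistency assumption made above (each line atomic, threads interleaved in program order), the claim can be checked by brute force over all interleavings. This is a sketch of my own, not from the talk; note that under a weak memory model the outcome r1 = r2 = 0 may actually be observable, which is exactly what makes these litmus tests interesting.

```python
# Enumerate all interleavings of T0 = [a, b] and T1 = [c, d] that
# preserve each thread's program order, run them from x = y = 0,
# and collect the final (r1, r2) pairs.

def interleavings(t0, t1):
    if not t0:
        yield list(t1); return
    if not t1:
        yield list(t0); return
    for rest in interleavings(t0[1:], t1):
        yield [t0[0]] + rest
    for rest in interleavings(t0, t1[1:]):
        yield [t1[0]] + rest

T0 = [("x", 1), ("r1", "y")]   # (a) x := 1   (b) r1 := y
T1 = [("y", 1), ("r2", "x")]   # (c) y := 1   (d) r2 := x

outcomes = set()
for schedule in interleavings(T0, T1):
    s = {"x": 0, "y": 0, "r1": None, "r2": None}
    for var, rhs in schedule:
        s[var] = s[rhs] if isinstance(rhs, str) else rhs
    outcomes.add((s["r1"], s["r2"]))

# The assertion from the slide: both registers cannot end up 0.
assert (0, 0) not in outcomes
assert outcomes == {(0, 1), (1, 0), (1, 1)}
```

With two instructions per thread there are only six interleavings, so exhaustive enumeration is trivial here; the point of the algebraic approach discussed next is to reason about such programs without enumerating schedules.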
That is the wish I have, and it would somehow help us verify at least some properties of concurrent programs. The thing about concurrency is that, as I said, it gets quite hard. When we started this project back in 2016, I was reading a paper by Hoare and Struth on Concurrent Kleene Algebra, and I thought they basically had the answer, and I just had to figure out how to apply it to these examples. But as we went on, I realized that things are much harder than they looked to start with; it's a bit like that xkcd cartoon — six months later, or in this case three years later, we're still working on it.

Let me give you a roadmap of this talk in one slide. The idea is as follows. If one starts with Kleene algebra — and this goes back to the 50s — one can analyze sequential programs in the style of what I showed a few slides ago. If you then want to talk about control flow, so if you want to have variables and analyze their Boolean values, you move to a slightly richer language called Kleene algebra with tests (KAT), introduced in the 90s by Kozen and people in his team. On another dimension, back in 2009, Kleene algebra was enriched with a parallel operator, giving rise to what people call concurrent Kleene algebra (CKA). Our original idea was: we have these two things, we should just be able to merge them and get a system, which we would call CKAT, allowing us to reason about programs that have both control flow and concurrency. In fact, when we started looking at it, there was a paper by Peter Jipsen already hinting that this should be possible, and a paper by Peter O'Hearn and others from 2015 also hinting in that direction. The problem was that there were a lot of hints and a lot of conjectures, but very few proofs of what actually worked, and so
our initial plan was just to fill the gaps in these papers. What I will show you is that filling the gaps revealed that this strategy was not quite the right one. So let's start from the beginning and spend a couple of minutes going through the basic definitions and results on Kleene algebra. Kleene algebra is the algebra of regular expressions. It was introduced in the 50s by Stephen Kleene, and it gives us a nice way of reasoning about traces, or regular languages, and of capturing patterns: regular expressions are a nice, compact syntax for patterns in traces. For instance, you can write the multiples of three in binary, or all words over an alphabet. Kleene, in his 1956 paper, gave a result that became a cornerstone of theoretical computer science: regular expressions and finite automata are equivalent, in the sense that the languages denoted by regular expressions — the regular languages — are exactly the languages accepted by deterministic finite automata. That translation between syntax and operational semantics in terms of automata became a technique exploited in equivalence checking: sometimes using expressions is very convenient, because it's easier to capture the pattern, but when it comes to implementing, say, an equivalence-checking algorithm, it is traditionally easier to translate the expression down to an automaton and check equivalence at the automaton level. This correspondence between syntax on the one hand and operational semantics on the other — Kleene's theorem — has become a standard tool in theoretical computer science for checking equivalence, and the observation was later extended to other languages. Kleene algebra allows us to capture patterns, but it doesn't allow us to talk about control flow. If we are thinking at the program-abstraction level, the only things it lets us talk about are
atomic actions, which are the letters of the alphabet; an execution aborting or skipping, which are the 0 and the 1; a non-deterministic choice; a sequential composition; and the star, which lets us talk about repetition. In his paper, Kleene — and later other people — proposed axioms, equations, that enable us to reason algebraically about equivalence: things like non-deterministic choice being idempotent, commutative and associative, and also — and this goes back to Kozen in the early 90s — two implications saying that the star is a fixed point, and in fact a least fixed point, so that iteration can be characterized as a least fixed point. This set of equations was proven in the 90s to be sound and complete for language equivalence: two regular expressions are provably equivalent if and only if the regular languages they denote are equal. This is quite powerful, because it's a finite axiomatization and you can really use it to do program transformations: if I give you a program and its abstraction, you can use these equations to reason about it, or you can do the translation to automata and do the reasoning there. In any case, the correspondence between automata, languages and regular expressions is very tight, and having both the soundness and completeness theorem from Kozen in the 90s and Kleene's theorem from the 50s gives you this loop: you can reason about traces in any of these formalisms and transfer equivalences between them, which is quite neat. Attention to equivalence algorithms came back a few years ago: there is the famous algorithm of Hopcroft and Karp from the 70s, which is nearly linear, for checking equivalence, and then around 2012 Filippo Bonchi and Damien Pous improved on that algorithm, with a new technique to check equivalence.
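For reference, the axioms just mentioned, in the standard presentation going back to Kozen's completeness result, are the following (writing $+$ for choice, juxtaposition for sequencing, and $\leq$ for the induced order $e \leq f \iff e + f = f$):

```latex
\begin{align*}
& e + (f + g) = (e + f) + g, \quad e + f = f + e, \quad e + 0 = e, \quad e + e = e \\
& e(fg) = (ef)g, \quad 1e = e = e1, \quad 0e = 0 = e0 \\
& e(f + g) = ef + eg, \quad (e + f)g = eg + fg \\
& 1 + ee^{*} = e^{*}, \qquad 1 + e^{*}e = e^{*} \\
& f + eg \leq g \implies e^{*}f \leq g, \qquad f + ge \leq g \implies fe^{*} \leq g
\end{align*}
```

The last line is the least-fixed-point property of the star mentioned in the talk: $e^{*}f$ is the least solution of $g = f + eg$.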
So this problem of equivalence of regular languages is still an active research topic. Now, if we want to add some control flow, meaning we want to encode basic imperative programs with if-then-else or while loops, what Kozen came up with in the 90s is the following. Take your Kleene algebra, look at your alphabet, and split the alphabet into two parts: one part you use to talk about program actions, as before, and the elements of the other part you call tests, from which you generate a Boolean algebra; you then use that Boolean algebra to encode the tests occurring in the control structures. So if I have an if-then-else — if b then p else q — I can think of it as a regular expression in this new language that looks like this: b followed by p, plus (non-deterministic choice) not-b followed by q. For the while loop, I iterate on b followed by p, and at some point, when I exit the iteration, not-b should be true: (b·p)* followed by not-b. If one does this encoding, then the traces of these regular expressions give you exactly the traces you would expect in the control flow graph of the program. Let me make this a bit more precise. Every such program — every expression — will now be assigned not just a regular language over Σ, like before, but a regular language over an extended alphabet, in which a trace is an element of At — the atoms of the Boolean algebra generated from the tests — followed by a program action, followed by an atom, and so on. The way to think about it is this: I have a program and a bunch of Boolean variables, and every program action can change the values of my Boolean variables. So the behavior of my program is always a sequence: a state, which is an atom, then a program action, and that program action might change
the state, so I need the next atom, and so on: for every action I have an initial atom, and then the atom after I execute the action. For instance, for the if-then-else, the semantics would be: an atom capturing the fact that b is true, followed by p, and then — because I know nothing about what p does in this case — an arbitrary atom β; and on the other branch, an atom in which b is false, then q (that's a typo on the slide, it should be q), and then a β. For the while loop, I have traces of increasing length: either I exit the loop immediately, because b was not true, and I never see p; or b was true and I saw p once, but then b became false, so I exited the loop; or I executed p twice; and so on. You have this new type of language, but in essence, if you ignore the fact that there are these new symbols, the atoms, these are still regular languages, and I can still reason about them as before using automata. KAT was an object of study in the 90s, and indeed, in a similar vein to Kleene algebra, an axiomatization of equivalence was proposed, a Kleene theorem was proved, and algorithms for automated equivalence checking were given; there are some nice applications, for instance to the verification of compiler optimizations. So this strategy — you start with an abstraction, in the first instance Kleene algebra, then you extend it with tests, and suddenly you can say a little more about the program: not just what the actions are, but also how the control variables change — has been applied to other languages, and I'll briefly show you that. This is what we call the KAT tower. Starting from Kleene algebra, KAT was introduced by Kozen in the 90s to reason about simple imperative programs.
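The guarded-string semantics sketched above can be made concrete with a tiny interpreter. This is a sketch of my own, not from the talk, for a single test b with the two atoms written `"b"` and `"~b"`; sequential composition is the usual fusion product, where consecutive atoms must match and are merged.

```python
# Guarded strings are tuples alternating atoms and actions, e.g.
# ("b", "p", "~b").  Tests denote length-1 strings; an action p
# denotes all strings (alpha, "p", beta).

ATOMS = ["b", "~b"]

B     = {("b",)}                                          # test b
NOT_B = {("~b",)}                                         # test ~b
P     = {(a1, "p", a2) for a1 in ATOMS for a2 in ATOMS}   # action p

def seq(xs, ys):
    # fusion product: glue x and y when the last atom of x
    # equals the first atom of y
    return {x + y[1:] for x in xs for y in ys if x[-1] == y[0]}

def star(xs, max_iter):
    # bounded approximation of xs*: union of xs^0 .. xs^max_iter
    result = power = {(a,) for a in ATOMS}   # xs^0 = identity
    for _ in range(max_iter):
        power = seq(power, xs)
        result = result | power
    return result

# while b do p  =  (b.p)* . ~b
while_bp = seq(star(seq(B, P), 3), NOT_B)

assert ("~b",) in while_bp                       # exit immediately
assert ("b", "p", "~b") in while_bp              # one iteration
assert ("b", "p", "b", "p", "~b") in while_bp    # two iterations
assert ("~b", "p", "~b") not in while_bp         # cannot run p when b is false
```

The bounded star is enough to see the traces of increasing length described in the talk; a real decision procedure would of course work with automata rather than enumerating iterations.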
About six years ago, we extended it with networking constructs to talk about reachability in networks, and later with a probabilistic choice that allowed us to talk about congestion; currently we are looking at extending it with concurrency, which is where the two lines of work I'm doing join. The idea of these extensions is that, for every extension, we want the semantics and the algorithms of the base language to be preserved. In the case of KAT, as I said, in the 90s they did the exercise again: an axiomatization, a Kleene-like theorem, and algorithms for equivalence. All the extensions we've been developing in the last few years have the property that we try to hold on to the nice decidability properties of regular languages; that restricts a little the properties we can verify, but gives us a handle on how fast and how well we can verify certain properties. The reason Kleene algebra is a good basis for a lot of these languages is that it has a compositional semantics, meaning that you can verify larger and larger programs by looking at their smaller components; this enables scalable verification.

That was a little parenthesis; now back to the main topic of the talk. We looked at Kleene algebra and at Kleene algebra with tests. If you remember, in that square, going down, there was concurrent Kleene algebra. Concurrent Kleene algebra (CKA) was introduced in 2009 as an extension of Kleene algebra with a parallel operator that says, as you'd expect: if I have a program p and a program q, I can put them in parallel, and they should execute concurrently. The semantics proposed in the original paper was in terms of languages of pomsets, or partially ordered multisets, and the idea behind the semantics is that instead of having regular languages of traces, we now need to capture the fact that certain actions
are going to happen truly in parallel: we know that certain actions happen sequentially, but there will be pairs of actions whose order we do not know. For instance, the program a followed by (b in parallel with c) followed by d would be assigned as semantics a pomset that looks as follows: I know that a is before b and c, and I know that b and c are before d, but I know nothing about the relative order of execution of b and c; this is all I know about the traces of this program. So in 2009 this was the state of the art on CKA. Unfortunately, the boxes we would like ticked for this extension were not ticked: the original paper didn't provide a sound and complete axiomatization, or a decision procedure, or a Kleene theorem. The authors did propose some axioms they deemed reasonable: the parallel operator is commutative and associative; it interacts with non-deterministic choice, distributing over it; if I put a program in parallel with a program that aborts, the whole thing aborts (this axiom is slightly controversial, but let's take it at face value); and if I put a program in parallel with skip, that is the same program. The other thing they proposed was how to reason about the interaction between sequential composition and parallel composition: if I have two threads in parallel, each of which is itself a sequential composition, what can I say about the interaction of e and g, and of f and h? What they proposed in the 2009 paper is that the following should hold: e in parallel with g, followed by f in parallel with h — which is what I'm describing here — should not give me more behavior than first sequentially composing e with f and g with h, and then putting those in parallel. This axiom was referred to as the exchange law, because it exchanges sequential composition with parallel composition.
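To connect the pomset above back to traces: its linearizations — the total orders consistent with the partial order — are exactly the interleaved executions it allows. Here is a small sketch of my own, not from the talk, for the pomset of a·(b ∥ c)·d.

```python
# The pomset of a.(b || c).d: a precedes b and c, b and c precede d,
# and b, c are unordered.  Each event maps to the set of events that
# must come before it.

ORDER = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}

def linearizations(order, done=()):
    # enumerate all total orders consistent with the partial order
    if len(done) == len(order):
        yield "".join(done)
        return
    for event, preds in order.items():
        if event not in done and preds <= set(done):
            yield from linearizations(order, done + (event,))

# Only b and c may be swapped; everything else is fixed.
assert set(linearizations(ORDER)) == {"abcd", "acbd"}
```

The point of pomset semantics, though, is that the pomset itself is the denotation: b and c may run truly simultaneously, which a set of interleavings alone does not express.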
The exchange law was what Tony Hoare and collaborators said captures true concurrency: it has interleaving as a special case, but captures more. In many algebraic frameworks for reasoning about concurrency, parallel composition is equated with interleaving: you say e in parallel with f should be the same as first doing e and then f, or first doing f and then e. What these authors argued is that you want more than that: you don't want to equate parallelism with interleaving, you want to capture traces in which actions can be truly parallel, and this was the equation they proposed as capturing that. So when we started looking at the axiomatization they had proposed, we decided to try to prove that it was actually sound and complete, and we started this project with a student, with the goal of ticking all the boxes. In 2017 we showed that you can give a Kleene theorem for pomset languages: if you take the semantics these authors proposed, using pomset languages, you can show that there is a well-defined class of automata that accepts exactly the languages denoted by CKA expressions. We then went on to show that the axioms they proposed were indeed sound and complete, and we characterized the free model of the algebra; this was in 2018. So in some sense we ticked the boxes for CKA, which made us happy, and we thought: next we do CKAT, and it should be easy as well, and then hopefully soon we would have concurrent NetKAT, the concurrent language for reasoning about networks that was our ultimate goal back in 2016. The starting point for CKAT was a paper by Peter Jipsen from 2014, in which Peter said: you start with KAT, you add the CKA parallel operator — KAT is basically regular
expressions, as I said before, but with tests, which are Boolean algebra terms — so you basically bring CKA and KAT together, and you give semantics in the usual way, meaning you take pomset languages, as the authors of CKA proposed, but now not over plain actions, as with Kleene algebra, but over those traces that alternate atoms and actions. Peter proposed a semantics in which terms are assigned pomsets containing both atoms and program actions, and he showed that the usual algebraic results hold for this language; he was very happy about it and published a paper, and we were very happy to find this paper and thought we could take it and use it. Then we started playing with the system a little, and we discovered a slight problem. If you take the system Peter proposed — you take CKA, you take KAT, all the axioms, and you put them together — you can prove the following. Start with a program that has a test p, then an execution e, then the negation of p, and do a few applications of the rules of CKA. In the first step I use the fact that sequential composition is always below parallel composition — this has to do with interleaving. Next I use the axiom from CKA that says anything in parallel with 1 equals itself. Then I use the exchange law, the CKA law; then the fact that e after 1 is e. Now I am down to this expression: p and p̄ sequentially composed, which, because these are tests, is the same as the conjunction in the Boolean algebra; but p and not-p is bottom, bottom is 0, and 0 in parallel with e is 0. So what have I just proved? I have proved that p·e·¬p is the same as 0, which in fact says that every test is an invariant of every program.
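Written out, the derivation just described goes as follows (my reconstruction of the steps named in the talk, with $\leq$ the natural order $e \leq f \iff e + f = f$ and $\bar{p}$ the negation of the test $p$):

```latex
\begin{align*}
p \cdot e \cdot \bar{p}
  &\leq p \cdot (\bar{p} \parallel e)
     && (\text{sequential composition is below parallel, i.e.\ interleaving}) \\
  &= (p \parallel 1) \cdot (\bar{p} \parallel e)
     && (x \parallel 1 = x) \\
  &\leq (p \cdot \bar{p}) \parallel (1 \cdot e)
     && (\text{exchange law}) \\
  &= (p \wedge \bar{p}) \parallel e
     && (1 \cdot e = e; \text{ on tests, } \cdot \text{ coincides with } \wedge) \\
  &= 0 \parallel e \;=\; 0
     && (p \wedge \bar{p} = 0; \ 0 \parallel x = 0)
\end{align*}
```

Since $0 \leq p \cdot e \cdot \bar{p}$ holds trivially, the two sides are equal.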
E, at the end not-P is true. So this being equivalent to 0 is actually equivalent to the Hoare triple {P} E {not-P}, which basically tells me that every test is an invariant of every program. So this is an example of something that, when Peter did it algebraically, made sense; his paper has no mistake, I mean the paper is completely sound. The problem is the domain-specific use you want to put it to: for the verification task at hand, if you look at the expressions as programs, this makes no sense. If I suddenly have an algebraic framework that tells me that every test is an invariant of every program, then I won't be able to verify anything very interesting with it. So there must be something in Peter's framework that went wrong; there was something in this combination of CKA and KAT that didn't go very well for the task at hand. I said before that one of the axioms was slightly controversial, and it's one of the axioms I used in that derivation. So maybe we should drop the axiom that E in parallel with abort is abort; or maybe it's the case that the exchange law, which was proposed as the axiom of true concurrency, is not quite there; or maybe it's the fact that I shouldn't be replacing things under parallel so freely. So there seem to be ways in which we could look for solutions. However, if we take a step back and think about the type of programs we're looking at, we realize that maybe the problem is elsewhere. This goes back to Berkstra and Ponce in 2011, but let's say you have a chicken crossing a road, and it's a very smart chicken, so it looks to the left and it looks to the right. But while it's looking to the right, from the left there comes a bike, and so when the chicken crosses the road, it gets hit by the bike. The chicken did look left and did look right, and there was nothing; the problem was that while the chicken was looking right, something happened on the
left that changed the state of the road: the road suddenly had a bike on it. And that's exactly the insight that we took in order to try to fix the problem with CKAT. All along, in KAT, for sequential programs, we identified the conjunction from the Boolean algebra with sequential composition, and that was really neat for KAT, because it meant that the Boolean algebra of tests was actually a subalgebra of the Kleene algebra we were looking at. However, once you have concurrency, you have interference with the state of your variables: between the sequential composition of P and Q there could be something in between. If this is executed in context, in parallel with another program, there could be something in between that actually changes the value of these variables. So you cannot identify the conjunction and the sequential composition anymore. You can still say something about it: you can say that the conjunction will be below the sequential composition, but you cannot have an equivalence. You also cannot identify the program skip with the top of the Boolean algebra; that's not valid anymore, because when you put 1 in parallel, and top in parallel, these give you different semantics. So this brought us to changing the diagram we had before a little bit: instead of moving from Kleene algebra to Kleene algebra with tests, as before, we now have a new system, which we call Kleene algebra with observations, which distinguishes the Boolean algebra structure, namely the conjunction, from the sequential composition. With that distinction you can actually develop a new algebra, Kleene algebra with observations, that behaves exactly like Kleene algebra with tests when you're looking at sequential programs, but gives you the power to combine it with the concurrency operator without generating the interference that caused the problem Peter had in his system. This is work that appeared at CONCUR this year, in early September, and there's an upcoming paper under submission in which we actually merge the two
systems, and we show that at this stage you can indeed repeat what Peter did and show soundness, completeness, and decidability. So, going back to the concurrent program we started with some slides ago: what we did was basically take the axioms of Kleene algebra and Boolean algebra, keep conjunction and sequential composition distinct, and then add the concurrent Kleene algebra axioms. Based on that, you can then prove a soundness and completeness theorem, which says that two expressions are provably the same if and only if their semantics in terms of pomsets are the same, and you can also give a decidability procedure. And this brings me pretty much to the end of my talk. I hope I gave you a flavor of this program of using regular expressions and Kleene algebra as a basis for verification frameworks. This was the square we started with in 2015-16, hoping we would be able to fill in the dotted bits very easily; unfortunately, this required a bit more work and some changes along the way, to develop an algebraic framework to look at both control flow and concurrency, and to give a sound and complete axiomatization and decidability. We are currently instantiating this CKAO framework to actually do verification of the little litmus tests that come from weak memory models, and we're looking at a sort of partial-function memory model to do the verification task. Soon, hopefully, we'll go back to our original motivation, which was an application in network verification, to look at things like stateful firewalls and other network tasks, and to verify that they indeed do what they are supposed to be doing. And this brings me to the end, and I'm happy to take some questions.
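The problematic CKAT derivation discussed earlier in the talk can be written out as follows. This is my reconstruction from the steps mentioned above, writing $\parallel$ for parallel composition, $\bar{p}$ for the negation of the test $p$, viewing $p$ as $p \parallel 1$, and using the exchange law $(e \parallel f)\cdot(g \parallel h) \leq (e \cdot g) \parallel (f \cdot h)$:

```latex
\begin{align*}
  p \cdot e \cdot \bar{p}
    &= p \cdot (1 \parallel e) \cdot \bar{p}
        && 1 \parallel e = e \\
    &\leq (p \cdot 1 \cdot \bar{p}) \parallel (1 \cdot e \cdot 1)
        && \text{exchange law, applied twice} \\
    &= (p \cdot \bar{p}) \parallel e
        && e \cdot 1 = 1 \cdot e = e \\
    &= (p \wedge \bar{p}) \parallel e
        && \text{in CKAT, } \cdot \text{ and } \wedge \text{ coincide on tests} \\
    &= 0 \parallel e
        && p \wedge \bar{p} = 0 \\
    &= 0
        && 0 \parallel e = 0
\end{align*}
```

Since $0 \leq p \cdot e \cdot \bar{p}$ always holds, this gives $p \cdot e \cdot \bar{p} = 0$, i.e. the Hoare triple $\{p\}\,e\,\{\bar{p}\}$ for every test $p$ and every program $e$: every test is an invariant of every program.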
For Kleene algebra with tests there is propositional dynamic logic, right, which works well with it; is there any logic for your algebra, CKAO?

That's a good question, let me think for a second. There has been work on a sort of concurrent PDL, which I would hope will have a connection with this, but I haven't looked at it, and this work is just fresh, so I don't think anyone has looked at the connection yet.

Maybe I have missed something, but you sort of go away from the Boolean algebra of tests to these observations; does this really expand the pomset model with observations? Can you say something about that?

So, I should have been a bit more careful: the observations themselves still have a Boolean algebra structure. The only thing we do is distinguish between the sequential composition and the conjunction, so we still have the Boolean algebra structure there.

So you had on your slide 1 not equivalent to top; is that something that you can derive from P E P-bar equals 0? I mean, how do we understand this? Do we understand it as an inconsistency, which gives you all kinds of things?
So the only thing I meant by that is that we removed this axiom from the system, because we don't want it. You see, when you have P and Q smaller or equal than P followed by Q, that still holds; but if you had 1 equal to top, then you could derive back that conjunction and sequential composition are the same. So if you have 1 equals top, you basically make the identification axiomatically. You see what I mean: before, conjunction was sequential composition and 1 was equal to top; those are axioms of Kleene algebra with tests, because they basically say that the Boolean algebra is a subalgebra of the Kleene algebra. You identify the top with the 1, the bottom with the 0, the disjunction with the plus, and the conjunction with the sequential composition. Now, if I want to remove the identification of conjunction and sequential composition, I also have to remove 1 equals top, because if I leave it in, I can actually derive that sequential composition and conjunction are still the same. I can show you the derivation offline.

Okay, in the new system, then, what changes?

So in the new system, whenever you have an expression with P and Q: if you have the conjunction of top and Q, you can derive that it is Q; but if you have top followed by P, you cannot derive that that's P. Top is now a Boolean value: between asserting that top is true, so that all variables are true, and the execution of P, some interference could have happened in the meantime. That's the point, but I can show you this offline, it might be easier.
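A sketch of the offline derivation alluded to here, as I reconstruct it: assume the KAO axiom $p \wedge q \leq p \cdot q$ together with $1 = \top$, and use the Boolean facts $p \leq \top$ and $q \leq \top$.

```latex
\begin{align*}
  p \cdot q &\leq \top \cdot q = 1 \cdot q = q
      && \text{since } p \leq \top \text{ and } \top = 1 \\
  p \cdot q &\leq p \cdot \top = p \cdot 1 = p
      && \text{since } q \leq \top \text{ and } \top = 1
\end{align*}
```

So $p \cdot q$ sits below both $p$ and $q$; combined with the axiom $p \wedge q \leq p \cdot q$, and after arguing that $p \cdot q$ again behaves as a test (the step that takes actual work), this collapses $p \cdot q$ and $p \wedge q$, which is exactly the KAT identification the new system is designed to avoid. Hence $1 = \top$ has to go as well.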
Thanks for the interesting talk. I'm just wondering about the original problem that you started out with: how do you solve that, the assertion at the end of the concurrent program? Do you need a whole logic, or...?

So, you see that expression written there, x equals 0, the expression written below the blue box: this is now an inequality that I can verify in the new system. It basically tells me that if I start with x equals 0 and y equals 0, and then I execute t0 in parallel with t1, where t0 and t1 are expressions just denoting these assignments, then this must imply that r1 equals 0 and r2 equals 0 cannot happen, so the negation of it is true. This is the encoding of Hoare triples, propositional Hoare triples, into Kleene algebra with tests; we use the same encoding. And we give decidability, so we can check this. Now, in order to look at the data values that are inside t1 and t0, we are looking at this partial-function model, which will make the decision procedure even more efficient.

Once again, thanks for the very nice talk. My question is related to what Deepak asked: how do these procedures compare with the automata-theoretic methods?

In a sense they are automata-theoretic: the underlying procedure for decidability is automata-based. I didn't show the details, but there is an underlying procedure that is automata-based. I mean, something has to give, right? There's an undecidable problem in there, so the programs we can look at and the properties we can look at are very restrictive, because I'm restricting things: the number of tests, the number of variables, and the number of values that you can put in these variables are all finite. So I restrict everything to finite; it's part of my abstraction. Part of my abstraction is to say I have a finite domain.

But I guess there are some recent results which show that with release-acquire memory models, even with a finite domain, there is undecidability.

Yeah, you're right, but again, consider the type of things you can write with these expressions: the branching here will always be finite, and I don't have a parallel star, I cannot nest the parallel under a star, so I guarantee that I remain in this decidable fragment. I agree that it's restrictive, but as I said at some point in my talk, it was also part of the goal to remain in that decidable fragment. Even though we might be restricting ourselves in terms of the properties, it's still enough to look at some of these small programs, and for the networking application we believe it's still enough to look at some interesting reachability properties. So for that we're happy; we kind of compromise on the class of properties.

My question is actually similar to what he asked. In the example, you're taking each control flow statement to be a Boolean, like a state variable, right? So if I have, say, a comparison like x is less than 1 or something, you treat the entire expression as one state variable, which is then true or false. But what if you want to, say, include all the natural numbers in it?

No, you cannot do that in this system. Also for the tests: of course you cannot do things like x is smaller than y plus 1; that's not possible. It's always x compared to a constant: you have a finite number of constants, you have a finite number of variables, and you're comparing those variables with constants. Then of course you can encode a limited form of copying and so on.

But are there impossibility results, saying that you cannot do that, or could you maybe get a system which is sound but not complete?

I would say it should be possible, by other results I know from verification. We never looked at it, but it should be possible to get a sound framework in
which you can do some form of copying and comparison and infinite data values.

When you said that you use automata, what happens to the Brzozowski derivatives, in both the syntax and the semantics?

Yeah, so we have a construction that uses derivatives, very similar to Brzozowski's, in order to obtain something that we call pomset automata, an automaton model for pomsets, which go back to Gischer, I believe in the '80s. So yes, there's a construction similar to Brzozowski derivatives that you can use there.

Thanks very much. Thank you.
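As a small illustration of the derivative construction mentioned in this last answer, here is a minimal Brzozowski-derivative matcher for ordinary regular expressions. This is not the pomset-automata construction from the talk (which also handles parallel composition); the names and structure below are mine, for illustration only.

```python
# Minimal Brzozowski derivatives for ordinary regular expressions.
# States of the implicit automaton are expressions; transitions are derivatives.
from dataclasses import dataclass

class Re:
    """Base class for regular expressions."""

@dataclass(frozen=True)
class Empty(Re):   # 0: the empty language
    pass

@dataclass(frozen=True)
class Eps(Re):     # 1: only the empty word
    pass

@dataclass(frozen=True)
class Sym(Re):     # a single action/letter
    c: str

@dataclass(frozen=True)
class Alt(Re):     # e + f
    l: Re
    r: Re

@dataclass(frozen=True)
class Seq(Re):     # e ; f (sequential composition)
    l: Re
    r: Re

@dataclass(frozen=True)
class Star(Re):    # e*
    e: Re

def nullable(e: Re) -> bool:
    """Does e accept the empty word?"""
    if isinstance(e, (Eps, Star)):
        return True
    if isinstance(e, Alt):
        return nullable(e.l) or nullable(e.r)
    if isinstance(e, Seq):
        return nullable(e.l) and nullable(e.r)
    return False  # Empty, Sym

def deriv(e: Re, c: str) -> Re:
    """Brzozowski derivative of e with respect to the letter c."""
    if isinstance(e, (Empty, Eps)):
        return Empty()
    if isinstance(e, Sym):
        return Eps() if e.c == c else Empty()
    if isinstance(e, Alt):
        return Alt(deriv(e.l, c), deriv(e.r, c))
    if isinstance(e, Seq):
        d = Seq(deriv(e.l, c), e.r)
        return Alt(d, deriv(e.r, c)) if nullable(e.l) else d
    if isinstance(e, Star):
        return Seq(deriv(e.e, c), e)
    raise TypeError(e)

def matches(e: Re, word: str) -> bool:
    """Run the derivative 'automaton' letter by letter, then check acceptance."""
    for c in word:
        e = deriv(e, c)
    return nullable(e)

# (a ; b)* accepts "", "ab", "abab", ...
r = Star(Seq(Sym("a"), Sym("b")))
print(matches(r, "abab"))  # True
print(matches(r, "aba"))   # False
```

Equivalence checking, and hence a decidability procedure, works by exploring derivatives of two expressions in lockstep, since there are only finitely many derivatives up to the axioms; the CKAO setting replaces words by pomsets and expressions by pomset automata.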