First of all, I suspect that if you read your program, I am here in the last talk because I am representing the sponsors. So, I am the only FSTTCS sponsor, I think, who will get to give a talk, but they pushed me to the last slot. So, anyway, as Akshay said, this is an old, old topic, and I was somewhat surprised to find that it was not as settled as I believed it was. So, I will try to explain why. You do not have to exercise your mind in this talk; I think Dietrich has already wiped out everybody's stamina for the afternoon. So, there is going to be no algebra in this talk. So, I do not know if it has got AL or A; it is not even a G, it is maybe a semi-precious gem, I am not sure. So, I am going to start with something very weird. The reason I am giving this talk is about robots. There was this group at MPI who have been looking at a kind of process calculus for robots. They call it a motion session calculus. I do not think they wanted to call it that, but they have been forced to call it that, and I will explain why. Basically, it is a process calculus with primitives for communication because, as you would expect, these robots are autonomous agents, so they have to talk to each other. But they also move around. So, there has to be some abstract way of representing the fact that they are moving around and taking time, and they have to periodically synchronize and all that. So, the reason I think they are forced to call this a motion session calculus is because they were forced to use this thing called multiparty session types. So, their problem was the following: you have this system behaviour that you want to implement, you have this process calculus description of the components of the system, and you want to, essentially, model check — you want to check that what the components are doing matches what the global specification is supposed to do.
So, the approach which they were led to, because they were using a process calculus, is this notion of a session type, which is essentially a pi-calculus based way of talking about communications. So, you have a kind of global session type — I am not going to tell you much, I will flash some examples, but after five minutes you will not need to know anything about this. So there is a global specification which tells you what the overall communication pattern should be like. There will be a local projection of this onto each component, and then there is a kind of type-checking condition which will tell you whether the merge of these local things is equal to the global thing or not. And that will tell you that the global specification is decomposable like this. And then finally, of course, when you have a real implementation, you will check that each component type-checks against the local specification. So, that is the idea. So, they actually built this robot at MPI; that is a picture of the robot from their paper. So, they have this thing which will go on wheels and pick up something. It has an arm which can be extended and put down. So, the actions are things like: the cart will tell the arm to unfold itself, go and grab something, fold itself, come back, go forward and all that. So, this is the kind of description that they want to put in their programming language, okay. And I am not going to show you the full syntax, but this is the kind of program that they generate, right. It looks very process-calculus-ish, is all you need to know. The only thing I probably should highlight is this dt business, right. This dt business is what has to do with time and motion. It is a kind of synchronize-in-time primitive, let us say. Right.
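The global-to-local projection step just described can be sketched concretely. This is an illustrative toy of my own, not the session-type formalism itself: a global specification is taken to be a sequence of synchronizations, and each component's local view replaces every synchronization it participates in by a send or a receive (the process names and labels below are assumptions made for the example).

```python
def project_global(interactions, proc):
    """Project a global spec -- a list of (sender, receiver, label)
    synchronizations -- onto one component's send/receive sequence."""
    local = []
    for p, q, m in interactions:
        if p == proc:
            local.append(('!', q, m))    # proc sends m to q
        if q == proc:
            local.append(('?', p, m))    # proc receives m from p
    return local

# The robot folding protocol from the talk, with made-up process names
FOLD = [('Cart', 'Lower', 'fold'),
        ('Lower', 'Upper', 'fold'),
        ('Upper', 'Cart', 'ok')]
```

Projecting `FOLD` onto `Cart` gives `[('!', 'Lower', 'fold'), ('?', 'Upper', 'ok')]` — the cart sends a fold and then waits for an okay, matching the distributed description that comes up later in the talk.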
So, now, for example, if you want to fold the arm of the robot — it has, I do not know which is which, a lower arm and an upper arm — it first folds this and then it folds this, right. So, the cart tells the lower arm to fold, the lower arm tells the upper arm to fold, and then the upper arm tells the cart, I have done it, right. So, this is the typical thing. On the left-hand side you have a kind of global specification. And the thing about the global specification is that it is given in terms of what you might think of as synchronizations, right. So, the cart and the lower arm synchronize and a fold instruction is given, then the lower arm and the upper arm synchronize and another fold instruction is given. And then the upper arm and the cart synchronize and an okay comes. Now, when you project it onto the components, you have these asynchronous things where you send and receive messages, right. And now you want to check that the sending and receiving implements that synchronized behaviour. So, this is, at a high level, what this session type business is trying to do. And the point about this particular example is that you have three components and they are talking in a kind of circle, right. The cart talks to the lower arm, the lower arm talks to the upper arm and the upper arm talks back to the cart. So, you cannot really do it pairwise; it has to be all three at a time. Hence the 'multi-party', as opposed to binary. Okay, so far so good. So, now it turns out that this multi-party session type theory, if you want to prove stuff, comes with a lot of restrictions. In particular, for instance, you have constraints on external choice. External choice in this context means I am waiting for a message. So, now supposing you have a protocol in which you want to find out at which end of the room you are.
So, you send two robots in opposite directions, and the first one which tells you it has reached the wall will give you an idea of where you are. Now, you cannot do that, right. You cannot wait for messages coming from two different sources, okay. So, these are the kinds of things which were one motivation to look for something outside multi-party session types to reason about these robot programs, okay. Another, more serious problem is that this whole multi-party session type thing seems to be imploding, because there was a paper in this year's POPL by Yoshida and a co-author which kind of confesses that the last 10 years of multi-party session types were all wrong. So, these are some of the comments: consistency does not hold for many protocols; this multi-party session type framework cannot prove type safety of many processes, okay. And then it also says that people have built on these wrong theories and built papers which are even more wrong, right. So, it is not very clear whether this thing is sound or not, okay. So, as a result, when I met the person leading this group at FSTTCS, he asked me whether there was another way to do this. And so, I went there in April, and of course I knew nothing about this robot stuff, but it struck me that essentially what they are talking about is message passing, right. And that is the whole point. They are talking about global and local message passing specifications and asking whether a global message passing specification is decomposable into local message passing specifications in a reconstructible way — you do not get more, you do not get less. So, that brings us to this ancient topic of message sequence charts. For those who are not that old, you might want a reminder of what a message sequence chart is. A message sequence chart is a picture like this. It has these vertical lines representing the agents or processes, and the horizontal arrows are the messages.
And you label the messages with the identity of the message, okay, whatever is the information conveyed in the message. So, typically you abstract away and get a finite alphabet and so on. So, this is a sort of trivial interaction between, say, a customer and an ATM. When you put your card into an ATM, you might put in a password, and then it might send it to the bank to authenticate and then it might reply, and then you ask for some money and then it will ask the bank whether you have enough balance and then come back. So, typically this is a protocol, and you want to make sure that all your interactions follow these kinds of protocols. So, here is a less meaningful message sequence chart. This has, say, two clients and a server. The server is the S in the middle and you have C1 and C2, and they are sending requests R1 and R2 and expect to get back some grant. Because it is asynchronous, C1 here, for instance, sends an R1 and then does not get a reply in time, so it sends another R1 which crosses with the reply, and so on. So, the point of this is that you can take these messages and decompose them — as you are going to anyway when you localize them — into sends and receives. And so, from this kind of picture, which is not very formal, you get something which is slightly more formal, which is a labelled partial order of the events. So, you have send events and receive events, and essentially the assumption is that these channels are FIFO. So, if you have two messages sent in the same direction between two processes, they will be received in the same order. So, each pair of processes is connected by a FIFO channel, and otherwise the only constraint is, of course, that you cannot receive a message before it has been sent. So, sends precede receives, and each process is sequential.
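The labelled-partial-order construction just described can be sketched in a few lines of Python. This is my own illustrative encoding, not anyone's library: a chart is a list of (sender, receiver, label) triples, and the listing order is taken to double as each process's local top-to-bottom order on its lifeline (a simplification that is fine for small examples and automatically gives the FIFO order per channel).

```python
def msc_partial_order(messages):
    """Build the event partial order of an MSC given as a list of
    (sender, receiver, label) triples.  Events are ('s', i) / ('r', i)
    for the send / receive of the i-th message."""
    order = set()                  # strict precedence pairs (e1, e2)
    last_on = {}                   # last event seen on each lifeline
    events = []
    for i, (p, q, _label) in enumerate(messages):
        send, recv = ('s', i), ('r', i)
        events += [send, recv]
        order.add((send, recv))    # a send precedes its receive
        for ev, proc in ((send, p), (recv, q)):
            if proc in last_on:
                order.add((last_on[proc], ev))   # local sequential order
            last_on[proc] = ev
    # transitive closure (naive; fine for small charts)
    changed = True
    while changed:
        changed = False
        for a, b in list(order):
            for c, d in list(order):
                if b == c and (a, d) not in order:
                    order.add((a, d))
                    changed = True
    return events, order
```

For two messages from P to Q, the order relates the two sends, the two receives, and each send to its (and any later) receive, but leaves Q's first receive unordered with P's second send — exactly the crossing freedom asynchrony allows.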
So, you have a local process order, you have the message order, and you get a partial order. And now, as you would expect, once you have a partial order, you can represent this partial order by its linearizations. So, you have a word representation of these partial orders. You can take one message sequence chart and write down all the ways of linearizing it which are compatible with the partial order. So, you could have multiple linearizations and, just like we have been hearing about traces and so on, you have some commutation which is possible because of independence. So, in this way, you can talk about languages of message sequence charts either as sets of pictures or as represented by the sets of their interleavings. Strictly these are sets of sets of interleavings, but the interleavings are partitioned, so you can actually throw away the second level of sets. You have one large set of interleavings, and the interleavings have to satisfy a closure condition — if you have one, you must have all of them — but modulo that it works exactly like traces. So, you have a notion of a message sequence chart language, okay. So, once we have these sets of message sequence charts as a possibility, the question is where do they come from. One way of having message sequence charts is by just writing several scenarios, and this is typically what people do in, say, software specifications. They will say: this is what happens when the ATM transaction succeeds because of a correct pin, this is what happens when it fails because of a wrong pin, this is what happens when it succeeds because of a correct balance, and so on. But you could have more complicated things which repeat and so on. So, you can draw this kind of automaton-like specification which generates sequences of interactions by traversing the automaton.
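Enumerating the linearizations of one chart — its set of interleavings — is a standard linear-extension computation. A minimal sketch, assuming `order` is any set of strict precedence pairs (such as one an MSC might induce; the toy three-event example in the usage below is mine):

```python
def linearizations(events, order):
    """Yield every total order (as a tuple) compatible with the
    precedence pairs in `order` -- the interleavings of one MSC."""
    if not events:
        yield ()
        return
    for e in events:
        # e can come first only if no remaining event must precede it
        if not any((f, e) in order for f in events if f != e):
            rest = [f for f in events if f != e]
            for tail in linearizations(rest, order):
                yield (e,) + tail
```

With events `a, b, c` and constraints `a < b`, `a < c`, this yields exactly the two words `abc` and `acb` — the trace-like closure the talk mentions: once you accept one linearization of a chart, you must accept them all.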
So, you have the nodes labelled by simple MSCs — they are patterns — and then as you walk around you generate a path of nodes, and then you just write down the MSCs that you see and you get a large MSC, right. So, this is just a concatenation of these MSCs. There are some subtleties in concatenation because of asynchrony and all that, which we will not talk about — a send from a later block could happen before a receive from an earlier block and so on — but modulo that this is the obvious thing, right. So, this is going to be our assumption: you have these global graphs which generate families of message sequence charts, right. And our question is really going to be how to get from there to a local thing, right. So, what is the local thing? A local thing for us will be a kind of automaton, a communicating finite state machine. So, we will have communicating finite state machines, which are essentially agents which can do sends and receives. They could do internal actions also, but that is not terribly relevant for today's talk, so we will ignore them. So, every agent either sends a message or receives a message, and when it does, it is on a particular channel. So, P!Q m is typically P sending m to Q, and Q?P m is Q receiving it from P. The first thing is always the agent who is doing it, and the second thing is the other agent, right. So, this is a producer P which is sending messages to Q. P keeps generating messages and Q keeps consuming them. Obviously, Q can consume a message only after P has generated it, but P does not have any constraint: it does not have to wait for Q. So, the channel between P and Q is not bounded in this case.
And if you represent the behaviour of this thing as a message sequence chart, you have this simple ladder-like thing where P keeps sending messages, but you could have an arbitrary kind of slice here where, for instance, P could have sent a bunch of messages and Q could have received none of them, right. So, Q is still here and P has reached here: P has sent some 17 messages, for example, and Q has received none. So, there is no bound on the channel. So, although these are communicating finite state machines, and the state space is finite for each individual component, and there are only finitely many of them, because you have unbounded channels the global state space is unbounded. So, these things can do more complicated things than what we saw with message sequence graphs. Those message sequence graphs basically take individual MSCs and tile them, right. You do the MSC at this node, then you do the MSC at that node, then you do the MSC somewhere else. So, as you go along, each MSC has boundaries, right. You have a block, then a block, then a block, and you have finitely many atomic blocks and you keep composing sequences of these. Now, you can have some bizarre communicating finite state machines like this one, which can construct arbitrary behaviours — see, this thing you cannot slice anywhere, so there is no place where you can stop it, except at the very end, and get a block where all the messages have been received. So, you have to go to the end, and this thing can actually generate arbitrarily complicated blocks. So, it can generate infinitely many non-decomposable MSCs. So, this particular communicating finite state machine would not have a representation in that graph format, because the graph format will always have a finite number of blocks which it keeps repeating. But this is not the direction we want to go; we want to go the other direction.
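The producer/consumer machine above, with its unbounded channel, is easy to replay concretely. A minimal sketch (the function and its scheduling convention are my own): given an interleaving of P's and Q's moves, it tracks the channel contents and reports the largest occupancy seen — under a schedule where Q never moves, seventeen sends leave seventeen messages in flight, the "17 messages" slice from the talk.

```python
from collections import deque

def run_producer_consumer(schedule):
    """Replay the producer/consumer CFSM under one interleaving.
    `schedule` is a string over 'P' (P!Q m, P sends) and 'Q' (Q?P m,
    Q receives; a 'Q' step is skipped when the channel is empty,
    since Q must wait).  Returns the maximum channel occupancy."""
    channel = deque()
    max_len = 0
    for who in schedule:
        if who == 'P':
            channel.append('m')      # P ! Q : m
        elif channel:
            channel.popleft()        # Q ? P : m
        max_len = max(max_len, len(channel))
    return max_len
```

`run_producer_consumer('P' * 17)` returns 17, while the lock-step schedule `'PQ' * 10` never has more than one message in flight — no single bound works for all schedules, which is why the global state space is infinite.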
We want to start with the graph and figure out whether it can be represented in this format, okay. So, this is our question, and this has the title of realizability. Given a message sequence graph, is there an equivalent communicating finite state machine? By equivalent we mean that it generates exactly the same set of message sequence charts as the message sequence graph does by traversing its paths. So, this one will generate them by its interactions through the local communications, and the graph would have generated them by paths in a global sense. Yeah — because of the FIFO assumption, a linearization can always be uniquely mapped to the underlying partial order and then uniquely to a picture, so it is all the same. Yeah, no — because of the independence, it should always have all the linearizations of a given thing. It cannot choose to do only a few, so not in that sense. So, in the sense of a trace, it would be trace-closed: if it covers one message sequence chart, it should give all the linearizations of that. But anything else would not naturally happen, because the automata are distributed, so you cannot actually program it to do it any other way. Wildcards in the names? No, right now everybody is named. There are explicit names, and there is no process creation or destruction; it is not dynamic, we are assuming a static setup. The number of processes is arbitrary, but it is known, so you know who you are going to talk to and you can name them explicitly. There is also no broadcast in this: you cannot just send out a message and hope that everybody will get it.
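One reason "exactly the same set" is delicate: local projections can jointly admit charts that nobody specified. The sketch below (my own encoding and labels) flags a candidate chart as "implied" when every process's local view of it already occurs in some specified chart — this is precisely the M1/M2/M3 phenomenon that comes up later in the talk.

```python
def project(msc, proc):
    """Local view of `proc`: its sequence of send/receive actions.
    An MSC is a list of (sender, receiver, label) triples; for a single
    process the listing order serves as its local order."""
    view = []
    for p, q, m in msc:
        if p == proc:
            view.append(('!', q, m))
        if q == proc:
            view.append(('?', p, m))
    return tuple(view)

def is_implied(spec, candidate, procs):
    """`candidate` is an implied behaviour of `spec` if every process's
    local view of it already occurs in some chart of the spec."""
    return all(
        any(project(msc, r) == project(candidate, r) for msc in spec)
        for r in procs)

# M1 and M2 differ both in the P-Q part and in the R-S part;
# M3 mixes the two halves, so every process has a local witness.
M1 = [('P', 'Q', 'a'), ('R', 'S', 'x')]
M2 = [('P', 'Q', 'b'), ('R', 'S', 'y')]
M3 = [('P', 'Q', 'a'), ('R', 'S', 'y')]
```

Here `is_implied([M1, M2], M3, ...)` holds even though M3 is in neither chart: P and Q see exactly their M1 behaviour, while R and S see exactly their M2 behaviour.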
So, this is a very limited kind of model, but it fits — the point I was just coming to is that this fits fairly closely with that process calculus that I sketched in the beginning, because that also has essentially the same kind of point-to-point communication mechanism with channels between the processes, and there also they do assume that these channels are FIFO. So, if you go modulo the syntax, it is essentially that: that process calculus allows you to define certain kinds of communicating machines. I mean, the finite-stateness is not obvious — they have recursion in their language and so on — but it is essentially the same problem modulo the presentation of the machines; the robot programming people just had a different way of describing them. So, okay, right, so this is a kind of global specification. It says that the cart and the lower part of the arm synchronize and fold, then the lower arm and the upper arm synchronize and fold, and then the upper arm and the cart synchronize and finish. And this is a distributed specification: it says that the cart first sends a fold message and then waits for an okay, and the lower arm receives a fold message from the cart and then sends a fold message to the upper arm, right. So, the question is: does this match this, right? So, for us, this is now going to be given by an MSG, and this is our communicating finite state machine — these are the three processes, if you want to call them that — and you want to ask whether the interactions that this allows are going to do precisely this, or more than this, or less than this, or whatever. Yeah, so this is a given specification, and the third thing that you will be given is a program, right. In the session type world, this is how it all happens: you take this, you project it to get that, then you first check whether that equals this, and then you check whether each process — whether the cart process matches this
cart specification, and so whether they type-check. So, for us that step is eliminated: we just have a global specification, we have a local implementation, and we want to ask whether the global specification admits a local implementation. This is what we are asking: is this global specification implementable? That is realizability. No — so we want to know if it can be done; then there will presumably be a synthesis procedure to do it. But you could also build in a model checking procedure later on, where you say: okay, I know one way to do it, but is this other way doing the same thing? That is a different question; right now it is enough if you solve the synthesis problem. So, basically what I was saying was that this is an alternative way of doing the session type thing. In the session type world you have the global thing given as a global session type, and then you project the global session type into local session types, right. We are just trying to duplicate that in a different formalism. Eventually, if you solve this problem, then you have to go back and map it to the robots, which we are not talking about yet; I am just saying that you could translate this whole problem and ask this question about message sequence charts. So, coming to the earlier question: what does it mean, what are you allowed to do? If you look at the problem which the session type people were asking, and also which these robot people were asking, you are saying that you have a global specification in terms of a certain set of messages and you want to project onto the same set of messages, right. And if you project onto the same set of messages, do you get something which is the same, or more, or less, or what? Okay — we will talk about extra messages in a minute. So, we want to ask what happens when you take an MSC specification and just look at what happens on each process, and say:
now if I glue these local things back together, do I get the same MSCs or not, or do I get some wrong things, okay. So, here is a canonical example. If you look at these first two MSCs and then look at the last one, the claim is that if I look at this portion of the third MSC, then for the first two processes, P and Q, the first two MSCs are indistinguishable: they have no idea whether or not R and S exchanged a message there, because they do not participate in that, okay. And symmetrically, if you look at the other part of it, this part and this part are equivalent for R and S. So, if I present the third MSC to each of these processes, each of them will have a witness among M1 and M2 which it believes is happening. So, if my specification is: do M1 or M2 but not M3, then it is not implementable, I claim, if I just project, because my local projections onto M1 and M2 allow M3 as an implied behaviour. So, there is a closure by implication. You can see this even with synchronous communication: you can take automata on distributed alphabets, project them and compute their product, and then you get a kind of shuffle closure there. So, this is the equivalent of that, right. So, this is the problem. Typically, when you project, you will of course get back whatever you had, because the projections will add up to what you had, but it will also allow new combinations which you did not want, and the question is: can you rule those out, or can you say that those will not happen? So, the first result in this space, on this particular problem, the projection problem, is that it is actually undecidable — and not that this is going to make any sense here, but it uses a version of the Post Correspondence Problem to do it. So, it is a somewhat unexpected thing because, as I will mention briefly afterwards — so I mentioned before that the global system
in a communicating finite state machine is logically finite-state in terms of the local state spaces and infinite-state in terms of the channels, right. Now, in this particular proof, the gadget that they produce from the Post Correspondence Problem has bounded channels. So, you have a bounded-channel specification which actually produces an undecidable property. So, basically this problem as such is hopeless. But it is not that we expect the problem to be solved in the sense that everything will work; what we are asking for is: is this realizable or not? So, we want to know that, right. Now, this thing is tricky because it depends on what assumptions you are making, as I will point out. In this general setting, this paper by Alur, Etessami and Yannakakis says it is not possible. So, now we can move to a restricted case. One restricted case is that you actually force your behaviours to be nice. This is the so-called regular case. So, what you can do is, you can take a message sequence chart like this, and you can see in this chart who talks to whom. So, P talks to Q, Q talks to R, P talks to R, but R does not talk back to anybody. So, from the message sequence chart at the top you generate this lower thing, which is the communication graph, which tells you who is talking to whom. Now, what you can see in this is that P can keep on sending messages to Q and to R, because it is not waiting for anything in return. So, in principle, this kind of thing allows an unbounded flow of information from P to Q and R without expecting any return, so this could potentially generate an unbounded channel, right. So, if you force these things to return — if you force that graph to be strongly connected, so that everything that goes forward comes back in some way or another — then you force a kind of acknowledgement: after a certain number of messages have been sent, you will have to wait for a message from somebody. So,
you will not be able to send one more message until that message reaches you, but implicitly that guy is also waiting for something, so all the messages have to get flushed in some finite way, right. So, you want this property in every cycle — because if you do not have a cycle there is no problem: you just have a linear run, and you can only send as many messages as you do actions; if you only do 10 actions, you cannot send more than 10 messages. But if you do 10 actions in a loop, then you can send any number of messages. So, if in every loop you are forced to wait for acknowledgements, okay, then you get bounded behaviour, right. That is precisely what this regular case says: in every loop of the MSG which is generating things, if all the processes which talk in that loop are communicating with each other — they all require the others to talk back to them — then you are guaranteed that you have bounded channels, okay. And then the whole behaviour becomes regular in the normal sense: the whole communicating finite state machine underlying this thing behaves exactly like a finite state machine, including the buffers as part of the states, okay. So, in this regular case, you can apply a kind of analogue of Zielonka's theorem, but you are forced to add — not add messages, but add content to messages. So, you do not change the MSC structure: the MSC as a picture looks the same, the same messages are sent, but whenever you send m you will say (m, x) or (m', y). You will send an extra payload with every message, and this payload will have some time-stamping information which will allow you to reconcile various things, right. So, this is one strategy which one could use. So, you have first of all characterized it in terms of a stronger assumption on your MSG — you have taken an MSG with a certain property, namely this locally synchronized property of having these
strongly connected components in every loop — and then you are allowing a little bit of leeway in the implementation by adding content to every message, and then you can actually synthesize. And this you can do even without this very strong assumption: there is a weaker notion, though it looks a bit peculiar — globally synchronized is weaker than locally synchronized, okay. So, globally synchronized things which are not locally synchronized can also be implemented by adding extra content to messages, okay. So, if you add extra content to messages, we have a little bit of an idea of what is going on. But our question in this robot world is where we do not add content to messages, right, and there Alur et al. say that you are in an undecidable world. This is just a digression, but it is a bit strange: I claimed that each MSC in the Alur et al. reduction actually has bounded channels, and I also claimed that if you have bounded channels, in some sense you can do that other thing, right. So, it turns out that when you have bounded channels and you synchronize, you can actually end up doing strange things, okay, and that is why it is undecidable — but we will not get into why this happens, okay. So, now, in the same paper, Alur, Etessami and Yannakakis ask — so the thing is that bounded channels is only part of the story. See, you can have bounded channels but still allow a kind of context-free behaviour, because if I have two bounded channels but they are synchronously going forward — you are saying that every time P sends a message to Q, R sends a message to S — then you get some a^n b^n kind of behaviour, which does not contradict the boundedness. So, a bounded channel on its own is not enough; you need also this extra thing that everybody talks to everybody, and that is not there in their gadget. So, their gadget is not locally synchronized, it is only bounded-channel. Of course, they are also doing a different reduction, because they are doing the
projection and not the extra-message thing — but that is really the point. So, anyway, in the same paper where they show the undecidability, they ask the following question, okay. This is very similar: supposing my specification again is M1 and M2, so I want to have M1 and M2 and I want to disallow the crisscross. So, one crisscross is where P1 thinks it is doing one and P2 thinks it is doing two, right. So, M3 is one behaviour where they start off doing the wrong thing. Now, the claim is that this behaviour, in the projection model, will get rejected at this point, because P1 has sent a one, so it is going to wait for a one; when it sees the two, it is going to say: no, this is not the one I was looking for — and the same with P2. So, they can detect in this case that this M3 is an undesirable behaviour, but the cost of detecting it is that they get stuck, right. They are not able to do anything: basically P1 is waiting for a one and it is getting a two, so it is not going to do anything, so it is stuck; and P2 is waiting for a two and it is getting a one, so it is stuck. So, the system is deadlocked, right. But it does not accept the wrong thing. So, if you just think of it as language acceptance, this is okay — this is implementable. But if you think of it in terms of a desirable implementation, it is probably not okay: you do not want your system to suddenly hang, and to have it hang because some undesirable thing happened, right. So, the question now is: what if I ask for an extra constraint on the implementation? One constraint is that it is a projection, but also the implementation that I produce should not deadlock, right. Without the deadlock constraint we know that it is undecidable — that is what they showed. Now they say: what if we add the deadlock constraint? So, we do not want any old implementation, we want a deadlock-free implementation — and this is decidable for bounded channels. So, they show that if you have bounded channels and if you insist that your implementation must be deadlock
free, then you can decide whether or not a given specification is implementable, okay. So, in the same paper: one result is the undecidability proof, and this is the other — they call this safe realizability and they call the other thing weak realizability, but these terms are so vague that it is difficult to keep track of them, okay. So, this is the status now. No, no, bounded is not enough — see, bounded means the channels are bounded, but what we said is that you also need this local synchronizability in terms of the communication pattern; you could have bounded channels without it. A simple example is this kind of thing: supposing I exchange messages between P and Q and then I exchange messages between R and S. Now, if I call this unit an a and I call that unit a b, globally I will have a shuffle of equal numbers of a's and b's, right. So, globally this is not regular, but no channel is unbounded in it. And that is because, if you look at the communication graph, you have a cycle with P and Q and you have a separate cycle with R and S, but they are not strongly connected as a whole, right. So, that is the kind of subtle point between these two, okay. So, since I know that you are losing track of where we are: where are we, right? If you have regular MSG specifications — those in which you have these locally synchronized loops, so all the channels are automatically bounded and everything is nice — then you can piggyback information on messages and get an implementation, a Zielonka's-theorem-type thing, and you can do a similar thing with the slightly weaker model of globally synchronized things.
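The communication-graph check behind this discussion can be sketched (the helper names are mine). For the P–Q / R–S shuffle just described, the graph has two disjoint cycles and fails strong connectivity even though every channel stays bounded, while the three-process fold loop from the robot example passes. The full locally-synchronized condition quantifies this check over every loop of the MSG; here we just test one set of messages.

```python
def comm_graph(messages):
    """Directed graph with an edge p -> q for every message p sends to q."""
    g = {}
    for p, q, _ in messages:
        g.setdefault(p, set()).add(q)
        g.setdefault(q, set())
    return g

def strongly_connected(g):
    """True if every process in the graph can reach every other one."""
    def reach(start, edges):
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(edges.get(v, ()))
        return seen
    nodes = set(g)
    if not nodes:
        return True
    rev = {v: set() for v in nodes}
    for v, ws in g.items():
        for w in ws:
            rev[w].add(v)
    start = next(iter(nodes))
    # strongly connected iff one vertex reaches all others in the
    # graph and in its reversal
    return nodes <= reach(start, g) and nodes <= reach(start, rev)
```

The fold loop Cart → Lower → Upper → Cart is strongly connected; the shuffle example, with its P–Q cycle and separate R–S cycle, is not — which is exactly why its global behaviour escapes regularity despite bounded channels.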
If you are only allowed to project and you have no other constraint, the problem is undecidable even if things are bounded to start with; and if implementations are restricted to be deadlock-free and you start with a bounded thing, it is decidable. But that means you started with a bounded thing. So there is a case we have not looked at: what happens when you have a potentially unbounded thing, but you want a deadlock-free implementation through projection? What is the situation there?

First of all, what is this deadlock business about? In that example, the deadlock happens because P1 and P2 autonomously choose which one to go to; there is no coordination between them as to whether to start an M1 or an M2. P1 can happily start an M1 while P2 locally chooses to start an M2, and then they reach this contradiction. Essentially, whenever you have a choice, if you leave it to be taken like this by multiple processes that do not talk to each other, you definitely have the possibility that they take incompatible choices. So you can rule out deadlocks by making choices so-called local. The very first picture I drew also has this non-local thing, because at this point either the left-hand process decides to send an m or the right-hand process decides to send an m'. If they autonomously decide the wrong way, the left one sends an m and the right one sends an m', then it is an illegal behaviour and it gets stuck: a deadlock. We do not want this. So typically, not just at the initial node but any time you split and take one of two paths, or one of many paths, you want to make sure that the choice of path is taken uniformly by everybody, and the only way to do that uniformly in this model is to have it made by one process; there is no other way, because there is no synchronization to coordinate. So it has to be a local choice, decided by one of the processes.

Essentially, local choice says the following. First, a branch cannot have multiple initial events: a minimal event in the partial order has to be a send, because a receive must wait for a send, and the minimal sends must all come from the same process. I can choose to go one way and send an m, or go the other way and send an m', but it cannot be that I choose to send an m here while my partner chooses an m' there, because that does not work. So there must be a single minimal event in each branch, and the same process must control all of them. And then the subtle point: that process must also have been talking in the previous node; it cannot just slide in without having participated in the node above. In this picture, for instance, if I flip this direction then I am good: if the left-hand process chooses to send an m and go left, or an m' and go right, it is fine; but if it is the right-hand process choosing the m', then I am in trouble. So this is a syntactic restriction which guarantees, in some sense, that if each process locally follows the MSC graph you do not get into deadlocks, because whenever a choice is made, the other processes are guided by it. There are some subtleties, but more or less that is right. So the question now is what happens if the starting specification we have has this property,
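The three conditions just described can be sketched as a check, under the simplifying assumption that each node's MSC is given as a list of (sender, receiver, label) messages in a causal order, so a process whose first event is a send owns a minimal event of the node. The encoding and names are illustrative only, not the formal definition:

```python
def first_actions(msc):
    """msc: list of (sender, receiver, label) messages in causal order.
    Return each process's first action, 'send' or 'recv'."""
    first = {}
    for s, r, _ in msc:
        first.setdefault(s, "send")
        first.setdefault(r, "recv")
    return first

def minimal_senders(msc):
    """Processes whose very first event in the node is a send; each such
    send is a minimal event of the node's partial order."""
    return {p for p, act in first_actions(msc).items() if act == "send"}

def is_local_choice(pred, branches):
    """pred: MSC of the node where the choice is made; branches: MSCs of
    the successor nodes. Local choice: every branch has exactly one
    minimal (send) event, all on the same process, and that process
    already participates in pred."""
    choosers = [minimal_senders(b) for b in branches]
    if any(len(c) != 1 for c in choosers):
        return False
    (p,) = choosers[0]
    if any(c != {p} for c in choosers):
        return False
    participants = {q for s, r, _ in pred for q in (s, r)}
    return p in participants

# The left process L decides: it sends m in one branch, m' in the other.
pred = [("L", "R", "init")]
good = is_local_choice(pred, [[("L", "R", "m")], [("L", "R", "mprime")]])
bad  = is_local_choice(pred, [[("L", "R", "m")], [("R", "L", "mprime")]])
print(good, bad)   # True False
```

The `bad` case is exactly the flipped-arrow situation from the picture: the right-hand process decides one branch, so the choice is no longer controlled by a single process.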
okay. [Question from the audience.] No, this one is not local choice, but if I flip this arrow it would be. The condition says that each successor node should have exactly one minimal event; yes, this is a minimal event, only one, and this one is not minimal because it comes after that. The second point is that the minimal event is on the same process across all choices: if it is a send by the left-hand process in one of them, it has to be a send by the same left-hand process in all of them. So if P sends, P must send everywhere; basically P decides the choice by sending a message. It cannot be that P sends a message or Q sends a message and one of them decides, because we do not know how they would talk to each other.

So the question is what happens in this case, and it turns out this was actually answered even earlier. There is an old paper by Loïc Hélouët and Claude Jard which has been revisited, by its authors and by others, because there is a 2000 version and a 2015 version of the same paper, the latter slightly elaborated and more general. What they point out is that restricting to local choice certainly does not eliminate the problem; it is not trivial, not like the regular case, where we said that once the specification is regular, that is, locally synchronized, you always get an implementation. Here, since you are looking for deadlock-free implementations, you must start with local-choice specifications, because otherwise you have a potential deadlock; but even then there is a non-trivial question left to answer.

So this is their example; let us see what happens here. One complication is that they use a dual notation in which the MSCs are on the edges and not on the nodes, so these are the nodes and they have an edge-labelled MSC graph. You can do a loop of M1 followed by a single M2, so the language is M1* M2. M1 says A sends to B and then B sends to C, and M2 says A sends to D directly and then D sends to C. Now the problem is that when A sends to B, it does not have to wait before its next send, because A is not waiting for any information; this is the old problem. So A can send to B, go around the loop, send to B again, then go and send to D, and B could be very slow to receive while D reads its message promptly. So you can end up with this scenario: A has sent to B once and moved on; B is only now starting to send to C; but meanwhile A has sent to D and D has sent to C, so that part is already completed, while B's message has not reached anybody, because it was slow to get generated. This shows that even with local choice, and these are local choice, because each block has exactly one minimal element, a message sent by A, so A either sends m1 to go left or m3 to go right and A alone decides how the thing moves, you still have this confusion possibility, because of the asynchrony of the different channels. And it is an illegal behaviour: once C gets the M2 message, it assumes the M1 part has finished, because if B's message were coming it should have come before; all M1's happen before M2, so C should get B's message before D's. So if C receives the M2 message first, it assumes A went directly that way, but it did not.

Now, they do have a characterization of this case: they describe a graph-theoretic condition under which such a local-choice MSC graph has a legal projection giving the same behaviour, which is exactly the problem we wanted to answer, when it admits a realizable
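The reordering that confuses C can be replayed with plain FIFO channels, one queue per ordered pair of processes. The interleaving below is one of many legal ones; the message names m1 to m4 follow the blocks M1 and M2, and the setup is my own sketch, not the authors' code:

```python
from collections import deque

channels = {}  # one FIFO queue per ordered pair of processes

def send(src, dst, msg):
    channels.setdefault((src, dst), deque()).append(msg)

def recv(src, dst):
    return channels[(src, dst)].popleft()

# Spec order (one M1 block, then M2): A->B m1, B->C m2, A->D m3, D->C m4.
# One interleaving the asynchronous channels permit, with B being slow:
c_log = []
send("A", "B", "m1")           # A fires off m1 and moves on
send("A", "D", "m3")           # A takes the M2 branch without waiting
recv("A", "D")                 # D gets m3 ...
send("D", "C", "m4")           # ... and forwards to C
c_log.append(recv("D", "C"))   # C hears from D first
recv("A", "B")                 # only now does slow B wake up
send("B", "C", "m2")
c_log.append(recv("B", "C"))
print(c_log)   # ['m4', 'm2'] -- C sees the M2 message before the M1 one
```

Each individual channel is FIFO, so nothing is reordered in transit; the confusion comes entirely from C merging two independent channels, B to C and D to C, whose relative speeds are unconstrained.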
implementation by projection. If it is reconstructable in this sense, the projections do it for you and you have no more work to do; if it is not reconstructable, they also have a way of synthesizing extra messages to disambiguate, and so on, so they can fix the problem. But the problem we are really interested in is the yes/no question: yes, it is decomposable by projection, or no, it is not; and if yes, no extra work is needed. So it looks like the problem is solved; where, then, is the question?

The problem is that in the course of proving this they make certain assumptions which are theoretically fine but problematic when you want to apply the result. One assumption is that every non-trivial node in the graph branches. You have an initial node where you start, and you could have a sink node like this one where nothing happens, but they assume every other node makes a choice, meaning you cannot do something and then just do something else; you must do something and then choose again. Their logic is that if you want to do one thing and then another, you simply collapse them into one long node. Theoretically that is perfectly reasonable, but it is an assumption: every node is a choice node. The other assumption is that if a message m is sent in one node, then the same message m is not sent in any other node; it carries some distinguishing information, say (m, n1) in one node and (m, n2) in another, so all message labels are distinct across nodes. These are two assumptions which I do not see any way to remove from their characterization and still make it work. And unfortunately this does not cover our setting, and this robot example
doesn't work: you cannot write your robot programs in this format. So that is why we are revisiting this problem. Here is the thing they would not allow: a node in between which does not make a choice. They would push it down and merge it into a choice node, but it is not quite the same thing. This is an extended version of their old example. The point is that a sends to b, and c is waiting for a message from a and one from b, but a and b send their messages separately to c. Now, there are different assumptions you could make, but there is no logical reason why the messages should arrive in a particular order; if two different people send you email, there is no reason you receive the mails in any particular order. So when c gets the message from a, c has no idea whether it came through this route or that route, and no idea whether to wait for b or not. And this problem cannot even be addressed by their characterization, because for them this graph does not exist.

So where we are is that we believe we can solve this problem, but we are not telling you the solution, because we have already made several wrong attempts. Hopefully we have a characterization, but it is also unclear what the complexity of the question is once you remove the extra filtering that they do. That is all I have to say.

[Question from the audience.] Yes, with additional message content; see, that is the thing, because I was looking at your paper yesterday, not that one but the JCSS version of 2006, which I had referred to. The point is that we want to reconstruct it without additional content; we want a projection-based reconstruction, because that is the statement of the problem for the robots. So this is the thing: I talk to industry now, and the point is that if you are a theoretician and industry asks you a question you cannot answer, you change the problem. But when industry asks you a question, they cannot change the problem; you can change the solution, you can give them a suboptimal solution, but you cannot change the problem. And this is the problem, because these robots are actually deployed using this calculus: they take that specification and just deploy it, so they cannot compute this extra information at run time. These are small computing devices; you cannot add extra code to them. They just receive this message, then take that action; they are very low-capacity objects, so you have to say up front exactly what they have to do. That is how I understand it. But anyway, it is a given that you can only project.
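The projection operation itself is the easy part; the whole difficulty discussed above is whether the projected machines together reproduce exactly the global behaviour. As a minimal sketch, here is projection of a global MSC onto one process, with a made-up fragment of the cart-and-arm protocol as input (the encoding and message names are mine, not from the MPI paper):

```python
def project(global_msc, process):
    """Project a global MSC (a list of (sender, receiver, label)
    messages) onto one process: the sequence of sends and receives
    that this process performs, in order."""
    local = []
    for s, r, label in global_msc:
        if s == process:
            local.append(("send", r, label))
        if r == process:
            local.append(("recv", s, label))
    return local

# A hypothetical slice of the cart/arm protocol:
spec = [("cart", "arm", "unfold"), ("arm", "cart", "done"),
        ("cart", "arm", "grab")]
print(project(spec, "arm"))
# [('recv', 'cart', 'unfold'), ('send', 'cart', 'done'), ('recv', 'cart', 'grab')]
```

This per-process sequence is all a low-capacity device gets to run; everything else, including whether these local sequences can deadlock or diverge from the global specification, has to be settled statically, up front.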