This is joint work with Hongfei Fu. The idea is that we look at infinite probabilistic systems modeled by probabilistic push-down automata, and we have a specification given by a finite transition system with probabilities. The question is: can we establish whether the infinite system is simulated by the finite one? Okay? That will be the main question. Now, I guess you all know what a push-down automaton is. We have a finite set of control states, a set of labels attached to the transitions, a finite set of stack symbols, and a transition relation. I use the notation where you go from control state p with stack symbol X, read the next symbol a, and move to a configuration with control state q and stack content alpha. Here is a famous example that you all know: it recognizes the language 0^n 1^n for n larger than zero. The idea is that you enter the control state, and every time you see a zero you push an A; the Z simply denotes the empty stack at the beginning. At some point you take an epsilon transition, and then you pop an A for every one you read. You reach the success state only if the number of ones coincides with the number of zeros. Good. I guess you have all seen these kinds of examples. What is the semantics of a push-down automaton? It is defined in terms of an infinite labeled transition system whose states are the configurations of the push-down automaton. Typically it is defined as follows: if there is a transition in the push-down automaton from pX to q alpha, then from the configuration pX beta, where beta is the remaining stack content, you can move by reading the symbol a to q alpha beta. Good. This is all well-known stuff. Why is this interesting? Why are we looking at push-down automata?
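If you want to play with the 0^n 1^n example, here is a minimal sketch of it in Python. The control-state names "push" and "pop" are my own labels for the two phases, not names from the slide.

```python
def accepts(word):
    """Simulate the 0^n 1^n push-down automaton on a string of '0's and '1's."""
    state, stack = "push", ["Z"]              # Z marks the (otherwise empty) stack
    for ch in word:
        if state == "push" and ch == "0":
            stack.append("A")                 # push an A for every 0
        elif state == "push" and ch == "1" and stack[-1] == "A":
            state = "pop"                     # the epsilon move to the popping phase,
            stack.pop()                       # then pop an A for this 1
        elif state == "pop" and ch == "1" and stack[-1] == "A":
            stack.pop()                       # pop an A for every further 1
        else:
            return False                      # no transition available: reject
    # success only if every pushed A was popped again and at least one 1 was read
    return state == "pop" and stack == ["Z"]
```

The success condition mirrors the slide: the counts of zeros and ones must coincide, so the stack is back down to the bottom marker Z.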
Well, push-down automata are actually a very elegant model for recursive procedural programs. One way to view this is as follows. If you have a configuration qXY1...Yn, the intuition is that q is the current valuation of all the global variables of your program. X could, for instance, contain the current value of the program counter plus the valuation of the local variables of the currently active procedure. And for every procedure in the chain of recursive procedure calls that is still pending, you have an activation record that keeps track of some local variables plus a return address. Okay? So this gives a very natural model for recursive procedural programs. You can then model transitions as follows. If you invoke a procedure, you go from pX to qYZ: you push a new activation record onto the stack. If you return, you pop an activation record from the stack. And if you do an assignment or some other ordinary program instruction, you go from pX to qY. Okay? So in that sense, you can quite naturally model these kinds of systems. Verification of push-down automata is well studied and is in fact still actively studied; there was a paper this year at FASE about a Microsoft tool that works with Boolean programs and uses PDAs for recursive Boolean programs. Good. What is our interest? Our interest is probabilistic push-down automata; I am going to explain what they are. We view them as implementations: probabilistic programs in which assignments are randomized. And they have to be simulated by a finite specification, where my specifications will be finite probabilistic transition systems.
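The three rule shapes for recursive programs can be written down very compactly. A small sketch, where the control states p, q and the stack symbols X, Y, Z are placeholder names of my choosing, not from the talk:

```python
# Each push-down rule rewrites (control state, top stack symbol) into
# (control state, string pushed in place of the top symbol).
CALL     = (("p", "X"), ("q", ("Y", "Z")))  # invoke: push callee record Y over return site Z
RETURN   = (("p", "X"), ("q", ()))          # return: pop the current activation record
INTERNAL = (("p", "X"), ("q", ("Y",)))      # assignment etc.: replace the top symbol

def step(config, rule):
    """Apply one push-down rule to a configuration (state, stack-as-tuple)."""
    (state, stack), ((p, x), (q, pushed)) = config, rule
    assert state == p and stack and stack[0] == x, "rule not applicable"
    return (q, pushed + stack[1:])           # top symbol replaced, rest untouched
```

For example, applying `CALL` to the configuration `("p", ("X", "W"))` yields `("q", ("Y", "Z", "W"))`: the caller's pending record W stays buried on the stack.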
Before I get to that, let me first recall some results about checking bisimulation and simulation relations on push-down automata. Bisimulation is known to be decidable. The interesting proof is the one by Stirling from 2000; I will get back to that, because we partially use his techniques to establish a similar result here. It is in fact EXPTIME-complete, as proven last year by Kučera and Mayr. If you look at variants like weak bisimulation, this turns out to be undecidable when the number of control states is larger than one; when the number of control states is one, it is unknown. And if you look at strong and weak bisimulation between a model given as a push-down automaton and a specification given as just a finite labeled transition system, this is PSPACE-complete. Okay? So bisimulation, strong and weak, is PSPACE-complete in that setting. If you look at simulation, then strong similarity between push-down automata is undecidable; this is actually an old result by Jan Friso Groote and Hans Hüttel. And strong and weak simulation between a push-down automaton and a finite transition system, which for bisimulation was PSPACE-complete, becomes EXPTIME-complete; so it lies in a different complexity class. Good. What we are going to show is that strong similarity between a probabilistic push-down automaton and a finite probabilistic transition system lies in the same complexity class, namely it is EXPTIME-complete, which simply means that adding probabilities to this result does not increase the complexity. Good. Why do we look at probabilities? Well, they are used in randomized distributed systems. If you want to talk about the performance of systems, or model components that can fail, then typically you talk about failure rates. Security protocols are heavily based on probabilities. So there are many reasons to look at probabilities. Good.
So our specifications are probabilistic transition systems; let me explain what these are. They consist of states, actions, and the only interesting part, the transition relation, which takes you from a state via an action to a distribution over the states. Okay? So a transition is denoted as follows: we go from a state, via an action, to a distribution mu over the state space. Good. They generalize labeled transition systems: if you take all distributions to be Dirac, which simply means that with probability one you go to one specific state, you just have a labeled transition system. You get Markov decision processes when the transition relation is simply a function rather than a relation. Okay? So if you are more comfortable with MDPs, just think of MDPs. Here is a small example with four states. In state s1 you can do an a, and then with probability one you take the self-loop; you can also do a b, and then with probability one half you go to one state and with probability one half to another, and so forth. I think that is pretty straightforward. Then you can define notions like probabilistic bisimulation. An equivalence relation R on the state space is a strong bisimulation if, whenever S and T are related and S can go to a distribution mu, then T can go to a distribution nu such that the probability of moving into each equivalence class is the same: mu assigns to every equivalence class C the same probability as nu does. Two states S and T related in this way are called bisimilar, and bisimilarity is simply the largest such relation. Good. Here is a small example of two bisimilar probabilistic transition systems: one transition system on the left, one on the right. Why are they probabilistically bisimilar? Well, the only difference is that this state has been duplicated; as you see, it occurs twice.
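The transfer condition of probabilistic bisimulation, that related states assign equal probability to every equivalence class, is easy to check mechanically for a single pair of distributions. A small illustrative sketch, with state names of my own choosing; a real checker would also match actions and iterate partition refinement to a fixpoint:

```python
from fractions import Fraction

def lifts_to_classes(mu, nu, classes):
    """Do the two distributions assign the same probability mass to
    every equivalence class in the given partition?"""
    def mass(dist, cls):
        return sum(dist.get(s, Fraction(0)) for s in cls)
    return all(mass(mu, c) == mass(nu, c) for c in classes)
```

With `mu = {"a1": 1/10, "a2": 1/10, "b": 4/5}` and `nu = {"a": 1/5, "b": 4/5}` and the partition `[{a1, a2, a}, {b}]`, the check succeeds: this is exactly the "two copies of a state carry the same total mass as the single state" reasoning used on the slide.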
And here the total probability of reaching a state where you can do an error transition is 0.1 plus 0.1, which is 0.2, and that is the same as in the left picture. Therefore they are bisimilar. Good. Some important facts about probabilistic bisimulation: it actually turns out to be the same as lumping, a well-known notion in Markov chain theory. It coincides with notions like probabilistic CTL* equivalence. Checking it between finite models is PTIME-complete. Minimization can be done quite efficiently using a variant of Paige-Tarjan minimization, and it can yield exponential state-space savings. What, then, is the difference with the simulation preorder? The simulation preorder is defined as follows. Now you take a relation R, not necessarily an equivalence. It is a strong simulation if, for any pair of states S and T in the relation, the sets of enabled actions of S and T coincide, and whenever S can do a transition to some distribution mu, then T can go to some distribution nu such that mu and nu are related by R-bar, the lifting of the relation on states to a relation on distributions. I am going to explain what that means. Strong similarity is then simply the largest such simulation relation. Good. So what does R-bar mean? It is defined by means of weight functions, a concept introduced by Jones and Plotkin in the late 80s. It says the following: you give me a relation between states, and I give you a relation between distributions on those states. Two distributions mu and nu are related if there is a so-called weight function that distributes the two distributions over the states in such a way that the following conditions hold: if you take the sum of w(s, t) over all states t, you get exactly mu(s); symmetrically, if you take the sum over all s, you get nu(t); and you only assign positive weight to pairs of states that are related.
Now, this is technically a bit involved, so I have a small example. Suppose I have two distributions: one on the left, where you go with 2/9 to one state, with 5/9 to another, and so forth, and the other distribution depicted on the right. There is a weight function between these two distributions; it is given by the dashed lines annotated with numbers. Why is it a weight function? The easy way to understand it is to read the picture from left to right. The amount of probability mass going into t is, for instance, 5/9, and that is exactly the sum of the probability mass going out of t: 1/9 plus 4/9. And this applies to all the states: 2/9 goes in here, and the sum of the outgoing weights is 2/9. The nice thing is that you can also read the picture from right to left, and the same property holds. For instance, take u, which is an interesting case: it is reached with 1/3, and that is exactly the sum of the weights on the dashed lines going into it. So the dashed lines with these numbers form a weight function, and when a weight function exists, this distribution is simulated by the other one. Good. And here is an example of two probabilistic transition systems based on the same example; this state is simulated by that state. Why? Well, it is exactly the example from the previous slide: 2/9 and 5/9, if you go back; 2/9 and 5/9, plus 2/9 to some deadlock state z. These are exactly the successors of s1. And here you have three successors, which correspond exactly to those three, plus a certain deadlock state that is reached with 1/9. Because such a weight function exists, s1 is simulated by s2. I hope this gives you some feeling for what a simulation relation means in the probabilistic context.
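The three weight-function conditions can be checked directly. In this sketch the distributions and weights are reconstructed from the fractions mentioned in the talk (2/9, 5/9, 1/9 + 4/9, 1/3, and the 1/9 deadlock); the state names t1, t2, z, u1, u2, u3, d are mine:

```python
from fractions import Fraction as F

def is_weight_function(w, mu, nu, rel):
    """Jones/Plotkin weight-function conditions: row sums reproduce mu,
    column sums reproduce nu, positive weight only on related pairs."""
    rows = all(sum(w.get((s, t), F(0)) for t in nu) == p for s, p in mu.items())
    cols = all(sum(w.get((s, t), F(0)) for s in mu) == p for t, p in nu.items())
    supp = all(pair in rel for pair, v in w.items() if v > 0)
    return rows and cols and supp

# Left distribution, right distribution, and the dashed-line weights.
mu = {"t1": F(2, 9), "t2": F(5, 9), "z": F(2, 9)}
nu = {"u1": F(1, 3), "u2": F(4, 9), "u3": F(1, 9), "d": F(1, 9)}
w  = {("t1", "u1"): F(2, 9), ("t2", "u1"): F(1, 9), ("t2", "u2"): F(4, 9),
      ("z", "u3"): F(1, 9), ("z", "d"): F(1, 9)}
rel = set(w)   # assume exactly the weighted pairs are related
```

Reading left to right: t2's 5/9 leaves as 1/9 + 4/9. Reading right to left: u1's 1/3 arrives as 2/9 + 1/9, just as described on the slide.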
What is relevant to know about simulation relations is that checking them between two finite probabilistic transition systems amounts to checking whether the maximal flow in a certain network equals 1. Okay? So it is a maximum-flow problem. For fully probabilistic systems, so when you do not have a choice between two distributions in a state, simulation equivalence and bisimulation actually coincide. We consider a model with nondeterminism, and there the two notions do make a difference. Checking simulation can be done in the complexity shown here; if you remember, bisimulation was roughly n log n, and this is somewhat higher, and the way to achieve it is by using parametric maximum-flow algorithms. Simulation preserves a certain safe fragment of probabilistic CTL: things like lower bounds on reachability probabilities, upper bounds on reachability probabilities, and lower bounds on box properties are preserved. And people use this for abstraction: typically you take a model, make it smaller, and then you would like to establish that the smaller model simulates the original model. That is where people really use this notion of simulation. Good. So we are using probabilistic push-down automata for our implementations and finite probabilistic transition systems as our specifications. What is a probabilistic push-down automaton? It is just a push-down automaton as you are used to, the only difference being that in the transition relation, the target is a distribution over configurations, and that's it. Okay? Here is a small example; I will get back to it in a minute.
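The max-flow reformulation can be sketched concretely: build a bipartite network with a source feeding the left distribution, unit-capacity edges between related states, and the right distribution feeding a sink; the weight function exists iff the maximal flow is 1. This is a toy Ford-Fulkerson over exact rationals, fine for tiny networks (real tools use the parametric flow algorithms mentioned above); the network encoding is a standard one, not code from the paper:

```python
from fractions import Fraction as F

def max_flow(cap, src, snk):
    """Ford-Fulkerson with exact rational capacities.
    `cap` maps node -> {neighbor: residual capacity}; modified in place."""
    flow = F(0)
    while True:
        # Find an augmenting path from src to snk by depth-first search.
        parent, stack, seen = {}, [src], {src}
        while stack and snk not in seen:
            u = stack.pop()
            for v, c in cap.get(u, {}).items():
                if v not in seen and c > 0:
                    seen.add(v)
                    parent[v] = u
                    stack.append(v)
        if snk not in seen:
            return flow
        # Collect the path edges and the bottleneck capacity.
        path, v = [], snk
        while v != src:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        # Push the bottleneck along the path; add reverse residual edges.
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, F(0)) + b
        flow += b

def simulated(mu, nu, rel):
    """mu is simulated by nu w.r.t. the state relation rel
    iff the maximal flow through the bipartite network equals 1."""
    cap = {"src": {("L", s): p for s, p in mu.items()}}
    for s, t in rel:
        cap.setdefault(("L", s), {})[("R", t)] = F(1)
    for t, p in nu.items():
        cap.setdefault(("R", t), {})["snk"] = p
    return max_flow(cap, "src", "snk") == 1
```

If some left-hand mass has no related right-hand state to flow into, the flow stays below 1 and the simulation check fails, which matches the intuition that probability mass must be fully redistributed along related pairs.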
Suppose in this simple example we have no control states, just these three stack symbols. This rule means: if I have a Z on top, then with probability x I push a Y on the stack, and with probability 1-x I push a D. And similarly, this rule means: if I have a D on top, then with probability x I pop it, which gives the empty word epsilon, and with probability 1-x I push another D. That is how to read these distributions. Good. What is the configuration graph? I do not think I have to go into all the details: it is an infinite probabilistic transition system whose states are the configurations, just as for push-down automata. Same example as before; what is its state space? It actually turns out to be a Bernoulli random walk. Take, for instance, the state Z. With probability x I push a Y onto the Z, which means I go to the right and reach the configuration YZ. From Y I can, for instance, push another Y with probability x, which again means going to the right, reaching YYZ, and so forth. So you get an infinite sequence of configurations. Some key results about probabilistic push-down automata: quite a lot is actually known about fully probabilistic push-down automata, those without nondeterminism, because they are equally expressive as recursive Markov chains, a model by Etessami and Yannakakis. Qualitative PCTL model checking is decidable; qualitative means: is the probability larger than zero, or is it equal to one? Quantitative verification of omega-regular properties is decidable, by work of Esparza, Mayr, and others. And you can also compute certain expectation values. What about bisimilarity?
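The configuration graph of this stateless example can be generated rule by rule. In this sketch the stack symbols are written Z, Y, D; the concrete value x = 1/3 and the rule for Y (push another Y with probability x, pop with probability 1-x) are my reconstruction of the slide:

```python
from fractions import Fraction as F

x = F(1, 3)   # the walk parameter; the actual value is left open in the talk

# Stateless pPDA rules: top stack symbol -> distribution over replacement strings.
rules = {"Z": {("Y", "Z"): x, ("D", "Z"): 1 - x},
         "Y": {("Y", "Y"): x, (): 1 - x},
         "D": {("D", "D"): 1 - x, (): x}}

def successors(config):
    """One step of the configuration graph: replace the top stack symbol
    according to the rules, keeping the rest of the stack fixed."""
    top, rest = config[0], config[1:]
    return {push + rest: p for push, p in rules[top].items()}
```

Starting from `("Z",)` and repeatedly expanding, the Y-configurations grow and shrink by one symbol per step with probabilities x and 1-x, which is exactly the Bernoulli random walk structure described above.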
Well, so far the only result is by Brázdil, Kučera, and co-authors, who showed the following: strong probabilistic bisimilarity between such a probabilistic push-down automaton with nondeterminism and a finite PTS is EXPTIME-complete. What we are going to show is that probabilistic simulation lies in exactly the same complexity class, and this is not the case in the non-probabilistic setting. Okay? That is the main point. So what is the take-home message before I go into more detail? In the non-probabilistic setting, checking whether a push-down automaton is bisimilar to a finite transition system is PSPACE-complete, while checking whether it is simulated by a finite transition system is EXPTIME-complete. In the probabilistic case, both problems lie in the same class, and that is actually the achievement of this paper. So, to state the problem more precisely: we are interested in the computational complexity of checking strong simulation. This is the following decision problem. You give me a configuration of a probabilistic push-down automaton, say of the form p alpha, and you give me a state s of a finite probabilistic transition system. The question is: is p alpha simulated by s, or, in the other direction, is s simulated by p alpha? We consider both directions; either way, the problem is EXPTIME-complete. And if you fix the number of control states and the number of states of your transition system, then both problems can actually be solved in polynomial time. Good. What is the approach? The naive approach would be the following. We know that checking simulation between an ordinary push-down automaton and a finite labeled transition system is EXPTIME-complete. Okay? So we could just take the proof technique for that result, carry it over to the probabilistic case, and be done.
Well, that unfortunately does not work. Why not? Because the proof of that result is based on verifying modal mu-calculus formulas against the PDA; it relies on the fact that you can model-check the modal mu-calculus on PDAs. So if you wanted to do something like this in the probabilistic case, you would need something like quantitative model checking on probabilistic PDAs, but as we already saw, that is not decidable. So this approach does not work. Okay? Good. So what we did instead is apply a technique developed by Stirling in 2000. Stirling called it the extended stack technique, and he used it for checking strong bisimulation between push-down automata. Okay? The idea is the following. You take an extended stack symbol u, which is actually a function: this extra symbol that you add to your stack alphabet maps each control state to something. In Stirling's construction, u maps a control state q to a configuration p alpha to which it is bisimilar. We adapted this as follows, and in the proof this is quite a drastic change: u now maps a control state q to a set of states of the finite probabilistic transition system. Okay? So we no longer map to configurations. The idea is the following: if qX alpha is simulated by s, then we add the stack symbol u where the set u(q) is exactly the set of states s such that q alpha is simulated by s. You add such functions u to your ordinary stacks, and then you allow everything that you would normally be allowed to do with stack symbols. On top of this, we devised a tableau method for deciding probabilistic similarity, which gives the following theorem: s simulates the configuration on the left if and only if there is a successful tableau whose root is exactly what you want to prove.
And then we use a refinement technique over that tableau to obtain the EXPTIME complexity bound. The hardness result is actually trivial, or almost trivial, because it follows directly by reduction from the non-probabilistic case, using Dirac distributions; that is not a big deal. The real work is the adaptation of Stirling's extended stack technique and the development of the tableau technique. I do not have much time to go into the tableau technique, but I would like to give you some flavor of how it works. This is the probabilistic transition system we have seen before, a four-state machine, and this is an example of a probabilistic push-down automaton with three control states: r, p, and q. What you see, for instance, is that from the configuration pX, on input symbol b, with probability one half you go to q, and with probability one half you pop X and go to control state r. So there are probabilities to go from the middle sequence of states to the uppermost or lowermost sequence of states. Now, what is the claim? The claim is that px^(n+1) is simulated by s1. What does that mean? Take, for instance, px^2; the claim is that this is simulated by s1. Okay? Good. How to prove this? Well, this is a bit involved: you have to show that a certain relation is a probabilistic simulation. And how does it work? We use the tableau technique. This is what we need to prove. We have some extended stack symbols; remember, these map control states to sets of states of the finite transition system. So here are two extended stack symbols, u and v. And then we proceed as follows. You have to read it like this: if these premises hold, then I can conclude what is above the line. This is probably not the way you are used to, but in these tableaux one works from bottom to top.
So this is a reduction rule that tells you: if this holds and this holds, and in addition I know that this holds, then I can conclude this. How does it work? Well, assume that pxu is simulated by s1. Then, for every state s in the set that the extended stack symbol u assigns to a control state, you have to check that the corresponding relation holds. Okay? If you check this for p, it is easy, because p is mapped onto the empty set, so there is nothing to check. q is mapped to s2; that means I need to check whether qx is simulated by s2. And r is mapped to s3, so the last thing I have to check is whether rx is simulated by s3. That is what you do, and then of course you have to check whether these two claims hold. That is not difficult, because it follows by unfolding: you know that these two transitions are available in the probabilistic push-down automaton, and from them you can conclude that qx is simulated by s2, and so forth. These nodes are meant to be green. Why? Because they are in the simulation relation, so they are successful tableau nodes: I know these conditions are true and do not have to deduce any further. And if you continue like this, and I do not want to go through all the steps, you end up with a finite tableau. That shows that the whole procedure is decidable, and if you combine this with the refinement technique, you get the exponential-time upper bound. Okay? Good. One final bit: combined transitions. If you look at these two pictures, the one on the left is not simulated by the one on the right, and the problem is, of course, that the numbers 0.9 and 0.1 differ from the 0.8 and 0.2 over here.
But the leftmost a-transition is actually a convex combination of the transitions on the right-hand side. All right? Here you see an a-transition that moves to that state with 0.9, and an a-transition that goes here with 0.5. If you now take three quarters times the first probability plus one quarter times the second, so a convex combination of the two, you get exactly the 0.8 over here. So they behave essentially the same, and if you take a policy, or strategy, or adversary to resolve the nondeterminism, then any probabilistic one would allow you to form exactly such a convex combination. This gives rise to what is called a combined transition. A combined transition is, intuitively, a transition built as a convex combination of other transitions. Okay? So there is a combined transition from S to a distribution mu if there exist distributions mu_i and coefficients d_i such that there is a transition to each mu_i, the d_i sum up to 1, and mu is exactly the convex combination, the sum of d_i times mu_i. In combined simulation you then say the following: if S can do a transition to mu, then T can do a combined transition to a matching distribution. Okay? So in the conclusion of the simulation condition you are more liberal and allow combined transitions. What we can show is that all the results of this paper carry over to checking combined simulation as well. So that brings me to the end. If you want to remember one thing from this talk, it is this: in the non-probabilistic case, bisimulation checking between a push-down automaton and a finite transition system is PSPACE-complete, and simulation checking is EXPTIME-complete. In the probabilistic case, both bisimulation and simulation checking lie in the same complexity class.
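The convex-combination arithmetic from the combined-transition example is easy to verify: 3/4 · 0.9 + 1/4 · 0.5 = 0.8, and likewise 3/4 · 0.1 + 1/4 · 0.5 = 0.2. A small sketch, where the target-state names "hit" and "miss" are mine:

```python
from fractions import Fraction as F

def combine(dists, coeffs):
    """Convex combination of distributions: sum_i d_i * mu_i."""
    assert sum(coeffs) == 1 and all(d >= 0 for d in coeffs)
    out = {}
    for mu, d in zip(dists, coeffs):
        for s, p in mu.items():
            out[s] = out.get(s, F(0)) + d * p
    return out

mu1 = {"hit": F(9, 10), "miss": F(1, 10)}   # the 0.9 / 0.1 transition
mu2 = {"hit": F(1, 2),  "miss": F(1, 2)}    # the 0.5 / 0.5 transition
```

Then `combine([mu1, mu2], [F(3, 4), F(1, 4)])` yields exactly the 0.8 / 0.2 distribution, so the left-hand transition is matched by a combined transition on the right.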
What else could you study? Well, you could of course study weak simulation. You could look at other kinds of models, like probabilistic visibly push-down automata, where you restrict the way of pushing and popping on the stack, and you could look at special cases like one-counter or stateless automata, and so forth. That's it.