The talk is about the reachability problem for the classic model of timed automata. Maybe this is overkill, but still: what is a timed automaton? It is, roughly, an automaton with clocks. You have a finite number of states; here s0, s1, s2, s3 are the states. You have a finite number of actions; here a, b, c, d are the actions. In addition, you have a finite number of clocks; here x and y are the clocks. The clocks are assumed to have value 0 at time t = 0, and they increase at the same rate as time progresses. Of course, because this is an automaton, there are edges between states. What is on an edge? You have an action, and additionally you can have a constraint over clocks. For example, this constraint says that when the automaton is in state s1, it can take this transition only if the value of x at that time is less than 1. Now, to get something more meaningful out of a finite description, you also have the power of resetting a clock back to 0 when you take a transition. This reset, in fact, is what allows you to measure the time duration between events. So this is what a timed automaton is. Now, what is a run of a timed automaton? It is just a finite sequence of transitions; we are interested only in finite sequences here. Starting from the initial configuration, where the state is s0 and both x and y are 0, suppose the automaton spends 0.4 time units in s0. The values of x and y increase to 0.4, and then it takes the transition: the value of x stays the same, but the value of y becomes 0 because it was reset. And so on; I am just explaining what is written on the slide. This is a run of the automaton, and the run is said to be accepting if it ends in a green state. Green state, final state, accepting state, call it whatever you want.
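A picture like this can also be stated as a tiny program. The sketch below is a minimal Python rendering of a timed automaton and of executing a run; the state names, actions and the two concrete transitions are illustrative assumptions, not the exact automaton on the slide.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    src: str
    action: str
    guard: object          # predicate over clock valuations
    resets: frozenset      # clocks set back to 0 on this transition
    dst: str

# two illustrative transitions, not the exact automaton on the slide
TRANSITIONS = [
    Transition("s0", "a", lambda v: True, frozenset({"y"}), "s1"),
    Transition("s1", "b", lambda v: v["x"] < 1, frozenset(), "s2"),
]

def run(steps, clocks=("x", "y"), init="s0"):
    """Execute a finite sequence of (delay, action) pairs; return the final
    (state, valuation), or None if some guard is not satisfied."""
    state, v = init, {c: 0.0 for c in clocks}
    for delay, action in steps:
        v = {c: val + delay for c, val in v.items()}        # time elapse
        t = next((t for t in TRANSITIONS
                  if t.src == state and t.action == action and t.guard(v)),
                 None)
        if t is None:
            return None
        v = {c: 0.0 if c in t.resets else val for c, val in v.items()}
        state = t.dst
    return state, v
```

Running `run([(0.4, "a"), (0.3, "b")])` reproduces the run from the slide: after 0.4 time units both clocks are 0.4, the first transition resets y, and the guard x < 1 is still satisfied on the second transition.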
So what are we interested in? Given an automaton, does there exist an accepting run? A very natural question, and this problem has been solved: it is known to be decidable and, additionally, PSPACE-complete. This was proved by Alur and Dill in the paper introducing the model. Just a small recap of the solution. Where does the challenge lie? You are given an automaton, and you want to know whether, starting from the initial state, you can follow a path to an accepting state. At each step of this path, you want to know whether the next transition is enabled or not, and for that you need to know which clock valuations reach that point. This immediately says we need a good, effective way to handle this uncountably infinite space of clock valuations. That is the challenge. So what do Alur and Dill say? They say: take the constants from the automaton and, based on them, partition this infinite space into a finite number of regions. What is the property of this region construction? From any two valuations of the same region, you can take the same sets of paths, so there is no point in distinguishing those two valuations; you might as well consider them as a single bundle. Once you have defined these regions, you take a product with the states of the automaton and build what is called the region graph. It is a very natural transformation, and the region graph is known to be sound and complete with respect to reachability: if there is an accepting run in the region graph, there is an accepting run in the automaton, and vice versa. But unfortunately, the region graph is too big to handle.
So you want something better. A more intuitive and efficient solution is to actually collect all the valuations that reach a particular step. Let me work out an example. There are two clocks, x and y, and initially both are 0. You let time elapse at q0. What happens? The values of x and y change, but they change at the same rate, so they stay equal; you collect all the valuations with x equal to y. Now the automaton wants to take the next transition, but only the valuations with x less than or equal to 5 can go across, so those are the ones that cross. You elapse time again and do the same thing: only the valuations with y greater than 7 can cross, and when they cross, the value of x gets reset. What is the point of showing this entire contraption? If you look at the valuations that reach a particular state of the automaton, there is something nice about them: they can be described by simple constraints involving a clock, or the difference of two clocks, compared to a constant. And this is not just a phenomenon happening here; this is what happens in general, for every automaton. This is the intuition for defining what are called zones. A zone is just a set of valuations defined by two kinds of constraints: either a clock compared to a constant, or the difference of two clocks compared to a constant. For example, this is a zone. Because of this nice structure, zones can be represented by difference bound matrices (DBMs), which come with efficient operations, so representing and working with zones is a cakewalk. Then we build the zone graph: as on the previous slide, you start collecting all the valuations and explore the automaton step by step. This is a much more efficient solution, and it is also known to preserve state reachability; it is sound and complete with respect to reachability.
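The zone operations used in this walkthrough can be sketched with a toy difference bound matrix. In the sketch below, entry m[i][j] is an upper bound on clock_i - clock_j, with index 0 reserved for the constant-zero reference clock; for simplicity it keeps only the bound values and ignores the strict versus non-strict distinction that a real DBM implementation tracks.

```python
INF = float("inf")

def canonical(m):
    """Tighten all bounds (Floyd-Warshall closure)."""
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return m

def zone_zero(n_clocks):
    """The zone where all clocks equal 0."""
    return [[0] * (n_clocks + 1) for _ in range(n_clocks + 1)]

def elapse(m):
    """Time elapse: drop the upper bounds clock_i - 0 <= c."""
    for i in range(1, len(m)):
        m[i][0] = INF
    return m

def constrain(m, i, j, c):
    """Intersect with the constraint clock_i - clock_j <= c."""
    m[i][j] = min(m[i][j], c)
    return canonical(m)

def reset(m, i):
    """Set clock_i back to 0 (m is assumed canonical)."""
    for j in range(len(m)):
        m[i][j], m[j][i] = m[0][j], m[j][0]
    return m
```

With x as clock 1 and y as clock 2, the example from the slide becomes: start from `zone_zero(2)`, `elapse`, then `constrain(m, 1, 0, 5)` for the guard x <= 5, `elapse` again, `constrain(m, 0, 2, -7)` for y >= 7 (written as 0 - y <= -7), and finally `reset(m, 1)`.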
There is just one small problem: this might not terminate. If you keep doing it, it might not terminate. An example: you reach state q1 with both clocks equal to 0. You elapse time, as that is what you have to do. Now you want to take the self-loop. Only the valuations with y equal to 1 can take it, and after taking the transition, y is set back to 0. So what happens? You get this valuation. You elapse time again and arrive at a new zone. Now you want to know: from this new zone, if I take this transition, do I reach something new? Is something dramatic going to happen? No. And you end up visiting an infinite number of zones; actually, you never end, and that is the problem: the algorithm is non-terminating. So instead of blindly exploring the zone graph, you abstract each zone; that is, instead of considering the zone itself, you inflate it with respect to an abstraction function. What have you done? You have added lots of extra valuations. The aim is to come up with a finite graph that represents the behavior of this potentially infinite zone graph. Now that you have added extra valuations, you have to make sure they do not do anything more: if you were not able to take a transition before, you should not be able to take it now, and of course, if you were able to take it before, you should still be able to take it. And the successor, when abstracted, should again be sound, and this goes on and on. So the aim is to come up with an abstraction function that makes this graph finite. And the coarser your abstraction is, that is, the more you inflate, the better it is for you, because you hope to get a smaller finite graph. So what is the challenge?
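The divergence in this example can be made concrete with a schematic enumeration. Taking the self-loop to have guard y = 1 and a reset of y, as on the slide, the zone at q1 after k loops is x - y = k together with y >= 0: a fresh zone every time, so a naive exploration never terminates. The encoding of zones as tuples below is an invented simplification for illustration.

```python
def zones_at_q1(n_iterations):
    """Enumerate the (symbolic) zones seen at q1 while looping.  Each time
    the loop fires, time elapses until y == 1 and then y is reset, so x has
    advanced by one more unit than y and the difference x - y grows by 1."""
    zones = set()
    diff = 0                                  # the constraint x - y = diff
    for _ in range(n_iterations):
        zones.add(("x - y =", diff, "y >= 0"))
        diff += 1                             # take the self-loop once more
    return zones
```

Every iteration contributes a zone never seen before, which is exactly the non-termination the abstraction is meant to cure.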
Come up with coarser abstractions, keeping the soundness constraint in mind. And this has been done: these are some nice abstractions that have been defined in the literature, and the slide also shows the hierarchy with respect to inclusion. For a given zone, this Extra+ abstraction is contained in the closure, which in turn is contained in this biggest one, and these two are incomparable with respect to inclusion. By the argument of the previous slide, you actually want to make use of the biggest abstraction; you want to come up with a way to use the biggest one. Of course, all these abstractions are known to be sound and complete. There is just one more constraint: you want your abstraction to yield a set that is easy to handle. So you add this extra constraint: in addition to soundness and completeness, you want your abstraction to give you back a zone, because we know how to handle zones very well. Why not force this constraint and say that your abstraction should give you back a zone? And indeed, only these two abstractions have been used in implementations. They are called convex abstractions, because a zone is a convex set, so an abstraction yielding a zone again is called a convex abstraction. So far I have been talking about history. In this talk, we will see how to use the non-convex closure abstraction; it turns out that this is the first non-convex abstraction to be used efficiently. So, what is closure? This diagram might be familiar to everyone who knows about timed automata. This is the xy plane, divided into regions: the region abstraction defined in the very first paper of Alur and Dill. I have just printed it here. Now, suppose this is the zone. What is its closure? It is just the union of all the regions that intersect the zone. You can clearly see that this is non-convex.
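The notion of a region, and of closure membership, can be sketched as follows. The signature format and the use of a single bound M for all clocks are simplifying assumptions of this demo; the real construction uses per-clock constants.

```python
import math

def region(valuation, M):
    """Alur-Dill region signature of a clock valuation: per-clock integer
    parts (clocks above M are lumped together), the clocks with zero
    fractional part, and the ordering of the nonzero fractional parts."""
    ints, fracs = {}, {}
    for c, v in valuation.items():
        if v > M:
            ints[c] = ">M"                     # above M, only '> M' matters
        else:
            ints[c] = math.floor(v)
            fracs[c] = round(v - math.floor(v), 9)
    zero = tuple(sorted(c for c in fracs if fracs[c] == 0))
    nonzero = {}
    for c, f in fracs.items():
        if f != 0:
            nonzero.setdefault(f, []).append(c)
    # clocks grouped by equal fractional part, groups in increasing order
    groups = tuple(tuple(sorted(cs)) for _, cs in sorted(nonzero.items()))
    return (tuple(sorted(ints.items())), zero, groups)

def in_closure(v, zone_points, M):
    """v lies in Closure(Z) iff the region of v intersects Z, i.e. some
    point of Z has the same region signature as v."""
    return any(region(p, M) == region(v, M) for p in zone_points)
```

Two valuations are region-equivalent exactly when their signatures coincide, so closure membership reduces to a signature comparison against (representatives of) the zone.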
So the closure can potentially yield non-convex sets. How do we use it? The standard way is to build the reachability tree of your zone graph: you explore the zones one by one, abstracting at each point, and when you reach a zone that is known to be contained in some other zone, you do not explore it further, because whatever you can do from here can be done from there; you would just be wasting your time. This kind of direct inclusion between two zones is very efficient: there is a quadratic algorithm in the number of clocks. But why not plug closure into this? Because closure is not convex, you cannot just plug it in; rather, we do not know how to store closures as efficiently as zones. So let us make use of the efficient representation of zones and not store the abstracted zones at all. But then the exploration can potentially be infinite. To avoid that, use the closure for termination: instead of checking plain inclusion, check whether your zone is included in the closure of some other already visited zone. What does this imply? It means that every region that intersects this zone also intersects that zone, so you gain nothing more by exploring this one; you might as well stop here. This calls for a very efficient algorithm for checking inclusion of a zone in the closure of another zone. Look at the number of inclusion checks you need to do: each time you come up with a new zone, you have to check whether it is included in the closure of some other zone. So you want it to be at least as close to the plain inclusion test as possible, because in the previous algorithm we were just using plain inclusions, and now we need inclusions with respect to closure.
So you want an efficient inclusion test, at least as close as possible to the plain inclusion algorithm. This is one of the main contributions of the paper: given two zones Z and Z', checking whether Z is contained in the closure of Z' can be reduced to checking the same question on all the projections onto two variables. Suppose you project onto the xy plane: if this projection is not included in the closure of that projection, then of course the inclusion fails. On the other hand, if it holds on all such projections, you can conclude that Z is indeed included in the closure of Z'. That is the non-trivial part, and we should thank a very good observation made by Patricia Bouyer in her paper "Forward Analysis of Updatable Timed Automata": she uses an observation in the proof of some lemma, and that observation, when exploited, gives you this. So, nice, you can decompose the check into projections. What have we achieved? The number of projections is just a mere |X| squared, so the complexity of this algorithm is quadratic in the number of clocks. Why did I say "mere"? Because this is the same complexity as checking whether one zone is included in another. So checking whether Z is contained in the closure of Z' is as cheap, or as costly, as checking whether Z is included in Z'. I am not going to go through the proof, of course. So here is what we have achieved: we compute the normal zone graph instead of abstracting at each point, and we use the closure for termination, aided by the very efficient inclusion test that we have. Now, you see that the closure depends on this parameter alpha. What is alpha? It is just a bound function that associates a constant to every clock. The smaller this bound function, the bigger the closure, and the happier you are.
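The semantic statement behind the test, that Z is in the closure of Z' exactly when every region intersecting Z also intersects Z', can be checked brute-force on two clocks by scanning grid points and comparing region signatures. This demo is only the definition, not the paper's quadratic projection-based algorithm; the grid step, the single bound M, and the zone-as-predicate encoding are assumptions made for illustration.

```python
def region2(x, y, M):
    """Region of a two-clock valuation; a single bound M for both clocks
    is an assumption of this demo."""
    fx, fy = round(x % 1, 6), round(y % 1, 6)
    ix = (">M",) if x > M else (int(x), fx == 0)
    iy = (">M",) if y > M else (int(y), fy == 0)
    # ordering of the fractional parts, irrelevant once a clock exceeds M
    rel = "-" if x > M or y > M else \
          ("<" if fx < fy else "=" if fx == fy else ">")
    return (ix, iy, rel)

def closure_included(zone, other, M, step=0.25):
    """Check Z <= Closure(Z'): every region touching `zone` also touches
    `other`.  Brute force over grid points, for demonstration only."""
    grid = [i * step for i in range(int((M + 1) / step) + 1)]
    regions = lambda pred: {region2(x, y, M)
                            for x in grid for y in grid if pred(x, y)}
    return regions(zone) <= regions(other)
```

For instance, a small diagonal segment sits inside the closure of the open diagonal zone, while a horizontal segment does not, even though neither is included in the other as plain sets.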
So the next quest is to look for a good bound function: come up with as tight a bound function as possible. Again, there is some history to this. The very naive way is to look at your automaton and take, for each clock, the maximum constant occurring in some guard. So for x you put 14, and for y you put 10^6, and with this you get a huge graph. OK, I will let the spoiler out anyway: there is a better way of associating constants. You see that 10^6 is relevant to y only in this state, and to come to that state you have to come through q2. But when you take this transition, you are actually resetting y, so the constant 10^6 is hidden by this reset at q2: you do not have to associate 10^6 to y even at q2. So you associate different bound functions to different states of the automaton. This reduces the size of the zone graph by a good amount; it is called static analysis of the constants. But actually even this is not enough. Why? Look at this example. By the static analysis, since y is not reset, it gets the bound 10^6. But a little careful observation shows that this q2 is not reachable at all: here you are asking x to be greater than or equal to 2, and here you are asking it to be less than 1, so the timed automaton can never come to q2. So instead of associating a bound function to every state q of the automaton, we want to associate a bound function to every node (q, Z) of the zone graph, since the zone graph gives you the actual behavior of the automaton. At a node, the maximum relevant constant comes from the paths below it: you look at every path and collect the maximum constant occurring before a reset. That is sufficient.
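The static analysis described here can be sketched as a fixpoint: a guard constant on clock c is relevant at a state if c can reach that guard without being reset on the way. The state names, guard encoding, and constants below mirror the example in the talk but are otherwise made up for illustration.

```python
from collections import defaultdict

def static_bounds(states, edges):
    """edges: (src, guards, resets, dst) with guards = {clock: constant}.
    Returns bounds[q][c], the maximal constant relevant for clock c at
    state q (-inf if none).  Guard constants are recorded at the source
    state and propagated backwards, except across edges resetting c."""
    bounds = {q: defaultdict(lambda: float("-inf")) for q in states}
    changed = True
    while changed:
        changed = False
        for src, guards, resets, dst in edges:
            for c, k in guards.items():             # guard seen at src
                if k > bounds[src][c]:
                    bounds[src][c] = k; changed = True
            for c, k in list(bounds[dst].items()):  # propagate backwards
                if c not in resets and k > bounds[src][c]:
                    bounds[src][c] = k; changed = True
    return bounds
```

On the talk's example, the reset of y on the way into q2 hides the huge constant: q2 itself needs 10^6 for y, but the states before the reset do not.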
Now, how do we do this on the fly? Initially, assume I have not seen anything: my bound function is minus infinity for every clock. Now I see some guard, say with constant 3. I let my node know that I have seen 3, so I update its bound function, and I let its ancestors know as well: you do a propagation. Now, suppose you have explored further, and it turns out that some node's zone is contained with respect to the closure of this zone. Fine, it is contained, but it is contained with respect to the current alpha, and this alpha can change during the course of the algorithm, right? So what do you say? You say: with the information I have now, I know it is contained, so let me call it a tentative node; I am in no urgency to explore it. Let me just stop and make this a tentative edge. Then, when I learn new constants, in addition to letting my ancestors know, I also let the tentative nodes know. The rationale for not exploring a tentative node is that whatever you could explore from it, you can do from the covering node, so the subtree there would not give you a bigger constant. Hence you copy the constants to your tentative nodes too. Note that even for a disabled edge you have to let everyone know about the constant; for a reset, of course, there is no problem. So the algorithm proceeds in phases: you do an exploration and get lots of tentative nodes; then you check whether these tentative nodes are still covered, and if not, you explore them. Exploration, then resolution, then exploration, then resolution, and so on, and you stop when all the tentative nodes are still covered. This procedure is also finite, because the constants you obtain here will never be greater than the constants from the static analysis.
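A schematic version of this propagation might look as follows. The node structure, the covering list, and the function name are simplifications invented for illustration, not the paper's actual data structures; in particular, the exploration and resolution phases are left out.

```python
class Node:
    """One node of the reachability tree (schematic)."""
    def __init__(self, name, parent=None, reset_on_edge=frozenset()):
        self.name, self.parent = name, parent
        self.reset_on_edge = reset_on_edge  # clocks reset on edge parent -> self
        self.bound = {}                     # clock -> max constant seen below
        self.covered = []                   # tentative nodes covered by this one

def learn(node, clock, k):
    """Record that constant k on `clock` is relevant at `node`: push it to
    the ancestors until the clock is reset on an incoming edge, and mirror
    it to the covered (tentative) nodes, which propagate it in turn."""
    while node is not None and node.bound.get(clock, float("-inf")) < k:
        node.bound[clock] = k
        for t in node.covered:              # tentative nodes inherit the bound
            learn(t, clock, k)
        if clock in node.reset_on_edge:     # a reset hides the constant upward
            break
        node = node.parent
```

The guard `bound < k` makes the procedure terminate: bounds only ever increase, so a node already at k stops the recursion.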
So this procedure is going to terminate, and finally you have a tree in which the bound function of a node is bigger than the bound functions of its children, and the bound function of a tentative node equals that of its covering node. The theorem is that this procedure is correct. Overall, what do we have? You compute your zone graph and use this efficient closure test for termination, and additionally you gain something because you are calculating the bound functions on the fly. Of course, by just giving an inclusion test for closure, we have not yet come up with something better than what existed before, because the best abstraction used in current implementations is this one, and what we have is a test for this one. But we could also compose the two to get an abstraction that subsumes both of them, and still come up with an efficient test for the composed abstraction. So we can claim that we have a non-convex abstraction that subsumes all the ones used in practice, and we are also calculating the bounds on the fly. So, some experimental results. We have a prototype in which we checked some standard benchmarks. We implemented our algorithm, which uses this abstraction and the on-the-fly bounds, and we also implemented UPPAAL's algorithm; we considered UPPAAL because it is, in some sense, the closest to our theory. So we implemented UPPAAL's algorithm in our tool: the abstraction that UPPAAL uses, with the constants obtained by static analysis. We also ran the UPPAAL tool itself. Now, there are some blanks in the table: these show that after a considerable amount of time, the tool did not stop. Here we have the number of nodes, and we also have the time. Before you jump to any conclusion, this does not mean that our tool is better than UPPAAL.
We suspect this happens simply because of a different order of exploration: the inclusions happen in a different way, and it unfortunately ends up taking a lot of zones. But our aim is to compare the effect of the abstraction and the on-the-fly bounds. So we implemented UPPAAL's algorithm in our prototype and enforced the same order of exploration in both, and then we looked at the number of nodes obtained and the time taken. As you can see, there is a good decrease in the number of nodes: this abstraction and the on-the-fly bounds really do give a decrease. And why did we give the time column? We want to show that the extra work we are doing, the constant propagation, the inclusion tests and so on, does not cost an overhead in time. In fact, for the last two examples we even gain time. In particular, in the FDDI example the DBMs are huge, and still the closure inclusion manages to give you a good decrease; the closure inclusion really is efficient. Of course, for the CSMA example there is an overhead, though not a very high one. One last comment: as you can see, our prototype cannot handle the bigger models, which UPPAAL of course manages beautifully. So we strongly think our algorithm would benefit UPPAAL if implemented inside it. This brings me to the end of my talk. To summarize: I showed that we can efficiently use a non-convex abstraction, with efficiency as good as that of plain inclusions, and that we can also learn the parameters of the abstraction on the fly. So you are exploiting the semantics to the fullest: you learn which constants are relevant while you are actually doing your reachability analysis. Now, this leads to a lot of other things to look at.
In particular, now that you have decided to propagate some information while exploring, why propagate only constants? Can you propagate something more? Also, so far we have only been talking about automata without diagonal constraints; can we extend this to automata with diagonal constraints? That is the next question. I am done, thanks.