Hello. My talk is about a tight lower bound for Streett complementation. This is joint work with Yang Cai.

First, the introduction. We have heard a lot about model checking since the beginning of this conference, so I figured I would talk about automata-theoretic model checking. Basically, the system is represented as an automaton, and the property is also represented by another automaton. Roughly speaking, verification reduces to a language containment problem, which in turn reduces to two operations. The intersection is fairly trivial, so complementation is crucial to the overall efficiency of the algorithm. For this reason, complementation is a fundamental problem in automata theory. When we talk about models of infinite computation, it is natural to ask whether the class is closed under complementation and how efficient complementation is.

During the past almost 50 years there has been steady improvement on Büchi complementation algorithms, until fairly recently, in 2009, Schewe obtained an upper bound of n² · L(n), where L(n) is roughly (0.76n)^n. This matches Yan's lower bound, which is also L(n). So for Büchi complementation the bound is tight up to a polynomial factor.

But there are ω-automata beyond Büchi, and two kinds are very important. One is the Streett automaton (this is not a typo), and the other is the Rabin automaton. Streett automata can express strong fairness conditions: if a transition is enabled infinitely often, it is taken infinitely often. The Rabin automaton is the dual of the Streett automaton; it can express something called fair termination: every infinite computation is unfair, which means the program terminates fairly. OK, so there is complementation beyond Büchi.
Through a sequence of results by Kupferman and Vardi, we have known complementation procedures for Rabin automata, Streett automata, and generalized Büchi automata. Meanwhile, we have corresponding lower bounds: because all these automata generalize Büchi automata, Yan's lower bound also applies to Rabin and Streett, and Yan's 2006 paper also gave a lower bound for generalized Büchi. But there is still a big gap for Streett and Rabin, because here we have the parameter k, called the Rabin index or the Streett index depending on the context. This k can be as large as 2^n, and because k appears in the exponent of all these complexities, the bounds can be doubly exponential in terms of n.

In previous work we obtained a result on Rabin complementation. The notation is messy, but roughly speaking it says that Kupferman and Vardi's construction is optimal: we cannot do much about this k. It can be 2^n, and it has to sit in the exponent. So in this talk I am going to present the result on Streett complementation, for which we now have a tight bound, with a Θ in the exponent. Here we discuss how to get the lower bound; the upper bound was published at CSL this year.

Now a quick introduction to notation. What is an ω-automaton? It is just a classical NFA, a nondeterministic finite automaton, with a special acceptance condition. Everything is the same as before, except that the component F, called the acceptance condition, can take many forms, and ω-automata are classified according to their acceptance conditions. For Büchi, Inf(ρ) denotes the set of states visited infinitely often by the run ρ; if ρ visits F infinitely many times, then ρ is accepting.
For Streett, the condition F is a tuple of pairs of state sets. The length of the tuple is called the index. Suppose we have k pairs, each written ⟨G_i, B_i⟩. The Streett condition says: for all indices i, if ρ visits G_i infinitely often, then it also visits B_i infinitely often. The Rabin condition is just the dual of the Streett condition: negating it, we get that there exists an index i such that ρ visits G_i infinitely many times but visits B_i only finitely many times.

Here is an example of a Büchi automaton. Say I set F to be this set of states; then what language does this automaton accept? It accepts all words that contain infinitely many b's. Similarly, if I make a small change and turn F into a tuple with two pairs, then it accepts almost the same language as before, except that the words must have alternating a's and b's. The complementation problem is basically to construct another automaton that accepts exactly the complementary language.

Now it is time to talk about the lower bound. It is based on the synthesis of three proof ideas. The first is called the fooling set. I am sure this concept was used and appeared much earlier than 1996, but we found that this paper gave a good presentation. A fooling set basically identifies certain runs with dual properties, which we call fooling runs: when we paste these runs together in certain ways, they induce non-accepting runs. In the next slides we will talk about each idea in detail.

The second idea is the full automaton. The name is kind of a misnomer; there is nothing "full" about it. The idea is this: since the ultimate goal is to construct contradictory runs, why not start with the runs directly and build the words later? In the traditional way, we first find a word and then obtain runs.
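Since the Streett and Rabin conditions are exact duals, they can be checked mechanically once Inf(ρ) is known. Here is a minimal sketch; the state names and the pair are made up for illustration, not taken from the slides:

```python
# Sketch: checking Streett and Rabin acceptance, given Inf(rho),
# the set of states a run rho visits infinitely often.
def streett_accepts(inf, pairs):
    # For every pair (G, B): if rho visits G infinitely often,
    # it must also visit B infinitely often.
    return all(not (inf & G) or bool(inf & B) for G, B in pairs)

def rabin_accepts(inf, pairs):
    # Dual condition: some pair (G, B) has G visited infinitely
    # often while B is visited only finitely often.
    return any((inf & G) and not (inf & B) for G, B in pairs)

pairs = [({"q1"}, {"q2"})]          # hypothetical single pair <G_1, B_1>
print(streett_accepts({"q1", "q2"}, pairs))  # True
print(rabin_accepts({"q1"}, pairs))          # True: G_1 seen, B_1 avoided
print(streett_accepts({"q1"}, pairs))        # False
```

For the same pairs, the two functions always return opposite answers, which is exactly the duality mentioned above.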
But the word may be so sophisticated that it is very hard to find, and since the final goal is the run, let us start with runs and worry about the words later. Even with these two weapons, things are still not easy, because an explicit description of a fooling run may still be hard to construct. This is Yan's breakthrough in 2006: the essential properties of a fooling run can be characterized using rankings, which further reduces the workload of the construction.

Let us first define the fooling set; this is just one variant of the definition. A set of pairs of words is called a fooling set for a language L if, when the indices are the same, joining the two words gives a word in L, while if i ≠ j, then x_i joined with y_j does not belong to L. The theorem says that if a language L has a fooling set of a certain size, then any NFA recognizing L must have at least that many states. That is how we get lower bounds.

Now, full automata. There are four points I want to emphasize. As I said, there is a difficulty: a fooling word could be long and hard to guess right at the beginning, so we focus on runs instead. The key concept is a lifting: every possible unit transition graph is treated as a letter, so there is no difference between words and graphs. A graph is a word, and a word is a graph. The power of this idea was demonstrated long before: in this paper, the 2^n lower bound for complementing NFAs is very short and easy to read.

OK, so I have talked about full automata; now let us talk about Δ-graphs. What is a Δ-graph? Let us skip the formal definition; it is easy to understand from this demonstration. Basically, we can forget about the left-hand side, because all the information has been encoded here, and suppose every letter appears in this word.
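To make the fooling-set theorem concrete, here is a small self-contained sketch on a toy finite-word language (my own illustration, not the talk's ω-language construction): L = { u#u : u ∈ {0,1}^K }. The pairs (u#, u) form a fooling set of size 2^K, so any NFA for L needs at least 2^K states:

```python
# Sketch of the fooling-set argument: x_i y_i is in L, and
# x_i y_j is not in L whenever i != j; any NFA for L then needs
# at least as many states as there are pairs.
from itertools import product

K = 3

def in_L(w):
    if "#" not in w:
        return False
    u, _, v = w.partition("#")
    return len(u) == K and u == v

fooling = [("".join(u) + "#", "".join(u)) for u in product("01", repeat=K)]

ok = all(in_L(x + y) == (i == j)
         for i, (x, _) in enumerate(fooling)
         for j, (_, y) in enumerate(fooling))
print(ok, "=> any NFA for L needs at least", len(fooling), "states")
```

The check confirms the fooling property mechanically; the lower bound itself follows from the theorem, not from the code.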
Basically, this is a unit bipartite graph, and this is another one. The unit graph characterizes the full transition relation of the automaton with respect to a specific letter. What the full automaton means is that I can draw any picture I like and treat that graph as a letter of a word.

Now ranking, the last part. Rankings represent properties of graphs, and the diversity of rankings is a complexity measure. Since Klarlund introduced rankings in 1991, many improvements have been obtained by using them; basically, complementation constructions for ω-automata of all common types have been discovered by Kupferman and Vardi. Then Yan in 2006 did a kind of reverse engineering and proved the lower bound for Büchi complementation by using rankings to construct a fooling set.

How much time do I have, three minutes or fifteen? Let us quickly go over what ranking-based complementation is. The idea is this: on this graph, every vertex at each level is associated with an integer value. The association at a level can be viewed as a function mapping the state set to the integer domain; this function is called a co-T level ranking. If we want to complement an automaton of type T, the rankings we are looking for are called co-T rankings. Co-T rankings are required to satisfy certain local properties, which can be characterized by a step-by-step check of the automaton. A special kind of co-T rankings, called odd co-T rankings, capture global properties, and these global properties can be characterized by a Büchi condition. That is how the whole complementation is done. This is a generic complementation scheme: you can see that F′ is where we check the Büchi condition, and Δ′ is defined to capture the local properties. I will not read all the details, but this is basically the scheme of the complementation.
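As a sketch of the word-is-graph view, assuming a made-up three-state automaton: each letter is itself a set of edges on the state set, and reading a word just means chaining these bipartite graphs level by level.

```python
# Sketch: in a full automaton, a "letter" is an arbitrary transition
# relation, represented here as a set of (source, target) edges.
# A word is a sequence of such graph-letters.
g1 = {("q0", "q0"), ("q0", "q1"), ("q1", "q2")}  # hypothetical letter
g2 = {("q1", "q1"), ("q2", "q0")}                # another letter

def reach(word, frontier):
    """States reachable after reading a word of graph-letters,
    starting from the given frontier of states."""
    for letter in word:
        frontier = {t for (s, t) in letter if s in frontier}
    return frontier

print(reach([g1], {"q0"}))      # {'q0', 'q1'}
print(reach([g1, g2], {"q0"}))  # {'q1'}
```

Pasting the letters one after another is exactly how the Δ-graph of a word is built.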
And this is the general theorem: a graph is co-T accepting, meaning it is the run graph of a word that should not be accepted by the original automaton, if and only if the graph admits an odd co-T ranking.

Let me give a quick example of how Büchi complementation is done. We are looking for co-Büchi rankings, whose definition has two items. The first says that if a vertex has an odd rank, then the vertex must not be a final state. The second says that if there is an edge from level i to level i+1, the rank is non-increasing along it. Here is an example: this is the Δ-graph, and every vertex is associated with an integer value. This is one level ranking, this is another, and the whole thing is called a co-Büchi ranking. You can see it follows the principle we showed: q1 and q2 are the final states, and this vertex has an odd rank, so it must not be a final state. So this satisfies the conditions and is a co-Büchi ranking. This is just to show the flavor of rankings on Δ-graphs.

Next, we show how to do the construction to get what we want. First we design something called a full Streett automaton. The definition looks messy here, but it is very clear in the example. The important thing is something called a Q-ranking; Q is just a name we picked. A Q-ranking is a function from the state set to this range, and it can be viewed as a pair of rankings, r and h: r is called the r-ranking and h the h-ranking. Let us forget about the r-ranking; it is not very important, because its contribution to the complexity is minimal compared with the h-ranking. So let us talk about the h-ranking. An h-ranking basically maps each state q to a permutation of indices: if the index size is k, the value of the function is a permutation of 1 to k. Here is an example.
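The two co-Büchi conditions can be checked mechanically. This is a hypothetical mini-example; the states, the final set, and the ranks are invented for illustration, not copied from the slide:

```python
# Sketch of the two co-Buchi ranking conditions from the talk,
# checked on a tiny two-level Delta-graph.
F = {"q1", "q2"}  # hypothetical final states

def is_co_buchi_ranking(levels, edges):
    """levels[i] maps each state at level i to its rank;
    edges[i] is the set of transitions from level i to level i+1."""
    for rank in levels:
        # (1) an odd-ranked vertex must not be a final state
        if any(r % 2 == 1 and q in F for q, r in rank.items()):
            return False
    for i, es in enumerate(edges):
        # (2) the rank never increases along an edge
        if any(levels[i][s] < levels[i + 1][t] for s, t in es):
            return False
    return True

levels = [{"q0": 2, "q1": 2}, {"q0": 2, "q1": 0}]
edges = [{("q0", "q0"), ("q0", "q1"), ("q1", "q1")}]
print(is_co_buchi_ranking(levels, edges))  # True
```

An odd rank on q1, for example, would immediately violate condition (1).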
Consider the case where n = 3 and k = 2. We have three states; these are the main states. The first column is the r-ranking; let us forget about it. The second column is the h-ranking; each value is a pair of natural numbers. All other states are auxiliary states that facilitate the construction. So basically that is the full Streett automaton; it relates to the Δ-graphs and so on.

OK, the next important thing is that we are going to build a word, and remember, a word is a graph. So we are going to draw a graph with certain properties, and this graph we call a Q-word. For each Q-ranking f, we define a word called G_f. This word is basically a Δ-graph in which every level is ranked by f and which satisfies the following four properties. They also look messy, but let us forget about properties 1, 3, and 4 and zoom in on property 2; this is the essential one. What does this property say? For each state q, there are exactly k paths, ρ_1 to ρ_k, such that for each i, ρ_i satisfies four conditions. Here I changed from subscript notation to function notation just for typesetting purposes. Let us forget about h and q, because they are parameters. What does h(q) give you? A permutation of indices, and we just pick the value in the j-th position; that is everything in the parentheses. Let us oversimplify it as B_j.

So what does this mean? First, ρ_i does not visit B_j if j ≤ i, and second, ρ_i visits B_j if j > i. These first two conditions are about visits to the B's. The last two are about visits to the G's: ρ_i does not visit G_j if j < i, and ρ_i visits G_i. This is still not very straightforward, but I hope this picture can help: a cross means no visit, a check means visit, and a star means don't care. So what do all these conditions say? Forget about h(q) for a moment.
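The cross/check/star picture for the case where h(q) is the trivial (identity) permutation can be generated mechanically from the four conditions. This little sketch just prints that table, using X for no visit, V for visit, and * for don't care:

```python
# Sketch: the visit pattern of path rho_i over B_1..B_k and G_1..G_k,
# assuming the trivial permutation (my own rendering of the slide).
K = 3

def pattern(i, k):
    """Row for path rho_i: avoid B_j for j <= i, visit B_j for j > i;
    avoid G_j for j < i, visit G_i, don't care for j > i."""
    b = ["X" if j <= i else "V" for j in range(1, k + 1)]
    g = ["X" if j < i else ("V" if j == i else "*") for j in range(1, k + 1)]
    return b, g

for i in range(1, K + 1):
    b, g = pattern(i, K)
    print(f"rho_{i}:  B: {' '.join(b)}   G: {' '.join(g)}")
```

Reading row i: G_i is visited while B_i is avoided, everything with a smaller index is untouched, and every B with a larger index is visited.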
Just say it is the trivial permutation. Then what happens is this: for path ρ_i, there is a position where G_i is visited but B_i is not visited; before that position, neither the G's nor the B's are visited; after it, we do not care about the G's, but we require that all B_j with j > i are visited. That is the idea. In English it all sounds cryptic, but we are almost done.

Take this word G_f and repeat it infinitely many times. We say the result has a Rabin nature. What does that mean? The word repeats infinitely many times, and between each two repetition points we have k paths. No matter how we choose these paths, we are going to have a minimal index i here such that G_i is visited but B_i is not. Here I just drew one repetition, but remember there are infinitely many, and we are going to have a minimal i with this property: G_i is visited infinitely many times, while B_i is visited only finitely many times. That means this word should be accepted by a Rabin automaton, and so we say it has a Rabin nature.

Now suppose we have two different Q-rankings f and g, we paste G_f and G_g together, and we let this repeat infinitely many times; the plus here is just the standard notation for repeating a nonzero finite number of times. Then what happens? In one fragment we have an index i where G_i is visited but B_i is not. But h is different from h′, and these are permutations; say i is the first position where h differs from h′, so, because they are permutations, everything before that position is equal. Then we are sure that this unsatisfied obligation is going to be satisfied by some path in the other fragment, and conversely, the other fragment's unsatisfied obligation is going to be satisfied somewhere here. For instance, if this value is 1 then that one must be 2, and if this is 2 then that must be 1, because they are permutations. So the two fragments complement each other.
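The key combinatorial fact behind this pasting argument, that the first disagreement between two permutations forces each disputed value to reappear strictly later in the other permutation, can be verified exhaustively for small k:

```python
# Sketch: if permutations p and q first differ at position i, then
# p[i] must occur strictly after position i in q, and vice versa,
# so the obligation left open by one fragment is discharged by the other.
from itertools import permutations

def first_diff(p, q):
    return next(i for i in range(len(p)) if p[i] != q[i])

K = 4
ok = True
for p in permutations(range(1, K + 1)):
    for q in permutations(range(1, K + 1)):
        if p == q:
            continue
        i = first_diff(p, q)
        # the prefixes agree, so each disputed value shows up later
        ok = ok and p[i] in q[i + 1:] and q[i] in p[i + 1:]
print(ok)  # True
```

This is why the two fragments always complement each other, no matter which two distinct permutations are involved.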
So basically, for every i, if G_i is visited infinitely many times, then B_i is also visited infinitely many times. This word should be accepted by the Streett automaton, and so we say it has a Streett nature.

Now we have two lemmas and one theorem. Basically, for any Q-ranking f, this word does not belong to L(A), while for any two different Q-rankings f and g, this set of words is a subset of L(A). It is now straightforward to conclude that the state size of any complement automaton of A is no less than the number of Q-rankings. So the number of Q-rankings is going to serve as the lower bound for the complementation construction.

OK. Now the lower bound is reduced to circuit design. This picture has nothing to do with our paper; I just copied it to show the idea. Basically, you build the circuit, you show the circuit has some properties, and you are done. There are some difficulties, because we have k desired paths between each pair of repetition points, and we want to make sure they do not interfere with each other. The solution is something we call parallel composition: each word is divided into k segments, and path ρ_i is only active in the i-th segment, where its property is fulfilled. In all other segments ρ_i is dormant, just making sure the properties already satisfied are not violated. If I can borrow the idea of a timing graph: we have segments, ρ_1 has its property fulfilled here and then just does maintenance, and similarly for ρ_2; the whole graph is called a Q-word.

OK, so complexity. When k is small, say k = O(n), the number of r-rankings (remember, we forget about them because they do not matter much) is about n!, and the number of h-rankings is (k!)^n. If you do the calculation, you will get this number.
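The counting at the end can be sketched numerically. Under the talk's simplification (ignoring the r-rankings, and letting k saturate at n), the number of h-rankings is (k!)^n, so its logarithm grows like k·n·lg k, and stops growing once k reaches n:

```python
# Sketch: log2 of the number of h-rankings, (k!)^n, with k capped
# at n to model the saturation mentioned in the talk.
from math import factorial, log2

def log2_num_h_rankings(n, k):
    k = min(k, n)  # k "saturates" at n when k is superlinear in n
    return n * log2(factorial(k))

for n, k in [(8, 4), (8, 8), (8, 64)]:
    print(n, k, round(log2_num_h_rankings(n, k), 1))
```

The last two rows coincide, illustrating that pushing k beyond n buys nothing, which is why the bound settles at 2^Θ(n² lg n) in that regime.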
And the good news is that even when k is large, say k = ω(n), meaning k is superlinear in n, there is no change in the number of r-rankings, and the number of h-rankings is just this: k essentially saturates at n. So the final result is 2^Θ(n² lg n).

Conclusion. We have shown a lower bound for Streett complementation. Combining this result with what we have and with what other researchers have obtained, we now have a fairly complete picture of complementation: you will notice that we have a Θ in the exponent of every complexity in this notation. In a recent paper we also showed that determinization has all the same complexities. Of course, I just put NFAs here to show this phenomenon: for complementation and determinization, we always have this kind of uniform complexity. So the question is whether this uniformity is a coincidence or not. I think it may be worth further investigation. And thank you for your attention.