It's my pleasure to introduce Ranko Lazić, who got his PhD in 1999 from Oxford, and almost ever since he has been a professor at the University of Warwick. He has made many contributions in the area, including that famous paper on LTL, and the algorithm for parity games, and a lot of work also on vector addition systems with states, on Petri nets, and on branching variants. He got the best paper award at the last STOC, for the breakthrough on the complexity of reachability. For a long time it was not known whether reachability for Petri nets is elementary; the upper bound is galactic, I don't even know where it is. Last year, they proved that it is not elementary. With that, today he will talk about something I don't know very well, but we will see whether Finkel was correct or not. Thank you very much, Paul, for the introduction and for the invitation to speak here at FSTTCS. It's great to be at IIT Bombay and in India, and it's fitting that Paul, who introduced me, is from Cachan: a lot of the work I've done over the years is linked to my collaborations with several people at Cachan, Stéphane Demri and others. I have a somewhat unusual title, which is that Alain was right: counter-examples to several conjectures on variants of vector addition systems. It will become clearer what this title means. But what else is there on this slide? I thought it would be nice to show a map of the European Union with the places my coauthors and I come from. So this is based on joint work with Jérôme Leroux, who is here, or who will be here, and with Filip Mazowiecki. Jérôme is in Bordeaux, Filip Mazowiecki is now at the MPI in Saarbrücken, and Sławomir Lasota and Wojciech Czerwiński are in Warsaw.
Ok, so we're talking about vector addition systems, but what are vector addition systems? I'm not really going to define them, but I'd like to clarify that there are several formalisms which are all equivalent, or essentially the same. We have vector addition systems, and vector addition systems with states, abbreviated VAS and, with the double S, VASS. Then we have Petri nets, and then, if you like, nondeterministic Minsky machines with no zero tests. So all these formalisms are basically the same. There are features which translate very easily from one to another, and there are features one has to be a bit careful with when translating, for example when going from vector addition systems with states to vector addition systems without states the dimension typically increases, by adding 3, and so on. But let's not worry too much about these issues, because they are not really relevant for this talk, even where we fix the dimension; in those cases we will be looking at VASS, with the double S. So don't worry. Now, why do we study these systems, Petri nets or VAS, at all? Well, on the one hand, they have many applications in the modelling and analysis of hardware, software, chemical processes, biological processes, business processes, and so on. So that's one good reason. The other is that Petri nets in fact arise in different sectors of theoretical computer science, when studying completely different objects. They are very closely related to logic; they arise in databases; they arise in concurrency theory; and so on. So it's really a fundamental model of concurrency, and moreover one that has been around for many years.
Many tools have been developed and applied, and so on. So, a central topic in the theory of VAS is the reachability problem. It's a decision problem: we are given two configurations, an initial configuration and a final configuration; in Petri net terminology, these are the initial and final markings. Is the final configuration reachable from the initial configuration? This problem has been famous for its difficulty over the years. What does that mean? Well, even before the problem was shown decidable, Lipton showed in 1976 that it is hard for exponential space. Then there were partial proofs; I'm not really sure what a partial proof is, but there were partial proofs before there was a complete proof. The first time the problem was shown decidable was in 1981, by Mayr. And then, a year later — it was a STOC paper, in 1981 I believe — a year later at STOC, Kosaraju gave a simplified proof. But when I say a simplified proof, bear in mind that this proof is still difficult to understand; for example, a whole book was written by Reutenauer just to present Kosaraju's proof. Then, ten years later, Lambert developed Kosaraju's proof a bit further and used it to show some further results beyond what Kosaraju had done. And then, in a series of papers in 2009, 2011 and 2012, Jérôme Leroux made a lot more progress. In fact, compared with the Kosaraju-style algorithm for the reachability problem, which is a very complicated algorithm, Jérôme in fact gave a very simple algorithm, but it was based on complex work which established that, if reachability does not hold, there is a set definable in Presburger arithmetic which is an invariant separating the initial from the final configuration.
So the simple algorithm consists of two semi-algorithms running in parallel: one searches for a run, and the other searches for a Presburger formula giving the invariant. So that was good progress. And then, in 2015 and 2019, Leroux and Schmitz managed to obtain an Ackermannian upper bound — Ackermann being, in a sense, the smallest non-primitive-recursive function. They managed to obtain, in terms of the Ackermann function, an upper bound on the running time of an algorithm for VAS reachability. And in fact the bound they obtained is primitive recursive for fixed dimension; more precisely, if the dimension is d, it is in F of d plus a small constant. And then, in 2019, after many years, in relation to Lipton's work from 1976, what we managed to obtain is a lower bound which is a tower of exponentials. So that shows that the problem is not elementary. Now, depending on your point of view, you might think: ah, it's all resolved now, because we have a very high lower bound and the upper bound is galactic. But in fact, for people who are into these things, an enormous gap remains between a tower of exponentials and the Ackermann function. So, in fact, we still do not understand the reachability problem for Petri nets today. What I'm going to present today is not exactly the non-elementary lower bound, but rather, I think, some of the observations and some of the further work that went into that effort. And I hope to get you interested in Petri nets, and I hope also to shed some light on how we obtained the non-elementary lower bound, which I think is interesting, and that will bring me back to the title of the talk.
So, I said I'm not going to define VAS, but here is one possible presentation of a VAS, as a program. There are variables x and y — just two variables here — and they range over the natural numbers, ok? So what does this program do? It does x equals x plus 1 — forgive me, I'll use this kind of notation, x += 1. We start at 0: when we look at programs like this, the idea is that all the variables start at 0, and we are interested in whether the program can execute and terminate with all the variables again equal to 0. So the initial and final configurations consist of all 0s in the variables, ok? And that is without loss of generality. So let me say what this program does. First, it just does x += 1. Then it repeats, a nondeterministic number of times — maybe 0 times, maybe once, twice, maybe many times — x -= 1 and y += 2. Then it repeats, some number of times, y -= 1 and x += 1: decrement y by 1 and increment x by 1. And then it repeats all of that. So what does this example do? Well, in two dimensions — because there are two variables, x and y — we start at (0, 0), and then, after x += 1, we are here at (1, 0). Then what can happen is that we get to (0, 2), and then to (2, 0), and so on. So after pairs of these loops, you see, it's basically doubling x: this one doubles x into y, and this one transfers the value back from y to x. So after these zigzags, after doubling the initial value, which was 1, n times, we can get to 2 to the power n. And then the remaining pairs of loops do the opposite: they can halve the value some number of times. That's the descending part of the run. We can therefore return towards where we started.
And then we take 1 off x and we are back at the origin. That's an example of a run from (0, 0) to (0, 0) of this program. We have a small program, of size linear in n, but this run is exponentially long: these loops are repeated exponentially many times. That's quite interesting: it seems that a small VAS can have a very long, exponentially long run, already in dimension 2. But is it true that, for this program, the shortest run from (0, 0) to (0, 0) is exponentially long? Is that true? What do you think? Paul is correct: it's not true, because these loops, as I said, don't have to run for as long as they can. They can all be skipped; they can all execute 0 times. And then we just have x being incremented here and decremented there, and that is a trivial run, very short. In fact, this is an example of a flat VAS. What is a flat VAS? It's a VAS with no nested loops, and I hope it's clear that this program has no nested loops. Flat VAS are an important class for us: they arise in practice, and they are easier to analyse. So this was a flat VAS. In fact, it turns out that if the dimension is just 1 or 2 — if the dimension is at most 2 — and if the constants, like just the 1 and 2 here, the constants that are added to and subtracted from the counters, are given in unary, then the reachability problem can be solved in NL: it's in NL for dimensions 1 and 2, and in fact the shortest runs witnessing reachability are polynomial. It requires some work to show that: for dimension 2 it was done in a paper in 2016, and for dimension 1 in a paper in 1975.
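The two-counter doubling program just described can be sketched as a short simulation. This is a minimal illustrative reconstruction, not the speaker's slide; the function name and the greedy strategy (running every loop for as long as possible) are my own assumptions:

```python
# Sketch of the 2-counter VAS program from the talk:
#   x += 1; then n rounds of the doubling zigzag:
#     loop: { x -= 1; y += 2 }   (transfer x into y, doubling it)
#     loop: { y -= 1; x += 1 }   (transfer y back into x)
# All counters must stay non-negative throughout.

def doubling_run(n):
    """Greedy run that executes every loop as many times as possible."""
    x, y, steps = 0, 0, 0
    x += 1; steps += 1
    for _ in range(n):
        while x > 0:              # x -= 1, y += 2: doubles x into y
            x -= 1; y += 2; steps += 1
        while y > 0:              # y -= 1, x += 1: transfer back
            y -= 1; x += 1; steps += 1
    return x, steps

x, steps = doubling_run(10)
print(x)      # 1024 = 2**10
print(steps)  # 3070: exponential in n, though the program is of size linear in n
```

The nondeterminism is what makes the shortest run short: if every loop instead executes 0 times, only the outer increment and decrement of x happen, giving a constant-length run, just as the talk points out.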
When the constants, like the 1 and 2 here, can be larger and are given succinctly, then reachability is NP-hard already for dimension 1 — by a straightforward reduction from the subset sum problem, for example — and the shortest runs witnessing reachability can be exponentially long, which is quite easy to see. So, what is the natural question arising from this table? Well, what about dimension 3, ok? Flat VAS with unary constants in dimension 3? That was an open problem for several years, and, you know, one conjectured NL complexity and polynomially long shortest reachability witnesses for dimension 3. That is not the case, and this is the first counter-example that I'm going to present today. Then I'll present two more; I have three counter-examples to go through. This is the first. It's a rather simple equation, of the kind we know from primary school perhaps, where when we multiply these fractions, ok, what we get is just N plus 1. I mean, it's N plus 1 over 1, but you know, everything cancels out except for N plus 1 and 1. And here we have N plus 1 factorial divided by N factorial, ok? So how is that equation going to help us? Well, we can actually write it as a program, which is very similar to the program that I showed you before, but now we have 3 variables: x, y and z, ok? And hopefully a picture will help. So now I'm looking at a run of this program in three-dimensional space, where I have, you know, x and y and z. At the beginning I set x and z to 1, and then, some number of times, I increment x and z. In other words, at the beginning what I do is set x and z to an arbitrary positive integer; let's call it capital M, ok? So at the beginning, this run goes from the origin to M in x and z, but y stays at 0. So this is what it does in that first phase.
And then how do I implement that equation from then on? Well, what I then do — this looks complicated, but what I then do, starting from this point where x and z are M and y is 0, is zigzag between, if you like, the floor, which is the x-z plane, and this wall, which is the y-z plane. I zigzag between them, keeping z constant at M; z stays at M. And how do I zigzag? Well, I zigzag by repeatedly, essentially, multiplying x by these fractions. So my first zigzag: it keeps decrementing x by N and incrementing y by N plus 1; it does that many times, and then it transfers the value from y back to x. Then the next zigzag does the multiplication by N over N minus 1, and so on. And where I end up is back on the floor: y is 0, and x is now M times N plus 1, because I have multiplied by all those fractions, and in the end that amounts to multiplying by N plus 1. But z is still M; z is constant, still equal to M. And then, in the last phase, what I do is go back from there to the origin, but not at slope 1 to 1, rather at slope N plus 1 to 1. So what have I shown, and why is this interesting? It's interesting because the only way in which I can start with all variables 0 and end up with all variables 0 is by having a perfect picture like this. Why is that? These loops, as we know, don't have to execute as many times as they possibly can, and these multiplications by the fractions don't have to be exact, because the loops are nondeterministic. But all these fractions are greater than 1. And so if there is any remainder when dividing and multiplying by these numbers — if any loop doesn't execute completely — we'll end up with x strictly smaller than M times N plus 1, and then we're not going to be able to go back to the origin using this slope N plus 1 to 1 at the end.
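The zigzag construction just described can be sketched as a simulation, under my own reading of it (the function name, the greedy strategy of running each loop to the maximum, and the concrete parameters are assumptions for illustration, not the speaker's exact program):

```python
from math import lcm

def zigzag_run(M, n):
    """Greedy run of the 3-counter program: initialise x and z to M together,
    multiply x by (i+1)/i for i = n down to 1 via zigzags (z stays constant),
    then descend to the origin at slope (n+1, 1). Counters stay >= 0."""
    x, y, z, steps = 0, 0, 0, 0
    for _ in range(M):                # phase 1: x and z go up to M together
        x += 1; z += 1; steps += 1
    for i in range(n, 0, -1):         # one zigzag per fraction (i+1)/i
        while x >= i:                 # x -= i, y += i+1
            x -= i; y += i + 1; steps += 1
        while y > 0:                  # transfer y back to x
            y -= 1; x += 1; steps += 1
    while x >= n + 1 and z > 0:       # phase 3: x -= n+1, z -= 1
        x -= n + 1; z -= 1; steps += 1
    return (x, y, z), steps

n = 4
M = lcm(*range(1, n + 1))             # 12: every zigzag division is exact
print(zigzag_run(M, n))               # reaches (0, 0, 0)
print(zigzag_run(10, n))              # M = 10: some division inexact, stuck off 0
```

With M divisible enough, the greedy run ends exactly at the origin; with an M that makes some multiplication inexact, x falls short of M times (n+1) and the final descent cannot zero out both x and z.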
So what this construction gives us is a flat VAS — in fact it's a linear path scheme, for those who know, so a special kind of flat VAS — in dimension 3, where the shortest runs witnessing reachability are exponentially long, because this value capital M has to be divisible, essentially, by the least common multiple of 1, 2, 3, 4, 5, 6, up to N: all the multiplications by the fractions have to be exact, and we multiply that capital M by these fractions one by one, ok? So it's a nice example which shows that our conjecture about polynomially long witnesses of reachability for flat VAS in dimensions 3 and above is wrong; it's false, so we disproved that. And, with some further work, the reachability problem for flat VAS is actually NP-hard, but the best dimension there that we can manage at the moment is 7, I think, rather than 3. So we still don't know the complexity status of this problem in dimensions 3, 4, 5, 6, I think, ok?
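The divisibility claim above is what forces the exponential blow-up: lcm(1, ..., n) grows exponentially in n (roughly like e to the n, by the prime number theorem), so M, and with it the run length, must be exponential in the size of the program. A quick check (my own illustration; requires Python 3.9+ for variadic math.lcm):

```python
from math import lcm

# lcm(1..n) grows exponentially, so the smallest admissible M does too.
for n in [5, 10, 20, 40]:
    print(n, lcm(*range(1, n + 1)))
# 5 -> 60, 10 -> 2520, 20 -> 232792560, 40 -> 5342931457063200
```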
So that's my first example. Let me move on to VAS: I'm going to drop the flatness restriction and consider general VAS. Specifically, look at this table; I'll tell you something that's known about VAS, coverability and reachability runs, and fixed dimension versus arbitrary dimension. This is my second topic. So here what we have is shortest coverability runs. What is a coverability run? I told you that what I was interested in here is runs that start with all variables being 0 and finish with all variables being 0; this corresponds to the reachability problem. But the coverability problem, which is easier for VAS, corresponds to just having runs that start with all variables being 0, and we don't care what values we have at the end; the only important thing is that the end of the program is reached. Those are the runs that I call coverability runs, where it's not important to reach the origin at the end, but just to finish. So how long can the shortest coverability runs be? Well, back in '76, the same work by Lipton that I already cited shows that if the dimension of the VAS is not fixed, then actually doubly exponentially long runs may be needed, even for coverability, and Rackoff showed the corresponding upper bound. In fact those two papers together established that, as far as shortest coverability runs are concerned, they are exactly doubly exponential for general VAS. For shortest reachability runs, there is the work of Leroux and Schmitz and our work from this year: the headline results of those works are about the complexity of the reachability problem, but actually, if you look a bit further into them, they show bounds on the lengths of shortest reachability runs, and those are again Ackermannian and non-elementary. But what about VAS in fixed dimension? If the dimension is fixed, then what do we know about coverability runs? Well, in fact Rosier and Yen, back in 1986, showed that in fact singly exponential runs
suffice, if we're interested in coverability — if we don't care whether we get all zeros, so to speak, at the end. But for reachability runs it has not been known, I mean, how short the shortest runs for VAS in fixed dimension can be, and the natural conjecture has been, well, at least to explore whether they are perhaps also exponential. I mean, there was no evidence to suggest that in fixed-dimensional VAS shortest reachability runs need to be more than exponentially long; in other words, perhaps the dimension is the only parameter that gives rise to the high bounds that we have when the dimension is not fixed. But again, this turns out not to be the case. So I'm going to show you an example where already in dimension 4 — VAS in dimension 4 — I'll give you a family of VAS in dimension 4 where the shortest reachability runs are doubly exponentially long, ok? Already in dimension 4. Now, to present that, let's look at the numbers first. My first counter-example was in terms of n plus 1 factorial divided by n factorial; now it gets a bit more hairy, but you can look at this in detail, and I'll tell you what's important about these numbers. So what we have here is n plus 1 equations: a bunch of fractions raised to some powers and multiplied equals some fraction; and then again some powers of fractions multiplied equals some fraction; and so on; and the last one is quite small and quite simple, and the previous one, and so on. So what do we have here: 2 to the 2n plus 1 divided by 2 to the 2n; here we have the opposite fraction, so 1 over that; and that same fraction repeats here; and the powers are 2 to the n, 2 to the n minus 1, and down to 2 to the 1, 2 to the 0. In fact all of these powers sum up to 2 to the n minus 1, so what we get is just the original fraction, and the rest are the same. So each of these equations is easy to see. And then, what we actually do to get the equation we really want to work with is multiply all of these equations together. So you can
think of the columns here that have the same exponents: the exponent here is 2 to the n minus 1, 2 to the n minus 1, and this column is 2 to the n minus 2, and so on; here it's 2 to the 1, and here it's 2 to the 0. Those are the exponents. So when we multiply all of those equations together, what we get is basically a product consisting of n plus 1 terms: i ranges from 0 up to n, each term is some fraction raised to the power 2 to the n minus i, and all of them are multiplied together. That product ends up being equal to some further fraction, SN over RN. And what is important is that all of these numbers — SNi, RNi, SN and RN — are exponentially large in n; I mean, they are all of the form 2 to the 2n plus 2 to the something, multiplied some small number of times, and so on. So they are all exponential in n, and they are raised to these exponential powers, but somehow the product of all those fractions on the left is just a single fraction, an exponential number over an exponential number. Moreover, what is significant is that each of these fractions is greater than 1, and also they are ordered: as i ranges from 0 up to n, these fractions get progressively smaller, but they are all greater than 1, and of course SN over RN is also greater than 1. So it's a fascinating fact that numbers like this exist for all n: these are indeed numbers that are just a little bit larger than 1, raised to these huge powers, and the end result is a simple fraction, an exponential divided by an exponential, ok? So how are those numbers going to help us construct a nasty example involving four-dimensional VAS? Well, the VAS is quite simple, in fact, and as before we are just going to program this equation; that's all we are doing, programming this equation. We are going to have four variables, x, y, z and w, which is to say the VAS is four-dimensional. So what will this program do? At the start it will initialize x to some positive multiple of RN. What is RN? It's this
denominator here. So x initially will be some positive multiple of RN, and z will count that positive multiple; x will be equal to z times RN after this initial stage. Then this for loop — it's not actually in the VAS, strictly speaking, it's syntactic sugar, a macro: what it means is that its body is to be inlined n plus 1 times, for the corresponding values of i, but forget about that if you want. So this for loop goes from 0 to n; it literally runs through this product. And then what we are going to be doing is multiplying x repeatedly by all these fractions. How are we going to do that? Well, we use that pair of loops with x and y: here we repeatedly subtract the denominator from x and add the numerator to y, and here we transfer the value of y back to x. So those two loops should do one multiplication by a fraction, SNi divided by RNi. We use w to count the loop — that's the inner loop that runs from 1 up to 2 to the n minus i, because we need to multiply by the same fraction 2 to the n minus i times. We add to w — w is initially 0 — we add 2 to the n minus i to it, and then, whenever we multiply by a single fraction, we decrement w. And at the end, what we do is repeatedly subtract SN, the numerator, from x, and 1 from z. So remember that x was initialised to z times RN; when we then multiply that x by all of these guys, what we should get, according to this equation, is exactly SN times z, and in this last loop we kind of test that that's the case. The point is that, since at the end we need to have 0s in all the variables, that test indeed checks whether x is equal to z times SN. Now, can anybody see some really suspicious things here? At first sight maybe this looks promising, but it also looks dodgy. So what is a concern here? Yeah — so that is an issue, but it's not such a big problem, because these constants RN, RNi, SNi and so on, you know, they're a little bit
complicated, but they can be computed fairly easily and hardcoded into the program. Another concern is that these loops just repeat nondeterministically many times; there's no guarantee how many times they will repeat. But, as in the previous example that we've seen, if any of them repeats fewer times than it possibly can — and indeed, if any of the multiplications by the fractions is not exact, if there are any remainders — that will necessarily cause the final value of x, just before that final loop, to be strictly smaller than SN times z, and then this cannot decrease x and z down to zero. But there is one more thing that's really worrying here, which is that I told you that the variable w is used to count in a loop: the outer loop goes from 0 to n, but the inner loop goes from 1 up to 2 to the n minus i, and there was no test that w here ends up being decremented to zero. I mean, we never test that, so how can this possibly work? The beautiful thing here is that this relies on the ordering of the fractions: we start by multiplying with the largest fraction, and then a slightly smaller fraction, and then a slightly smaller fraction, and so on. So indeed, what can happen in this program is that we don't multiply by the first fraction as many times as we can; we kind of save some of w to be used to multiply by the second fraction more times than is prescribed, and so on. But the thing is that the second and third and so on fractions are smaller, so if that happens, it can only result in a smaller value of x at the end than is required. So, in fact, for this program to run from all variables 0 and terminate with all variables 0, the only way it can do that is to perfectly compute this product of fractions, and in turn that is only possible if the initial value of x is divisible by a doubly exponentially large number. Why doubly exponential? Because if you look at just what happens when i is equal to 0, we are
multiplying by this fraction an exponential number of times, ok? And so the initial value of x has to be divisible by RN0 to the power 2 to the n, which is a doubly exponentially large quantity. So this VAS — it's a 4-dimensional VAS — indeed has shortest reachability witnesses that are doubly exponential, ok? Now, that completes my second example, and my third one is going to be for a slightly different kind of system. Again, I'm afraid, in this area there are multiple names for the same thing: pushdown VAS, or grammar VAS; there are other names too. How can we think of those systems? They can be written down as programs like this one, except we also have calls and returns to procedures; that's one way to think about those systems. So we have a pushdown stack. I'll give you a small example of such a program, to warm up, and this will also explain why the difference between unary and binary encoded constants is not relevant when we work with grammar VAS. Imagine that I want to add this binary number — these are binary digits, b(k-1), b(k-2), up to b1, b0 — to my counter x; this may be in a succinctly given grammar VAS. Well, what I can do instead is call this special procedure p0, and what does p0 do? It adds this binary digit b0 to x and then it calls p1 twice; and then what p1 does is add b1 to x and call p2 twice; and so on, up to adding the most significant binary digit, b(k-1). So it's a kind of an easy example, but you see that one can do quite a lot when one has a pushdown stack in conjunction with being a VAS. Now, I'll show you another example, but before that let me try to summarize what is known for grammar VAS. For grammar VAS I'll focus on dimension 1, so when there is only one variable. These are kind of unbelievably simple systems in some sense: there's only one natural-number variable and a pushdown stack; I mean, what could be easier than that? So, in dimension 1, the coverability problem is known to be in ExpSpace — Leroux, Sutre and Totzke showed that at ICALP 2015 — and it's known to be PSPACE-hard, and we can't close this gap between PSPACE and ExpSpace. The reference here is to the MSc thesis of a smart guy at Warsaw, Juliusz Straszyński, who worked on this with several of us and obtained this PSPACE-hardness as his master's result in Warsaw. So this is known for the coverability problem. For the reachability problem — and this is the problem whose tackling has been advocated by Alain Finkel for many years — this is still open: we don't know whether the reachability problem is even decidable. All we have is this PSPACE lower bound in terms of the complexity; we know nothing else about that for dimension 1. For arbitrary dimension, it turns out that reachability is reducible to coverability, interestingly, but at the cost of further increasing the dimension by adding one more variable. And at FSTTCS, back in 2011 — maybe somebody remembers where FSTTCS 2011 was — Atig and Ganty showed that when the grammars, or the behavior of the stack, are restricted to finite index — a grammar of index k is one where every word can be derived using at most k non-terminals along the way, ok? — so, if the grammars are restricted to being finite index, then there are not difficult but very nice and insightful translations between these systems and VAS with hierarchical zero tests, and decidability of reachability for the latter has been shown by Reinhardt in his habilitation thesis, and by Bonnet in his PhD thesis, but has never been published. And I am aware of maybe three, four, five people who have ever read this proof of decidability; the rest of us, I mean, we know and we like those colleagues and we believe them, but it's really difficult to read that stuff. But without the finite index restriction, we know nothing about whether the problem is decidable; it's conjectured
to be decidable. So that's kind of an embarrassing state of affairs regarding our knowledge of grammar VAS. But back to showing you an example. This is a simple grammar VAS; it's just one procedure, q. What does it do? This is a nondeterministic choice: we either increment x, call q twice, decrement x and return; or we subtract 2 to the power n, add 2 to the power n, and return. Remember that the values of the variables are restricted to be non-negative, so this decrement by 2 to the n can only succeed if the current value of x is 2 to the n or larger, ok? So, in terms of pictures, if we think of runs of this grammar VAS, we get trees like this, where, you know, we start at the top and then we traverse the tree and we add all the numbers as we go along, but at every point the current sum has to be non-negative; I mean, that is basically what runs of one-dimensional grammar VAS do. So for this particular grammar VAS, its runs can be depicted as trees of this kind, where we do plus 1, plus 1, plus 1, plus 1, and then, when we reach a leaf at the bottom, we do minus 2 to the n, plus 2 to the n; and then as we go up we are subtracting 1, as we were adding 1 on the way down. Basically, the only way that we can start with x equals 0 and return with x equals 0 is to have a binary tree whose depth is at least 2 to the n, so then the size of the tree has to be doubly exponential. But actually it turns out that in polynomial space we can handle grammar VAS of this kind: it turns out, and it's not very difficult, that we can compute the maximum increase of x, and how large x has to be to achieve that maximum increase, and so on, all in polynomial space. So this kind of example, although it looks bad at first, is not so difficult to deal with, and in our ongoing work to close this gap between ExpSpace and PSPACE, this is actually not such a big problem. But I'll show an example that we don't know how to deal with now, and it uses the same numbers
s_{n,i} and r_{n,i}, and s_n and r_n. I'm not going to show you the grammar VAS written as a program, but I'm going to show you what shape of trees we get as it runs. At the start, and I'm afraid it's probably not very easy to see from the back, but I'll tell you what this does: this is the top of the tree, and as we are traversing the tree in, say, infix order, at the start we are repeatedly adding r_n to x; that was our denominator on the right-hand side of that big equation. We are repeatedly adding r_n, but to come back up at the end (there will be some big computation down here), to come back up and reach 0 at the end, we need to subtract s_n the same number of times, and s_n is larger than r_n. We call this a borrow loop: we are borrowing r_n some number of times, with a promise to pay back s_n the same number of times, and s_n is larger than r_n. Then what happens below in this example is, firstly, we generate 2^n copies (and this is easy to do with a grammar VAS of size basically n) of the following kind of behaviour, which is a long loop that keeps subtracting r_{n,0} on the left and then adding s_{n,0} on the right the same number of times. So this is like a multiplication by that first fraction, s_{n,0} divided by r_{n,0}, and we do that 2^n times. Then after that we do a similar thing here: we branch to width 2^{n-1} and do it with the second fraction; we are implementing that equation. And at the end we do it just once, and the fraction is s_{n,n} divided by r_{n,n}. So basically, what's happening here is that these loops can be called invest loops. Why invest? Because here we are paying r_{n,i} on the left and getting s_{n,i} on the right, and s_{n,i} is larger, so we are investing something smaller to get something larger, many times. The only way that, so to speak, all of these investments can
generate enough to pay back this borrowing at the start, the only way to do that, is to repeat these loops doubly exponentially many times; that's needed for the exactness. By the way, the ratio of the non-terminal that appears here is s_n over r_n: that's the maximum multiplicative constant by which the value of the variable going in on the left can be enlarged when we complete that computation and come back up on the right; it's s_n divided by r_n. So what we have here is an example, and I haven't given it to you as a program, but it's not difficult to write, of a grammar VAS in dimension 1 where the shortest reachability, or even coverability, runs have this kind of doubly exponential depth. One of my key contributions to this work was to come up with the name for this kind of run, which has a long neck here and lots of long tentacles at the bottom. Something else that we came up against in our work on grammar VAS is, in fact, the abc conjecture. That came up when studying what happens in situations where, so to speak, the investments at the bottom are not quite enough to repay the borrowing at the top, but there is some small gap, and studying how small that gap could be. For those of you who are not familiar with the abc conjecture, I would encourage you to look at, for example, the Wikipedia page about it, which towards the end briefly discusses the current state of affairs: it seems that it's not proved yet; there was a claim, apparently a credible claim to begin with, that it was proved, but it turns out that it has not really been accepted. So it's one of those difficult mathematical problems which, from our point of view, has ended up being related, in an unexpected way, to grammar VAS in dimension 1. And that is further testimony that Alain Finkel was right to advocate the study of
one-dimensional grammar VAS, because not only is there this apparent link with the abc conjecture, but finding these giraffe-octopus examples has also led us, by realizing that we can actually use some of the ideas encountered there on VAS, to the non-elementary lower bound for the reachability problem for VAS. So thank you very much. [Question] Why the past tense in the title of your talk? [Answer] That's a good question. I suppose it's from the point of view of this result, the non-elementary lower bound for the reachability problem for VAS. But you're completely right that it shouldn't really have been in the past tense, because, as I think I have conveyed, and this is probably my main message, we still know very little. I mean, all these very simply stated problems where we just have no idea whether they're even decidable, let alone the complexity; grammar VAS in dimension 1 are an example, and it seems that they do have deep connections, in unexpected ways, to other such questions. [Question] I know that you have a student working on the Kosaraju algorithm for reachability? [Answer] Yes, that's right, we have an implementation of the Kosaraju algorithm. We thought that it would not be good for anything other than studying the Kosaraju algorithm, because at least some people really understand things only when they're programmed, with proper comments and so on. But actually it turns out, and this is surprising, that on some benchmarks this tool outperforms state-of-the-art coverability checkers, and this is in negative cases, where we don't have coverability. What's involved here is a straightforward reduction from coverability to reachability. So it seems that Kosaraju's algorithm, at least (more testing remains to be done), in cases where coverability fails, or rather where the system is safe, can quickly discover some invariants that in
effect show that the system is safe, which we didn't expect. [Question] And do you know the complexity of this sub-part of the Kosaraju algorithm? [Answer] Not yet, no; it's a good question. [Question] I know these are initial experimental results, but I'd be interested to talk offline. [Answer] OK, thank you. [Question] Is there any way to relate this to decidability of some kind of arithmetical theory, extending Presburger with exponentiation and other things? Is there a way to rephrase some of these questions as what are also known to be extremely hard problems regarding decidability of arithmetical theories? [Answer] To be honest, I don't know off the top of my head; it could be worth exploring. There are rich connections between the reachability problem for VAS and quite a number of problems in other areas, to the extent that Sylvain Schmitz has proposed regarding the reachability problem, or rather the class of all problems that have reductions to and from the reachability problem, as a complexity class. When we go to grammar VAS, I'm not aware of so many connections, but this is a good question. When we go to branching VAS, which is another well-known extension of VAS where reachability is still not known to be decidable, there are connections to database theory and to linear logic. But as far as arithmetic theories go, I'm not aware of any at the moment; there could be, it's maybe just my ignorance.
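The "straightforward reduction from coverability to reachability" mentioned in the answer about the Kosaraju implementation can be sketched as follows. This is my own illustration of the standard construction, not the speaker's tool: add a unit-decrement vector for every coordinate, so that the target is coverable in the original VAS iff it is exactly reachable in the extended one (removing a decrement from a run only increases later values, so extended runs reaching the target correspond to original runs covering it). The function names and the tiny bounded-search checker are my own, for illustration only.

```python
from collections import deque

def coverability_to_reachability(transitions, d):
    """Extend a d-dimensional VAS with a unit decrement per coordinate."""
    decrements = [tuple(-1 if i == j else 0 for i in range(d))
                  for j in range(d)]
    return list(transitions) + decrements

def reachable(transitions, source, target, max_steps=10):
    """Brute-force bounded reachability check (illustration only)."""
    queue, seen = deque([(tuple(source), 0)]), {tuple(source)}
    while queue:
        v, steps = queue.popleft()
        if v == tuple(target):
            return True
        if steps == max_steps:
            continue
        for t in transitions:
            w = tuple(a + b for a, b in zip(v, t))
            if all(c >= 0 for c in w) and w not in seen:  # stay non-negative
                seen.add(w)
                queue.append((w, steps + 1))
    return False

# With the single transition (+2, +1), target (1, 0) is coverable from
# (0, 0) but not exactly reachable; in the extended VAS it becomes reachable.
ts = [(2, 1)]
print(reachable(ts, (0, 0), (1, 0)))                                   # -> False
print(reachable(coverability_to_reachability(ts, 2), (0, 0), (1, 0)))  # -> True
```

This is also why a fast reachability procedure can serve as a coverability checker, as described in the discussion above.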