So let us listen now to Michael Wallner, speaking about compacted binary trees, which admit stretched exponentials. Thank you very much, Jelan. And thank you very much, first of all, to Kassha and Norm for this very nice conference. I think it works very well so far, and I'm really enjoying it. I also want to thank the referees of the extended abstract; they gave me some nice advice on what to do, and I'll try not to be too technical. I'll try to introduce everything thoroughly, and at the end I'll show you what kind of extensions we can have. My third thank-you actually goes to Antoine, because now I can go a bit faster and get to the interesting combinatorics; I'll show you how we got the result that Juan mentioned and what we can do. If you have questions, please feel free to ask anytime; I'm happy to discuss all of it. Okay, first of all, let's start light. What is a compacted binary tree? Well, really simple: start with binary trees. We all know them. I'll denote internal nodes by circles and leaves by squares; leaves have out-degree zero. The trees are all rooted and, as in Antoine's case, they are plane: there is a left and right order on the children. We can construct them easily, recursively: a binary tree is either a leaf, or it consists of a root, a left and a right binary tree. So, why are they nice? What can we do with them? One nice use case is that they can be used to store arithmetic expressions, like this one here: (x² − y²) · (x² + y²). How do we do that? We label the nodes, and then the tree corresponds to this arithmetic expression. But stored like this, we waste a lot of space.
If you want to store that in, for example, your favorite computer algebra system, then it won't store the full binary tree; it uses a compaction procedure like the one Antoine introduced. Let me just go through it; I use more or less the same idea. Basically, we traverse in post-order. We start with this leaf, and every time we see a new element, we remember it: we give it a unique name and store the root label together with the IDs of its children. This one gets ID 1 because it's the first time we see an x. We go on in post-order; when we see the same x again, we don't need to store it, we just remember there is a 1. Then we see something new, a times node: it gets ID 2, and its left and right subtrees both have ID 1, so that's what we save. And so we continue in post-order: the next new element is here, and so on. In this way we create a list of elements which uniquely decomposes our tree into its unique subtrees. The unique subtrees are shown here in gray, if you can see them, and the repeated ones are in white. This tree has seven unique subtrees. The representation I'm using, the one we know already from the previous talk, is a DAG, a directed acyclic graph, where instead of keeping repeated elements we just put pointers to their first occurrence in post-order: here the first time we saw x, the first time we saw y, and so on. This is the object we are interested in; for me, a compacted tree is just a compacted binary tree computed by this procedure. So, compacted trees: what is important is that the subtrees are unique. That's why we look at them, and that's why they are so efficient in storage. And I've just shown you this efficient algorithm.
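Since the procedure itself is the take-home message, here is a minimal sketch of it in Python. The tuple representation and the reconstruction of the slide's expression as (x·x − y·y)·(x·x + y·y) are my own illustration; the hash-consing idea is exactly the post-order ID assignment described above.

```python
# Sketch of the compaction procedure: traverse the binary tree in
# post-order and give every *new* subtree a fresh ID; a repeated
# subtree is replaced by the ID of its first occurrence.
def compact(tree, table):
    """tree: ('label', left, right) for an internal node, or 'label' for a leaf.
    Returns the DAG id of the subtree; `table` maps signatures to ids."""
    if isinstance(tree, str):                       # leaf
        key = ('leaf', tree)
    else:
        label, left, right = tree
        key = (label, compact(left, table), compact(right, table))
    if key not in table:                            # first occurrence: new id
        table[key] = len(table) + 1
    return table[key]

def unique_subtrees(tree):
    table = {}
    compact(tree, table)
    return len(table)

# The arithmetic expression from the talk, with x² written as x*x:
xx = ('*', 'x', 'x')
yy = ('*', 'y', 'y')
expr = ('*', ('-', xx, yy), ('+', xx, yy))
print(unique_subtrees(expr))   # 7 unique subtrees, as on the slide
```

The seven unique subtrees are x, y, x·x, y·y, the difference, the sum, and the root product, matching the count mentioned in the talk.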
So this is now the second time you see it; by now you should be an expert on this algorithm. This is a nice take-home message: if you remember only this one, I'm already very happy. It has been shown that the compacted form of a binary tree can be computed in linear expected time. The analysis using analytic combinatorics started with Flajolet, Sipala and Steyaert in the 1990s, who showed that a tree of size n has a compacted form of expected size n divided by the square root of log n, where the constant is explicit and depends on your model, on the labels of the nodes, and so on. What does that mean? Well, you win a lot: you win a factor of square root of log n, so for large n, well, it doesn't look like too much, but it's definitely not of the same scale. Where does this appear? We didn't make it up: it appears in XML processing and compilers, it's known as the common subexpression problem, it's used for data storage, and so on. And now we come to the problem of this talk, or one of its problems, because this is the first discrete object. We ask the most natural question: how many compacted trees of compacted size n exist? The compacted size I define as the number of internal nodes, and we simplify even further: we don't care about labels, it's complicated enough, so we just look at unlabeled binary trees. Here is the main result, and here is the last thing I promised. Now you know what a compacted tree is; the remaining notion in the title is the stretched exponential. A stretched exponential is a term of the form: some base raised to the power n^σ. And we have shown, together with Andrew Elvey Price and Wenjie Fang, that the number of compacted binary trees of size n behaves asymptotically like n! times 4^n times this stretched exponential: we see an e^(3 a₁ n^(1/3)), times a polynomial term n^(3/4).
And what's quite nice: this stretched exponential is completely explicit. What we see in here is a₁, the largest root of the famous Airy function, given here by an integral representation; I will say more about the Airy function a little later. What we conjecture from numerical experiments is that this is not only a Theta result: an asymptotic equivalence should hold, with a constant which we can compute very, very accurately, but which our method does not give. So far we can only prove that there is a constant such that the count is upper and lower bounded by constant multiples of this expression. Okay, so this was the first main result, on compacted trees. Now, since there are many computer scientists here, I also want to show how we adapted this method to count deterministic finite automata. So let me start again: what is a deterministic finite automaton? The reviewers were especially interested in this part, so I decided to introduce DFAs as well, show how they work, and show how our method applies. A DFA — I'm sure most of you know what it is, so just a short recap, on the binary alphabet {a, b}. It's a graph; here you see an example. From each node, which is called a state, there are two outgoing edges, labeled a and b. Then we have an initial state; here I labeled it q₀, and it will always be q₀. The initial state acts kind of as a root. And we have some final states, colored here in green: these two are final states. Such a DFA defines a language: the set of accepted words. How does it work? If you have a word over the alphabet {a, b}, you start at the initial state, and if you see, for example, an a, you traverse the edge labeled a.
Then you're in state q₁, and you go on. Every word which ends in an accepting state is called an accepted word, and the set of accepted words is the language the DFA defines. In this case, the accepted words are a, because we can traverse here, then aa, ba, and aba. A DFA is called minimal if there is no DFA with fewer states that accepts the same language. So here, this one is the minimal DFA; if you're bored, you can try to prove that it is really minimal for this language. What is important here is that it's acyclic, meaning there are no cycles except the loops at the sink. For finite languages, this will always be the case. So counting minimal acyclic DFAs, which is what we're interested in, has been worked on by a couple of people already. But the asymptotic number of minimal acyclic DFAs on a binary alphabet, which I denote by m_n, was so far unknown. People started working on it — Domaratzki, Kisman and Shallit, and Liskovets, between 2002 and 2006 — but the best known bounds were off by an exponential factor. In previous work on compacted trees, we studied a related structure, which we call relaxed trees; these give upper and lower bounds, and they already show that there has to be a stretched exponential in there, explaining a bit why it's complicated to count these objects. Those bounds were off by a polynomial factor n^(1/4). Now I'll show you how we can use our method to get the main result here: the stretched exponential appears again, and now the polynomial factor is n^(7/8). We see again the n!, and we see an exponential growth of 8^n; all that makes sense, because we have a binary alphabet and, for every state, the possibility to be accepting or not.
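To make the acceptance mechanics from the example concrete, here is a small simulation sketch. The transition table below is a hypothetical reconstruction — the slide's actual drawing isn't in the transcript — of a minimal acyclic DFA (with a non-accepting sink) for the finite language {a, aa, ba, aba} from the talk.

```python
from itertools import product

# Hypothetical minimal acyclic DFA for the finite language {a, aa, ba, aba}:
# q0 is the initial state, q1 and q3 are accepting, 'sink' is the
# non-accepting sink with loops (the only "cycles" in the graph).
delta = {
    ('q0', 'a'): 'q1',   ('q0', 'b'): 'q2',
    ('q1', 'a'): 'q3',   ('q1', 'b'): 'q2',
    ('q2', 'a'): 'q3',   ('q2', 'b'): 'sink',
    ('q3', 'a'): 'sink', ('q3', 'b'): 'sink',
    ('sink', 'a'): 'sink', ('sink', 'b'): 'sink',
}
accepting = {'q1', 'q3'}

def accepts(word):
    """Run the DFA on `word` and report whether it ends in an accepting state."""
    state = 'q0'
    for letter in word:
        state = delta[(state, letter)]
    return state in accepting

# Brute-force check: among all words of length <= 4, exactly the four
# words named in the talk are accepted (longer words all reach the sink).
language = {''.join(w) for k in range(5)
            for w in product('ab', repeat=k) if accepts(''.join(w))}
print(sorted(language))   # ['a', 'aa', 'aba', 'ba']
```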
So this is where your 8 comes from. We see the same stretched exponential — this is a bit mysterious, and I'll motivate why it appears afterwards — and we see the polynomial term explicitly as well. Again, we can conjecture the constant which should be in front, and compute it with very high precision, but we cannot prove that this holds as an asymptotic equivalence. So again this is a Theta result, meaning there are constants such that one constant times this expression is an upper bound and another constant times it is a lower bound. Okay, so that was the introduction of the two discrete objects I'm looking at. Now let's go a bit into the details. What is the Airy function? Why does it appear, and what is it doing here? The Airy function is a very classical function in theoretical physics, but of course also in mathematics. It has an integral representation like this one, but what I prefer, and what we will see, is its defining differential equation: the second derivative of the Airy function equals x times the function itself, Ai''(x) = x · Ai(x). Together with boundary conditions, this gives you two independent solutions, and the one which goes to zero as x tends to infinity is the Airy function Ai; we will not need the second solution of this differential equation. The largest root — here you see a plot — which we denote by a₁, is at approximately −2.3381, and this is what appears in the stretched exponential. Just a few side remarks on where it has appeared before in combinatorial analysis: for example, in random maps, or in the Brownian excursion area.
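To check the constant a₁ numerically, one can integrate the defining ODE Ai''(x) = x·Ai(x) from the classical initial values Ai(0) and Ai'(0) toward negative x and locate the first sign change. This is only a quick numerical sketch (the step size and the search window are arbitrary choices of mine), not how anything is computed in the paper.

```python
import math

# Classical initial values of the Airy function at 0.
AI0  = 3 ** (-2.0 / 3) / math.gamma(2.0 / 3)      # Ai(0)
AIP0 = -(3 ** (-1.0 / 3)) / math.gamma(1.0 / 3)   # Ai'(0)

def ai_on_grid(x_min, h=1e-4):
    """Integrate (y, y') for y'' = x*y from x = 0 down to x_min by RK4."""
    def f(x, y, yp):
        return yp, x * y                           # y' = yp, yp' = x*y
    x, y, yp = 0.0, AI0, AIP0
    while x > x_min:
        k1 = f(x, y, yp)
        k2 = f(x - h / 2, y - h / 2 * k1[0], yp - h / 2 * k1[1])
        k3 = f(x - h / 2, y - h / 2 * k2[0], yp - h / 2 * k2[1])
        k4 = f(x - h, y - h * k3[0], yp - h * k3[1])
        y  -= h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp -= h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x  -= h
        yield x, y

def largest_airy_root(x_min=-3.0):
    """First zero of Ai left of 0: midpoint of the bracketing grid step."""
    prev_x, prev_y = 0.0, AI0
    for x, y in ai_on_grid(x_min):
        if prev_y > 0 >= y:            # sign change: root is bracketed
            return (x + prev_x) / 2
        prev_x, prev_y = x, y
    return None

a1 = largest_airy_root()
print(a1)   # close to -2.3381
```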
So let's do some combinatorics: the bijection to decorated paths that Antoine mentioned in the previous talk — very nice, I can now make this explicit. Okay, let's start. We take a DFA; let's take this one. Again, accepting states are in green, non-accepting states in white. First of all, we highlight a spanning tree by a depth-first-search traversal. We don't care about the sink: note that all our results are for DFAs recognizing a finite language, meaning that the unique sink will never be an accepting state. So basically we ignore the sink, we go depth-first, and we get a spanning tree, shown now in black. Then we color the other edges in red, and we draw it as a binary tree: as Antoine interpreted true as left and false as right, I take a as left and b as right. And then we get a plane binary tree. That already looks very much like the DAGs of compacted binary trees I've shown you before. Next, we label the nodes in post-order — again post-order, like in the compaction procedure — as we see them: we see this one first, then this one, this is the third one we see, the fourth one, and so on. — Michael, are you doing a bijection from compacted binary trees? — No, no, I'm doing DFA to a compacted structure. This is, if you want, an intermediate auxiliary structure, and my goal is a decorated path structure, a decorated Dyck-path structure. — Because you also had labels corresponding to left child and right child in this context? — Yes, exactly. Here I had a left, b right; like this, I have the left-right order on the children.
That's why I can forget it now, why I don't care about it anymore. Then we just forget the labels — sorry, I shouldn't point at my screen — we only have to remember where each pointer is pointing. Okay, so this is now my structure, and I'll show you how we get a decorated path structure from it. Basically, we traverse the tree along its contour. We start here at (−1, 0), for technical reasons. Every time we go through a red pointer edge, we take a step to the right, and we mark an X in a box below. Here we see the numbers one, two, three, four, five, six, and these numbers correspond to the nodes the pointers are pointing to. So we make a cross here and go right. Then we go through two red edges again, so we go right, go right, and make a cross in the box below which corresponds to the one. And every time we go up a black edge, we also do an up step here — every black edge can be uniquely associated with an internal node — and we color the up step accordingly. This is basically the two here. Then we go on; we go up again, this is number three, it corresponds to the three here as well, and you see it's colored white, because this was a non-accepting state. And that's how we continue, and so on. What we get in the end is this structure here: a bijection from our DFAs, via an intermediate compacted structure, to some kind of Dyck paths with marked boxes. Okay. So how do these paths look?
Okay, these paths are what we're actually going to analyze. They start at (−1, 0) and end at (n, n). Furthermore, they always stay below the diagonal; the first step is just a technical step. And one box is always marked below each horizontal step. So when we do a horizontal step here, for example, we have one possibility; if we do a horizontal step there, we have two possibilities, because there are two boxes, and so on. So the weight of a right jump depends on its height. And if we do a vertical step, we always have two possibilities: either green or white. So every up step gets weight 2, which I gave a name here. So basically we live in this lattice and look at the paths in it, and by the bijection, the number of these paths is the number of my acyclic DFAs. So what I'm interested in is just these paths. And it's not so hard to come up with a recurrence relation for them. Let's call a_{n,m} the number of weighted paths ending at (n, m); in the end we want to end at (n, n). What is a_{n,m}? We can come from below, from (n, m − 1), with weight 2. Or, if we jump to the right, we come from the left, from (n − 1, m), and the number of possibilities depends on my current altitude m: if I'm at altitude 2, then I have three possibilities, which we see here. This gives the recurrence relation, and we're interested in the number of paths ending here. But these are not yet the right paths: minimality has not been taken into account. Minimality is not hard to grasp.
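As a sketch, the recurrence as spoken — come from below with weight 2, or from the left with m + 1 choices of marked box at altitude m — can be implemented directly. The boundary convention a_{0,0} = 1 (ignoring the technical start at (−1, 0)) is my assumption, so the diagonal values below illustrate the shape of the recurrence rather than the exact counts from the slides.

```python
from functools import lru_cache

# a_{n,m}: weighted paths ending at (n, m), staying weakly below the
# diagonal (m <= n). Last step is either an up step from (n, m-1) with
# weight 2, or a right step from (n-1, m) with m + 1 box choices.
# Boundary a_{0,0} = 1 is an assumption of this sketch.
@lru_cache(maxsize=None)
def a(n, m):
    if n < 0 or m < 0 or m > n:
        return 0
    if n == 0 and m == 0:
        return 1
    return 2 * a(n, m - 1) + (m + 1) * a(n - 1, m)

print([a(n, n) for n in range(6)])   # [1, 2, 12, 128, 2032, 43616]
```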
It's just a bit technical, but basically, for minimality, we have to be careful with leaves in my path construction. Antoine introduced the word spine: the spanning tree is basically the spine, and in this spine there are some leaves, for example this one here, the one with number four — sorry for jumping up and down. If this object here copies something, reproduces something we have already seen before, then we are not minimal; this is not allowed. For example, here I've changed the three into a one, and this four now just copies what happens there: this node is now exactly the same as that node. And what we can show is that the number of forbidden configurations depends just on the previous nodes we have seen in post-order. Again, like in Antoine's case, we don't care about what exactly happened before, just how many trees appeared before, because each of these trees can be copied by a certain configuration of the labels here. This is maybe a bit technical, but in our path bijection it just means we have to be careful with objects which are like leaves. Such a leaf is a red edge, a red edge, and an up step: it corresponds to a right, right, up. For all these objects I have to be careful: some of the possible crosses are not allowed. How many are not allowed? That's quite easy: m of them. Because if I'm at altitude m — say I'm at altitude three — I have already seen one, two, three nodes before which correspond to unique subtrees, and I'm not allowed to recreate them.
So, to recap, this just means I have to subtract a certain term corresponding to a right, right, up: from this position, I'm not allowed to do the m configurations which correspond to the unique subtrees. I know that on slides this is not easy to grasp, but it means this is my new recurrence relation, and the objects counted by it — the paths ending at (n, n) — correspond to the minimal DFAs we're counting. Okay. — Is there any alternative meaning of the green vertices? — The green vertices are accepting states. As long as I don't copy an existing one, I can color them green or white. But basically, these are my paths, and a first simplification is to transform them a bit to make things easier. We divide by n!, because the horizontal weights are always 1; 1, 2; 1, 2, 3; and so on — the weight of a path is the product of its step weights. We don't quite get rid of them, but we rescale. We also divide by 2^m, because if we go up, we collect m up steps by the time we reach this point. And then we do a certain shift, which I'll demonstrate in a second. So the first step: this factor changes the grid so that we get rid of the green ones — the up steps now just have weight one — and we rescale to have weights between zero and one, which will help the analysis. The final step is that we change the order, which basically just flips the grid. So you can think of it like this: we start here, we go through a Dyck-like path, we always stay above the x-axis, we end here, and we have certain weights. And this transforms the recurrence into this one.
Again, it's not important how it looks; you just have to apply this transformation. The idea is that we now have a path structure with the advantage that n increases in each step: every step brings us further to the right. So we have a one-dimensional path structure with funny weights. The interesting thing is that the weights get smaller the higher we get: the up jumps here have weight one, and here we have one half, two thirds, whatever — the higher we get, the smaller the weights. This is what I want to use to give an intuition for why the stretched exponential appears, and for this I'll make a quick side note about pushed Dyck paths — a nice family, well, not so simple, but very much related to our objects. So let's take a Dyck path of length 2n: a path staying always above the x-axis and taking steps (1, 1) or (1, −1), so northeast or southeast steps. If a path reaches height h, we give it weight 2^(−h): the higher it gets, the less weight we give it — we punish it for going too high up. And then something nice happens. Consider paths with maximal height n^α. It is a known result that the total number of such paths is roughly 4^n times e^(−c · n^(1 − 2α)). Then the weight is 2^(−h); since the path has height about n^α, this is roughly 2^(−n^α), which I can rewrite as e^(−log 2 · n^α). So the total mass of paths of height about n^α is basically the product of these two factors.
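These pushed Dyck paths are easy to experiment with. The following sketch computes the exact total mass for small n, reading the talk's weight as 2 to the minus the maximal height reached (my interpretation); a dynamic program over (current height, maximal height so far) suffices.

```python
from fractions import Fraction

def pushed_dyck_mass(n):
    """Total weight of Dyck paths of length 2n, where a path whose
    maximal height is h gets weight 2^(-h)."""
    # state: (current height, max height so far) -> number of paths
    states = {(0, 0): 1}
    for _ in range(2 * n):
        nxt = {}
        for (y, h), c in states.items():
            up = (y + 1, max(h, y + 1))          # northeast step
            nxt[up] = nxt.get(up, 0) + c
            if y > 0:                            # southeast step, stay >= 0
                dn = (y - 1, h)
                nxt[dn] = nxt.get(dn, 0) + c
        states = nxt
    # paths must return to the x-axis; weight them by 2^(-max height)
    return sum(Fraction(c, 2 ** h)
               for (y, h), c in states.items() if y == 0)

print([str(pushed_dyck_mass(n)) for n in range(1, 4)])   # ['1/2', '3/4', '11/8']
```

For n = 2, for instance, UDUD has height 1 (weight 1/2) and UUDD has height 2 (weight 1/4), giving the total 3/4.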
And now we see something very interesting happening. We have the n^(1 − 2α) term and the n^α term: the higher α gets, the smaller the first exponent and the bigger the second. So the paths want to go up, because there are more paths if we allow tall paths; but if the paths are tall, they are pushed down again by the weight. Two phenomena push against each other, and the maximum occurs exactly at α = 1/3 — that's just where the maximum of this expression is. This is the reason for the stretched exponential: the two forces reach an equilibrium, and most of the mass is concentrated there. It's very similar in our case. So let's do a bit of heuristics and look at big numbers, where things happen. What we were interested in was this e_{n,m}, so let's plot some of these numbers: the sequence I've shown before, for large n and several values of m — length 100 on the left and length 1000 on the right. We see a similar picture to what Antoine showed before: some phenomena happening here and other things there, and the numbers are very big, as in Antoine's case. We're actually interested in the paths with m = 0: we want them to come back to the x-axis. So let's zoom in a bit on the left part here. If we zoom in, we see something happening; now let's make n a bit bigger, say n = 2000, and this thing seems to converge to something. We can already guess what it is converging to, and now I want to show you how we found that it's actually the Airy function. What we guess first: we see here a very large scale on the y-axis.
So the first guess: it seems like the amplitude factors out — a dominant factor depending just on the length n — and then there is a limit shape, a function of m rescaled by some function g(n) depending on the scale. And here we guess the structure again: we guess that the scale is n^(1/3). This was motivated by the fact that the very large scales depend just on n, and the interesting things happen when m is close to n^(1/3). We then use this ansatz in the recurrence I showed you, and after some technical steps — you plug it in and look at the quotient — we see this shape: the quotient h_n / h_{n−1}, if the ansatz is true, behaves like 2 times an expression involving derivatives of f, where I've also rescaled m on the scale n^(1/3) so that I zoom into this part, and κ is just the constant in front of n^(1/3). If we now assume equality — everything here is heuristic — assuming this is the asymptotic expansion of the quotient, we get that h_n has this shape, with a stretched exponential involving the c here, and that the second derivative behaves like this. And this should look familiar: the second derivative is something times κ times the function itself, and that is just a shifted Airy function. This is only a heuristic, but it's our motivation for the Airy function showing up here: it dominates these asymptotics. The boundary conditions then tell you that the c should actually be the root of the Airy function. Okay, let me stop with the technical parts, because I don't want to bore you too much. The inductive proof is now very basic.
Basically, we use these ideas: we find upper and lower bounds in which the Airy function is hidden, using the previous heuristics as a guiding theme to understand what's happening; then we fiddle with these bounds until they are suitable, and do an inductive proof on n and m. All this is technical — I don't want to hide it; whoever is interested, I'm happy to discuss it afterwards, or we refer to our papers — but what we show is that this holds asymptotically, and the bounds a_{n,k} and b_{n,k} satisfy the same asymptotics, just with different constants. And this brings me basically to the end. What I've shown you today: first, a bijection to decorated paths, the one I mentioned before; these are easy to analyze by a recurrence relation; then a heuristic analysis of this recurrence showing the appearance of the Airy function; and, which I omitted here, the inductive proof. What we get is a lower bound and an upper bound with the same asymptotic behavior — only the constant is unfortunately different, and we cannot grasp it with this method. This gives the asymptotics for minimal deterministic finite automata recognizing a finite binary language, and for compacted binary trees, which was the starting point of this work. Further problems: as I mentioned, the multiplicative constant — we don't even know if it exists; it could fluctuate, we don't know. Then of course, now that we have a handle on these objects, it's interesting to study some statistics: the number of words in the language, the length of the longest word, and so on. And of course we're always interested in applying our methods to other problems, like the one from Antoine — it looks very promising, very interesting, very tempting.
And I have to mention that it has already been applied to another problem, coming from biology: counting the number of tree-child networks in phylogenetics. So if you have another tricky recurrence relation to try, please let us know; we're happy to discuss it. One was already suggested by the referees: a class of languages which is not finite anymore, but close to finite — piecewise testable languages. This looks very interesting, very promising, and we think we should actually be able to get some results there. So thank you very much for your attention, and I'm happy to answer questions. — Thank you very much, Michael. — Yeah, I have a question. — Sure. — Maybe Sergei, you're first. — Yeah, I didn't really understand: are they rooted or not? Because you need a decomposition. So are these rooted, or? — The trees were rooted. The DFAs have an initial state, so that kind of gives you a root. — Ah, so you start from one initial state, and then? — Exactly. In a DFA you always have an initial state, and from there you parse whatever you have: your words. For example, here it was q₀, my initial state. So one of the states is marked as the initial state. — But you say: automata recognizing a language over an alphabet with two letters. So do you make some connection with the resulting language, or is it just the structure itself? — The question is what you mean by a connection with the language. Every language has a minimal DFA, and this DFA is unique. Every DFA defines a language, but there might be multiple DFAs defining the same language; one of them is unique, the minimal one, and we count the minimal ones. So every automaton we're counting here corresponds to a unique language, and every language has a unique minimal automaton.
So by counting minimal automata, we're basically counting languages. — Maybe the question is: can it happen that two minimal automata correspond to the same language? — No. — No, okay. — No. So if you give me a language, you can think of the minimal automaton as a complexity measure of the language, if you want: the number of states is a simple complexity measure, because this is the minimal thing you need in order to parse your language. — So it's one-to-one. — Yes. — Sinha, you have a question? — No, thank you. Thank you, Michael. I'm interested in your result, and my question is about two different directions of generalization. One is beyond finite languages, as the referees suggested. For the finite case, I think the minimality is somewhat trivial, and you can encode the minimality condition in your decorated paths. But for, for example, piecewise testable languages, maybe minimality will be a bit more non-trivial, okay? — I agree. — Okay, but I think there are possibilities there. — It depends; I've had a quick look — I'm not an expert on piecewise testable languages, I didn't know about them before. But what I think could happen there is that we have two sinks: one accepting and one non-accepting final state with loops. So far our final state is always non-accepting, because otherwise it would accept everything whenever you reach it. If it's just that — and of course there are technical issues — then I'm pretty sure we can deal with it: we would simply have two sinks, and that is doable. Several sinks are not a problem; we could live with that.
So I think we can easily encode a class, maybe a boring one, of automata or languages which are infinite and which we can analyze. For piecewise testable languages I have to go into the details, but I'm actually optimistic, because it looks like a simple relaxation, or let's say the next logical step. Okay, thank you.

And another direction is a ternary alphabet; so far the alphabet consists of two letters. Yes, yes. Maybe three-dimensional Dyck paths then, yes. Some new notion is needed, but okay. Do you know Cyril Nicaud's result about random deterministic automata? Yes. Okay, his result is that if there are three or more letters, then almost all accessible automata are actually minimal. Well, to be honest, I didn't know that one, but yes. Okay. So my question is, do you have any such conjecture about statistical properties of these automata? Yeah, that is a really interesting question. This is a project I want to pursue in the next years, because it's something interesting: when do stretched exponentials appear in other structures and statistics? I have to be honest, we have not really looked at statistics; we're happy now that we're able to count.

For ternary alphabets, and this also holds for ternary compacted trees and all these structures, we don't get three-dimensional paths. Instead of a Dyck path below a line of slope one, we have slope one third, one fourth, one fifth, one sixth, and so on. It's basically similar to the bijection from ternary trees to a subclass of Łukasiewicz paths, if you know those; we just use different building blocks there. So we don't need to go into dimension three; this is also doable. And here there is a dependency.
And so far what we think we see, but this is just a conjecture, is that the n^(1/3) in the stretched exponential seems universal, whatever universal means. The constant changes, and the error term changes with it, but the n^(1/3) in the final result should still be there. So this is already quite nice. Of course the model has some influence, it will change something here and there, but this n^(1/3) seems to be present in all these cases: ternary, quaternary, and so on. Okay. But it's ongoing work. Yeah, these are very good questions. So I will read your paper and send a message if I have an idea. Please, yes, of course. Thank you.

Yeah, please. I would like to ask a question. Sure. You just mentioned the class of relaxed compacted trees, where the uniqueness of the substructures is not a constraint anymore; is there also a stretched exponential? Yes. The asymptotics, do I have them on my slides or not? No, I didn't put them on my slides, but yes, there is one, and the only thing which changes is the polynomial term. So it's like compacted trees, everything is the same, but here we have n^(3/4), and we could even get more terms. So compacted and relaxed trees are just off by a polynomial term n^(1/4). This gave our first bounds for the minimal deterministic automata, which actually sit exactly in the middle of that interval of length one fourth.
But yeah, everything is actually even simpler there, because you don't need minimality. I have not talked about it, but the minus sign in the recurrence relation is what makes things technically complicated, and relaxed trees don't have a minus: their recurrence relation has only positive terms, positive coefficients. That subtracted term makes it technically really complicated, but for relaxed trees it's actually easy. But yeah, feel free to talk to me, we can discuss this stuff and maybe work together on something. Because in the context of BDDs there are also subclasses where the uniqueness is not a constraint anymore. And so I think this is a good starting point for the method then. Yeah. Okay, thank you. You're welcome.

I have a question, if I may. Yeah, thank you. You mentioned that the approach is applicable to other sequences, right? I'm wondering to what extent. I mean, if I have a recurrence relation like this, linear with polynomial coefficients and several variables, how is that going to work? In particular, is the heuristic approach going to be similar, or can it be radically different? And is it important that you have only two variables? Yes. Well, to answer your question: so far, all we know is that this approach works for two-variable recurrence relations which are, I would say, not too complicated. And something similar popped up in phylogenetics, for example. The slides are online; you can click on every link and you'll see the papers, or just Google them. So yes, it was important that it's a two-variable recurrence relation, and we actually used the guiding principle from the phylogenetic case a lot for intuition. What happens for more variables? Things get messy, I guess. I don't know; I don't even dare to think about what happens there.
So far, basically, if you look at this, we have weight one for an up step, if I'm not mistaken, and for a down step something like one minus two over n, ish. So this is a weight which is close to one: we have weight one and a weight which is close to one. And there is a balance coming from the height as well: the higher I go, the more it pushes the path down, and so on. And this is nearly negligible in a sense for the asymptotics; it just makes things complicated. So the basic structure we are looking at at the moment is this bivariate setting, where we have a weight in front of one of the objects which is nearly the other one but depends on the height. And this is actually precisely the reason for the phenomenon of a stretched exponential, which is quite amazing.

And I guess if you have a slightly different relation, you will have something different from the Airy function, or is it somehow universal? Good question, I don't know. Okay. Yes, that's a very good question. Thank you. If I may, I would say that it's not limited to the Airy function. In fact, it depends on the recurrence we have: for different recurrences, maybe we'll have functions that are governed by other differential equations, I think. Okay. So the stretched exponential comes from these types of recurrences, but the details give you different equations. Exactly. I had to drop these proofs here, and these connections are in no paper yet, but if you change things here and so on, you get, let's say, other derivatives as well, and this is the defining equation for the Airy function here. Heuristically, of course, but well, you can prove it.
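The weighted-path picture sketched here can be played with directly. The following is my own toy dynamic program, not the exact weights from the paper: up steps carry weight 1, and a down step leaving height h carries a height-dependent weight wt(h, n) close to 1; the concrete choice 1 - h/(2n+2) is an assumed form purely for illustration. Setting the weight identically to 1 recovers the ordinary Dyck path count, the Catalan numbers, which serves as a sanity check.

```python
def weighted_dyck_total(n, wt):
    """Total weight of Dyck paths of length 2n (up/down steps, never going
    below 0, ending at 0), where an up step has weight 1 and a down step
    leaving height h has weight wt(h, n)."""
    dp = [0.0] * (2 * n + 2)  # dp[h] = total weight of prefixes at height h
    dp[0] = 1.0
    for _ in range(2 * n):
        new = [0.0] * (2 * n + 2)
        for h, w in enumerate(dp):
            if w:
                new[h + 1] += w                  # up step, weight 1
                if h > 0:
                    new[h - 1] += w * wt(h, n)   # height-dependent down step
        dp = new
    return dp[0]

# sanity check: constant weight 1 counts Dyck paths, i.e. Catalan numbers
catalan = [weighted_dyck_total(k, lambda h, n: 1.0) for k in range(1, 6)]

# toy height-dependent weight, close to 1: higher paths are penalized more
toy = weighted_dyck_total(20, lambda h, n: 1.0 - h / (2 * n + 2))
```

In the toy model the penalty grows with height, so mass concentrates on paths of intermediate height; in the analysis discussed in the talk it is exactly this tension between a weight near one and the height that produces the stretched exponential, with the critical heights of order n^(1/3).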
And this depends on the recurrence: if you change the recurrence slightly, you change this part here as well. Okay. It's nice that you've decoupled the two aspects. Thank you.

So we have a question. Yes. So in fact, you have a bivariate holonomic expression: there is a PDE satisfied by the e(n,m), so you can use a bivariate generating function to represent the e(n,m), and what you have is holonomic. Yes. So for instance, you take the sum of e(n,m) times z^n times y^m, and your recurrence says that this equation is holonomic. Did you try to get the asymptotics directly from the... From the functional equation? Yes, I tried that before arriving at this approach. And because of the binding between n and m, I didn't get too far, I have to be honest. So it did not work; I didn't get it because of the interplay between n and m, if you see what I mean. I had some functional equations, some representations, and so on, but things get messy there. I can write down a PDE here, but I didn't get anything out of the PDE. Okay. Maybe it's a possibility to get the constant. Of course, I agree, but so far we were not successful in getting it. So we could write it down, but it's complicated. Actually it's not so complicated, but, well... In a sense, we had the feeling that it hides this interplay between weights and height, which we had to separate in order to get this result; it's basically hidden in this block here. And from the PDE, I didn't see how to get it. I discussed it a bit with Hsien-Kuei and... Well, it's not easy. Because, for instance, when Hsien-Kuei Hwang analyses the Stirling numbers, typically you have the same type of PDEs. Okay. There is some shock phenomenon; it explains the n^(1/3). But it's always a PDE.
So I'm not able to decouple the derivatives with respect to one variable and the other. So that's actually what I would call holonomic: a differential equation in all variables. No, no, no. That's different. Okay. Yeah, well, different authors use different notions; I would call it holonomic if it's holonomic in each variable separately. No, no. At least it doesn't... So we couldn't get far with this notion, with this PDE. PDEs are complicated. Yeah, certainly. No, I agree, Olivier. I tried to play with it for a long time. Well, I think we don't have a powerful tool for PDEs that works everywhere, and the one we looked at is quite intertwined between the variables, so we didn't manage to separate them directly.

I agree with you. But sometimes you can use a matched asymptotics approach. If you can guess what the behavior of the function should be, you put it inside the PDE and you try to match the coefficients to get the asymptotics. It's just a trick, but in general it works very well. Can you send us a reference? Because we don't know about this method, apparently. I have no reference, because I do this with Asuka very frequently, but I know no reference. I think I discussed something like that with Hsien-Kuei at some point. You try to assume what is reasonable as a solution, but leave some constants free, put it inside the PDE, and try to find the values of the constants. For instance, when we have a C-series... Yeah, it's more difficult. So far I know just some cases which have been worked out; a collaborator of Hsien-Kuei did something like that, I think for... was it tries? Something like this? I can't remember what it was. I don't say it's a proof, yeah, maybe. No, I agree; it could give access to the constant. That would be one of my hopes, Olivier, I agree.
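The matched-asymptotics suggestion (assume the shape of the solution, leave constants free, and match) can be illustrated on a recurrence whose answer is known. This is my own toy example, not the method from the discussion: the Motzkin numbers satisfy (n+2)·M(n) = (2n+1)·M(n-1) + 3(n-1)·M(n-2) and are known to grow like C·3^n·n^(-3/2), so matching the ansatz a(n) ≈ C·ρ^n·n^α against consecutive ratios of computed terms recovers ρ = 3 and α = -3/2.

```python
from math import exp, log

def motzkin(N):
    """Motzkin numbers M_0..M_N via their three-term recurrence
    (exact integer arithmetic; the division is always exact)."""
    m = [1, 1]
    for n in range(2, N + 1):
        m.append(((2 * n + 1) * m[-1] + 3 * (n - 1) * m[-2]) // (n + 2))
    return m

def fit_growth(a, n):
    """Match the ansatz a(k) ~ C * rho^k * k^alpha against two consecutive
    ratios: log(a(k)/a(k-1)) = log(rho) + alpha*log(k/(k-1)) + O(1/k^2)."""
    r1 = log(a[n]) - log(a[n - 1])
    r2 = log(a[n - 1]) - log(a[n - 2])
    x1 = log(n / (n - 1))
    x2 = log((n - 1) / (n - 2))
    alpha = (r1 - r2) / (x1 - x2)   # solve the 2x2 system for alpha...
    rho = exp(r1 - alpha * x1)      # ...then back-substitute for rho
    return rho, alpha

rho, alpha = fit_growth(motzkin(2000), 2000)
# rho comes out close to 3 and alpha close to -3/2, as predicted
```

A genuine proof of a stretched exponential needs much more care, since the correction terms interact, but this leave-constants-free-and-match spirit is what the suggestion in the discussion amounts to, applied inside a PDE instead of a recurrence.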
That's what I... Yeah. Okay, thank you. I suggest that we continue either on Discord or during the open problem session, so we can close the talk. Let's thank the speaker again, and we can move to...