Okay, let's go ahead and get started then. Thanks, everybody, for making it out. This week we've got Miklós Bóna from the University of Florida, who will be talking to us about non-rational generating functions. Take us away.

Hi, thank you for the invitation. I love coming to the University of South Carolina. This is my fourth time; the last two times were conferences, in 2015 and 2009, but the first time, and also the last time, that I presented something at this seminar was in mid-November of 2000, which was a very similar time to now, because right now we don't know who the next president will be, and back then we didn't know who the next president would be either, even though it was the middle of November, in case you remember the Bush versus Gore time. Okay, so you can see my shared screen, right? So I'm going to talk about non-rational generating functions, and then there will be a class of examples that is particularly interesting to me. Let's see, why can't I go to the next slide? I guess I can go manually like this. Okay, so in this talk, when I say generating function, I always mean the ordinary generating function, just this thing. There will be no exponential or other generating functions in this talk. So here is the main idea of the talk: sometimes we have an enumeration problem and we can't solve it. We try to get an exact formula and we cannot find one. And then we don't know whether we can't find it because we are just not good enough, or because the problem is actually difficult, okay?
And sometimes one thing that can actually give you some reassurance that the problem is genuinely difficult is when you can say something about the generating function of the enumeration problem, even though you can't find it in closed form, because if you could, then you would have solved the problem. But you can say something about it that shows that it's not a nice generating function. That usually indicates that it's not an easy enumeration problem, because if it were easy, there would be a nice answer to it, and the generating function of that nice answer would be nice too. Okay, so I will give a brief introduction to the parts of generating function theory that we need today, maybe 10 or 15 minutes. I apologize to the experts like Gábor; you can go to sleep, but wake up after 10 minutes. All right, and because the presenter function doesn't work on my computer, you will see some things twice. As far as niceness of generating functions is concerned, there will be three classes of generating functions in terms of how nice they are, nested like this. Rational functions are the nicest, right? The ratio of two polynomials. Algebraic generating functions, those that satisfy an algebraic equation, are not quite as nice but still nice. And then there are D-finite generating functions, I will tell you what that means, which are not as nice as the previous two, but still something. If your generating function is D-finite, or differentiably finite, then there is still a chance for some kind of a nice formula. Okay, so the first class is rational generating functions, that is, ratios of two polynomials. And this is equivalent to the fact that the coefficients of your power series, which are usually the numbers that count objects of size n, satisfy a linear recurrence relation with constant coefficients. The coefficients in the recurrence are constants, okay?
So for instance, f_n = 7 f_{n-1} + 2 f_{n-2}, something like this. Okay, these are the rational functions. That's the nicest class, and for that reason it's often easy to find a solution: you look for a recurrence relation with constant coefficients. Sorry about this; it seems that I cannot go from one page to another, the pause function of the slides. Ah, okay, now I've made it, oops. No, it still doesn't work, so I just have to go from one page to another manually; I don't know why that is. Okay, so we were at rational. The next, slightly wider class of power series is algebraic power series. We say a power series F is algebraic if it satisfies a polynomial equation P_0(z) + P_1(z) F + ... + P_d(z) F^d = 0 with polynomial coefficients. If a power series is not algebraic, then it's unlikely that you will have a super simple formula for the coefficients, but there can still be a formula; it just will contain more than the four basic operations and roots. The other problem with algebraic generating functions is that there is no easy necessary and sufficient condition on the coefficients. Remember, above I said that a power series is rational if and only if its coefficients satisfy a linear recurrence relation with constant coefficients. For algebraic power series, I cannot state such an easy equivalence. Okay, everything that's rational is also algebraic, namely of degree one, because it satisfies a polynomial equation of degree one: just multiply your rational function by its denominator and you get the numerator. And then there are the D-finite, or differentially finite, power series. The way to define these is the following. Take all the derivatives of your power series, and consider the vector space they span over the field of rational functions.
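Before the next class, the rational case above is easy to check numerically: the power-series coefficients of a rational function with denominator constant term 1 are exactly the solutions of a constant-coefficient linear recurrence. A minimal Python sketch (the helper name rational_series is mine, not from the talk):

```python
def rational_series(num, den, n):
    """First n power-series coefficients of num(z)/den(z), assuming den[0] == 1."""
    c = []
    for k in range(n):
        nk = num[k] if k < len(num) else 0
        # convolution identity: den[0]*c[k] + den[1]*c[k-1] + ... = num[k]
        s = sum(den[j] * c[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        c.append(nk - s)
    return c

# The recurrence f_n = 7*f_{n-1} + 2*f_{n-2} with f_0 = 1, f_1 = 7
# corresponds to the rational generating function 1/(1 - 7z - 2z^2).
f = [1, 7]
for _ in range(8):
    f.append(7 * f[-1] + 2 * f[-2])

assert rational_series([1], [1, -7, -2], 10) == f
```

The same convolution handles any rational function whose denominator starts with 1, which is precisely the setting in which the coefficients satisfy a constant-coefficient linear recurrence.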
And if that vector space is finite-dimensional, then you call your power series D-finite, or differentially finite. So for instance, e^z is D-finite, because all its derivatives are e^z, so that vector space has dimension one. Or cos z, because every other derivative is a cosine and every other derivative is a sine, so the vector space has dimension two. Or ln(1+z). None of these are algebraic, as can be proved, but they are all D-finite. On the other hand, it can be proved, it's not completely trivial but not difficult either, that algebraic power series are D-finite. It's in the Stanley book, Enumerative Combinatorics, volume two, chapter six. So the class of D-finite power series properly contains that of algebraic power series. Now, remember I said that for algebraic power series it wasn't easy to say what is true for the coefficients; for D-finite power series it is. A sequence, so now it's a sequence of numbers, not the power series, is called polynomially recursive, or P-recursive, if it satisfies a linear recurrence relation, this time with polynomial coefficients, okay? For rational functions, the coefficients in the recurrence had to be constants; here they are allowed to be polynomials in n. And a power series is D-finite if and only if its coefficient sequence is P-recursive. So there is at least this aspect in which D-finite power series are nicer than algebraic ones. Now, sometimes people ask if I can give an example of a simple problem whose solution is a sequence which is not polynomially recursive, or whose generating function is not D-finite. And my answer is the Bell numbers, the number of all partitions of an n-element set, right? Their exponential generating function is e^(e^x - 1). And the Bell numbers do satisfy a recurrence relation, but not one with a constant number of terms; the number of terms keeps growing.
Namely, B_{n+1} can be obtained as the sum of (n choose i) B_i for i from 0 to n. And that's not allowed here: in a P-recursive sequence, the number of terms k has to be fixed. Okay, so these are the three classes of power series I wanted to talk about, in order to put the rest of my talk into context. So now back to my main topic of non-rational generating functions. Let's say that you have an enumeration problem that you can't solve, and you wonder: maybe this is difficult, and maybe I could prove that it's difficult by showing that at least the generating function is not very nice. I mean, it would be great if you could prove that the generating function is not even D-finite, but that is very difficult, because there are very few methods to prove that something is not D-finite; maybe I will talk about that later. So how do you prove that a generating function is not rational? How do you prove it if you don't know the generating function itself? You don't know the function, but maybe you can know enough about it to prove that it's not rational. What we will use is what Flajolet and Sedgewick call the first principle of coefficient asymptotics. Okay, so this is important: if you have a combinatorial generating function F, meaning that all the coefficients are non-negative real numbers, then the exponential growth rate of the coefficients is equal to 1/r, where r is the radius of convergence of F around zero. You kind of learn this in calculus, but this is what we will need today. For combinatorial generating functions there are sometimes nice properties that are not true in calculus in general. Okay, so the notion that you might not know is supercriticality. As far as I know, this was first systematically developed in the Flajolet–Sedgewick book; maybe it appeared in some papers before, but there the theory is developed. So supercriticality is a property of a relation between two generating functions, okay?
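Before moving on, the Bell-number recurrence just mentioned is easy to check numerically; note how step m of the recurrence uses m+1 terms, so it is not a fixed-length recurrence of the P-recursive kind. A small Python sketch (helper name mine):

```python
from math import comb

def bell_numbers(n):
    """B_0 .. B_n via B_{m+1} = sum_{i=0}^{m} C(m, i) * B_i."""
    b = [1]
    for m in range(n):
        # the recurrence reaches all the way back to B_0: m+1 terms, not a fixed k
        b.append(sum(comb(m, i) * b[i] for i in range(m + 1)))
    return b

assert bell_numbers(7) == [1, 1, 2, 5, 15, 52, 203, 877]
```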
So let F and G be two power series with non-negative real coefficients that are analytic at zero, with G(0) = 0, and suppose F and G are related by F = 1/(1 - G). I will mention soon that this relation is extremely frequent in combinatorics; it happens all the time. Then the relation is called supercritical if, when you put the radius of convergence r_G of G into G itself, what you get is bigger than one: G(r_G) > 1, okay? So let's see what this means. It has an analytic meaning and a combinatorial meaning. The analytic meaning: if G(r_G) > 1, that means that somewhere earlier G was equal to one, right? Because G is a combinatorial generating function, so it is monotone increasing in positive real z. So if G(r_G) is larger than one, then there is a smaller number alpha where G(alpha) = 1. Hence the denominator 1 - G is zero at alpha, hence F has a singularity there. So F has a singularity before G does. In other words, the radius of convergence of F is smaller than the radius of convergence of G, and therefore the exponential growth rate of the coefficients of F is larger than that of G, right? Because I said that the exponential growth rate is the reciprocal of the radius of convergence. So if F and G are connected by this very frequent relation, and the relation is supercritical, then the coefficients of F have a strictly larger exponential growth rate than those of G. Is this clear? Everybody's on mute. Could I please get a verbal yes? Yes. That's good. Thank you. Okay, so now what does this mean combinatorially? Combinatorially, and I said this happens all the time, F = 1/(1 - G) means that F = 1 + G + G^2 + G^3 + ..., the sum of all powers of G.
So F counts structures that can have several components, each component counted by G; the order of the components matters, but the components cannot intermingle, they have to sit like intervals: you cut up an interval into smaller intervals. Okay, so if F and G count combinatorial structures so that an F-structure is put together from components which are G-structures, then if the relation is supercritical, F grows faster than G, and the exponential growth rate of F is strictly larger than the exponential growth rate of G. Here are some examples, and then in two minutes I will start my main topic. Okay, so for instance, let F be the generating function for the number of ways to tile a 1 x n strip with red and blue tiles of size 1 x 1 and three kinds of tiles of size 1 x 2. Then G(z) = 2z + 3z^2, a polynomial, so its radius of convergence is infinity. So of course G at infinity is bigger than one, and therefore the relation is supercritical. And indeed F(z) = 1/(1 - 2z - 3z^2), and you can compute that its coefficients have exponential growth rate 3, certainly not zero. In G the exponential growth rate of the coefficients is zero, because there are no nonzero coefficients beyond the second one, but the coefficients of F grow like 3^n. Another example: let f_n be the number of ways to split a semester of n days into non-empty segments and then choose one day for a class social in each segment. Then G(z) = z + 2z^2 + 3z^3 + ... = z/(1-z)^2 describes what you can do in one segment: if your segment has m days, then you can make m choices, so this is your generating function for G. Okay, clearly G has exponential growth rate one, but F = 1/(1 - z/(1-z)^2) has exponential growth rate (3 + sqrt 5)/2, about 2.618. Let me reassure you: this has to do with the Fibonacci numbers.
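The tiling example can be verified by brute recursion; conditioning on the last tile gives a constant-coefficient recurrence, and the coefficient ratios converge to the claimed growth rate 3, far above G's growth rate of 0. A quick Python sketch (my code, not from the talk):

```python
def tilings(n):
    """Tilings of a 1 x n strip with 2 kinds of 1x1 tiles and 3 kinds of 1x2 tiles.
    Conditioning on the last tile gives f_n = 2*f_{n-1} + 3*f_{n-2}."""
    f = [1, 2]  # f_0 = 1 (empty tiling), f_1 = 2 (red or blue unit tile)
    for _ in range(2, n + 1):
        f.append(2 * f[-1] + 3 * f[-2])
    return f

t = tilings(30)
assert t[2] == 7                       # 2*2 colorings of two 1x1 tiles, plus 3 dominoes
assert abs(t[30] / t[29] - 3) < 1e-6   # exponential growth rate 3: supercritical
```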
Okay, but now let me give you a famous example which is not supercritical, and we are getting close to the real subject of the talk. Well, that's coming too, but a theorem comes first: if G is rational, then this relation is supercritical. Okay, so you can see where I'm going with this. In order to prove that something is not rational, it will be enough to prove that this relation is not supercritical, and that is what we will do later today. Let's prove this theorem first. There are two cases: either G is a polynomial or it's not. If G is a polynomial, then it converges everywhere; its radius of convergence is infinity, so G at infinity is of course bigger than one. That's the condition for supercriticality, so the relation is supercritical. Otherwise, if G is a rational function that is not a polynomial, it has at least one singularity, and all its singularities are poles. What do we know about poles from complex analysis? That around a pole, and this is important, this will be the punchline later, the function goes to infinity, which is not true for other types of singularities. So G(r_G) is infinity, which is again bigger than one, completing the proof that this relation has to be supercritical if G is rational. In fact, if one of F and G is rational, then the other is necessarily rational too, okay? So therefore, as I said, if you want to prove that some generating function about which you don't know much is not rational, it's enough to prove that this relation is not supercritical. And one way to do that is to show that F and G have the same radius of convergence, or that their coefficients have the same exponential order, right? Because we argued before that if the relation is supercritical, then F grows strictly faster than G.
So if their coefficients have the same growth rate, then the relation cannot be supercritical, so the functions cannot be rational. Okay, that could be my punchline. A famous example of a non-supercritical relation is the Catalan numbers, where F(z) = (1 - sqrt(1 - 4z))/(2z) is the famous generating function, which can be obtained as F = 1/(1 - G), where G happens to be z F(z). And because in this case G is just z times F(z), wherever F has a singularity, G has a singularity. So they both have their dominant singularity at 1/4. So the relation is not supercritical, because the coefficients of F and G have the same exponential order and the two functions have the same dominant singularity, whereas if the relation were supercritical, F would have a smaller dominant singularity. All right. Okay, so that was the intro; if you were sleeping, this is a very good time to wake up. Now I start applying what I said. Permutation patterns will be my main example, because I work on them a lot. They are widely studied, and their generating functions are mostly unknown: there are just a very, very few sporadic cases when you actually know what the generating function is. And there is a long-standing, difficult conjecture about their generating functions, which I will not solve, but I will make a step towards it. Okay, so the next two minutes are maybe the most important for the rest of the talk; I'm just trying to get to a figure here which is worth ten thousand words. So if you came with the idea that you would pay attention for five minutes, then please make it these five minutes, because otherwise the rest will be unenjoyable. We say that a long permutation P contains a smaller permutation Q as a pattern if you can choose entries in the big one, which don't have to be in consecutive positions, that relate to each other the same way as the entries of the small one.
So 5, 8, 6 forms the same pattern as 1, 3, 2, because the first entry is the smallest, the second entry is the largest, and the third entry is the middle one in size. In this case I say that the long permutation P contains Q. For short patterns, that is, if Q is of length three, it is known, it's not trivial, but it's an advanced undergraduate or beginning graduate exercise, that for any one of the six patterns Q of length three, the number of permutations of length n avoiding Q is the Catalan number, with the generating function we just saw, which is algebraic. After that, once Q is longer than three, it's no longer true that the number of permutations avoiding Q doesn't depend on Q itself. So it's no longer true that for Q any pattern of length four, say, A_n(Q) will always be the same. It's very far from being true; otherwise there wouldn't be this field of pattern-avoiding permutations. So here is one example, my result from 1996. The story of this is that I had already handed in my thesis and then I found this result, so I withdrew the thesis so I could put this in; I was so happy about it. If Q is the pattern 1342, then the generating function is known, and it is again algebraic, not rational but algebraic. Of the 24 patterns of length four, there are only three inequivalent ones, and this is one of them; it's the biggest equivalence class and the only one for which the generating function is known, in the sense that I can describe it like that. Okay, I said that the other exception, when something is known about the generating functions of pattern-avoiding permutations, is when the pattern is monotone; then it is known that the generating function is differentially finite. However, if the length k is larger than two and even, then it's not algebraic.
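The containment definition is easy to make executable; here is a brute-force Python sketch (function names are mine, not from the talk), which also confirms the Catalan count for a length-three pattern on small cases:

```python
from itertools import combinations, permutations

def contains(p, q):
    """True if permutation p contains pattern q: some (not necessarily
    consecutive) entries of p relate to each other the same way as q does."""
    k = len(q)
    shape = tuple(sorted(range(k), key=lambda i: q[i]))
    for pos in combinations(range(len(p)), k):
        chosen = [p[i] for i in pos]
        if tuple(sorted(range(k), key=lambda i: chosen[i])) == shape:
            return True
    return False

assert contains((5, 8, 6), (1, 3, 2))        # 5 8 6 forms the pattern 132
assert not contains((3, 2, 1), (1, 2, 3))

def count_avoiders(n, q):
    return sum(1 for p in permutations(range(1, n + 1)) if not contains(p, q))

# Avoiders of a length-3 pattern are counted by the Catalan numbers 1, 2, 5, 14, 42.
assert [count_avoiders(n, (1, 3, 2)) for n in range(1, 6)] == [1, 2, 5, 14, 42]
```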
So in particular, for the monotone pattern of length four, 1234, the generating function is not algebraic, and that's surprising, right? Because I could tell you the generating function of 1342. So the monotone pattern is not the nicest, because 1342 has an algebraic generating function and the monotone pattern doesn't. Okay, how do we prove that those generating functions are not algebraic? Because we have a good estimate of the size of their coefficients, and there is a theorem that says that if a power series is algebraic and the growth rate of its coefficients is alpha^n / n^k, then that k cannot be a positive integer; typically it is an integer plus a half. And k would be a positive integer precisely when the length of the monotone pattern is even and more than two. So those functions don't have the right growth rate, and that's why they are not algebraic. And that's about the only way that we can tell that something is not algebraic, as far as pattern-avoiding permutations are concerned; of course, there are other results elsewhere. Okay, so what do we know about the size of pattern-avoiding classes? There's a celebrated theorem. The Stanley–Wilf conjecture was open for 23 years, and then Adam Marcus and Gábor Tardos solved it, without actually knowing at first that they were solving it. They proved that this number, the number of permutations of length n that avoid a given pattern Q, is less than a constant to the n. They proved a fairly hard question from extremal matrix theory, and that happened to imply this result. This result was very difficult to prove because, if you think about it, it is a very, very ambitious statement, right? There are n factorial permutations of length n, and only a constant to the n of them avoid Q, so very, very few. Most probabilistic methods are not fine enough to prove that something is that small. Nobody knows what the best value of that constant is. There were conjectures, and then they were disproved.
The conjecture said that the value of the best constant should be a polynomial function of the length of the pattern. But that was disproved in 2013 by Jacob Fox, who showed that there exist patterns for which the constant is an exponential function of the pattern length. So now we don't even know what to conjecture. Yes, there are explicit formulas for the avoiders of these two patterns, but the formula for the monotone one doesn't translate into a nice generating function. Okay, all right. And let me tell you this, because it will be another punchline. Remember I said before that for patterns of length four there are really only three non-equivalent cases: 1234, 1342, and 1324. For the first two, we have an exact formula for these numbers. However, 1324 is wickedly difficult. We don't even know its exponential growth rate: we only know it's bigger than 10.8 and smaller than 13.5. So this is just a really, really difficult question. Doron Zeilberger says that maybe even God doesn't know the value of A_1000(1324). Also, people ran all sorts of gigantic computations, which showed that if the generating function for this pattern is D-finite, then it has to be D-finite of some horribly high order, like more than 500. Remember, D-finite means that the derivatives span a finite-dimensional vector space, so that vector space would have a huge dimension. Okay. So there was the Noonan–Zeilberger conjecture from 1996, and I will talk about the single-pattern case. So again, recall that A_n(Q) is the number of permutations of length n that avoid Q. And the conjecture, in its single-pattern version, was that for all Q, this sequence is polynomially recursive, or equivalently, that its generating function is D-finite.
Note that if, instead of avoiding one pattern, you want to avoid multiple patterns, then the conjecture was disproved by Igor Pak and Scott Garrabrant maybe four years ago or so. Their counterexamples are huge, like 500 patterns of length 80 or something like that. But the single-pattern case, when you want to avoid one pattern, is still open. So again, the conjecture is, or was, that the generating function for the number of permutations avoiding a single pattern is D-finite, differentially finite, or equivalently, that the numbers form a polynomially recursive sequence. And what I will do in this talk is this: no, I can't decide the question of whether these generating functions are D-finite or not, but I can prove that for the overwhelming majority of patterns, the generating function is not rational. Okay, for all patterns except about a 1/n^2 fraction of them. And it will also be funny to see what the first permutation is for which I cannot decide the question. Okay, so we will prove in two different ways that these generating functions are not rational. In fact, I will prove a little bit more: I will also prove that the dominant singularity is not a pole. Technically, a function could be non-rational while the non-pole singularities arrive somewhere later, but no, I will prove that the dominant singularity itself is not a pole, hence the function cannot be rational. All right, so here is a simple definition. We say that a permutation is skew indecomposable if you cannot cut it into two parts so that everything before the cut is larger than everything after the cut. For instance, 3142 is skew indecomposable, but 346512 is not, because you can cut it, say, after the 5, so that everything before the cut is bigger than everything after the cut, okay?
If P is not skew indecomposable, then you can cut it, in a unique way, into non-empty skew indecomposable strings, so that in each of them the entries are bigger than in the next one. These I will call the skew blocks, okay? Here are some examples. This one has four skew blocks, right? In each block the entries are bigger than in the next block, but each block itself is skew indecomposable; it can't be cut any further, okay? So here is my first theorem, and later I will make it a little bit stronger, but this is the main push. Let Q be a skew indecomposable pattern that does not end in its largest entry. Unfortunately, I have to say that; you will see that it's important, and otherwise the theorem is not even true. Well, I will talk about that later. So if Q is like that, skew indecomposable and not ending in its largest entry, then its generating function is not rational. By the way, almost all permutations of a given length are skew indecomposable and don't end in their largest entry, so numerically these are not big restrictions. So if this happens, then A_Q is not rational. The proof is remarkably simple for a theorem of this strength, and then it's even more annoying that the remaining few cases we don't know, and we haven't known for almost two years now. Lots of people wanted to do it after my paper came out, and it's still unknown, so it's probably difficult. So, clearly, because Q is skew indecomposable, P avoids Q if and only if each of the skew blocks of P avoids Q, right? Because that's the point: if Q is skew indecomposable, a copy of Q cannot start in one block and continue in another, because that would make Q skew decomposable, okay? So this means that you can build up the generating function A_Q like this, right?
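The skew-block decomposition can be computed directly: a cut after position i is valid exactly when every entry before it is larger than every entry after it, and cutting at every valid position yields the blocks. A short Python sketch (my helper names, not from the talk):

```python
def skew_blocks(p):
    """Cut permutation p (a tuple) at every position where all earlier entries
    are larger than all later entries; the pieces are the skew blocks."""
    n = len(p)
    suffix_max = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_max[i] = max(p[i], suffix_max[i + 1])
    blocks, start, prefix_min = [], 0, n + 1
    for i in range(n):
        prefix_min = min(prefix_min, p[i])
        if i == n - 1 or prefix_min > suffix_max[i + 1]:
            blocks.append(p[start : i + 1])
            start, prefix_min = i + 1, n + 1
    return blocks

assert skew_blocks((3, 1, 4, 2)) == [(3, 1, 4, 2)]              # skew indecomposable
assert skew_blocks((3, 4, 6, 5, 1, 2)) == [(3, 4, 6, 5), (1, 2)]
assert len(skew_blocks((6, 5, 7, 3, 4, 2, 1))) == 4             # four skew blocks
```

Resetting the prefix minimum at each cut is safe because every entry of an earlier block exceeds every later entry, so the minimum over the whole prefix equals the minimum within the current block.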
A Q-avoiding permutation can be put together from skew indecomposable Q-avoiding permutations placed like this. Hence the generating function identity A_Q(z) = 1/(1 - A_{Q,1}(z)) holds, where A_{Q,1}(z) counts the skew indecomposable Q-avoiders. You see where I'm going with this, right? That's exactly what will lead to the question of supercriticality. So remember that we said that if A_Q is rational, then this relation must be supercritical, so these two sequences could not have the same exponential growth rate. But I will show that they do, which proves that the relation is not supercritical, and so the function cannot be rational. And this is quite easy to do, actually. Let P be a permutation of length n that avoids Q. Now I affix a new entry n+1 at the end, okay? So let's go back to my picture. Here is P; each of these blocks avoids Q, so the whole P avoids Q. Now I put a new big entry n+1 here, at the end. That new permutation of length n+1 still avoids Q, because the last entry n+1 is the maximal entry, so it could only play the role of the maximal entry in any prospective copy of Q. But remember, in my theorem I said that Q does not end in its largest entry, okay? So, in other words, the new permutation of length n+1 that I created still avoids Q, hence this chain of inequalities holds: A_{n,1}(Q) <= A_n(Q) <= A_{n+1,1}(Q). The first inequality is trivial; again, the subscript 1 means skew indecomposable, having one skew block, okay? And the second holds because the new permutation that I created by putting n+1 at the end is of course skew indecomposable, since it ends in n+1, and n+1 must always be in the first skew block. So, in other words, the exponential growth rate of the sequence on the right is not smaller than the exponential growth rate of the sequence in the middle, and therefore these sequences have the same exponential order.
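The chain A_{n,1}(Q) <= A_n(Q) <= A_{n+1,1}(Q) can be checked by brute force for a small qualifying pattern, say Q = 132, which is skew indecomposable and does not end in its largest entry. A Python sketch (all helper names are mine):

```python
from itertools import combinations, permutations

def contains(p, q):
    k = len(q)
    shape = tuple(sorted(range(k), key=lambda i: q[i]))
    return any(
        tuple(sorted(range(k), key=lambda i: [p[j] for j in pos][i])) == shape
        for pos in combinations(range(len(p)), k)
    )

def skew_indecomposable(p):
    # no position where every earlier entry exceeds every later entry
    return not any(min(p[: i + 1]) > max(p[i + 1 :]) for i in range(len(p) - 1))

q = (1, 3, 2)  # skew indecomposable, does not end in its largest entry

def a_and_a1(n):
    """(A_n(q), A_{n,1}(q)): all q-avoiders, and the skew indecomposable ones."""
    avoiders = [p for p in permutations(range(1, n + 1)) if not contains(p, q)]
    return len(avoiders), sum(1 for p in avoiders if skew_indecomposable(p))

# Appending n+1 turns a q-avoider of length n into a skew indecomposable
# q-avoider of length n+1, giving A_{n,1} <= A_n <= A_{n+1,1}.
for n in range(1, 6):
    a_n, a1_n = a_and_a1(n)
    _, a1_next = a_and_a1(n + 1)
    assert a1_n <= a_n <= a1_next
```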
The sequences on the left end and the right end are the same sequence, just shifted by one, so they certainly have the same exponential order, and then by the sandwich, the middle one does too. But if A_Q were rational, the relation would be supercritical and the middle sequence would grow strictly faster. This contradiction completes the proof: the generating function is not rational, as long as Q is skew indecomposable and does not end in its largest entry. Now, if Q is not skew indecomposable, so if that's the problem, then you turn it backwards, right? You take its reverse. The reverse is automatically skew indecomposable, and clearly the numbers of permutations avoiding Q and avoiding Q's reverse are the same. So that takes care of the indecomposability problem. Now, how about the requirement that Q must not end with its largest entry, right? There you can do some trickery. If you have a pattern that ends in its largest entry, then you can take its reverse complement: you reverse the pattern and then subtract each entry from k+1, where k is its length, right? And then you play with this a little bit, and you see that the only case where you cannot defeat the conditions by taking reverses and complements is the following. Your theorem will be true for all patterns which are skew indecomposable such that at least one of the following conditions holds: the pattern doesn't start with the entry 1; it doesn't end with the entry k; or, if both of those are bad, like for the monotone pattern 123456, it is Wilf-equivalent, and I will tell you what that is, to a skew indecomposable pattern that satisfies at least one of the first two conditions. So, two patterns are called Wilf-equivalent if they are avoided by the same number of permutations of length n, for every n.
And there are theorems which say, for instance, that the monotone pattern 1 2 3 ... k is Wilf-equivalent to the pattern where you flip the first two entries, 2 1 3 ... k, okay? And that second pattern is already fine, because it doesn't start with 1. So therefore my non-rationality theorem holds for the monotone pattern, because it holds for the pattern 2, 1, and then monotone increasing up to k. Okay, and as I said, if your pattern is not skew indecomposable, then you can just reverse it, so that will be fine too. So you only have a problem when your pattern starts with 1 and ends with its maximal entry and is not Wilf-equivalent to anything better. And unfortunately, or I don't know whether unfortunately or not, you should expect a problem there, because the theorem is false for the trivial pattern 12, right? If a permutation avoids the pattern 12, that means it is the monotone decreasing permutation. There is exactly one such permutation for every n, so the generating function is the geometric series 1/(1 - z), and so it's obviously rational. But is this the only exception? Probably, but we can't prove it. So what is the first pattern for which I cannot decide the question? Its first entry has to be 1, its last entry has to be its maximal entry k, and it cannot be monotone, because for those I can decide. So the smallest candidate is 1324. See, again that wicked pattern, the one whose size we can't say anything about. But it is also the smallest one for which we cannot decide rationality, even though people strongly believe that the generating function is not even D-finite, let alone algebraic, let alone rational; but we cannot even prove non-rationality. So that innocent-looking condition, that it ends with its largest entry and starts with its smallest, we couldn't beat it, and not just me, many people tried.
Let me show you a slightly different proof, because it has its own advantages and it's not very difficult from here. Let's denote by A_{n,2} the number of permutations of length n that avoid q and have exactly two skew blocks, and by A_{n,1} those which are skew-indecomposable, so they have one skew block, okay? Now, if the generating function A_{q,1}(z) is rational, then its dominant singularity, the singularity closest to zero, is a pole, okay? And if it's a pole, then as z approaches that singularity, the function diverges to infinity. So if A_{q,1}, which is monotone increasing on the positive axis because all the coefficients are non-negative, diverges to infinity at that pole, then somewhere before the pole there's a point z_0 where its value is bigger than one, right? Sometime before you reach infinity, you get bigger than one while still being finite, still inside the radius of convergence. And there a contradiction occurs. What do we know about numbers that are bigger than one? If you square them, they go up: their square is bigger than themselves. So take the square of your generating function for skew-indecomposable q-avoiding permutations. That gives you the generating function for q-avoiding permutations with exactly two skew blocks, so A_{q,1}(z)^2 = A_{q,2}(z). That would mean A_{q,2}(z_0) > A_{q,1}(z_0): if you put the same z_0 into the square, you get a bigger number. And that's a contradiction, because these are both power series with non-negative coefficients, and, as I said, A_{n,1} is always at least as large as A_{n,2}. Again, that can be proved very similarly to what I did before, by sticking a big entry at the end. So we get a contradiction, because the coefficient inequalities show that A_{q,1}(z_0) is at least as large as A_{q,2}(z_0), whereas the squaring argument says the opposite.
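Two ingredients of this proof can be checked by brute force for a small skew-indecomposable pattern such as 132 (which also does not end in its largest entry): the coefficient identity behind A_{q,2}(z) = A_{q,1}(z)^2, and the inequality A_{n,1} >= A_{n,2}. A sketch under those assumptions, with my own helper names:

```python
from itertools import combinations, permutations

def contains(perm, patt):
    """True if perm contains patt as a classical pattern."""
    k = len(patt)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        order = sorted(sub)
        if tuple(order.index(x) + 1 for x in sub) == tuple(patt):
            return True
    return False

def skew_blocks(p):
    """Blocks in the finest skew decomposition: a cut is possible after
    position i when every entry before it exceeds every entry after it."""
    return 1 + sum(1 for i in range(1, len(p))
                   if min(p[:i]) > max(p[i:]))

def A(n, k, patt):
    """A_{n,k}: patt-avoiding permutations of length n with k skew blocks."""
    return sum(1 for p in permutations(range(1, n + 1))
               if skew_blocks(p) == k and not contains(p, patt))

q = (1, 3, 2)          # a skew-indecomposable pattern
for n in range(2, 7):
    # coefficient identity behind A_{q,2}(z) = A_{q,1}(z)^2
    conv = sum(A(i, 1, q) * A(n - i, 1, q) for i in range(1, n))
    assert A(n, 2, q) == conv
    # at least as many one-block avoiders as two-block avoiders
    assert A(n, 1, q) >= A(n, 2, q)
```

For 132 the inequality in fact holds with equality for every n >= 2, which is a nice borderline illustration of why the argument only barely squeezes out a contradiction.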
So it cannot happen both that you have more objects with one component than with two components and that you have a rational generating function. Both of those things cannot hold: if the generating function were rational, then there would have to be more objects with two components than with one, okay? Now, how could we prove the missing cases? I couldn't, but here are some ideas that might work; I just couldn't make them work. Here is an interesting fact about supercritical structures. Again, just think of supercritical structures as one kind of object being built out of another kind of object as components. It is known, it's in the Flajolet–Sedgewick book, for instance, that in that case the expected number of components is linear in n, okay? Translating this to the language of pattern-avoiding permutations: if the generating function were rational, then on average there would be a linear number of skew-indecomposable components. And that sounds really, really false, because in the cases where we know that expectation, it's not only not linear, it's bounded by a constant, okay? It would even be enough to prove that the expected number of components is, say, at most n to the 0.99; that would be enough to show that the generating function is not rational. But we cannot do that. This connects to what I said before, that I proved that the dominant singularity is not a pole, right? I only argued about the dominant singularity, and I proved it isn't a pole, because if it were, then I could find a place where the generating function is bigger than one. In fact, I proved even a little bit more than that: I proved that at that singularity, the value of the generating function has to be less than one. Here is another approach, which is very annoying, because I couldn't make it work and it would have been a nice example.
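The "bounded expected number of components" phenomenon can be seen empirically in a known case. For 132-avoiding permutations (counted by the Catalan numbers), the average number of skew blocks stays bounded, rather than growing linearly as a supercritical (rational) scheme would force. A brute-force sketch, with my own function names:

```python
from itertools import combinations, permutations

def contains(perm, patt):
    """True if perm contains patt as a classical pattern."""
    k = len(patt)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        order = sorted(sub)
        if tuple(order.index(x) + 1 for x in sub) == tuple(patt):
            return True
    return False

def skew_blocks(p):
    """Blocks in the finest skew decomposition of p."""
    return 1 + sum(1 for i in range(1, len(p))
                   if min(p[:i]) > max(p[i:]))

# average number of skew components of a random 132-avoiding permutation
q = (1, 3, 2)
means = []
for n in range(2, 8):
    avs = [p for p in permutations(range(1, n + 1)) if not contains(p, q)]
    means.append(sum(skew_blocks(p) for p in avs) / len(avs))

# the averages stay bounded (below 3 in this range) instead of
# growing linearly in n
assert all(m < 3 for m in means)
```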
So note that our proof works for the pattern one, two, four, five, three, because it doesn't end in its largest entry, but it doesn't work for this other one, because it starts with its smallest entry and ends with its largest. Now, numerical evidence suggests that more permutations avoid the first one, for which the theorem works, than the second one, for which it doesn't. Unfortunately, I can't prove that, but let's say that I could. Then I could prove, and this is not obvious, that the two generating functions would have the same radius of convergence. And then I would plug that common radius of convergence into both generating functions, and if indeed the coefficients on the left are at least as large as the coefficients on the right, then the same inequality would follow for the values. But the left-hand side is finite, because for that pattern my theorem works, right? And remember I said that when you plug the dominant singularity into a generating function for which the theorem works, you get a number which is at most one. So the right-hand side would also be a number which is at most one, so that radius could not be a pole of the generating function A_{q,2}, right? So that would work nicely. Finally, and this is the last thing, let me just mention that in his book Enumerative Combinatorics, Volume 2, Richard Stanley lists six general families of combinatorial objects, very wide ones, whose counting sequences always have algebraic generating functions under some very basic assumptions. These are lattice paths, polygon dissections, certain trees, that sort of thing. And because that is a very wide class of algebraic generating functions, you could think that occasionally one of them will be rational, because an algebraic generating function typically contains a big square root, or some other root.
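The archetypal member of those algebraic families is the class of Dyck paths, counted by the Catalan numbers, whose generating function satisfies the algebraic equation C(z) = 1 + z*C(z)^2 and so equals (1 - sqrt(1 - 4z))/(2z): algebraic, but not rational, because a rational function cannot have a square-root singularity. A brute-force check of the equation (my own code, not from the talk):

```python
from itertools import product

def dyck_count(n):
    """Brute-force count of Dyck paths of semilength n: +1/-1 step
    sequences from height 0 back to 0 that never dip below 0."""
    total = 0
    for steps in product((1, -1), repeat=2 * n):
        h = 0
        for s in steps:
            h += s
            if h < 0:
                break
        else:
            if h == 0:
                total += 1
    return total

# Catalan numbers read off coefficient by coefficient from the
# algebraic equation C(z) = 1 + z*C(z)^2
cat = [1]
for n in range(1, 9):
    cat.append(sum(cat[i] * cat[n - 1 - i] for i in range(n)))

# the lattice-path counts satisfy the algebraic recurrence exactly
for n in range(9):
    assert dyck_count(n) == cat[n]
```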
And occasionally, if your class is wide enough, it could happen that whatever sits under a square root sign turns out to be a perfect square, and then your generating function could, by accident, turn out to be rational. But with this method, I can prove that that's not the case. We are short on time, so I will not go through the details, but by the same kind of argument, I take a lattice path and turn it into another lattice path which is indecomposable because it never touches the baseline. So none of those generating functions, which are the go-to examples for algebraic generating functions, none of those are rational. So this is all I wanted to say. I hope you found it interesting, and I also hope that at some point life becomes normal and I can visit South Carolina again. It doesn't have to be in a presidential election year. Thank you.

If we could all thank our speaker in some way, then we'll open it up for questions. Okay, do we have any questions for our speaker? Oh, sorry, I haven't looked at the chat. Oh, okay, just three thank-yous, okay. So, okay, no questions. I'm certainly looking forward to your next physical visit to Columbia.

Thanks. I would like to ask a question. There are some hundred-year-old problems on which really nothing has happened, like the stamp folding problem. You have a horizontal strip of stamps with perforations between them, and the question is how many ways you can fold this strip along the perforations into a single stamp-sized rectangle. And it has an even nastier version, the map folding problem, which everybody knows who has ever tried to refold a map. Is there anything known about whether those problems are hard and have no nice generating functions?

I don't know, thank you. I would have to look at it, because I think there is more in this method than what it has been used for. So I will look into that. The stamp folding problem. Stamp folding and map folding.
Do we have any other questions? Okay. In that case, thanks again. Take care and have a good weekend, everybody. Bye. Bye. Bye.