Alright, thanks everybody for making it out today, and thanks Grant for agreeing to give a talk. Today we have our own graduate student Grant Fix, who will be talking about recurrence ranks of sequences. Go ahead and take us away, Grant. Thanks for the invitation, Drew, and thanks everybody for tuning in and giving me an hour of your time to listen to what Josh and I have been working on. I'd just like to take a quick minute to thank Josh for the opportunity to work with him and for his endless patience and encouragement through this entire process; I look forward to what we can continue to do in the years to come. So the title of the presentation is "Recurrence ranks of sequences," and what we're going to do is develop a couple of definitions for different kinds of ranks as they pertain to sequences. The context for this work: Josh does a lot of work in a lot of different areas, one of them being hypergraphs and spectral hypergraph theory, and that was the motivation for these ideas. Characteristic polynomials are something we can look at in terms of graphs: we can look at the adjacency matrix of a graph, and like any matrix, we can look at its characteristic polynomial, so we call that the characteristic polynomial of the graph. And a lot is known about these characteristic polynomials for graphs, because there are a lot of linear algebra tools you can apply to gain information about a graph from its characteristic polynomial. For hypergraphs, not nearly as much is known about these characteristic polynomials, so here are a couple of examples of things that are known. For example, a 3-uniform hyperedge: to a simple guy like me, it looks like a triangle.
So, three vertices; 3-uniform just means that all of the edges in the graph are 3-subsets of the vertices. In standard graphs we have a vertex set and an edge set, and every edge is a 2-subset of the vertices. In hypergraphs we relax that condition, and any subset of the vertex set can be an edge, but the most convenient hypergraphs have all edges the same size, connecting the same number of vertices. When every edge has the same cardinality, that is, connects the same number of vertices, we call the hypergraph uniform. So a 3-uniform hyperedge is three vertices with one edge connecting them; it may be drawn like a triangle, and you might think of it as a face, for example. It has a characteristic polynomial given by this: if our variable is lambda, then the characteristic polynomial of that graph is λ^3 (λ^3 − 1)^3. Right away you notice that the degree of this polynomial is 12: there's the λ^3, and then the factor (λ^3 − 1)^3 contributes nine to the degree. For graphs by themselves, the degree of the characteristic polynomial is just the number of vertices, the order of the graph we're working in, whereas here, right away, we see that with hypergraphs the degree of the characteristic polynomial is typically much, much larger than the number of vertices. Now, a k-uniform hyperedge in general: again, just trying to picture this if you're not familiar with hypergraphs, it looks to me like a k-cycle if we're thinking in terms of graphs, but really what it is, is k vertices all connected by one edge. Sometimes it's drawn like a k-cycle, and sometimes it's shaded in so it looks like the face of some kind of surface.
This is the characteristic polynomial of the k-uniform hyperedge, and what I'd really like you to notice is this large power of lambda. So zero is a root of this characteristic polynomial, and then it has the kth roots of unity as well, to relatively high multiplicities. There are other results known about characteristic polynomials of hypergraphs. Greg Clark, who also worked under Josh, graduated two years ago and is now at Oxford on a postdoc. He proved results about a few different hypergraphs, some of which I've included here for reference. This one is called the Rowling hypergraph, because J.K. Rowling wrote the Harry Potter books and this is a symbol that comes up in them. He and Josh, working together, found the characteristic polynomial of this hypergraph. Again we can see that the degree is quite large, and the sheer size of this degree is one of the reasons these characteristic polynomials of hypergraphs are so huge; the computation is NP-hard in general. Just thinking about it: the way we compute characteristic polynomials for graphs, we take the determinant of some matrix. The adjacency matrix of a hypergraph is a tensor, or a hypermatrix, however you want to think about it; for 3-uniform, it's a three-dimensional box of numbers, and we have to take a determinant of that. There are notions of determinant for those tensors, and I won't get into exactly what they are, but they're much more complicated to compute compared to the nice methods we have for determinants of ordinary matrices. Now, the reason I include this here is that if I draw in an edge here and an edge there, what I'm drawing now is the Fano plane. It just has two additional edges: this graph has seven vertices, and it's a 3-uniform hypergraph, so all of the edges connect three different vertices in the graph, and I just add these two additional edges.
Then we turn it into the Fano plane. It's actually unknown what the characteristic polynomial of the Fano plane is. So just adding those two extra edges takes us from a hypergraph whose characteristic polynomial we know to a hypergraph whose characteristic polynomial is still unknown. That provides a little bit of context for just how hard these characteristic polynomials are to compute. The particular example I want to consider is this thing called the hummingbird hypergraph, which looks something like this. This hypergraph falls into a very nice, very simple class of hypergraphs, namely linear trees. Linear just means that for any two edges you choose, the intersection of their vertex sets has size at most one: any two edges have at most one vertex in common. Linear hypergraphs play nicely in a number of ways. And trees, in the graph context, are graphs that are connected with no cycles. You can see here that this graph is connected. There are a couple of different notions of what it means to be a tree in the hypergraph context, because there are a couple of different notions of what it means to be a cycle, but under any definition you choose, there's no cycle in this particular hypergraph: no matter what two vertices you pick, there's really only one way to get between them, again noting that each of these triangle-looking things is really just one edge. Anyway, this hypergraph has a characteristic polynomial given by this, and it's quite large. It has a lot of different roots, but the one root that we do see here is zero. The factor lambda has multiplicity almost 21,000. Quite large. Anyway, the thing all three of these examples have in common is that zero was a root of all of the characteristic polynomials we saw.
The k-uniform hyperedge had that factor of lambda raised to a sizeable power, so there's the zero root in that characteristic polynomial. The Rowling hypergraph's characteristic polynomial had this λ^133 factor. And, as we just saw, in the hummingbird hypergraph the multiplicity of zero was almost 21,000. It seems like zero is pretty common as a root for many of these hypergraphs, so what we wanted to do was study this and see if we could figure out some bounds, or maybe exact ways to compute the multiplicity of the zero eigenvalue for a particular hypergraph. One technique that comes up fairly often in trying to prove results for hypergraphs and hypergraph characteristic polynomials is to take some notion from graphs and see if you can extend it to something interesting for hypergraphs. The unfortunate thing in this setting is that there isn't a nice result for graphs that computes, or gives interesting bounds on, the multiplicity of zero for a given graph, so we couldn't really pursue that avenue; we had to come up with something new, we had to try something else. And that's where these sequences come into play, and we'll see how this comes up. Let's suppose we have some hypergraph with characteristic polynomial f(x). What we're interested in at the moment is the multiplicity of the eigenvalue zero as a root of this polynomial. There are a couple of ways to think about that. If you have some polynomial, say x^4 + x^3, you can factor an x^3 out of there, and you're left with x + 1. So one way to think about the multiplicity of the zero root of a given polynomial is as the largest non-negative integer, call it m, so that when you take your original polynomial f(x), you can divide out x to that power m.
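To make that last idea concrete, here is a minimal Python sketch (not from the talk; it assumes a polynomial is stored as an ascending list of coefficients) that reads off the multiplicity of the zero root:

```python
def zero_root_multiplicity(f):
    # f is a coefficient list in ascending order: f[i] is the coefficient of x^i.
    # The multiplicity of the root 0 is the largest m with x^m dividing f(x),
    # i.e. the number of leading zero coefficients.
    m = 0
    while m < len(f) and f[m] == 0:
        m += 1
    return m

# f(x) = x^4 + x^3 = x^3 (x + 1): coefficients of 1, x, x^2, x^3, x^4
print(zero_root_multiplicity([0, 0, 0, 1, 1]))  # 3
```

Dividing out x^m from the coefficient list is then just `f[m:]`, which is the polynomial G that comes up next.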
So that's one way to think about the multiplicity of this zero root. I'm going to let G bar be the reciprocal of G, where G is defined to be essentially f(x) with all of the copies of x divided out, so that the constant term of G is nonzero. But G is still a polynomial, so what's the reciprocal of a polynomial? Here's a formula for it, but essentially the way you can think about it is: take G as a polynomial and just reverse the order of all its coefficients. The leading coefficient and the constant term flip; the linear-term coefficient and the degree-minus-one coefficient flip; you do that the whole way through. More rigorously, you can think about taking the polynomial G, plugging in 1/x, and then multiplying everything by x to the degree of the original polynomial; that's the algebraic way to reverse the order of all the coefficients. However you'd like to think about it, I'm going to let G bar of x be the reciprocal of this polynomial G. And I'm going to look at this difference here. Because G bar is the reciprocal polynomial of G, the constant term G(0) is actually the leading coefficient of G bar; remember that subtraction outside of two logarithms turns into division inside the log, so really what this expression does is take G bar and divide by its leading coefficient. If I combine these in that way, I have a monic polynomial, or rather the log of a monic polynomial, but over C the polynomial factors. So I want to look at the factorization of the polynomial G over the complex numbers; that way I can write G bar as a product of factors of the form (1 − b_i x), where the b_i are the roots of G, because then, again, multiplication inside a log turns into addition outside the log. So I break that up, look at the sum, and that's one way to represent it.
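The coefficient-reversal description can be sketched in a couple of lines of Python (again with the ascending-coefficient-list convention assumed above, and a made-up example polynomial):

```python
def reciprocal(g):
    # g is a coefficient list in ascending order with g[0] != 0;
    # the reciprocal x^deg(g) * g(1/x) just reverses the coefficient list
    return g[::-1]

g = [6, -5, 1]            # g(x) = 6 - 5x + x^2 = (x - 2)(x - 3)
print(reciprocal(g))      # [1, -5, 6], i.e. 1 - 5x + 6x^2 = (1 - 2x)(1 - 3x)
```

Note that g has roots 2 and 3, while its reciprocal has roots 1/2 and 1/3, which is exactly the root-inversion property discussed next.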
The nice thing about reciprocal polynomials is that if I have a root of the original polynomial G, as long as that root is nonzero, then one over that root is a root of G bar, and that's what lets me factor it this way. Note that zero is not a root of G, because we took f and divided out all of the powers of x that the terms had in common, so we don't have to worry about anything here: each factor is 1 minus some nonzero constant times x, there are no extra terms or extra information hiding, and we really do have deg G terms in this sum. For the log, I can look at the Taylor series expansion. You might say, well, what about convergence? Throughout the entire presentation we're going to work with power series concepts, and I'm not terribly concerned with convergence; in all of these cases, I think, you could take a small enough disk around zero and get convergence, but we're really just concerned with formal power series. So if we look at the power series expansion of the log, we get something like this. What we're interested in is this: notice that if we flip the order of summation, the coefficient on x^j for a particular j is a sum of deg G terms, and those terms are complex numbers b_i raised to the jth power, all divided by j. So essentially, aside from that little divisor, the coefficients of the powers of x in this Taylor series are each a sum of some finite number of jth powers of complex numbers. Sometimes we can be given information about a sum that has this kind of form.
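As a sanity check on that coefficient claim, here is a small formal-power-series sketch in Python (my own illustration, using exact rationals and made-up roots b_i = 2, 3): it computes the formal log of the product of the (1 − b_i x) factors by brute-force series expansion and compares each coefficient against the power-sum formula −(Σ_i b_i^j)/j:

```python
from fractions import Fraction

def poly_mul(p, q, n):
    # multiply two ascending coefficient lists, truncating at degree < n
    out = [Fraction(0)] * n
    for i, a in enumerate(p):
        if i >= n:
            break
        for j, b in enumerate(q):
            if i + j >= n:
                break
            out[i + j] += a * b
    return out

def formal_log(g, n):
    # formal series of log g(x) up to degree < n, assuming g(0) == 1,
    # via log(1 + u) = u - u^2/2 + u^3/3 - ...
    u = [Fraction(c) for c in g] + [Fraction(0)] * (n - len(g))
    u[0] -= 1
    out = [Fraction(0)] * n
    term = [Fraction(1)] + [Fraction(0)] * (n - 1)  # u^0
    for k in range(1, n):
        term = poly_mul(term, u, n)                  # u^k
        sign = Fraction((-1) ** (k + 1), k)
        for i in range(n):
            out[i] += sign * term[i]
    return out

roots = [2, 3]   # the b_i (reciprocals of the roots of G)
N = 6
gbar = [Fraction(1)]
for b in roots:                       # G-bar(x) = (1 - 2x)(1 - 3x)
    gbar = poly_mul(gbar, [Fraction(1), Fraction(-b)], N)

lhs = formal_log(gbar, N)
# claimed identity: [x^j] log G-bar = -(sum_i b_i^j) / j for j >= 1
rhs = [Fraction(0)] + [Fraction(-sum(b ** j for b in roots), j) for j in range(1, N)]
print(lhs == rhs)  # True
```

So, up to the sign convention, each coefficient times j really is a finite sum of jth powers, which is the shape the talk exploits.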
What we would like to do is express these coefficients in this form, as a sum of some finite number of jth powers, and then take all that information and go backwards to try to gain information about the original characteristic polynomial of the hypergraph, because we know the degree of the characteristic polynomial: if we have an n-vertex hypergraph of uniformity k, I believe the degree is n(k − 1)^(n−1). So we know the degree of f. If we could gather some information like this about either G or G bar (essentially the same thing, the context just makes them look a little different), and we can find the smallest positive integer so that all of these coefficients can be expressed as a sum of that many jth powers, then we can determine the degree of G and G bar. Comparing that with the degree of f then tells us what this non-negative integer m needs to be, or maybe lets us provide bounds or something like that. That's the game we're going to play: we're going to find the smallest integer r so that those coefficients, times j, can be written as a sum of r different jth powers, and then that tells us something about the degree of G bar, or really G itself. Okay, so why is this useful? There are some situations, like I was mentioning, where we can get information about the polynomial log G, or maybe G itself, and that provides us with extra information. So we'll develop the tools we found that we can use to get this information, and we'll apply them towards the end. Okay, so let's take some complex sequence. I'm going to let C be the sequence; this will be preserved the whole way through, and I'll keep redefining it where we need it.
We'll let C be the sequence that we're looking at. And what does it mean for it to satisfy an rth-order linear recurrence? I'm sure all the faculty have seen this before, but for the grad students, I know from personal experience that it's nice to be able to understand at least a little bit of a talk, so I'll go through these definitions kind of slowly and talk about them a little. If I have some sequence, we know what recurrences are; we define sequences using recursive definitions all the time. What is an rth-order linear recurrence? Well, think about the Fibonacci sequence: if I have two consecutive terms of the sequence and add them together, I get the one after that. So, if I call them F_n for the Fibonacci numbers, F_n + F_{n+1} = F_{n+2}. That's a second-order recurrence, because it takes two of the previous terms to define the next one. If we think about moving all of the terms over to one side, it's still a second-order recurrence, but we really have three terms over there. That's what this expression is: we really have r + 1 terms here, but we've set it equal to zero. So what we're saying is that the terms with indices zero through r − 1 provide enough information to define the rth term of the sequence; but moving everything over to one side, we get r + 1 terms on that side. All of these a_i are complex constants; our sequence is complex, and that's why the constants are complex. Essentially what we have here is a nontrivial linear combination of r + 1 consecutive terms of our sequence equal to zero. That's what we mean by an rth-order linear recurrence, linear just because all of these coefficients are constants. And as with anything else, we can look at the characteristic polynomial associated with a particular recurrence.
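The Fibonacci example just described, with everything moved to one side, can be checked in a few lines (my own sketch; the coefficient vector (1, 1, −1) encodes F_m + F_{m+1} − F_{m+2} = 0):

```python
def fib(n):
    # first n Fibonacci numbers: F0 = 0, F1 = 1, F_{m+2} = F_{m+1} + F_m
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

c = fib(12)
a = [1, 1, -1]  # the recurrence with all terms moved to one side
print(all(sum(a[i] * c[m + i] for i in range(3)) == 0 for m in range(len(c) - 2)))  # True
```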
And essentially all we do is replace c_i, or c sub i, with x^i. If we just think about this for one second: we're going from zero to r, so the first term is a_0 c_0, then a_1 c_1, and so on. Essentially, a_i is going to be the coefficient of x^i in this polynomial, and we call this the characteristic polynomial associated with the particular recurrence that we have. And we call the original recurrence simple if this characteristic polynomial has distinct complex roots. It's a complex polynomial, so we can factor it completely over C, get all the linear factors, and determine all of the roots; if they're all distinct, then we call the original recurrence simple. The reason we're concerned with these simple linear recurrences is that there's a really nice, age-old result in the theory of linear recurrences: if we have a simple rth-order linear recurrence, one whose characteristic polynomial has all distinct roots, then we can express an arbitrary term of our sequence as a sum of the r roots raised to the (n + 1)st power, each times an appropriate complex constant. So the nice thing again about these simple linear recurrences is that we get this closed form for any term of the sequence. So we define the moment rank of a complex sequence, and we denote it in this way: it's the smallest positive integer r so that there exist nonzero alphas and distinct betas giving this form. Essentially what I've done is take the form from the last slide and use it as the definition of moment rank. It might be possible to express the terms of our sequence in this kind of form for different values of r, maybe an r_1 and an r_2 or something like that, but we're going to take the smallest, the minimum possible value, for the top end of this finite sum.
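Here is a small illustration of that closed form (a made-up example, not one from the talk): the sequence c_n = 1·2^(n+1) + 1·3^(n+1) is written exactly in the moment-rank shape with alphas (1, 1) and distinct betas (2, 3), and it satisfies the recurrence whose characteristic polynomial is (x − 2)(x − 3) = 6 − 5x + x²:

```python
# c_n = sum_i alpha_i * beta_i^(n+1) with alpha = (1, 1), beta = (2, 3)
c = [2 ** (n + 1) + 3 ** (n + 1) for n in range(12)]

# characteristic polynomial (x - 2)(x - 3): ascending coefficients (6, -5, 1)
a = [6, -5, 1]
print(all(sum(a[i] * c[m + i] for i in range(3)) == 0 for m in range(10)))  # True
```

Since the two betas are distinct and neither alpha is zero, this sequence has moment rank 2.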
So part of the game we're going to play here is to find equivalent conditions to a sequence having moment rank r, and then we'll look at an example of how this helps us in the case we're considering. What we already talked about highlights for us that a particular complex sequence having moment rank r is equivalent to r being the smallest positive integer so that the sequence satisfies an rth-order simple recurrence. Almost from the definition, we see that those two things are equivalent: certainly moment rank r implies the recurrence business, and it's not too difficult to see that the implication goes the other way as well, proving the two are equivalent. Before we continue any farther, it would be really, really nice to know that the moment rank of a particular sequence is well defined. When I started working with Josh on this, I didn't know it was possible for a particular sequence to satisfy multiple linear recurrences simultaneously, so this question by itself I didn't even consider, because I didn't realize that was even a thing. For example, go back to the Fibonacci numbers. We all know the recurrence they satisfy, but they satisfy infinitely many more linear recurrences as well. Look at the characteristic polynomial for a particular recurrence. If you multiply in some other distinct linear factor (take another value outside the roots of the characteristic polynomial, multiply in the factor associated with it, and distribute everything out), you get another polynomial whose degree is one higher. And if you take that polynomial and translate it back into a recurrence, our original sequence is going to satisfy that one as well. So actually you can take one linear recurrence and generate infinitely many other recurrences that your sequence satisfies.
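That trick is easy to demonstrate (my own sketch): multiply the Fibonacci characteristic polynomial x² − x − 1 by an extra factor (x − 2), giving (x² − x − 1)(x − 2) = 2 + x − 3x² + x³, and check that Fibonacci satisfies the resulting third-order recurrence too:

```python
fib = [0, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

# (x^2 - x - 1)(x - 2) = 2 + x - 3x^2 + x^3, ascending coefficients:
a3 = [2, 1, -3, 1]
# Fibonacci satisfies this third-order recurrence as well
print(all(sum(a3[i] * fib[m + i] for i in range(4)) == 0 for m in range(len(fib) - 3)))  # True
```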
That was a foreign concept to me, but once I was introduced to it, I realized this question is actually valid and important. So let's take a sequence and suppose it has two representations of this form; that is, I've assumed our sequence values can be described in the moment-rank form in two different ways. I'm going to write the terms as the sum from i = 1 to r of alpha_i times beta_i to the (n + 1)st power, but let's also suppose there's another positive integer s so that the original sequence terms can be described as the sum from j = 1 to s of alpha_j prime times beta_j prime, again to that power. Let's take the ordinary generating function of the sequence we're looking at: we multiply by z^n and take the sum from n = 0 to infinity on both sides, and by doing this we can use all of our complex analysis tools. So now we have all of that at our disposal to tell us what's really going on here. Any time we have multiple sums (again, we're working over formal power series, but I think if you take a small enough disk it all works for honest power series), we can switch the order of summation, and now the interior sum is a sum from n = 0 to infinity. You notice we have a z^n and a beta^(n+1). If we pull the alpha and one copy of beta out of this internal sum, which we can do because the interior sum only depends on n, we're left with (z·beta)^n, and that's the geometric series form: the sum of x^n is 1/(1 − x). So we make that simplification, and now we have two finite sums of these rational expressions, these fractions. If you stare at it for ten seconds or so and think about a little bit of complex analysis: all of the betas have to be the same, because the poles of both of these functions have to be the same set.
And once we make that conclusion, the alphas are going to be the same as well, so r and s have to be the same value, and the set of betas and the set of beta primes (they're just sets, since the betas are distinct) are also the same. So we assumed there were two different representations of a particular sequence in the moment-rank form and derived that really they are the same thing. What this tells us is that moment rank is well defined for a particular sequence, and it makes sense to continue this endeavor of trying to find conditions, or settings, where we can find conditions equivalent to the moment rank of a sequence being some number. There's a variety of different contexts we turned to where we could find equivalent conditions. One of my favorites was this one here: we looked at an algebra setting, because we have these recurrences, and they immediately correspond to polynomials, so it can be interesting to study those polynomials. I already mentioned briefly that if I have some recurrence, I can multiply extra factors in to get recurrences of other orders. That immediately makes us realize that if we take the set of all of the characteristic polynomials associated to the linear recurrences that a sequence satisfies, it's going to be closed under multiplication, but also closed under multiplication by complex polynomials in general. So that leads us towards some kind of algebraic structure. I'm going to let script R sub C be a set, for now, inside C adjoin x, all complex polynomials in one variable. Again, these are all of the characteristic polynomials of all the linear recurrences that our original sequence C satisfies. It turns out that this set is an ideal; that's kind of what I was leaning towards. It's closed under multiplication by any complex polynomial.
And it's closed under addition; that's not too bad to see. If I have two linear recurrences and I add them together, well, that's going to give me another linear recurrence: in our convention where all of the terms are on one side and equal to zero, adding two things that are both equal to zero still gives zero, and that creates a new linear recurrence for us; the polynomials reflect that in exactly the same way. So it's really not too bad to see that this set is closed under addition and closed under multiplication by any complex polynomial, which means this set of recurrence characteristic polynomials is an ideal. And the nice thing to see is that C[x] is a PID, so there's some polynomial, unique up to scalar multiplication, that generates this ideal. This was a really interesting thing for us. So how do we then use that, adapt it, and apply it to the situation we're trying to look at? If we have a sequence with moment rank r, then essentially what that corresponds to is that this generating polynomial F has to have degree r and has to have distinct roots; that's essentially what we say here. The other interesting thing about that is that if F has those properties, so just if F, regardless of its degree, has distinct roots, then this ideal is radical. Radical just means that if you have some power of a polynomial inside your ideal, then you have the original polynomial itself: if, for example, P is some polynomial and P to the 12th power is in this ideal (or any radical ideal in general), then what that tells us is that P by itself needs to be in there as well. So this was one context where we were able to come up with an equivalent condition. The next place we turned was Hankel matrices.
So I'm going to develop conditions on some finite Hankel matrices, but then we're also going to look at some infinite Hankel matrices as well. Hankel matrices have a special form: essentially they record some finite portion of whatever infinite sequence we're looking at. We use this notation to define the matrices. The first parameter, m, gives us the size of the matrix; we're actually going to define it to be of size (m + 1) by (m + 1), a square matrix, so things work out nicely. But yes, m is the parameter while the size is (m + 1) by (m + 1), and we'll see in a second why that makes the most sense. The second parameter just gives us a shift into the sequence: we might not always want to look at the initial segment of the sequence, we might want to look at some part farther down the line. So we let t denote the first index of the sequence we're going to consider, and the matrix covers some finite part of the sequence starting from there. So what do Hankel matrices look like? Formally, the (i, j) entry is given by the value there, but let's just look at an example. Maybe I have, essentially, the identity sequence, c_n = n, defined for non-negative integers, and here's what a couple of its Hankel matrices look like. In the first, the parameter is one, so the size is one bigger than that: we're looking at a 2 by 2 matrix, and we start with the first term of the sequence, the one with index zero. Essentially, these Hankel matrices are constant on anti-diagonals. Typically our main diagonal goes down and to the right; anti-diagonals go the opposite way, and on any anti-diagonal, a Hankel matrix is constant.
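The construction just described can be sketched in one line of Python (my own helper, reproducing the two example matrices from the slide):

```python
def hankel(c, m, t):
    # (m+1) x (m+1) Hankel matrix of the sequence c, starting at index t:
    # entry (i, j) is c[t + i + j], so anti-diagonals are constant
    return [[c[t + i + j] for j in range(m + 1)] for i in range(m + 1)]

c = list(range(20))       # the "identity" sequence c_n = n
print(hankel(c, 1, 0))    # [[0, 1], [1, 2]]
print(hankel(c, 2, 5))    # [[5, 6, 7], [6, 7, 8], [7, 8, 9]]
```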
But essentially what we do is take the first m + 1 terms of the sequence, starting wherever we choose, write those down as the first row, and then shift everything over by one to get subsequent rows. In this way, the first row is c_0, c_1, the second row is c_1, c_2, and we get 0, 1, 1, 2. In the second example the first parameter is two, so we're working with a 3 by 3 matrix, and we start at index five in the sequence, so we get the matrix 5, 6, 7; 6, 7, 8; 7, 8, 9: take the first row, start with that term of the sequence, and then once you have the entire row, shift everything. So these are what our finite Hankel matrices look like. What are we going to do with them? Well, if we think about matrices and their determinants, we know what it means for the determinant of a matrix to be zero: going all the way back to our introductory linear algebra class, we learned a laundry list of equivalent conditions for a matrix to have determinant zero, one of them being that there's some nontrivial linear combination of the columns which gives the zero vector. We're going to use and exploit that in this particular setting. So let's suppose we have some sequence, with the usual setup, and let's say it has moment rank r. Because it has moment rank r, we've already said that there's some rth-order simple linear recurrence that the sequence satisfies. I'm going to let the vector a, the column a, contain all of the coefficients of that recurrence. And we know that this product is going to be zero: a row of our Hankel matrix, because the parameter is r, is just r + 1 consecutive values of our sequence, and moment rank r gives us our linear recurrence, so the dot product of that row of the Hankel matrix with this vector is going to give us zero.
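Continuing the Fibonacci example (moment rank 2, recurrence coefficients (1, 1, −1)), this row-by-row vanishing is easy to see numerically; this is my own sketch, and it also checks the smaller-matrix claim that comes up next:

```python
fib = [0, 1]
for _ in range(12):
    fib.append(fib[-1] + fib[-2])

def hankel(c, m, t):
    return [[c[t + i + j] for j in range(m + 1)] for i in range(m + 1)]

a = [1, 1, -1]           # recurrence coefficients, all terms on one side
H = hankel(fib, 2, 0)    # parameter equal to the moment rank: 3 x 3
# each row dotted with a is one instance of the recurrence, so H a = 0
print([sum(row[i] * a[i] for i in range(3)) for row in H])  # [0, 0, 0]

# a Hankel matrix with parameter below the moment rank has nonzero determinant
H2 = hankel(fib, 1, 3)
print(H2[0][0] * H2[1][1] - H2[0][1] * H2[1][0] != 0)  # True
```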
What it tells us is that we get some nontrivial linear combination of the columns which gives us zero, so the determinant of this matrix is zero. There are some other interesting consequences that I won't go through and prove, but here are other things that happen. The null space (the vector space containing all null vectors, that is, all vectors that give zero when you multiply them by the original matrix) of the particular Hankel matrix we're looking at here has dimension one. So this column, which we generated from the coefficients of the linear recurrence satisfied by our sequence, is actually a generator: the span of that vector gives us the null space of this finite Hankel matrix. And we can see that no matter what t value we choose, the null space always has dimension one. Because of that, we already know a vector that's in the null space of all of these matrices, and it turns out that the null spaces of all of these matrices, no matter what shift into the sequence we use, are all the same. And if I take a Hankel matrix of any size smaller than this moment rank of the sequence, the determinant is nonzero for all of those as well; this is also kind of a consequence of the first two, because if the determinant were zero for some smaller size parameter, we would see the null space have higher dimension. So the null space having dimension one and this condition here, you can get one from the other. So I take all three of these properties, put them together, and then also specify that (formally I've said here that a is in the complement of the affine discriminant variety) —
All that means is that if I look at the characteristic polynomial of the recurrence that these coefficients define, it has distinct roots; we need that to correspond to the simpleness of the linear recurrence that we're looking at. So put all of these together: a defines a polynomial which has distinct roots, and that's equivalent to the original sequence having moment rank r. Okay, so I also said that we can consider infinite Hankel matrices, so let's go ahead and do that. This infinite Hankel matrix is constructed in the same way, taking the first row to be the sequence itself and then shifting things over as we go down through subsequent rows; only now, instead of a finite matrix, it's an infinite one. So we can explore some ideas here too. Because our sequence has moment rank r, we know that the terms of the sequence take this form, where the alphas are nonzero and the betas are all distinct. Now, Vandermonde matrices: hopefully most of us have seen those before. Their first column is all ones, their second column is filled with some values, and subsequent columns are higher powers of the values in the second column. You can generate a Vandermonde matrix from some collection of complex numbers, or real numbers in general. If I take the Vandermonde matrix generated by the betas that I have, and a diagonal matrix whose ith diagonal entry is alpha i times beta i, then when I look at this product here I actually get the sequence terms back. So the Hankel matrix that we have, the one populated by the sequence terms, is equal to this product; we call this a Vandermonde decomposition.
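A finite truncation of this decomposition can be sketched numerically. The alphas and betas below are my own illustrative choices; note I index the sequence from 0 here, so the diagonal matrix holds the alphas (with the talk's index-1 convention the diagonal entries are alpha i times beta i instead):

```python
import numpy as np

# Illustrative rank-2 sequence c_n = 4*2^n + 1*3^n (alphas = 4, 1; betas = 2, 3),
# truncated to a finite N x N Hankel block.
alphas = np.array([4.0, 1.0])
betas = np.array([2.0, 3.0])
N = 6

c = [sum(a * b**n for a, b in zip(alphas, betas)) for n in range(2 * N)]
H = np.array([[c[i + j] for j in range(N)] for i in range(N)])

V = np.array([[b**k for k in range(N)] for b in betas])  # 2 x N Vandermonde
D = np.diag(alphas)                                      # diagonal, nonzero entries

print(np.allclose(H, V.T @ D @ V))   # the Vandermonde decomposition H = V^T D V
print(np.linalg.matrix_rank(H))      # rank 2 = moment rank
```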
And actually we've decomposed this Hankel matrix into this product, where D is a diagonal matrix and V is a Vandermonde matrix generated by the distinct roots of the characteristic polynomial associated with the linear recurrence. So we can translate between this setting and the roots of the characteristic polynomial seamlessly. D is diagonal and all of its diagonal entries are nonzero, so it clearly has rank r. The same is true for the Vandermonde matrix: it has r rows and infinitely many columns, but it also has rank r, because all of the betas are distinct. So both of these matrices have rank r, and, with a little bit of work (the ideas are mostly there already), the infinite Hankel matrix H also has rank r. And this Hankel matrix having rank r and admitting a Vandermonde decomposition in which V is r by infinity, that is, V has just r rows, is also equivalent to our original sequence having moment rank r. Another place that we can turn is the notion of generating functions. So again I have my sequence c, and it satisfies some rth order linear recurrence. I'm going to let phi denote the ordinary generating function of this sequence. What I'm going to do is multiply this ordinary generating function by this finite sum here. You might say, well, that's kind of odd, why would you do that? Well, when you actually do the multiplication, if we look at a monomial in the product of this polynomial and the generating function whose degree is less than r, then only some of the terms of this polynomial provide a contribution, and that chunk of the power series is given by this finite sum here. Notice that even though we have double sums, they're both finite.
Whereas if the degree of the monomial we're looking at is at least r, then we get a contribution from all of these terms, and that looks something like this. Now, it takes a little bit of thought and some index chasing to realize that this is just a restatement of the recurrence we satisfy; things are shifted a little depending on the actual value of j, but it is just our linear recurrence. So this double sum here is really just zero. We see that the product of this polynomial and our ordinary generating function is equal to this polynomial. Notice that the power of x is j, and j ranges up to r minus one, so this is a degree r minus one polynomial, whereas the one we multiplied by is a degree r polynomial. So if we divide both sides by that polynomial, we see that the ordinary generating function of our sequence is a rational function, and the degree of the denominator is exactly the moment rank of the sequence I started with. And actually, not just the denominator having degree r, but it factoring into distinct linear factors, meaning that the generating function itself has r simple poles, is equivalent to our sequence having moment rank r. So, to summarize some of these equivalent conditions: if we have a sequence with moment rank r, that's the same as r being the smallest positive integer such that c satisfies an rth order simple linear recurrence. We also had the finite Hankel matrix conditions, and those are summarized here. We looked at the ordinary generating function associated with the particular sequence we started with. We looked at Vandermonde decompositions of infinite Hankel matrices. We looked at our recurrence ideal, namely that the set of all polynomials associated to the recurrences satisfied by our sequence actually forms an ideal in C[x], and that ideal is radical. And lastly, there's one I didn't talk about at all.
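The cancellation argument can be seen directly with truncated power series. A sketch using an illustrative rank-2 sequence of my own choosing, c_n = 2^n + 3^n, whose recurrence polynomial is 1 - 5x + 6x^2:

```python
# Sketch: multiplying the OGF of c_n = 2^n + 3^n (moment rank 2, recurrence
# c_{n+2} = 5*c_{n+1} - 6*c_n) by 1 - 5x + 6x^2 kills every coefficient of
# degree >= r, leaving a polynomial of degree r - 1 = 1.
N = 20
c = [2**n + 3**n for n in range(N)]   # truncated OGF coefficients
p = [1, -5, 6]                        # 1 - 5x + 6x^2

# coefficient-wise product (Cauchy convolution) of p and the truncated OGF
prod = [sum(p[k] * c[m - k] for k in range(len(p)) if m - k >= 0)
        for m in range(N)]
print(prod[:6])                       # [2, -5, 0, 0, 0, 0]
```

So phi(x) = (2 - 5x) / (1 - 5x + 6x^2): a rational function whose denominator has degree 2, the moment rank, with simple poles at x = 1/2 and x = 1/3.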
But it might not be too hard to see that our sequence having moment rank r is the same thing as it being the moment sequence of a complex r-atomic measure, a finite sum of r Dirac delta functions, where the betas are the atoms and the alphas are the weights out in front. So, the title said recurrence ranks, plural, and we've only talked about moment rank. The other one that we defined, specialized to this context, is unitary rank. Unitary rank was actually the original idea that we were going to apply to these hypergraph characteristic polynomials, but in working with it we saw that there was a more general thing happening, and that's where the moment rank idea came from. So unitary rank is really just a specialization of moment rank. Now we take our sequence to start with index one, and we require that all of the alphas from the moment rank definition be one; with that, we no longer require that the betas be distinct. So in general these two ranks can be different. For example, the sequence defined by c n equals two times three to the n is going to have moment rank one, but unitary rank two, because of this extra coefficient. It breaks into three to the n plus three to the n, which is the form required for unitary rank; but for moment rank, we can just let the alpha be two, and the only beta is going to be three. So we see that these can differ depending on the sequence we're looking at. A lot of the statements that were equivalent for moment rank translate, in some careful way, into equivalent statements for unitary rank, but there's one new one. So again, let's take our sequence, again starting with index one, and we'll see why right here: we're going to consider this object here.
So, e to this integral, where phi is the ordinary generating function of the particular sequence we're looking at. Because the sequence starts with index one, the constant term of phi is zero, so we can use the power rule to integrate everything, and we get something that looks like this. Now notice right away, this kind of looks like a log, right? So the exponential and the log are seemingly going to cancel. When we exponentiate, here's what happens. Because we supposed that our sequence has unitary rank r, we can replace the sequence terms with this finite sum of nth powers. Since the sequence now starts with index one, we've shifted the power of the betas down to match the index of the sequence, so our sequence values are the same as the sum, for i equals one to r, of these betas to the nth power. And like any good proof with two sums in it, we're going to flip the order of summation. By doing this, we've turned things into logs: the inner sum is, up to sign, the power series for the log of one minus beta times x, and e to the log, well, we know what happens there. A finite sum in the exponent turns into a product of exponentials, so we get a product of linear factors. So this is a polynomial of degree r. And those are actually equivalent as well: our sequence having unitary rank r is equivalent to this object being a polynomial of degree r with nonzero roots. So in the last couple of minutes that I have, I'd like to go all the way back to the beginning and walk through a very nearly trivial example, to illustrate how we can use these ideas to help us with graph and hypergraph characteristic polynomials. We're just going to consider the complete graph on n vertices.
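This criterion can be sketched with formal power series arithmetic. The example below is the talk's c_n = 2 times 3^n (moment rank 1, unitary rank 2), and the sign convention in the exponent is my assumption, chosen so that the result is the polynomial (1 - 3x)^2:

```python
# Sketch of the unitary-rank criterion on c_n = 2 * 3^n (n >= 1), which has
# moment rank 1 but unitary rank 2, since 2*3^n = 3^n + 3^n. Formally,
# exp(-Integral_0^x phi(t)/t dt) should equal (1 - 3x)^2, a polynomial of
# degree 2 = unitary rank (the minus sign is my assumed convention).
N = 10

# phi(t)/t = sum_{k>=0} 2*3^{k+1} t^k; integrate termwise from 0, then negate:
g = [0.0] + [-6 * 3**(m - 1) / m for m in range(1, N + 1)]

# exp of a formal power series g with g[0] = 0, via the relation f' = g' * f:
f = [1.0] + [0.0] * N
for n in range(1, N + 1):
    f[n] = sum(k * g[k] * f[n - k] for k in range(1, n + 1)) / n

# coefficients of (1 - 3x)^2: 1, -6, 9, then all higher coefficients vanish
print([round(v, 6) for v in f])
```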
But the idea is that we can then try to generalize to the hypergraph case. I mentioned that one technique is to take graph results and generalize them to hypergraphs, but there was no graph result for this, so this is an attempt to first get a graph result and then help us extend it to a hypergraph result. If I have some graph with adjacency matrix A, I have all of this linear algebra that I can apply to that adjacency matrix, one piece being just the rank-nullity theorem: the rank plus the nullity of the adjacency matrix is equal to its order, since the adjacency matrix is square. So the multiplicity of zero as an eigenvalue of our adjacency matrix is the same thing as the nullity, which is the order, or size, of A minus its rank. The size of A is very easy to obtain just by looking at the graph: that's the number of vertices we have, again in the graph setting. But what about the rank? Is it possible to take properties of the graph and determine the rank of this adjacency matrix? The rank is actually the same thing as the number of nonzero eigenvalues. You can take all of the eigenvalues and look at the nonzero ones: the zero eigenvalues correspond to the nullity, and the nonzero ones correspond to the rank. And the other interesting thing is that if I take the trace of some power of the adjacency matrix, trace again being just the sum of the diagonal entries, that is actually the sum of the jth powers of all of the nonzero eigenvalues. And if you think about the graph context, the u v entry of the jth power of the adjacency matrix gives us the number of distinct walks of length j from vertex u to vertex v. Again, we're looking at the trace, so the diagonal entries, and that gives us the number of closed walks, meaning walks where the starting and ending vertices are the same. So the trace of A to the j gives us the number of closed walks of length j in our particular graph.
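These standard facts are easy to check on a small example. A sketch on K_5 (my own choice of test graph), verifying rank equals the number of nonzero eigenvalues and that traces of powers count closed walks via eigenvalue power sums:

```python
import numpy as np

# Check on K_5: rank(A) = number of nonzero eigenvalues, rank + nullity = order,
# and trace(A^j) = sum of j-th powers of the eigenvalues (= closed j-walks).
n = 5
A = np.ones((n, n)) - np.eye(n)              # adjacency matrix of K_5

eig = np.linalg.eigvalsh(A)                  # eigenvalues: -1 (four times), n-1 = 4
nonzero = sum(abs(l) > 1e-8 for l in eig)
assert nonzero == np.linalg.matrix_rank(A)   # rank = # nonzero eigenvalues
assert np.linalg.matrix_rank(A) + (n - nonzero) == n   # rank-nullity

for j in range(5):
    walks = np.trace(np.linalg.matrix_power(A, j))
    assert abs(walks - sum(l**j for l in eig)) < 1e-6

print(round(np.trace(np.linalg.matrix_power(A, 3))))   # closed 3-walks in K_5: 60
```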
So we want to take this sequence given by the trace of A to the j, the number of closed walks of each length, see if we can find a closed form for it, and then see if we can apply some of our conditions to determine either its moment rank or its unitary rank. So in the last couple of minutes, let's look at just the complete graph. We know that some finite recurrence, of order less than or equal to the number of vertices we have, can be used to define this sequence. Just looking at the first couple of values: as with any graph on n vertices, the number of closed walks of length zero is the number of vertices, so that's n. There aren't any closed walks of length one, because if you traverse an edge, you end up somewhere different from where you started. There are this many choices for length two, but let's try to determine a recurrence from here. If I have a closed walk of length j, then there are two cases: either the walk visits the same vertex at steps j and j minus two, or it visits different vertices there. If it's the same vertex, then I have a closed walk of length j minus two, and essentially I can traverse one edge forward and backward however I'd like, and that gets me all of the closed walks of length j of that form. Because the complete graph is regular of degree n minus one, that gives this contribution to the recurrence I'm trying to build up. Now if that's not the case, if the vertices at steps j and j minus two are different, then essentially what I'm going to do is take a walk of length j minus one, delete the last edge, and then use two other edges to get to my destination. And there are n minus two ways to do this.
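Taken together, the two cases give the recurrence w_j = (n - 2) w_{j-1} + (n - 1) w_{j-2}. A quick sketch checking it, and the closed form it leads to, against actual traces of powers of the adjacency matrix (K_5 is my own choice of instance):

```python
import numpy as np

# Verify the closed-walk recurrence for the complete graph K_n:
#   w_j = (n - 2) * w_{j-1} + (n - 1) * w_{j-2},  where w_j = trace(A^j),
# together with the closed form w_j = (n-1)^j + (n-1) * (-1)^j.
n = 5
A = np.ones((n, n)) - np.eye(n)              # adjacency matrix of K_n

w = [int(round(np.trace(np.linalg.matrix_power(A, j)))) for j in range(10)]
assert w[0] == n and w[1] == 0               # n closed 0-walks, no closed 1-walks

for j in range(2, 10):
    assert w[j] == (n - 2) * w[j - 1] + (n - 1) * w[j - 2]
    assert w[j] == (n - 1)**j + (n - 1) * (-1)**j

# Moment rank 2 (two distinct roots, n-1 and -1), but unitary rank n:
# (n-1)^j + (n-1)(-1)^j splits into one (n-1)^j term plus n-1 copies of (-1)^j.
print(w[:5])                                 # [5, 0, 20, 60, 260] for n = 5
```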
So this contributes an extra n minus two times w j minus one, the number of closed walks of length j minus one in my graph. Putting these two together, I've covered every case, and that gives us a linear recurrence for the number of closed walks in the complete graph on n vertices. You can look at the characteristic polynomial associated to this recurrence: move these two terms over to the other side, set it equal to zero, replace w j with x squared, w j minus one with x, and w j minus two with x to the zero. It factors, and we see that the two distinct roots are n minus one and negative one. Based on the initial conditions that we have, it's not too challenging to see that the coefficient on n minus one has to be one and the coefficient on negative one has to be n minus one. So what this tells us is that the moment rank of the trace of A to the j sequence is two, but the unitary rank is actually n, because we have to break up that n minus one coefficient into n minus one ones, so we get n different terms for the unitary rank definition. And what that tells us is that the multiplicity of zero as an eigenvalue is zero, because the unitary rank is actually the same as the rank of the adjacency matrix: the size is n, and now we know the rank of the adjacency matrix is n, so the nullity, the multiplicity of zero as an eigenvalue, is zero. Again, this is nothing new or revolutionary for this particular example, but the idea is that we can do something like this for other kinds of graphs, and then hopefully, at some point, extend it to hypergraphs. So that's all I have. Thank you very much for sticking around and being attentive. What questions do you all have? Thanks, Grant. We can all thank our speaker, and then we'll go ahead and open it up for some questions. Thank you so much, speaker. Grant, what is the largest hypergraph you can compute this for? What are the sizes?
Well, I don't have a good answer to that. By size, meaning the number of edges, or the number of vertices? Either way. So, trivially, there's the k-uniform hyperedge. We know that one, and as the uniformity grows, so does the number of vertices, so you can take that one out infinitely far. That's kind of trivial, and probably not a very satisfying answer. But it depends on the structure of the edges. For example, the Hummingbird graph that Greg and Josh had worked on, I think that was 13 vertices, if I go way back here. That was one of those linear trees, which behave really nicely, but I think it was 13 vertices, counting two, four, six, eight, ten, up to 13 vertices, and six edges. And that one's known. But the Fano plane is seven vertices and, counting one, two, three, four, five, and then two more, seven edges, and that one we don't know. So it kind of depends on the structure of the graph itself. If there are no other questions, thanks again, Grant. Thanks, everybody, for making it out, and have a good weekend. Bye bye.