Good afternoon everyone. Today we have our own Grant Fickes to give us a talk, and he will be telling us about the structure of linear hypergraph null varieties. Take it away, Grant. Yeah, so this is joint work with my advisor Josh. So as always, thank you for all the continued help with everything. So yeah, we're going to talk about linear hypergraphs and null varieties. So I guess we should talk about what some of that stuff means. All of this is going to be based on hypergraphs. If you're not familiar with hypergraphs: we take normal graphs, where edges connect two vertices, and we relax that condition. In a hypergraph, we don't require that an edge have exactly two vertices; an edge can be any arbitrary subset of the vertex set. In general, though, we want to talk about uniform hypergraphs, meaning that all of the edges contain the same number of vertices. So we can think of the edges as subsets of the vertex set, and we require that all of those sets have the same size. We use k to denote that common size, and we call a hypergraph k-uniform if it satisfies that condition. We're going to talk today about linear hyperpaths. In a lot of places in the literature these are called loose hyperpaths as well; the terms are used interchangeably, and I'm going to stick with linear. If I have a k-uniform linear hyperpath on n edges, I'm going to denote it P_n^k, with subscript n and superscript k. So what do these things look like? Well, they look like this. Informally, I think about a path in a graph and blow each edge up into a k-uniform edge for whatever k I'm working with. A little more rigorously: I take n edges that all have the same uniformity, label them e_1 through e_n, and require that two consecutive edges intersect in exactly one vertex and that any other pair of edges don't intersect at all. That's how we can think of them, and here are a couple of examples. In each of these pictures, each color represents one edge; this is typically how we draw these kinds of things, thinking of the edges as faces or shapes and filling them in accordingly. So those are some examples of linear hyperpaths. Now, I want to be able to talk about these things that I call null varieties. In order to do that, I need to talk about what eigenvalues and eigenvectors are. But in order to talk about eigenvalues and eigenvectors, I need some kind of matrix or hypermatrix to associate those things to. So, in an effort to build up some definitions and terminology, we're going to look at hypermatrices. Graphs are nice: we can just look at matrices. What we're going to do in the hypergraph case is find a hypermatrix to associate to our hypergraph; we'll do that in a second. But first, what is a hypermatrix? There are two parameters that we associate to these things: the order and the dimension. If I just have a matrix, that's order two. The order is the number of coordinates that we need to specify the location of an entry, and we require each of those coordinates to be between one and the dimension. So if we have a matrix, we have rows and columns, and we can specify any entry by which row and which column it's in; that's why a matrix is a second-order hypermatrix. That's how we can visualize this.
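Before going on, here is a concrete sketch of the hyperpath definition from a moment ago (my own illustration, not from the talk; the function name and the 1-based vertex labeling are my choices):

```python
def linear_hyperpath(n, k):
    """Edge list of the k-uniform linear hyperpath P_n^k.

    Consecutive edges share exactly one vertex, so there are
    n*(k - 1) + 1 vertices, labeled 1, ..., n*(k - 1) + 1.
    """
    edges = []
    for i in range(n):
        start = i * (k - 1) + 1            # first vertex of edge i+1
        edges.append(tuple(range(start, start + k)))
    return edges

# P_2^3: two 3-uniform edges sharing one vertex, 5 vertices total.
print(linear_hyperpath(2, 3))  # [(1, 2, 3), (3, 4, 5)]
```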
So we can take this kind of idea and define a normalized adjacency hypermatrix for any hypergraph that we want. The word normalized in there, we'll talk about in a second. If we just think about graphs, we can define an adjacency matrix for a graph: an entry of that matrix has the value one if the vertices that correspond to that entry are adjacent, that is, connected by an edge, and zero otherwise. The zero idea is going to be preserved here for our normalized adjacency hypermatrix; the one is going to be slightly different. Instead of having a one in an entry that corresponds to an edge, we're going to have one over (k minus one) factorial. The reason for doing that is essentially aesthetic. As you start to do computations with these objects, if we just use ones and zeros, then there are a bunch of (k-1)-factorials flying around everywhere, cluttering the computation. In order to eliminate those and make the computations a little easier to work with, we bake that normalization into the hypermatrix itself. So maybe not a very satisfying answer as to why it's normalized that way, but that's why we do it. Okay, so given a hypergraph, I can talk about the tensor, or hypermatrix, associated to it in this way. So now we have an object for which we can talk about spectral things, eigenvalues and eigenvectors, so let's build up toward that. I'm going to take some arbitrary vector with complex entries in n-dimensional space, and I'm going to define two notations, both of which raise this vector x to an exponent. The two notations look similar, which is a little confusing, but they mean different things. First, x to the m-th power represents how we usually think about exponents, entrywise: it corresponds to another vector, where each coordinate of the new vector is just the m-th power of the corresponding coordinate of the original vector. So I take my vector (x_1, ..., x_n), and raising it to the m-th power gives (x_1^m, ..., x_n^m). The second thing I want to define is x again raised to a power, but now with this circled-times symbol in the exponent, for the outer product. Instead of spitting out another vector, this spits out a hypermatrix. The order of this hypermatrix is the integer that occurs in the exponent, so m in this case, and it's an n-dimensional hypermatrix, where the n comes from the size of the original vector. So we need to say what each entry of this hypermatrix looks like: the indices of an entry specify the value we fill in, namely the product of the corresponding entries of the original vector. If I'm looking at the (i_1, ..., i_m) entry of my hypermatrix, that value is going to be x_{i_1} times x_{i_2}, all the way to x_{i_m}. Whatever values appear as the indices of the entry, we just take the product of those components of the original vector. So we have two different kinds of powers defined in this way.
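As a quick illustration of these two powers and the normalized adjacency hypermatrix (a sketch of my own, assuming numpy; not code from the talk's slides):

```python
import math
import numpy as np
from functools import reduce
from itertools import permutations

def entrywise_power(x, m):
    """x^m in the first sense: (x_1^m, ..., x_n^m)."""
    return np.asarray(x) ** m

def outer_power(x, m):
    """x to the m-th outer-product power: the order-m, n-dimensional
    hypermatrix whose (i_1, ..., i_m) entry is x_{i_1} * ... * x_{i_m}."""
    x = np.asarray(x)
    return reduce(np.multiply.outer, [x] * m)

def adjacency_hypermatrix(edges, n, k):
    """Normalized adjacency: entry 1/(k-1)! in every position indexed
    by an ordering of an edge's k vertices, zero elsewhere."""
    A = np.zeros((n,) * k)
    for e in edges:
        for p in permutations(e):            # vertices are 1-based
            A[tuple(v - 1 for v in p)] = 1 / math.factorial(k - 1)
    return A

A = adjacency_hypermatrix([(1, 2, 3), (3, 4, 5)], n=5, k=3)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Contract A against x in its last k-1 slots; vertex 3 picks up
# x1*x2 + x4*x5 = 2 + 20 = 22.
print(np.tensordot(A, outer_power(x, 2), axes=2))  # [ 6.  3. 22. 15. 12.]
```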
Now, if I take some hypermatrix, I can look at eigenvalue-eigenvector pairs, just like we do for ordinary matrices. The eigenvalue is just some complex number, the eigenvector is just some vector in our n-dimensional space, and together they satisfy this eigenvalue equation. On the left side, I have the hypermatrix applied to the outer-product power of x, and on the right side I have lambda times x to the (m-1), the entrywise power, which is just another vector. So how do we understand this kind of thing? We'll look at it in just a second, but first we're going to specialize a little. We're interested in k-uniform hypergraphs, so in this context the uniformity k of our hypergraph replaces the order of the hypermatrix: m minus one becomes k minus one, and the hypermatrix we're interested in is the adjacency hypermatrix. So this is what the equation looks like in our setting of hypergraphs. Now, what is that saying? The right side is a little easier to understand than the left. On the right side, that x to the (k-1) is just the (k-1)-st entrywise power of the vector, which is another vector, and we know how to take a scalar and multiply it by a vector. So on the right side we just get some n-dimensional vector, where n corresponds to the number of vertices of our hypergraph. The right side, then, is maybe not too bad. So if these two things are going to be equal for some choice of lambda and some choice of x, I'd better get a vector on the left side too. How does that work? Well, it's a little complicated. I find it easiest to understand when I look at a picture, an actual physical example. So I'm going to choose a hypergraph here. Compared to the original pictures that I showed you on the first slide, I've turned it a little, but really this is just a three-uniform linear hyperpath on two edges; I've just redrawn the picture. So I have five vertices, which means any of the vectors I'm interested in are going to be five-dimensional. Okay, so I choose some five-dimensional vector with complex entries, and I can think of it as a vertex labeling. If I label my vertices one, two, three, four, and five, numbering them in that way, then I can assign the corresponding entry of this vector to each of the vertices. And then what I can do is look at what happens when I take this product of my adjacency hypermatrix with the outer-product power of this vector x. What that does is take the original vertex labeling that I have and create a new one, and the way it does this is the following. Pick a particular vertex; maybe I look at this middle, third vertex, for example. I'm going to get one term for each edge that's incident to this vertex. So, again, if we're looking at this middle vertex, then I have a red edge and a blue edge that are both incident to this third vertex. And on each edge, what I'm going to do is take the product of the labels of the other vertices. So on the red edge, I'm going to take x1 times x2, and on the blue edge, I'm going to take x4 times x5. I take this product over all of the edges that are incident to the vertex I've chosen, and then add all of those things up. That sum is the new label that I get after I apply this product. Okay. Actually trying to define that product algebraically can be a little convoluted, with all the symbol pushing, but that's how we can understand it with a picture, as in the sketch below. The same idea applies to every other vertex.
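Here is what that vertex-by-vertex rule looks like in code; a minimal sketch of the operation just described (the function name is my own, and the normalization is explained in the comment):

```python
def apply_adjacency(edges, x):
    """The new labeling (A x^{k-1}): vertex v gets the sum, over edges
    containing v, of the product of the other k-1 labels.  The 1/(k-1)!
    in A cancels against the (k-1)! orderings of those other vertices."""
    out = [0] * len(x)
    for e in edges:
        for v in e:
            prod = 1
            for u in e:
                if u != v:
                    prod *= x[u - 1]       # vertices are 1-based
            out[v - 1] += prod
    return out

edges = [(1, 2, 3), (3, 4, 5)]             # the 3-uniform path P_2^3
print(apply_adjacency(edges, [1, 2, 3, 4, 5]))  # [6, 3, 22, 15, 12]
```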
It's just that we don't see a sum there, because there's only one edge incident to any of the other four vertices; the first, second, fourth, and fifth vertices each lie in exactly one edge, so we're taking a sum over one thing. And that thing, for the fourth vertex for example, is the product of the other two labels in its edge, so that's x3 times x5. And that same process gets followed everywhere. So what if I make some choices of vectors? We can see what happens. Maybe I choose my vector to be (1, -1, 0, 1, 1). I can assign the labels in this way, and then I can figure out what I get after I take the product from the left-hand side of our original equation. Okay. So at the top left, the first vertex, the new label is going to be the second label, at the top right, times the third one. Well, that's a zero. And if you work through the rest of them, you get zeros everywhere else as well. Okay. So I'm going to call the original vector a null vector; that just means it's an eigenvector for the eigenvalue zero. So this choice of vector, (1, -1, 0, 1, 1), is a null vector for this hyperpath. Okay, let's pick a different choice. Maybe I choose (0, 0, 1, 0, 0). Assign the vertex labels to the original picture, look at what happens when I take A x squared, and think about it for just a second. All right, great: this (0, 0, 1, 0, 0) is another complex vector which happens to be a null vector as well. Okay, fine. Now here's the interesting part: let's add them together. So if I take these first two choices and add them together, I get this vector here: one, negative one, and then ones the rest of the way, (1, -1, 1, 1, 1). I can assign that as a labeling. Now, again, let's look at the first vertex. What happens when I look at the product A x squared? Well, the new label is going to be x2 times x3, where x2 is negative one and x3 is positive one, so the new label is negative one, not zero. So when we look at this product, we get something which isn't the all-zeros labeling, and what that's saying is that this choice of vector is not a null vector. Which is fine, but kind of disappointing, because how did we obtain this third vector? We took the first two and added them together. I took a linear combination of these first two null vectors, and I don't get a null vector, which is unfortunate. But so it goes. So I'm going to use this lovely notation, V with subscript lambda and superscript G for my hypergraph, to denote the collection of all eigenvectors of the hypergraph G associated to the eigenvalue lambda. And I want to examine: what do we know about this set? Okay, so we just saw that it's not closed under arbitrary linear combinations, which is unfortunate, because in the graph case it is. In the graph case, these collections of eigenvectors form vector spaces and have really nice properties; there's a really rich spectral graph theory that we can look at as it pertains to these kinds of objects. We have no such luck with general hypergraphs. So, okay, fine, we don't have closure under arbitrary linear combinations, so we don't get vector spaces. But what do we get? Well, it turns out we get varieties. And if you're not familiar with varieties, they're just simultaneous zero sets of some set of polynomials. So I give you some polynomials, ask you where they're all zero, put all of those values, all of those vectors, together into a set, and that set is what we call a variety.
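Back to the numeric example for a moment: reusing the apply_adjacency sketch from above, one can check all three claims directly (again, my own illustration):

```python
x_a = [1, -1, 0, 1, 1]    # first null vector from the talk
x_b = [0, 0, 1, 0, 0]     # second null vector
x_c = [a + b for a, b in zip(x_a, x_b)]   # their sum: (1, -1, 1, 1, 1)

print(apply_adjacency(edges, x_a))  # [0, 0, 0, 0, 0]: a null vector
print(apply_adjacency(edges, x_b))  # [0, 0, 0, 0, 0]: a null vector
print(apply_adjacency(edges, x_c))  # [-1, 1, 0, 1, 1]: not null!
```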
So varieties are just simultaneous zero sets of some collection of polynomials. Okay, but which polynomials? How does this seem to work? We're going to use this script V to denote the variety defined by whatever the big argument is, which will be some set or list of polynomials. What this notation on the right describes is all of the vectors, or all of the points, in (for us, here) five-dimensional space that are common zeros of all of the polynomials that I've listed. So that's what this object is. Okay, so in the example we just looked at, this is the variety we're interested in. But why? Why does this make any sense? Where does it come from? Well, think about the vertex label that we get from the product at the first vertex: it was x2 times x3. At the second vertex, it was x1 times x3. And we see those terms showing up, one for each of the five vertices. And where does this other piece come from? Well, think all the way back to the original eigenvalue-eigenvector equation. The right side was lambda times x to the (k-1). We're thinking at the moment about three-uniform things, so k minus 1 is 2. So if I square my vector coordinatewise and multiply it by lambda, then the new vector I get is (lambda x1 squared, lambda x2 squared, and so on). That's where these pieces come from. So in order for us to have an eigenvalue-eigenvector pair here, both of those vectors have to be equal, and what does it mean for two vectors to be equal? Corresponding entries agree. So we need x2 x3 to equal lambda x1 squared, we need x1 x3 to equal lambda x2 squared, and so forth. But if I just subtract the lambda-times-variable-squared term to the other side, I get these expressions, and they all equal zero. So I'm interested in the simultaneous zero set of all of the equations that we get in this way. Okay, so what kind of structure do these objects have? If I give you some positive integer, you can factor it; you can break it down into a product of its prime factors, as one does. And in a lot of areas of math, there's somehow more information that we can extract from that factorization than from the original integer I gave you. If I look at ideals in a ring, you can do the same kind of thing; now you have to know what it means to be a prime ideal and things like that, but you can take an ideal and decompose it into prime pieces. And there's a very, very strong correspondence between ideals and varieties. So, because we can take ideals and break them up into their prime pieces, we can do the same kind of thing with varieties. What that's going to look like for us is this: we can take a variety and express it as a union of irreducible components. That's the big fancy way to talk about it. Think of irreducible components as playing the same role as prime numbers, with multiplication of integers corresponding to unions of varieties. Exactly what it means to be an irreducible component I'm going to brush past; I'm not going to go into the algebraic geometry of it, and we can talk about it later if you want. The idea is just that I'm going to take my variety and break it down into small, indecomposable pieces. So what is that going to look like, and how do we do it?
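Before moving on, here is a short sympy sketch (my own, not from the slides) that generates exactly these defining polynomials for the two-edge three-uniform hyperpath:

```python
import sympy as sp

x = sp.symbols('x1:6')                     # x1, ..., x5
lam = sp.Symbol('lambda')
edges = [(1, 2, 3), (3, 4, 5)]

# One polynomial per vertex v: (A x^{k-1})_v  -  lambda * x_v^2.
polys = []
for v in range(1, 6):
    lhs = sum(sp.prod(x[u - 1] for u in e if u != v)
              for e in edges if v in e)
    polys.append(lhs - lam * x[v - 1] ** 2)

for p in polys:
    print(p)
# (up to term order)
# x2*x3 - lambda*x1**2
# x1*x3 - lambda*x2**2
# x1*x2 + x4*x5 - lambda*x3**2
# x3*x5 - lambda*x4**2
# x3*x4 - lambda*x5**2
```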
Well, I'm going to start with the most basic case that we can. Graphs are the same thing as two-uniform hypergraphs, so bumping the uniformity up by one, we get three-uniform. The smallest linear hyperpath that we can talk about is going to have one edge in it. And if I look at all of these polynomials and make lambda zero, that reduces things even further and makes them somehow easier to deal with. That's assuming that zero is an eigenvalue for some eigenvector, and for all of these examples it will be. So I'm going to just take lambda to be zero, which reduces this a little further and hopefully makes these cases easier to talk about. That's the case I'm going to consider, and then we'll consider two edges. Okay, so if I have one edge, I assume my eigenvalue is zero, and I'm three-uniform, then the variety that I'm interested in is the simultaneous zeros of x1 x2, x1 x3, and x2 x3. There's just one edge; it looks like a triangle, and I label the vertices x1, x2, x3. With one edge, the new label at each vertex is just the product of the other two. Okay. So the polynomials are x1 x2, x1 x3, and x2 x3, and I want to know where all three of these polynomials are zero. So the variety that I'm interested in is defined in this way: I'm interested in the places where x1 x2 is zero and x1 x3 is zero and x2 x3 is zero. But this variety is just a set, and in terms of sets, "and" means intersection. If you've worked with varieties at all, there are some nice ways that we can break them up and work with them, and this is one of them: the variety defined by several polynomials is the same thing as the intersection of the varieties defined by each of them. So I break it up in this way, and then look at the first one: x1 times x2 is zero. And we scream at our college algebra kids until we're blue in the face: if a product of two things is zero, then one or both of them had better be zero. So I can break this up: if x1 times x2 is zero, then either x1 is zero or x2 is zero, or both. And "or", when we think about set theory, breaks up into a union. So the variety defined by the polynomial x1 x2 is the union of the variety defined by x1 and the variety defined by x2. So I get something like this, and, depending on how good or not your set theory instincts are, how do we work with something like this? Well, I think about it like distributing polynomials: intersection is multiplication, and union is addition. So I have three binomials that I'm distributing, and we get a term of the expansion for every choice of a piece from the first factor, then the second, then the third. Same kind of deal here: this is the union, over all choices from each of the three factors, of the intersection of those choices. And it turns out that the terms I'm interested in are the ones that impose the smallest number of conditions. What do I mean? Well, I can choose the variety defined by x1 in the first factor, and choose it again in the second factor, and then in the third factor, why not choose something else? There are two variables available there. So I can get the variety defined by x1, intersected with the variety defined by x1, intersected with the variety defined by x2; that imposes two conditions. I require that x1 be zero and that x2 be zero: two conditions. Now, there are other terms in this expansion where I pick up all three variables: I can again pick x1 in the first factor, then pick x3 in the second, then pick x2 in the third. But now I've imposed more conditions. If I require that x1 and x2 are both zero, then x3 is free; but if I require that all three of them are zero, then I'm somehow restricting myself further. So when I union all of these terms together, the more-restricted terms get swallowed up. What I'm interested in here are the terms with the smallest number of restrictions, and when I collect those, I get every choice of two of the three variables. I'm also baking in the fact that an intersection of varieties is the variety defined by the combined list of polynomials. So these are the pieces that we get, and this is actually a decomposition of the original variety. It's actually pretty easy to show that each of those pieces is irreducible. We're in three-dimensional space; just think about it as x, y, z. This piece says x is zero and y is zero, so it's the z-axis. This one says x is zero and z is zero. So these three pieces are just the coordinate axes. Okay, so, great: we've done it for one edge.
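Here is a tiny sketch of that distribute-and-discard computation (my own illustration; sets of variable names stand in for the varieties they define):

```python
from itertools import product

# V(x1*x2, x1*x3, x2*x3) distributes like a product of three binomials:
# each factor contributes "x_i = 0" or "x_j = 0".
factors = [({'x1'}, {'x2'}), ({'x1'}, {'x3'}), ({'x2'}, {'x3'})]

terms = {frozenset().union(*choice) for choice in product(*factors)}

# Keep only the terms imposing the fewest conditions; terms with a
# proper subset present are engulfed by the bigger component.
minimal = [s for s in terms if not any(t < s for t in terms)]
print(sorted(sorted(s) for s in minimal))
# [['x1', 'x2'], ['x1', 'x3'], ['x2', 'x3']]  -> the three coordinate axes
```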
The next hardest thing is two edges. We've already seen these polynomials; we care about where they're all zero. If I look at one of them, say x1 times x3 is zero: either x1 is zero or x3 is zero. You can perform that same kind of analysis for everything except the third polynomial, because there I have a nontrivial sum, x1 x2 plus x4 x5. Now, I have x3 showing up all over the place. So if I take x3 to be zero, then I satisfy the first equation, the second, the fourth, and the fifth, and the only thing left is this third one. So if x3 is zero, then I also have to require that x1 x2 plus x4 x5 equal zero. Okay? You might say: is that just the same as taking one variable from each of those two products and requiring it to be zero? Not necessarily. Remember that maybe x1 x2 is one and x4 x5 is negative one, so they add to zero; the sum condition is weaker than requiring that two of these variables, one from each product, vanish. Okay, so I have two conditions in that case. Now what if x3 is nonzero? Well, from the first, second, fourth, and fifth equations, the other variables had better all be zero: if x3 is nonzero, the equation x1 x3 = 0 tells us x1 is zero, and similarly x2, x4, and x5 are zero. So this in some way covers every case: clearly either x3 is zero or it isn't, and in each situation we get one component. So the irreducible components of the null variety for the two-edge three-uniform linear hyperpath are going to be these two. What I'm saying is that if I take these two varieties and union them together, I get back the whole thing. Okay? So there's the two-edge case.
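Here is a small sympy check of that case analysis (my own sketch; a generic-point substitution on each component, not a full proof):

```python
import sympy as sp

x1, x2, x3, x4, x5 = sp.symbols('x1:6')
polys = [x2*x3, x1*x3, x1*x2 + x4*x5, x3*x5, x3*x4]

# Component 1: x3 = 0 and x1*x2 + x4*x5 = 0.  Parametrize a generic
# point by solving for x4 (assuming x5 != 0).
pt1 = {x3: 0, x4: -x1*x2/x5}
print([sp.simplify(p.subs(pt1)) for p in polys])  # [0, 0, 0, 0, 0]

# Component 2: x1 = x2 = x4 = x5 = 0, with x3 free.
pt2 = {x1: 0, x2: 0, x4: 0, x5: 0}
print([p.subs(pt2) for p in polys])               # [0, 0, 0, 0, 0]
```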
And we can do this all day, right? Induction says we can just keep going and going, but it gets harder as we add edges. So it would be really great if we could jump to the general case. Okay? Now, it turns out we can. And, maybe Josh here disagrees, but I think it's not pretty; I don't think it's a pretty argument at all. But we can do it in general. So if I have a three-uniform linear hyperpath on n edges, then that thing has 2n + 1 vertices; you can draw a picture and convince yourself of that. I'm going to let m be that constant, so it's easier to work with. Okay. So what does it mean for some vector to be a null vector for this general three-uniform linear hyperpath? Well, it's going to be an m-dimensional vector, the same dimension as the number of vertices that I have, and I'm going to label the vertices with the entries x_i, just like we've seen already. Here's our picture, drawn generically, and here it's maybe a little easier to see that there are 2 times n plus 1 vertices: I get two from each of the edges, and when I get to the end, I have to account for the plus one. Okay? So if I lay out my vertices in this way, then I can take my vector and assign it as a vertex labeling. And then we want to look at the action of the adjacency hypermatrix applied to the outer-product power of this vector, and what that actually does. And what that gives us is something that looks like this: I get a new vertex labeling, where, again, for any particular vertex, I take the sum over the edges incident to it of the product of the labels of the other vertices. So for this third vertex, for example, I'm going to get x1 x2 plus x4 x5, and that's what I have here. When you write it all out in general it's a mess, which is why it's drawn this way. The top vertices are maybe a little easier because they have degree one, so there's only one term to consider. So if I just look at a top vertex, I have a product of two odd-indexed values, and because I'm interested at the moment in null vectors, this needs to be zero. So x1 times x3 needs to be zero, meaning either x1 or x3 is zero. You can do the same kind of thing the whole way through. So if you just look at the variables that have odd index, there can't be two consecutive ones that are both nonzero. Because if you do that, so if x3 and x5, for example, were both nonzero, well, then their product would be as well, and that's an issue. So you can start from that statement and work everything out, and that's what we're going to do. So I'm going to take some null vector, and from that little argument we can see that among the odd-indexed coordinates, no two consecutive ones can be nonzero. Okay, so what does that look like? What does that do for us? I'm going to let this script X be the collection of all my variables, and I'm going to let X_O be the odd ones: these are all of the variables that have odd index. Okay, so, again, whatever null vector I pick, no two consecutive elements of X_O, ordered by index, can be nonzero. And since we're thinking about varieties defined by polynomials, what I'm saying is that any set of variables that defines an irreducible component has to contain at least one of every consecutive pair of these odd-indexed variables. So what we're going to do is use those collections of variables as a starting point and build everything else from there. So how is that going to work? I'm going to take the odd indices, and I'm going to let S be a subset of them that includes at least one of every consecutive pair, exactly the condition we're interested in at the moment. I'm going to let this script S_m be the collection of all such sets. Okay, so I have that. And it's maybe not imperative for the proof, but worth noting, that the sizes of these collections are Fibonacci numbers. Fibonacci numbers are nice for us combinatorially because we can look at them recursively: there are well-known recurrences that define the Fibonacci numbers and the objects that are counted by them.
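As a quick sanity check of the Fibonacci claim, one can enumerate these sets directly; this is my own sketch, not code from the talk:

```python
from itertools import combinations

def hitting_sets(m):
    """Subsets of the odd indices 1, 3, ..., m (m odd) containing at
    least one of every consecutive pair {2i-1, 2i+1}."""
    odds = list(range(1, m + 1, 2))
    pairs = list(zip(odds, odds[1:]))
    return [set(c) for r in range(len(odds) + 1)
            for c in combinations(odds, r)
            if all(a in c or b in c for a, b in pairs)]

# These are the vertex covers of a path on the odd indices, so the
# counts follow the Fibonacci recurrence:
for n in range(1, 7):                       # n edges, m = 2n + 1 vertices
    print(n, len(hitting_sets(2 * n + 1)))  # 3, 5, 8, 13, 21, 34
```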
So in the actual proof of this stuff, those recurrences are actually very helpful when we ultimately count these components. Okay, so that's worth noting, but not strictly necessary for us at the moment. So I have these collections of subsets of the odd indices which don't omit two consecutive ones. And then what I'm going to do is generate, from each of these starting sets, a set of polynomials which gives us an irreducible component of our null variety. Okay, I'm going to let B be the set of polynomials that I generate from the starting set, and eventually we'll let script B be the collection of all of these. But before we do that, there's one little extra piece of notation I want: the polynomials p with a subscript look like this. So if we go all the way back to the first example we saw, the two-edge three-uniform case, the third vertex gave us x1 x2 plus x4 x5; those are the polynomials of that form. Okay, so I'm just going to call them p sub i, so I can talk about these things without writing all of that out. These are the conditions. Actually understanding exactly what they're saying is maybe not the most important thing, but we have this list of six or so conditions which allow us to go from a chosen set S to the set of polynomials that I'm interested in. And if I run this process for all of the sets S in the collection we considered, I'm going to call the resulting collection of polynomial sets script B. Okay, and if you're very careful about it, you can go through, apply these conditions, and produce these sets. So what we actually see is that the null variety we're interested in is the union, over all of the B's in my script B, of the variety defined on that set: take this collection script B, look at each element, each set of polynomials that's in there, define the variety on that set, union all of those things together, and I get my null variety. That's how this is going to end up working. Now, I've glossed over what it means for a variety to be irreducible, but each of these varieties is going to be irreducible, whatever that is. The definition itself is obviously important, but I'm not going to talk about it, so I guess kind of take my word that each of these is irreducible. Okay, now there's one more thing. Like in the single-edge case, where we talked about distributing intersections over unions, we only cared about some of the pieces: some of them imposed extra conditions, and they're contained in pieces we already have, so we don't worry about them. So we're only interested in the varieties we get in this union that are actually maximal. What I mean is: if we have one that's included in another, we don't care about the small one, because it's kind of engulfed by the larger one. So we want to be able to identify which ones in this union are actually maximal; we'll keep those and throw out the rest, because the rest are extraneous. So, as it turns out, if I pick a B out of my script B, I can define this Theta_B to be the collection of maximal sets of consecutive odd-indexed variables appearing in B. So if, say, B involves x3, x5, x7, and then x11, one of my collections is going to be {x3, x5, x7} and the other is going to be {x11}: I just take the maximal runs of consecutive odd-indexed variables that are in this collection B.
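Here is a little sketch of that maximal-run computation (my own illustration; the name theta is just echoing the Theta_B from the talk):

```python
def theta(B):
    """Maximal runs of consecutive odd indices in B.
    E.g. theta({3, 5, 7, 11}) -> [{3, 5, 7}, {11}]."""
    runs, run = [], []
    for i in sorted(B):
        if run and i == run[-1] + 2:     # consecutive odd index
            run.append(i)
        else:
            if run:
                runs.append(set(run))
            run = [i]
    if run:
        runs.append(set(run))
    return runs

print(theta({3, 5, 7, 11}))  # [{3, 5, 7}, {11}]
```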
Okay, and as it turns out, the varieties that are actually maximal are the ones where this collection Theta_B doesn't include a set of odd cardinality. Again, the proof of that is not the point here; this is the result, and I'm not going to give the proof. What I'm trying to illustrate is that it's possible to identify the ones that are maximal under this inclusion relation, and we can take the union over those. And then that's going to be a decomposition of our variety into its irreducible components. This whole algorithmic idea can be a little bit hard to follow. So we looked at the one-edge case and the two-edge case; here's the five-edge case. What do we have here? I think eleven different pieces. Now, I've denoted them with angle brackets, so really I'm thinking about the ideals they generate. But if I take the variety defined by each of these ideals and union them together, that's going to be a decomposition of the null variety for the five-edge three-uniform case. Okay. So now we can talk about dimension, or codimension. If I have some n-dimensional complex space and I look at some set in that space, I can look at its dimension; the codimension is just the dimension of the whole space minus the dimension of the set. So it turns out that, based on the construction of these varieties, because the generators are just single variables and these very special little polynomials, the codimension is actually equal to the number of polynomials in the generating set. It's very easy, given one of those ideals, to read off the codimension of the variety, and it's equally easy to get the dimension: take the dimension of the whole space and subtract off the codimension. So we have some way to construct all of the pieces that give us irreducible components, and we can talk about the codimension, or the dimension, of each of those. So, from this, what do we like to do in the combinatorics world? We construct a generating function. That's what we have here, in two variables. The exponent on z is going to be the number of edges in the linear hyperpath we're considering, and the exponent on y is going to be the dimension of the irreducible component we're talking about. The coefficient is going to be the count. So the number of irreducible components of the three-uniform linear hyperpath on n edges that have a particular dimension has the generating function given by this. Okay. What does this do for us? Why do we care about this stuff? Why is this interesting in any way? Well, we can think about all of this in a different way: we can talk about the characteristic polynomial of a hypergraph. Just like for general graphs, where we define it as the characteristic polynomial of our adjacency matrix, there's a way to define characteristic polynomials for hypermatrices. Okay. It's a multipolynomial resultant. Exactly how we compute these things, I'm not going to go into, but it's possible to define a characteristic polynomial for a hypermatrix, and hence for a hypergraph. And the eigenvalues that we get in the way we described are the exact same collection of complex numbers that we get as roots of the characteristic polynomial.
So because of this, we can look at two different multiplicities of these eigenvalues, one being the multiplicity as a root of the characteristic polynomial. So, since we have zero as an eigenvalue: x to what power divides the characteristic polynomial? That largest exponent is going to be our algebraic multiplicity, whereas the geometric multiplicity is going to look at the dimension of our eigenvariety. So we have two different notions. For graphs, they agree, almost. For hypergraphs, they don't. And one way to see that is that the degree of these characteristic polynomials gets large quickly: it's n(k-1)^(n-1), where n is the number of vertices and k is the uniformity, so it's exponential in n. Whereas if we think about these eigenvarieties, well, they live in n-dimensional complex space, so their dimension is bounded above by the number of vertices. And as it turns out, these characteristic polynomials have relatively few distinct roots, so all of these roots have really high multiplicity relative to the number of vertices. So the algebraic multiplicity grows quickly as n grows, whereas the geometric multiplicity doesn't; it's relatively tame, growing at most linearly in the number of vertices, whereas the algebraic is not. So these two notions aren't the same for hypergraphs. What relations, if any, do we have? So Hu and Ye, five years ago, published a paper which tried to relate the geometric structure, via a formula based on it, to the algebraic multiplicity. They do it in this way: they take the eigenvariety, break it down into its irreducible components, and they conjecture that the sum, over the irreducible components, of the dimension of the component times (the uniformity of the hypergraph minus one) raised to one less than that dimension, is upper bounded by the algebraic multiplicity of the eigenvalue. Very little explanation as to why. They don't verify it for anything. They say: well, here you go. So it would be really, really nice to verify this, maybe for some cases, or maybe even prove it's not true, or prove something stronger. That's what all of this stuff is in an effort to do. So we've looked at these three-uniform linear hyperpaths. We fully understand the number of irreducible components of each dimension, so we can compute the left side of this conjectured inequality. It would be great if we also knew the algebraic multiplicity. It takes a little bit of work, a lot of symbol pushing, but it's possible. So some work of Banff-Hammond-Gilles published last year defines the characteristic polynomials for these linear hyperpaths recursively. And if you're really, really careful about crossing the t's and dotting the i's as you iterate the recursion, it's possible to come up with an exact formula for the algebraic multiplicity of zero for any of these hyperpaths. So, for the k-uniform linear hyperpath on n edges, I'm going to denote the algebraic multiplicity of zero by d_n. I'm also going to let u be (k-1)^(k-1) and v be k^(k-2), and the reason I do that is because I want the formula to fit on one line. Okay, so there's the formula which computes the algebraic multiplicity of zero. It's something like that. Okay, so now we know both sides of this supposed inequality.
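An aside to make the growth gap above concrete: a tiny numerical sketch of my own, using the degree formula n(k-1)^(n-1) that also comes up in the question period.

```python
# The characteristic polynomial of a k-uniform hypergraph on n vertices
# has degree n*(k-1)^(n-1), so algebraic multiplicities have exponential
# room to grow, while any eigenvariety sits inside C^n and therefore has
# dimension at most n.
k = 3
for n in [5, 11, 21, 41]:   # vertex counts of P_2^3, P_5^3, P_10^3, P_20^3
    print(n, n * (k - 1) ** (n - 1))
```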
We can go about verifying it. So, to make our work a little bit easier, we can take the generating function that we had previously, in two variables, and convert it into a generating function for the finite sum in the conjecture. If you think about it for just a second: differentiating with respect to y sends y^d to d times y^(d-1), which takes the dimension of each zero-eigenvalue irreducible component and brings it around front as a multiplier, which is the form we want; then we just plug in k - 1 for y. And that gets us a one-variable generating function, after adjusting the first couple of coefficients to make it all work together. So it's possible to get a generating function for the left side. So we have an explicit formula for the right side and a generating function for the left side, and, again, it's not too bad to verify that the conjecture actually holds in this particular situation. So that's nice: we verified the conjecture for one very small, very narrow case. But in reality, this algebraic multiplicity, if you look at it asymptotically, grows like n times 4^n, whereas the sum on the other side of the inequality grows like 2.7^n. So we wonder if something stronger is true. But it's hard to say at this point, because this is all we have to go with: we know the structure of these null varieties for three-uniform linear hyperpaths, and that's it. So it would be great if we could find decompositions for other classes of hypergraphs, to get some more information and be able to say something more specific about this conjecture. Is it true all the time? Is it not true? Is it true but uninteresting because something stronger is true? So that's where we're at.
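Here is a toy sympy illustration of that manipulation (entirely my own sketch; the coefficients below are made up, standing in for the talk's actual bivariate generating function):

```python
import sympy as sp

z, y = sp.symbols('z y')
k = 3

# Toy stand-in series: the coefficient of z^n * y^d counts dimension-d
# irreducible components of the null variety for the n-edge hyperpath.
F = sum(((n + d) % 3 + 1) * z**n * y**d
        for n in range(1, 5) for d in range(1, 5))

# d/dy sends y^d to d*y^(d-1), bringing each dimension down in front;
# substituting y = k - 1 then gives, for each n, the conjecture's
# left-hand sum of dim * (k-1)^(dim - 1) over components.
G = sp.expand(sp.diff(F, y).subs(y, k - 1))
print(sp.Poly(G, z).all_coeffs()[::-1])   # one number per edge count n
```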
So that's all I have. Thank you all for coming and listening. All right, if we could all thank Grant for an excellent talk. And are there any questions for our speaker? What is the meaning of this conjectured inequality? Somebody would not conjecture such a thing randomly; it might mean something. Yeah, great question, Laszlo. So the brief idea that they give behind this is that, in some way or another, if you take this sum over the irreducible components but, instead of the dimension of each piece, you use the dimension of the whole space, then that quantity should somehow bound the algebraic multiplicity. So if, in each of those summands, you replace the dimension of the whole space with the dimension of the irreducible component, which is bounded above by the dimension of the whole space, then the bound should still hold; it'll be weaker, but it should still hold. So that's the brief motivation that they provide. But the exact reason why the version with the dimension of the whole space should be upper bounded by the algebraic multiplicity, I'm not exactly sure. Thank you. Great question; thank you, Laszlo. And I could elaborate a little bit on one aspect of that, which is that you notice the summands are these n(k-1)^(n-1) terms, which are the degrees of the characteristic polynomial. So that's clearly playing a role, but exactly how, even I'm not sure. And I can tell you that when this was run by an algebraic geometer, they looked at us cross-eyed, because it seems to come out of nowhere. I have two questions for clarification. Hello. Hi, Tony. Yes, hello from Kutztown University. So on page 15, I think slide 15, just a clarification. No, no, sorry, 16 then. Yes. Do you mean k-uniform, like complete k-uniform hypergraphs on n vertices, in the last line? Because it's stated generally. It is general, Tony. If I take any k-uniform hypergraph on n vertices, then the degree of the characteristic polynomial is that same constant; it depends only on n and k, not on the structure of the hypergraph. Oh, okay. I see. Okay, got you. And then the other thing I want to ask is, on the second-to-last slide or something like that, slide 18 or 19, you have some square brackets around, yes, that's right, P_n^k; do they stand for something special? No, they're really just parentheses, Tony. I just do that so I can see that I've closed the right number of them: if all of these open delimiters were parentheses, it would be just my luck not to close the right number. So I do it to help me see things more easily. Oh, sure. Thank you. Thank you. Yeah, thanks, Tony. Maybe the inner ones should be ordinary parentheses, but yeah, I think just treat them all as parentheses. All right. Any other questions for Grant? Okay, if not, let's thank our speaker one more time. Thank you. And thank you, everyone, for coming, and have a great weekend.