OK, so let's start. I want to quickly finish with matroids, and then after matroids I want to go to the Lefschetz world beyond positivity. At the end of the last lecture, I stated that for matroids we have the Hodge-Riemann relations and the hard Lefschetz theorem. We stated the theorem because we left off there last time; now I want to quickly explain its applications and give an indication of its proof. So we had M, a matroid, and B(M), the associated fan — the Bergman fan of the matroid. And then we looked at the ring A(B(M)) and what it satisfies. Well, first of all, Poincaré duality — we didn't talk about the rank of a matroid, so let me just say that the fundamental class sits in the degree corresponding to the longest chain of flats. And then we have hard Lefschetz for all ample L. So here's a way to state it: we take L in A^1(B(M)) ample, and what I think of as ample here is the restriction of the ample cone of the free matroid on the same set of atoms, which has a complete fan — the free matroid is just the Boolean lattice, every possible subset of the index set, so I have a complete fan, and there I know what ample means; I can just restrict it. For these classes we have hard Lefschetz and Hodge-Riemann. The application of this is usually to questions in combinatorics concerning matroids. So let me state the simplest one of them, an application to what is called the characteristic polynomial of a matroid, χ(M). You can define χ(M) in several ways; let me just define it recursively. What I can do is delete an element from the matroid, and I can contract one — without going into the details of contraction, think of a matroid as an abstraction of the concept of a vector configuration; then deletion of an element is clear, and contraction is really just a projection along that element. So I have χ of M with e deleted.
And then the recursion is χ_M(λ) = χ_{M∖e}(λ) − χ_{M/e}(λ). So you think of them as vectors and you project them to the vector space mod e? Yeah, exactly. Do you omit the zero vectors? You omit the vector that you contracted, but you allow for loops — you allow for zero vectors. I will explain now what effect they have: they basically have zero effect. Here's the reason. You normalize by the simplest possible cases, the matroids on one element. If the one element is a loop, the characteristic polynomial should be 0. And if it is independent, it should be — I hope I get the signs right — λ − 1. If you come from a graph, then this is essentially the chromatic polynomial: you multiply the whole thing by λ, and that's essentially the chromatic polynomial. And then you can look at questions about the coefficients of this polynomial. Where does the λ intervene — is λ the variable? Yes, the characteristic polynomial is a polynomial in λ, and this recursion defines it inductively, starting from these two base cases. But those are matroids of what kind? Ah, OK. So I told you, a special case of matroids are those coming from graphs. If I have a graph, I can take its incidence matrix — the matrix of vertices and edges, where I orient the edges in some way. And then for every edge I make a vector: a 1 and a −1 in the coordinates corresponding to its two endpoints.
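The recursion above determines χ_M once the base cases are fixed. As a sanity check, one can also compute χ_M directly from the rank function via the standard Whitney-rank expansion χ_M(t) = Σ_{S⊆E} (−1)^{|S|} t^{rk(E)−rk(S)}, valid for loopless matroids. A minimal sketch (the function names are mine, not from the lecture):

```python
from itertools import combinations

def char_poly(ground, rank):
    """Characteristic polynomial via the Whitney-rank expansion
    chi_M(t) = sum_{S subset of E} (-1)^|S| t^(rk(E) - rk(S)),
    for a loopless matroid. Returns coefficients indexed by power of t."""
    E = list(ground)
    d = rank(frozenset(E))
    coeffs = [0] * (d + 1)
    for k in range(len(E) + 1):
        for S in combinations(E, k):
            coeffs[d - rank(frozenset(S))] += (-1) ** k
    return coeffs

# Uniform matroid U_{2,3}: single elements have rank 1, larger sets rank 2.
rank_u23 = lambda S: min(len(S), 2)
assert char_poly([0, 1, 2], rank_u23) == [2, -3, 1]   # chi(t) = (t-1)(t-2)
```

Note the coefficients alternate in sign, as claimed later in the lecture.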
And this gives me a vector configuration, and a vector configuration gives me a matroid. Over which field? Yes — let's work over the reals. On one element there are exactly two matroids: the one where the vector is the zero vector, and the one where it is the unique independent vector. That is what I'm saying here: these are the two matroids on one element. So if I have the matroid with one non-zero vector, the second case, we should get λ − 1. But in the recursion you are allowed to delete — do you apply it when M consists of one element? Let's not allow it when M consists of one element; let's not go down to the empty set. You can also define the polynomial directly, via a closed formula without the recursion, but let's not go there. OK, so let's forget the general case entirely: think of matroids just as graphs, and think of chromatic polynomials. What is the chromatic polynomial — did you define it? Yes, I think we did, maybe in the first lecture. The chromatic polynomial is the function, in terms of λ, counting how many proper vertex colorings there are of your graph with λ colors. That's it. So for the graph in the second case, you want the two vertices to have different colors — and the coloring is of the vertices or of the edges? It's a coloring of the vertices. Then you get λ(λ − 1). Yes — there is a slight difference between the characteristic polynomial and the chromatic polynomial, but it is a factor of λ, so that doesn't matter for the question. You can think of it as taking one vertex and fixing a coloring there. So now you can ask questions about the coefficients of this polynomial.
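The graph-to-vector-configuration construction just described can be made concrete in a few lines; this is my own illustration, not code from the lecture. The circuits of the resulting matroid are exactly the cycles of the graph:

```python
def incidence_vectors(n, edges):
    """For a graph on vertices 0..n-1 with an arbitrary orientation,
    build one vector per edge: +1 at one endpoint, -1 at the other."""
    vecs = []
    for (u, v) in edges:
        col = [0] * n
        col[u], col[v] = 1, -1
        vecs.append(col)
    return vecs

# Triangle: the three edge vectors satisfy e01 + e12 = e02, so the cycle
# gives a linear dependence -- a circuit of the graphic matroid.
e01, e02, e12 = incidence_vectors(3, [(0, 1), (0, 2), (1, 2)])
assert [a + b for a, b in zip(e01, e12)] == e02
```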
And it turns out that you can compute them algebraically. So, the question: what can we say about the characteristic polynomial χ in λ? Well, you could ask whether it is real-rooted, for instance. That is not true — the roots of chromatic polynomials are dense in the complex plane. The next thing you can ask about is the coefficients, and you observe that the absolute values of the coefficients are unimodal: they rise up to some point, and then they fall again. The coefficients themselves? No, they alternate in sign — so the absolute values. And it turns out, if you think about it, something stronger seems to be true. And they're integers? They're integers, yes — this you can see quite easily from the recursion formula. I mean, it's clear that the chromatic polynomial satisfies a recursion formula like that, right? If you want to count the colorings of a graph: remove an edge. Now, are all colorings of the smaller graph colorings of the original graph? Clearly not, because sometimes the two endpoints get the same color, so you have to subtract the colorings in which they agree — and those are exactly the colorings of the contraction. So χ of the graph equals χ of the deletion minus χ of the contraction. Do you see it? Is it fine? But you would have to check the dependencies when you identify — ah, OK, it really looks OK, yes. But then what about the base case? If the argument is by induction, it's the usual problem of doing things by induction: you have to know where the recursion starts being valid. So here, take just this graph on two vertices.
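The deletion-contraction recursion for the chromatic polynomial can be checked directly against brute-force counting of proper colorings. A small sketch, with names of my own choosing:

```python
from itertools import product

def count_colorings(vertices, edges, k):
    """Brute-force value of the chromatic polynomial at k:
    count proper colorings of the vertices with k colors."""
    return sum(
        all(c[u] != c[v] for (u, v) in edges)
        for c in (dict(zip(vertices, cs))
                  for cs in product(range(k), repeat=len(vertices)))
    )

def chi(vertices, edges, k):
    """Deletion-contraction: chi(G) = chi(G - e) - chi(G / e)."""
    if not edges:
        return k ** len(vertices)                     # edgeless graph: k^n
    (u, v), rest = edges[0], list(edges[1:])
    # contraction: identify v with u, drop loops, merge parallel edges
    merged = {tuple(sorted((u if x == v else x, u if y == v else y)))
              for (x, y) in rest}
    merged = [e for e in merged if e[0] != e[1]]
    return (chi(vertices, rest, k)
            - chi([x for x in vertices if x != v], merged, k))

tri = ([0, 1, 2], [(0, 1), (0, 2), (1, 2)])
assert all(chi(*tri, k) == count_colorings(*tri, k) for k in range(5))
```

For the triangle both sides give k(k−1)(k−2), as expected.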
And then apparently the recursion does not hold there? It's actually fine — you can actually contract once more: this here is this minus this. It still works. Well, what is the chromatic polynomial of the graph on two vertices with no edge? The two vertices are independent; there's no relation between them, you can color them any way you want, so it's λ². And with the edge it's λ(λ − 1) = λ² − λ. Anyway — let's color graphs in the break, OK? It's a convention: fix one vertex arbitrarily and fix the color there. And really, you can also divide by λ − 1, because that will also always appear as a factor; this gives the reduced characteristic polynomial. OK, fine. You can ask about the coefficients here, and it turns out that the coefficients a_i have the property, or seem to have the property — for example, if you have a graph without edges, then the chromatic polynomial is a power of λ. But then we should be assuming connectedness, to define it in a cleaner way? Yes, you have to be careful with connectivity; for disconnected graphs you have to divide by some power of λ, so it's not exactly what I have written. It's just to give you an intuition anyway — my main point is not this statement but proving these Lefschetz theorems.
But you have to be careful how to formulate it exactly, because there are small details that are easy to confuse. It turns out that you can access the coefficients of this polynomial by computing certain intersection numbers. The a_i are the absolute values of the coefficients? The a_i are just the coefficients of the polynomial χ; they alternate in sign, and it's not hard to show that they alternate. And it turns out that you can compute their absolute values as intersection numbers: |a_i| is the degree of α^i · β^{d−i}, where d is the degree of the fundamental class. OK, so let me give you a coordinatized version. I define α_j as the sum of the x_F over all flats F — remember, the F are subsets of the index set — that contain my given element j, where j is an element of my set of atoms. And β is defined analogously: β_j is the sum of the x_F over those F that do not contain j. It turns out — it's not hard to show — that the class of α_j in A(B(M)) is independent of j, so all the α_j are the same, and likewise the β_j. And you can now imagine why such intersection numbers measure something combinatorial. If I multiply α_1 with α_2, what do I get? Well, first of all, α_1 is the sum over all flats that contain the given element 1, and then I multiply with α_2, which runs over all flats that contain 2. If you think about it, the only non-trivial products come from — well, let's look at the combinatorics of the fan again. What is x_F? It's not the ray; it's the variable.
Remember that in B(M) I have a ray for every flat — the rays of B(M) are in correspondence with elements of the lattice of flats, without the empty set and the total set. And remember that these rays span a cone together if and only if the corresponding flats are related by inclusion, i.e. form a chain. Now you see: if I multiply α_1 with α_2, I really start with those flats that contain 1, and the only non-trivial products can come from flats that form a chain with them and contain 2. So I start with 1, I have all the flats containing it, then I restrict to those that also contain 2, and so on and so forth. Multiplying through, I see that in some way I am counting chains of flats in my matroid — counting chains of inclusions. I remember that you defined this fan the same way as last time; are the x_F the same as the notation e_F from last time, or no? Well, e_F is the ray. I am thinking of x_F as the characteristic function of the ray: x_F is an element of the ring A(B(M)), while e_F is an element of B(M). That's the distinction. So what is x_F? It's the characteristic function of the ray e_F — think of A as the ring of conewise polynomial functions. So this is the function you described before, which on the ray is the coordinate and on the other rays is 0? Yes: e_F is an element of the fan, and x_F is an element of the ring — this ring of conewise polynomials, modulo the ideal of global linear functions. OK, so then you claim that something is independent? Yes, I define these classes α and β in the ring.
Right — first I gave you the ring of conewise polynomials, before modding out by the global linear functions, and I gave you an element α_j that depends on j. Now I take this element, I mod out the global linear functions, and I claim that the class of α_j is independent of j: α_j = α_k in A(B(M)). That's what I'm saying. If you think about it, this is just because you mod out the global linear functions — the global linear functions make all the basis vectors the same. Think of it again in terms of the fan. It looked somewhat like this: the rays e_0, e_1, e_2, which sum to 0. Now let me take the linear function that is 1 on this ray and −1 on that ray. OK — maybe that was not the smartest way to start; you have to look at the corresponding definitions: you take the flats F containing j. What I'm telling you here in the picture is why α_0 − α_2 is a global linear function, and therefore these classes are equivalent in the quotient — the classes are the same. For which matroid is this? You can always take the free matroid for this, because all of these α_j and β_j can be thought of as coming as restrictions from the free matroid; all of this is compatible with restriction from the free matroid. So I can always think of evaluating these relations inside the free matroid, where the rays correspond to all the subsets of the index set. And now I'm telling you α_0 and α_2 are the same. Why is that?
Well, I'm claiming that α_0 − α_2 is a linear function. And if you think about it, it's exactly the function that is 1 on this ray, 0 on the hyperplane that divides them, −1 on that ray, and then extends linearly to all the other rays in these half-spaces. That's the calculation you make; that's the geometric image you should have. The matroid is on 0, 1, 2? Yes. OK — it's not hard to verify. Now, the next claim is that these α and β are nef — again, not so hard to see. So they are not strictly convex on your fan, but they are in the closure of the strictly convex ones; they are convex, that's it. And finally — I will not go over the detailed calculation of why these intersection numbers are the coefficients of the polynomial; I just want to convince you that computing a product like that has combinatorial meaning. Let's just take a power of the alphas. What happens if I take α_1? Inside my lattice of flats — I have the atoms at the bottom, and above them the higher-rank flats — taking α_1 means I restrict to the part of the poset above 1. If I then multiply α_1 with α_2, I restrict to the intersection of those two pieces, so I'm taking the flats above both 1 and 2, and so forth; I could multiply with another one. And you see that this intersection number has combinatorial meaning, and that is exactly what is happening there.
And if you think about it, counting these chains in the end gives you the characteristic polynomial. And that's why it is important to have the Hodge-Riemann relations here. Because then — in the same way that we discussed the Alexandrov-Fenchel inequality last time, following from the Hodge-Riemann relations — now, because these α and β are nef, we get log-concavity of these numbers: deg(α^i β^{d−i})² ≥ deg(α^{i−1} β^{d−i+1}) · deg(α^{i+1} β^{d−i−1}). And that is it. OK, so that's why it is important. You get this inequality using what? The Hodge-Riemann relations. Last time — or maybe it was the second lecture — I explained how you get log-concavity, the Alexandrov-Fenchel inequalities, from the Hodge-Riemann relations. The idea was: I write down the Hodge-Riemann form in degree 1 for two ample classes, say A and B, on the subspace generated by these two ample classes. The Gram matrix has entries deg(A²), deg(AB), deg(AB), deg(B²). And now I want to understand the signature of this matrix. The Hodge-Riemann relations tell you there is one positive eigenvalue, coming from degree 0, and all the other ones are negative. So whatever this matrix is, it is definitely indefinite, and in particular its determinant cannot be positive: deg(A²)·deg(B²) − deg(AB)² ≤ 0 — that's just the formula for the determinant. And then: our classes are only nef, but I can approximate nef classes by ample classes, so whatever inequality I get from this for ample classes, I also get for nef ones.
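The log-concavity promised by this argument can be observed directly on small examples. Here is a quick check (my own code, not from the lecture) that the absolute values of the chromatic polynomial coefficients of K4 are log-concave, using deletion-contraction on coefficient lists:

```python
def chrom_coeffs(vertices, edges):
    """Chromatic polynomial as a coefficient list (index i <-> k^i),
    via deletion-contraction: chi(G) = chi(G - e) - chi(G / e)."""
    if not edges:
        return [0] * len(vertices) + [1]              # edgeless: k^n
    (u, v), rest = edges[0], list(edges[1:])
    a = chrom_coeffs(vertices, rest)                  # deletion
    merged = {tuple(sorted((u if x == v else x, u if y == v else y)))
              for (x, y) in rest}
    merged = [e for e in merged if e[0] != e[1]]      # drop loops
    b = chrom_coeffs([x for x in vertices if x != v], merged)  # contraction
    b = b + [0] * (len(a) - len(b))
    return [p - q for p, q in zip(a, b)]

# K4: chi = k^4 - 6k^3 + 11k^2 - 6k; coefficients alternate in sign.
c = chrom_coeffs([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
assert c == [0, -6, 11, -6, 1]
absc = [abs(x) for x in c]
# log-concavity of the absolute values: a_i^2 >= a_{i-1} * a_{i+1}
assert all(absc[i] ** 2 >= absc[i - 1] * absc[i + 1]
           for i in range(1, len(absc) - 1))
```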
And that's it. Yes — I'm cheating a little there, yes. OK, let me not spend too much time on the proof, but let me give you the idea. Essentially, we use McMullen's argument — this iterative, hiking-up-the-mountain argument for Hodge-Riemann and hard Lefschetz. There is an ascent part, where we prove hard Lefschetz for a matroid by using the Hodge-Riemann relations in codimension 1. And then, to prove the Hodge-Riemann relations, we use a deformation argument. If I am in the free matroid, that is quite clear: for the free matroid I can start from projective space and iteratively blow up, and these blowups are my deformations. So I introduce this one first, then I blow up here, and I blow up here, and that's it. I know that hard Lefschetz is true there, and I control the signature through these blowups — they are the analogues of the flips in the original proof. That's the idea. If the matroid is more complicated — say of lower rank — then I have to argue a little that I can still define these blowups. For instance, I could look at the matroid M on ground set {0, 1, 2} whose proper flats are {0} and {1, 2}. Then the fan is just this: the ray e_0 and the ray e_{12}. And I have to argue that there is a nice way to go from a skeleton of projective space — so I take not the fan over the simplex itself, but the fan over a skeleton of the simplex, which again is, algebraically, just projective space. And I have to argue that I can define — this is not a refinement now, so I don't immediately have a nice pullback map on the conewise linear, or conewise polynomial, functions.
But the way I can think about this is: I extend linearly to the free matroid and define my pullback there, and then I have a pullback map. So the idea is to go from the skeleton to M_0, define a pullback map to M_1, and so on, until I am at the Bergman fan of my favorite matroid — and to trace the Hodge-Riemann relations through this deformation. That's the idea. It is, again, just the semi-classical proof of the Hodge-Riemann relations. And that is what we knew how to do in the case of positivity, where we have the Hodge-Riemann relations, where we have ample classes. What I wanted to start with today is finally to go beyond that. I told you last week that the coolest version of the Lefschetz theorems that we have is actually one where there are no more ample classes. This is what we will go to now, and this we will do in detail for the rest of the lectures. So let me erase, make some space, and say a little something. So now you are proving something which is not the same as before? I will restate what is proved now, OK? So: Lefschetz and hard Lefschetz without positivity — or whatever you want to call it: ample cone, projectivity, convexity. So we really cannot use the Hodge-Riemann relations in any nice way. And the theorem that I will focus on is the case of Σ, a triangulated sphere of dimension d − 1. And now the first difference is that I will allow any field: k any field — I will just impose that it is infinite. And then I consider the ring A(Σ), parametrized by a linear system of parameters θ. And Σ should be a homology sphere over k? Again, yes. As I said last time, I say triangulated sphere for short, but I want a k-homology sphere. And homology sphere really in the weakest sense.
So it's a homology manifold, meaning that the links of vertices are again spheres — again, smaller complexes that have the homology of a sphere of the appropriate codimension. Another way of saying this would be Gorenstein simplicial complexes — think Gorenstein, with fundamental class in degree d. And what I'm considering: for the generic Artinian reduction, and for l in A^1(Σ; θ) again a generic element, we have hard Lefschetz — meaning an isomorphism from degree k to degree d − k, induced by multiplication with the power l^{d−2k}. And then we have the replacement for the Hodge-Riemann relations, which we call the Hall-Laman relations — and we will see later where they come from. Maybe I chose the name unfortunately, because the acronyms are the same. So what are the Hall-Laman relations? Well, the Hodge-Riemann bilinear form can still be defined: Q_k^l sends a and b to the degree of a·b·l^{d−2k}. The Hall-Laman relations say that this quadratic form does not degenerate at any squarefree monomial ideal — it does not degenerate on I, for I a squarefree monomial ideal. And this is the innovation that goes beyond the classical techniques for the proofs of hard Lefschetz; it's really an entirely new approach. So I will first follow the 2018 proof, and then towards the end I will sketch a second proof, which is joint with Stavros Papadakis and Vasiliki Petrotou. And both rely on this non-degeneracy of the pairing at many, many subspaces. So that is the theorem, and this is what will occupy us for the rest of the lectures. All right. And the story starts with a rather simple lemma — essentially a basic linear algebra lemma.
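To fix notation, the two statements just made can be written compactly — this is only a restatement of what was said above, in the same symbols:

```latex
% Hard Lefschetz for a generic Artinian reduction of a (d-1)-dimensional
% k-homology sphere \Sigma and a generic l \in A^1(\Sigma):
\[
  \cdot\, l^{\,d-2k}\colon\; A^{k}(\Sigma) \;\xrightarrow{\ \sim\ }\; A^{d-k}(\Sigma)
  \qquad (0 \le k \le d/2).
\]
% Hall--Laman relations: the Hodge--Riemann-type bilinear form
\[
  Q^{l}_{k}(a,b) \;=\; \deg\!\bigl(a\, b\, l^{\,d-2k}\bigr)
\]
% does not degenerate on I \cap A^{k}(\Sigma)
% for any squarefree monomial ideal I.
```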
So let's focus on the first non-trivial isomorphism. Say Σ is of dimension d − 1 = 2k, and we want to prove the middle isomorphism — the first non-trivial one — from A^k(Σ) to A^{k+1}(Σ). We want the mystery map L here; we want the isomorphism. So how would we attack this? Well, at first it seems rather hopeless: how do I even describe a generic element? Well, I could try to say: what happens if I just take the variable corresponding to a single ray, a single vertex v? I don't really understand the generic element that well, but what I understand immediately is the kernel and the image of multiplication by x_v, from A^k to A^{k+1}. What is multiplication by x_v? It's just pullback to the star of the vertex, and so the kernel is just A^k of Σ relative to the star of the vertex: A^k(Σ, star of v in Σ) — this is in degree k, of course. And the image, similarly: I pull back to the star of the vertex, so I get A^k(star of v in Σ), and then I multiply with x_v. So at least I know what kernel and image are. Now what would be the next thing? Well, I take another map — say the variable x_w corresponding to another vertex. I could try to multiply with this map, but again, it probably has a kernel, and the image is also rather small. So how do I get further? Well, I could look at a generic linear combination of x_v and x_w. And what would be the ideal way for things to behave here?
Well, the ideal thing would be that the kernel of the combination is the intersection of the kernels. That is my hope — I want to create an isomorphism in the end, so I want the kernel of the generic linear combination to be as small as possible. And similarly, for the image of the generic linear combination, the best thing I could hope for is that it is the span of the individual images. And here this '+' in quotation marks means generic linear combination. So how do I get a handle on a generic linear combination? This is where a very basic and simple lemma, essentially going back to Kronecker, comes in. Yes, Maxime, yes — you're spoiling it: the Kronecker quiver. So: the lemma goes essentially back to Kronecker. I have X and Y, two vector spaces over k, and a and b linear maps from X to Y. And then my first conclusion — let me write the conclusions on the right — I want that the kernel of the generic linear combination of a and b is the intersection of the kernels, ker a ∩ ker b. So how do I ensure that? Well, here's a nice sufficient condition: I take the kernel of a and map it under b — so now I'm sitting in Y — and I intersect it with the image of a. If this intersection is trivial, b(ker a) ∩ im a = 0, then the conclusion holds. So this is the first point. And then I can write down the dual: I look at b^{−1}(im a) and at what it spans together with ker a; if that is all of X, then the image of the generic linear combination is the span of the images. So — this goes back to Kronecker, a very simple and very beautiful lemma. You can do all kinds of very beautiful stuff with it.
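Kronecker's criterion is easy to test on a toy pair of matrices. The sketch below (my own example) checks, by exact rank computations over the rationals, that when b(ker a) ∩ im a = 0 the kernel of a generic combination really is ker a ∩ ker b:

```python
from fractions import Fraction

def rank(rows):
    """Exact rank over Q by Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(rk, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col]:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

# a, b : k^3 -> k^2.  ker a = <e2, e3>, im a = <e1>, b(ker a) = <e2>,
# so b(ker a) ∩ im a = 0 and Kronecker's criterion applies.
a = [[1, 0, 0],
     [0, 0, 0]]
b = [[0, 0, 0],
     [0, 1, 0]]
n = 3
# dim ker(a + t*b) for a generic t (t = 1 happens to be generic here):
combo = [[ai + bi for ai, bi in zip(ra, rb)] for ra, rb in zip(a, b)]
dim_ker_combo = n - rank(combo)
# dim(ker a ∩ ker b) = n - rank of the stacked matrix [a; b]:
dim_ker_both = n - rank(a + b)        # list concatenation stacks the rows
assert dim_ker_combo == dim_ker_both == 1
```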
But it turns out that, for a miraculous reason, it works even better if you have a nice intersection ring, or a nice ring like ours, to work with — because a small but beautiful miracle happens if we consider this. So let's say we want to prove the first condition. Let a be the previous map — multiplication by x_v — and b the new component, the perturbing component or whatever you want to call it, which in this case is multiplication by x_w. Let me just arbitrarily decide to prove one of the two conditions — say the first one. So we want to look at x_w · ker(x_v), and I want to intersect it with im(x_v), and I want this intersection to be 0. First observation: if they intersect, they must intersect inside the ideal of x_w, so I can just intersect once more with the image of x_w — that's the same thing. Let me write this separately, not to be confusing: this is the same as x_w · ker(x_v), intersected with, now in a bracket, im(x_v) ∩ im(x_w). Didn't you just use some decomposition there — each of them already contains the constrained image? Well — there is a non-trivial ingredient here that we will see. There must be, otherwise I wouldn't have to restrict to a generic linear system of parameters. Maybe I should also give the example where a non-generic one is not enough — I will do this in the next section; I think it fits better there.
OK, so now notice: ker(x_v) is a subspace of A^k(Σ), and im(x_v) is a subspace of A^{k+1}(Σ), and they form exactly orthogonal complements of each other under the Poincaré pairing. Which duality between the spaces are you using? Yes — that is the special thing about this situation: the spaces in complementary degrees are dual. That's the beauty of this. So we have orthogonal complements. I mean, yes, everything is tautological, everything is trivial here, but let me still say it. So I have orthogonal complements. But now I restrict to the ideal of x_w. So I have x_w · ker(x_v), and I have im(x_v) intersected with the ideal of x_w, both inside the ideal of x_w. This ideal in degree k is isomorphic to x_w · A^k(star of w), so both of them live in the star of this vertex. Now, the link of w is a sphere of codimension 1, so this here is isomorphic to A^k(link of w in Σ). And now I can look at x_w · ker(x_v) and im(x_v) there — and they are orthogonal complements in A^k(link of w in Σ). But now let's go back to the criterion that I wanted to verify — conveniently, it is still here: the intersection should be 0. I have orthogonal complements. When is the intersection of orthogonal complements 0? The intersection of orthogonal complements in A^k(link of w in Σ) is 0 if and only if the Poincaré pairing A^k(link) × A^k(link) → k does not degenerate on either of them. To the field k, not the reals.
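The step "the intersection of orthogonal complements is 0 iff the pairing does not degenerate on either of them" has a tiny illustration in the hyperbolic plane — my own toy example, not from the lecture. An isotropic line is its own orthogonal complement (degenerate restriction, non-zero intersection), while an anisotropic line meets its complement only in 0:

```python
def Q(x, y):
    """Hyperbolic form on k^2: Q((x1,x2),(y1,y2)) = x1*y1 - x2*y2."""
    return x[0] * y[0] - x[1] * y[1]

u_iso = (1, 1)   # isotropic line: Q degenerates on <u_iso>
u_ani = (1, 0)   # anisotropic line: Q is nondegenerate on <u_ani>

# The complement of <(1,1)> is cut out by y1 - y2 = 0, i.e. it is <(1,1)>
# itself: the line meets its orthogonal complement in itself, not in 0.
assert Q(u_iso, u_iso) == 0

# The complement of <(1,0)> is <(0,1)>, which meets <(1,0)> only in 0.
assert Q(u_ani, u_ani) != 0 and Q(u_ani, (0, 1)) == 0
```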
Ah, thank you, yes, into the ground field; and it should not degenerate on either of them. So this Kronecker lemma, which is basic representation theory of the Kronecker quiver, plays for some miraculous reason beautifully with spaces that are dual to each other. It is kind of a miracle; they couldn't go better with each other. It's like white wine and fish, I don't know. So now you suddenly see why constructing a Lefschetz element, constructing an isomorphism, is related to non-degeneracy of the pairing at subspaces. That's it. So that is the property that we want to prove. But how do the degrees match? The orthogonal complement involves ker(x_v) and im(x_v) in different degrees, k and k + 1, so the pairing goes to degree 2k + 1? Yes, right: on σ the pairing is A^k × A^{k+1} → A^{2k+1}, because I took a 2k-dimensional sphere. And then I pass to the link: I multiplied by x_w, so I pulled back to the link of the vertex w, where the pairing is now degree k times degree k into degree 2k, because now I am on a (2k − 1)-dimensional sphere. But how do you know those things are exactly orthogonal when you pass to the link? If you think about it, it is just the pullback of the pairing, using the relation between σ and its links. OK, let me finish what I wanted to say, and then we can discuss over the break. So what do you want to prove inductively? Well, you want to prove the following property.
So we want to prove the transversal prime property: for every subset W of the vertex set of the sphere σ, if I take a generic linear combination of the x_v, where v runs over the elements of W, then the kernel of this combination is exactly the intersection of the kernels of the individual x_v. And dually (and this really is equivalent, because these spaces are dual to each other): the image of a generic linear combination of the x_v, v running over W, is exactly the span of the individual images. The transversal what property? Transversal prime, because I take the torus-invariant prime divisors and I want to say that they are transversal in some sense. I don't know whether it's a good name, but it's the name I chose, so deal with it. OK, and we want to prove this inductively, by adding vertices one by one. That's the goal. And this gives us the idea for constructing the middle Lefschetz isomorphism. What is more complicated, and what I still have to explain, is this: first, I will argue that one can always reduce to proving the middle Lefschetz isomorphism, so that is the critical one. But second, and more importantly, I have to argue that we can actually close the induction, that is, prove this non-degeneracy of the pairing inductively. And it will turn out that this is proven using a Lefschetz property in positive codimension. So we have to prove these things hand in hand: we have to exploit the non-degeneracy of the pairing at the kernel and at the image. Actually, it's enough if you do it at one of them.
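As a toy illustration of the statement (hypothetical matrices standing in for the pullback maps x_v, not actual face-ring computations):

```python
import numpy as np

def kernel_dim(M, tol=1e-10):
    """dim ker M for a matrix acting on column vectors."""
    M = np.atleast_2d(M)
    return M.shape[1] - np.linalg.matrix_rank(M, tol)

# Two "pullback maps" on a 2-dimensional space:
A = np.array([[1., 0.], [0., 0.]])   # ker A = span(e2)
B = np.array([[0., 0.], [0., 1.]])   # ker B = span(e1)

# ker A ∩ ker B = 0, and a generic combination t*A + B sees exactly that:
for t in (1.0, 2.0, -3.5):
    assert kernel_dim(t * A + B) == 0

# a non-generic choice (t = 0) overshoots the intersection of the kernels:
assert kernel_dim(0.0 * A + B) == 1
```

The transversal prime property asserts that for the maps x_v this generic behavior always holds: the kernel of a generic combination is never larger than the intersection of the kernels.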
And this is the more complicated part, what is left to explain. But now, maybe a 10-minute break? Yes? All right. So let me first say clearly why, once we have proven the transversal prime property, we have at least established the middle Lefschetz isomorphism. Take the transversal prime property for W equal to the entire vertex set of σ. Then the generic linear combination of the x_v, with v running over all vertices of σ, has kernel equal to the intersection of the kernels of all the pullback maps. But by Poincaré duality, this intersection must be 0. So if we have proven the transversal prime property for the entire vertex set, the kernel of the generic linear combination is 0, and we are done: that is the middle Lefschetz isomorphism. So now what we actually want to do is prove this non-degeneracy of the pairing at a subspace, and this is what I call biased pairing theory. I will go over this theory of understanding the Poincaré pairing at a subspace in a moment. Maybe before I do that, I should explain why the genericity of this θ is necessary. Why should θ be generic? So let me construct a sphere σ together with a linear system of parameters θ where the genericity is visibly necessary. The trick is to start with a very simple sphere and a θ that is not a good linear system of parameters. By the way, about the genericity: you have θ and then the Lefschetz element ℓ; is it genericity for the pair? Yes, it's genericity for the pair: θ together with ℓ has to be generic. Generic in the sense of a Zariski-open condition? Yes. So, why should θ be generic?
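Spelled out, the reduction just stated is a one-line computation (for a 2k-dimensional sphere σ, so that A^k and A^{k+1} are the middle degrees):

```latex
\ker\Bigl(\,\sum_{v\in V(\Sigma)} c_v\,x_v \colon A^k \to A^{k+1}\Bigr)
 \;=\; \bigcap_{v\in V(\Sigma)} \ker(x_v)
 \;=\; \bigl\{\,a\in A^k : a\cdot A^1 = 0\,\bigr\} \;=\; 0,
```

where the middle equality holds because A^1 is spanned by the x_v, and the last is Poincaré duality: since A is generated in degree one, an element of A^k annihilating all of A^1 annihilates A^{d-k}, hence pairs trivially with everything and must vanish.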
And I want to show that there are spheres with a bad θ. So I start with the following sphere: σ is the boundary of the simplex on the four vertices 0, 1, 2, 3. Geometrically this is just a tetrahedron. Now what do I do with it? I want to take the linear system of parameters given schematically by the following matrix, with one column per vertex 0, 1, 2, 3, and let me just be naive: the first row has generic entries, the second row has generic entries, and since a linear system of parameters should have length 3, I should add a third row of generic entries; but let me instead put 0 there. So this is my linear system of parameters, schematically: some generic linear form, another generic linear form, and then 0. Of course, that is not a linear system of parameters for this sphere, because the last linear form is 0. So, not to cheat, let me turn it into a linear system of parameters: what I do is take the stellar subdivision at every facet, and this will make it into a linear system of parameters in the sense of commutative algebra. In the sense of commutative algebra? Yes: three linear forms such that the quotient by them is finite-dimensional. Here k[σ] has Krull dimension 3, and if I quotient by these three forms, one of which is trivial, the quotient is not finite-dimensional; that is exactly why the forms as written are not a linear system of parameters. Generic vector where? I mean just a generic element of the degree-1 part, another generic element of the degree-1 part, and then the zero vector. That's not a linear system of parameters. Quite right, it is not.
But let me make it into one by taking the stellar subdivision at every single facet, every single triangle. This introduces four new vertices; call them 0′ for the vertex opposite 0, 1′ for the vertex opposite 1, 2′ for the vertex opposite 2, and 3′ for the vertex opposite 3 in the tetrahedron. So if this is vertex 0, then 0′ is the vertex resulting from the stellar subdivision, the blow-up, of the triangle opposite 0. And then what I do is extend θ generically at the new vertices: the columns at 0′, 1′, 2′, 3′ are fully generic. So now I have a θ on the vertices 0, 1, 2, 3 and 0′, ..., 3′, and it turns out that this is now a linear system of parameters. If you remember, the condition for being a linear system of parameters was that the minor of the matrix corresponding to each face has full rank. Before I subdivided, the minors corresponding to the triangles were not of full rank. But now the original facets, like {0, 1, 2}, don't exist anymore: I subdivided them away. Every facet that still exists involves at least one primed vertex; for instance {0, 2, 3′}. And the minor of such a facet, with the primed columns chosen generically in the degree-1 part, will be of full rank, of rank 3. So this is now a linear system of parameters in the sense of commutative algebra. Now here is the claim. Call the subdivision σ′ and the new linear system of parameters θ′. I claim that even though the quotient is now a finite-dimensional vector space, so this really is a linear system of parameters, it can never satisfy the Lefschetz property. And for this, let us look at the quotient.
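One can check the full-rank condition on the facet minors numerically. A small sketch, with "generic" simulated by a fixed random seed; the vertex labels and the facet list of the stellar subdivision follow the construction above.

```python
import numpy as np

rng = np.random.default_rng(0)

# columns of the l.s.o.p. matrix: original vertices get (generic, generic, 0),
# primed vertices (from the stellar subdivisions) get fully generic columns
theta = {}
for v in ['0', '1', '2', '3']:
    theta[v] = np.append(rng.normal(size=2), 0.0)
for v in ["0'", "1'", "2'", "3'"]:
    theta[v] = rng.normal(size=3)

# facet opposite vertex i of the boundary of the simplex on {0,1,2,3}
orig_facets = [('1', '2', '3'), ('0', '2', '3'), ('0', '1', '3'), ('0', '1', '2')]

def facet_rank(F):
    """Rank of the 3x3 minor of theta at the vertices of the facet F."""
    return np.linalg.matrix_rank(np.column_stack([theta[v] for v in F]))

# before subdividing, every facet minor is singular: its third row is zero
assert all(facet_rank(F) < 3 for F in orig_facets)

# stellar subdivision at each facet: the facet opposite i is replaced by the
# three triangles joining the new vertex i' to the edges of that facet
new_facets = []
for i, F in enumerate(orig_facets):
    p = "0123"[i] + "'"
    for j in range(3):
        new_facets.append((p,) + tuple(F[m] for m in range(3) if m != j))

# every facet of the subdivision contains a primed (generic) vertex,
# so every minor now has full rank: theta' is a linear system of parameters
assert len(new_facets) == 12
assert all(facet_rank(F) == 3 for F in new_facets)
```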
So let us look at A(σ ∩ σ′) with respect to this linear system of parameters θ′; this is a quotient of A(σ′). Sorry, you are saying this θ′ is a generic linear system of parameters for σ′ now? It is a linear system of parameters, but it is not generic: it is still rather degenerate on the original vertices, where the third form is zero. And what is σ ∩ σ′? Well, I took the original σ and subdivided its facets, so the intersection consists of the original vertices and the original edges: it is just the complete graph on the vertices 0, 1, 2, 3. So now let's compute A^1 and A^2 of this. On σ ∩ σ′ the restriction of θ′ is just the original θ, since only original vertices are involved. In degree 1: I start with the polynomial ring on four variables and mod out the linear forms; the third is trivial, so I am modding out two linear forms from a four-dimensional space, and A^1 is isomorphic to k². In degree 2: degree 2 is what lives on the edges, and modding the face ring of the complete graph by the two non-trivial generic forms gives an Artinian ring with Hilbert function (1, 2, 3), so A² is isomorphic to k³. But now I see that I can never have a Lefschetz element on σ′. Why? Well, suppose I had a Lefschetz element on σ′.
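The dimension count can be double-checked via the h-vector: for a Cohen-Macaulay complex, the h-vector is the Hilbert function of the Artinian reduction by generic linear forms, and since the face ring of the complete graph has Krull dimension 2, the two non-trivial forms already give the Artinian reduction. A sketch:

```python
from math import comb

def h_vector(f, d):
    """h-vector of a (d-1)-dimensional complex from its f-vector
    f = (f_{-1}, f_0, ..., f_{d-1}): h_k = sum_i (-1)^(k-i) C(d-i, k-i) f_{i-1}."""
    return [sum((-1) ** (k - i) * comb(d - i, k - i) * f[i] for i in range(k + 1))
            for k in range(d + 1)]

# sigma ∩ sigma' is the complete graph on 4 vertices: f-vector (1, 4, 6), d = 2
assert h_vector([1, 4, 6], 2) == [1, 2, 3]
# so dim A^1 = 2 and dim A^2 = 3: no surjection A^1 -> A^2 is possible
```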
Then I would have an isomorphism from A^1(σ′) to A^2(σ′). In particular, I would get an injection from here to here. No, wait, from A^1 to A^2? Ah, you're right, sorry, the other way around: the isomorphism upstairs gives, via the commutative diagram with the surjective restriction maps, a surjection A^1(σ ∩ σ′) → A^2(σ ∩ σ′); otherwise there would be no contradiction. Thank you, yes. But that cannot happen: it would be a surjection from a two-dimensional space onto a three-dimensional one. So the conclusion is that requiring the minors at the faces to be non-singular is not enough? Yes, yes: that is only enough to guarantee a linear system of parameters; it is not enough to guarantee a Lefschetz element. All right. So what do we want? We have our ring A(σ), for σ a (d − 1)-sphere. We say that an ideal I in A(σ) satisfies the biased pairing property in degree k if the Poincaré pairing restricted to the ideal, I^k × I^{d−k} → k, into the ground field, is non-degenerate in the first factor. And we say that A(σ) satisfies the biased pairing property in degree k if every square-free monomial ideal satisfies it. And you see how this is related to the property that we want to show: we want to show that the Poincaré pairing does not degenerate at certain ideals, and this is exactly that. Now I want to explore how one proves properties like this, and it will turn out to be, again, related to the Lefschetz property. So let's take some time to do this. The first step is the following nice descent lemma.
So consider σ, a sphere of dimension d − 1, and k an integer strictly less than d/2. Then, the descent lemma (let me just state the "if" direction): A(σ) satisfies the biased pairing property in degree k if A(link of v in σ) satisfies the biased pairing property in degree k for every vertex v of σ. This means we can always reduce the biased pairing property to the middle degree: iterating the descent, we only need to consider σ of dimension 2k − 1, where the pairing is A^k(σ) × A^k(σ) → A^{2k}(σ). So you only want the non-degeneracy in the middle, in the lowest possible dimension? That's right, in the lowest one. And in the original definition, do you want it for all k less than d/2? In the end we only want k ≤ d/2, so let's restrict to that; in fact, in the end we only want the middle case. Just like the perturbation lemma, if you remember, reduced everything to a pairing question in the middle degree, we now have to understand the pairing in the middle degree. And I will go over this in two steps, in two levels of generality, to convince you that this property can be understood. So now this really is just the middle pairing on the ideal, and it no longer matters whether I say non-degenerate in the first factor or the second: it is simply non-degenerate. And I want to convince you that this non-degeneracy of the pairing is again related to a Lefschetz property. So let me make some space here. Let's consider σ of dimension 2k − 1.
Now, we want to consider square-free monomial ideals, and they come from restrictions to subcomplexes: we consider ideals of the form I_{σ,Δ} = ker(A(σ) → A(Δ)), for Δ some subcomplex. So let me look at a very simple kind of subcomplex: the case where Δ is a codimension-1 sphere in σ. That turns out to be a rather simple but powerful case. So we have some odd-dimensional sphere; I cannot draw interesting odd-dimensional spheres, so I will draw an even-dimensional one. This is my σ, and here is my subcomplex Δ. It divides my sphere into two components, say D and D̄. Now, I_{σ,Δ} is generated by I_{σ,D} and I_{σ,D̄}: by the monomials supported in the northern hemisphere and the monomials supported in the southern hemisphere. In fact, these two pieces stand orthogonal to each other: if I have a monomial supported in one open hemisphere and a monomial supported in the other, they multiply to 0. Hence, if I want to prove the biased pairing property for I_{σ,Δ}, I may just as well prove it for I_{σ,D} and I_{σ,D̄} separately; and by symmetry, it is enough to treat one of them. So: prove the biased pairing property for I_{σ,D̄}. First observation: I_{σ,D̄} is isomorphic to what I called A(σ, D̄), which is (I have to press a little harder with the chalk) the non-face ideal of D̄ modulo the non-face ideal of σ, reduced modulo the linear system of parameters θ. In fact it fits into an exact sequence A(σ, D̄) → A(σ) → A(D̄). So these two are isomorphic; here I(D̄) is the non-face ideal of D̄, taken modulo the non-face ideal I(σ) of σ.
Your definition of A(σ, D̄), I think it was given last time? Yes. So let me define again: k[A, B], for A a simplicial complex and B a subcomplex of it, is defined as the non-face ideal of B modulo the non-face ideal of A. So it is a kind of relative ring? Yes, and I can think of it as a module over the face ring of A; in fact, it is an ideal in the face ring of A. And is this for simplicial complexes on the same vertex set, or not necessarily? Not necessarily: you can always think of the non-face ideal in a larger polynomial ring. You add an arbitrary number of vertices, but the new vertices, if they are not in B, just correspond to non-faces of B, so they get killed again in I(B). So think of I(B) as an ideal in the polynomial ring generated by the vertices of A, where the vertices not in B are killed because they are non-faces of B. That is the natural way of going about it. Second observation: say we want the biased pairing property for an ideal J in A(σ) in degree k. Then, lemma: this is equivalent to the map J → A(σ)/Ann(J) being injective in degree k, where Ann(J) is the annihilator of J. The biased pairing property in this Poincaré duality algebra is just saying that I have an injection from J into the quotient of A(σ) by the annihilator of J; that is an almost empty statement. All right, so let's combine this. We have I_{σ,D̄}, which is isomorphic to A(σ, D̄), which is isomorphic to A(D, ∂D).
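The lemma is just the following chain of equivalences in a Poincaré duality algebra A = A(σ). (For the first step: if a ∈ J^k pairs trivially with J^{d−k}, then for any u ∈ J^j and b ∈ A^{d−k−j} we have ⟨a·u, b⟩ = ⟨a, u·b⟩ = 0, since u·b ∈ J^{d−k}; so a·u = 0 by non-degeneracy of the pairing on A, and a ∈ Ann(J).)

```latex
\langle\cdot,\cdot\rangle\colon J^k\times J^{\,d-k}\to \Bbbk
\ \text{nondegenerate in the first factor}
\;\Longleftrightarrow\;
J^k\cap \operatorname{Ann}(J)^k = 0
\;\Longleftrightarrow\;
J^k \hookrightarrow A^k \twoheadrightarrow \bigl(A/\operatorname{Ann}(J)\bigr)^k
\ \text{is injective.}
```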
Now, what is the annihilator of our ideal in this case? Well, the elements that annihilate I_{σ,D̄} are exactly the monomials supported in the interior of D̄: the annihilator of I_{σ,D̄} is exactly I_{σ,D}. Hence A(σ) modulo this annihilator is exactly what is left: if I mod out the ideal supported in the interior of D̄, I mod out all the faces that are not contained in D, so I am left with A(D). Hence, for the biased pairing property of I_{σ,D̄}, we want the map A(D, ∂D) → A(D) to be injective in degree k. If we have this injection, we have the biased pairing property; it is just a small reformulation. OK, so how do we prove this injection? The A(D) here was already depending on some θ, yes? Yes. And all of this works for which θ? So far I have made no assumption on θ beyond it being a linear system of parameters. Now I want to extract the meat of it: I want the biased pairing property, hence I want this injection, and this injection will turn out to depend on θ. Whether this map is injective depends on θ. But to know that the annihilator is exactly this, don't you need something? No, it turns out that for this it is enough that θ is a linear system of parameters; no genericity is needed there. Let's not go over it; it is a simple computation using Cohen-Macaulayness, still classical commutative algebra, nothing fancy.
OK, I see; this is the key fact behind what you are using. Yes, yes. But now let me say when this map should be injective, and now we will see the connection to the example that we did last time. The trick is to consider this map before the Artinian reduction. I will do two things. First, I take θ, my linear system of parameters, and write it as a linear system of parameters that is one element shorter, plus a final element: I write θ as θ̃ together with an additional linear form ℓ. Next, observe that if I look at k[D, ∂D] before the Artinian reduction and map it to k[D], the face ring of D, then first of all it is an injection. Remember, k[D] is the polynomial ring k[x] modulo the non-face ideal I(D), and k[D, ∂D] is I(∂D)/I(D), so the quotient of the two is exactly k[∂D]. So before the Artinian reduction, this is a short exact sequence; in particular, I have an injection. OK, so now you see perhaps why I split θ into θ̃ and one additional element. This object k[D, ∂D] is Cohen-Macaulay; k[D] is Cohen-Macaulay; and ∂D is a sphere, so k[∂D] is Cohen-Macaulay as well. What are the Krull dimensions? Well, k[D] has Krull dimension dim D + 1, and k[∂D] has Krull dimension dim ∂D + 1, which is one lower. So if I choose the splitting of θ sufficiently generically, θ̃ will be a linear system of parameters for ∂D. And by Cohen-Macaulayness, what I get is an exact sequence k[D, ∂D]/θ̃ → k[D]/θ̃ → k[∂D]/θ̃ → 0, and in fact it is exact on the left as well.
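In symbols, the two exact sequences, before and after reducing by θ̃ (the second uses that θ̃ is a regular sequence on all three Cohen-Macaulay modules):

```latex
0 \to \Bbbk[D,\partial D] \to \Bbbk[D] \to \Bbbk[\partial D] \to 0,
\qquad
0 \to \Bbbk[D,\partial D]/\tilde\theta \to \Bbbk[D]/\tilde\theta \to \Bbbk[\partial D]/\tilde\theta \to 0.
```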
OK, so because θ̃ is regular on the boundary of D, I can take it out and everything stays exact; that is fine. But now, to get back to A(D), I want to take out the one additional linear form ℓ, the one that is missing. So what do I need? All three terms are sitting in degree k. I could try to just mod it out, but then I see that ℓ is no longer part of a linear system of parameters for ∂D: what appears is the multiplication map ℓ from degree k − 1 to degree k of k[∂D]/θ̃. In general, there is no reason to expect this map to be injective. But ∂D is a sphere of codimension 1: I started with σ of dimension 2k − 1, so ∂D has dimension 2k − 2. And from degree k − 1 to degree k, these are exactly the Poincaré dual components: the fundamental class lives in degree 2k − 1. So this is exactly the middle Lefschetz map! Result, or conclusion: the injectivity of A(D, ∂D) → A(D) in degree k, which is equivalent to the biased pairing property for I_{σ,D̄}, is equivalent to the Lefschetz property for k[∂D]/θ̃ with respect to ℓ. This was the injection that I wanted, the one I knew was equivalent to the biased pairing property; and what I am saying now is that this Lefschetz property, the injectivity of this multiplication map, is equivalent to exactness being preserved when we mod out the last element ℓ. In particular, the Lefschetz property is equivalent to this injection. Notice this. So, Ofer, are you happy? You use the snake lemma or something like that? Yes, yes, it's just the snake lemma.
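So the chain of reformulations in this section is:

```latex
\text{biased pairing for } I_{\sigma,\bar D} \text{ in degree } k
\;\Longleftrightarrow\;
A^k(D,\partial D)\hookrightarrow A^k(D)
\;\Longleftrightarrow\;
\ell\cdot\colon \bigl(\Bbbk[\partial D]/\tilde\theta\bigr)^{k-1}
\hookrightarrow \bigl(\Bbbk[\partial D]/\tilde\theta\bigr)^{k},
```

the last map being the middle Lefschetz map of the (2k − 2)-sphere ∂D.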
OK, so the injectivity is equivalent to the connecting kernel vanishing? Yes. And since θ̃ is a system of parameters, the kernel of modding out ℓ is controlled by that multiplication map. Exactly. All right, and this explains the example that we had last time. Remember, I told you that P1 × P1 is a bad example for the biased pairing property, or for the whole Hodge-Riemann story. Why? So the sphere for P1 × P1 looked like this; this is σ. Say D̄ is everything in the lower hemisphere. And now I take my linear system of parameters and restrict it immediately to ∂D, which equals ∂D̄. Well, it is: some non-trivial vector, another non-trivial vector, and then the zero vector; the second, redundant linear form is just zero. In particular, ℓ = 0 is certainly not a Lefschetz element. In this case, the Lefschetz property we need is the isomorphism from A^0 to A^1 of ∂D, and because the form, because ℓ, is just zero, it fails. Therefore the biased pairing property is violated. So the Lefschetz property in codimension 1 governs the biased pairing property: that is the takeaway that we have. OK, I think we are badly over time, so let me stop here.