 So I want to talk about the interplay between combinatorics and algebraic geometry. Usually this topic focuses on applications of algebraic geometry tools to combinatorics, at least on this side of the subject. And I do want to start with that: I will give you the applications to combinatorics first. But for the most part of these Hadamard lectures, I will actually tell you how combinatorial tools are used to prove theorems in algebraic geometry, specifically theorems of decomposition theorem type, hard Lefschetz type, and Hodge-Riemann relations, as well as some variants of Hodge-Riemann that only pop up when you try to prove hard Lefschetz beyond positivity, that is, for varieties that are not projective. There are rather few people in the room to ask questions, so if there is anything from the remote audience, please interrupt me right away, OK? OK, so today is overview and applications. And let me actually start with the second word of the title, the applications, just to keep you motivated, and then I will tell you where in algebraic geometry they come from. So application number one pertains to matroids: matroids, characteristic polynomials, and other invariants of matroids. The point here is that associated to a matroid, which is an abstraction of the concept of linear independence in vector spaces, we can consider things like the characteristic polynomial, or count the number of independent sets of a given size; and in the special case of matroids that come from graphs, we can look at the chromatic polynomial. There are all kinds of interesting invariants here, and usually the question is: what shape do these invariants take? And without actually defining matroids, let me just state two special cases. 
So one is the chromatic polynomial of a graph, the polynomial counting the proper vertex colorings of the graph with t colors. Well, first of all, it is not clear that this is a polynomial; that has to be proved. I think Birkhoff did it in the end. But once you know that it is a polynomial, you can ask about its coefficients, right? What shape do the coefficients take? What shape do the zeros take? Can I repeat what we are coloring? Yes: the chromatic function counts the colorings of the graph with t colors. Do we color all the edges? No, let me draw. These are vertex colorings, and they are supposed to be proper, meaning that no two adjacent vertices get the same color. So I can color this vertex orange, and this one orange as well, but for this vertex I have to use another color, and then this one needs yet another color. Yellow is maybe not the best choice. OK, that is a coloring, and I count them. And then, as I said, it is not quite obvious, but this count turns out to be a polynomial in t. The way you can prove it is by a recursion, deletion-contraction, which is really an inclusion-exclusion formula. How do you compute the chromatic polynomial of this graph? Here is the simple trick: I take an edge and remove it, and count the colorings of the resulting graph, with that edge missing. But of course I overcounted, because I also counted the colorings where the two endpoints get the same color. So I subtract the count for the contraction of that edge: I move one endpoint onto the other, identifying the two vertices. And that is it. 
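The deletion-contraction recursion just described can be sketched in a few lines of code. The following is a minimal illustration for small simple graphs (the data format, vertex lists and edge tuples, is my own choice, not something from the lecture):

```python
def chromatic_poly(vertices, edges):
    # Coefficients [c_0, c_1, ..., c_n] of the chromatic polynomial
    # P(t) = sum c_i t^i, computed by deletion-contraction:
    # P(G) = P(G - e) - P(G / e) for any edge e.
    edges = [tuple(e) for e in edges]
    if not edges:
        n = len(vertices)          # n isolated vertices: P(t) = t^n
        return [0] * n + [1]
    (u, v), rest = edges[0], edges[1:]
    # Deletion: drop the edge (u, v).
    p_del = chromatic_poly(vertices, rest)
    # Contraction: identify v with u, discarding loops and parallel edges.
    merged = {tuple(sorted((u if a == v else a, u if b == v else b)))
              for (a, b) in rest}
    merged = [e for e in merged if e[0] != e[1]]
    p_con = chromatic_poly([w for w in vertices if w != v], merged)
    # P(G) = P(G - e) - P(G / e); pad coefficient lists to equal length.
    m = max(len(p_del), len(p_con))
    p_del += [0] * (m - len(p_del))
    p_con += [0] * (m - len(p_con))
    return [a - b for a, b in zip(p_del, p_con)]
```

For the triangle this gives t^3 - 3t^2 + 2t = t(t-1)(t-2), as it should: t choices for the first vertex, t-1 for the second, t-2 for the third.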
So now you have basically reduced everything to graphs with fewer edges, eventually to graphs on one vertex, and you are done; that is how you see it is a polynomial. And that is one interesting case. Here is another that is very nice. If I have a configuration of vectors in some vector space, I can count the number of independent sets of a given size in this vector configuration. So let V be a vector configuration in some vector space X, and let f_i be the number of independent subsets of V of size i. And in this sort of context, one is often interested in log-concavity: the coefficients a_i of the characteristic polynomial, or these numbers f_i, turn out in examples to be log-concave, meaning that the square of an entry is at least the product of the two adjacent entries, f_i^2 >= f_{i-1} f_{i+1}. That is log-concavity, and that is the first sort of theorem I want to talk about. The chromatic polynomial has alternating coefficients, but for log-concavity one takes absolute values, so that does not matter; and these f_i are positive anyway. So that is one sort of combinatorial property one is interested in, in this world of matroids. Convex geometers, like Jan, might recognize this: it looks suspiciously like Alexandrov-Fenchel. And we will in fact see that this is, in some sense, a special case of a version of the Alexandrov-Fenchel inequalities from convex geometry; that is why I said the convex geometers might remember it. So that is one sort of question I am interested in. Here is another, and actually I think this one is my favorite, although it comes in the middle; but you have to hide whatever is your most favorite. 
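These definitions are easy to experiment with. Here is a small sketch (my own illustration, with a made-up vector configuration) that counts independent subsets using exact rational arithmetic and tests log-concavity of the resulting sequence:

```python
from itertools import combinations
from fractions import Fraction

def rank(rows):
    # Rank over the rationals, by Gaussian elimination.
    m = [[Fraction(x) for x in row] for row in rows]
    r, ncols = 0, (len(m[0]) if m else 0)
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independence_counts(vectors):
    # f_i = number of linearly independent subsets of size i
    # (f_0 = 1 counts the empty set).
    d = rank(vectors)
    return [sum(1 for s in combinations(vectors, i) if rank(list(s)) == i)
            for i in range(d + 1)]

def is_log_concave(seq):
    # f_i^2 >= f_{i-1} * f_{i+1} for all interior indices.
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1]
               for i in range(1, len(seq) - 1))
```

For the three vectors (1,0), (0,1), (1,1) in the plane, every pair is independent, so the counts are [1, 3, 3], which is log-concave.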
So here is a simple problem. I have Delta, a simplicial complex, and I want to embed it into some vector space. What is interesting? Well, if it is a k-dimensional simplicial complex, then the most interesting space to embed it into, to start with, is a 2k-dimensional space, because then the obstructions sit purely in the k-dimensional faces: if there are too many of them, you expect them to intersect transversely somewhere. And that is indeed the right question to ask. So the simplest setting: we have a k-dimensional complex, and we embed Delta into R^2k. Is the embedding piecewise linear, or merely topological? Good question: you can ask this for topological embeddings, but unfortunately the methods I will discuss do not quite work if the map is not tame enough. So I have to assume at least that it is piecewise linear, or piecewise smooth; it should be tame enough. For the purpose of being specific, let me just restrict to PL maps. So the map does not have to be linear on every face, but you should be able to subdivide the faces so that it becomes linear on each piece. That is the question. And then you can ask: how many faces can there be? The critical question is: how many k-dimensional faces can there be, in terms of the lower-dimensional faces? Is there a theorem? Yes. Here is the theorem: in this case, the number of k-dimensional faces of Delta is at most (k+2) times the number of (k-1)-dimensional faces of Delta. And this is asymptotically tight: you can add an additive constant to make it exactly tight; that constant depends on the dimension, but it does not matter here. Is this a necessary condition or a sufficient condition? Necessary, not sufficient. 
I mean, you could always take a simplicial complex satisfying the inequality and make it very dense in some part, and then that part does not embed. Like a graph: you can hide a non-planar piece in it. Yes, if it contains a K5, for instance; right, that is an example. So the model case: a graph embedded into the plane. That is the classical result: the number of edges is at most 3 times the number of vertices (in fact at most 3V - 6). My favorite form of the inequality is the first non-trivial case, and it turns out that once you have understood that case, you have understood everything: if you have Delta embedded into R^4, then the number of triangles is at most 4 times the number of edges. It turns out that there is a world of difference between proving the planar inequality, which is classical and goes back to Descartes and Euler, and proving the inequality in four-dimensional space; I will show you in the second half. And again, here I need PL; in the planar case it turns out not to matter. Let me quickly sketch one reason why it is easy in the case of graphs. If you have a graph in the plane and you want to bound the number of edges, what you can always do is add edges until every component of the complement is a triangle, including the component of infinity. So here, this picture is not complete, so I can add this edge, and this one, and then every component is a triangle. Then I use Euler's formula together with a simple double count: each triangle has three edges and each edge lies in two triangles, so 3 times the number of triangles equals 2 times the number of edges. Put the two together, and you get the desired inequality; that is it. 
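The argument just sketched (Euler's formula V - E + F = 2 plus the double count 3F = 2E, which together give E = 3V - 6) can be checked mechanically on examples. A small sketch, with the octahedron as a made-up test case:

```python
def sphere_counts(triangles):
    # V, E, F for a list of triangles (vertex triples) triangulating a 2-sphere.
    V = {v for t in triangles for v in t}
    E = {frozenset(p) for t in triangles
         for p in ((t[0], t[1]), (t[0], t[2]), (t[1], t[2]))}
    return len(V), len(E), len(triangles)

def satisfies_planar_identities(triangles):
    # Euler's formula and the triangle/edge double count force E = 3V - 6.
    v, e, f = sphere_counts(triangles)
    return v - e + f == 2 and 3 * f == 2 * e and e == 3 * v - 6
```

On the octahedron (6 vertices, 12 edges, 8 triangles) all three identities hold, and likewise on the tetrahedron.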
And now you see immediately what the issue is in dimension four. When you embed into dimension four, you would like to add triangles until you have, say, the 2-skeleton of a triangulation of a sphere. Well, it turns out that you cannot do this without also adding edges at some point: there are simplicial complexes of dimension two that you cannot complete to the 2-skeleton of a triangulation of the sphere without adding an edge. And then the argument is no longer monotone; notice that in the planar case I really only added edges, I only increased one side of the inequality, and that was the key point. So that is one of the key issues that arises here. In the planar case you actually get 3V minus a constant? Ah, OK, yes: with the constant it is 3V - 6. As I said, asymptotically the linear factor is tight; there is an additive error term here, but I do not care about it for the purpose of the talk. Is this error term negative or positive? It is always negative, so the inequality as stated is true; you only do better. Also, another reason to write it this way: if the number of vertices is very small, the additive error does not apply. If you have just one vertex, you cannot write E <= 3V - 6, because there are no edges but the right-hand side would be negative. So you have to be a little careful with the additive error when the graph or the complex is very small; that is another reason not to write it. OK, the last application: now we can go to triangulations of manifolds, and let us try to understand them not from a topological point of view but from a combinatorial one. Combinatorialists, again, like to count things. So suppose I have a triangulation of a manifold, let us say a sphere. 
So here is a triangulation of the one-dimensional sphere, that is, a decomposition into one-dimensional simplices: the boundary of a square. Then I can count the number of simplices of a given dimension in this triangulation: the number of zero-dimensional simplices is f_0 = 4, and f_1 is also equal to 4. And then usually one adds f_{-1} = 1 for the empty set, just to make the complex downward closed. This object is the f-vector: the vector recording how many faces, how many simplices of a given dimension, my simplicial complex has. And now it is interesting to restrict the topology. The problem is much easier if I put no restriction on the simplicial complex; but if I impose an interesting topological restriction, it suddenly becomes very, very hard and very interesting, because it says much more about the geometry of the space in the end. So we ask: what is the characterization? And the result here is the following theorem: the face vectors, the f-vectors, of simplicial spheres (and here I mean this in the loosest sense of the word: homology manifolds that, locally and globally, have the homology of a sphere over some fixed field k) are realized by f-vectors of polytopes, and are characterized by a ring condition, an algebraic condition. That is a somewhat underhanded way of stating it; let us ignore the last part of the sentence for now, I will start to explain it in a second. But the first part is already interesting, because it says that if you want to know the possible face vectors of simplicial spheres, it does not matter whether you take the Poincaré homology sphere or any other homology sphere, as wild as you want. 
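Computing the f-vector of a small complex from its facets is a simple exercise. A sketch (the square-boundary input reproduces the example on the board):

```python
from itertools import combinations

def face_vector(facets):
    # The f-vector [f_{-1}, f_0, ..., f_{dim}] of the simplicial complex
    # generated by `facets`; the empty face is included, so f_{-1} = 1.
    faces = {frozenset()}
    for fct in facets:
        for i in range(1, len(fct) + 1):
            faces.update(map(frozenset, combinations(fct, i)))
    top = max(len(f) for f in faces)   # cardinality of a largest face = dim + 1
    fv = [0] * (top + 1)
    for f in faces:
        fv[len(f)] += 1
    return fv
```

For the boundary of the square this returns [1, 4, 4], exactly the (f_{-1}, f_0, f_1) from the board.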
If you want to know the face vector, you know already that there is a polytope with the same face vector. Can one triangulate using the polytope here? Well, the polytope is a ball, so you take the boundary of the ball; but no, that alone is not sufficient. Ah, you are right, thank you, Maxim. And Pierre, remind me to repeat questions; I have forgotten several times now. What do I mean exactly by polytope? I will write it. So a polytope, and I will restrict mostly to simplicial polytopes, is just the convex hull of a finite number of points in a real vector space. And it is simplicial if the boundary of the polytope is decomposed into simplices, or equivalently, if the vertices are in general position, so that you can move the vertices a little without changing the combinatorics of the polytope. For instance, the cube in dimension 3 is not a simplicial polytope, because its boundary is decomposed not into simplices but into quadrangles. Or, in terms of the wiggling: if I perturb one of its vertices a little, then the combinatorics breaks, because the perturbed facet can fold either this way or that way. So when I say the sphere here, I mean the boundary of the polytope, yes. And it has the same f-vector as the original sphere, yes. And it does not matter which simplicial sphere you start with: if you have a simplicial sphere of dimension d-1, you find a polytope of dimension d whose boundary, again a sphere of dimension d-1, has the same face vector. So over Z/2, can you take a Klein bottle? Is it a simplicial sphere in this sense? It is not orientable, but we are asking for homology over k = Z/2, so orientability is not the issue. 
Does its first homology vanish? No, it does not; the Klein bottle does not have the homology of a sphere. Among surfaces there is no interesting example; the Poincaré sphere is where it starts. All right. But OK, so there really is an algebraic condition that characterizes this, and now let me start to explain the algebraic characterization. Let me use this board. Over the course of the lectures we will consider several rings, several algebraic objects, but I will start with the most basic one, which is in some way related to all of them. So, basics. I start with Delta, a simplicial complex, and I consider k, any field; just for convenience and simplicity later, let me assume that it is infinite, so that things like Noether normalization work. Then, associated to the simplicial complex, we can consider a rather simple ring. First take the polynomial ring in several variables, with one indeterminate for each vertex of my simplicial complex, identified with the zero-skeleton, the vertices of Delta. That ring really does not contain much information per se. But then I can go on and consider the quotient, the face ring k[Delta]: my original polynomial ring modulo an ideal that encodes the combinatorics of the simplicial complex, namely the ideal generated by all monomials that are not supported on a face of Delta. So the generators are monomials whose support is not a face, where by convention the empty monomial is supported on the empty face? Yes; for me, a simplicial complex is always downward closed, and it always contains the empty face. That also makes reduced homology much nicer. Every simplicial complex has an empty face for me, and then all these things work out much more cleanly. 
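Concretely, this ideal is generated by the squarefree monomials on minimal non-faces: subsets that are not faces but all of whose proper subsets are. A small sketch listing them, with the boundary of the square as a made-up example:

```python
from itertools import combinations

def minimal_nonfaces(vertices, facets):
    # Minimal non-faces S of the complex generated by `facets`:
    # S is not a face, but every proper subset of S is.
    # The Stanley-Reisner ideal is generated by the monomials x_S.
    faces = {frozenset()}
    for fct in facets:
        for i in range(1, len(fct) + 1):
            faces.update(map(frozenset, combinations(fct, i)))
    out = []
    for k in range(1, len(vertices) + 1):
        for s in combinations(sorted(vertices), k):
            fs = frozenset(s)
            if fs not in faces and all(fs - {v} in faces for v in fs):
                out.append(s)
    return out
```

For the 4-cycle on vertices 0, 1, 2, 3 the minimal non-faces are the two diagonals, so the ideal is generated by x0*x2 and x1*x3.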
But yes, you are right, you have to worry about the empty set sometimes. And the complex is finite? In the end I want to go to infinite objects, but for now, let us say it is finite. All right. And this ring encodes the combinatorics quite nicely. It is a graded object, so I can write down its Hilbert series, and there are several nice ways of doing so. Let me give you the following. Assume that Delta is of dimension d - 1. Then I can write the Hilbert series as

Hilb(k[Delta], t) = (1 - t)^{-d} * sum_{i=0}^{d} f_{i-1} t^i (1 - t)^{d-i},

so the number of (i-1)-dimensional faces times t^i times (1 - t)^{d-i}, summed and then divided by (1 - t)^d. And there are several other ways; if you prefer to write the Hilbert function directly (this I had to work out yesterday because I did not remember it; I hope it is correct), then for k >= 1,

dim k[Delta]_k = sum_{i=1}^{d} f_{i-1} * C(k-1, i-1).

Is f_i the same as before? Here f_i is just the number of i-dimensional faces of Delta. Before, f_i counted independent sets; that was a different context, sorry. I wanted to use another letter, but the alternatives did not work either. And why f_{i-1}? I always write it with f_{i-1} because in this context it is more natural to think about the cardinality of a simplex than about its dimension; but the standard convention is to index by dimension, hence the shift. All right. But this is, of course, an infinite-dimensional object in one sense; let me say it properly. 
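The binomial formula has a direct counting interpretation, which makes it easy to sanity-check: a degree-k monomial supported on an (i-1)-dimensional face uses exactly i variables with positive exponents, and there are C(k-1, i-1) such exponent vectors. A sketch of mine implementing it:

```python
from math import comb

def hilbert_function(fvec, k):
    # dim k[Delta]_k from the f-vector fvec = [f_{-1}, f_0, ..., f_{d-1}].
    # A degree-k monomial supported on an (i-1)-face uses exactly i variables
    # with positive exponents; there are C(k-1, i-1) ways to distribute k.
    if k == 0:
        return 1
    return sum(fvec[i] * comb(k - 1, i - 1) for i in range(1, len(fvec)))
```

For the boundary of the square, with f-vector [1, 4, 4] and d = 2, this gives 1, 4, 8, 12, ..., that is, 4k in each positive degree, which one can also read off by expanding the series form by hand.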
As a vector space, this is infinite dimensional; as a ring, the Krull dimension is of course finite, and it is in fact the cardinality of a maximal simplex, that is, the dimension of the complex plus 1. Again, the cardinality is the more natural thing to think about here. And then, often, we want to consider the Artinian reduction of this object, A(Delta). Let me not make this board any fuller and start over here. The Artinian reduction is not quite well-defined on its own, so let me add a Theta for the moment: A(Delta, Theta) is the ring k[Delta] modulo sufficiently many linear forms, so that the quotient really has Krull dimension 0, that is, it is finite dimensional as a vector space. So the ideal is generated by some linear forms, and it is usually convenient to think of them as a matrix Theta times the vector of indeterminates. Do I take a minimal number of linear forms? No: Omid asked whether I take the minimal number, and that is a slight spoiler, but I purposely leave it open, because I also want to include objects, Bergman fans for instance, where for natural reasons one takes more linear forms than the Krull dimension. So Theta is a matrix? Yes, you can think of Theta as a system of linear forms; I prefer to think of it as just a matrix with entries in k, times the vector of indeterminates. Why? Because later we will think about this geometrically, about the deformation theory of some of these objects, and then it is useful to have the geometric perspective. So I just cut with a linear subspace, and the cut should be transverse in some sense: the result should be zero dimensional. 
Yeah, OK, so let me give you the precise condition. Maxim and Ofer asked essentially the same question: when is this finite dimensional as a vector space, that is, when is the quotient of Krull dimension zero? And the answer is quite simple; you can check it face by face. The quotient has Krull dimension zero if and only if, for every face sigma of Delta, the following holds: look at the columns of Theta corresponding to sigma (so if sigma has cardinality four, these are the four associated columns), and look at the rank of this restricted matrix. The rank of Theta restricted to the columns of sigma has to equal the cardinality of sigma. So in particular, a generic Theta works. But we can be finer than that, and in fact there will be situations where we really want to be very non-generic. So how big is the matrix? The number of columns is the number of vertices: every vertex has its own column. (Columns are the vertical ones, yes.) And the number of rows that you need is at least the Krull dimension, so at least the cardinality of the largest simplex. Maybe I can rephrase: the scheme here, forgetting the grading, is a union of coordinate subspaces, and you want that intersecting it with a suitable linear space gives just a point. Yes, exactly. Are you happy, Ofer? OK. So let me now come to the topological restrictions that make this interesting; I will put it up here before we go on. May I ask something about the simplicial complex first? 
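This rank condition (sometimes called the Kind-Kleinschmidt criterion) is mechanical to verify. The sketch below, with made-up matrices for the 4-cycle, checks it on the facets only, which suffices: if the columns of a facet are linearly independent, so are the columns of every subface.

```python
from fractions import Fraction

def rank(rows):
    # Rank over the rationals, by Gaussian elimination.
    m = [[Fraction(x) for x in row] for row in rows]
    r, ncols = 0, (len(m[0]) if m else 0)
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_lsop(theta, facets, col_of):
    # theta: list of rows; col_of maps each vertex to its column index.
    # A(Delta, Theta) has Krull dimension 0 iff for every face sigma the
    # columns of Theta indexed by sigma have rank |sigma|; facets suffice.
    for fct in facets:
        cols = [col_of[v] for v in fct]
        sub = [[row[c] for c in cols] for row in theta]
        if rank(sub) != len(fct):
            return False
    return True
```

The second test matrix below, theta_1 = x0 + x2 and theta_2 = x1 + x3, is very non-generic yet passes for the 4-cycle, illustrating the point that generic works but one can be much finer.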
Yes: the simplicial complex corresponds to a collection of coordinate subspaces, and this quotient ring is the same as the ring of functions obtained by restricting polynomials to that union of subspaces? Yes, yes, we will get to this; you are spoiling, so I will not repeat it. OK. So let us look at an interesting case, and one of the nicest possible ring structures we could have is Cohen-Macaulay. So suppose k[Delta] is Cohen-Macaulay. Let me first remind everyone what Cohen-Macaulay means here. It means that if I take Theta of length equal to the Krull dimension of the ring, then this linear system of parameters is regular: Theta is a regular sequence for k[Delta]. In this graded setting that is actually a characterization of being Cohen-Macaulay. Concretely, it means the multiplication maps are as basic as possible: multiplication by theta_1 on k[Delta] is injective; then multiplication by theta_2 on the quotient k[Delta]/(theta_1) is injective again; and so on and so forth. That is what it means to be Cohen-Macaulay. Why is this nice? Well, look at the Hilbert series. What happens if these multiplications are injective and I compute the Hilbert series of the quotient? Each time I quotient by one of the thetas, I subtract, in each degree k, whatever I had in degree k - 1. So I can write the original Hilbert series Hilb(k[Delta], t) as a polynomial h(t), namely the Hilbert series of the Artinian reduction A(Delta, Theta), divided by (1 - t)^d. 
Ah, thank you, yes, divided by (1 - t)^d. So this h(t) is the so-called h-polynomial, and its coefficients form the h-vector. So in particular the coefficients are non-negative? Yes, that is exactly the nice thing about it. And not only non-negative, but you also know that there cannot be an internal zero? Yes, yes; again Ofer is spoiling, and I will not repeat it. All right. So Pierre, you are the boss: is a break planned? We started late, so let us break in 15 minutes. OK, so let me continue a little with Cohen-Macaulayness. This is an algebraic condition, and under it you get facts about the h-numbers, and then also facts about the face numbers of the original complex. In this case you can compute h from f, and vice versa, by the following trick, which is really just Pascal's triangle in reverse. Write down, in a row: the empty face, that is just 1; then the number of zero-dimensional faces; the one-dimensional faces; and so on, up to the (d-1)-dimensional faces for a (d-1)-dimensional complex. Everything outside this row is zero. Then fill in the entries of the triangle below: each new entry is the difference of the two entries above it, the one above to the right minus the one above to the left. So along one edge I get 1, 1, 1, 1, and next to it f_0 - 1, and so on and so forth. And what I get in the end, at the bottom, is h_0, the constant coefficient of the h-polynomial, followed by the other h-numbers. 
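The reverse Pascal triangle just described is equivalent to the standard closed formula h_j = sum_i (-1)^(j-i) * C(d-i, j-i) * f_{i-1}. A sketch (the square and octahedron-boundary test vectors are my own additions):

```python
from math import comb

def h_vector(fvec):
    # h-vector from the f-vector fvec = [f_{-1}, f_0, ..., f_{d-1}], via
    # h_j = sum_{i=0}^{j} (-1)^(j-i) * C(d-i, j-i) * f_{i-1}.
    d = len(fvec) - 1
    return [sum((-1) ** (j - i) * comb(d - i, j - i) * fvec[i]
                for i in range(j + 1)) for j in range(d + 1)]
```

For the boundary of the square this gives [1, 2, 1], matching the h-polynomial 1 + 2t + t^2 of its Artinian reduction; for the boundary of the octahedron it gives the palindrome [1, 3, 3, 1].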
That makes sense, because h_0 should be the dimension in degree zero, which is just the field itself. Then I have h_1, which I can still write down explicitly: h_1 = f_0 - d, the number of vertices minus the number of linear forms that I quotiented by. The rest is more complicated: then you get h_2, and so on and so forth. That is a way to compute it. And you see immediately that once you have the h-numbers, you have the facts that Ofer already told us: they are always non-negative, and there are no gaps. And you can transfer this back to the face numbers as well, because you can run the same Pascal triangle forwards and just sum things up. What is nice here is the asymmetry: in the direction from face numbers to h-numbers, the coefficients are not all non-negative; but in the reverse direction, each f-number is a non-negative linear combination of the h-numbers. So, for instance, if you want to bound the number of faces from below, the clever thing to do is to bound the h-numbers from below, that is, to look at the algebra, at what makes it tick and how large it has to be for algebraic reasons; then you automatically get lower bounds for the face numbers. We will see nice examples of that. Does Cohen-Macaulayness depend on the characteristic, or does it follow from being a sphere? Again with the spoilers, yes: Maxim asked whether there is a topological condition, and there is. Here is Reisner's criterion. So k[Delta], for Delta a (d-1)-dimensional complex, is Cohen-Macaulay if and only if the following holds. Let me use some new chalk. For every simplex sigma in Delta, and again I am explicitly including the empty face, consider the link of sigma in Delta: the set of faces tau in Delta such that tau and sigma are disjoint and tau union sigma is again a face of Delta. (So tau is itself in Delta, yes.) 
Sorry, I was just confused for a second. OK. The criterion: this link has reduced homology, with k coefficients, concentrated in dimension (d - 1) - |sigma|, that is, the dimension of the complex minus the cardinality of the simplex whose link I took. If you think about it, this says each link is a complex with homology only in its top dimension. And homology here is automatically reduced, because, again, the empty set is included; you should really get used to thinking of the empty set as included, and then this is just the normal way of doing homology, and I do not have to say it every time. All right. Whose theorem is this? Reisner's. And Maxim is spoiling again: yes, the Gorenstein case corresponds to the links having exactly the homology of spheres. OK, which board is oldest? I think this one. And now I will skip Maxim's spoiler for a minute and go to Ofer's spoiler instead, and state Macaulay's theorem, really just a small part of it. So say I want to characterize the face vectors, the f-vectors, of Cohen-Macaulay complexes, or equivalently, what I really want to characterize, their h-vectors (the h-vector being the coefficient vector of the h-polynomial h(t), thank you). The theorem: the h-vectors of Cohen-Macaulay simplicial complexes are exactly the Hilbert functions of commutative graded algebras generated in degree 1, which is just a fancy way of saying quotients of polynomial rings. One direction we already saw: the h-polynomial is the Hilbert series of the Artinian reduction, which is such an algebra. 
The non-trivial direction is the converse: for every vector that arises this way, I can in fact construct a simplicial complex that is Cohen-Macaulay and has it as its h-vector. But a commutative graded algebra can be infinite dimensional? Yes, you are right; if it is infinite, so is the h-vector. Anyway, we said everything is finite for now. So the claim is that given such an algebra, there is a Cohen-Macaulay simplicial complex with the same dimensions in each degree? Yes, and this is why Macaulay's name is attached. All right. Is it true that any such commutative graded algebra can be deformed to one coming from a simplicial complex? Well, that depends on what you allow as deformations, and a deformation may change the dimensions, right? But there are operations like that: for instance, you can pass to the generic initial ideal, and that one you can realize combinatorially. This is exactly the way Macaulay did it. In fact, one can give a combinatorial formula describing exactly the coefficient sequences that occur, but I will not go there; it is a tedious combinatorial formula. Does taking the generic initial ideal mean acting by the general linear group and taking a limit in a certain way? Yes, you can think of it like that. These vectors are also called M-vectors, so a short way of stating the theorem is: h-vectors are M-vectors and vice versa. Does the M stand for Macaulay? I think so, though I do not think he named them that way himself. Is it in Macaulay's old book? Which book? How would I know now; I do not know. 
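Macaulay's characterization is in fact effective: whether a vector is an M-vector can be decided degree by degree, using the greedy Macaulay representation of h_k and the pseudopower bound h_{k+1} <= h_k^<k>. A sketch under that standard definition (the test vectors are made up):

```python
from math import comb

def macaulay_rep(n, k):
    # Greedy k-th Macaulay representation:
    # n = C(a_k, k) + C(a_{k-1}, k-1) + ... with a_k > a_{k-1} > ... >= 1.
    rep = []
    while n > 0 and k > 0:
        a = k
        while comb(a + 1, k) <= n:
            a += 1
        rep.append((a, k))
        n -= comb(a, k)
        k -= 1
    return rep

def pseudopower(n, k):
    # n^<k>: Macaulay's upper bound for the next value of a Hilbert function.
    return sum(comb(a + 1, j + 1) for a, j in macaulay_rep(n, k))

def is_m_vector(h):
    # h is the Hilbert function of a commutative graded algebra generated in
    # degree 1 iff h[0] == 1, all entries are >= 0, and the growth condition
    # h[k+1] <= h[k]^<k> holds for every k >= 1.
    if not h or h[0] != 1 or any(x < 0 for x in h):
        return False
    return all(h[k + 1] <= pseudopower(h[k], k) for k in range(1, len(h) - 1))
```

For instance, [1, 3, 6, 10] (the polynomial ring in three variables) is an M-vector, while [1, 2, 4] is not: with two degree-1 generators, degree 2 can have dimension at most 3.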
I don't know which book, yeah. Something like modular systems, The Algebraic Theory of Modular Systems. All right. I think this blackboard is the oldest one. Let's just do it without a break. No, it's fine. OK, let's have a five minute break, whatever you want. I need a new coffee. Slowly we'll get there. OK, good. So, one important source of spheres I already mentioned: boundaries of polytopes. Specifically, simplicial polytopes give rise to simplicial spheres that way. So let's consider that case. So sigma is now for me the boundary of a simplicial polytope. And then I can consider again this ring, the face ring associated to sigma. But I can also do something that Maxim also hinted at: I can look at algebras of piecewise polynomials. There are various ways to think about it, but here's one way. So we have our simplicial sphere, and let's just identify it with a fan over the sphere: every one of these faces I cone over, and I get a simplicial cone, and together these form a fan. OK, so think of sigma as a fan: this is it as the boundary of a polytope, and this is it as a fan. Then you can consider the following algebra, P(sigma), the algebra of cone-wise polynomial functions on this fan. So these are cone-wise polynomial functions on the ambient space, or, if you want to think of it abstractly, continuous functions on the fan as an abstract cone complex whose restriction to each cone is a polynomial. So this is with real coefficients? Yeah. Let's think of this all with real coefficients.
I mean, if I have a rational fan I can think of rational coefficients, but it doesn't matter much; for me, let's restrict to the reals. Correct me if I'm wrong: if I restrict to rational fans, then what I get here is just the equivariant cohomology of the associated toric variety. OK, so let's just say it. If sigma is rational, then P(sigma) is just the equivariant cohomology of X_sigma, the toric variety over this fan. OK, and it's rationally smooth. Yes. And it's a complete toric variety because the fan is complete. Yeah. And for other toric varieties there's a similar description... Ah, OK, yeah. When we get to matroids a little later, we will get to this. Exactly. OK, P(sigma), this is the algebra of cone-wise polynomials, and it's naturally isomorphic to, ah, in this notation, R(sigma). All right. And how do I see this? Well, what are the generators of this algebra? There are those functions that are non-trivial on one ray of the fan and zero on all others. So for every ray, say the ray generated by a vertex v of my simplicial complex, I have the cone-wise linear function chi_v which is 1 on v. So it's linear and non-trivial on this ray, and zero on all other rays. And when you wrote R(sigma), does it refer to the previous notation for a simplicial complex? Yeah. But you don't assume that sigma is simplicial, do you? Ah, sorry, everything is simplicial for now. Yes, thank you. I said it, but I didn't write it, sorry. OK, and then this isomorphism is just given by sending this characteristic function of the ray to the corresponding variable. And that's it. In this way I also get characteristic functions of every face: if I multiply... It's easier to define the map in the other direction. Yeah, it's not complicated. It's maybe a little exercise, but not a difficult one.
Yeah, you're right, you're right; maybe it would be more natural to write the isomorphism in the other direction, but anyway, it's an isomorphism. Anyway, what we also get are the characteristic functions of faces, of simplices. Those are non-trivial on one simplex, of degree equal to the cardinality of the simplex, and zero everywhere apart from the immediate neighborhood of the simplex. They are just defined as the product chi_sigma = product of chi_v over v in sigma. OK, so these are the characteristic functions that generate every graded component. All right, that's all good and nice. So this is the algebra, and then I have the corresponding Artinian reduction. All right, so again, this is an equivariant cohomology, so what I want is a finite-dimensional vector space; I want a section again. And what I can do in this case, very naturally, is take P(sigma) and mod out the ideal of the global linear functions. These are the boring degree one functions; I mod out the ideal generated by them. That's nice, and this is isomorphic to A(sigma). It's supposed to match the earlier formula; what is the theta that I associate with it? Well, I naturally have coordinates for these vertices here in R^d: every one of these vertices has a coordinate, and I normalized my characteristic function so that it is 1 on this vertex. So theta is given by the vertex coordinates in R^d. So here's a hint that we will think about this a little more geometrically, even though in general we won't think about fans so much, OK? That's a way to get a nice geometric interpretation. All right, and now we go exactly to the spoiler that Maxim gave earlier.
So we want to say that these algebras are even nicer than just Cohen-Macaulay, OK? And here's one important fact in this direction. Let me actually use a new blackboard, this one here. So, a theorem. In the most general form it is, I think, due to Hochster, but it's sometimes not clear, because Hochster did many things that he never quite wrote down. And this is Poincaré duality, or, if you want to say it in commutative algebra language, the Gorenstein property. Let me attribute it to Hochster in the most general case. So sigma is again the boundary of a simplicial polytope, or more generally a k-homology sphere, of dimension d minus 1. Then the degree d component of this ring is isomorphic to, well, in the boundary of a polytope case I don't have much choice, but in the more general case I look at this over a field and I can write k here. And what I can do now is write down a pairing between degree k and degree d minus k: this goes to degree d, and this pairing is perfect. So we really have a nice Poincaré duality algebra. And this works in a context that is much larger than just the case of polytopes; it generalizes immediately. We will see a proof that is much simpler than the one classically used, but probably not today. All right. But... So when you say k-homology sphere, you mean that locally the links all have the homology of a sphere of the appropriate dimension? Yes, exactly. The homology over the field k is concentrated in the top dimension, and I want it to be one-dimensional every time, for all the links, including the complex itself. Yes. All right. So, homology sphere. Let me write it down here: k-homology sphere.
It means for me: the complex has dimension d minus 1, it is Cohen-Macaulay, and if I look at the homology of the link of a face sigma in the complex, in dimension d minus 1 minus the cardinality of sigma, then, again with k coefficients, this is isomorphic to the ground field. This covers both the links and the complex itself. And for the Cohen-Macaulayness you also needed all those vanishing conditions... Yeah, yeah. The Cohen-Macaulayness is included, so it's Cohen-Macaulay plus this: the homology is concentrated at the top, and additionally I say the top homology is one-dimensional for all sigma. All right, like this. I'm sorry, I have a question. Yes. You said that this A^d is one-dimensional, and I suppose the higher components, A^{d+1} and all the others, are all zero, right? Yes, yes, that's right. Can you explain why this is true, at least at this level? I will now basically give you a way to compute this ring in the case of boundaries of polytopes, all right? And then next time we will see how to do this more generally, OK? But if you want, it follows already from the formula for the h-polynomial that I gave. I told you: write down the face numbers on the diagonal and then use this inverse Pascal trick to compute the h-polynomial. Every time I took the Artinian reduction, every time I quotiented by a linear form, I subtracted one rank from another; that's the formula that comes out. And there you see that the top non-trivial coefficient is h_d. That's it, OK? That's one way of seeing it. Now I will do the geometric argument, OK? Good. So let me try to give you some geometric intuition. Let me introduce a notion. Consider a simplicial complex Delta of dimension d minus 1. I call it shellable...
Well, if d minus 1 equals zero this is easy, and in higher dimensions I have to say something non-trivial. Either Delta is a (d-1)-simplex, so Delta is really just one simplex of dimension d minus 1, or there exists F, a (d-1)-simplex in Delta, such that two conditions hold. I can look at the simplicial complex induced on the remaining facets, Delta minus F, the complex induced by the remaining facets. First, this complex has to be shellable again, and second, the intersection of Delta minus F with F is shellable of codimension one. Here, do you assume that every simplex is contained in a simplex of dimension d minus 1? It's not part of the definition of simplicial complex, but you look only at the maximal-dimensional facets to define Delta minus F. Yeah, in particular it has to be, in particular it will be pure: every maximal simplex will be of the same dimension, that's right. So this complex will be contractible? No, no, no, it could be a sphere. So let me give an example: the boundary of this triangle. If I take out this face, the intersection is of codimension one and shellable again. Yeah. Well, the implication is that it's always Cohen-Macaulay; it's in fact homotopy Cohen-Macaulay, which is the strongest form of Cohen-Macaulay. So this implies that Delta is Cohen-Macaulay. But it seems to me that you are using a situation where you have a sphere; it should be like the one where you remove the F at the beginning and attach the others in order to... No, no, no, this is why I take Delta minus F: it is the subcomplex induced by the remaining facets, all the other facets. So let me give you an example.
So, this complex and this complex, OK. If I take this facet here and remove it, what I have left is the complex induced by the remaining facets, which are this one and this one, and the intersection of F with this here is exactly this face. So indeed this is shellable, or at least I can do the first step of the shelling. And you mean pure of codimension one when you say codimension one? Shellable, and shellable in particular will imply pure. It is shellable again, shellable of codimension one, OK. Pure if it's a connected... Yeah, OK, so in the boundary of the simplex, you're right, it's not really needed; it's equivalent to say it's pure of codimension one, that's right. But let's leave it as shellable of codimension one, that's fine. Johanna, you look skeptical. Depending on how you read the first condition: do you define the facets to be exactly the (d-1)-simplices? That's a bit fishy. OK, you're right, you're right. So let me just say pure: the complex is pure of dimension d minus 1. And pure means all maximal faces, which are called the facets, have the same dimension. Thanks, yeah, OK. So now pure is part of the definition. Yeah, but concerning the second condition, what I meant is: say you have those two triangles with a face in common, then you take another one which has a vertex in common with one of them, and then if you remove the middle one, the rest intersects it in two components, and only one of them is of codimension one. Yeah, I mean, you mean I could also remove this face first. No, no, no. I draw another triangle next to those two, and another one that touches at the other vertex.
This one only touches the triangle at a vertex, and then you remove the middle triangle. Yes. The intersection of Delta minus F with F will be disconnected. Yes. And it will be shellable? No, it will not be pure. So now pure saves us. Ah, all right, because shellable requires pure. OK, now you put it back. OK, you fixed everything. Yeah, yeah. Yeah, thank you. Anyway, this complex is not shellable, right? If I try to remove this face, then obviously the intersection with whatever remains is of codimension two. All right. Yes. It's always homotopy Cohen-Macaulay. Yes. Which means that it's homotopy equivalent to a wedge of spheres, in particular a wedge of spheres of dimension equal to the dimension of the complex. OK, and that also applies locally. Yes, so it's Cohen-Macaulay with respect to homotopy, even. Is it the same as what Hochster defined in one of his early papers, something on shellability? Maybe I'm confusing it with something else. I mean, this notion is rather old; it goes back to Whitehead, so I don't remember it from Hochster. I'm sure Hochster talked about shellability once, but by then the notion was already standard. It comes from PL topology, usually. Yeah. So what's the role? Why does shellability imply Cohen-Macaulay? Sorry? Can you explain? OK, OK. So, to answer Jan's question essentially, and to explain a little bit more about shellability, I will tell you. Again, you're already at the spoiler. Yes. Which board is the oldest? I think this one. So, for Delta shellable, we can compute: if Delta is shellable, then we can look at the ring A(Delta) with respect to theta, a linear system of parameters. All right, so these are linear forms, their number equal to the codimension.
And I can look at the restriction map from A(Delta) to A(Delta minus F), without this facet. OK. So I have my simplicial complex Delta and I look at the restriction to Delta without F, right? I basically look at one shelling step. And then the point is that the kernel of this map is generated by a single face, x_{sigma_F}. Let me write it as x because we're in the face ring picture, not the cone-wise polynomial picture. Here sigma_F is the minimal face of the simplex F not in Delta without F. What is your theta? So the number of linear forms is equal to the codimension; I'm not in the case where I take out more linear forms than the codimension. So it depends on the generic parameters? Yeah, secretly it depends on it. And now we can actually prove Poincaré duality, in this case where sigma is the boundary of a polytope, in a rather easy way. Is this map an epimorphism? Yes, this is always surjective. Well, why? We are just restricting: if you think about it as cone-wise polynomials, we are just restricting to a subspace. Yeah. What is the connection to Cohen-Macaulayness? Shellable means Cohen-Macaulay automatically. As I said, it even means homotopy Cohen-Macaulay. The other way around is not true. But the real condition is on the local homology, not just the global homology? Yes. OK, the condition on the global homology, of course, is immediate: it's immediately Cohen-Macaulay over all fields, it's homotopy Cohen-Macaulay even, it's a wedge of spheres globally. And there's a lemma or a little proposition that you can prove: if Delta is shellable, then the link of every face sigma is also shellable. Yeah, but a wedge of spheres where the top one is... ah, OK, of the top dimension, OK. So it's independent of the field, OK. Ah, so the shelling gives some kind of order, a basis? Yeah, yeah, that's it. That gives us, yeah.
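An editorial aside, not from the lecture: the recursive definition of shellability given above is equivalent to the usual reformulation that an ordering F_1, ..., F_m of the facets is a shelling if, for each j at least 2, the intersection of F_j with the union of the earlier facets is a nonempty pure complex of codimension one in F_j. For small complexes this can be tested by brute force; a Python sketch under that standard reformulation, with examples of my own choosing:

```python
from itertools import permutations

def is_shelling(order):
    """Check that the facet ordering F_1, ..., F_m is a shelling: for j >= 2,
    the maximal faces of F_j intersected with the union of earlier facets
    must all have codimension 1 in F_j."""
    for j in range(1, len(order)):
        Fj = frozenset(order[j])
        inters = {Fj & frozenset(order[i]) for i in range(j)}
        # maximal elements among the pairwise intersections
        maximal = [s for s in inters if not any(s < t for t in inters)]
        if not all(len(s) == len(Fj) - 1 for s in maximal):
            return False
    return True

def is_shellable(facets):
    """Brute force over all orderings; fine for a handful of facets."""
    return any(is_shelling(list(p)) for p in permutations(facets))

print(is_shellable([{1, 2}, {2, 3}, {1, 3}]))  # True: boundary of the triangle
print(is_shellable([{1, 2}, {3, 4}]))          # False: two disjoint edges
```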
Which board is the oldest one now? Maybe this one. Actually, not this one. So: if sigma is the boundary of a polytope, then sigma is shellable, OK? That's a theorem, and it's nice to prove by a picture; it's very easy. So let's say we have a polytope, and then we have a little moon program on it, a little rocket program: there's a rocket on one of these facets here. And then you just shoot this rocket off in a generic direction, and you note the facets, the simplices of the boundary, in the order they come into view. All right, so you start here, that's the first one, and then maybe this one comes into view first, and at some point this one. And it's a very ambitious rocket program, so it reaches infinity and comes back from the other direction, and when you come back from the other direction, you record the facets in the order that you lose sight of them. So maybe it goes here, then here, and then I lose sight of this one here first, four and five, and at some point I land here, six; maybe it's a crash landing, but that doesn't matter. All right, that's a shelling. And it's not hard to see that any shelling of a sphere has the property that you can turn it around and it's again a shelling. In particular here, with the rocket program, it's trivial, because you can just send the rocket in the other direction, and the reverse shelling is a shelling as well. Is this true for any shelling, not necessarily the rocket type? I mean, the rockets don't give you all the shellings. No, no, they don't give all the shellings; these are special, line shellings. But for any shelling of a sphere, the reverse is still a shelling. Yes, for any shelling of a sphere.
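Not from the lecture, just an illustration of the rocket picture in the simplest case, a convex polygon in the plane: the rocket traveling along a generic ray crosses each edge's supporting line at some time t; taking the edges with positive t in increasing order (coming into view) and then the edges with negative t in increasing order (losing sight on the way back from infinity) gives a line shelling. A Python sketch; the square example and the generic choices of start point and direction are mine:

```python
def line_shelling_2d(vertices, p, w):
    """Order the edges of a convex polygon (vertices listed counterclockwise)
    by the line shelling along the ray p + t*w, assuming p, w are generic."""
    n = len(vertices)
    times = []
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        # outward normal of edge a->b for a counterclockwise polygon
        nx, ny = b[1] - a[1], a[0] - b[0]
        off = nx * a[0] + ny * a[1]               # edge lies on nx*x + ny*y = off
        t = (off - nx * p[0] - ny * p[1]) / (nx * w[0] + ny * w[1])
        times.append((t, (a, b)))
    pos = sorted(tv for tv in times if tv[0] > 0)   # edges coming into view
    neg = sorted(tv for tv in times if tv[0] < 0)   # edges seen on the way back
    return [edge for _, edge in pos + neg]

square = [(1, -1), (1, 1), (-1, 1), (-1, -1)]
# start just inside the right edge and fly off roughly to the right
print(line_shelling_2d(square, (0.9, 0.05), (1.0, 0.1)))
```

Reversing the direction of the ray reverses the order, which matches the remark that the reverse of this shelling is again a shelling.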
OK, so here's a fact: if you are a homology manifold and you're shellable, then you are automatically either a PL sphere, so PL homeomorphic to the boundary of the simplex, or a PL ball. These are the only situations. OK, why? Well, you can think of it this way: every shelling step induces a PL homeomorphism, except the last one, where you may complete the sphere, and that's it. All right, so if you want: a shellable homology manifold, with any coefficients, is either a PL sphere or a PL ball. PL sphere means PL homeomorphic to the standard sphere? Yes, yes. So it doesn't matter what field you start over; it's automatic. And so it's either a PL sphere or a PL ball; you don't get any other objects. So if it's closed, it's a sphere; otherwise it's a ball. OK? I mean, it's a side fact, it's not so important for us. Why can't there be a wedge? Because a wedge will not be a manifold in general: you will have the wedge point. That's it. All right, and now we can prove Poincaré duality in a rather easy way. And how do you get a shellable thing which is homotopy equivalent to a wedge of spheres, what is an example? OK, so the question is how I get a shellable thing that is a wedge of spheres. Here's an example: two boundaries of triangles, two 1-spheres, glued together at a point. This is shellable and a wedge of spheres. Ah, because you can shell each one. Yeah, of course. And now, why do I have Poincaré duality? Well, this tells me the following. If I have an element alpha in A^k(sigma), a formal combination of monomials of degree k, then at some point I have enough monomials in the shelling to write it, right? I have all the monomials I need to write this element.
And let's say this happens exactly at shelling step F, with F meaning the facet that I remove at that step of the shelling. What do you call writable? You can write it: you have enough generators, right? These elements generate my algebra completely, and at some point you have your favorite element. You want to prove Poincaré duality: for the given element alpha in degree k you want an element beta in degree d minus k such that alpha times beta, in degree d, is non-trivial. Right? That the pairing is perfect, that's what I want to prove. So suppose alpha is in A^k; that's the mystery element that I want to find the pairing partner for. I want to pair it in some way, yeah? Now, I know how to give a basis of my ring in terms of the shelling. I know that a shelling exists, because I'm in the situation of the boundary of a polytope, and I know that I can generate a nice basis for this ring, and at some point I will have enough monomials of degree k together to write this element alpha. OK? In this setting you get a filtration, not a basis, right? Yes, exactly. I mean, I fix theta, this specific choice of parameters; I gave you a condition for what a good choice is. Right? So, writable in terms of the filtration? OK, so maybe writable is not the best word here: you want to write it in terms of the simplices that are left. All right.
I remove the facets one by one. All right? And every time I do this, the kernel of the restriction map, from what I have to what I have left after the removal step, is exactly one-dimensional: it's really just the multiples of the one element associated to the shelling step. OK, so each time, writable means that you want to write alpha in terms of those guys in the kernel? Exactly, in terms of those elements that I have here in the kernel. In other words, you want alpha to go to zero at this shelling step. Yeah, that's it. All right. OK, so now what do I do? I have this element alpha and I have my shelling step F. And now I can turn the whole shelling around; the reversed shelling is again a shelling, and I look again at what happens in this step F. All right? So alpha is a linear combination of earlier things, plus a coefficient times the element that appears exactly in this last step: lambda_{sigma_F} x_{sigma_F}. I don't care about the rest; alpha is equal to this plus earlier terms. All right? And now what do I do? I pair this: I define beta to be exactly x_{F minus sigma_F}. Why is this the right choice? If in my shelling sigma_F was the minimal face of F that is not in the remaining part, then when I turn the shelling around, what is left at this step in the reverse shelling is exactly the complement of sigma_F in F. Yeah? That's it. And now I pair these two, all right?
And what I get is exactly of degree, OK, so alpha here is of degree k, which means that sigma_F is of cardinality k, and this beta here is exactly of the complementary degree. In this way of writing it, alpha times beta is exactly lambda_{sigma_F} times x_{sigma_F} times x_{F minus sigma_F}, which is just the same as lambda_{sigma_F} times x_F, and this is not zero. All right? And that's it. In this way you have geometrically proved Poincaré duality for the case of boundaries of polytopes. All right? That's it. OK, I think I used too much time on this. How much longer do we have? Negative 30 minutes. All right. So now we have Poincaré duality for this. I think we started late, but I don't know whether people have to leave. So yeah, you are the boss. Yeah, OK. So we continue on Wednesday. All right.
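A closing aside, my own sanity check rather than anything from the lecture: for the boundary of the d-dimensional cross-polytope (the octahedron for d = 3), the face ring is k[x_1, .., x_d, y_1, .., y_d] modulo the monomials x_i y_i, and the linear system theta_i = x_i - y_i coming from the vertex coordinates identifies y_i with x_i, so A(sigma) becomes k[x_1, .., x_d]/(x_1^2, .., x_d^2), with the squarefree monomials x_S, for S a subset of {1, .., d}, as a basis. The Poincaré pairing is then x_S times x_T equals x_{[d]} when T is the complement of S and 0 otherwise, so the pairing matrix between degrees k and d - k is a permutation matrix, hence perfect. A small Python sketch verifying this:

```python
from itertools import combinations

def pairing_matrix(d, k):
    """Pairing A^k x A^(d-k) -> A^d for A = k[x_1..x_d]/(x_i^2):
    the product x_S * x_T is x_{[d]} if T is the complement of S, else 0."""
    rows = list(combinations(range(d), k))
    cols = list(combinations(range(d), d - k))
    full = set(range(d))
    return [[1 if (set(S) | set(T) == full and not set(S) & set(T)) else 0
             for T in cols] for S in rows]

for k in range(4):
    M = pairing_matrix(3, k)
    # a permutation matrix: exactly one 1 in every row and column
    perfect = all(sum(r) == 1 for r in M) and all(sum(c) == 1 for c in zip(*M))
    print(k, len(M), perfect)   # degree, dim of A^k, pairing perfect?
```

The dimensions that appear, (1, 3, 3, 1), are exactly the h-vector of the octahedron, matching the dimension count that the duality predicts.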