The title is The Three Differents. So, please — thank you very much. And I would like to thank all the organizers very much for this opportunity. Can you hear me? Okay — for this opportunity to give a talk here, in a fantastic place. My first international talk was here, in 1994. And I did not mention the other day that Chung actually organized the first real conference in commutative algebra here, in 1992. So this has been a wonderful place for commutative algebra — I think this is the fifth one — and we are really glad that we can all come here together and do the things we like. In particular, I would like to thank Jugal Verma, because he did enormous work for all of this, not just this conference but the whole special year on tight closure. So thank you very, very much, Jugal, for all the effort you put into this and for everything you have done. And I would like to say a word about the fantastic work that Mel Hochster and Craig Huneke have done for commutative algebra. I have not worked so much on Mel's mathematics; however, I have been very much influenced by the way Mel shaped commutative algebra. Obviously, all of us have been influenced by his mathematics, because, as we have seen in Craig's talk and as many of us have cited, he has proved fantastic results, and he has worked across seven decades, so obviously we are all influenced. But as a person and a mathematician, I have been very much influenced by the way he has shaped the field and made it so welcoming to everyone. With Craig, for me, it is even more personal, because I have been influenced by his mathematics throughout my whole career. Craig has touched every part of commutative algebra; his work is so broad.
And as Bernd said the other day, he started many things — he may not have finished them all, but the fact that he started so many different themes gave young people like me the opportunity to get into those themes and develop them. I have worked on the core, on reductions, on Rees algebras, on multiplicities — areas where he started everything. So my debt to Craig Huneke can never be repaid with a thank you, as much as I would like to thank him for everything he has done, besides being, as David Eisenbud said yesterday, just a fantastic, gentle soul. Craig cannot hear me right now — it is three o'clock, or whatever it is, in the Midwest — but thank you for being, and for having been, such a pillar of our field. So now I would like to start my talk. I don't know if I will finish; I just want to be clear and not go too fast, so you can follow, because this is maybe a technical subject that not many people have worked on recently — but it is a classical subject, so there is a lot of work done in the past. I want to talk about differents; this is joint work with Bernd Ulrich, and the talk is really about ideals that arise in the study of ramification loci: the Kähler, the Noether, and the Dedekind differents. Our goal was to give a homological interpretation, to make it possible to compute them in a different way, and to give explicit formulas for certain classes of rings. So let me introduce the players. I assume that R is an algebra over A, essentially of finite type. I assume throughout the talk that A is regular and that R is reduced. In this situation I can write R as follows: take a polynomial ring over A — add some variables — then localize, and then go modulo an ideal, say generated by f_1, …, f_n. That is the ideal I, okay?
So once you write R this way, you can construct a matrix that we all know, called the Jacobian matrix. In the polynomial ring I can write it simply as the matrix of partial derivatives of the f's, and it is clearly an n by d matrix. Now I take its image in R and transpose it. The transposed matrix, with entries taken in R — I put the bar and then the transpose — presents a famous module, the module of differentials: here you have R^d, with basis R dx_1, …, R dx_d, and the module presented by this matrix is the module of differentials Ω_{R/A}. Very explicit. We are all familiar with the module of differentials because it characterizes the regularity of the ring, through the Jacobian criterion. For that I have to assume a bit more — can you read this? too small? is this fine? okay — namely that A is a perfect field, and that the ideal I is equicodimensional of height g, so all its minimal primes have the same height. Then we can look at the Jacobian ideal, which is the Fitting ideal Fitt_{d−g} of the module of differentials — if you want, the ideal of g by g minors of this matrix. And what the Jacobian criterion says is that the singular locus of R is given exactly by this Jacobian ideal; in particular, the module of differentials is free if and only if R is regular. We know that very well. And there are many important questions about the module of differentials — very famous conjectures about it and about its dual, the module of derivations — such as the Berger conjecture.
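As a toy illustration of the Jacobian criterion just described — a minimal sketch with an example of my own (the cusp), not one from the talk — one can compute the Jacobian matrix and its minors in SymPy:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = y**2 - x**3                     # R = k[x, y]/(f), a cusp; height g = 1

# Jacobian matrix of the single relation: the 1 x 2 matrix (df/dx, df/dy).
jac = [sp.diff(f, v) for v in (x, y)]
print(jac)                          # [-3*x**2, 2*y]

# For g = 1 the g x g minors are just the entries, so the Jacobian ideal
# is (-3*x**2, 2*y) in R.  The Jacobian criterion: the singular locus is
# V(f) together with the vanishing of these minors.
sing = sp.solve([f] + jac, [x, y], dict=True)
print(sing)                         # the cusp point at the origin
```

In characteristic 0 this confirms the cusp is singular exactly at the origin, as the criterion predicts.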
For example, the Zariski–Lipman conjecture, and the Vasconcelos conjecture. They were raised a long time ago and remained open for a long time, and some of them, like the Berger conjecture, are still open. We saw that Craig has recently worked on it with Vivek, as Vivek mentioned, but it is still very much open — if you want to work on it; I know that Bernd started his thesis on it. It is a very important conjecture. The Zariski–Lipman conjecture is one where Mel Hochster made a big, big impact: he solved the positively graded case in characteristic zero. And remember, the Vasconcelos conjecture was almost completely solved by Briggs very recently. So these are very important questions, with lots of activity in this area. Where are we going now? I am going to define the players, and they are connected, as I said, to the module of differentials. So hopefully you remember my setting — don't forget the setting. I am going to raise the board so I can write. Bernd told me to write very large, because I have a tendency to write small, but then I have to erase. Now I want to assume even more: I want A to be not only regular but regular local, and the map to be local. So let me write R with its maximal ideal 𝔪 and residue field, and A with a different maximal ideal 𝔫 and residue field. This is local, all right. And now we say that the ring R is unramified over A if the following are true. First, 𝔪 has to be obtained as the extension of 𝔫: 𝔪 = 𝔫R. Second, the extension of residue fields has to be separable algebraic. Same assumptions as before: A is regular, R is reduced — that is from the beginning; I am never changing my assumptions — and now we take the local situation. What I want to state now is a theorem about this. Maybe my attribution is not correct, because it is a classical result, but I would say Berger–Kunz.
They characterized this: R is unramified over A if and only if the module of differentials is zero. In fact they compute the minimal number of generators, and you can see that it is zero exactly when those two conditions are satisfied. And now we can define the first different — there are three, as I said — the Kähler different. Easy to define: the Kähler different is simply Fitt_0 of the module of differentials. I always write the extension, R over A, and the K stands for Kähler. If you want, it is the ideal of d by d minors of the matrix from before. You can see it is very much related to the Jacobian ideal, and indeed we can obtain the Jacobian ideal from the Kähler different: you can take A to be a Noether normalization, and if you vary the Noether normalization and compute the Kähler differents, you can obtain the Jacobian ideal. Now, immediately as a corollary of the theorem, we get that the Kähler different defines the ramification locus. What is happening? The Fitting ideal Fitt_0, you remember, is connected to the annihilator of a module — what annihilates a module, for example. So we know very well that the Kähler different is contained in the annihilator of the module of differentials, and they are equal up to radical. So now I will define the ramification locus — let me erase down there, so I don't have a messy blackboard. The ramification locus can be defined as the set of Q in Spec R such that, when you localize A at the contraction of Q, the extension becomes ramified. This is exactly the support of the module of differentials, and because of the inclusion above it is cut out by the Kähler different.
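To make the Kähler different concrete, here is a classical number-theory example of my own (not from the talk): for A = ℤ and R = ℤ[i] = ℤ[x]/(x² + 1), the module of differentials is presented by the 1×1 matrix (f′(i)), so Fitt₀(Ω) is the principal ideal (f′(i)) = (2i) — and indeed ℤ → ℤ[i] ramifies exactly at 2. A quick SymPy check:

```python
import sympy as sp

# A = Z, R = Z[i] = Z[x]/(f) with f = x^2 + 1.
x = sp.symbols("x")
f = x**2 + 1
fp = sp.diff(f, x)                  # f'(x) = 2*x

# Kahler different = Fitt_0(Omega) = (f'(i)) = (2i) in Z[i].
gen = fp.subs(x, sp.I)              # 2*I
norm = sp.expand(gen * sp.conjugate(gen))   # norm of the generator = 4

print(gen, norm)
# Only the prime 2 divides the norm 4, so the ramification locus
# of Z -> Z[i] consists exactly of the prime above 2.
```

This matches the classical fact that the discriminant of ℚ(i) is −4.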
In fact, we know even more about the ramification locus, because there is a famous theorem — we have to assume a little more, though — called the purity of the branch locus, proved by Zariski, Auslander, Nagata; lots of names are attached to it. It says: assume the characteristic is zero, R is also a domain, and the extension of the two quotient fields is algebraic. Then, if R is normal, either the branch locus is empty or it is pure of codimension one. So to check that something is ramified, you only have to check primes of height one. Now I want to go to the second player. For the second player I have to look at the module of differentials again, in a slightly different way — not just through the presentation matrix, but through another definition, in a certain sense, of the module of differentials. Remember that there is another approach: you can consider the enveloping algebra R ⊗_A R, and then there is a natural map, the multiplication map, sending it to R. This map is always surjective — you actually multiply the elements, and that gives you everything. But there is a kernel: the so-called diagonal ideal, which is generated by the elements x ⊗ 1 − 1 ⊗ x for x in R. Let me write: diagonal ideal. And this allows us to give another definition, because one can prove that Ω is simply, if you want, D/D², which you can also write as D tensored over the enveloping algebra with R. And now I can define the next different — the Noether different, which was introduced by Emmy Noether.
The Noether different is this: you take the annihilator of D — this is an ideal in the enveloping algebra; you look at what annihilates D — and then you push it forward along the map μ to R. So it is the image in R of the annihilator of D in the enveloping algebra. As a definition this is a little more obscure, but it is the definition of the Noether different. Because of this, and the fact that Fitt_0 is contained in the annihilator, we get immediately — you have to work a little bit, but not much — the inclusion of the Kähler different in the Noether different. And then the Noether different is in turn contained in the annihilator of Ω, the annihilator now taken over R. So you have a chain of inclusions, which tells you that the Noether different also defines the ramification locus. I am using again that the Fitting ideal is contained in the annihilator — but remember, the two annihilators live in different places: one over the enveloping algebra, one over R. (Yes — R ⊗_A R; it is written down here.) So this is the second different. Now, one could ask — and this is a very classical question — when are these equal? In general we will ask this question about all three differents. And equality is true, for example, if R is a complete intersection and flat over A — that is, the ideal I defining R is a complete intersection ideal and R is flat over A. Let me write: flat, local, complete intersection. And now we can go for the third different, which is a little harder to define. (Here I am only saying when they are equal — in that case they are equal; no further assumptions.)
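In symbols, the enveloping-algebra construction just described — as I reconstructed it from the board — reads:

```latex
\[
  \mu \colon R \otimes_A R \twoheadrightarrow R, \qquad
  D := \ker \mu = \bigl(\, x \otimes 1 - 1 \otimes x \;\bigm|\; x \in R \,\bigr),
\]
\[
  \Omega_{R/A} \;\cong\; D/D^{2} \;\cong\; D \otimes_{R \otimes_A R} R, \qquad
  \mathfrak{D}_N(R/A) \;:=\; \mu\bigl(\operatorname{ann}_{R \otimes_A R} D\bigr),
\]
\[
  \mathfrak{D}_K(R/A) = \operatorname{Fitt}_0(\Omega_{R/A})
  \;\subseteq\; \mathfrak{D}_N(R/A)
  \;\subseteq\; \operatorname{ann}_R(\Omega_{R/A}).
\]
```

The notation 𝔇_K, 𝔇_N for the Kähler and Noether differents is mine, for bookkeeping.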
I want to go now to the third different, but let me say again that, as I mentioned before, this is a classical subject, so many, many people have worked on it. There are the old names — Noether herself is involved here, and Auslander–Buchsbaum: Noether, Tate, Berger, Kunz, Auslander–Buchsbaum, Waldi. We have many, many names — Kunz, Ulrich, these are a little younger than Emmy Noether — Lipman, Scheja–Storch; a lot of names are attached to this, and I am sorry if I don't mention some of the people. The last different, though, is the most classical, because it is the one introduced by Dedekind; it is the oldest and comes from number theory. Its definition is even harder, and I am not going to give you the precise definition; I will give you a way to define it, which is what I need to work with it, and which is usually the best way to think of it. So, for the third different I need to assume a little more than I have assumed so far: I define it when the map A → R is finite. (You can do essentially finite, since you can localize, but I am going to do it when it is finite.) Now, the quotient field of A is obviously a field, because A, remember, is a regular domain. And R is not a field, but it is reduced, so its total ring of quotients is a product of fields. What I want is that these field extensions are all separable, because I want to have a well-defined trace map. Then the Dedekind different, the third player, is in theory a fractional ideal: it is the inverse of the Dedekind complementary module — I will tell you what this is. But it turns out not to be merely a fractional ideal, because in my case A is normal.
So it turns out to be an ideal, because A is normal — R is not normal, but A is. So it is an ideal in R. What is this complementary module? This object — it doesn't write down so well — is isomorphic to a canonical module of R, because A is Gorenstein. So just think of it as the inverse of the canonical module. And if you fix A — for instance a Noether normalization — it is actually unique: you can think of it as the canonical module of R with respect to A, in that setting. So just think about it this way. Then you can ask again: how is it related to the other differents? Obviously this one was already there when Noether introduced her different; the Dedekind different is what existed, and when she introduced the new different she had to compare it with what was there. And that is actually a harder theorem. She did prove that the Noether different is always contained in the Dedekind different. So you see already a chain of inclusions: the Kähler is in the Noether, which is in the Dedekind. And when do you have equality? The way she phrased it — because at that time one could not say Cohen–Macaulay; there was no Cohen–Macaulay yet — is that R is free over A. But that is the same as saying that R is Cohen–Macaulay. So that is the situation where they are equal. As I said, I now want to compare them and talk about equality. All three — the Dedekind is the harder one to show — define the ramification locus, but with very different scheme structures. And what happens is that the Dedekind and the Noether differents are a little nicer as ideals: they are always unmixed. The Kähler usually is not, but the Kähler is more explicit — though that does not mean it is actually easier to find, because it is a mess to compute those determinants, that Jacobian.
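For orientation, the trace-form definition and the resulting chain of inclusions, in symbols (my reconstruction; the notation is mine):

```latex
\[
  \mathfrak{C}_{R/A} \;:=\; \bigl\{\, x \in \operatorname{Quot}(R)
    \;\bigm|\; \operatorname{Tr}(xR) \subseteq A \,\bigr\}
  \;\cong\; \omega_{R/A}, \qquad
  \mathfrak{D}_D(R/A) \;:=\; \mathfrak{C}_{R/A}^{\,-1} \;\subseteq\; R,
\]
\[
  \mathfrak{D}_K(R/A) \;\subseteq\; \mathfrak{D}_N(R/A)
  \;\subseteq\; \mathfrak{D}_D(R/A),
\]
```

with equality of the Noether and Dedekind differents when R is free over A, that is, Cohen–Macaulay.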
In a sense, the Dedekind is actually the easier one, because it is the inverse of a canonical module, and the canonical module is very much studied; in that sense it is somewhat more accessible. So it is important to know when they are equal, even just to compute them explicitly — to compute the Kähler different, for example. In fact, their equality, or their comparison, implies the classical result, used by Hochster and Huneke, that the Jacobian ideal is contained in the conductor; and also the fact, proved originally by Tate, that the Jacobian ideal coincides with the socle for an Artinian complete intersection — and this has now been generalized by Eisenbud and Ulrich, for example. So I will talk about the differents, and what I want to do, as I said, is give a homological interpretation. Now, you could ask in general when they are equal — what was known before us? I think the most general statement known was: obviously, for a complete intersection everything is equal; but more generally, if R is an almost complete intersection in the linkage class of a complete intersection, then we knew they were all equal. Okay? That is the more general statement. But we will see other instances now, for classes of ideals. There is another player I want to introduce, but this player is very well known — it has been talked about a lot, and it is something Craig has worked a lot with — and it is simply linkage. So let me introduce, for at least the third time in this conference, what a link is and what liaison is. All right? I have R a local Gorenstein ring, and I and J ideals of R (you don't need all of that, but it's okay). We take a complete intersection α_1, …, α_g, and then we require that I = (α) : J and J = (α) : I. We have heard a lot about linkage already, so I don't have to say much.
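Here is a tiny worked example of a link, to fix the definition (my own example, not from the talk). In S = k[x, y, z], take the coordinate line I = (x, y) and the complete intersection 𝔞 = (x, yz) ⊆ I. Then

```latex
\[
  J := \mathfrak{a} : I = (x, yz) : (x, y) = (x, z),
  \qquad
  \mathfrak{a} : J = (x, yz) : (x, z) = (x, y) = I,
\]
```

so the two lines V(x, y) and V(x, z) are linked by the complete intersection 𝔞 = (x, yz), whose zero set is exactly their union.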
I should say that the reason linkage was introduced, and is so classical, was really to study varieties — to classify them, but also to study and understand them. For example, there are very interesting results about divisor class groups of Rees algebras; these algebras were very much studied via linkage, by myself, Ulrich, Huneke, and Vasconcelos, among others. Then there are residual intersections, which are even more — as we say in Italy — ubiquitous; but linkage is an extremely useful tool. Linkage, Gorenstein linkage — they are all very useful. However, for what we want to do now, we consider not a single link, or even the linkage class: what we want, as a fourth player — or rather a fifth, because I already mentioned the Jacobian ideal — is the sum of all links. The way I am going to define it is this: if I is a complete intersection, I just declare it to be R; and if I is not a complete intersection, I take the sum of all links. Why do I do that? Because this σ(I) — and actually σ(I) modulo I — encodes all the properties of the linkage class, all the properties of the links; it encodes a lot of information, and more or less the linkage behavior of I. This has already been studied quite a bit. For example, we used it to understand blow-up algebras again, because we used it to prove that direct links of symbolic powers of prime ideals are integral over a complete intersection, with reduction number one. So it produces a lot of ideals whose algebras have good behavior. It is quite important, and we see it as better behaved than a single link, so we really want to look at all the links together. And what is the idea?
The idea is to look at this Jacobian ideal, these three differents, and this σ(I)/I, and give a homological interpretation. Okay, so that is the main theorem; let me state it. I have no idea how I am doing with time, but it's fine — whatever I do, I do. I will not spend time on proofs; maybe at the end I will give an idea of the proof of the application. So what is the main theorem? There are really only two assumptions, so it is clear what they are — I will put them here big. I assume that K is an infinite perfect field. (This board doesn't erase well; I wish I had my chalks, the Japanese chalks.) K is an infinite perfect field. And R is a complete local K-algebra, which I assume is reduced and Cohen–Macaulay. So these are my assumptions; the rest is notation, things I choose. What can I do? I can find a Noether normalization inside R: this is going to be K[[y_1, …, y_d]] — let me see how many variables I want to put here — y_1, …, y_d, because d is the dimension of R. And R can be written as a quotient of a power series ring: I take A and add as many variables as the height of the ideal I that I go modulo. Remember, this ideal is going to be Cohen–Macaulay, and it is going to be generically a complete intersection, since the ring is reduced — actually a complete intersection in codimension one. (A is this one — yes, it is complete.) And not only that: I can choose the normalization so that the relevant field extension is separable. Why do I take a Noether normalization at all? Because I want the Dedekind different to be defined, and for that we need the trace map, so you have to assume this kind of thing. (Sorry? Yes — it is a K-algebra.) Okay, so we are in a very, very good situation.
But remember, this is just something you can choose; it comes from the assumptions. Then what do we take? Here is another slightly strange thing — I will tell you afterwards why I want this generality. I could obviously find a regular sequence, but I do not want to assume we have one: I just want to assume that these elements generate I generically. They do not have to be a regular sequence. And this is going to be very important from the computational point of view, because it is very hard to find a regular sequence, while it is easy to find elements that generate I generically. Then I want to take a Jacobian — a precise Jacobian: these are g elements and g variables, so you get a g by g matrix; take its determinant, and take the image in R. This element is what I call δ. Next — I told you it is a homological interpretation — I consider a minimal free S-resolution of R. This ring is S; so really we take a minimal free resolution of R over S. And I complete f_1, …, f_g to a system of generators of I — call them again f_1, …, f_n — and take the Koszul complex on f_1, …, f_n; call it K_•. So I is generated by f_1, …, f_n. So far, everything can be found explicitly. Then what do you do? You consider a map between these two complexes, and this morphism of complexes should lift the identity on S. Let me write it, because it is the main gadget we are going to use — and it is actually very easy; this is already implemented in Macaulay2, so you can do tons of examples, it is all there already. U is a morphism of complexes such that U_0 is the identity on the ring S, the regular ring S. Then I need a basis element, because I want to specify some things. So — all of this I choose — I consider e_1 ∧ ⋯ ∧ e_g. This is a basis element.
It is the first basis element of K_g, the g-th spot in the Koszul complex — where the e_i are exactly the basis elements of K_1, mapping to the f_i. So I fix this element. And why do I fix it? Because when I look at this comparison, I want to look at the last comparison map, the g-th one, and this basis element will give me one column, the first column. I call L the column ideal, the ideal generated by the entries of that first column. (I was going to call it K, but K is not a good name because we already have the field K — sorry, I changed my mind; it is L.) Because of my assumptions it is actually easy to show that L contains a non-zerodivisor among its entries; changing the basis if needed, we can move it to the first spot, and we call it γ. So γ is a non-zerodivisor on R. Then we consider the row ideal that contains γ — the ideal generated by the entries of the first row — and call it H. So imagine the picture — let me draw it, otherwise I really won't tell you anything else, because I have nine minutes. First column and first row of what? Of the map I call φ, which is U_g ⊗ R. So I take the last comparison map: you have K_g here and F_g there — this is the end of the resolution — and a beautiful comparison map between them. I call it φ because φ is not U_g itself but U_g after you go modulo I, that is, tensored with R. This is the map I look at. The entries of its first column generate L, the entries of its first row generate H, and this corner entry here — it shouldn't stick out — is γ. Okay, γ is just here.
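The picture being described on the board, drawn as a diagram (my transcription):

```latex
\[
\begin{array}{ccccc}
K_\bullet\colon & \cdots \longrightarrow & K_g & \longrightarrow \cdots \longrightarrow & K_0 = S\\[3pt]
 & & \Big\downarrow{\scriptstyle\, U_g} & & \Big\downarrow{\scriptstyle\, U_0 = \operatorname{id}_S}\\[3pt]
F_\bullet\colon & 0 \longrightarrow & F_g & \longrightarrow \cdots \longrightarrow & F_0 = S
\end{array}
\qquad
\varphi \;:=\; U_g \otimes_S R,
\]
```

where K_• is the Koszul complex on f_1, …, f_n and F_• a minimal free S-resolution of R. Then L is the ideal of entries of the column of φ indexed by e_1 ∧ ⋯ ∧ e_g, γ ∈ L is a non-zerodivisor moved to the corner spot, and H is the ideal of entries of the first row.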
L, the column ideal; H, the row ideal — of this map, all right? So now that everything is set up, and you know what all these objects are, I can state the theorem. Then I would like at least to tell you the applications — where we actually compute something — without spending too much time; so let me just give the statement, and then the important application. I do have a bit more time, though, because I started four minutes late — I have been checking the clock. So what do we get? We describe all these objects in terms of δ, γ, H, and L. First of all, the complementary module — though this part is not really new, because Kunz and Waldi already had it: once you know it, remember, it is the canonical module, and they already expressed it as a column ideal, and you can see that this is a column ideal. At least when f_1, …, f_g form a regular sequence, this is Kunz–Waldi; but we prove with Bernd that you do not have to take a regular sequence — you can take any g elements that generate I generically. And as I said, this is important for computational reasons; our student is here, and she knows that computationally it is much easier, because she is doing all this, computing all these objects. So the complementary module is just one over δ times L. From this, the Dedekind different you get right away — remember, it equals the Noether different, because we are Cohen–Macaulay — and it is the inverse of the complementary module. And I can write the inverse using γ, because γ is in L: it is δ over γ times (γ : L). Okay — because of the inverse. So this part is not so surprising. But then our real job is to find the Kähler different, and the Kähler different has the same multiplier — γ over δ? Did I write it the other way around?
Yes — because it was the inverse, it is δ over γ. So: δ over γ, and now times H. So the Dedekind different is, up to that multiplier, the inverse of a column ideal, and the Kähler different is, up to the same multiplier, a row ideal. I am not going to write the Jacobian ideal, otherwise I have no time — you can obviously write it too, once you have this, again with H and a precise Jacobian. I'll skip the Jacobian and write what σ(I)/I is: it is given by I_1 of φ — the ideal generated by all the entries of this matrix. And, as I said, I want to get to the applications, but let me tell you two things first. Why is this important? It may look like this is more mysterious than the definitions — the Kähler different is some kind of minors; why isn't that more explicit? But this H is more explicit, because it is actually much easier to compute. And it is not just that: these H and L deform — they specialize. This is going to be very important for computations, and the same is true for this ideal σ(I)/I; usually links do not specialize from a deformation unless you have some information — some regularity on the link, a regular sequence on the link — but these objects do specialize. That is the good part. Also, we recover immediately the inclusion of the Kähler different in these ideals, because H times L lands in (γ): the matrix φ has rank one. And you can see when it is an equality: it is an equality if and only if H equals γ : L. And you see that unmixedness is very important here, because then everything is unmixed. Okay. So now I want to go to the applications — I have, I think, at least five minutes. We actually computed these; let me tell you what the applications are, maybe skipping the first two, which are easier. We computed everything for grade three Gorenstein ideals.
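Before the applications, let me collect the formulas of the main theorem in one place, as I reconstructed them from the board (so take the multipliers with a grain of salt):

```latex
\[
  \mathfrak{C}_{R/A} \;=\; \frac{1}{\delta}\, L, \qquad
  \mathfrak{D}_D(R/A) \;=\; \mathfrak{D}_N(R/A) \;=\; \frac{\delta}{\gamma}\,(\gamma : L), \qquad
  \mathfrak{D}_K(R/A) \;=\; \frac{\delta}{\gamma}\, H,
\]
\[
  H \cdot L \subseteq (\gamma)
  \;\Longrightarrow\;
  \mathfrak{D}_K \subseteq \mathfrak{D}_D,
  \qquad
  \mathfrak{D}_K = \mathfrak{D}_D \iff H = \gamma : L,
\]
```

with σ(I)/I expressed by I_1(φ), the ideal of all entries of φ.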
There, the answer is in terms of Pfaffians. In that case the differents cannot all be equal, because the Dedekind different is principal — the canonical module is principal, so this is the inverse of a principal ideal, hence principal — while the Kähler different is not, unless we have a complete intersection. But the generator is a Pfaffian, which you can get practically from Buchsbaum–Eisenbud. Then we computed the grade two perfect case. And then what we really did, which is the big thing, is maximal minors — first for a generic matrix, but we can actually get it for any matrix under some assumptions on heights. So let me write that theorem, so I can at least state it, even if I cannot say how it is proved. I skip the first two cases, which are easier, and go to the main application. You have X, an m by n generic matrix — well, we also get it, for example, for scrolls and things like that, but say generic — and I assume m is strictly less than n. Characteristic zero now; that is important. Then you look at the maximal minors, and you look at X′, the matrix where you delete the first row — an (m − 1) by n matrix coming from X. In this case we get that all three differents are the same. And what are they? They are δ over γ as before, very explicit: you take the maximal minors of X′, and you have to raise that ideal to the (n − m)-th power. This obviously also works for I_2 — though for I_2 the matrix does not even have to be generic; that is in the grade two perfect case. And you can also write the sum of links: it is simply the ideal of (m − 1)-minors of the big matrix, raised to the (n − m). Now, what happens if the matrix is not generic? Since I do not have much time, let me just say: if I take the word generic out, then I have to assume that the height is the best possible, namely n − m + 1.
And then I have to assume that the ideal of (m-1)-minors has height at least n-m+3, two more than the height of the maximal minors, because I want the ideal to be generically a complete intersection in codimension one. And that is actually necessary: we have an example showing that if you don't have that, it doesn't work. With these two conditions you do not get equality in general, but what you get is that one of them is the (n-m)-th symbolic power and the other is the ordinary power. Again, this is because these deform and specialize; but in that case the symbolic power does not have to be the ordinary power. In the generic case it is, by Bruns and Vetter, but if the matrix is not generic, it need not be. And in fact there is equality if and only if the symbolic power equals the ordinary power. For example, for scrolls they are never equal, and you can actually write down exactly what the differents are. And the proof really uses the fact that this L and this H deform, so you can go to the generic case, where you can prove it and use the multiplication structure, because the resolution is an algebra and it is associative. But it is very difficult to actually compute that these are equal; it is not easy at all to show, even using the multiplication structure. You can show one inclusion, but you cannot show that every product of such minors appears. So you actually have to use the action of GL over K on the matrix, because then, using the action, if you have one such product, you get the whole power. And then you use the fact that the ideals are unmixed, and the fact that in codimension 1 everything is a complete intersection, so they are all equal there, to actually prove that they are equal in general. Okay, thank you very much for having listened to me. Thank you. So is there any question or comment? I would be surprised if Alessandra doesn't have a question.
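The non-generic statement can be summarized as follows. This is a sketch; in particular, which of the two sides carries the symbolic power and which the ordinary power is my reading of the stated containments, not something written out explicitly here:

```latex
% Non-generic case (sketch). Assume the height conditions
%   ht I_m(X) = n - m + 1   and   ht I_{m-1}(X) >= n - m + 3,
% so the ideal is generically a complete intersection in codim 1. Then
\[
  \mathfrak{d}_K \;\sim\; I_{m-1}(X')^{(n-m)} \ \text{(symbolic power)},
  \qquad
  \mathfrak{d}_N = \mathfrak{d}_D \;\sim\; I_{m-1}(X')^{\,n-m}
  \ \text{(ordinary power)},
\]
% and the three differents agree iff the symbolic power equals the
% ordinary power (true in the generic case by Bruns--Vetter,
% false e.g. for scrolls).
```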
Sorry, I mean, you know that there are not that many people who can ask questions about residual intersections. Your abstract said that you were using techniques from residual intersections, but there are no residual intersections in the proofs. Actually, you can get a beautiful chain of inclusions: there is a sum of certain links containing the Kähler different, the Kähler different contains the Noether different, which is equal to the Dedekind different, and this contains a residual intersection. Because really the assumption here can be stated equivalently: you can say that R is reduced plus this condition, but you can also say that this ideal, (f_1, ..., f_g, gamma) colon I, is a (g+1)-residual intersection of I. That is really the assumption, and that is what makes everything work. So these are the kinds of assumptions we are using there, and yes, we use linkage and we use residual intersections; that is what we use in the proofs. And that is actually nice for the minors. I think I wrote it correctly. Did I write it incorrectly? No, this one: you think it is the colon of the link? Yes, you're right. Anyway, the famous ideal K, which is (f_1, ..., f_g) colon I; that's why I didn't want K to appear there. Yes, that's correct. So this is a residual intersection and this is a geometric link; that is what is really underneath. And in fact, for minors, there was a result of Jenna on residual intersections that you can use for the links of minors, to do the same characterization from the other side, because these two are equal. She proved that a residual intersection of links can be obtained as a sum of links. So even in that case the Kähler and the Noether differents are equal, and you can express them as a residual intersection. Yeah, so at the end, the Kähler different becomes a residual intersection.
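The answer describes a chain of inclusions and an equivalent form of the standing hypothesis; reconstructed here as a sketch (K is the speaker's "famous ideal", and the fraktur labels for the differents are mine):

```latex
% Chain of inclusions described in the answer (sketch):
\[
  \text{(sum of links)} \;\supseteq\; \mathfrak{d}_K
  \;\supseteq\; \mathfrak{d}_N \;=\; \mathfrak{d}_D
  \;\supseteq\; \text{(residual intersection)}.
\]
% Equivalent form of the hypothesis: with the geometric link
%   K = (f_1, ..., f_g) : I,
% the ideal
\[
  (f_1, \dots, f_g, \gamma) : I
\]
% is a (g+1)-residual intersection of I.
```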
So, as you see, it is always buried in there; I just didn't have time to give the proofs or anything. What about Pfaffians? Do you have a similar result? We only have grade three, and actually we asked our student; she is working on it now, she is working on non-maximal minors. She is trying the non-maximal minors; we did not try the Pfaffians. We expect the Pfaffians to behave well, but we have not computed them. And we talked with Aldo. He says that when you have a symmetric matrix it can be harder and everything can be very different, so we are wary. So we told her: don't touch symmetric matrices; first try non-maximal minors, which is already very difficult. She is trying the two-by-two minors of any matrix, and she already finds very beautiful behavior appearing there. So it's clear that there is a lot of mathematics there already. Yes, yes. And it is nice to bring these classical objects back into the light, to investigate them again and see what they can tell us. Thank you. Do you have a question? Yes: what about if you are over a perfect field K, what happens in the theorem? I mean, the main theorem is fine; it still works, it does not need characteristic zero. But here we need characteristic zero; otherwise, first of all, in our proof the argument using the action falls apart. The resolution? The part about the resolution is still valid; that just needs K to be a perfect field. So the main theorem, which says that one different is isomorphic to H and the other is isomorphic to delta over gamma times an inverse of L, is still valid over any perfect field. But then when you go to the applications, the case of the maximal minors, in that case we do use Emma's result, and she has characteristic zero, because, you remember, she divides when she computes things. And we do use the action; we couldn't do it without the action, and the action needs it.
I mean, for our theorem we even have to go to the real numbers and then back to the complex numbers to do the work. So, yeah. But you don't have counterexamples to this? We don't have counterexamples, no. We didn't compute it in characteristic p; we have nothing in characteristic p. So one could look in characteristic p. Remember, we are always in the equicharacteristic case anyway, because we always have a K-algebra; you need that even to define the Dedekind different well, and all that. But yes, I don't know in characteristic p; that is completely open. And you would have to find different methods, that's clear. It could still be true, but you have to find other methods. Among these classical invariants there is a discriminant ideal, which is at least related to the things you're talking about. Yes, but we did not look at that. But yes. Yeah. So there are lots of interesting objects there to look at. Thank you. So there may be some more questions, but maybe it's time. So let us thank the speaker again.