OK. So I can start. We were still talking about tangent spaces and singular and non-singular points. We had first introduced the tangent space of an affine algebraic set: X ⊆ A^n an affine algebraic set, and p a point of X. We had defined the tangent space at p to X to be the zero set of the differentials d_p f, for all f in the ideal I(X). Each d_p f is a linear polynomial, so this is a linear subspace of A^n — in fact a vector subspace, because it passes through zero. We had also seen that if the ideal of X is generated by some elements f_1, ..., f_r, the tangent space can be written as the zero set of the differentials d_p f_1, ..., d_p f_r, and that this is the same as the kernel of the Jacobian J(f_1, ..., f_r) at p, where the Jacobian is the matrix of partial derivatives. So the map given by these differentials is the Jacobian, and the tangent space is its kernel. Just so that you have some more intuition, one can also see it in a different way. If p is the origin, then the differential of f at 0 is just the homogeneous part of the polynomial f of degree one. That is equivalent to the formula, but maybe a bit simpler, and it gives you more of a feeling that you really have some kind of linear approximation to the zero set. OK. Now I want to introduce another version of the tangent space and prove that it is equivalent. So if X is just a variety and p ∈ X, the more general definition is this: O_{X,p} is a local ring; let m_p be its maximal ideal.
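To make the two descriptions concrete, here is a small sketch with sympy. The curve y = x² is my own example, not one from the lecture; it just illustrates that the degree-one part of f at the origin and the kernel of the Jacobian cut out the same line.

```python
# Sketch: tangent space at the origin of X = Z(f), f = y - x^2 in A^2
# (example curve is my choice, not from the lecture).
import sympy as sp

x, y = sp.symbols('x y')
f = y - x**2

# d_0 f = homogeneous part of f of degree 1, i.e. sum of df/dx_i(0) * x_i
d0f = sum(sp.diff(f, v).subs({x: 0, y: 0}) * v for v in (x, y))
print(d0f)  # y  -> the tangent space is the line y = 0

# Same thing via the Jacobian at p = (0, 0): a 1x2 matrix, kernel = tangent space
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]]).subs({x: 0, y: 0})
print(J.nullspace())  # spanned by (1, 0), i.e. the x-axis
```

Both computations recover the x-axis, matching the picture of the tangent space as the linear approximation of the zero set.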
We take m_p modulo m_p squared, and then the dual vector space of that. So this is the same as the set of linear maps v from m_p to k with the property that v restricted to m_p squared is 0. Now we want to see that for an affine variety this definition coincides, in a suitable sense, with the other one: at least as vector spaces, the two versions of the tangent space are isomorphic. So I have to write down some kind of isomorphism, and it is slightly clumsy. But first let me write down something else. For a morphism of affine varieties we had also defined a map of tangent spaces, in terms of the differentials of the components of the map. There is also a version of this in the more abstract setting. If phi: X → Y is a morphism of varieties and phi(p) = q, then the differential of phi at p is a map d_p phi: T_p X → T_q Y, given essentially by pushing forward. How can we do this? We assume we have such a map v from m_p to k which restricts to 0 on m_p squared, and we want to make a corresponding map for q. Recall that phi* goes from O_{Y,q} to O_{X,p}, namely by composition with phi: for an element f there, we compose it with phi. So we associate to v the map v ∘ phi*: we first apply phi*, then apply v. This is the most natural map that one can have here. But anyway, we are not going to use this very much, maybe almost not at all; I just wanted to have it for completeness.
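For reference, the two definitions just given can be written compactly; this is only the lecture's content restated in formulas.

```latex
% Intrinsic tangent space at p \in X, with \mathfrak{m}_p \subset \mathcal{O}_{X,p}
% the maximal ideal:
T_p X \;=\; \bigl(\mathfrak{m}_p/\mathfrak{m}_p^2\bigr)^{\ast}
      \;=\; \{\, v\colon \mathfrak{m}_p \to k \ \text{linear} \ \mid\ v|_{\mathfrak{m}_p^2} = 0 \,\}.

% For a morphism \varphi\colon X \to Y with \varphi(p) = q, the pullback
% \varphi^{\ast}\colon \mathcal{O}_{Y,q} \to \mathcal{O}_{X,p},\quad f \mapsto f \circ \varphi,
% induces the differential by precomposition:
d_p\varphi\colon T_p X \longrightarrow T_q Y, \qquad v \longmapsto v \circ \varphi^{\ast}.
```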
That said, when you want to do a computation, you will usually need to go to the affine case, because for concrete computations this definition is a little abstract: most of the time we don't know precisely what m_p and m_p squared are. If you have a variety which is, say, a hypersurface in some affine space, it's much simpler to work with the other definition, so you go to the affine case. But if you want to prove theorems, this one is much more useful. So now we want to prove that both definitions of the tangent space give isomorphic vector spaces. This is maybe a little bit long, and afterwards we won't refer to it much, but at least one can see how it goes. So for the moment, let X ⊆ A^n be an affine variety and p ∈ X, and write small t_p X for the old definition of the tangent space — the one given as the zero set of the differentials. I am going to show that the two t's are isomorphic. First I have to define an isomorphism between these two vector spaces, so I make a definition. If f̄ is an element of the coordinate ring A(X), and a is an element of the tangent space t_p X, I want to say what the evaluation of the differential of f̄ at a is. We define d_p f̄(a) to be d_p f(a), where f is a polynomial whose class is f̄. What do I mean by this, evaluated at a? If I write a = (a_1, ..., a_n), it's just a vector in A^n, and d_p f(a) is just the sum over i from 1 to n of ∂f/∂x_i at p, times a_i: we evaluate the linear polynomial d_p f at a. And note that this is a reasonable definition: any two representatives of the element f̄ in A(X) differ by an element of the ideal of X,
and the elements a of the tangent space are precisely those which lie in the kernel of the differential of every element of I(X). So both representatives give the same value, and the definition is fine — but only on the tangent space. In the same way we can also go to the maximal ideal. Remember p is a point of X. Now if h = f/g is in the maximal ideal at p — meaning f and g are elements of A(X), f(p) = 0 and g(p) ≠ 0 — then we define d_p h(a), for a again an element of the tangent space, to be d_p f(a) divided by g(p). One checks that this is well defined: if you write h another way as a quotient of such elements, and you know what the equivalence relation is, you get the same result for all a. You can also check that if h is an element of m_p squared, then d_p h(a) = 0 for all a. This is basically because you can then write h as a sum of products of two such elements, and then you use the Leibniz rule and find that you get 0. Why does the formula look like this? It is a special case of a formula you know from calculus, which looks quite different, so let me explain. For h in O_{X,p} I could define d_p h(a) by the usual quotient rule, which I hope I remember: d_p f(a) times g(p), minus f(p) times d_p g(a), all divided by g(p) squared. And now we use that f(p) = 0 — that is exactly the point of being in the maximal ideal: these are precisely the elements which vanish at the point. So we forget the second term in the numerator, one factor of g(p) cancels, and we are left with the formula above.
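The quotient-rule computation just described, written out in the same notation:

```latex
% For h = f/g \in \mathfrak{m}_p (so f(p) = 0, g(p) \neq 0), the general
% quotient rule would give
d_p h(a) \;=\; \frac{d_p f(a)\, g(p) \;-\; f(p)\, d_p g(a)}{g(p)^2},
% and since f(p) = 0 the second term drops out, leaving the definition used
% in the lecture:
d_p h(a) \;=\; \frac{d_p f(a)}{g(p)}.
```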
So this defines the following. For a in the old tangent space t_p X, we get an element d_a: m_p/m_p² → k, which is therefore an element of the new tangent space T_p X. And so we can define a linear map, which I call delta, from t_p X to T_p X, sending the tangent vector a to this d_a. The claim is that this is an isomorphism. So the map is basically: evaluate the differential of the function in m_p mod m_p² on the tangent vector — that is where this duality comes from. The theorem is: this map delta is an isomorphism. There is a second statement, which we will not prove; it's not so important, but for completeness: using delta to identify t_p X and T_p X, the two definitions of the differential of a morphism are identified. So for a morphism phi: X → Y, we have the old definition of d_p phi and the new definition, and the corresponding diagram commutes — that's what I mean by these words. OK, I will only prove the first part. The second is, if you want, an exercise; one just has to go through the details, and it's not so exciting. Anyway. So we have to show that this map — which I unfortunately just wiped off the board, but which you have crystal clear in your head — is an isomorphism. It is obviously a linear map, so we have to show it's injective and surjective. We prove injectivity first; that's the easy part. Remember we have X ⊆ A^n and p a point of X. Write t_i for the class of x_i minus the value of x_i at p, where x_1, ..., x_n are the coordinates on A^n. This is still a polynomial.
And the class of that in the local ring at p actually lies in m_p, because it is 0 at the point p. So we have some elements of m_p, and we can see what happens when we evaluate on them. Assume we have a tangent vector a in t_p X. Remember that the map delta sends a to d_a. If we take d_a and apply it to t_i, what do we get? We are supposed to take the differential of t_i and apply it to a. The differential of t_i is just the class of x_i, and applying that to a gives a_i. Thus, if a is in the kernel of delta, that is d_a = 0, it follows that all the a_i are 0. Simple as that. Now, surjectivity is not much more difficult. We have seen that we can pin down the tangent vector a by looking at what it does on these coordinate functions. To prove surjectivity, we have to see that if a map does the correct thing on the coordinate functions, then it is the correct tangent vector. So we want to show that m_p mod m_p² is generated as a vector space by these classes: to show surjectivity, it is enough to show that t_1, ..., t_n generate m_p mod m_p² as a vector space. I should say a little more about why, and I come back to this in a moment. What I mean is: take an element of T_p X — the tangent space in the new sense — that is, a map v from m_p mod m_p² to k. We want to get a vector out of it, and we know how: its coordinates should be what happens when we apply v to the t_i. So we let a_i = v(t_i), and we put a = (a_1, ..., a_n), and the claim is that v = d_a.
In other words — and let me call this functional v rather than delta, since using delta for it as well would be confusing — the claim is that v equals d_a = delta(a), so that the map delta is surjective. By definition we see immediately that v(t_i) = d_a(t_i). So if it is true that the t_i generate m_p mod m_p² as a vector space, then the two maps are equal, because they agree on a generating set. So one just has to see that the t_i generate, and that's quite simple. Let us look at an element f = g/h in m_p — as usual, writing it like this means g(p) = 0 and h(p) ≠ 0. Consider f minus g/h(p). The term g/h(p) is just an element of A(X), because h(p) is just a number. Putting everything over a common denominator, f − g/h(p) = g·(h(p) − h)/(h·h(p)). But the factor g is 0 at p, and the factor h(p) − h is also 0 at p, so this is an element of m_p². Thus every equivalence class in m_p mod m_p² contains an actual element of A(X): m_p mod m_p² is generated by classes of elements of A(X), and I don't have to look at quotients of polynomials at all. Now, the x_i are just the coordinates, and after the shift the t_i are still coordinates on A^n, just shifted; so k[x_1, ..., x_n] is also generated by these shifted ones, and therefore A(X) is generated as a ring by the t_i: A(X) = k[t_1, ..., t_n], meaning it consists of polynomials in the t_i. I don't say it's a polynomial ring — there may be relations between them — but it can be written like this; it's a ring generated by these.
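A compact summary of the argument so far, in the lecture's notation — this is only a restatement, not new material:

```latex
% \delta\colon t_p X \to T_p X,\ a \mapsto d_a, with t_i = \overline{x_i - x_i(p)} \in \mathfrak{m}_p.
% Injectivity:   d_a(t_i) = a_i, so d_a = 0 forces a = 0.
% Surjectivity:  given v\colon \mathfrak{m}_p/\mathfrak{m}_p^2 \to k, set
a_i := v(t_i), \qquad a := (a_1,\dots,a_n),
% and check v = d_a: both sides agree on the t_i, and the classes of
% t_1,\dots,t_n generate \mathfrak{m}_p/\mathfrak{m}_p^2, because for
% f = g/h \in \mathfrak{m}_p we have
f - \frac{g}{h(p)} \;=\; \frac{g\,\bigl(h(p)-h\bigr)}{h\,h(p)} \;\in\; \mathfrak{m}_p^2,
% so every class comes from A(X) = k[t_1,\dots,t_n], where monomials of
% degree \geq 2 in the t_i already lie in \mathfrak{m}_p^2.
```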
And a monomial of degree at least 2 in the t_i will lie in m_p squared, because the t_i lie in m_p — it's the maximal ideal — so if I multiply two or more of them together, the product lies in m_p squared. Therefore m_p modulo m_p squared is generated as a vector space by the t_i, and this proves the theorem. Maybe it's not so extremely illuminating, but at any rate we find that these two definitions of the tangent space agree. As I said, the old definition has the advantage of being quite concrete: you can view it as taking a linear approximation of your variety, just taking the zero set of the first-order parts of your polynomials. That is very concrete. On the other side, the new definition will turn out to be quite useful, also because it's very similar to how you view things in differential geometry. There it is very reasonable to view a tangent vector as something which acts on functions. For instance, one classical, more elementary way of doing it in differential geometry is to look at all curves through the point and take the tangent vectors to the curves — the derivatives along the direction of the curve. I don't know which definition your differential geometry course uses, but anyway, there are several versions of defining tangent vectors, and many of them involve something like this. So, to say it one last time: a tangent vector is, in some sense, the same as the operation of taking the derivative along that vector. Here we have the tangent vector in the old definition as a vector in some vector space in which the variety sits, and the new definition translates it into taking the derivative of regular functions along the direction of the tangent vector. These are equivalent viewpoints on the same thing. OK, so maybe that's enough now; we want to do a few more things.
First, we want to generalize a result that we had in the beginning. We had proven that for a hypersurface, the non-singular points form an open dense subset, and now I want to prove that this is true for any variety: the non-singular points form an open dense subset, so every variety is almost everywhere smooth. So this is the following theorem. Let X be a variety. Then X_reg, the set of all non-singular points of X, is an open dense subset of X. Along the way we prove another statement: for all points p in X, the dimension of the tangent space of X at p is at least the dimension of X. Recall that in the definition of non-singularity, a point of a variety is non-singular if the dimension of the tangent space equals the dimension of the variety; a singular point is one where they are not equal. The statement here is that if they are not equal, the tangent space dimension must be bigger: the dimension of the tangent space can never be too small, only too large. Actually, the two proofs get somewhat mixed up along the way. We will reduce to the affine case and then use the old definition of the tangent space. X has an open cover by affine varieties, and it's clear that the theorem is true if it is true for each open set in the cover. Here we claim that the regular locus of X is an open dense subset of X; if we have an open cover of X, and for each open set in the cover the intersection of that open set with the regular locus is open and dense in that open set, then X_reg is open and dense in X — you can check such things on an open cover. And the second statement is also local: it just says something about every point of X.
The dimension of the tangent space of X at p is the same as the dimension of the tangent space at p of an open subset of X containing p, because it only depends on the maximal ideal at p. So in order to check the theorem, it's enough to check it for each open set in an open cover, and we take an open affine cover. Therefore we can assume that X ⊆ A^n is a closed subvariety: we replace the elements of the open affine cover by isomorphic closed subvarieties, and the statement is obviously invariant under isomorphism, so we may as well assume this. So we have immediately gotten rid of the general setting, and we are again just talking about affine varieties. Let's say the ideal of X is generated by some elements f_1, ..., f_r, where the f_i are polynomials. Then we know — let me just write it down first — that the dimension of T_p X equals n minus the rank of the Jacobian J(f_1, ..., f_r) at p. Recall that T_p X is the kernel of the Jacobian as a map from k^n to k^r, so the dimension of the kernel is n minus the rank. The Jacobian is a matrix of polynomials, which we evaluate at p. Now, the locus where this matrix has rank at most some bound is the zero set of certain minors of the matrix — the determinants of all the submatrices of a certain size. Thus, for all d, the set X_d, which I define to be the set of all p in X such that the dimension of T_p X is at least d, is the zero set of certain minors of this matrix, so it is closed in X for all d. So for every d I get a closed subset X_d. And by the way, it's clear from the definition that X_d contains X_{d+1}, which contains X_{d+2}, and so on — they get smaller.
They might all be equal, but at least the bigger d is, the more conditions you have. So we choose the largest d such that X_d is still equal to X. For instance, if we put d = 0, the dimension of the tangent space is certainly at least 0, so X_d is certainly equal to X; if we put d larger than n, the dimension of A^n, then X_d is empty, as the tangent space is a subspace of A^n. So there is a last time when X_d equals X. We take that d, and we put X^0 to be the difference X minus X_{d+1}. By definition this is not empty. And it's open, because X_{d+1} is closed and X^0 is the complement of this closed subset. And it's dense, because it's a non-empty open subset and X is irreducible. So then we know: the dimension of the tangent space of X at p is at least d for all p in X, because we have chosen d to be the largest one such that X_d is still all of X; and the dimension of T_p X is exactly d for all p in X^0, which is an open dense subset. So this means we prove the theorem, both part 1 and part 2, if we prove that d equals the dimension of X. Because then it says precisely: for all points of X, the dimension of T_p X is at least the dimension of X, which is part 2; and there is an open dense subset where the dimension of the tangent space equals the dimension of X — meaning an open dense subset where X is smooth — which is part 1. So we only have to show this. Now we come back to a theorem that we proved
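The closed sets X_d can be computed quite concretely in small cases. Here is a sympy sketch; the cuspidal cubic y² = x³ is my own example, not one from the lecture.

```python
# Sketch: dim T_p X = n - rank J(p) for X = Z(y^2 - x^3) in A^2
# (example curve is my choice; here n = 2 and dim X = 1).
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**3
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])  # [-3x^2, 2y]

def tangent_dim(p):
    """Dimension of the tangent space at p: n minus the rank of J at p."""
    Jp = J.subs({x: p[0], y: p[1]})
    return 2 - Jp.rank()

print(tangent_dim((1, 1)))  # 1 = dim X  -> non-singular point
print(tangent_dim((0, 0)))  # 2 > dim X  -> the cusp at the origin is singular
```

The stratum X_2 here is exactly the vanishing locus of both 1×1 minors −3x² and 2y on the curve, i.e. the single point at the origin, illustrating why each X_d is closed.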
the other time, to prove that two different definitions of dimension agree: namely, that every variety is birational to a hypersurface in affine space of dimension dim X + 1. So X is birational to a hypersurface Y in A^(dim X + 1); this is something we proved. Birational means there are open subsets which are isomorphic. So let U ⊆ X be a non-empty open subset which, under this birational map, is isomorphic to an open subset of the regular locus of Y. To spell this out: X birational to the hypersurface Y means there is a non-empty open subset Ũ of X isomorphic to an open subset of Y. But for a hypersurface we have proven that the regular locus of Y is a non-empty open subset. So we intersect the open subset of Y with the regular locus; we still have a non-empty open subset, and we can find an open subset U of X isomorphic to it. Now, Y has the same dimension as X, so at a non-singular point p of Y the dimension of the tangent space is dim Y = dim X. Hence the dimension of T_p X equals dim X for all points p in U. But now, U is a non-empty open subset of X, and X^0 is a non-empty open subset of X, and X is irreducible, so any two non-empty open subsets intersect: U ∩ X^0 is non-empty. So let p be an element of U ∩ X^0. On the one hand, since p is in U, dim T_p X equals dim X; on the other hand, since p is in X^0, it equals d. So obviously it follows that d = dim X, and that proves it.
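The concluding comparison, displayed in one line (writing X^0 for the open set chosen in the rank argument above):

```latex
% Pick p \in U \cap X^0, non-empty since X is irreducible.  Then
d \;\overset{p \in X^0}{=}\; \dim T_p X \;\overset{p \in U}{=}\; \dim Y \;=\; \dim X,
% which simultaneously proves part 2 (\dim T_pX \geq \dim X everywhere) and
% part 1 (equality on the open dense subset X^0 = X_{\mathrm{reg}}).
```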
So, you know — OK. In some sense, if you think about it, the proof is not so very complicated; you just have this trick of reducing to the hypersurface case. So we know now that every variety has an open dense subset which is non-singular. I want to make one last statement about this, about a slightly simpler way to check non-singularity of a variety. I will keep it brief because it's not so exceedingly exciting. If X ⊆ A^n is an affine variety and I(X) is generated by f_1, ..., f_r, then p in X is non-singular if and only if the rank of the Jacobian J(f_1, ..., f_r) at p is at least n minus the dimension of X. This is clear because, by definition, the dimension of the tangent space is n minus this rank, so by definition p is non-singular if and only if we have equality here. But by what we have just proven, the opposite inequality always holds: the rank is always at most n − dim X. So if the rank is at least n − dim X, they are equal. The second statement is the same thing in the projective case. So suppose the homogeneous ideal of a projective variety X is generated by homogeneous polynomials f_1, ..., f_r in k[x_0, ..., x_n]. First I have to define the Jacobian in this case: I just take the same Jacobian, but depending on all n + 1 variables — the matrix with rows ∂f_1/∂x_0, ..., ∂f_1/∂x_n down to ∂f_r/∂x_0, ..., ∂f_r/∂x_n. Then p in X is non-singular if and only if the rank of this Jacobian at p is at least n minus the dimension of X — the same statement. It's slightly more complicated if you think about it, because here we are actually in the projective case
and we have to somehow reduce to the affine case. You can assume that the point p lies in the open subset U_0, so the first coordinate is 1, and then you can express this Jacobian in terms of the Jacobian of the dehomogenized polynomials. But it's still a little bit tricky, because you now have one more column: this matrix is slightly larger than what you would think, so you could worry that the rank of the matrix might be too large. Well, since I've said this much, let me explain it properly. We can assume that p is of the form (1 : a_1 : ... : a_n), with a = (a_1, ..., a_n) the corresponding point in A^n. Writing F_i for the homogeneous generators, we put f_i(x_1, ..., x_n) to be F_i(1, x_1, ..., x_n) — so we dehomogenize. Then we know by definition that p is non-singular as a point of X if and only if it is non-singular as a point of the intersection of X with A^n; that means, if and only if a = (a_1, ..., a_n) is a non-singular point of the zero set of the f_i, which is just X ∩ A^n — identifying the point with first coordinate 1 with the vector (a_1, ..., a_n) in A^n. And if you look at it, ∂F_i/∂x_j for j from 1 to n at the point (1, a_1, ..., a_n) is the same as ∂f_i/∂x_j at the point (a_1, ..., a_n); if you want to compute it, it is the same thing, in a very strong sense. So J(F_1, ..., F_r) at our point p consists of a first column ∂F_1/∂x_0, ..., ∂F_r/∂x_0 evaluated at p, and the rest is just J(f_1, ..., f_r) at a — the matrix looks like that. By definition, our point a is a non-singular point of X ∩ A^n if and only if the rank of J(f_1, ..., f_r)(a) is at least n minus the dimension of X. Now we want to claim that the same is true for this bigger matrix, so we have to see that the rank does not get larger when we add the first column. That means we want to show that the first column is a linear combination of the other columns.
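Displayed as a block matrix — my own layout of what was just described, with F_i the homogeneous generators and f_i their dehomogenizations:

```latex
% At p = (1 : a_1 : \dots : a_n), with f_i(x_1,\dots,x_n) := F_i(1,x_1,\dots,x_n):
J(F_1,\dots,F_r)(p) \;=\;
\left(\begin{array}{c|c}
\partial F_1/\partial x_0\,(p) & \\
\vdots & J(f_1,\dots,f_r)(a) \\
\partial F_r/\partial x_0\,(p) &
\end{array}\right),
% and the claim is that the extra first column is a linear combination of the
% remaining columns, so the two matrices have the same rank.
```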
And this follows from Euler's formula. I actually didn't want to explain it, but since I started, I can't leave it like that. So the F_i are homogeneous of some degree, say d_i. And there is the well-known Euler formula, which says that the sum over j from 0 to n of x_j · ∂F_i/∂x_j equals d_i times F_i. The proof of this formula is a trivial calculation: when you take the partial derivative of a monomial with respect to x_j, the exponent comes down and you lose one x_j; multiplying back by x_j and summing over j, the factor you collect is the total degree d_i. Anyway, that is the statement. Therefore ∂F_i/∂x_0 at the point p can be expressed as follows. Remember that p = (1 : a_1 : ... : a_n); I put the coordinates of p in for the x_j. The point is that p is a point of X, so the F_i are all 0 at p; hence the whole Euler sum at p is 0, and bringing the first term to the other side — this is where the minus sign comes from — we get that ∂F_i/∂x_0 at p equals minus the sum over j from 1 to n of a_j · ∂F_i/∂x_j at p. In other words, the first column is a linear combination of the other columns. OK, so this was just the criterion to check whether a point is non-singular. Let me just check whether I forgot something. Yes? The Euler formula — well, it's precisely this formula: if you have a homogeneous polynomial of degree d, and you take the sum of the partial derivatives multiplied by the corresponding variables, you get the degree times the polynomial. That's what I call the Euler formula. And if you have some doubts about it, it's an exercise — something one usually does as an exercise in the second year of analysis, or even in the first year; it's a formal manipulation. Oh, there are many Euler formulas. Euler might be the mathematician who published the most, and he was very much into formulas.
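Euler's formula is easy to check symbolically; the homogeneous polynomial below is my own example.

```python
# Sketch: verify Euler's formula  sum_j x_j * dF/dx_j = d * F
# for a homogeneous polynomial F of degree d (example F is my choice, d = 3).
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
F = x0**3 + 2*x0*x1*x2 - x2**3  # homogeneous of degree 3

lhs = sum(v * sp.diff(F, v) for v in (x0, x1, x2))
print(sp.expand(lhs - 3*F))  # 0  -> Euler's formula holds for this F
```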
I mean, in principle, if you say Euler formula, I can think of at least a handful of things which I would call an Euler formula; this is maybe the easiest thing that has that name. I think you could fill a whole shelf with his collected works — I'm not even sure whether everything is in there. Much of it is computations; it's not that he had to sit down for months thinking about some deep property, often it's really just calculations. He was in Saint Petersburg, at a place which came with its own press to publish things. So he would just write his notes, and whenever the pile of notes was high enough, he would take them to the printer. He was obviously very brilliant, could work very fast, and was very good with formulas. OK. So now we come to the next paragraph — let me first see whether I want to say anything more. No, I think that's fine. Now I want to talk about non-singular points of curves, the special case of curves, and see that at a non-singular point of a curve, the local ring has a special property: it is a so-called discrete valuation ring. This basically means that for every regular function on the curve, you can say to which order it vanishes at this point — that is a number you can associate to the function, and it turns out to be useful. Eventually you can do it for every rational function. It's like in complex analysis: you look at meromorphic functions on the plane, and they have some order of pole or some order of zero at a point, depending on whether they are holomorphic there. The same kind of order also exists for rational functions on curves.
And this turns out to be quite useful. Actually, in the last chapter of my notes — which obviously I didn't really plan to do in the course, because there is only enough time if people ask me to give some extra lectures, and I do not ask you to ask — I give some consequences of that. But I think you have quite enough exams now, so you're maybe not too eager for this. I will just set it up: how it starts, what the local ring of a curve looks like, and some first consequences. So this is about non-singular curves. In this section, a curve is supposed to mean just a variety of dimension 1, and a non-singular curve is one that is non-singular as a variety; we want to study non-singular points of curves C. As I said, the special fact we will find is that the local ring at such a point is a so-called discrete valuation ring. First, I have to again talk a little bit about modules. Recall the definition of a module over a ring, say A. As you remember, the axioms for a module over a ring are precisely the same as the axioms for a vector space over a field, only that we replace the field by a ring. So A is a ring, M is an abelian group, and you have an action A × M → M which satisfies the usual axioms, which I will write down again. The convention: if something is called a, it lies in A; if something is called m, it lies in M. Associativity: a_1(a_2 m) = (a_1 a_2)m. Then we have two distributive laws: a(m_1 + m_2) = a m_1 + a m_2 — this is additivity in M — and (a_1 + a_2)m = a_1 m + a_2 m.
And finally, we have the obvious thing that the identity element of the ring should act as the identity on M: 1 m = m. So these are the axioms, and M is then called an A-module if this holds for all a, a_1, a_2 in A and all m, m_1, m_2 in M, as usual. We also talked about the module generated by some elements. If S is a subset of M, then the A-module generated by S, which I call <S>, is the set of all linear combinations of elements of S with coefficients in A, so the set of all a_1 s_1 + ... + a_n s_n for some n, where the a_i are elements of A and the s_i are elements of S. Linear combinations are always finite, but S could be an infinite set; it is just like for vector spaces. And M is called a finitely generated module, as we had before, if it can be generated by a finite set. We had also recalled that if I is an ideal in A, then I is by definition an A-module in the obvious way, by multiplication in A, and its generators as an ideal are the same as its generators as a module. Finally, a piece of notation: if M is a module and I is an ideal in A, we write IM for the set of all finite sums of products b m with b in I and m in M. It follows from the fact that I is an ideal that this is a submodule of M, and obviously IM is contained in M. Now I want to at least state the first result that we need, which is the lemma of Nakayama, probably the most well-known result about modules over local rings, one that is used in many contexts. I will do it in two parts; first the general setup. Let A be a local ring with maximal ideal m in A, and let M be a finitely generated A-module. Now, for these products, we have said that by definition, if I multiply with elements of A, I still land in M.
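In symbols, the generated submodule and the product IM just defined are (standard notation matching the verbal definitions above, with finite sums in both cases):

```latex
% The submodule of M generated by a subset S, and the product IM
% for an ideal I of A:
\[
  \langle S \rangle
    = \Bigl\{\, a_1 s_1 + \dots + a_n s_n \;\Bigm|\;
        n \in \mathbb{N},\ a_i \in A,\ s_i \in S \,\Bigr\},
\]
\[
  IM = \Bigl\{\, b_1 m_1 + \dots + b_n m_n \;\Bigm|\;
        b_i \in I,\ m_i \in M \,\Bigr\} \;\subseteq\; M .
\]
```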
So I know that IM is contained in M, because I is a subset of A. But one could imagine that IM is actually equal to M, that it doesn't get smaller. So do this for the maximal ideal: can it happen that M is equal to m M? The answer is no: if this is true, then M is 0. The second statement is somehow a corollary. Since A is a local ring with maximal ideal m, the quotient A/m is a field; write k = A/m. So assume we have some elements f_1, ..., f_r in M such that the classes of f_1, ..., f_r generate the quotient M/mM as a k-vector space. Note that this quotient really is a k-vector space: I can multiply with elements of A, but if the element of A lies in the maximal ideal, the product lies in mM, so we get an action of A/m, which is k. The statement is then: f_1, ..., f_r generate M as an A-module. So in order to check that we have a set of generators of this module, we only have to check to first order, so to speak: we only have to check on M/mM, which is much easier. OK. But I have to warn you, the assumption that M is a finitely generated A-module comes first. If I have a module over a local ring A of which I don't know that it is finitely generated, then even if I find finitely many elements whose classes generate this quotient, I do not know that they generate it as an A-module; it might not even be finitely generated. I have to assume beforehand that it is finitely generated. Maybe I can still prove the first part, which is relatively simple. We argue indirectly. Assume that M is not the zero module. We know it is finitely generated, so let u_1, ..., u_r be a minimal set of generators of M as an A-module; we know there is a finite generating set.
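For reference, the two parts of Nakayama's lemma as set up above, in one display:

```latex
% Nakayama's lemma.  A a local ring with maximal ideal m,
% M a finitely generated A-module, k = A/m the residue field.
% (1) If M = mM, then M = 0.
% (2) If the classes of f_1, ..., f_r generate M/mM as a k-vector
%     space, then f_1, ..., f_r generate M as an A-module.
\[
  M = \mathfrak{m} M \;\Longrightarrow\; M = 0,
  \qquad
  \overline{f_1},\dots,\overline{f_r} \text{ generate } M/\mathfrak{m}M
  \;\Longrightarrow\;
  f_1,\dots,f_r \text{ generate } M .
\]
```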
There must then be a generating set with the smallest possible number of elements, and we want to bring this to a contradiction: we will find a set of generators which is one smaller. We note that the last generator u_r is an element of M, which is equal to mM according to our assumption. That is, we can write u_r as a linear combination of the u_i with coefficients in m, say u_r = sum_{i=1}^{r} m_i u_i with the m_i in m. (Indeed, any element of mM is a sum of products of elements of m with elements of M; writing each element of M as a combination of the u_i with coefficients in A and multiplying, the coefficients land in m, because m is an ideal.) Now I can bring the i = r term to the other side, so I get (1 - m_r) u_r = sum_{i=1}^{r-1} m_i u_i. That's simple enough. But the point now is that the element 1 - m_r is a unit. Why is that? In a local ring, every element is either a unit or it lies in the maximal ideal, because all the non-units in a local ring form the maximal ideal. So if 1 - m_r were not a unit, it would lie in m. But then 1 = (1 - m_r) + m_r would be a sum of two elements of the maximal ideal, so 1 would also lie in the maximal ideal, which is obviously a contradiction: 1 is certainly a unit. So 1 - m_r is a unit, and we can divide by it: u_r = (1 - m_r)^{-1} sum_{i=1}^{r-1} m_i u_i. So now we see we have written u_r as a linear combination of the other u_i. Thus already u_1, ..., u_{r-1} generate M, which is a contradiction: we started with a minimal set of generators and found one which contains one element less, and that is impossible. This proves the first part. The second part we will do next time, and then we will see why this should help us with our business. I think we meet again on Monday.
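The computation in the proof just given, written out:

```latex
% Proof of part (1), in formulas.  u_1, ..., u_r is a minimal
% generating set, and u_r in M = mM gives coefficients m_i in m:
\[
  u_r = \sum_{i=1}^{r} m_i u_i
  \;\Longrightarrow\;
  (1 - m_r)\, u_r = \sum_{i=1}^{r-1} m_i u_i .
\]
% Here 1 - m_r is a unit: otherwise 1 - m_r would lie in m, and then
% 1 = (1 - m_r) + m_r would lie in m, a contradiction.  Hence
\[
  u_r = (1 - m_r)^{-1} \sum_{i=1}^{r-1} m_i u_i ,
\]
% so u_1, ..., u_{r-1} already generate M, contradicting minimality.
```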
I also hope that I can finally correct your exercises; I was a bit busy recently with other things. OK.