So, we were talking about dimension. As I said, we want to study dimension using morphisms, now that we know a little bit about them, and the tool we will use most is the so-called finite morphism. In a moment I will define what a finite morphism is; the definition is somewhat algebraic, but one property finite morphisms have, which one has to prove, is that their fibers are finite: if f: X → Y is finite, then f^(-1)(y) is a finite set for every y in Y. So we would expect finite morphisms to preserve dimension, and we will show that they do: if f: X → Y is a surjective finite morphism, then dim X = dim Y. We can use this to study dimension: if you want to prove that two varieties have the same dimension, one way is to produce a finite surjective morphism from one to the other, and then it is just a question of finding such morphisms whenever one needs them. But for this to make sense, I first have to tell you what a finite morphism is, because the finiteness of the fibers is not the defining property — the definition is a much more algebraic concept. A finite morphism of affine varieties will be a morphism f: X → Y such that, if we take the pullback f*: A(Y) → A(X), which is given by composing with f, then A(X) is a finite f*(A(Y))-algebra.
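Before the algebraic definition, here is a small sketch of the finite-fibers behaviour described above, in my own assumed toy example f(t) = t^2 (not one from the lecture):

```python
import sympy as sp

# Toy example (my own, not from the lecture): f: A^1 -> A^1, f(t) = t^2.
# The pullback f* embeds A(Y) = k[y] into A(X) = k[t] as the subring k[t^2],
# and A(X) = 1*k[t^2] + t*k[t^2], so A(X) is a finite f*(A(Y))-algebra:
# f is a finite (and surjective) morphism.  Its fibers are finite sets:
t = sp.Symbol('t')
for y0 in [0, 1, 4]:
    fiber = sp.solve(t**2 - y0, t)   # f^(-1)(y0) = zeros of t^2 - y0
    print(y0, fiber)
```

Each fiber has at most two points, matching the degree-2 module generators 1, t.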
Now, I will have to explain what this means. We have the pullback — not f itself, but f* — a map in the opposite direction, f*: A(Y) → A(X). The image of this pullback is some subring of A(X), and what we want is that A(X) is not very much bigger than this image; I have to explain in what sense. For this to be a definition, I have to tell you what it means for one k-algebra to be finite over another, and that is what I do now, to finish.

So let A and B be k-algebras — rings which contain k as a subring, and obviously k should be the same in both — and let A be a subalgebra of B, that is, a subring compatible with the k-algebra structure. We say B is finite over A if, in some sense, B is like a finite-dimensional vector space over A: if there exist finitely many elements b_1, ..., b_n in B such that B = b_1 A + ... + b_n A, by which I mean that every element of B is a linear combination of the b_i with coefficients in A.

This is related to the notion of a module. It looks like having a finite-dimensional vector space, except that we are not over a field; the analogue of a vector space over a ring is a module, and "B is a finite A-algebra" will turn out to be equivalent to "B is a finitely generated A-module". Let me briefly recall what a module is — we do not strictly need it, but it puts things into context. Let R be a ring. An abelian group B together with a multiplication operation R × B → B is called an R-module if, in a suitable sense, it fulfils the axioms of a vector space over R, only that R need not be a field. That is, for all r, r_1, r_2 in R and all b, b_1, b_2 in B, the following hold. First, (r_1 r_2) b = r_1 (r_2 b): multiplying first in the ring and then acting on an element of B is the same as acting twice — this is associativity. Second, distributivity: r (b_1 + b_2) = r b_1 + r b_2, and likewise for a sum of elements of the ring, (r_1 + r_2) b = r_1 b + r_2 b. And finally — remember that for us a ring always contains an element 1 — we want 1 to act as the identity: 1 · b = b. As I said, if instead of a ring we let R be a field and write down the same axioms, this says that B is a vector space; a module is just the precise analogue of a vector space. An R-module B is called finitely generated if every element can be written as a linear combination of finitely many fixed elements: if there exist b_1, ..., b_n in B such that B = b_1 R + ... + b_n R. This corresponds to a finite-dimensional vector space.

Some examples of modules. First, if R is a ring and I ⊂ R is an ideal, then I is an R-module in the obvious way, via multiplication by elements of R: the fact that I is an ideal means precisely that R · I ⊂ I, and the other axioms hold because R itself is a ring with 1 — they hold for all elements of R, in particular when half of them come from I. We can, however, also divide by the ideal: if I ⊂ R is an ideal and we put A = R/I, then A is also an R-module, again in the obvious way — we multiply an element a of A by an element r of R by multiplying with the class of r, that is, r · a := r̄ · a, where r̄ is the class of r in A. We are basically working in a ring, so the axioms evidently hold. Finally it also works for subrings: if A ⊂ B is a subring — by which I mean that A and B are both rings in the sense of our definition, so both contain 1 — then B is an A-module via the multiplication in B: I can certainly multiply any element of B by any element of A, because A is contained in B, and the axioms hold for all elements of B, so they certainly hold when half of the elements lie in A.

And what is now the connection with finiteness? Let us make a statement: if A ⊂ B are k-algebras, then by what I just wrote, B is an A-module, and B is by definition a finite A-algebra if and only if B is a finitely generated A-module — if you look at the two definitions, they are just the same. So, put into the general language of modules, a finite A-algebra is just an A-algebra which is finitely generated as an A-module.

Now, for later use, I want to prove a few properties of this, but first — either to avoid confusion, or maybe to confuse you instead — let me relate this to another concept, which we are anyway going to use in a moment: the notion of a finitely generated A-algebra, which is different. Remark: again let A ⊂ B be rings, or k-algebras if you want. For b_1, ..., b_n in B, we denote by A[b_1, ..., b_n] the smallest A-subalgebra of B containing these elements; concretely, A[b_1, ..., b_n] = { g(b_1, ..., b_n) : g a polynomial in A[x_1, ..., x_n] } — note that the coefficients come from A; that is how A plays a role. So this is the subset of B obtained by taking any polynomial in n variables with coefficients in A and substituting the b_i for the variables, and it is called the A-algebra generated by b_1, ..., b_n. Then B is called a finitely generated A-algebra if there exist b_1, ..., b_n in B such that B = A[b_1, ..., b_n]. This is another way in which something can be finite, but it is a much weaker condition: clearly, if B is finite over A, then B is a finitely generated A-algebra, but the converse is false. As a simple example, the polynomial ring k[x] is a finitely generated k-algebra — generated by the single element x — but it is certainly not finite over k, because being finite would mean being a finite-dimensional vector space over k, which it certainly is not.

Now we come to a few first properties of this notion — some kind of transitivity — in the following proposition, which takes almost longer to state than to prove.

Proposition. Let A ⊂ B ⊂ C be k-algebras, so B is a subalgebra of C and A is a subalgebra of B. Then:
(1)(a) If B is finite over A and C is finite over B, then C is finite over A.
(1)(b) If C is finite over A, then C is finite over B.
(2) Let B be a finite A-algebra. Then every element x in B satisfies a monic equation over A: for some n, x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = 0 with the a_i in A. The leading coefficient is 1 — that is what makes the equation monic.
(3) Conversely, assume b in B satisfies a monic equation over A, say b^n + a_(n-1) b^(n-1) + ... + a_0 = 0 with the a_i in A. Then the A-algebra A[b] generated by b is finite over A.

Proof. The first statement is really trivial, directly from the definition. Assume B is finite over A and C is finite over B, so B = b_1 A + ... + b_n A for some b_i in B, and C = c_1 B + ... + c_m B: every element of C is a B-linear combination of the c_j, and every element of B an A-linear combination of the b_i. Combining the two, C = Σ_(i=1..n, j=1..m) b_i c_j A: every element of C is a linear combination of the products b_i c_j with coefficients in A, which is kind of obvious. Part (1)(b) is even simpler: we have C = c_1 A + ... + c_m A, and since A is contained in B, any A-linear combination can be read as the same linear combination with coefficients viewed in B, so C = c_1 B + ... + c_m B. So that was trivial.

Now we come to part (2), and that is a bit more interesting. In fact it is enough harder that we will not prove it as stated: our proof needs a trick which does not work in this generality, so we prove it under an extra assumption. So: let B be a finite A-algebra and assume B is an integral domain; then every element x in B satisfies a monic equation over A. It is not yet clear how one might want to use this extra assumption, but the point is that an integral domain has a quotient field, so a statement about modules can be turned into a statement about vector spaces — where you learned in your first year of university how to do everything — and we want to reduce to that. In fact we will use the determinant. If D = (d_ij), with i and j running from 1 to n, is an n × n matrix with coefficients in some ring R, we can define its determinant. In linear algebra you have several definitions of the determinant; one of them is just an explicit formula, and that is what we use. The determinant of D is the sum over all permutations σ in the symmetric group S_n on n letters of the sign of σ times the corresponding product of entries:

det D = Σ_(σ in S_n) sgn(σ) · d_(1 σ(1)) · d_(2 σ(2)) · ... · d_(n σ(n)).

I think you have seen this formula: you take all possible ways of picking one entry from each row, in the columns prescribed by the permutation, and sum them all up with a sign, which is (-1) to the number of transpositions needed to write the permutation. This is one way to define the determinant. So if, for instance, x is an element of some ring B containing R, we can write down det(x δ_ij − d_ij), with i, j from 1 to n — an abuse of notation, but anyway. Here δ_ij is the Kronecker delta, which is 1 if i = j and 0 if i ≠ j, so the matrix (δ_ij) is just the identity matrix, and we take the determinant of x times the identity minus D. What is this determinant?
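As a quick sanity check of this permutation formula — my own illustration in SymPy, with a matrix of symbols d_ij assumed purely for the check:

```python
import sympy as sp
from itertools import permutations
from functools import reduce
from operator import mul

# Check the permutation formula det(D) = sum_sigma sgn(sigma) * d_{1,sigma(1)} * ... * d_{n,sigma(n)}
def sgn(sigma):
    # sign of a permutation = (-1)^(number of inversions)
    n = len(sigma)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j])
    return (-1) ** inv

def perm_det(D):
    n = D.shape[0]
    return sp.expand(sum(sgn(s) * reduce(mul, (D[i, s[i]] for i in range(n)), sp.Integer(1))
                         for s in permutations(range(n))))

D = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'd{i+1}{j+1}'))
assert perm_det(D) == sp.expand(D.det())   # agrees with the built-in determinant

# And det(x*delta_ij - d_ij) is a monic polynomial of degree n in x,
# with the remaining coefficients in the ring generated by the d_ij:
x = sp.Symbol('x')
p = sp.Poly((x * sp.eye(3) - D).det(), x)
assert p.degree() == 3 and p.LC() == 1
```

The second assertion anticipates the observation made next in the lecture: only the diagonal term contributes the top power of x.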
Obviously you do not know precisely, because you do not know who the d_ij are, but you see that there is only one term in the whole story which picks up an x from every factor, namely the diagonal term (x δ_11 − d_11)(x δ_22 − d_22) ··· (x δ_nn − d_nn); all the other terms contain x less often. If we take this diagonal term and expand it, it starts with x^n, so we get

det(x δ_ij − d_ij) = x^n + Σ_(i=0..n-1) r_i x^i,

where the r_i are elements of R. So if we write down such a determinant in x, it is a monic polynomial. (Here I have called the ring R and its elements r_i; in the notes the coefficients may be called a_i, but of course I can call them what I want — it is the same thing.) So we will have to see that the monic polynomial we finally want arises as such a determinant. [A student asks about eigenvalues.] You are right that a solution of det(x δ_ij − d_ij) = 0 would be an eigenvalue — this is exactly how one determines the eigenvalues of a matrix — but it turns out we are not interested in that question at the moment; we just argue abstractly, although whatever is happening here does have something to do with eigenvalues of some matrix. So we will have to produce such a determinant which is zero, in order to get our monic equation.

How do we do this? I want to keep the statement in view, so I write on this side: B is a finite A-algebra, so we can write B = Σ_(i=1..n) A b_i with b_i in B. Take our element x in B; then x b_i is also an element of B, so we can write it as such a linear combination:

x b_i = Σ_(j=1..n) a_ij b_j, with the a_ij in A

(the inner index is j, since i is already in use; the a_ij form a matrix of elements of A). Bringing everything to one side,

0 = Σ_(j=1..n) (x δ_ij − a_ij) b_j for each i,

because Σ_j x δ_ij b_j is just x b_i, and the other term is what we had before — it is the same equation, just rearranged. But what does this mean? It means that the vector (b_1, ..., b_n) is an element of the kernel of the matrix M = (x δ_ij − a_ij), i, j = 1, ..., n: applying M to this vector, row by row, produces exactly the sums above, and they are all zero. Moreover this is a non-zero element of the kernel: the b_i generate B, which contains 1, so they cannot all be zero. So we have a matrix with a non-trivial element in its kernel. Now, the b_i are elements of B, but B is an integral domain, so it is a subring of its quotient field Q(B), and I can view the b_i as elements of Q(B) — elements of a field. Then we are doing ordinary linear algebra, and it tells us that a matrix over a field with a non-trivial element in its kernel does not have full rank, so its determinant is zero. Thus, viewing the b_i as elements of the quotient field Q(B), we get det M = 0 — and det M = 0 is precisely our monic equation for x, by what we said about determinants of the form det(x δ_ij − a_ij). So that was the slightly tricky part.

The last part is again basically trivial, because (3) is easy. Assume b satisfies a monic equation over A, say b^n + a_(n-1) b^(n-1) + ... + a_0 = 0 with the a_i in A. Then I can always bring b^n to the other side, so every power of b with exponent greater than or equal to n can be expressed as an A-linear combination of 1, b, ..., b^(n-1). That means A[b] = A + A b + ... + A b^(n-1), and we see it is finite over A. So much for this little proposition.

That is as much as I wanted to say for the moment about one algebra being finite over another. Now we want to apply this to morphisms — to say what a finite morphism is — and then use what we have learned so far. I only do this for morphisms of affine varieties.

Definition. Let X and Y be affine varieties — if you want, closed subvarieties of some affine spaces. A morphism φ: X → Y is called finite if A(X) is a finite φ*(A(Y))-algebra. We know that we have the corresponding pullback φ*: A(Y) → A(X), so φ*(A(Y)) is a subring of A(X), and it makes sense to ask that A(X) be finite over it.

This is just a special case of a general definition: one can define finiteness for any morphism of varieties, but that is a bit more complicated to work with, and we can deal just with the case of affine varieties. In general, a morphism φ: X → Y of varieties is called finite if it is locally like the affine case: if and only if Y has an open affine cover Y = U_1 ∪ ... ∪ U_n such that the inverse images φ^(-1)(U_i) are also affine for all i, and each φ: φ^(-1)(U_i) → U_i is a finite morphism of affine varieties. This is the general notion; it works very well, but you see that, because of the business with the cover, it is a bit more difficult to use, so we will avoid it.

Now some more practical things. First: if Y ⊂ X is a closed subvariety of an affine variety — one affine variety contained in another — then the inclusion Y → X is a finite morphism; the embedding of a closed subvariety into a bigger variety is finite. That is clear because in this case the pullback is surjective: just thinking of the definitions, A(X) = k[x_1, ..., x_n]/I(X) and A(Y) = k[x_1, ..., x_n]/I(Y), and since Y ⊂ X we have I(X) ⊂ I(Y), so the pullback is the canonical projection, sending every element to its class — in particular it is certainly surjective, and A(Y) is then generated over the image by the single element 1.

The third statement is a general observation about compositions which we will use about a hundred times. Let φ: X → Y and ψ: Y → Z be morphisms of affine varieties. Then: (a) if φ and ψ are both finite, so is the composition ψ ∘ φ; and (b) if the composition ψ ∘ φ is finite, then the first map φ is finite — I hope I got it the right way round.
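The closed-embedding fact above can be made concrete in a small computation; this is my own illustration, with the parabola Y = Z(y − x^2) inside the affine plane as an assumed example:

```python
import sympy as sp

# My own example of the closed-embedding fact: X = A^2 and the closed
# subvariety Y = Z(y - x**2) in X (a parabola).  The pullback of the
# inclusion is the canonical projection k[x, y] -> k[x, y]/(y - x**2),
# sending each polynomial to its class.  Since y = x**2 modulo I(Y),
# every class has a representative with y eliminated, and the projection
# is visibly surjective -- so A(Y) is finite over the image, generated by 1.
x, y = sp.symbols('x y')
f = 3*y**2 + x*y + 7
class_of_f = sp.expand(f.subs(y, x**2))   # representative of f mod (y - x**2)
print(class_of_f)                          # 3*x**4 + x**3 + 7
```

Substituting y = x^2 is legitimate here exactly because f and its substitution differ by a multiple of y − x^2.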
Yes — and this is a direct consequence of the proposition we had before. Remember, we had: if A ⊂ B ⊂ C are k-algebras, and B is finite over A and C is finite over B, then C is finite over A. This directly translates into statement (a): you just put in A = the image of A(Z), B = the image of A(Y), and C = A(X), identifying the coordinate rings with their images under the pullbacks, and it is just that statement. And the other part of the proposition — if C is finite over A, then C is finite over B — is precisely statement (b). So it is just a translation. There is one particular case in which we will want to use this, which is a bit trivial but saves a little headache: in particular, if φ: X → Y is finite and the image φ(X) is contained in some closed subvariety W ⊂ Y, then φ, considered as a map from X to W, is also finite. For this we just use statement (b) together with the inclusion of W into Y, which we know is finite. (And yes — W lies in Y, otherwise the statement would make no sense; thank you.)

Now I want to start looking a little at the topological properties of finite maps. One important property of finite maps is that they are closed: the image of a closed subset is closed. Remember that a morphism from a projective variety is always closed — that had to do with the completeness of projective varieties. For affine varieties this is not the case: a morphism from an affine variety does not have to be closed. But if it is finite, it will be closed: finite morphisms are closed. We will have to work towards that, and we first start with a simple remark.

Remark. If X is an affine variety and I ⊂ A(X) is a proper ideal, then the zero set of this ideal, Z(I) = { p in X : f(p) = 0 for all f in I }, is not the empty set.

This looks very much like the Nullstellensatz, and it is indeed a straightforward consequence of it. Proof: let π: k[x_1, ..., x_n] → A(X) = k[x_1, ..., x_n]/I(X) be the canonical projection. If I take the inverse image π^(-1)(I), this is an ideal in the polynomial ring, and it is a proper ideal — otherwise I would have to be the whole of A(X). So it follows by the usual Nullstellensatz that the zero set Z(π^(-1)(I)) is non-empty; this is a zero set in A^n, but by definition it is just the same as Z(I). So much for the remark.

Now we come to the statement I announced. Theorem: finite morphisms are closed. This is a bit more difficult, so we will have to work a little, and first we make some reductions. Proof: let f: X → Y be a finite morphism — by our definition, X and Y are affine. We take a closed subset of X and have to show that its image is closed in Y. It is enough to show this for every irreducible closed subset, since every closed subset is a finite union of irreducible closed subsets and the image of the union is the finite union of the images. So let W ⊂ X be a closed subvariety; we have to show that f(W) is closed in Y. Let Z be the closure of f(W) in Y; then we have to show that Z = f(W). But now we can just replace X by W and Y by Z: the restriction of a finite morphism to a closed subvariety is still finite, and the restricted map has dense image, so we have to show it is surjective. So, replacing X by W and Y by Z, we have to show: if f: X → Y — calling them X and Y again — is a finite morphism of affine varieties with dense image, a dominant morphism if you want, then f is surjective. This is the same statement, just restricted.

So let us prove this. As f(X) is dense in Y, the pullback f*: A(Y) → A(X) is injective. I have used this before; I never proved it, but you have now got it as an exercise, and you can convince yourself it is quite simple. For simplicity we identify A(Y) with its image f*(A(Y)) in A(X), so that we can view A(Y) as a subring of A(X) — basically just to save some notation. Now, Y is an affine variety, so we can assume it lies in some A^n as a closed subvariety. Take a point p in Y — an arbitrary point. We have to show that the fiber over p is non-empty, that p lies in the image, because if we show this for every point, the map is surjective. So we have to show that f^(-1)(p) is non-empty.

Now we want to make this an algebraic statement. Call x_1, ..., x_n the coordinates on A^n and write p = (a_1, ..., a_n). Write down the maximal ideal m = (x_1 − a_1, ..., x_n − a_n) — the ideal of all functions vanishing at p. Here the x_i are really the coordinate functions, or rather their classes in A(Y), since p lies in Y; so m is an ideal in A(Y). I have viewed A(Y) as a subring of A(X), and I now want to look at the ideal in A(X) generated by m: we consider A(X)·m, the ideal of A(X) generated by the products a·m with a in A(X) and m in m. Why do I consider this? Because I claim that the zero set of this ideal is exactly the fiber. Indeed, f^(-1)(p) = { q in X : f(q) = p }. But what does it mean for f(q) to equal p? It means that the i-th coordinate of f(q) is a_i for all i; in other words, (x_i − a_i)(f(q)) = 0 for all i, that is, f*(x_i − a_i)(q) = 0 for all i. With our identification, this just says that x_i − a_i, viewed as an element of A(X), vanishes at q. So f^(-1)(p) is the set of common zeros of the x_i − a_i in X, which is the zero set of the ideal they generate — remember that now we are in X, so we have to take the ideal generated in A(X) — and this is just A(X)·m. So we find that the fiber is the zero set of A(X)·m. Now we use the remark from before: this zero set is non-empty provided the ideal is not the whole ring. So it is enough to show that A(X)·m is a proper ideal, A(X)·m ≠ A(X); then the zero set — the fiber — is non-empty. This is the last step, and we make it into a lemma in its own right. Thus it is enough to prove the following lemma.

Lemma. Let B be a finite A-algebra and assume B is an integral domain. Let I ⊂ A be a proper ideal. Then BI ⊂ B is a proper ideal.

This will do the job: in our case A = A(Y) and B = A(X); we assumed X is irreducible — a variety — so A(X) is an integral domain; and m is obviously a proper ideal in A(Y), in fact a maximal one. The lemma then says that the ideal generated by m in A(X) is a proper ideal, and by what I wrote above, the fiber is then non-empty. So we only have to prove the lemma. We again have this curious condition with the integral domain, and again it is there because we will use the determinant — that is where it comes from.

So let us go through the proof. We argue indirectly: assume BI = B; we will derive a contradiction. B is a finite A-algebra, so we can write B = A b_1 + ... + A b_n with the b_i in B. Since B = IB, every element of B is an I-linear combination of the generators — multiplying by elements of I turns the coefficients from A into I — so B = I b_1 + ... + I b_n. In particular the b_i themselves, being elements of B, can be written in this form:

b_i = Σ_(j=1..n) a_ij b_j, where now the a_ij are elements of I.

And now we do the same trick as before. Write M = (δ_ij − a_ij), i, j = 1, ..., n — this time we do not need the x. Bringing everything to one side as before, we see that this matrix applied to the vector (b_1, ..., b_n) is zero: M (b_1, ..., b_n)^t = 0 — that is what the formula above says. Again view M as a matrix with coefficients in the quotient field Q(B) of B, which is a field; since its kernel contains a non-zero vector, the determinant of M must vanish: det M = 0. Now we have again to remember the definition of the determinant as the sum over all permutations. There is precisely one term, the diagonal one (1 − a_11)(1 − a_22) ··· (1 − a_nn), which, when multiplied out, contributes the summand 1 — everything else in it lies in I — and all the other terms are divisible by some a_ij, which lies in I. So altogether we find

0 = det M = 1 + Σ_l c_l, with the c_l elements of I,

that is, 0 = 1 + (something in I). How can this be? In A, we can bring the 1 to the other side and find that 1 lies in I — so I is not a proper ideal. This is a contradiction, if you want; or one could call it a contraposition: we have shown that if BI is not a proper ideal of B, then I was not a proper ideal of A. So we have proven the lemma, and with it, finally, that finite morphisms are closed. That is a nice place to stop.
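The determinant expansion at the heart of the lemma can be checked in a toy case. This is my own example, with A = the integers and the proper ideal I = 2Z standing in for the abstract A and I:

```python
import sympy as sp

# My own toy check of the lemma's key computation: take A = Z, I = 2Z,
# and let the a_ij be arbitrary elements of I.  In det(delta_ij - a_ij),
# every term is divisible by some a_ij in I except for the single 1
# coming from the diagonal product (1 - a_11)...(1 - a_nn), so the
# determinant equals 1 + (element of I) -- in particular it is odd,
# hence non-zero, which is exactly the contradiction used in the proof.
A_mat = sp.Matrix([[2, 4, 6],
                   [8, 2, 4],
                   [6, 8, 2]])            # entries a_ij taken from I = 2Z
d = (sp.eye(3) - A_mat).det()
print(d, d % 2)                           # the determinant is 1 modulo I
```

Any other choice of even entries would give the same residue 1 modulo 2, for the reason spelled out in the proof.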