This lecture is part of an online course on commutative algebra and will be about integral elements. So let's first define these. Suppose S is an R-algebra. An element s in S is called integral over the ring R if it satisfies a polynomial with leading coefficient 1, so we have s^n + r_{n-1}s^{n-1} + ... + r_0 = 0 for r_i in R. So for example, let's try to find the integral elements of the rationals: take R to be Z and S to be Q. Suppose we've got an element s = a/b with a and b integers, which we can obviously assume are coprime. Then (a/b)^n + r_{n-1}(a/b)^{n-1} + ... + r_0 = 0, where the r_i are integers. Now we multiply this by b^n and we find a^n + r_{n-1}a^{n-1}b + ... + r_0 b^n = 0. And now we see that if some prime p divides b, then p divides every term after the first, because they are all divisible by b, so p divides a^n and hence divides a. Well, a and b were coprime, so b must be a unit, and a/b must be an integer. So the integral elements of the rational numbers are just the integers, which is just as well, because otherwise the term "integral" would be a bit weird. Notice that all we've used here is that Z is a unique factorization domain and Q is its field of quotients. So if R is any unique factorization domain and S is its field of quotients, then the elements of S integral over R are just the elements of R. So let's look at some basic properties of integral elements. First of all, if s is integral, this implies that R[s] is finite, and what this means is finite as an R-module. Remember that when you talk about things being finite, you can mean finitely generated as an algebra, or finitely generated as a module, or finitely generated as a field extension, and these are all different.
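The argument above can be sanity-checked computationally. A minimal Python sketch (the function name is my own): the argument shows that any rational root of a monic integer polynomial is an integer, and that integer must divide the constant term, so a finite search finds all of them.

```python
def rational_roots(coeffs):
    """All rational roots of the monic integer polynomial
    x^n + r_{n-1} x^{n-1} + ... + r_1 x + r_0,
    given coeffs = [r_0, r_1, ..., r_{n-1}].

    By the argument in the lecture, any rational root is already an
    integer, and an integer root must divide the constant term r_0,
    so it suffices to test the divisors of r_0.
    """
    n = len(coeffs)

    def value(x):
        return x**n + sum(c * x**i for i, c in enumerate(coeffs))

    r0 = coeffs[0]
    if r0 == 0:
        candidates = {0}
    else:
        candidates = set()
        for d in range(1, abs(r0) + 1):
            if r0 % d == 0:
                candidates.update({d, -d})
    return sorted(x for x in candidates if value(x) == 0)

# x^2 - 3x + 2 = (x - 1)(x - 2): the integral elements 1 and 2
print(rational_roots([2, -3]))   # -> [1, 2]
# x^2 - 2: sqrt(2) is integral over Z but has no rational root
print(rational_roots([-2, 0]))   # -> []
```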
And when we talk about a ring extension being finite, we usually mean finite as a module; the terminology is just a bit confusing. And this is obvious, because R[s] is generated as a module by 1, s, s^2, up to s^{n-1}, where s^n = -(r_{n-1}s^{n-1} + ... + r_0). You can see from this that all higher powers of s are also in this module. Of course R[s] is finite as an algebra, since it's generated by the single element s, but as a module you need more generators, though still only a finite number. And the converse is actually also true: if R[s] is finite, then s is integral. In fact, we can prove something slightly stronger: if S is finite over R as a module, then all elements of S are integral. And I'm going to give two proofs of this. First, if R is Noetherian, there is a very easy proof. We just look at the submodule of S generated by 1, then the submodule generated by 1 and s, then the submodule generated by 1, s, and s^2, and so on. This is an increasing sequence of R-modules, so eventually two of them must be equal: the module generated by 1, s, ..., s^{n-1} must equal the module generated by 1, s, ..., s^n. But this says that s^n must be in the module generated by the powers up to s^{n-1}, so s^n = r_0 + r_1 s + ... + r_{n-1}s^{n-1}. So if R is Noetherian, this is very easy to prove. And normally, if something has a proof like this over Noetherian rings, it's generally not true over non-Noetherian rings. But this is an unusual exception: in spite of this easy proof for Noetherian rings, it's still true for non-Noetherian rings. And to prove this, we're going to use the Cayley-Hamilton theorem. Well, you may have come across the Cayley-Hamilton theorem in linear algebra, but people who do the Cayley-Hamilton theorem in linear algebra are wimps, because they just work over vector spaces. We're going to do the Cayley-Hamilton theorem for modules over arbitrary rings.
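The fact that all higher powers of s fall back into the module spanned by 1, s, ..., s^{n-1} is just division with remainder by the monic polynomial. A small sympy sketch for s the cube root of 2 (so n = 3):

```python
from sympy import Poly, symbols

x = symbols('x')
# s = cube root of 2 satisfies the monic polynomial x^3 - 2 = 0,
# so Z[s] is spanned as a Z-module by 1, s, s^2
monic = Poly(x**3 - 2, x)

# every higher power of s reduces, modulo x^3 - 2, to a
# Z-linear combination of 1, x, x^2
for k in range(3, 8):
    print(f"s^{k} =", Poly(x**k, x).rem(monic).as_expr())
```

For instance s^7 reduces to 4s, since s^7 = s·(s^3)^2 = 4s; the remainder on division by a monic polynomial always has integer coefficients, which is exactly why the module is finitely generated.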
So what does it say? It says that any automorphism — sorry, it doesn't have to be an automorphism, it can be an endomorphism — phi of a finitely generated R-module M is integral, and it should be fairly obvious what that means: phi satisfies a polynomial equation with leading coefficient 1 as an endomorphism of the R-module M. And to prove this, we suppose M is generated by elements m_1, ..., m_n. These do not necessarily form a basis; in fact M might not have a basis, it might not even be a free module, and the m_i might not be linearly independent. Then phi is given by some n by n matrix A. And you've got to be a bit careful here, because this matrix is not uniquely determined by m_1, ..., m_n, since these might not be linearly independent, so there might be several different matrices corresponding to phi: the matrix is not unique. If we were working over a field and took the m_i to be a basis, then there would be a unique matrix for phi, but for rings we have to be a little bit more flexible. And saying that phi is given by the matrix A means that if we take the matrix with just phi down the diagonal and subtract the matrix A, and apply this to the vector (m_1, ..., m_n), we get zero. Note that phi·I - A is a matrix whose coefficients are in R[phi], not just in R. And now we know that for any matrix, the matrix times its adjugate — some weird matrix you've probably forgotten from linear algebra, formed out of lots of (n-1) by (n-1) minors — is equal to the determinant of the matrix times the identity matrix, with just ones down the diagonal. So in particular, if we take the determinant of the matrix phi·I - A, it acts as zero on the vector (m_1, ..., m_n), and hence as zero on the module M, because it kills off a set of generators. In other words, this determinant acts as zero on M.
And you notice this determinant is a polynomial in phi with coefficients in R and leading coefficient 1, because we get a term phi^n from the product down the diagonal. So phi is a root of the monic polynomial you get by taking the determinant of x·I - A, so phi is integral. We can even write down the explicit polynomial it satisfies if we really want to. From this, we can go back to what we were trying to show: if S is finite as a module over R, then all elements of S are integral. And what we do is take the module M to be S, and for an element s in S, take phi to be the endomorphism given by multiplication by s, which is an R-linear map from S to S. Then applying the theorem, we see that the element s is integral over R, because the monic polynomial satisfied by the transformation phi is obviously also satisfied by s. So we see from this that S being finite over the ring R is equivalent to saying that S is generated as an algebra by a finite number of integral elements. So let's just check this. Well, one implication we've just done: all elements of S are integral, so you can just take a finite set of generators of S as a module, and these will obviously generate S as an algebra, and they will all be integral. So that's easy. The other implication is also fairly obvious, but all you need to show is that if we've got two finite subalgebras of S, let's call them S_1 and S_2, then they generate a finite subalgebra. Finite, as usual, means finitely generated as a module, and that's very easy: if S_1 is generated as a module by elements a_1, ..., a_m and S_2 by elements b_1, ..., b_n, then S_1 and S_2 are both contained in the module generated by the products a_i·b_j, and this is obviously also a finite module, and it's obviously closed under multiplication and addition, so it's a ring.
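The Cayley-Hamilton statement can be checked concretely for an endomorphism of the module Z^2, i.e. an integer matrix. A small sympy sketch (the matrix is an arbitrary example of mine): the characteristic polynomial det(x·I - A) is monic with coefficients in Z, and substituting A into it gives the zero endomorphism.

```python
from sympy import Matrix, symbols, zeros

x = symbols('x')
# an endomorphism of the Z-module Z^2, given by an integer matrix
A = Matrix([[2, 1], [3, 4]])

# characteristic polynomial det(x*I - A): monic, coefficients in Z
p = A.charpoly(x)
coeffs = p.all_coeffs()          # here [1, -6, 5]: x^2 - 6x + 5

# Cayley-Hamilton: plug A into its own characteristic polynomial
n = len(coeffs) - 1
result = zeros(2, 2)
for i, c in enumerate(coeffs):
    result += c * A**(n - i)
print(result)                    # -> the zero matrix
```

So this particular phi satisfies the monic equation phi^2 - 6·phi + 5 = 0 over Z, exactly as the theorem predicts.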
So any integral element generates a finite algebra, so a finite number of integral elements also generate a finite algebra, and this implication follows. In particular, if we've got two integral elements, then their sum and product are also integral. So for example, in number theory, we can take R to be the integers and S to be the complex numbers; then the elements of C integral over Z are the algebraic integers, and these form a ring, because we've just shown that any two algebraic integers each generate a finite extension of Z, and the ring generated by both of them together is also a finite extension of Z. Notice that this is not really obvious if you just use the definition in terms of roots of polynomials. For example, suppose you take the square root of 2, add the cube root of 2, and add the fifth root of 2, and try to find a polynomial in Z[x] with this as a root. Well, I'm not going to do that, because the smallest such polynomial has degree 30, and there's no way I'm going to write out a degree 30 polynomial. So you see, if you think of these as roots of polynomials, it's not at all obvious that the sum of two roots of polynomials is a root of another polynomial, but it is sort of obvious if you think of integral elements as things generating finite extensions. So next we're going to look at the concept of normalization. The integral closure of R in its field of quotients is called the normalization of R — yet another example of the word "normal" being grossly overused by mathematicians. The integral closure is of course just all elements of the field of quotients that are integral over the ring R. So let's see some examples of this. For example, let's find the normalization of Z with root 5 adjoined, Z[root 5]. The field of quotients is just Q(root 5), the numbers of the form a + b·root 5 for a and b rational numbers.
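You can see this phenomenon with a smaller example than the degree 30 one. A sympy sketch: sqrt(2) and the cube root of 2 each satisfy an obvious monic integer polynomial, and their sum, while far from obviously integral, satisfies a monic integer polynomial of degree 6 = 2·3.

```python
from sympy import sqrt, Rational, minimal_polynomial, symbols

x = symbols('x')
# sqrt(2) and 2^(1/3) are each integral over Z (roots of x^2 - 2
# and x^3 - 2); their sum is therefore an algebraic integer, i.e.
# it satisfies a monic polynomial with integer coefficients
s = sqrt(2) + 2**Rational(1, 3)
p = minimal_polynomial(s, x, polys=True)
print(p.as_expr())
```

The minimal polynomial that comes out is x^6 - 6x^4 - 4x^3 + 12x^2 - 24x - 4: monic with integer coefficients, of degree 6 rather than 2 + 3 or 2·3 coefficients you could write down by inspection. For sqrt(2) + 2^(1/3) + 2^(1/5) the same computation would produce the degree 30 polynomial the lecture mentions.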
So let's find some elements of the normalization. Well, it obviously contains all numbers of the form m + n·root 5 for m, n integers, and it's not at all obvious what else it could possibly contain; the obvious guess is that these are all of them, but in fact it does contain some other elements. So let's try to find the normalization systematically; call it S. If m + n·root 5 is in S, with m and n rational, then pretty obviously m - n·root 5 is also in S, and we've seen that the normalization is a ring, so adding these we find that 2m is in S. But 2m is rational, and we've seen that any rational number integral over Z must actually be in Z, so 2m is in Z. So m is either an integer or a half-integer. The ones with m an integer we've pretty much found: you can easily check that if m is an integer then n must also be an integer. So this leaves the possibility that m might be a half-integer, and in fact there is a number with m = 1/2 that's integral, because we can take (1 + root 5)/2. This is the notorious golden ratio phi, and you can check that phi squared equals phi + 1, so it is integral over the integers. And this shows that the normalization of Z[root 5] is not Z[root 5], as you might guess, but Z[(1 + root 5)/2]. The same thing happens for Z[root d] for d congruent to 1 mod 4: you find that you can get things of the form (1 + root d)/2, which is a root of x^2 - x - (d - 1)/4. I don't usually give exercises in lectures, but let me give a quick one here: find the normalization of Z[root d] for d an integer, and you may as well assume d is squarefree, since otherwise you can just take out the square factor without really changing the normalization.
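Both claims here — that phi is integral, and the general polynomial for (1 + root d)/2 — are one-line checks in sympy:

```python
from sympy import sqrt, expand, Rational

# the golden ratio phi = (1 + sqrt(5))/2 satisfies the monic
# equation x^2 - x - 1 = 0, so it is integral over Z
phi = (1 + sqrt(5)) / 2
print(expand(phi**2 - phi - 1))                    # -> 0

# more generally (1 + sqrt(d))/2 is a root of x^2 - x - (d - 1)/4,
# whose coefficients lie in Z exactly when d is 1 mod 4
for d in [5, 13, -3]:
    s = (1 + sqrt(d)) / 2
    print(d, expand(s**2 - s - Rational(d - 1, 4)))  # -> 0 each time
```

Note d = -3 is also 1 mod 4, which is why (1 + sqrt(-3))/2 (a sixth root of unity) is an algebraic integer too.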
So next, one other thing we should check is that the integral closure of the integral closure is just equal to the integral closure. For example, suppose you've got a root x of, say, x^3 + alpha·x + fifth root of 2 = 0, where alpha^3 + alpha + 1 = 0. Then x is also an algebraic integer, which again is not immediately obvious if you try to find a polynomial with integer coefficients satisfied by x. And this follows from the fact that if s_0, ..., s_{n-1} are integral over R and s^n + s_{n-1}·s^{n-1} + ... + s_0 = 0, then s is integral over R. This follows fairly easily, because you notice that R is contained in R[s_0, ..., s_{n-1}], which is contained in R[s_0, ..., s_{n-1}, s]; the first extension is finite — meaning finite as an R-module, as usual — and the second extension is finite, and you can easily check that a composition of two finite extensions like this is also finite. So the element s is integral, because it lies in a finite extension. And finally, let's just mention that the ring R is called normal if it is integrally closed in its field of quotients — I guess if I'm taking a field of quotients, I should make R an integral domain. So, for example, if R is a unique factorization domain, then R is normal. We actually proved that at the beginning of the lecture: we showed that the integers are integrally closed in the rationals, and commented that the same proof works for any unique factorization domain. And we've also seen that something like Z[root 5] is not normal, because its integral closure is Z[(1 + root 5)/2]. So what we're going to do next lecture is study the geometric meaning of being integral or being normal, because it turns out that being normal is a sort of very mild form of being non-singular: it turns out that if you've got a variety and its coordinate ring is normal, that doesn't mean it's non-singular, but it does mean that all singularities have codimension at least two.