 This lecture is part of an online course on commutative algebra and will be about a proof of Hilbert's theorem that ideals of polynomial rings are finitely generated. So if you've got a polynomial ring in finitely many variables over either a field or the integers, Hilbert showed that all ideals are finitely generated. In other words, the ring is Noetherian, and we recall that the property that all ideals are finitely generated is equivalent to the property that all ascending chains of ideals I_1 ⊆ I_2 ⊆ I_3 ⊆ ... are eventually constant. And we're going to start off not with Hilbert's original proof, but with a sort of cleaned-up version of it. You see, instead of working with polynomials directly, you can actually prove a more general theorem that says if R is Noetherian, which just means all ideals are finitely generated, then the polynomial ring R[x] is Noetherian. And since fields and the ring of integers are both Noetherian (in a field the only ideals are 0 and the whole field, and in Z all ideals are generated by one element), by induction you see that all polynomial rings in a finite number of variables over a field or over Z are also Noetherian. So we're going to prove that if R is Noetherian, then R[x] is Noetherian. Suppose I is an ideal of R[x]. Now what we're going to do is form a chain of ideals in R. I_0 is going to be the ideal of leading coefficients of degree 0 elements a_0 in I. Now that seems a rather roundabout way of saying the constant elements that are in I. The reason I'm saying it in this roundabout way is because I'm going to define I_1 to be the ideal of leading coefficients a_1 of degree 1 elements a_1 x + a_0 in I. So I_1 is formed by those leading coefficients a_1, just as I_0 was formed by the constants. And I_2 is defined in the same way: the leading coefficients a_2 of degree 2 elements a_2 x^2 + a_1 x + a_0 in I, and so on.
And now we notice, first of all, that each I_k is an ideal of R. It's kind of obvious, because you can add two polynomials of the same degree, and you can multiply a polynomial by an element of R and it stays the same degree. And secondly, it's almost as obvious that I_0 ⊆ I_1 ⊆ I_2 and so on, because if we multiply a polynomial of degree n by x, we get a polynomial of degree n+1 with the same leading coefficient. And now we apply the fact that R is Noetherian: each I_k is finitely generated as an ideal of R, and any ascending chain of ideals in a Noetherian ring like R must be eventually constant. So for some n, we have I_n = I_{n+1} = I_{n+2} and so on; they're all the same from some point onwards. And now we'll find a finite set of generators for I, which we construct as follows. S_0 is a finite set of polynomials of degree 0, so these are just constants a_0, whose leading coefficients generate I_0. Again, calling these polynomials of degree 0 is a bit silly because they're just constants, but the reason is that we want S_1 to be a finite set of polynomials a_1 x + a_0 of degree 1 whose leading coefficients a_1 generate I_1. And we go on like this in the obvious way until we get to S_n, a finite set of polynomials a_n x^n + ... + a_0 whose leading coefficients a_n generate I_n, and we stop at n. So remember, n is the point where I_n is the same as I_{n+1} and I_{n+2} and so on. And S, our set of generators, is going to be the union of S_0, S_1, up to S_n. And now what we're going to do is show that S generates the ideal I, and this is fairly easy to see.
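As a concrete illustration (not from the lecture), here's a small Python sketch for the hypothetical example I = (2, x) in Z[x], where membership is easy to test: f is in (2, x) exactly when its constant term is even. Brute-force searching leading coefficients over a small box suggests the chain I_0 = (2) ⊆ I_1 = I_2 = ... = Z, which stabilizes at n = 1:

```python
from math import gcd
from itertools import product

def in_ideal(coeffs):
    # f is in (2, x) in Z[x] exactly when its constant term is even
    return coeffs[0] % 2 == 0

def leading_coeff_ideal(k, box=3):
    """Generator of I_k, the ideal of leading coefficients of degree-k
    elements of (2, x), searching coefficients in [-box, box]."""
    g = 0
    for coeffs in product(range(-box, box + 1), repeat=k + 1):
        if coeffs[k] != 0 and in_ideal(coeffs):
            g = gcd(g, abs(coeffs[k]))
    return g

# the chain I_0 ⊆ I_1 ⊆ I_2 stabilizes: (2) ⊆ (1) = (1)
print([leading_coeff_ideal(k) for k in range(3)])  # [2, 1, 1]
```

The search box is just a shortcut for the illustration; for this particular ideal it already finds the true generators, since x itself shows 1 is a leading coefficient in every degree at least 1.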
So what we do is observe that if f = a_m x^m + (lower-order terms) is in I, we can find some element of the ideal generated by S with the same leading term. If m ≤ n, then a_m is in I_m, and by the definition of S_m there's some combination of elements of S_m with the same leading coefficient. If m > n, then a_m is in I_n, so we can find some element a_m x^n + ... in the ideal generated by S_n, and we just multiply it by x^{m-n}. So we can always find some element of the ideal generated by S with the same leading term; let's call this element g, and now we just subtract. f - g is in I, still, and has smaller degree than f. And now we just keep repeating until f is equal to zero, and we've managed to write f as a combination of elements of S with coefficients in R[x]. So this proves Hilbert's theorem. In practice, you find the number of generators of an ideal can actually be really, really large, I mean too large to fit on computers. Also, this process is not entirely constructive. The problem is at the step where we say that for some n these ideals are eventually constant: if you're actually trying to calculate, it can be really hard to tell when you've reached the right n, because you might find I_n = I_{n+1} = I_{n+2}, but maybe there's something in I_{n+3} that's bigger, and it looks as if you have to check an infinite number of cases. And there was some controversy about Hilbert's proof when he first gave it because of this non-constructive element in it. I mean, at that time, invariant theorists thought you were supposed to actually give an algorithm to find invariants, and Hilbert's proof left this kind of slightly funny non-constructive point. However, this non-constructive bit actually has nothing to do with polynomial rings. It's a problem even if you're trying to show that ideals of Z are finitely generated.
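The degree-lowering loop in the proof can be sketched for the same hypothetical example I = (2, x) in Z[x], where the generating set produced by the proof is just S = {2, x}: repeatedly kill the leading term of f, and record the multipliers g and h so that f = 2g + x·h. A minimal sketch, assuming polynomials are stored as coefficient lists:

```python
def express(f):
    """Write f in (2, x) as f = 2*g + x*h by repeatedly killing the
    leading term, as in the proof.  f, g, h are coefficient lists:
    f[i] is the coefficient of x^i."""
    f = list(f)
    g = [0] * len(f)
    h = [0] * len(f)
    while any(f):
        m = max(i for i, c in enumerate(f) if c)   # degree of f
        if m == 0:
            assert f[0] % 2 == 0    # leading coeff lies in I_0 = (2)
            g[0] += f[0] // 2       # subtract (f[0]/2) * 2
        else:
            h[m - 1] += f[m]        # subtract f[m] * x^(m-1) * x
        f[m] = 0                    # the degree strictly drops
    return g, h

g, h = express([4, 3, 5])           # f = 4 + 3x + 5x^2
print(g, h)                         # [2, 0, 0] [3, 5, 0]
```

Checking: 2·2 + x·(3 + 5x) = 4 + 3x + 5x^2, so the loop really has rewritten f in terms of S, and each pass strictly lowers the degree, which is why it terminates.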
So suppose I've got an ideal of the integers, and for any integer, if you ask me, I will tell you whether or not it is in the ideal, and you need to find a finite set of generators for this ideal. So you ask me, is 1 in the ideal, and I say no; is 2 in it, and I say no; is 3 in it, and I say no. And if you ever get to some integer n where I say yes, n is in the ideal, then you've found a finite set of generators for this ideal: it's just {n}, because an ideal of Z is generated by its smallest positive element. But if I never answer yes to your question, you can never be sure. I mean, maybe the ideal is just zero. So maybe you get up to a million, and I say no, the numbers 1 to a million are not in your ideal. So then you say, well, the ideal is obviously generated by 0, and I say no, a million and one is a generator of the ideal. So there's no real constructive way to find a basis even for ideals of Z, and the fact that this problem also turns up for polynomial rings isn't really all that surprising. So next we have a sort of variation of Hilbert's theorem. What I'm going to do is show that the ring of formal power series over R is Noetherian if R is Noetherian. And what we can do is try to copy the proof of Hilbert's theorem, so we'd have to look at the leading coefficient of a power series. Well, this is a bit tricky, because a power series doesn't actually have a largest term; it just goes on forever. So we can't do that. What we do instead is sort of turn Hilbert's proof upside down: we look at the smallest term of a power series instead of the largest term. So we set I_0 to be the ideal of coefficients a_0 for a_0 + a_1 x + a_2 x^2 + ... in I, and we let I_1 be the ideal of coefficients a_1 for a_1 x + a_2 x^2 + ... in I, and so on. And as before we find I_0 ⊆ I_1 ⊆ I_2 and so on, and each I_k is an ideal of R. So just as before, we can find n with I_n = I_{n+1} = ... and so on.
And now we can almost finish as before, but there's one slightly tricky problem: we seem to need an infinite sum to write f in terms of the elements of S. We find a finite set of elements S just as we did in the case of polynomials, but the trouble is that in order to kill off all the terms of a power series, we seem to need an infinite number of steps. In other words, if we've got some s_n in S, we might have to add a term r_n s_n, then a term r_{n+1} x s_n, then a term r_{n+2} x^2 s_n, and so on. However, this doesn't really matter, because r_n + r_{n+1} x + r_{n+2} x^2 + ... is itself in the ring of power series, so we can combine this infinite number of terms into just one power series multiplier. So a ring of formal power series over R is Noetherian. And of course we can iterate this: a ring of formal power series in several variables is also Noetherian if R is Noetherian. Well, the original proofs by people like Hilbert and Gordan didn't work like that, so now what I'm going to do is give a proof that's more like the original proofs, because this introduces some other useful ideas. So here's an alternative proof, which is essentially due to Gordan and is a variation of Hilbert's original proof. What these proofs do is prove the result directly for polynomial rings instead of working by induction. And what you do is first use a result sometimes called Dickson's lemma, which is a bit unfair to Gordan because Gordan found it first, but nothing is ever named after its original discoverer. This says that if S is any set of monomials in a polynomial ring in a finite number of variables, it has only a finite number of minimal elements, where the monomials are ordered by divisibility. So what we're doing is not looking at all polynomials; we're just looking at the monomials, and these are much easier to work with.
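For a finite set of monomials, the minimal elements under divisibility can be computed directly. Here's a small sketch (Dickson's lemma is of course about arbitrary, possibly infinite, sets, so this is only an illustration of the statement), with a monomial x^a y^b stored as the exponent pair (a, b):

```python
def divides(m, n):
    # x^a y^b divides x^c y^d exactly when a <= c and b <= d
    return all(a <= b for a, b in zip(m, n))

def minimal_monomials(S):
    """Monomials of S that are minimal under divisibility."""
    return sorted(m for m in S
                  if not any(divides(o, m) for o in S if o != m))

S = {(3, 7), (4, 7), (3, 8), (4, 6), (5, 1)}
print(minimal_monomials(S))  # [(3, 7), (4, 6), (5, 1)]
```

Here x^3 y^7 knocks out x^4 y^7 and x^3 y^8, but x^4 y^6 and x^5 y^1 survive because they're incomparable with it, just like the example in the lecture.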
So the set of monomials is just a semigroup under multiplication, and it's ordered by divisibility. This is only a partial order. So for instance, x^3 y^7 is less than x^4 y^7, and it's also less than x^3 y^8, but you can't compare it with x^4 y^6: neither of these is smaller than the other. And I'll just sketch how you prove this for n = 2, because I always get confused about the case when n is greater than 2; the case of arbitrary n you can do as an exercise, and the proof is sort of similar to the proof of Hilbert's theorem I gave earlier, but a bit simpler. So what we do is picture all the monomials laid out in a grid, x^a y^b at position (a, b). We've got some collection of monomials, and we want to show there are only a finite number of minimal ones. So first of all, we find the first column containing an element of the set, and we take the smallest element of the set in that column. This wipes out two regions: every later element in that column, and everything above and to the right, so no other minimal monomials can lie in either of them. And now we work along the columns and find the first column containing an element strictly below the height of that first minimal element, and take the lowest such element; this wipes out another region in the same way. And we carry on, finding the next column containing a new minimal element, which wipes out a further region. And now you see what's happened: the remaining candidates all lie in a sort of long thin horizontal rectangle, and each time we find a new minimal element, this rectangle gets strictly thinner.
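The column-sweep argument for n = 2 can be sketched as code (for a finite sample set, since that's all we can store; the content of the proof is that the same sweep terminates even for an infinite set). Sweeping columns left to right, each new minimal element has strictly smaller y than the previous one, which is exactly the "rectangle gets thinner" step:

```python
def minimal_by_sweep(S):
    """Minimal monomials of S (exponent pairs, n = 2), found by
    sweeping columns left to right as in the proof sketch."""
    minimals = []
    for x in sorted({a for a, b in S}):      # columns containing elements
        y = min(b for a, b in S if a == x)   # lowest point in this column
        # (x, y) is minimal unless an earlier minimal element divides it
        if all(not (a <= x and b <= y) for a, b in minimals):
            minimals.append((x, y))
    return minimals

S = {(3, 7), (4, 7), (3, 8), (4, 6), (5, 1)}
m = minimal_by_sweep(S)
print(m)                                     # [(3, 7), (4, 6), (5, 1)]
# successive minimal elements have strictly decreasing y-coordinate:
ys = [b for a, b in m]
print(ys == sorted(ys, reverse=True))        # True
```

The strictly decreasing y-coordinates are the finiteness mechanism: once the first minimal element sits at height b_0, everything else must fit under height b_0, and each new find lowers the ceiling again.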
And since the rectangle has only finite width to begin with, it can only get thinner a finite number of times, so there are only a finite number of minimal elements in our set. So this is a very general finiteness result that in some sense doesn't really have much to do with polynomials; it's just a finiteness result about this semigroup, the free abelian semigroup on a finite number of generators. From this result about semigroups we can easily prove the result about ideals. So step two is to look at an ideal I in the ring of polynomials in n variables, and now we order the monomials lexicographically. So x^a y^b is considered to be greater than x^c y^d if a > c, or if a = c and b > d. This is now a total order, so you shouldn't confuse it with the partial order we had by divisibility. And you notice it's a well-order: there are no infinite strictly decreasing chains. And now we can define the leading term of any polynomial as the term, say r times x^a y^b, whose monomial is maximal in this lexicographic order. And now all we do is look at the set of leading monomials of polynomials in I. This is a subset of the monomials, so it has a finite number of minimal elements, and we pick a polynomial in I for each minimal element. So we've got a finite number of polynomials, and we just notice that these then generate the ideal I. And this is an easy exercise: if f is in I, its leading monomial is divisible by one of the minimal elements, because we took all of the minimal elements, so we can pick the corresponding generator and use a multiple of it to kill the leading term of f. And then we can keep on killing the leading term of f, making it smaller and smaller and smaller, by subtracting multiples of our generators.
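The lexicographic order and the "kill the leading term" step can be sketched as follows. This is a hedged sketch with coefficients in Q and polynomials stored as dicts from exponent pairs (a, b), for x^a y^b, to coefficients; conveniently, Python's tuple comparison is exactly the lexicographic order just described:

```python
from fractions import Fraction

def leading_term(poly):
    """Lex-largest monomial of poly, with its coefficient."""
    m = max(poly)                   # tuple comparison = lex order
    return m, poly[m]

def reduce_once(f, g):
    """Kill the leading term of f using g, assuming the leading
    monomial of g divides that of f."""
    (mf, cf), (mg, cg) = leading_term(f), leading_term(g)
    shift = tuple(a - b for a, b in zip(mf, mg))
    c = Fraction(cf) / Fraction(cg)
    out = dict(f)
    for m, cm in g.items():         # subtract c * x^shift * g
        mm = tuple(a + b for a, b in zip(m, shift))
        out[mm] = out.get(mm, 0) - c * cm
        if out[mm] == 0:
            del out[mm]
    return out

f = {(2, 1): 1, (1, 0): 1}          # x^2 y + x
g = {(1, 1): 1}                     # x y
print(reduce_once(f, g))            # {(1, 0): 1}, i.e. f - x*(xy) = x
```

After each call the lex-leading monomial strictly drops, and since lex is a well-order on the monomials, iterating this reduction must terminate, which is the point made in the lecture.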
And as we said, the order we put on the monomials is a well-order, so you can't have an infinite strictly decreasing sequence, so eventually this process will stop and we will have written f in terms of the basis. You notice this is more or less constructive, up to finding the finite number of minimal elements of a set. It's actually an example of something called a Gröbner basis. So choosing a Gröbner basis for an ideal involves putting a total order on the monomials like this, and there are lots of different total orders you can put on them. I mean, here we've used the lexicographic order, but you could order them by total degree first and then order within each degree, and so on. So there are many different orders we could choose that would still make this proof work. Again, Gröbner bases were sort of invented by Gordan, but he doesn't seem to get much credit for them, unfortunately. Okay, so in the next lecture, we'll be talking about how to use this to prove Hilbert's finiteness theorem for rings of invariants.