This is algebraic geometry lecture 12, where we will cover the proof of the Hilbert finiteness theorem. Let's recall what this says. Suppose that A = K[x1, ..., xn], polynomials in n variables over a field K. G is a group acting on K^n, which we think of as being spanned by x1, ..., xn, so G also acts on A, since A is just the ring of polynomials on this vector space. We look at A^G, the invariant elements of the ring A, and we ask: is A^G finitely generated as a K-algebra? That is, can we find a finite number of invariant elements such that all invariant elements can be expressed in terms of them? Sometimes it is and sometimes it isn't; it depends a bit on what the group is. So we're going to prove Hilbert's theorem: A^G is finitely generated if G is finite and the characteristic of K is equal to zero. Actually, Hilbert proved a much more general theorem. He didn't assume G was finite; he allowed G to be almost any reductive group, but we will do finite groups for simplicity. The condition that the characteristic of K is zero is not necessary either, but it simplifies the proof quite a lot. To prove it, we notice that A is graded by degree: A = A0 + A1 + A2 + ..., where A0 is just the field K, A1 contains the elements x1, ..., xn and is an n-dimensional vector space, and so on. This is really just the usual way of grading a polynomial ring. We let I be the ideal of A generated by the homogeneous elements of A^G of degree greater than zero. You've got to be a little bit careful about the definition of I. It is generated by invariant elements, but we're not thinking of it as an ideal in the ring of invariants: it is an ideal of A, so it contains elements that are not invariant. The other subtle thing is that we only take invariant elements of positive degree.
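As a concrete warm-up, here is a minimal sketch in Python using sympy (the choice of group, the symmetric group S2 swapping x and y, is my illustrative assumption, not a case singled out in the lecture). It just checks which polynomials lie in A^G:

```python
import sympy as sp

x, y = sp.symbols('x y')

# G = S2 acts on K^2 by swapping the coordinates x and y.
swap = lambda f: f.subs({x: y, y: x}, simultaneous=True)

def is_invariant(f):
    # f is in A^G exactly when every group element fixes it.
    return sp.expand(swap(f) - f) == 0

print(is_invariant(x + y))      # True: the elementary symmetric e1
print(is_invariant(x*y))        # True: the elementary symmetric e2
print(is_invariant(x**2 - y))   # False: not symmetric in x and y
```

For this group the invariant ring is the classical ring of symmetric polynomials, which is indeed finitely generated, by e1 = x + y and e2 = xy.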
If we took invariant elements of degree zero, these would include the element 1, so I would be the whole of A, which would not be terribly interesting. Next, we notice that I is finitely generated as an ideal (A is Noetherian, by the Hilbert basis theorem), and we can assume the generators are in A^G and homogeneous. So suppose i1, ..., ik are generators as above for the ideal I. We want to show they generate A^G as a K-algebra. You've got to be a bit careful here: there's a big difference between a set of elements generating an ideal and generating an algebra. If they generate something as an algebra, you're allowed to multiply them together and add them and so on. If they generate an ideal, you're also allowed to multiply these generators by any element of A, which gives quite a lot more. So there's no particular reason why elements that generate an ideal of A should generate the corresponding algebra. For example, suppose A = K[x, y]. I'm going to draw K[x, y] by drawing a point for each monomial: 1, x, x^2, x^3, then y, xy, x^2 y, then y^2, x y^2, and so on. So you can think of all the basis elements of the algebra A as points in a quadrant. Now take I to be the ideal generated by y; it's a perfectly good finitely generated ideal. Let's look at the ring R generated by I and 1. R is spanned by 1 and all monomials divisible by y, and you notice this ring is not finitely generated. You can generate it as a ring by taking the elements y, xy, x^2 y, and so on, but that gives you an infinite number of generators. And it's not difficult to see that you can't generate this ring by a finite number of elements: any product of two or more monomials in I has degree at least 2 in y, so every monomial x^m y has to be taken as a new generator. So even though the ideal I is finitely generated as an ideal, the ring it generates is not finitely generated as a ring or an algebra.
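The counting behind this counterexample can be checked mechanically. A minimal sketch in plain Python, representing each monomial x^i y^j by its exponent pair (i, j) (so that multiplying monomials adds the pairs): given the generators y, xy, ..., x^k y, no product of them ever produces x^(k+1) y, since any product of two or more has y-degree at least 2.

```python
from itertools import combinations_with_replacement

# Monomial x^i y^j  <->  exponent pair (i, j); products add exponents.
k = 3
gens = [(m, 1) for m in range(k + 1)]            # y, xy, x^2 y, ..., x^k y

# All monomials obtainable as a product of up to 4 of these generators.
products = set(gens)
for t in range(2, 5):
    for combo in combinations_with_replacement(gens, t):
        products.add((sum(i for i, _ in combo), sum(j for _, j in combo)))

# A product of two or more generators has y-degree >= 2, so the monomial
# x^(k+1) y is missed: it would have to be added as yet another generator.
print((k + 1, 1) in products)   # False
print((k, 1) in products)       # True
```

Raising the bound on t doesn't help, for the same y-degree reason, which is the infinite-generation phenomenon the picture shows.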
And this is a bit of a problem, because we've got a finite set of generators for an ideal and we want to show the corresponding ring is finitely generated, which, as we've just seen, fails for most subrings. So there must be some special property of the ring of invariants that we have to use. The special property is this: the ring of invariants A^G has a Reynolds operator, called ρ (a Greek letter rho, standing for Reynolds, I guess). The Reynolds operator ρ(a) is the average of a under the group G. For G finite, ρ(a) = (1/|G|) Σ_{g in G} g(a); we sum g(a) over all g in G and then divide by the order of G. It's in order to define the Reynolds operator that we needed to make the two assumptions on the group: we can sum over the group because G is finite, and we can take the average because the characteristic of K is equal to zero. If the characteristic of K divided the order of G, then |G| would be zero in K and we would not be able to define a Reynolds operator. You can actually ask: who was Reynolds? Well, Reynolds did not work in invariant theory. He was a guy working in fluid dynamics; in fact the Reynolds number in fluid dynamics, which roughly tells you what a fluid flow looks like, is named after him. So why was Reynolds looking at the average of something under a group? In his original application he was looking at a fluid flow varying in time, and a fluid flow varying in time is really kind of complicated, so he simplified it by taking the average fluid flow at each point. What he had was a group of time translations acting on his fluid flow, and he was taking the average under the group of time translations. So this is where the Reynolds operator originally comes from.
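To see the averaging in the simplest possible case, here is a hedged sketch with sympy (my choice of example: the group Z/2 acting on K[x] by x -> -x, which is not a case worked in the lecture). The Reynolds operator kills the odd part of a polynomial and keeps the even part, and the invariant ring is K[x^2]:

```python
import sympy as sp

x = sp.symbols('x')

# G = Z/2 acting on K[x] by x -> -x; the invariant ring is K[x^2].
G = [lambda f: f, lambda f: f.subs(x, -x)]

def rho(f):
    # Reynolds operator: sum over the group, divided by |G| = 2
    # (possible because the characteristic is zero).
    return sp.expand(sum(g(f) for g in G) / len(G))

print(rho(x**2))            # x**2: even terms survive
print(rho(x**3))            # 0: odd terms average away
print(rho(1 + x + x**4))    # x**4 + 1
```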
It came from taking the average of a fluid flow under the group of time translations, but here we're using it for a finite group rather than the group of all times. Anyway, the Reynolds operator has the following properties. First of all, ρ(1) = 1, and ρ(a + b) = ρ(a) + ρ(b); both of these are kind of trivial. Is ρ(ab) equal to ρ(a)·ρ(b)? The answer is, in general, no: this is not a homomorphism of algebras. However, there is one case in which it is true: ρ(ab) = a·ρ(b) = ρ(a)·ρ(b) if a = ρ(a). So if a is fixed by the group, then it is easy to check that ρ(ab) = a·ρ(b). This says that ρ is a homomorphism of A^G-modules from A to A^G. It's not a homomorphism of algebras from A to A^G, but it is at least a homomorphism of modules over the algebra A^G, and this follows almost immediately from the definition. So now let's see how to use the Reynolds operator to prove Hilbert's finiteness theorem. We show by induction on the degree that if X is in A^G and X is homogeneous, then X is in the algebra generated by i1, ..., ik. This is obvious for things of degree zero, which starts the induction, so we may as well assume that the degree of X is greater than zero. Now we write X = a1·i1 + ... + ak·ik for some aj in A. We can do this because X is in the ideal generated by i1, ..., ik; that is just the definition of I. And we can assume the aj are homogeneous. But this doesn't yet show that X is in the algebra generated by the i's, because the aj need not be in A^G: all we know is that they are some elements of the algebra A, and we don't know that they are fixed by the group G. Now apply the Reynolds operator.
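The four properties above can be checked symbolically. A minimal sketch with sympy, again using my illustrative choice of G = S2 swapping x and y: ρ(1) = 1 and additivity hold, ρ(ab) = ρ(a)ρ(b) fails in general, but the A^G-module property holds when the first factor is invariant.

```python
import sympy as sp

x, y = sp.symbols('x y')
G = [lambda f: f,
     lambda f: f.subs({x: y, y: x}, simultaneous=True)]  # swap x and y

def rho(f):
    # Reynolds operator for the finite group G (characteristic zero).
    return sp.expand(sum(g(f) for g in G) / len(G))

a, b = x, x*y**2
assert rho(sp.Integer(1)) == 1                             # rho(1) = 1
assert sp.expand(rho(a + b) - rho(a) - rho(b)) == 0        # additivity

# rho is NOT an algebra homomorphism:
print(sp.expand(rho(a)*rho(b) - rho(a*b)) == 0)            # False

# ...but it IS A^G-linear: rho(inv * b) = inv * rho(b) when inv = rho(inv).
inv = x + y
assert sp.expand(rho(inv*b) - inv*rho(b)) == 0
```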
We find X = ρ(X) = ρ(a1)·i1 + ρ(a2)·i2 + ... + ρ(ak)·ik. The first equality holds because we're assuming X is in A^G, and if something is fixed by G then the Reynolds operator fixes it. The second holds by the module property of the Reynolds operator: we chose the generators i1, i2, and so on to be fixed by G, so ρ(aj·ij) = ρ(aj)·ij for all j. Now we notice that ρ(a1), ρ(a2), and so on are in A^G, because the Reynolds operator projects elements of A onto the fixed elements. So the coefficients ρ(aj) are in A^G, and the elements ij are in A^G because we chose them to be. Moreover these coefficients have smaller degree: each ij has degree at least one, so the degree of ρ(aj) is less than the degree of X. So by induction the elements ρ(aj) are polynomials in i1, ..., ik, and therefore X is a polynomial in i1, ..., ik. That's the end of the proof of Hilbert's finiteness theorem, because it shows that every invariant element is a polynomial in these homogeneous positive-degree generators of the ideal. It's a rather remarkable proof, because the key content of it is almost just one line long: we just apply the Reynolds operator to this expression for X. And the simplicity of the argument kind of disguises the fact that proving finiteness of invariants was a really major open problem at the end of the 19th century. Hilbert's work almost trivialized this very difficult problem. People had proved special cases of it; for instance, Gordan's proof of the finiteness of generators for invariants of binary quantics was really rather long and complicated, and this is a much simpler proof of a much more general theorem.
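The key step, applying ρ to a decomposition with non-invariant coefficients, can be watched happening in a toy case. A hedged sketch with sympy, using my running assumption G = S2 with generators e1 = x + y, e2 = xy: we write the invariant X = x^2 + y^2 in the ideal with deliberately non-symmetric coefficients, then average the coefficients and check the identity survives.

```python
import sympy as sp

x, y = sp.symbols('x y')
swap = lambda f: f.subs({x: y, y: x}, simultaneous=True)
rho = lambda f: sp.expand((f + swap(f)) / 2)      # Reynolds operator for S2

# Invariant, homogeneous generators of the ideal I:
e1, e2 = x + y, x*y
X = x**2 + y**2                                   # an invariant to express

# One (non-unique) way to write X in the ideal, with NON-invariant coefficients:
a1 = x + y + x**2*y
a2 = -2 - x**2 - x*y
assert sp.expand(a1*e1 + a2*e2 - X) == 0

# Apply the Reynolds operator to the coefficients: they become invariant,
# and the identity survives because X, e1 and e2 are fixed by G.
b1, b2 = rho(a1), rho(a2)
assert sp.expand(b1*e1 + b2*e2 - X) == 0          # X = rho(a1) e1 + rho(a2) e2
assert sp.expand(swap(b1) - b1) == 0              # b1 is invariant
assert sp.expand(swap(b2) - b2) == 0              # b2 is invariant
```

Since b1 and b2 are invariant of smaller degree, the induction hypothesis applies to them, which is exactly the step in the proof.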
Well, as I said, Hilbert didn't prove this just for finite groups; he proved it for more general groups, so I want to briefly describe how we can extend it. Here are some extensions. First of all, for G compact and the field R or C, the same proof works. The point is we can define a Reynolds operator by integrating: ρ(a) = ∫_G g(a) dg, with respect to Haar measure on the group, and I guess we should divide by the volume of G if we haven't normalized it to volume one. So for compact groups over the reals or the complex numbers, much the same proof works. What about things like SL_n(C), which was more or less the case Hilbert actually did? Here we can use something called Weyl's unitarian trick. The point is that the group SL_n(C) contains the special unitary group SU(n), and this group is compact (unitary groups are compact), so we get finiteness of its rings of invariants. And now the key point is that complex actions of SL_n(C) on finite-dimensional complex vector spaces V are the same as actions of SU(n) on V. This is an easier result, which you get by looking at the Lie algebras of the groups and noticing that the complexification of the Lie algebra of SU(n) is the Lie algebra of SL_n(C). So the point is that finiteness of invariants of SU(n) implies finiteness of invariants of SL_n(C), provided it's acting as a complex group on complex vector spaces, essentially because the finite-dimensional representations in the two cases are equivalent. Something like this works for more general reductive groups, like orthogonal groups and symplectic groups, and with a bit more effort you can do it over arbitrary fields of characteristic zero, not just the complex numbers, although you need to work a bit harder because you can't integrate over the group in general.
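The integral version of the Reynolds operator can be computed symbolically in the simplest compact case. A hedged sketch with sympy (my illustrative choice: G = SO(2) acting on R^2 by rotation, averaged against normalized Haar measure dt/(2π) on the circle):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

def reynolds(f):
    # Rotate (x, y) through angle t, then average over the circle with
    # respect to normalized Haar measure dt / (2*pi).
    rotated = f.subs({x: sp.cos(t)*x - sp.sin(t)*y,
                      y: sp.sin(t)*x + sp.cos(t)*y}, simultaneous=True)
    return sp.simplify(sp.integrate(rotated, (t, 0, 2*sp.pi)) / (2*sp.pi))

print(reynolds(x**2))   # x**2/2 + y**2/2: a multiple of the SO(2) invariant
print(reynolds(x*y))    # 0
```

The output x^2/2 + y^2/2 is, as expected, a multiple of x^2 + y^2, the basic rotation invariant.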
As I mentioned before, you can also extend this to fields of positive characteristic, but that's quite a lot harder; it was done by Haboush a few decades ago. What Haboush did was to show there is a sort of Reynolds operator, only it isn't linear, so it's a bit more complicated. So a summary of this is that Hilbert's finiteness theorem means that we can quite often construct meaningful quotients of affine varieties, or affine algebraic sets, by groups, provided the group isn't too strange. Nagata actually found examples of groups that are so strange that their rings of invariants are not finitely generated: he found a group G acting on K^n such that the ring of invariants A^G is not finitely generated. Let me give a very brief summary of what Nagata's example looked like. What you do is you first take the group of 2-by-2 matrices of the form (1 a; 0 1). The set of all such matrices forms a one-dimensional group isomorphic to the additive group of K, and it acts on K^2. This is a sort of universal counterexample to anything: it's something called a unipotent action, where all eigenvalues of all the matrices are one, and unipotent groups tend to be counterexamples to things. This is the simplest example. Anyway, what Nagata did was take 16 copies of this, so K^16 acts on K^32 by just copying this construction 16 times, and his example G was a generic 13-dimensional subspace of K^16. Generic means that most 13-dimensional subspaces work; there are some conditions it has to satisfy, which I'm not going to write out. So his example really isn't all that exotic. Once you start looking at unipotent groups, the fixed-point subrings are in general not finitely generated.
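The full 32-variable example is beyond a quick snippet, but the one-dimensional unipotent building block can be sketched with sympy (a hedged illustration; the substitution below is the standard action of the matrix (1 a; 0 1) on the coordinate functions x, y):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

# Action of the unipotent matrix (1 a; 0 1) on coordinates:
# x -> x + a*y, y -> y, for each a in the additive group K.
act = lambda f: f.subs(x, x + a*y)

# y is invariant for every a; in fact the invariant ring here is just K[y].
print(sp.expand(act(y) - y) == 0)   # True
print(sp.expand(act(x) - x) == 0)   # False: act(x) - x = a*y
```

Note there is no Reynolds operator here: the group is infinite and non-compact, so there is nothing to average over, which is exactly why the finiteness argument breaks down for unipotent groups.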