This lecture is part of an online course on commutative algebra and will be about the famous Hilbert Nullstellensatz. In case you're wondering what this means: Null means zero, Stellen means places or positions, and Satz means statement or theorem. There are two forms of this: a weak version and a strong version. The weak version is as follows. We can ask: what are the maximal ideals of a polynomial ring k[x, y, ...]? I'm going to just write down two variables, but in general there can be several. Well, there are some obvious ones: ideals of the form (x - a, y - b, ...). These correspond to points (a, b, ...) of k^n. Notice, by the way, that in one case the parentheses mean the ideal generated by these elements, whereas in the other the parentheses mean the coordinates of a point; as usual, notation is a bit illogical. So the maximal ideals of this ring are related to the points of k^n. If k is algebraically closed, this gives all the maximal ideals, and that is the weak Nullstellensatz. What happens if k isn't algebraically closed? Well, then there can be other maximal ideals. For example, if we take k to be the reals and look at the polynomial ring in just one variable over the reals, then (x^2 + 1) is a maximal ideal, but it doesn't correspond to a point of the line over the reals. It's sort of something to do with the point i in the complex numbers, but that lies in the algebraic closure of R. So that's the weak Nullstellensatz. Secondly, we have the strong Nullstellensatz. Suppose i is some ideal in k[x, y, ...]. Then we can put V to be the variety of zeros of i, in other words the set of points where all elements of i vanish. And now we can ask the obvious question: what is the ideal of polynomials vanishing on V?
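In symbols, the two constructions in play can be written like this (standard notation, added here for reference; the lecture itself just describes them in words):

```latex
V(i) = \{\, p \in k^n : f(p) = 0 \ \text{for all } f \in i \,\},
\qquad
I(V) = \{\, f \in k[x_1,\dots,x_n] : f(p) = 0 \ \text{for all } p \in V \,\}.
```

The question is then how I(V(i)) compares with the ideal i we started from.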
And the obvious guess is that it's just i; in fact, people seem to have implicitly assumed that the answer is just i in the very early days of algebraic geometry. Well, if we call this ideal j, then i is obviously contained in j, but it's not necessarily equal to it. For example, we could look inside the ring k[x] and take i to be the ideal generated by x^2. The variety of zeros V is then just the point 0, and so j, the set of polynomials vanishing at 0, is the ideal (x), which is obviously not equal to i. The problem here is that x^2 is in i but x is not in i. More generally, if f^n is in i, then f^n vanishes on the whole of V, so obviously f vanishes on V, so f is in the ideal j. So we see that the radical of i is also contained in j. And the strong Nullstellensatz says that if k is algebraically closed, then these are equal. (I'm abbreviating the name in part because I've run out of room and in part because I'm tired of writing out a long word.) In other words, j/i is the radical of the ring r/i, or the nilradical, I guess, because there are several other sorts of radical: in other words, just the nilpotent elements. Now, you might think nilpotent elements of a ring are easy to find. I mean, if some power of an element is zero, then it ought to be really obvious what that element is. But in fact it isn't: nilpotent elements of a ring can be really hard to find in practice. So what I'm going to do is give you some examples to show that it really isn't obvious what the nilpotent elements of a ring are. We're going to look at the space of nilpotent matrices of some size. So suppose X is a matrix, and let's call its entries x11 up to x1n, ..., xn1 up to xnn.
X nilpotent means that some power of it is equal to zero, and it's pretty easy to see that if some power is zero, then already X^n = 0. So we can let the ideal i in k[x11, ..., xnn] be generated by the entries of X^n. In other words, the zero set of i is just the set of nilpotent matrices. Now we can ask: are there any other functions which vanish on the set of nilpotent matrices? Let's call the set of nilpotent matrices V, and let's find other functions vanishing on V. One obvious one is x11 + x22 + ... + xnn. Why does that vanish on V? Well, X nilpotent implies all eigenvalues are zero, which implies the trace is zero, because the trace is just the sum of the eigenvalues. And the trace can't be in the ideal i, because the entries of X^n are all homogeneous of degree n, so something of degree one, such as the trace, can't possibly be in this ideal. Is that all? No, there are some others, because more generally, if we look at the characteristic polynomial det(lambda*I - X), then this is just equal to lambda^n for X nilpotent. So all the coefficients of lambda^i for i less than n in the characteristic polynomial also vanish on the nilpotent matrices. So there are quite a lot of polynomials that vanish on V but aren't immediately obvious. Let's do this explicitly for 2 by 2 matrices to see what's going on. Write the matrix X with entries a, b, c and d, because I will get confused by subscripts. Then X^2 has entries a^2 + bc, (a + d)b, (a + d)c and d^2 + bc. These four elements generate the ideal i.
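As a quick check of this computation, here is a short sketch using sympy (my own illustration, not part of the lecture):

```python
from sympy import symbols, Matrix, factor

a, b, c, d = symbols('a b c d')
X = Matrix([[a, b], [c, d]])

# The four entries of X^2, read off row by row, generate the
# ideal i cutting out the nilpotent 2x2 matrices.
entries = [factor(e) for e in X**2]
print(entries)
```

The factored output exhibits the generators a^2 + bc, (a + d)b, (a + d)c and d^2 + bc from the lecture.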
What we're trying to do is find the radical of the ideal i, and if you just look at it, I think you'll agree that it's not at all obvious what the radical of this ideal is. We've just said that a + d must be in the radical, by the strong Nullstellensatz. So (a + d)^k is in i for some k, and you might try to guess what k is. There's an obvious guess, which says that k equals two. I mean, what else could it possibly be? We're looking at 2 by 2 matrices; it would be ridiculous if k were bigger than two. In fact, k isn't two. You can check that (a + d)^2 is actually not in i, which is fairly easy to check because everything is homogeneous of degree two, and it just doesn't work. In fact we find that (a + d)^3 is in i: you have to go up to the third power of a + d. And it's not immediately obvious that (a + d)^3 is in i. To see it, notice that i contains a + d times each of the following elements: b and c (these come from the generators (a + d)b and (a + d)c), a - d (from the difference of the generators a^2 + bc and d^2 + bc), and also a^2 and d^2. For example, (a + d)*a^2 is in i because (a + d)*a^2 = (a + d)(a^2 + bc) - c*(a + d)b, and both terms on the right are in the ideal. Now notice that (a + d)^2 lies in the ideal generated by a^2, d^2 and a - d, because it's just equal to 2a^2 + 2d^2 - (a - d)^2. So (a + d)^3 = (a + d)(a + d)^2 is in i. So I think you'll agree it's not at all obvious or trivial that some power of a + d is in this ideal; we had to do a fair amount of work to see it. If we actually want to work out the radical properly, we can do it like this. Notice also that the determinant ad - bc is equal to (a + d)d - (bc + d^2).
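The claims that (a + d)^2 is not in i but (a + d)^3 is can also be checked mechanically with a Gröbner basis. Here is a sketch using sympy (this computation is my own addition, not from the lecture):

```python
from sympy import symbols, groebner

a, b, c, d = symbols('a b c d')
# Generators of i: the entries of X^2 for X = [[a, b], [c, d]]
gens = [a**2 + b*c, b*(a + d), c*(a + d), d**2 + b*c]
G = groebner(gens, a, b, c, d, order='grevlex')

# Ideal membership: a polynomial is in i iff it reduces to
# zero against the Groebner basis.
print(G.contains((a + d)**2))  # expected: False
print(G.contains((a + d)**3))  # expected: True
```

Gröbner bases are the standard computational tool for ideal membership questions like this, though as the lecture notes, computing radicals is much harder than testing membership.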
In this identity, the factor a + d is the trace, and bc + d^2 is one of the generators of i. So the determinant is congruent to (a + d)d modulo i, and since (a + d)^3 is in i, we can see directly that some power of the determinant, in fact the cube of the determinant, is in i. And in fact the radical of i is equal to (a + d, ad - bc). Well, we've shown both of these are in the radical. Conversely, if you look at k[a, b, c, d] and mod out by (a + d, ad - bc), then a + d = 0 says that d = -a, so we can write the quotient as k[a, b, c] modulo the irreducible polynomial a^2 + bc. This is a nice irreducible polynomial, so the quotient has no zero divisors; in particular, it has no nilpotent elements. So these two elements do generate the full radical of i. Let me give you another example, to show how difficult it is to find the radical of something. Suppose I've got two matrices: X with entries x11 up to x1n, ..., xn1 up to xnn, and Y with entries y11 and so on. Let i be the ideal of k[x11, ..., ynn] generated by the entries of XY - YX. In other words, it's the ideal whose vanishing set V is the set of pairs of commuting matrices. You might think that can't be too difficult to figure out, because a pair of matrices commuting is about one of the simplest conditions you can possibly put on a pair of matrices. So we want to know: what is the radical of i? In particular, we can ask whether the radical of i is equal to i. And the answer is: I don't know, and as far as I know, nobody else knows either. This seems to be a very difficult problem. In fact, the space of pairs of commuting matrices turns out to be an extraordinarily difficult space to deal with for some weird reason. The problem can easily be reduced to the case of two commuting nilpotent matrices, but the variety of pairs of commuting nilpotent matrices also seems to be incredibly difficult to analyze. So the point of this is that it's generally a really hard problem to find the radical of an ideal. Well, let's give a proof of the weak Nullstellensatz over the complex numbers.
In this proof, I'm going to cheat by working over the complex numbers, which allows me to give an incredibly short proof. So let M be a maximal ideal of C[x1, ..., xn]. Then C[x1, ..., xn]/M is a finitely generated extension of C: what this means is that it's finitely generated as a C-algebra, and it's a field because M is maximal. Now notice that this quotient has countable dimension as a C-vector space, rather obviously, because there are only countably many monomials in x1, ..., xn, and they span it. That implies the field is algebraic over C. The reason is that if it contained an element A transcendental over C, then the elements 1/(A - alpha) for alpha in C would be an uncountable family of linearly independent elements, because the complex numbers are uncountable; and these elements are linearly independent over C, since any finite linear relation among them, after clearing denominators, would give a nonzero polynomial vanishing at A, contradicting transcendence. So this field is an algebraic extension of C, so it is C, as C is algebraically closed. Now we're finished: we've shown the quotient is equal to C, and this just means that each xi is congruent modulo M to some element ai of C. So xi - ai is in M, and M is generated by x1 - a1, x2 - a2 and so on. That argument is kind of cheating, because it uses the fact that the complex numbers are uncountable, and that's not really a very algebraic argument; it's more of a set-theoretic argument about the difference between countable and uncountable sets. We'll have to fix that next lecture, but here is a very short proof of the weak Nullstellensatz over the complex numbers. It would also work over any uncountable algebraically closed field. Well, having done the short proof of the weak Nullstellensatz, we can also give a short proof that the weak Nullstellensatz implies the strong Nullstellensatz, and this will also only take a few lines. So what do we do? Let I be an ideal in k[x1, ..., xn].
Here I don't need to assume k is the complex numbers: the weak Nullstellensatz implies the strong Nullstellensatz for any field k, as we're about to prove. Let V be the zero set of I, and let f vanish on V; we want to show that f^m is in I for some number m. What we do is localize at f. Localizing at f means we sort of want to introduce an element x0 such that f*x0 = 1. Well, what we're going to do is not quite localize at f: we're actually going to work in the polynomial ring with an extra variable x0. So we look at the ideal generated by I and 1 - x0*f in k[x0, x1, ..., xn]. We're sort of thinking of this as the localization at f, except that we haven't quite quotiented out by this relation yet, so we haven't quite localized. This is Rabinowitsch's trick: his idea was to introduce this extra variable x0. Now the elements of this bigger ideal have no common zeros, because any common zero of all the elements of I must be a zero of f, so f*x0 vanishes there, and hence 1 - x0*f doesn't vanish. So by the weak Nullstellensatz the ideal is not contained in any maximal ideal, because the maximal ideals just correspond to the points of k^(n+1), and we've shown the ideal doesn't vanish at any of these. So you see, when we say the weak Nullstellensatz implies the strong Nullstellensatz, what we really mean is that the weak Nullstellensatz in n + 1 variables implies the strong Nullstellensatz in n variables.
Anyway, the ideal is not contained in any maximal ideal, so it must be the whole ring k[x0, ..., xn]. So we can write 1 = a1*b1 + ... + an*bn + a*(1 - x0*f), where all the bi are in I, and the ai and the element a are in k[x0, ..., xn]. Now we just put x0 = 1/f in the field of rational functions, and we find that 1 = a1*b1 + ... + an*bn, where now each ai is in k[x1, ..., xn, 1/f]. So ai = ci/f^m for some fixed m and polynomials ci. Now we just clear denominators, and we find that f^m = c1*b1 + ... + cn*bn, and this is in I. So we've shown that f^m is in the ideal I, and that's the strong Nullstellensatz, which we were trying to prove. Next lecture we'll give a slightly more honest proof of the weak Nullstellensatz that works over general fields, but this was a very short proof of the Nullstellensatz for the complex numbers.
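To make Rabinowitsch's trick concrete, here is a toy instance (my own illustration, not from the lecture) with I = (x^2) and f = x, which vanishes on V(I) = {0}. In k[x0, x], the element 1 lies in the ideal (x^2, 1 - x0*x), and substituting x0 = 1/f and clearing denominators recovers f^2 = x^2 in I:

```python
from sympy import symbols, expand

x, x0 = symbols('x x0')

# An explicit expression of 1 in the ideal (x^2, 1 - x0*x):
# the first term is a multiple of x^2, the second a multiple
# of 1 - x0*x.
identity = x0**2 * x**2 + (1 + x0*x) * (1 - x0*x)
print(expand(identity))  # 1

# Setting x0 = 1/x kills the second term, leaving 1 = x^2 / x^2;
# clearing denominators gives f^2 = x^2, which is in I = (x^2).
```

So here m = 2: the second power of f already lands in I, exactly as the general argument predicts.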