Okay, so let's start. The topic of this course is algebraic geometry. Here one studies solutions of algebraic equations, that is, of equations given by polynomials. So in algebraic geometry I study zero sets of sets of polynomials. For instance, one thing everybody knows: the circle of radius one is the set of all points (x, y) in R^2 such that x^2 + y^2 = 1. So it is the zero set of the polynomial x^2 + y^2 - 1 over the real numbers, and you know that this looks like a circle. Once you study such zero sets, you can consider them over different fields. Arithmetic algebraic geometry, which is part of number theory, studies such solution sets over the rational numbers, so over Q. One equation that essentially everybody will know: you can look at x^n + y^n = z^n, and study it, for instance, over Q. You know that for n = 2 there are many solutions, because you just take a triangle with a right angle: if the legs have lengths x and y, then the hypotenuse has length z by the theorem of Pythagoras. But if n > 2, then by Fermat's Last Theorem there are no non-trivial solutions: x^n + y^n = z^n implies x = 0 or y = 0 if x, y, z are supposed to be rational numbers. That is one of the most difficult theorems in number theory, only proven recently, after more than 300 years. So one sees that things are more difficult if the field is small, like the rational numbers, because you have to solve algebraic equations over a small field. But you know from algebra that you can always find solutions if the field is algebraically closed.
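As a small numerical aside (my addition, not part of the lecture): one can search for integer solutions of x^n + y^n = z^n in a bounded range and watch the contrast between n = 2 and n = 3. Of course this proves nothing, it only illustrates the statement.

```python
# Brute-force search for positive integer solutions of x^n + y^n = z^n.
# For n = 2 we find the familiar Pythagorean triples; for n = 3 the
# search comes back empty, as Fermat's Last Theorem guarantees.

def solutions(n, bound):
    """All (x, y, z) with 1 <= x <= y <= z <= bound and x^n + y^n = z^n."""
    return [(x, y, z)
            for x in range(1, bound + 1)
            for y in range(x, bound + 1)
            for z in range(y, bound + 1)
            if x**n + y**n == z**n]

print(solutions(2, 15))   # [(3, 4, 5), (5, 12, 13), (6, 8, 10), (9, 12, 15)]
print(solutions(3, 60))   # []
```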
And so, in order to really do geometry and not be concerned with these extra difficulties, we will work only over algebraically closed fields. With this one understands the geometry, and once one understands that, one can maybe go on to these more difficult questions. So we work over k = k-bar, an algebraically closed field. If you wish, you can assume that k is just the complex numbers, but that is not relevant. Just to make sure we talk about the same things, I want to briefly recall some concepts from algebra. I expect you know them, but let me spend five minutes saying, for instance, what a polynomial in several variables is, so that we use the same words. Preliminaries. For us, a ring is a commutative ring with 1; if we want to consider other rings, we will say so. And if phi: A -> B is a ring homomorphism, then this includes the statement that phi(1) = 1. Now briefly about polynomials. You have seen polynomials before; I just repeat some words. Let R be a ring; then R[x] is the ring of polynomials in x with coefficients in R. These are just expressions sum_{i=0}^{d} a_i x^i with a_i ∈ R, and two such expressions are considered equal if and only if their coefficients a_i are equal. Sum and product are defined in the obvious way: you add coefficientwise and multiply out using the distributive law and x^i · x^j = x^{i+j}, as you know. The degree of such a polynomial is the largest d such that a_d ≠ 0, and a_d is then the leading coefficient of f. We will mostly be concerned with polynomials in several variables, as above. So I define R[x_1, ..., x_n] inductively: we assume we have already defined the polynomial ring in n - 1 variables, which is again a ring.
And then we take the polynomial ring in one more variable x_n over it. From this definition you can see that the elements of this ring are expressions f = sum a_{i_1 ... i_n} x_1^{i_1} ··· x_n^{i_n}, where i_1, ..., i_n are non-negative integers and only finitely many of the coefficients a_{i_1 ... i_n} are non-zero. This is again a ring in the obvious way, by adding coefficients and multiplying out with the distributive law. Okay, I expect you know all this, but now we have it on the blackboard, and we can actually start with the subject of the course. We want to do algebraic geometry, so we talk about algebraic varieties — or, first, about affine algebraic sets. These are the things I just talked about: zero sets of sets of polynomials, inside k^n. So let's start slowly. First a definition: n-dimensional affine space is A^n, which is, crazily enough, just another word for k^n. For some reason, in algebraic geometry the n-fold product of our ground field is called A^n. That's the way life is; there are some reasons for it, but we will not find out about them in this course, and partially they are only historical. Anyway, that is how this thing is called in algebraic geometry. Now, if f ∈ k[x_1, ..., x_n] is a polynomial in n variables — the same n as here — it defines a function on A^n, which I also call f: A^n -> k, in the obvious way: a point p with coordinates (a_1, ..., a_n) is sent to the polynomial evaluated at these coordinates. So polynomials define functions, and therefore we can talk about zeros. Now we want to define these affine algebraic sets, which, as I said, are zero sets of sets of polynomials. So let S be a set of polynomials in n variables.
The zero set of S is denoted Z(S). By definition it is the set of all points of A^n where all polynomials in S vanish: Z(S) = {p ∈ A^n : f(p) = 0 for all f ∈ S}. This is a subset of A^n, and subsets of this form are called affine algebraic sets. As notation, for the zero set of a finite set {f_1, ..., f_k} of polynomials we also just write Z(f_1, ..., f_k). One can easily write down some examples. For instance, A^n is such an affine algebraic set, because it is the zero set of the zero polynomial — or, if you prefer, the zero set of the empty set of polynomials. So we have this trivial case, and we have another trivial case: the empty set is the zero set of the constant polynomial 1 (or, if you want, of the set of all polynomials — it is always the common zero set), since already 1 has no zeros at all. So here we see the most trivial examples: the whole of A^n and the empty set are affine algebraic sets. A point is also an affine algebraic set: if p has coordinates (a_1, ..., a_n), then {p} is the zero set not of one polynomial but of several, namely {p} = Z(x_1 - a_1, ..., x_n - a_n), because this is the set of all points of A^n whose first coordinate is a_1, and so on, up to the n-th coordinate being a_n, so it is just the point (a_1, ..., a_n). Next, if f is a non-constant polynomial in two variables — usually when I have few variables I call them x, y, z, and only with many variables x_1, ..., x_n — then the zero set Z(f) is called an affine plane curve. And we can look at some examples. For instance, take Z(y - x^2).
This is a conic, in fact a parabola. When I make pictures I only draw the real points, over the real numbers, because I cannot make a picture of the complex points — that would be something two-dimensional inside a four-dimensional real space. So the real points look just like the parabola you have seen in school, with the axes x and y. Then you can take something more complicated, for example Z(y^2 - x^3). This is called a cuspidal cubic. If you draw its real points, you find a curve which is symmetric about the x-axis and comes to a sharp point at the origin; that special point is the cusp. Another example is Z(y^2 - x^3 - x^2); this is a nodal cubic. If you draw its real points, you again find a curve symmetric about the x-axis, but now it crosses itself at the origin; that point is called the node. The cusp and the node are what are called singularities of the curve. We will talk about such things at the end of the course. Here we talked about plane curves, which are the zero sets of one polynomial in A^2. In the same way, if f is a non-constant polynomial in n variables, then the zero set Z(f) is called a hypersurface. So we want to study these things, and there are a number of things one can talk about. Later we will find out that there is a concept of dimension, in such a way that affine n-space has dimension n and a hypersurface has dimension n - 1.
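Since we cannot enumerate points over an algebraically closed field, here is a small computational aside of mine (not part of the lecture): the definition of Z(S) becomes directly computable if we replace k by the finite field F_5, used purely as a toy model, and the trivial examples above can be checked mechanically.

```python
# Toy model: compute zero sets Z(S) in (F_5)^2 instead of A^2 over an
# algebraically closed field.  Polynomials are given as Python functions.
from itertools import product

p = 5  # we work over the finite field F_5

def Z(polys, n=2):
    """Common zero set of a collection of polynomials on (F_p)^n."""
    return {pt for pt in product(range(p), repeat=n)
            if all(f(*pt) % p == 0 for f in polys)}

# The trivial examples from the lecture:
assert Z([lambda x, y: 0]) == Z([])      # Z(0) = Z(empty set) = everything
assert len(Z([])) == p**2
assert Z([lambda x, y: 1]) == set()      # Z(1) = the empty set
# A single point: Z(x - a1, y - a2) = {(a1, a2)}
assert Z([lambda x, y: x - 2, lambda x, y: y - 3]) == {(2, 3)}
print("all checks passed")
```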
But it is actually surprisingly complicated to set up this dimension theory in a purely algebraic way. One can also talk about tangent spaces, smoothness and singularities. But first we talk about more basic things. The first thing we want to do, as we are doing algebraic geometry, is to get a bit closer to the relation with algebra. Until now we have that an affine algebraic set is the zero set of a set of polynomials; now we want to say that it is in fact the zero set of an ideal in the polynomial ring, which allows us to use a bit more algebra to work with things. So the first tiny thing we want to show is that every affine algebraic set is Z(I) for an ideal I. This is actually very simple; let's just do it. We start with some obvious observations. Let S and T be sets of polynomials. (1) If S ⊆ T, then Z(T) ⊆ Z(S). And I should say this is completely obvious: Z(S) consists of all points where all polynomials in S vanish, and Z(T) of all points where all polynomials in T vanish. But there are more polynomials in T than in S, so if all polynomials of T vanish at a point, then in particular all of those in S vanish, and we have this inclusion. It's a complete triviality. (2) The second observation is about as obvious: if S is some set of polynomials, then Z(S) = Z((S)), where (S) is the ideal generated by S in k[x_1, ..., x_n]. If you remember, (S) is the set of all sums sum_{i=1}^{n} f_i s_i, where n is some positive integer, the f_i are some polynomials, and the s_i are elements of S. So the ideal generated by S consists of all linear combinations of elements of S with coefficients in the ring.
And from this second statement we get what I just said, that every affine algebraic set is the zero set of an ideal: by definition it is the zero set of some set of polynomials, and then it is also the zero set of the ideal generated by that set of polynomials. So let's prove this little observation. We have to prove both inclusions. It is clear that Z((S)) ⊆ Z(S), because S ⊆ (S), and we use the first observation. For the other inclusion, conversely, we have to remember that we are talking about zeros. Let p ∈ Z(S), and take any element g ∈ (S). We can write g = sum_{i=1}^{n} h_i s_i, where the h_i are some polynomials and the s_i are elements of S. Well, what is g(p)? You evaluate it by evaluating the pieces: g(p) = sum_{i=1}^{n} h_i(p) s_i(p). The s_i all vanish at p, so this is zero. Okay, so as you see, this is also trivial. But we keep in mind that every affine algebraic set is the zero set of an ideal; we are going to use this quite a lot in the future. Now we want to look at something different, which is the Zariski topology. This is a slightly strange thing. We have these affine algebraic sets, the zero sets of sets of polynomials, and now we declare them to be the closed sets of a topology on A^n. So we define a topology on A^n whose closed sets are the affine algebraic sets. And then, if we have any affine algebraic set, we can give it the induced topology: the closed sets of an affine algebraic set are its intersections with other affine algebraic sets. So all our affine algebraic sets become topological spaces.
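The computation g(p) = sum h_i(p) s_i(p) = 0 from the proof above can be watched in a concrete case; the particular polynomials below are an arbitrary illustration of mine, not from the lecture.

```python
# If every s_i vanishes at p, then so does any combination
# g = h_1*s_1 + h_2*s_2, no matter which polynomials h_i we pick.

s1 = lambda x, y: x - 1
s2 = lambda x, y: y + 2
h1 = lambda x, y: x**3 + 7 * y        # arbitrary coefficient polynomials
h2 = lambda x, y: x * y - 5

def g(x, y):
    return h1(x, y) * s1(x, y) + h2(x, y) * s2(x, y)

p = (1, -2)                            # the common zero of s1 and s2
assert s1(*p) == 0 and s2(*p) == 0
assert g(*p) == 0                      # hence p is a zero of g as well
print("g vanishes at the common zero of s1, s2")
```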
This topology is introduced mostly for convenience of language. It is convenient to talk about continuous functions, about closed sets, about the closure and so on; we will see that. But there is one thing you have to keep in mind: this topology is very strange. Over the complex numbers, for instance, it is extremely different from the standard Euclidean topology. The closed sets are very small, the open sets are very large, and the space is not Hausdorff or anything like that. All the same, it is a convenient language — but you should never think that we are going to do interesting topology. It is mostly just language: we find it convenient to say "this is a continuous map" because it is practical, but we are not going to investigate separation axioms or anything like that. The topology itself is not interesting or useful most of the time, and you do not have to try to imagine how things look in it; it will just be a convenient language that we use throughout the course, so you have to get familiar with it. Okay, and so, as I said, the affine algebraic sets are the closed subsets of a topology on A^n. Let's see why this works; it is actually very easy. Proposition. (1) If (S_alpha), alpha in some index set, is a family of subsets of the polynomial ring, then the intersection over all alpha of Z(S_alpha) equals Z of the union over all alpha of S_alpha. If you think about it, this is again completely trivial, because what does it say? Z(S_alpha) is the common zero set of all polynomials f ∈ S_alpha, so the left-hand side is the common zero set of all the f lying in any of the S_alpha, which is the same as the right-hand side. So basically by definition these are equal.
(2) The second statement: if S and T are sets of polynomials, then Z(S) ∪ Z(T) = Z(S·T), where S·T denotes the set of all products of an element of S with an element of T, so S·T = {f·g : f ∈ S, g ∈ T}. This one I maybe have to give an argument for. But what you can see is that the first statement says that an arbitrary intersection of affine algebraic sets is an affine algebraic set, and the second says that a finite union of affine algebraic sets is an affine algebraic set — it says it for two, but then you can do induction. Thus arbitrary intersections and finite unions of affine algebraic sets are affine algebraic sets. Proof. Again quite simple; the first part is already proved. For the second, let p ∈ Z(S) ∪ Z(T); we show p ∈ Z(S·T). So let f ∈ S and g ∈ T, so that f·g is an arbitrary element of S·T. What is (f·g)(p)? It is f(p)·g(p), since evaluation is compatible with products. And as p ∈ Z(S) ∪ Z(T), either f(p) = 0 for all f ∈ S, or g(p) = 0 for all g ∈ T. Either way, one of the two factors is zero, so (f·g)(p) = 0, and it follows that p ∈ Z(S·T). Now, conversely, assume p ∈ Z(S·T); we have to show that p lies in the union of the two zero sets. Assume p does not lie in Z(S); then we have to show that p lies in Z(T).
So let f be a polynomial in S such that f(p) ≠ 0, and let g be any element of T. Then f·g ∈ S·T, and thus (f·g)(p) = 0. But this is just f(p)·g(p), and f(p) ≠ 0, so it follows that g(p) = 0, because in a field we can divide by the non-zero factor f(p). As g was arbitrary, we obtain p ∈ Z(T). Okay, so this proves, as I claimed, that the affine algebraic sets are the closed subsets of a topology on A^n. Just to be safe, I will review what this means — you have covered it in the topology course, but very briefly. Reminder. Let X be a set. A topology on X is a collection of subsets of X, called open subsets, such that three axioms are fulfilled. First, the empty set and X are both open. Second, finite intersections of open sets are open: if U_1 and U_2 are open, then U_1 ∩ U_2 is open. And third, arbitrary unions of open sets are open: if I have any collection of open sets and take the union of all of them, this is again an open set. If X carries a topology, it is called a topological space. Slightly less common: given a topology, you can talk about other things, for instance closed subsets. A subset A ⊆ X is called closed if its complement is open, that is, if and only if X ∖ A is open in X. That's the definition. It is then obvious that the axioms for the open subsets translate into axioms for the closed subsets.
So, equivalent axioms for closed subsets: the first one translates into "X and the empty set are closed"; the second is that finite unions of closed subsets are closed; and in the last one, union becomes intersection, because we take complements, so third: arbitrary intersections of closed subsets are closed. I am pretty sure you also had this in the topology course; it is one of the first things to notice. Then I want to introduce the standard words. If X is a topological space, the closure U-bar of a subset U ⊆ X is the intersection of all closed subsets containing U. We say that U is dense in X if its closure is equal to X. And we want to introduce the induced topology: let X be a topological space and Y ⊆ X a subset. Then the induced or subspace topology on Y is given by declaring U ⊆ Y open if and only if U = W ∩ Y for some open W ⊆ X. And just as a side remark, you can say the same thing with closed subsets: equivalently, A ⊆ Y is closed if and only if A = B ∩ Y for some closed B ⊆ X; again, by taking complements, this is an equivalent statement. And then finally, the most important concept is that of a continuous map: a map f: X -> Y of topological spaces is continuous if the inverse image f^{-1}(U) is open in X for every open U ⊆ Y. And again, equivalently, you can do it with closed subsets: f^{-1}(A) is closed for every closed A ⊆ Y. Okay, this is all standard and you should know it; I just want to say that we will use all the standard words one learns in the first couple of lectures of topology. So now we can go back to our case.
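A quick mechanical check of the proposition Z(S) ∪ Z(T) = Z(S·T), again over the finite field F_5 used purely as a toy model (my illustration, not from the lecture):

```python
# Check Z(S) ∪ Z(T) = Z(S·T) over (F_5)^2, where S·T is the set of
# all products f*g with f in S and g in T.
from itertools import product

p = 5

def Z(polys):
    return {pt for pt in product(range(p), repeat=2)
            if all(f(*pt) % p == 0 for f in polys)}

S = [lambda x, y: x]                                  # Z(S): the line x = 0
T = [lambda x, y: y - 1, lambda x, y: y * (y - 1)]    # Z(T): the line y = 1
# default arguments freeze f and g in each lambda of the product set
ST = [lambda x, y, f=f, g=g: f(x, y) * g(x, y) for f in S for g in T]

assert Z(S) | Z(T) == Z(ST)
print("union of zero sets equals zero set of products")
```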
Definition: the Zariski topology on A^n is the topology whose closed subsets are the affine algebraic sets. Note that we have seen that this is indeed a topology: the lemma we proved says precisely that finite unions and arbitrary intersections of affine algebraic sets are affine algebraic sets, and at the very beginning I gave the examples showing that the empty set and the whole of A^n are affine algebraic sets. So all three axioms are fulfilled, and this is a topology. Okay, and if we have a subset of A^n — in particular an affine algebraic set — we give it the induced topology. So if X ⊆ A^n is a subset, the closed subsets of X are the sets X ∩ A with A an affine algebraic set, and this topology is called the Zariski topology on X. The name is after Oscar Zariski, a mathematician who first worked for a long time in Italy and then in the US, and who introduced this topology as a way to make it easier to deal with things. As a remark — you should find it obvious — the most important case is that X is itself an affine algebraic set, and then the Zariski topology on X is such that the closed subsets of X are exactly the affine algebraic sets contained in X, because an intersection of affine algebraic sets is an affine algebraic set. So, just to see what this topology is like, let's look at some examples. First, all finite subsets of A^n are closed. This is because we have seen that a point of A^n is an affine algebraic set, and finite unions of affine algebraic sets are affine algebraic sets; so all finite subsets of A^n are affine algebraic sets. For the second statement, let's look at the simplest case, A^1. What are the closed subsets?
They are the empty set, A^1 itself — these have to be closed — and all finite subsets. So what we see here is that there are very few closed subsets; the topology is very strange. The closed subsets are very small: except for the whole of A^1, all of them are finite. In particular you can see, for instance, that the topology is not Hausdorff. But, as I said, we are not really doing topology; it is language. So let's just prove this second claim — the first part is obvious, but the second needs a small argument. Let I ⊆ k[x] be an ideal — or just any set of polynomials, but we know that any affine algebraic set is the zero set of an ideal, so I may say ideal. Either I is the zero ideal, and then Z(I) = A^1. If not, then there exists f ∈ I with f ≠ 0. And you know from algebra that a non-zero polynomial in one variable has only finitely many zeros — at most as many as its degree. So Z(f) is finite, and since f ∈ I we have Z(I) ⊆ Z(f), so Z(I) is also finite, being a subset of a finite set — it might be a finite set or the empty set. And that is what was claimed. Okay. Today I am not doing anything particularly difficult, but I am introducing very many concepts, so you have to study this a little, so that you remember everything and get used to how these things work; up to now there has been no difficult argument. So we introduce one more concept, which is also quite simple, but it is one more, so it piles up: the ideal of an algebraic set. Later we will really work with it, but now we just introduce it. We know that if I is an ideal, then its zero set is an affine algebraic set — actually this is true for any set of polynomials.
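The key fact in the argument just given for A^1 — a non-zero polynomial in one variable has at most deg(f) zeros — can be illustrated quickly; the specific polynomial below is my own example.

```python
# A polynomial of degree 3 has at most 3 zeros; this one has exactly 3,
# so its zero set in A^1 is the finite set {1, 2, 5}.

def integer_roots(f, lo, hi):
    """All integer zeros of f in the interval [lo, hi]."""
    return [a for a in range(lo, hi + 1) if f(a) == 0]

f = lambda t: (t - 1) * (t - 2) * (t - 5)    # degree 3
roots = integer_roots(f, -1000, 1000)
assert roots == [1, 2, 5]
assert len(roots) <= 3                        # at most deg(f) zeros
print(roots)
```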
So we have, in some sense, a map which associates to every ideal its zero set. Now we want to go the other way around: if we have an affine algebraic set, we want to associate to it an ideal. And where the zero set is just the set of all common zeros, the ideal of an affine algebraic set will be the set of all polynomials which vanish on the whole of this set. So let me just write it down. Definition. While we are at it, we can actually associate an ideal to every subset, although it is mostly interesting when the subset is an algebraic set. Let X ⊆ A^n be a subset. The ideal of X is I(X), the set of all polynomials which vanish on the whole of X; I can just say it like this: f restricted to X is the zero map. So it is somehow dual to the other construction: the zero set of S is the set of all points where all f ∈ S vanish, and here instead the ideal is the set of all polynomials which vanish at all points of X. We are not going to work with this at the moment, but just as a remark, or maybe an exercise: for any subset X, the zero set of the ideal of X satisfies Z(I(X)) = X-bar, the closure of X. In particular, if X is an algebraic set, then Z(I(X)) = X. This follows more or less directly from the definitions; one inclusion is obvious, for the other one you maybe have to think a moment. You can try to do this to check whether you understand what these things are. Okay, so now, having introduced all these things, we actually come to our first non-trivial result, our first theorem: the so-called Hilbert Basis Theorem, named after David Hilbert, the famous German mathematician.
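For the exercise Z(I(X)) = X-bar just stated, here is a sketch of the intended argument (my write-up of the hint, in the notation of the lecture):

```latex
% Sketch of Z(I(X)) = \overline{X} for a subset X \subseteq \mathbb{A}^n.
% Easy inclusion: every f \in I(X) vanishes on X, so
X \subseteq Z(I(X)),
% and Z(I(X)) is closed, hence it contains the closure:
\overline{X} \subseteq Z(I(X)).
% Other inclusion: let A = Z(S) be any closed set with X \subseteq A.
% Every f \in S vanishes on A, hence on X, so S \subseteq I(X), and thus
Z(I(X)) \subseteq Z(S) = A.
% Intersecting over all closed A \supseteq X gives
Z(I(X)) \subseteq \overline{X}.
```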
So: if we have an affine algebraic set, we know it is the zero set of some set of polynomials — that is the definition. The Hilbert Basis Theorem implies that we can always take this set to be finite. Before, S was an arbitrary set of polynomials; but in fact we can always take S finite. This by itself is nice to know — we do not need infinite sets of polynomials — but it actually turns out to have geometric consequences. It follows, for instance, that every affine algebraic set can be decomposed into finitely many pieces which cannot be decomposed further. So an algebraic set will always consist of finitely many such pieces, not infinitely many. We will see precisely what this means, though not this time. But this is a consequence of the statement. In order to talk about this, I first have to introduce some commutative algebra, which I think you did not have in the algebra course, but I am not sure. Lemma/Definition. Let R be a ring. Then the following are equivalent. (1) Every ideal I ⊆ R is finitely generated: I can be written as the ideal generated by finitely many elements, I = (f_1, ..., f_k), for some finite k and some elements f_i ∈ R. (2) Whenever I have an infinite chain of ideals I_1 ⊆ I_2 ⊆ I_3 ⊆ ..., each one contained in the next, then at some point it stops: from some point onwards all the ideals are equal. This is the so-called ascending chain condition.
That is, the chain becomes stationary, by which I mean that there exists an N such that I_N = I_{N+1} = I_{N+2} and so on. So if we have ideals which get larger and larger, at some point it must stop; they cannot grow strictly infinitely many times. These two statements are equivalent — that is the lemma part — and the definition part is: a ring fulfilling these properties is called Noetherian. This is after Emmy Noether, the well-known German algebraist who introduced this concept. Now, this is not so difficult, but we have to do it; we prove both implications. (1) implies (2): We take a chain of ideals and want to show that it becomes stationary. What do we do? We have this chain; we take I to be the union of all the I_i. It is straightforward to see that this is an ideal — just think of the definition: you have to show that the sum of any two elements of I lies in I, and that multiplying an element of I by any element of R stays in I. The point is that any element of the union already lies in one of the I_i, and since the ideals are all contained in each other, any two elements lie in a common I_i; from that it follows directly that the union is an ideal. So by (1), I is finitely generated, and we can write I = (f_1, ..., f_k) for some k. These generators are elements of the union, so each f_i must lie in some I_{l_i}. Now let N be the maximum of the l_i. Then all the f_i are elements of I_N, and it follows that I = (f_1, ..., f_k) ⊆ I_N; but we also know I_N ⊆ I, so I = I_N.
And so it follows that I_N = I_{N+1} = ..., because I is the union of all of them, and every I_n with n ≥ N is squeezed between I_N and I. Okay, so this is one direction: because I is finitely generated, the chain can only grow for finitely many steps. The other implication, (2) implies (1), is, I think, even slightly simpler. We take an ideal I and have to show that it is finitely generated; for this we have to produce some chain of ideals. How do we do this? We assume I is not finitely generated and make an infinite chain of ideals out of it. So assume I is not finitely generated. Then we can take f_1 ∈ I; we can take f_2 ∈ I ∖ (f_1) — because I is not finitely generated, it is certainly not equal to (f_1). And inductively, we take f_{n+1} ∈ I ∖ (f_1, ..., f_n). Then clearly we have a chain of ideals (f_1) strictly contained in (f_1, f_2), and so on — strictly, because the element f_2 lies in the second and not in the first, and so on. So we get an infinite chain of ideals, each strictly contained in the next, contradicting (2). Okay, so much for the lemma. This concept of Noetherian is quite important in commutative algebra; it is the most important finiteness condition one wants to put on rings. And the point now is that in algebraic geometry we do have this finiteness condition, because the polynomial ring k[x_1, ..., x_n] is Noetherian. That is the Hilbert Basis Theorem. We get this as a corollary of a slightly stronger statement, which one could also call the Hilbert Basis Theorem. Namely: let R be a Noetherian ring.
Then it follows that the polynomial ring R[x] is also Noetherian. And why does the first statement follow? Well, because k is Noetherian. Note that k is a Noetherian ring for trivial reasons: k is a field, and a field has very few ideals, namely just the zero ideal and the whole of k. You know that this is true for a field, that the only ideals are the zero ideal and the whole field k. And these ideals are certainly finitely generated: the zero ideal is generated by 0 and the whole field by 1. Okay. So we have to prove this second theorem, and in this formulation you can see that one can prove it by induction. Proof. I do not think I will be able to finish it today, but at least I can start a bit. Note that by our definition, the polynomial ring in n variables was defined to be the polynomial ring in n − 1 variables with one more variable adjoined: R[x_1, ..., x_n] = R[x_1, ..., x_{n-1}][x_n]. So the ring in n − 1 variables is now the ring of coefficients. Therefore, if we know that R is Noetherian, and we know that adjoining one variable x preserves being Noetherian, then it is true for R[x_1, ..., x_n]. So it is enough to prove: if R is Noetherian, then it follows that the polynomial ring R[x] is Noetherian. The general case is direct induction: k[x_1] is Noetherian; then applying the statement to k[x_1], also k[x_1][x_2] = k[x_1, x_2] is Noetherian, and so on. Okay. And we will do an indirect proof: we assume R[x] is not Noetherian and want to show that R is not Noetherian. This requires a trick. Somehow we have a non-finitely generated ideal in R[x], and we have to find something which is also not finite in a suitable sense in R; that is, we have to get from R[x] down to R. And we do this by looking at the leading coefficients of polynomials.
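The induction structure of the argument, summarized in one line:

```latex
k \text{ Noetherian}, \qquad
\bigl(R \text{ Noetherian} \Rightarrow R[x] \text{ Noetherian}\bigr), \qquad
k[x_1,\dots,x_n] = k[x_1,\dots,x_{n-1}][x_n]
\;\Longrightarrow\;
k[x_1,\dots,x_n] \text{ Noetherian}.
```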
Somehow we associate to an ideal in R[x] ideals generated by the leading coefficients of its polynomials, and we will see that this leads us to a contradiction. So let I ⊆ R[x] be an ideal which is not finitely generated. Then we do something like before: this gives us a way to make a chain of ideals, and from it we will make a chain of leading-coefficient ideals which does not terminate. Let us see how this works. Let f_1 be a non-zero element of I such that the degree of f_1 is minimal. Every non-zero polynomial in I has some degree, which is a non-negative integer, and a non-empty set of non-negative integers always has a smallest element; so among all the non-zero polynomials in I we can take one of the smallest possible degree. Then we take f_2 ∈ I \ (f_1), again of minimal degree. And inductively we take f_{n+1} to be a polynomial in I \ (f_1, ..., f_n) of minimal degree. So we do not just choose any element, as in the previous example, but we always take one of minimal degree, and this we can always do because the degree is just some non-negative integer. Okay. And we keep track of this: we define n_i to be the degree of f_i, and a_i is the leading coefficient of f_i. So the highest power of x that occurs in f_i is x^{n_i}, and its coefficient is a_i; that is some non-zero element. So then you notice the following. First, n_1 ≤ n_2 ≤ n_3 ≤ ..., because f_1 was taken to be an element of the lowest possible degree in all of I.
Then at the next step we take an element of the lowest possible degree in a smaller set, because we have thrown away the ideal generated by the previous polynomials. So each time, the set of polynomials from which we choose one of lowest degree shrinks; therefore the lowest possible degree can get larger, but it can never get smaller. So we have n_1 ≤ n_2 ≤ ..., and we can also look at the ideals of leading coefficients: (a_1) is certainly contained in (a_1, a_2), and so on. So we have a chain of ideals in R, and we want to show that R is not Noetherian by showing that this chain of ideals does not become stationary, so that these inclusions are actually all strict. Okay, let us do it; we have already set everything up. Once we have this, we have shown that R is not Noetherian, and so we have shown our theorem. So assume otherwise. Then at some point two successive ones must be equal: for some k we have (a_1, ..., a_k) = (a_1, ..., a_{k+1}). So in particular, a_{k+1} is actually an element of the ideal generated by a_1, ..., a_k. I want to see that this cannot be possible. So we can write a_{k+1} as a linear combination of the a_i: a_{k+1} = Σ_{i=1}^{k} b_i a_i, where the b_i are some elements of R. Okay. But now we have to go back to R[x] and see that this is not possible. We write down the polynomial G = f_{k+1} − Σ_{i=1}^{k} b_i x^{n_{k+1} − n_i} f_i. Remember, the f_i are the elements of which the a_i were the leading coefficients, and n_{k+1} ≥ n_i, so these exponents make sense. So we define this, and there are two things to check. First, G is an element of I \ (f_1, ..., f_k), because otherwise we could bring the sum to the other side:
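To keep the notation straight, here is the whole setup of this step in one display, as a summary sketch of the argument above. Note the exponent n_{k+1} − n_i, chosen precisely so that every summand has degree n_{k+1}:

```latex
f_{n+1} \in I \setminus (f_1,\dots,f_n) \text{ of minimal degree}, \qquad
n_i := \deg f_i, \qquad a_i := \text{leading coefficient of } f_i,
```

```latex
(a_1,\dots,a_k) = (a_1,\dots,a_{k+1})
\;\Longrightarrow\;
a_{k+1} = \sum_{i=1}^{k} b_i a_i \;\;(b_i \in R), \qquad
G := f_{k+1} - \sum_{i=1}^{k} b_i\, x^{\,n_{k+1}-n_i} f_i .
```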
We have f_{k+1} = G + Σ_i b_i x^{n_{k+1} − n_i} f_i, and if G lay in the ideal (f_1, ..., f_k), then, since the other summands certainly do, f_{k+1} would lie in (f_1, ..., f_k), which it does not. Okay, but on the other hand, let us look at the degrees of the summands. All summands on the right-hand side in the definition of G have degree n_{k+1}: f_i has degree n_i, and multiplying by x^{n_{k+1} − n_i} brings the degree up to n_{k+1}, and f_{k+1} itself has degree n_{k+1} anyway. So let us look at the term of degree n_{k+1} in G, which is the sum of the leading terms: from f_{k+1} we get a_{k+1}, and from the sum, which comes with a minus sign, we get −Σ_{i=1}^{k} b_i a_i. But we know that a_{k+1} is equal to this sum, so the term of degree n_{k+1} in G is a_{k+1} − Σ_{i=1}^{k} b_i a_i = 0. It follows that the degree of G is smaller than n_{k+1}. But this is a contradiction, because we had chosen f_{k+1} as an element of minimal degree in I \ (f_1, ..., f_k), and here we found an element of this set with lower degree, namely G. So we have a contradiction, which means our assumption was wrong: it is not true that the chain becomes stationary; it cannot happen that two successive ideals are equal. Thus the chain does not become stationary, and thus it follows that R is not Noetherian. So the conclusion is: if R[x] is not Noetherian, then R is not Noetherian, or equivalently, if R is Noetherian, then R[x] is. Okay, and this finishes the proof of Hilbert's Basis Theorem. Sorry, I went slightly over time, but anyway.
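Since the leading-term cancellation is the crux of the proof, here is a quick numerical sanity check with sympy. The specific polynomials f_1, f_2, f_3 and coefficients b_1, b_2 below are made up for illustration (they are not from the lecture); the point is only that subtracting Σ b_i x^{n_3 − n_i} f_i kills the top-degree term, so the difference G has strictly smaller degree.

```python
# Sanity check of the leading-term cancellation in the proof of the
# Hilbert Basis Theorem, on made-up data over R = Z.
import sympy as sp

x = sp.symbols('x')

# f_1, f_2 play the role of the earlier polynomials; f_3 plays f_{k+1}.
f1 = 2*x**2 + x          # degree n_1 = 2, leading coefficient a_1 = 2
f2 = 3*x**3 + 1          # degree n_2 = 3, leading coefficient a_2 = 3
f3 = 7*x**4 + x + 5      # degree n_3 = 4, leading coefficient a_3 = 7

# a_3 = b_1*a_1 + b_2*a_2 with b_1 = 2, b_2 = 1, since 2*2 + 1*3 = 7.
b1, b2 = 2, 1
n1, n2, n3 = 2, 3, 4

# G = f_3 - sum_i b_i * x^(n_3 - n_i) * f_i
G = sp.expand(f3 - (b1 * x**(n3 - n1) * f1 + b2 * x**(n3 - n2) * f2))

print(G)                 # the x^4 terms cancel
print(sp.degree(G, x))   # strictly smaller than n_3 = 4
```

Running this shows that G has lost its degree-4 term, exactly as the proof predicts.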
I could not really very well stop in the middle of it. So the proof is a little bit tricky and may take some effort to follow, but what makes it into a theorem is the fact that it is difficult to think of. I mean, if somebody told you to try to prove this theorem, or somebody told me, I would say I have no idea, and that is what makes it into a theorem. Okay, thank you.