OK, so then I will start. We had begun to talk about morphisms and functions: we defined regular functions on affine varieties and on open subsets of affine varieties, and now we want to do the same in the quasi-projective case. So: regular functions on quasi-projective varieties. Let me briefly recall where we are, since we had started this already. We have X ⊂ P^n, a projective variety, and its homogeneous ideal I(X). The quotient S(X) = k[x_0, ..., x_n]/I(X) is an integral domain, because X is a variety and hence irreducible. This S(X) is called the homogeneous coordinate ring. If V ⊂ X is an open subset, non-empty obviously, then we also write S(V) for S(X), just so that, when we talk about the quasi-projective variety V, we don't always have to remember of which projective variety it was an open subset. Now, I think, we come to the new part: the homogeneous parts of elements of S(X). So take an element f ∈ S(X), the class of a polynomial F modulo I(X). We define the homogeneous part f_d of f of degree d to be the class modulo I(X) of the homogeneous part of degree d of F; so f_d is an element of S(X). And we write S(X)_d, I think without brackets, so the homogeneous part of degree d of the homogeneous coordinate ring, for the set of all parts f_d of elements f ∈ S(X). One should note that it is not completely obvious that this makes sense. So, as an exercise: this is well defined. The point is that f is just the class modulo I(X) of some polynomial.
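As a concrete sanity check of this well-definedness, here is a small SymPy sketch; the homogeneous generator q and the representatives f, g below are made-up examples, not from the lecture. Two representatives of a class differ by an element of the homogeneous ideal, so their degree-d parts differ by the degree-d part of that element, which lies in the ideal again.

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
gens = (x0, x1, x2, x3)

def homogeneous_part(f, d, gens):
    """Degree-d homogeneous part of a polynomial f."""
    poly = sp.Poly(f, *gens)
    return sum(c * sp.prod(v**e for v, e in zip(gens, mono))
               for mono, c in poly.terms() if sum(mono) == d)

q = x0*x3 - x1*x2            # homogeneous generator of I(X) (a quadric)

f = x0**2 + x1*x2 + x0       # one representative
g = f + (x0 + 1)*q           # another representative of the same class mod (q)

# degree-2 parts of the two representatives
f2 = homogeneous_part(f, 2, gens)
g2 = homogeneous_part(g, 2, gens)

# their difference is the degree-2 part of (x0 + 1)*q, namely q itself,
# so it lies again in the ideal:
diff = sp.expand(g2 - f2)
quotient, remainder = sp.div(diff, q, *gens)
print(remainder)  # 0: the degree-2 parts agree modulo I(X)
```

This only works because q is homogeneous; for an inhomogeneous generator the degree-d part of a multiple of q need not lie in the ideal, which is exactly why I(X) being a homogeneous ideal is the point of the exercise.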
You have to see that if you take another representative and take its homogeneous part of degree d, then the result lies in the same class modulo I(X). But this follows easily from the fact that I(X) is a homogeneous ideal; you can work it out, it is a very simple exercise, and it is actually written in the notes. Now, if you remember, in the affine case we defined regular functions via a certain subring of the quotient field of the coordinate ring, and we want to do the same here. But we don't use the whole quotient field of S(X), only, in some sense, the part of degree 0: the elements in the quotient field of S(X) which can be written as quotients of homogeneous elements of the same degree. So let Q(S(X)) be the quotient field, or fraction field, of S(X); this is a field, and we know how it is constructed. The field of rational functions on X, or equally on V, it's the same, written K(V) or K(X), is the set of all elements f/g ∈ Q(S(X)) such that f and g both lie in S(X)_d for some d, so both homogeneous of the same degree. And note what this means, as I mentioned last time: if P = (a_0 : ... : a_n) is a point of X such that g(P) ≠ 0, then we can define f(P)/g(P) simply as f(a_0, ..., a_n)/g(a_0, ..., a_n). This is well defined, because if I replace the representative (a_0, ..., a_n) by λ times itself, a factor λ^d comes out upstairs and a factor λ^d comes out downstairs, and they cancel. So this makes sense. Elements of K(V) = K(X) are called, as before, rational functions on V. And it is again obvious that this is a field: if you multiply two such quotients, the product still has the property, and if you divide one by the other, it is still the same.
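To make the cancellation of the λ's concrete, here is a small SymPy sketch; f and g below are made-up homogeneous quadrics, not from the lecture. It checks symbolically that the value of f/g at a point of P^2 does not depend on the chosen homogeneous coordinates.

```python
import sympy as sp

lam, a0, a1, a2 = sp.symbols('lam a0 a1 a2')
x0, x1, x2 = sp.symbols('x0 x1 x2')

# f and g homogeneous of the same degree d = 2 (made-up examples)
f = x0*x2 + x1**2
g = x0**2 + x1*x2

point  = {x0: a0,     x1: a1,     x2: a2}
scaled = {x0: lam*a0, x1: lam*a1, x2: lam*a2}

ratio_at_point  = f.subs(point)  / g.subs(point)
ratio_at_scaled = f.subs(scaled) / g.subs(scaled)

# lam**2 comes out upstairs and downstairs and cancels, so the value
# is independent of the representative of the point:
print(sp.simplify(ratio_at_scaled - ratio_at_point))  # 0
```

If f and g had different degrees, a power of lam would survive the cancellation and the "value" would depend on the representative, which is exactly why only degree-0 fractions define functions on (subsets of) projective space.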
And if you take the sum, you can bring it over a common denominator, and again it will be homogeneous of the same degree upstairs and downstairs. Now that we have this, we can proceed in the same way as in the affine case. So if f is an element of S(X)_d, I can also talk about the zero set of f: Z(f) is defined as the set of all points of X at which f vanishes, the usual thing. And this makes complete sense, because it is a closed subset of X: if you take any representative F of f, then the zero set of the small f is the zero set of the polynomial F intersected with X. Now we want to define regular functions in the same way as in the affine case; we just repeat what we did in the last lecture, there is no real difference. So if P is a point of V, then we can define the local ring of V at P, O_{V,P}, in the same way as before: the set of all f/g ∈ K(V) such that g(P) ≠ 0. Again this means that f and g are homogeneous of the same degree; and note that the same rational function has many representations as a fraction, so the condition is that you can choose a representation f/g with g(P) ≠ 0. Then, if U is an open subset of V, we proceed as before: one could say that O_{V,P} is the set of all rational functions which are regular at P, and to be regular on U means to be regular at all points of U. So the ring of regular functions on U is O_V(U), which is just the intersection over all P ∈ U of the local rings O_{V,P}. And then, as before, we have the remark that a regular function on U defines in a natural way a function from U to k: if h ∈ O_V(U), then h defines a function, which I also call h, from U to k in the obvious way, namely if h = f/g with g(P) ≠ 0, then applying this to P gives f(P)/g(P).
So P is sent to h(P), defined as f(P)/g(P) in the sense explained above. This can again be seen to be well defined, and the map which associates to the element h ∈ O_V(U) the function P ↦ h(P) is injective. In the same way as in the affine case, it is an exercise to say what the image is: the image consists of all the functions which are locally quotients of polynomials; in fact, locally they are quotients of polynomials which are homogeneous of the same degree, which is kind of obvious from the way we have set things up. And it is again a simple exercise to show this. So this identifies O_V(U) with the set of functions h: U → k such that for every point P ∈ U there exist an open neighbourhood W of P in U and polynomials f and g, homogeneous of the same degree, such that g is nowhere 0 on W and h(Q) = f(Q)/g(Q) for all Q ∈ W. It takes a bit longer to state than to prove; anyway, this is an exercise, and in some sense it is clear. The statement is just that locally every such function is a quotient of two homogeneous polynomials of the same degree. But it is not claimed, and it is usually not true, that you can take W equal to U; there is no reason to believe that. You might need several different pairs of polynomials for different parts of U. OK, so now I want to very briefly collect some properties of regular functions; later we will mostly use these properties and, most of the time, not go back to the original definition.
So we have the following basic properties; a proposition. The first one, which I have already said several times, is that O_V(U) is actually a ring; more precisely, it is a k-algebra, and a little bit more. By this I mean: first, constant functions a ∈ k are regular on any open subset U; that is kind of obvious. Then, if f and g are two regular functions on some open subset U, the sum f + g and the product f·g are also regular, and if in addition g has no zero on U, then f/g is regular on U. This is a rather simple fact, as you will see in a moment. The second statement, which is very useful, is that being regular is a local property. What do I mean by this? You can say it like this: let (U_i), i ∈ I, be an open cover of our U; then a function f: U → k is regular if and only if f restricted to U_i is regular for all i. And one can reformulate this in an equivalent way, it is just the same statement: a function f: U → k is regular if and only if for every point P ∈ U there exists an open neighbourhood W of P such that f restricted to W is regular. Notice that this really is the same statement: given a cover, I can always take U_i as an open neighbourhood of whichever point lies in it, and conversely all these open neighbourhoods together form an open cover. So it is just the same statement; which formulation you use in a proof depends on which one is psychologically nearer to what you want to do. And the third property is that regular functions are continuous: if h is a regular function on U, then h, as a function from U to k, identified with A^1 with its Zariski topology, is continuous. So here k = A^1 is given the Zariski topology.
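Property (i), for sums, comes down to the common-denominator computation mentioned above: for fractions with homogeneous numerator and denominator of equal degree, the sum over the common denominator is again of that form. A quick SymPy check, with made-up polynomials:

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')

def total_degree(p):
    return sp.Poly(p, x0, x1, x2).total_degree()

def is_homogeneous(p):
    degs = {sum(m) for m in sp.Poly(p, x0, x1, x2).monoms()}
    return len(degs) == 1

# two rational functions written with homogeneous numerator and
# denominator of equal degree (made-up examples)
f1, g1 = x0*x1, x2**2          # degree 2 over degree 2
f2, g2 = x0**3, x1*x2**2       # degree 3 over degree 3

# sum over the common denominator: (f1*g2 + f2*g1) / (g1*g2)
num = sp.expand(f1*g2 + f2*g1)
den = sp.expand(g1*g2)

print(is_homogeneous(num), total_degree(num))  # True 5
print(is_homogeneous(den), total_degree(den))  # True 5
```

Numerator and denominator come out homogeneous of the same degree (here 2 + 3 = 5), so the sum is again an element of K(V); products work the same way.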
This is maybe one of the reasons we are considering the Zariski topology: it is compatible with regular functions. So these are simple facts; the proofs are easy, a small exercise. That constant functions are regular is trivial from the definition. For the next one: by definition, O_V(U) is the intersection over all P ∈ U of O_{V,P}. Therefore it is enough to show that if f and g are in O_{V,P}, then f + g and f·g are in O_{V,P}. But if you look at the definition, this is completely obvious; you just write down the quotients. So it is obvious that O_{V,P} is a ring: if the denominators of the two fractions do not vanish at P before, then after multiplying them the denominator still does not vanish at P, and similarly for the sum. So this is kind of clear. Now the last statement: assume that g has no zero on U. Then g is a unit in O_{V,P} for all P ∈ U, because we know that the units of O_{V,P} are exactly the elements which do not vanish at that point; so we can invert it. Thus 1/g is an element of O_{V,P} for all P ∈ U, and so 1/g is an element of O_V(U), the intersection of these local rings. And thus f/g, which is now the product of two elements of O_V(U), is an element of O_V(U). That was the first part; it is all routine. The second one is also, in some sense, completely obvious, because by the definition of the local ring, the ring of regular functions on U is just the intersection of the local rings at the points of U, which just means the function has to be regular at every point P ∈ U, which is what is being said there. Let me see whether I wrote anything reasonable: a function h: U → k is regular if and only if h ∈ O_{V,P} for all P ∈ U.
And h is regular on U_i if this holds for all P ∈ U_i. So regularity on every U_i is equivalent to h ∈ O_{V,P} for all P ∈ U_i, for all i, which, since the U_i cover U, is just the same thing. So by definition this is obvious. Sorry, I didn't quite understand the question. Let me just say: for me, a k-algebra is a ring which contains k as a subring, and from that it follows automatically that it is also a k-vector space. In our case, O_V(U) is a ring, which I have just proven, and k is a subring, namely the constants. So now we want to prove the last statement: regular functions are continuous. This is also relatively easy, but we have to unwind the definitions. First, a function h: U → k is continuous if and only if you can find an open cover of the domain such that h is continuous on every open subset of the cover; you can check continuity locally. So h is continuous if and only if h restricted to U_i is continuous for all U_i of an open cover. Because of this, we can replace U by a smaller open subset: we know that locally our function is given as a quotient of two polynomials, homogeneous of the same degree, so for every point we can find an open neighbourhood on which it has this form, and we can assume it was like that to begin with. So, replacing U by a suitable U_i, we can assume that h = f/g, where f and g are polynomials, homogeneous of the same degree, and g has no zero on U. OK. So what does it mean for h to be continuous? You have to remember what the Zariski topology on A^1 is. What are the closed subsets? The closed subsets are the empty set, the whole of k, and the finite subsets.
It is clear that the inverse image of the empty set is the empty set, and the inverse image of the whole of k is the whole of U. So we are only concerned about the finite subsets. A finite union of closed subsets is closed, so we only have to show that the inverse image of a single point is closed. Yes? I said "replacing U by a suitable U_i": that means I take the subset U_i and then call it U. If you want, you can keep writing U_i, but usually one says "we can assume our U had this property to begin with"; the argument actually takes place on the U_i, but we drop the i. It's something one does to confuse people. So we only have to show that h^{-1}(a) is closed in U for all a ∈ k. Well, let's just look at what it is: h^{-1}(a) is by definition the set of all points P ∈ U such that h(P) = a. We can bring g to the other side: this is the set of all P ∈ U such that f(P) = a·g(P). In other words, it is the zero set Z(f − a·g) intersected with U, and so it is closed. So we see that the inverse image of any closed subset is closed, and therefore the map is continuous. OK, that was as much as I wanted to say about regular functions for the moment. Now we want to define morphisms. Morphisms are a more general thing than regular functions: they are maps from one variety to another variety. Regular functions are in some sense a special case, namely those where the target is just A^1 = k. But we do it in this order because we want to use the regular functions as part of the definition of morphisms: morphisms should be compatible with regular functions.
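The key identity h^{-1}(a) = Z(f − a·g) ∩ U can be played with concretely. Here is a small SymPy sketch, using an affine stand-in h = (x² + 1)/x on U = A^1 \ {0}; this example is mine, not the lecturer's, but the computation is the same one as in the proof.

```python
import sympy as sp

x, a = sp.symbols('x a')

# a regular function h = f/g on U = A^1 \ {0} (made-up example)
f = x**2 + 1
g = x

# h^{-1}(a) = { P in U : f(P) = a*g(P) } = Z(f - a*g) ∩ U,
# the root set of a nonzero polynomial, hence finite and Zariski-closed:
fiber = sp.solve(f - a*g, x)
print(len(fiber))   # 2 points for generic a

# concretely for a = 2: f - 2*g = x**2 - 2*x + 1 = (x - 1)**2,
# so the fiber is the single point 1
print(sp.solve((f - a*g).subs(a, 2), x))  # [1]
```

Since every closed subset of A^1 other than A^1 itself is finite, checking point fibers like this is exactly what continuity amounts to here.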
Our definition will be: a map from one variety to another is called a regular map, or a morphism, if, first, it is continuous, so the inverse image of any closed subset is closed, and, secondly, it is compatible with regular functions on open subsets. That is, if you have an open subset of the target and a regular function on it, you can pull it back, by composing with the map, to a function on the corresponding inverse image of the open set in the source, and this pullback should be regular there. Anyway, we will define this in a moment, but not immediately: first we look at a special case. One could think there would be an easier way to define morphisms, namely that a morphism should just be something which is given by polynomials. In particular, for two affine algebraic sets (we can do this in more generality, not only for varieties), one could just take morphisms to be the maps given by m-tuples of polynomials. We will see that this is fine for affine algebraic sets, but that in general one needs the more general definition. So we first do this special case, and then we come to the general definition. It is a new chapter, so I should at least give it a name: regular morphisms. And the first thing is polynomial maps. Definition: let X ⊂ A^n and Y ⊂ A^m be affine algebraic sets. Then a polynomial map from X to Y is a map from X to Y which is given by polynomials; not very surprisingly. So a map (f_1, ..., f_m): X → Y which sends a point P ∈ X to (f_1(P), ..., f_m(P)), for f_1, ..., f_m polynomials in n variables, is called a polynomial map. Note that if I want to view it as a polynomial map from X to Y, it is part of the definition that X is actually mapped to Y.
So if I have a point P ∈ X, it should go to Y. Any m-tuple of polynomials like that gives me a map from A^n to A^m, which is also a polynomial map; the requirement is, in addition, that X is mapped into Y, and then I view it as a polynomial map from X to Y. OK, one should note that instead of taking polynomials, I could also take polynomial functions. Note that (f_1, ..., f_m) and (f_1|_X, ..., f_m|_X) obviously define the same map, and the restrictions are elements of A(X). Therefore we can also write the polynomial map as (f_1, ..., f_m) with f_i elements of the coordinate ring A(X). And a bijective polynomial map whose inverse is also a polynomial map is called an isomorphism; this is a slight abuse of language for the moment, but I do it all the same. There is one more thing I wanted to note: if we have a polynomial map, we can pull back elements of A(Y) to elements of A(X). But maybe first two simple examples. If X ⊂ A^n is an affine algebraic set, then we see that, by definition, the polynomial maps f: X → A^1 = k are the same as the polynomial functions, which are the elements of A(X). And one can look at another simple example: if we take C to be the zero set of y − x^2 in A^2 (the real picture would be a parabola), then consider the map a ↦ (a, a^2) from A^1, which would in principle go to A^2 but actually goes to C. This is a polynomial map.
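To illustrate the point that "X is actually mapped to Y" is part of the definition, one can check symbolically that the defining equation of C vanishes on the image of a ↦ (a, a²). A short SymPy sketch of that check:

```python
import sympy as sp

a, x, y = sp.symbols('a x y')

# C = Z(y - x**2) in A^2; phi sends a to (a, a**2)
defining_eq = y - x**2
phi = (a, a**2)

# the defining equation of C vanishes identically on the image,
# so phi really is a polynomial map A^1 -> C, not just A^1 -> A^2:
on_image = defining_eq.subs({x: phi[0], y: phi[1]})
print(sp.expand(on_image))  # 0
```

For an arbitrary pair of polynomials this substitution would not vanish, and the tuple would only define a polynomial map to A^2.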
And in fact an isomorphism: the inverse is just the projection to the first factor, (x, y) ↦ x on A^2, which gives you back the a. As we are going to need this for general morphisms, I wanted to talk about the pullback. Polynomial maps are compatible with polynomial functions, which means that polynomial functions can be pulled back: if I take a polynomial function and compose it with a polynomial map, I get again a polynomial function. This is the content of the following definition. Definition: let X ⊂ A^n, Y ⊂ A^m be affine algebraic sets, and let φ: X → Y be a polynomial map. We want to define the pullback. If we take an element h ∈ A(Y), so a polynomial function on Y, the pullback is just the composition with φ: I write φ*h = h ∘ φ. And the claim, proven in a moment, is that this is a polynomial function, an element of A(X). Why is that the case? We just have to write it down. We can always write h as the restriction of a polynomial, now in m variables; maybe I call the variables on that side y_1, ..., y_m. How do I compute φ*h? If I put a point (a_1, ..., a_n) of X into it, then, writing φ = (f_1, ..., f_m) with the f_i polynomials in x_1, ..., x_n, I first apply the f_i and then h: so φ*h(a_1, ..., a_n) = h(f_1(a_1, ..., a_n), ..., f_m(a_1, ..., a_n)). In other words, φ*h is obtained as follows: we take the polynomial h and substitute f_i(x_1, ..., x_n) for the variable y_i. This is now a polynomial in k[x_1, ..., x_n], and we make it a polynomial function by restricting it to X.
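The substitution recipe for the pullback can be made concrete with the parabola map from the example: φ = (t, t²), so φ*h is obtained by substituting t for x and t² for y. A SymPy sketch (h1 and h2 are made-up polynomial functions), which also illustrates that φ* respects sums and products:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# pullback along the polynomial map phi = (t, t**2): A^1 -> C,
# computed by substituting the coordinate polynomials for x and y
def pullback(h):
    return sp.expand(h.subs({x: t, y: t**2}))

h1 = x*y + 1
h2 = y - 3*x

print(pullback(h1))       # t**3 + 1
print(pullback(h1 + h2))  # equals pullback(h1) + pullback(h2)
print(pullback(h1 * h2))  # equals pullback(h1) * pullback(h2)
```

Since substitution commutes with addition and multiplication of polynomials, this is exactly the reason the pullback map A(Y) → A(X) discussed next is a ring homomorphism.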
And so φ*h certainly is an element of A(X); we obtain it by just substituting the polynomials which form the coordinates of the map for the variables in the polynomial. So we have the pullback map φ*: A(Y) → A(X), h ↦ φ*h, and it is obviously a ring homomorphism: if you take the sum of h and g, you can take the sum of the polynomials and you get the sum here, and the same for the product. OK. And one can also see, for instance, that if φ: X → Y is an isomorphism, then φ*: A(Y) → A(X) is also an isomorphism of rings, in fact of k-algebras. That is kind of clear: as usual, the inverse of φ is also a polynomial map, by the definition of isomorphism, and from the definition one sees that the pullback by the inverse is the inverse of the pullback. And that's it. OK, so much about this special case, the polynomial maps. Now we want to come to the general definition of morphism. I already announced it, so it doesn't come as a surprise; and afterwards we will have to see that a polynomial map actually is a morphism, which doesn't follow directly. Definition of a morphism: let X and Y be varieties. That means each can be an open subset of an affine variety or an open subset of a projective variety, according to our definition; so now we really treat the affine and the projective case in the same way. A map φ: X → Y is a morphism if, first, φ is continuous, and then, as I said, it should be compatible with regular functions on open subsets: for all open subsets U ⊂ Y and all regular functions f on U, the pullback φ*f, which I define as before as f ∘ φ, is a regular function on the inverse image φ^{-1}(U). And notice that this makes sense.
We have defined regular functions only on open sets, but as we have required φ to be continuous, the inverse image of U is open in X. So this is the definition; you see we have really built the regular functions into our definition of morphism. It also means that the pullback again gives us a homomorphism: for each open subset U ⊂ Y, the map φ*: O_Y(U) → O_X(φ^{-1}(U)) is a k-algebra homomorphism. And as usual, a morphism is called an isomorphism if it has an inverse, i.e. if φ is bijective and φ^{-1} is also a morphism. Strictly speaking, in the case of affine varieties we have now defined isomorphism twice; we will later see that the definitions are equivalent, but for the moment that doesn't worry us. OK. It is clear? Instead of morphism, one also says regular map, similarly to regular function. It is clear that, for instance, the identity of X is always a morphism, and the pullback by the identity is the identity. We also see that if φ: X → Y and ψ: Y → Z are morphisms, then the pullback of the composition is the composition of the pullbacks: (ψ ∘ φ)* = φ* ∘ ψ*. And it follows that if you have an isomorphism, then for all open sets the pullback is an isomorphism. Maybe I can actually write it: if φ: X → Y is an isomorphism, then φ*: O_Y(U) → O_X(φ^{-1}(U)) is an isomorphism for all open subsets U of Y. In particular, O_Y(Y) is isomorphic to O_X(X). This is all very standard. So this is the definition, but it is a slightly clumsy definition: if you want to check that something is a morphism, that doesn't look like such a nice task. First to check it's continuous, OK, maybe you can.
But then, for every open set, you have to take all regular functions and check that the pullback is regular; that might not be so easy. So we need at least some criteria to make it a little bit easier, or other descriptions of what morphisms are, and we now start working towards that. In this lecture I want to do two things. First, as you have probably noticed, being a morphism is local: if we have an open cover of X and the restriction of φ to every open subset of the cover is a morphism, then φ is a morphism. So (1): let φ: X → Y be a map and (U_i) an open cover of X such that φ|_{U_i}: U_i → Y is a morphism for all i; then φ is a morphism. And the second statement is that if you have a morphism and you restrict it, it is still a morphism, where we can restrict in two ways, both in the source and in the target. So (2): let Z ⊂ X and W ⊂ Y be subvarieties, by which I mean locally closed subvarieties, intersections of an open and a closed subset, and let φ: X → Y be a morphism with the property that the image of Z is contained in W. Then I can view φ|_Z, which a priori is a map from Z to Y, as a map from Z to W, since it actually maps into W; and viewed like this, it is still a morphism. This really contains two statements. One is that if you have a morphism and you restrict it, it is a morphism. The other is that if you have a morphism whose image is contained in some smaller variety, it is also a morphism viewed as a map to that smaller variety. In some sense that is not so completely obvious, because it is supposed to be compatible with the structure.
And actually, that last statement is the only one which requires a little bit of effort. So let's prove it. Statement (1) is fairly trivial, because we have seen that being a regular function is a local property, so it will follow from that. Let W ⊂ Y be open. For φ to be a morphism, we first have to show that it is continuous, i.e. that the inverse image of W is open in X. We can write φ^{-1}(W) as the union over all i of (φ|_{U_i})^{-1}(W), which is the same as φ^{-1}(W) ∩ U_i. Each of these is open, because φ|_{U_i} is continuous, and so the whole thing is open as a union of open subsets. So it follows that φ is continuous. Next we have to see that the pullback of a regular function is regular. What, if and only if? Yes: the other direction follows from the second statement, since if φ is a morphism, its restriction to U_i is a morphism; so the statement is in fact an "if and only if". In any case, let h be a regular function on W. Then we know that φ*h, restricted to U_i ∩ φ^{-1}(W), is regular, because φ|_{U_i} is a morphism. So we see that the sets U_i ∩ φ^{-1}(W) form an open cover of φ^{-1}(W), and φ*h is regular on all the open subsets of this open cover.
So by what we proved before, it follows that φ*h is regular on φ^{-1}(W). This shows the first statement. Now to statement (2), which is slightly more tricky. First, again, we have to show that the map is continuous. That is basically trivial, because if you have a continuous map and everything carries the induced topology, then the restriction is continuous. So φ|_Z: Z → W is continuous as the restriction of a continuous map; as I said, we are using the induced topology, since the Zariski topology on a subvariety is always the topology induced from the bigger Zariski topology. Now we have to show the second part, that the pullback of regular functions is regular. Again, let U ⊂ W be open and let h be a regular function on U. Now we have to remember that being a regular function is a local property: instead of proving it for this U, we can, for any P ∈ U, prove it on a smaller neighbourhood of this point; if we do it for all these smaller neighbourhoods, we have also proven it for U. So, replacing U if necessary by smaller open subsets which together cover our U, we can assume (the reason why we always have to shrink is that we want to write our regular function as a quotient of two polynomials) that h = f/g, with f and g polynomials which are homogeneous of the same degree if Y is quasi-projective: if Y lies in P^n, these are polynomials in x_0, ..., x_n, homogeneous of the same degree, as we defined it. In the affine or quasi-affine case, f and g are just two polynomials. So we can assume h looks like this, and g has no zero on U.
Now, we know how to pull back regular functions on open subsets of Y, so we somehow have to extend our h to a regular function on an open subset of Y. We do this by writing the same quotient of polynomials: the quotient f/g also defines a regular function, which I call h̃, on an open subset Ũ of Y containing U, namely where g does not vanish. In this way I get a regular function on an open subset of Y, and we can pull it back: φ*h̃ is a regular function on the inverse image of Ũ in X, because φ itself is a morphism. And then the actual pullback of h, which by definition is the composition of h with φ|_Z, is the restriction of this one to (φ|_Z)^{-1}(U), and so it is also regular. Because to be regular means, again, that locally it is a quotient of two polynomials (homogeneous of the same degree, with denominator not 0), and if this is true on the whole inverse image in X, then it is certainly true, with the same polynomials, on the smaller set. Note that (φ|_Z)^{-1}(U) is φ^{-1}(Ũ) intersected with Z, not φ^{-1}(Ũ) itself: it is possible that there are points outside Z which also go to U; this is not excluded, and therefore I write the intersection with Z for security. OK, so this is the statement, and it allows us to make things a little bit easier. And now what comes? Yes, I want to make a change in how we have been calling things until now.
Until now, an affine variety was just an irreducible closed subset of A^n. Now that we know about morphisms and isomorphisms, I say: an affine variety is a variety which is isomorphic to an irreducible closed subset of A^n, i.e. to what we before called an affine variety, namely an irreducible closed subset of some affine space. OK, now we want to come to some more concrete descriptions of morphisms. One thing that we want to prove, for instance, is that the morphisms between closed subvarieties of affine spaces are just the polynomial maps: in the case of closed subvarieties of A^n and A^m, the previous definition via polynomial maps agrees with that of morphisms, so there is no confusion about what the correct definition is. I can at least state it; call it "morphisms to subvarieties of A^n". What I will state now is slightly more general than what I just said. So let X and Y be varieties, and let the target Y be a subvariety of A^n for some n. This is actually supposed to be a theorem, so I state it as one. Theorem: a map φ: X → Y is a morphism if and only if it is given by an n-tuple of regular functions; that is, if and only if there exist regular functions f_1, ..., f_n (as many as the coordinates, since we are in A^n), regular on the whole of X, such that φ(P) = (f_1(P), ..., f_n(P)) for all P ∈ X. And then we write φ = (f_1, ..., f_n). OK. As a corollary, we get that for closed subvarieties of affine spaces, the morphisms are just the polynomial maps: if X ⊂ A^n and Y ⊂ A^m are closed subvarieties, then the morphisms φ: X → Y are precisely the polynomial maps.
Because, once we have proven this, we will have seen that a map is a morphism if and only if it is given by an n-tuple (in this case, an m-tuple) of regular functions which are regular on the whole of X. And if X is a closed subvariety of A^n, then a function is regular on the whole of X if and only if it is a polynomial function, which we have proven. And so the corollary follows from the theorem. Next time we will start by proving this theorem. It is relatively routine, but a slightly longish proof, because one has to check a few things. Anyway, see you again, I think, on Wednesday.