This talk is the second part of a proof of the Riemann-Roch theorem. So the Riemann-Roch theorem, as you recall, states that L of D is the degree of D plus 1 minus g plus L of K minus D, where as usual D is a divisor, deg D is its degree, g is the genus, and K is the canonical divisor. In the first half, we proved Riemann's half of the Riemann-Roch theorem, which says L of D is equal to the degree of D plus 1 minus g plus I of D, where this mysterious factor I of D is the index of speciality of the divisor D, which is something like the dimension of the space of obstructions to the Mittag-Leffler problem. What we're going to prove today is Roch's half of the Riemann-Roch theorem, which says I of D is equal to L of K minus D. Nowadays this part of the Riemann-Roch theorem is usually proved as a special case of Serre duality. That's what Hartshorne does, but Serre duality is a fairly major theorem about higher-dimensional varieties, so you don't really need it for proving this part. Well, obviously Roch proved it without using Serre duality. In order to simplify the proof, we're going to work over the complex numbers, at least to start with, and when we've done the proof over the complex numbers, I will say a little bit about how you extend it to general fields. The big advantage of working over the complex numbers is that we can use the residue calculus: we can integrate over paths on a Riemann surface and apply Cauchy's residue theorem and things like that. So the residue calculus gives us two key facts. First of all, if we have a meromorphic one form, which looks locally like f of z dz, then it has a residue at any point p, given by 1 over 2 pi i times the integral of omega around a contour gamma that goes once around p in a counterclockwise direction, whatever counterclockwise means.
The second key fact is that the sum over all points of a compact Riemann surface of the residue of omega at p is equal to zero, and this is more or less Cauchy's theorem. This only applies to compact Riemann surfaces; there are rather obvious counterexamples if the surface isn't compact. The reason is that Cauchy's theorem says the integral around a path is more or less equal to the sum of the residues inside the path, up to factors of 2 pi i. And if you've got a compact Riemann surface, then you can also apply Cauchy's theorem to the region outside the path, just by considering it as the inside if you look at it from the other direction. So you find that for any path on the Riemann surface, the sum of the residues inside the path is equal to the integral along the path, whereas the sum of the residues outside the path is equal to minus the integral along the path. And from this it's easy to see that the sum of the residues over all points of a compact Riemann surface is always zero. So these are the two facts we get from working over the complex numbers. Now recall that the index of speciality that we want to deal with was more or less defined as the dimension of R over R of D plus k of C, where you remember k of C is the field of rational or meromorphic functions on the Riemann surface, R is the ring of valuation vectors, and this quotient was the space of obstructions to the Mittag-Leffler problem. You remember the Mittag-Leffler problem says that we specify a singularity at each point of the Riemann surface and we want to find a rational function with those singularities. Well, the reason why we're interested in one forms is that a one form omega gives an obstruction to the Mittag-Leffler problem. This is because if we've got any one form omega and any function f, we know that the sum over p of the residues at p of f omega is equal to zero. And now we notice that this residue depends only on the singular part of f at the point p.
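In symbols (notation supplied here for convenience, with gamma a small counterclockwise loop around p and C the compact Riemann surface), the two facts from the residue calculus read:

```latex
% Residue of a meromorphic 1-form at p:
\operatorname{Res}_p(\omega) \;=\; \frac{1}{2\pi i}\oint_{\gamma}\omega .

% Sum of residues over a compact Riemann surface C:
\sum_{p\in C}\operatorname{Res}_p(\omega) \;=\; 0 .
```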
So this gives some conditions that the singularities of f at all points have to satisfy in order for them to come from a meromorphic function f. More precisely, we get a bilinear pairing from the holomorphic one forms times this space of valuation vectors, quotiented by R of 0 plus k of C, to the complex numbers. So this just takes a one form omega and some set of singularities, which we rather sloppily write as f, to the sum over p of the residue of f omega. So this is what we get if the divisor is just the zero divisor. More generally, we can do something similar for any divisor. So we get a pairing from the one forms omega with the divisor of omega greater than or equal to the divisor D; remember, this is the divisor of zeros and poles of omega, and we want it to be at least this divisor. And this time, instead of R of 0, we take R of D plus k of C, and this maps to the complex numbers. And we do exactly the same thing: we take the one form, multiply it by the singularities, and take the sum of the residues. You can check that this condition on omega matches up with that condition on f, so that everything is well defined, unless I forgot to put a minus sign in somewhere, which I quite often do. And in particular, we get a map from the one forms omega with divisor of omega greater than or equal to D to the dual of R over R of D plus k of C. And the key point is this map is injective. And it's very easy to see it's injective, because we can choose a valuation vector with any given singularity at any given point; so if omega is a nonzero one form, it's easy to find a valuation vector that pairs with it nontrivially. So showing this is injective is easy. It's also surjective, but this is considerably harder. So we know it's injective and we have to think more about whether or not it's surjective. Anyway, we know the dimension of the space of one forms here is L of K minus D and the dimension of the quotient is just I of D.
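Written out (with R the ring of valuation vectors and k(C) the field of rational functions, as above; the compact notation is mine), the pairing for a general divisor D can be sketched as:

```latex
% Pairing between 1-forms with (omega) >= D and valuation vectors
% modulo R(D) + k(C); well-definedness is the compatibility check
% mentioned in the text.
\{\omega : (\omega)\ge D\} \times R/\bigl(R(D)+k(C)\bigr)
  \longrightarrow \mathbb{C},
\qquad
(\omega, f) \longmapsto \sum_{p}\operatorname{Res}_p(f\omega).

% The induced map to the dual space is injective, so
\ell(K-D) \;=\; \dim\{\omega : (\omega)\ge D\}
\;\le\; \dim\Bigl(R/\bigl(R(D)+k(C)\bigr)\Bigr)^{*} \;=\; i(D).
```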
So from the fact that this map is injective, we find the inequality L of K minus D is less than or equal to I of D. So this is half of Roch's theorem. What we want to show is that L of K minus D is actually equal to I of D. In other words, we want to somehow prove the hard part, the surjectivity. Well, proving surjectivity directly is something you can do, but it's a little bit tiresome. Fortunately, it's not really necessary, because it turns out we can actually deduce surjectivity from all the other things we know about the Riemann-Roch theorem. So let's summarize what we know. First of all, we know L of D is equal to the degree of D plus 1 minus g plus I of D. This is Riemann's part. Secondly, we know L of K minus D is less than or equal to I of D, and by changing D to K minus D, we also see that L of D is less than or equal to I of K minus D. This is the easy part of Roch's theorem. And a third thing we need to know is that the degree of K is equal to 2g minus 2. We worked this out by explicitly calculating the degree of K, writing down an explicit one form on a plane curve with nodes and so on. Anyway, from these three facts, we're going to deduce that L of K minus D is actually equal to I of D. To do this, we're going to use Riemann's part twice, the easy inequality twice (once in each form), and the degree of K once. So what we do is we just write L of D equals degree of D plus 1 minus g plus I of D. Well, that's just Riemann's part. And this is greater than or equal to degree of D plus 1 minus g plus L of K minus D; here we've used the inequality L of K minus D less than or equal to I of D. And now we apply Riemann's part again, this time to the divisor K minus D, so L of K minus D is equal to degree of K minus D plus 1 minus g plus I of K minus D. And now we apply the other form of the easy inequality, for this different divisor.
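For reference, the three facts in play, written compactly in the notation used above:

```latex
\text{(1) Riemann's part:}\quad \ell(D) = \deg D + 1 - g + i(D);\\
\text{(2) easy inequalities:}\quad \ell(K-D)\le i(D),
  \qquad \ell(D)\le i(K-D);\\
\text{(3) degree of the canonical divisor:}\quad \deg K = 2g-2.
```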
So we get that L of D is greater than or equal to degree of D plus 1 minus g plus degree of K minus D plus 1 minus g plus L of D, since I of K minus D is at least L of D. And now we seem to have L of D is at least equal to L of D plus this rather messy looking expression here. Well, if you look, the degree of D cancels with the minus degree of D inside degree of K minus D. So what's left? Well, if we put everything together, we see that the messy expression is degree of K plus 2 minus 2g. And we worked out that the degree of K is equal to 2g minus 2, so this is just equal to zero, and this whole mess just disappears. So let's see what we've managed to prove. We've managed to prove that L of D is at least equal to L of D. Oh, well, that seems kind of trivial, so we don't seem to have proved anything. But if you look more carefully, what you find is that we've shown L of D is greater than or equal to the intermediate expression, which is greater than or equal to L of D. So in other words, each step must actually be an equality. And the only way the first step can be an equality is if I of D and L of K minus D are equal. So we've now actually managed to show that I of D is equal to L of K minus D, and this proves Roch's part of the Riemann-Roch theorem and thus completes the proof of the Riemann-Roch theorem. Well, that shows how to prove it over the complex numbers, and I want to say a little bit about how you prove it over more general fields. So what about fields of characteristic greater than zero in particular, where there doesn't seem to be any obvious way of doing contour integration? Well, there are two problems. Problem one: we need to show that the residue of a one form omega at a point is well defined. Problem two: we need to show that the sum over all points of the residue at p is zero. Well, the first problem looks kind of trivial. So how do we define a residue?
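Before moving on to general fields, it may help to assemble the inequality chain from the argument above in one display:

```latex
\begin{aligned}
\ell(D) &= \deg D + 1 - g + i(D)
        && \text{(Riemann for } D\text{)}\\
        &\ge \deg D + 1 - g + \ell(K-D)
        && (\ell(K-D)\le i(D))\\
        &= \deg D + 1 - g + \deg(K-D) + 1 - g + i(K-D)
        && \text{(Riemann for } K-D\text{)}\\
        &\ge \deg K + 2 - 2g + \ell(D) \;=\; \ell(D)
        && (\ell(D)\le i(K-D),\ \deg K = 2g-2).
\end{aligned}
```

Since the two ends are equal, every inequality along the way is an equality; in particular the first one forces i(D) equal to ell(K - D).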
Well, let's choose some local coordinate z at the point p. Then a one form omega looks locally like something of the form a minus n z to the minus n plus ... plus a minus one z to the minus one plus a naught plus a one z and so on, all times dz; we can expand it as a formal Laurent series. So why don't we just define the residue to be a minus one? What's wrong with that? We seem to have a perfectly good definition of the residue. Well, there's a big problem here: how do we know this does not depend on the choice of z? And you have to be really careful here, because it turns out that all the coefficients a i other than a minus one do depend on the choice of z. If you choose a different local coordinate, they all change. So it's only this particular coefficient which doesn't change. I should emphasize the dz there, because leaving it out really messes things up. So how do we check this? Well, let's take a look at what happens if we change coordinates. So suppose we change to a different local coordinate; let's call it w. So z is now going to be equal to b naught w plus b one w squared plus b two w cubed and so on, again just expanding as a formal power series, where of course b naught is not equal to zero. And let's just try and work out an example. So suppose we've got a one form which we can write locally as a minus one z to the minus one plus a naught plus higher terms, times dz, and now let's write this in terms of w. Well, that's quite easy. We get a minus one times z to the minus one, which is going to be b naught to the minus one w to the minus one plus various terms, plus a naught times something; we don't really care about the others. And then we have to put in dz, which is going to be b naught dw plus various higher order terms. So this product is equal to a minus one w to the minus one dw plus higher terms. And now you see the coefficient of w to the minus one dw is the same as the coefficient of z to the minus one dz.
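The order-one computation just described can be written in one line (the big-O notation is mine, just to suppress the terms we don't care about):

```latex
% Order-one pole under the coordinate change z = b0 w + b1 w^2 + ...:
a_{-1}\,z^{-1}\,dz
  \;=\; a_{-1}\bigl(b_0^{-1}w^{-1} + O(1)\bigr)\bigl(b_0 + O(w)\bigr)\,dw
  \;=\; a_{-1}\,w^{-1}\,dw + O(1)\,dw .
```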
So for poles of order one, no problem: it's just very easy to check that the residue doesn't depend on the choice of local coordinate. The problem comes when you have poles of higher order, and then things become a little bit complicated. And I'm just going to do an example to show you that things really do get a bit complicated. Let's try an order two pole. So here we've got a one form, a minus two z to the minus two plus a minus one z to the minus one and so on, times dz, and let's again change z to b naught w plus b one w squared and so on, expanding the old local coordinate as a power series in the new one. And let's try to figure out what happens to this series. Well, it becomes a minus two times z to the minus two, which is going to be b naught to the minus two w to the minus two minus two b one over b naught cubed times w to the minus one and so on, plus a minus one times b naught to the minus one w to the minus one plus various terms. And we've got to take all this and multiply it by dz, which is going to be b naught plus two b one w and so on, all times dw. And this is beginning to get a bit of a mess, so let's collect together all the terms and see what we get. Well, for w to the minus two dw we get a minus two times b naught to the minus one. And for w to the minus one dw, we get several cross terms, and two of them cancel out in what looks like a freakish accident, so we just get a minus one w to the minus one, all times dw. And now what we see is that this coefficient is still a minus one: we get quite a lot of unexpected cancellation. But you should also notice that the coefficient of w to the minus two dw is not a minus two. So changing local coordinates really does change the coefficient of w to the minus two, and in general it changes all the coefficients other than the coefficient of w to the minus one. So how do we show that this coefficient of w to the minus one dw always stays the same?
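The cancellation in the order-two example can also be checked mechanically. Here is a small sketch in Python (my own illustration, not from the lecture) that does arithmetic with truncated Laurent series over the rationals, rewrites the one form a minus two z to the minus two plus a minus one z to the minus one, times dz, in the coordinate w with z = b0 w + b1 w squared + b2 w cubed for some arbitrary sample values, and reads off the coefficients of w to the minus one dw and w to the minus two dw:

```python
from fractions import Fraction

N = 8  # truncation: keep Laurent coefficients of w^k only for k < N

def add(A, B):
    # add two Laurent series stored as {exponent: coefficient}
    C = dict(A)
    for k, v in B.items():
        C[k] = C.get(k, Fraction(0)) + v
    return {k: v for k, v in C.items() if v}

def mul(A, B):
    # multiply two truncated Laurent series
    C = {}
    for i, a in A.items():
        for j, b in B.items():
            if i + j < N:
                C[i + j] = C.get(i + j, Fraction(0)) + a * b
    return {k: v for k, v in C.items() if v}

def inv(A):
    # invert A = c*w^m*(1 + u) via the geometric series for 1/(1 + u)
    m = min(A)
    c = A[m]
    u = {k - m: v / c for k, v in A.items() if k != m}
    geo = {0: Fraction(1)}
    term = {0: Fraction(1)}
    for _ in range(N + 1):
        term = mul(term, {k: -v for k, v in u.items()})
        if not term:
            break
        geo = add(geo, term)
    return {k - m: v / c for k, v in geo.items()}

def ddw(A):
    # formal derivative d/dw
    return {k - 1: k * v for k, v in A.items() if k != 0}

# Sample values (arbitrary, just to make the cancellation visible):
a_m2, a_m1 = Fraction(7), Fraction(11)               # a_{-2}, a_{-1}
b0, b1, b2 = Fraction(2), Fraction(3), Fraction(5)   # coordinate change

z = {1: b0, 2: b1, 3: b2}   # z = b0*w + b1*w^2 + b2*w^3
zinv = inv(z)

# omega = (a_{-2} z^{-2} + a_{-1} z^{-1}) dz, expressed as f(w) dw:
f = mul(add({k: a_m2 * v for k, v in mul(zinv, zinv).items()},
            {k: a_m1 * v for k, v in zinv.items()}),
        ddw(z))

print(f[-1])  # coefficient of w^{-1} dw: 11 = a_{-1}, unchanged
print(f[-2])  # coefficient of w^{-2} dw: 7/2 = a_{-2}/b0, changed
```

Since everything is done with exact rationals, the printed coefficient of w to the minus one dw comes out to exactly a minus one, while the coefficient of w to the minus two dw visibly depends on b naught.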
Well, if we take any one form a minus n z to the minus n plus and so on, times dz, and we change z to b naught w plus b one w squared and so on, then this becomes some complicated coefficient times w to the minus n plus some complicated coefficient times w to the one minus n and so on, all times dw. And what are these complicated coefficients? Well, they're polynomials in a minus n, a one minus n, and so on, and they're also polynomials in b naught, b one, and so on, except that we also need to allow b naught to the minus one. So if you expand them all out, you get some mess, but it's obvious that this mess is going to consist of polynomials in these variables, and furthermore, they're going to be polynomials with integer coefficients. And that's really all we need to know. We want to show that the polynomial giving the coefficient of w to the minus one dw is just a minus one; a priori it's some horrible mess, and we want to show that everything else cancels out. And this is in fact really easy to do without doing any calculation at all. For we notice that it is a minus one over the complex numbers, so it is a minus one if we work over the integers, since the integers sit inside the complex numbers. And notice that it's going to be the same polynomial with integer coefficients whatever field we work over. If you're working over a finite field, the coefficients aren't literally integers, but if you do the calculation with integer coefficients and then reduce the integers modulo p, that's what you get if you work over a finite field of order p. So the coefficient is a minus one over any field. This is a really rather weird proof, because we're proving something over a field of characteristic p by using contour integration over the complex numbers and then observing that this forces it to be true over any field. The point is that Z is actually a subring of the complex numbers, so in order to prove an identity with integer coefficients, it's enough to prove it with complex coefficients.
And for that we can use contour integration. And once we've proved it over the integers, we can then just reduce it mod p and get it for any field. So this rather messy identity that we need in order to show the residue is well defined: the proof over the complex numbers does actually prove it over all fields, by this rather strange piece of black magic. It looks as if we're cheating, but it is actually valid. Finally, we want to prove that the sum over all points of the residue of omega at p is equal to zero. I'm not going to prove this in detail; I'm just going to sketch the main idea of the proof. First of all, it's easy to prove for the projective line P1. On the projective line P1, a one form is just a meromorphic function times dz. You can write the meromorphic function as a sum of partial fractions and just check the statement by explicit calculation for each partial fraction. So for P1 we can just say: proof by calculation. Now for any curve C, what we do is map it to P1 by a finite map, which we'll call f. And if we've got a one form omega on C, there's a way of pushing the one form forward to a one form on P1; this is sometimes called taking the trace of the one form. What you do is define this trace and then check that the sum over the points q in f inverse of p of the residue of omega at q is equal to the residue of the trace of omega at p, where p is a point of the projective line. And if you can prove this, it obviously proves that the sum of the residues of omega is equal to zero, because the sum of the residues of omega over all of C is then the sum of the residues of the trace of omega over P1, and this is equal to zero by the explicit calculation. So the proof that the sum of the residues of omega is zero reduces to checking this compatibility with the trace. And again, this is easy to prove over the complex numbers, just by using the residue theorem.
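In symbols, with f a finite map from C to the projective line and Tr denoting the trace of a one form (notation supplied here), the compatibility being checked and the reduction it gives are:

```latex
% Compatibility of residues with the trace along f : C -> P^1:
\sum_{q \in f^{-1}(p)} \operatorname{Res}_q(\omega)
  \;=\; \operatorname{Res}_p\bigl(\operatorname{Tr}_f\,\omega\bigr)
  \qquad (p \in \mathbb{P}^1).

% Summing over all p in P^1 then gives the residue theorem on C:
\sum_{q \in C} \operatorname{Res}_q(\omega)
  \;=\; \sum_{p \in \mathbb{P}^1}
        \operatorname{Res}_p\bigl(\operatorname{Tr}_f\,\omega\bigr)
  \;=\; 0 .
```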
And you can prove it over all fields by a trick similar to the way we showed that the residue is well defined: this too turns out to be an identity between polynomials with integer coefficients, and since it's true over the complex numbers, it's true over the integers and therefore true over any field. Okay, so that completes the discussion of the proof of the Riemann-Roch theorem.