This lecture is part of an online algebraic geometry course on schemes, and it will be about vector bundles on the one-dimensional projective line. You remember that earlier we were looking at coherent sheaves on P^1, and we found some examples of sheaves which are line bundles, or invertible sheaves. These are locally free, in the sense that locally they look like a sum of a finite number of copies of the sheaf of regular functions. This suggests the general problem of classifying the locally free sheaves on some scheme or manifold or whatever X.

In general this is hard if the dimension of X is greater than one and the rank of the sheaf is greater than one. If the dimension is one, so you are working over a space of dimension one, we can quite often do it, and if you are just looking at sheaves of rank one, in other words invertible sheaves, you can quite often do that too. For example, even classifying the locally free sheaves on projective space of dimension greater than one is a very hard open problem in general. So we are going to look at the case where X has dimension one and is the projective line, which is reasonably easy, as we will see. Later we will look at the case of invertible sheaves on n-dimensional projective space, which is also reasonably easy.

So we are going to work on the following problem: classify the vector bundles on P^1, where a vector bundle is essentially just another name for a locally free sheaf of finite rank. This was more or less done by Grothendieck, but it turns out that the essential part of the proof was done earlier by Birkhoff, who didn't work with schemes or sheaves but with matrices. In fact, even before that, Hilbert proved a rather similar theorem, so who actually first proved this theorem is a little bit unclear. What we are going to do is show that any vector bundle on P^1 is a direct sum of line bundles; in other words, any locally free sheaf is a direct sum of invertible sheaves. This is definitely false for P^n with n greater than 1: an example of a vector bundle that is not a direct sum of line bundles is the tangent bundle, and as I said, classifying all of them in general is a big problem.

So let's take a look at P^1. As we all remember, P^1 is a union of two copies of A^1, glued along A^1 minus the origin, whose coordinate ring is the Laurent polynomial ring k[x, x^-1]; the two copies of A^1 each have a coordinate ring isomorphic to k[x]. If we want a vector bundle on P^1, we get it by gluing two vector bundles on the two copies of A^1 along this overlap. So let's think about what that means. First of all, we need to know the vector bundles on A^1. That's easy: they correspond to modules over the ring k[x] that are locally free, and these are really easy to classify, because k[x] is a principal ideal domain, so finitely generated locally free modules over it are all free. So the only locally free modules are just the free modules k[x]^n for some n.

Classifying vector bundles on A^n for n greater than one is hard. The Serre conjecture says that the only ones are the obvious ones, direct sums of copies of the trivial bundle, and this was eventually proved by Quillen and Suslin. So even for the simplest possible case of affine space, classifying vector bundles is quite hard. Anyway, fortunately for A^1 this is much easier. So what we have got to do is glue together two vector bundles.
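The coordinate ring k[x, x^-1] of the overlap will do all the work in what follows, and the one fact about it we will use over and over is that its units are exactly the nonzero constants times powers of x. As a small computational aside (a sanity check added here, not part of the lecture, with made-up helper names), this sympy sketch tests whether a matrix of Laurent polynomials is honest gluing data, in other words invertible over k[x, x^-1], by testing whether its determinant has that form.

```python
import sympy as sp

x = sp.symbols('x')

def is_laurent_unit(f):
    """True if f = c * x^k for a nonzero constant c and an integer k,
    i.e. f is a unit of the Laurent polynomial ring k[x, x^-1]."""
    num, den = sp.fraction(sp.cancel(f))   # write f as num/den in lowest terms
    is_monomial = lambda g: len(sp.Poly(g, x).monoms()) == 1
    return is_monomial(num) and is_monomial(den)

def is_gluing_matrix(M):
    """True if M lies in GL_n(k[x, x^-1]): its determinant must be a unit."""
    return is_laurent_unit(M.det())

print(is_gluing_matrix(sp.Matrix([[x, 0], [0, x**-2]])))   # True:  det = x^-1
print(is_gluing_matrix(sp.Matrix([[x - 1, 0], [0, 1]])))   # False: det = x - 1
print(is_gluing_matrix(sp.Matrix([[1, x], [0, x**2]])))    # True:  det = x^2
```

The third matrix is perfectly good gluing data; it is also exactly the matrix that will give us trouble when we try to diagonalize it later on.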
Let's write the coordinate ring of the first A^1 as k[x] and the coordinate ring of the second one as k[x^-1], so the coordinate ring of the intersection is k[x, x^-1]. We pick a sheaf on the first copy, which just corresponds to picking a free module k[x]^n, a sum of n copies of k[x]; likewise we pick a sum of n copies of k[x^-1] on the second copy, and we have to glue them over k[x, x^-1]. The first restricts to a module k[x, x^-1]^n on the intersection, and so does the second, and we want to glue these together. And how do you glue them? You need to give an isomorphism from one module to the other, and that is obviously given by an element of the n by n general linear group over the ring k[x, x^-1]. So the problem is something to do with this group: we can specify a vector bundle by picking an element of GL_n(k[x, x^-1]).

However, we can also change k[x]^n by any automorphism, which is an element of GL_n(k[x]), and similarly we can act on k[x^-1]^n by an element of GL_n(k[x^-1]). Doing this changes the gluing element without really affecting the vector bundle. Acting by one of these groups corresponds to multiplying the gluing matrix on the right, and the other group acts on it on the left. So what we end up with is the following double coset space:

GL_n(k[x]) \ GL_n(k[x, x^-1]) / GL_n(k[x^-1]).

Elements of this space correspond to rank n vector bundles on P^1, and every rank n vector bundle arises from it.

So we have to write down a matrix, and we can act on it on the left by GL_n(k[x]), which corresponds to row operations, and on the right by GL_n(k[x^-1]), which corresponds to column operations. For instance, you can add any nonnegative power of x (or any polynomial in x) times any row to any other row, and you can add any nonpositive power of x (or any polynomial in x^-1) times any column to any other column, and so on. What we want to show is that every matrix is equivalent to a diagonal matrix: every matrix in GL_n(k[x, x^-1]) can be turned, by multiplying on the left by GL_n(k[x]) and on the right by GL_n(k[x^-1]), into a diagonal matrix. A diagonal gluing matrix obviously means that your sheaf splits as a sum of line bundles, so your vector bundle is a direct sum of line bundles. So our theorem, that every vector bundle is a sum of line bundles, is reduced to a question of linear algebra over these Laurent polynomial rings.
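To make the allowed moves concrete, here is a small sympy sketch (an illustration added here with hypothetical helper names, not code from the lecture). Each move is multiplication by an elementary matrix of determinant 1 or -1, so the left factors stay in GL_n(k[x]) and the right factors stay in GL_n(k[x^-1]).

```python
import sympy as sp

x = sp.symbols('x')

def add_row(M, i, j, f):
    """Row operation row_i += f * row_j; legal when f lies in k[x].
    This is left multiplication by an elementary matrix in GL_n(k[x])."""
    E = sp.eye(M.rows)
    E[i, j] = f                  # det(E) = 1
    return E * M

def add_col(M, i, j, g):
    """Column operation col_i += g * col_j; legal when g lies in k[x^-1].
    This is right multiplication by an elementary matrix in GL_n(k[x^-1])."""
    E = sp.eye(M.cols)
    E[j, i] = g                  # det(E) = 1
    return M * E

def swap_cols(M, i, j):
    """Swapping two columns is right multiplication by a permutation
    matrix, which lies in GL_n of either ring."""
    M = M.copy()
    M.col_swap(i, j)
    return M
```

Row swaps and rescaling a row or column by a nonzero constant are likewise allowed; what is never allowed is rescaling by a nonconstant power of x, since a matrix like diag(1, ..., x, ..., 1) is invertible over neither k[x] nor k[x^-1].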
So let's see what happens. We can use row operations to make the first column of the form (x^t, 0, 0, ..., 0), with who knows what in the remaining columns. That's because we are working over a Euclidean domain, and we can use the usual Euclidean algorithm operations to find the greatest common divisor of the entries of the first column. This greatest common divisor is a power of x, because it divides the determinant, which is a unit c·x^k of k[x, x^-1]; so by applying the usual Euclidean reductions we end up with just a power of x at the top, and we can use that to clear out all the other entries of the column. And we can repeat this, so we get x^(something) in the top left with zeros below it, x^(something) in the next diagonal entry with zeros below it, and so on, with some unknown junk above the diagonal.

Unfortunately, at this point we seem to get stuck. Suppose we have got down to the following matrix:

[ 1  x   ]
[ 0  x^2 ]

We want to get rid of this x, and we can't do it using row operations, because that would mean subtracting x^2 times some nonnegative power of x from the x, which doesn't work; and we can't do it using column operations, because you would have to multiply the 1 by a positive power of x, and only negative powers are available. So we seem to be stuck, and we need to backtrack. You can actually get rid of the x, but it involves turning the nice zero entry into something nonzero. We can subtract x times the first row from the second, which gives

[ 1   x ]
[ -x  0 ],

then swap the two columns to get

[ x  1  ]
[ 0  -x ],

and then subtract x^-1 times the first column from the second, which reduces the matrix to

[ x  0  ]
[ 0  -x ].

So we can actually diagonalize the matrix, but we have to be prepared to backtrack.
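Replaying this with the helpers from the sketch above (again an added illustration, not the lecture's own code), the three moves look like this:

```python
# Start from the stuck matrix and diagonalize it by backtracking.
M = sp.Matrix([[1, x], [0, x**2]])
M = add_row(M, 1, 0, -x)      # subtract x * (first row) from the second:    [[1, x], [-x, 0]]
M = swap_cols(M, 0, 1)        # swap the two columns:                        [[x, 1], [0, -x]]
M = add_col(M, 1, 0, -1/x)    # subtract x^-1 * (first col) from the second: [[x, 0], [0, -x]]
print(M)                      # Matrix([[x, 0], [0, -x]])
```

Note that the determinant stays x^2 up to sign throughout, consistent with the fact that every move multiplies on the left or right by a matrix of determinant plus or minus 1.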
Let's stop and think about why this went wrong. What went wrong is that the power of x in the top left corner was smaller than the power of x in the other diagonal entry, so we could use neither of them to get rid of the off-diagonal entry. If the power of x in the top left corner had been at least as big as the other one, we could have used row or column operations to eliminate everything else in that row. So this suggests that we should start by making the matrix of the form with x^t in the top left corner and zeros below it, with t as large as possible: the problem arose when the corner power was smaller than a power elsewhere, so maybe if we make it as big as possible we won't run into that.

First of all, how do we know there is a largest possible t? Maybe we could keep changing the matrix and this power would keep getting bigger and bigger. You can see there is a bound by looking at the largest power of x dividing the first column, where by dividing we mean that every entry of the column is a polynomial in x times that power of x. This is not changed by multiplying by elements of GL_n(k[x]) on the left, as you can easily check, since these just perform row operations within each column; and multiplying by GL_n(k[x^-1]) on the right cannot increase the biggest power of x appearing in the matrix. So there is a largest power of x that can divide all entries of a column, and it is bounded by the powers appearing in the original matrix. In summary, there is a maximum power x^t that can occur in the top left corner, and now it is quite easy to finish off.

What we do is choose our matrix with x^t in the top left corner and t as large as possible, and then apply the same procedure to the next column, so we get x^s in the second diagonal entry, zeros below it, some entry above it, and who knows what rubbish further to the right. Let's think about what goes on with the entry above x^s.

Step one: using column operations we can assume this entry involves only powers x^(t+k) with k greater than 0, because we can subtract the corner entry x^t times negative powers of x to kill off all powers of x less than or equal to x^t. So first of all we do that, and the entry only involves powers of x bigger than x^t.

Step two: we notice that s must be less than or equal to t, since otherwise every entry of the second column would be divisible by a power of x bigger than x^t, and swapping that column into the first position would contradict our choice of t as the biggest possible.

Step three: now that s is less than or equal to t, we can use row operations to kill the remaining entry. Because we have arranged that it only involves powers of x bigger than x^t, and s is at most t, each power x^(t+k) appearing in it is x^s times the nonnegative power x^(t+k-s), so subtracting suitable polynomial multiples of the second row from the first kills it off.

So we get a matrix that now looks like: x^t, then zeros in the rest of the first row and column, x^s, then zeros in the rest of its row and column, and so on. The key point is that we have got a zero above x^s as well as below it, and we can continue like this, mumbling something about induction, to diagonalize the whole matrix. So we get a diagonal matrix with entries x^(t_1), x^(t_2), x^(t_3), and so on, with zeros elsewhere, and this corresponds to a sum of line bundles: each diagonal entry x^(t_i) gives a line bundle O(n_i), where n_i is either t_i or minus t_i, and I can never remember which. So any locally free sheaf on P^1 is of the form O(n_1) + O(n_2) + and so on.

And notice that the numbers n_i are unique up to order. There is a very easy way of seeing this, which I will leave as a one-line exercise: look at the dimensions of the spaces of global sections of the sheaf twisted by the line bundle O(n). If you count global sections of the twists for various values of n, you see that these dimensions determine the numbers n_i. So this gives a complete classification of locally free sheaves on P^1.
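To make the exercise concrete, here is a plain Python sketch (my own elaboration, with made-up helper names; the ingredient it assumes is the standard section count h^0(P^1, O(n)) = max(n + 1, 0)). For E = O(n_1) + ... + O(n_r), the twist E(m) has h^0 equal to the sum of max(n_i + m + 1, 0), and the first differences of this function of m count how many n_i are at least -m, so the section counts really do recover the multiset of n_i.

```python
def h0(ns, m):
    """dim H^0 of (O(n_1) + ... + O(n_r)) twisted by O(m),
    using h^0(P^1, O(n)) = max(n + 1, 0)."""
    return sum(max(n + m + 1, 0) for n in ns)

def recover(ns, bound=10):
    """Recover the multiset {n_i} from twisted section counts alone
    (assumes every |n_i| < bound)."""
    count_at_least = lambda m: h0(ns, m) - h0(ns, m - 1)   # #{i : n_i >= -m}
    result = []
    for m in range(-bound, bound + 1):
        # number of i with n_i exactly equal to -m:
        result += [-m] * (count_at_least(m) - count_at_least(m - 1))
    return sorted(result, reverse=True)

print(recover([3, 0, 0, -2]))   # [3, 0, 0, -2]
```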
Now I want to compare locally free sheaves on P^1 with representations of the group S^1, the set of complex numbers z with |z| = 1, in other words the circle group. In some ways these look very similar, and in other ways they look quite different, so first of all let's say why they are similar. Here we have indecomposable objects, the line bundles O(n), and there we have indecomposable objects a_n, where a_n is just C with S^1 acting as multiplication by z^n. Representations of the circle are very easy: the irreducible ones just correspond to integers. And we notice that O(m) tensor O(n) = O(m+n), and similarly a_m tensor a_n = a_(m+n). Every object is a sum of these indecomposables, in both cases, and the decomposition is unique up to isomorphism; in other words, the number of times each O(n) or a_n occurs is determined. And the endomorphisms of O(n) or of a_n are just the field k you are working over, which I guess would be the complex numbers in this case. So the classification of locally free sheaves on P^1 looks very much like the classification of representations of the circle: there is a natural one-to-one correspondence between the isomorphism classes of objects in both.

However, you shouldn't confuse the categories: the category of locally free sheaves is really somewhat different from the category of representations of S^1, for the following reasons. Suppose we look at the morphisms. Hom(O(m), O(n)) can be nonzero when m is not equal to n; in fact it is always nonzero when m is less than or equal to n. On the other hand, Hom(a_m, a_n) is zero whenever m is not equal to n. So although the objects in these categories correspond, the morphisms don't, and this has some slightly bizarre consequences if you are used to representations of the circle. Representations of the circle are completely reducible, which means all exact sequences split; this is true for representations of any compact group. Here, on the other hand, we get non-split exact sequences, and we have had some examples of these before.

For instance, we get

0 -> O(0) -> O(1) + O(1) -> O(2) -> 0,

and this is non-split. So if we have got the bundle in the middle and we start trying to write it as a sum of line bundles, we might find the line bundle O(0) as a sub-bundle of it. But if we take that, we get stuck: we can't write the middle as a sum of O(0) and O(2). In fact, this is very similar to the way we got stuck with the matrix earlier and had to backtrack. Picking a 1 in the top left-hand corner of that matrix is rather like picking O(0) as a sub-bundle here, and in both cases we get stuck if we try to write the thing as a sum of line bundles. Okay, I think that's all about vector bundles on the projective line for the moment.
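As a final added aside (mine, not the lecture's): one way to see the non-splitting numerically is that a splitting would be a nonzero map O(2) -> O(1) + O(1), and dim Hom(O(m), O(n)) = h^0(O(n - m)) = max(n - m + 1, 0) vanishes for m = 2, n = 1.

```python
def hom_dim(m, n):
    """dim Hom(O(m), O(n)) on P^1, which equals h^0(O(n - m))."""
    return max(n - m + 1, 0)

print(hom_dim(0, 2))   # 3: maps O(0) -> O(2), the degree-2 forms
print(hom_dim(2, 1))   # 0: no nonzero maps O(2) -> O(1), so the
                       # surjection O(1) + O(1) -> O(2) has no section
                       # and the sequence cannot split
```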