This algebraic geometry video will be about elimination theory. In his book on algebraic geometry, Hartshorne mentions it rather briefly as Theorem 5.7A, and I want to expand on this a bit. Incidentally, in Theorem 5.7 Hartshorne refers to van der Waerden's book on algebra for the work on elimination theory. Unfortunately, we won't find it there anymore, because the chapter on elimination theory was eliminated in the second edition of van der Waerden's book. Elimination theory has caused a fair amount of controversy. André Weil, in his famous book Foundations of Algebraic Geometry, had a rather notorious footnote that I can show you here. I'm going to need to magnify it a bit so you can read it. There we go. He says: "the device that follows, which, it may be hoped, finally eliminates from algebraic geometry the last traces of elimination theory". So André Weil did not like elimination theory. On the other hand, Abhyankar wrote a famous poem about elimination theory, titled "Polynomials and power series, may they forever rule the world", with a line in it saying "eliminate the eliminators of elimination theory", which it's compulsory to quote whenever you discuss elimination theory. Anyway, what is this elimination theory that people get so excited about? Let me turn the magnification back down. Here's an example. Suppose you've got two polynomials, such as x^3 y^4 - 7x^2 - x y^8 and 3x^2 y^5 + 4y^2 + x^4 y^7, and what we want to do is eliminate y. What I mean by this is to find a single equation in x that you get by, in some sense, working out y from one equation and substituting it into the other.
Well, that looks like a rather hairy problem, because you would have to find y by solving an equation of degree eight in y, which is going to be rather a mess, and substituting that into the other equation is going to be a real nightmare. In fact, you can see roughly what answer you're going to get: the first equation has total degree nine and the second has total degree 11, so by Bézout's theorem there should be 9 times 11, which is 99, points of intersection. So there should be 99 possible values of x you get by eliminating y, and x should satisfy a polynomial of degree 99. What I want to do now is write down this polynomial explicitly. How on earth do we do that? Let's look at a more general problem. Suppose I've got two polynomials f(x) = a_m x^m + a_{m-1} x^{m-1} + ... + a_0 and g(x) = b_n x^n + ... + b_0. What is the condition for them to have a common root? I want to find some condition on the coefficients a_i and b_j that is necessary and sufficient for a common root. First of all, there are some slight complications if a_m or b_n is zero, so let's just assume a_m and b_n are nonzero for the moment; I'll remove these conditions later. The condition for a common root can be written as follows: f(x) p(x) = g(x) q(x) for some nonzero polynomials p and q with deg p < n and deg q < m. If f and g have a common root alpha, you can just put p = g/(x - alpha) and q = f/(x - alpha). Conversely, if this condition holds, then f and g do in fact have a common root: if they didn't (working over an algebraically closed field, say), f and g would be coprime, so f would have to divide q, which is impossible since q is nonzero of degree less than m. So the condition for a common root is that you can solve this equation for nonzero polynomials p and q.
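Here is a tiny sketch of that criterion in Python; the helper names are my own, not from the lecture. Given f and g with a known common root alpha, it builds p = g/(x - alpha) and q = f/(x - alpha) by synthetic division and checks that f times p equals g times q.

```python
# Sketch of the common-root criterion f*p == g*q (deg p < deg g, deg q < deg f).
# Coefficient lists are highest-degree-first; all names here are my own.

def polymul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def divide_by_root(coeffs, alpha):
    """Synthetic division by (x - alpha); returns the quotient.
    The discarded remainder is zero when alpha is a root."""
    q = [coeffs[0]]
    for c in coeffs[1:-1]:
        q.append(q[-1] * alpha + c)
    return q

# f = (x-2)(x-3) and g = (x-2)(x+1) have the common root alpha = 2
f = [1, -5, 6]
g = [1, -1, -2]
alpha = 2
p = divide_by_root(g, alpha)   # g/(x-2) = x + 1
q = divide_by_root(f, alpha)   # f/(x-2) = x - 3
print(polymul(f, p) == polymul(g, q))  # True: f*p == g*q
```

Both sides are the cubic (x - 2)(x - 3)(x + 1), which is why the identity holds.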
Well, if you expand this out and compare the coefficients of each power of x, you get a whole lot of homogeneous linear equations in the coefficients of p and q, whose coefficients depend on the a's and the b's. Now, the condition for a system of homogeneous linear equations to have a non-trivial solution is that a certain determinant vanishes. So we have to write down a determinant that vanishes exactly when f and g have a common root, which basically means writing down the matrix of all these equations. The matrix is not difficult to work out; it just requires a certain amount of bookkeeping. It looks like this. First you write down the coefficients a_m, ..., a_0 of f, followed by a certain number of zeros; then you write down the coefficients of f again, shifted one place to the right, followed by zeros; and altogether we do this n times, so the last of these rows is 0, ..., 0, a_m, ..., a_0. Then we do the same thing all over again with the other polynomial: b_n down to b_0 followed by zeros, then 0, b_n, ..., b_0, and so on, and this time we take m rows. We take the determinant of this (m+n) by (m+n) matrix and set it equal to zero. This determinant is called the resultant. The ending "-ant" on a word tells you two things. First, it probably refers to some sort of invariant; the determinant is the best-known example. Second, the terminology was probably invented by Sylvester, who loved inventing funny names for things. He invented a whole collection of names for similar invariants, like catalecticant, discriminant, harmonizant, canonizant, and so on. Most of this terminology has been more or less forgotten; "determinant" obviously still lives on, and "resultant" is common enough for people to remember. People later invented more of them, like the Bézoutian, and so on.
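The bookkeeping above is easy to mechanize. Here's a minimal Python sketch of my own (not the lecture's code) that builds the Sylvester matrix exactly as described, n shifted copies of f's coefficients followed by m shifted copies of g's, and takes its determinant by cofactor expansion, which is fine for small cases.

```python
# Minimal Sylvester-matrix resultant; coefficient lists are highest-degree-first.
# Function names are my own illustration, not a standard API.

def sylvester(f, g):
    """Sylvester matrix of f (degree m) and g (degree n):
    n shifted rows of f's coefficients, then m shifted rows of g's."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = [[0] * i + f + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (size - n - 1 - i) for i in range(m)]
    return rows

def det(mat):
    """Determinant by cofactor expansion along the first row
    (exponential in general, fine for these small matrices)."""
    if len(mat) == 1:
        return mat[0][0]
    total = 0
    for j, a in enumerate(mat[0]):
        if a == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in mat[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def resultant(f, g):
    return det(sylvester(f, g))

# f = x^2 - 5x + 6 and g = x^2 - x - 2 share the root x = 2,
# so their resultant vanishes:
print(resultant([1, -5, 6], [1, -1, -2]))  # 0
# x - 1 and x + 1 have no common root, so the resultant is nonzero:
print(resultant([1, -1], [1, 1]))          # 2
```

The point of the construction is that the resultant is a polynomial in the a's and b's, so "having a common root" becomes a single algebraic condition on the coefficients.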
Well, there's a slight problem: what do we do about a_m or b_n being zero? You can see that this resultant also vanishes if both a_m and b_n are zero; if only one of them is zero, the resultant need not vanish. The condition that a_m is zero can be thought of as saying that the polynomial has a root at infinity. If we look at a_m x^m + ... + a_0, this normally has m roots when a_m is nonzero; if a_m is zero, it has fewer than m roots, so we pretend it's also got a root at infinity. We can make more sense of this if we look at the corresponding homogeneous polynomial a_m x^m + a_{m-1} x^{m-1} y + ... + a_0 y^m. Then we can think of a root as being a point in the projective line over our field k, which is just k together with a point at infinity: the point at infinity is (1 : 0), where y vanishes, and the other points are (x : 1). And then you can see that the vanishing of the resultant is equivalent to the two homogeneous polynomials a_m x^m + ... + a_0 y^m and b_n x^n + ... + b_0 y^n having a common zero in one-dimensional projective space, which is k together with a point at infinity. So let's see an example of this. First of all, when does the polynomial a x^2 + b x + c = 0 have a double root? Well, a polynomial f has a double root exactly when f and its derivative f' have a root in common. So we take f to be our polynomial and g to be f' = 2a x + b. Now we work out the Sylvester matrix: the top row is a, b, c, the coefficients of f, and then we put in the coefficients of g twice: 2a, b, 0 and 0, 2a, b. We take this determinant, which is really easy to work out: it comes to -a(b^2 - 4ac), and up to the factor -a this is the usual discriminant.
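As a sanity check on the quadratic case, here is that 3-by-3 determinant computed directly; the code and names are my own sketch. Note the overall sign that comes with this row ordering, the usual sign ambiguity of resultants.

```python
# Numeric check of the 3x3 Sylvester determinant for f = a x^2 + b x + c
# and g = f' = 2a x + b. All names here are my own illustration.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def quadratic_resultant(a, b, c):
    """Sylvester determinant of f and f', with the row of f on top."""
    return det3([[a, b, c],
                 [2 * a, b, 0],
                 [0, 2 * a, b]])

# f = x^2 - 4x + 4 = (x - 2)^2 has a double root, so the resultant vanishes:
print(quadratic_resultant(1, -4, 4))   # 0
# f = x^2 - 5x + 6 has the distinct roots 2 and 3:
print(quadratic_resultant(1, -5, 6))   # -1, which is -a*(b^2 - 4*a*c)
```

So the determinant vanishes exactly when the familiar discriminant b^2 - 4ac does (assuming a is nonzero), even though it carries an extra factor of -a.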
And we've also got this possibility a = 0, because we said that if a is zero, then f and its derivative both have a sort of zero at infinity. So if a is nonzero, then f has a double root if and only if the discriminant b^2 - 4ac is zero, which is the result we all remember from high-school algebra. Let's have a look at a slightly more complicated example. Suppose f = x^3 + b x + c; I'm leaving out the term in x^2 just to make things easier. When does this polynomial have a double root? Well, we look at its derivative, which is 3x^2 + b, and then we look at the Sylvester matrix. The two rows coming from f are 1, 0, b, c, 0 and 0, 1, 0, b, c, and then we have to put in the coefficients of the derivative three times: 3, 0, b, 0, 0, then 0, 3, 0, b, 0, then 0, 0, 3, 0, b. We want the determinant of this, and it's not terribly difficult to work out because it's got masses of zeros all over the place. You can quite quickly subtract three times the first row from the third and three times the second row from the fourth, and reduce it to something that's easy to evaluate. You find it's 4b^3 + 27c^2. This isn't quite the discriminant of the cubic, because with resultants there's always a sign problem: you never quite know whether it should be plus or minus this expression. In fact, the discriminant of the cubic is minus this expression, so it's -4b^3 - 27c^2. But anyway, the vanishing of this expression is the condition for the polynomial to have a multiple root. In the next lecture, we will use resultants to show that projective varieties have a property known as being proper, which is a sort of analogue of compactness.
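And here is a numerical check of the 4b^3 + 27c^2 formula, again my own sketch rather than anything from the lecture, this time computing the Sylvester determinant by exact Gaussian elimination over the rationals.

```python
from fractions import Fraction

def resultant(f, g):
    """Determinant of the Sylvester matrix of f and g (coefficients
    highest-degree-first), via exact Gaussian elimination."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = [[Fraction(x) for x in [0] * i + f + [0] * (n - 1 - i)] for i in range(n)]
    rows += [[Fraction(x) for x in [0] * i + g + [0] * (m - 1 - i)] for i in range(m)]
    det = Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if rows[r][col] != 0), None)
        if pivot is None:
            return 0                      # singular matrix: resultant is zero
        if pivot != col:
            rows[col], rows[pivot] = rows[pivot], rows[col]
            det = -det                    # a row swap flips the sign
        det *= rows[col][col]
        for r in range(col + 1, size):
            ratio = rows[r][col] / rows[col][col]
            rows[r] = [x - ratio * y for x, y in zip(rows[r], rows[col])]
    return int(det)

# f = x^3 - 3x + 2 = (x - 1)^2 (x + 2) has a double root, so
# the resultant of f and f' = 3x^2 - 3 vanishes, matching 4b^3 + 27c^2
# with b = -3, c = 2:
print(resultant([1, 0, -3, 2], [3, 0, -3]))   # 0
# f = x^3 + 1 has three distinct roots; here 4b^3 + 27c^2 = 27:
print(resultant([1, 0, 0, 1], [3, 0, 0]))     # 27
```

For any monic cubic x^3 + bx + c this determinant equals 4b^3 + 27c^2, so it vanishes exactly when the cubic has a multiple root, as the lecture says.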