This lecture is part of an online commutative algebra course and will be about the Bernstein-Sato polynomial. This lecture will be a bit different from all the others because it uses some slightly non-commutative algebra. The point is that a lot of the theory of commutative algebra extends to rings that aren't quite commutative. Some examples of these might be exterior algebras: if you take the exterior algebra of a vector space, then this has the property that x times y is equal to minus y times x instead of being y times x. And you can have algebras that almost commute, for instance a ring of differential operators. The ring of differential operators with polynomial coefficients might be generated over the complex numbers, say, by x and by differentiation with respect to x, and this gives you a perfectly good ring. However, Leibniz's rule says that d/dx times x is equal to x times d/dx plus 1. Here we're thinking of d/dx as an operator, not as something acting on x. So d/dx and x don't quite commute, but they're not far off, because the amount they fail to commute by is just 1. You can also have things like Clifford algebras, which you get in the Lie groups course, where you might have a quadratic form, and ab plus ba might be given by, say, the inner product of a and b if you've got some sort of inner product on a vector space. So all of these have the property that ab is closely related to ba. Roughly speaking, ab is equal to ba or minus ba plus something simpler, whatever simpler means; for instance here d/dx and x commute up to a constant, and a constant is obviously simpler than x or d/dx. And if you've got any algebra which is close to being commutative in this sense, then you can quite often apply the techniques of commutative algebra to it.
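The commutation relation d/dx ∘ x = x ∘ d/dx + 1 is easy to check symbolically; the following sketch (using sympy, which is my own illustration and not part of the lecture) applies both sides of the operator identity to a generic function:

```python
from sympy import symbols, Function, diff, simplify

x = symbols('x')
f = Function('f')

# the operator (d/dx . x), i.e. multiply by x first, then differentiate:
lhs = diff(x * f(x), x)

# Leibniz's rule says this equals (x . d/dx + 1) applied to f:
rhs = x * diff(f(x), x) + f(x)

assert simplify(lhs - rhs) == 0   # [d/dx, x] = 1 as operators
```

Since f is an arbitrary symbolic function, this verifies the identity as a statement about operators, not just about one particular function.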
And I'm going to give an application of this, where we use the ring of differential operators to show the existence of something called the Bernstein, or Bernstein-Sato, polynomial. By the way, if you look up Bernstein polynomial, it turns out there are two totally unrelated things called the Bernstein polynomial. There's a sort of Bernstein polynomial used a bit in numerical analysis, and it has absolutely nothing to do with this Bernstein polynomial; they're named after different Bernsteins. This one is named after Joseph Bernstein. So let's explain what the Bernstein-Sato polynomial is by looking at the gamma function. As everybody knows, the gamma function is Γ(s) = ∫₀^∞ e^(-t) t^(s-1) dt. This converges for the real part of s greater than zero, and if the real part of s is less than or equal to zero, then the factor t^(s-1) diverges too fast near zero. However, you can analytically continue the gamma function by integrating by parts: you integrate the t^(s-1) and differentiate the e^(-t), and if you do that you find the relation Γ(s) = (1/s) Γ(s+1). This can be used to extend Γ(s) to all complex s, possibly with poles: to extend it to the real part of s greater than minus 1 you divide Γ(s+1) by s, which may introduce a pole, and then when you extend to the real part of s greater than minus 2 you might pick up another pole somewhere else, and so on. So you can extend it as a meromorphic function. The key point that makes this work is that t^s has the following property: d/dt of t^(s+1) is equal to (s+1) t^s. This means you can integrate t^s and convert it into t^(s+1) when you're integrating by parts.
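As a sanity check (my own illustration, not from the lecture), the recurrence Γ(s) = Γ(s+1)/s really does continue the gamma function to the left: iterating it pushes the argument into the region of convergence, and the denominator makes the poles at 0, -1, -2, ... visible.

```python
import math

def gamma_ext(s, shifts=10):
    """Continuation via Gamma(s) = Gamma(s + shifts) / (s (s+1) ... (s+shifts-1))."""
    num = math.gamma(s + shifts)   # argument pushed into the convergent region
    den = 1.0
    for k in range(shifts):
        den *= (s + k)             # zeros here are exactly the poles of Gamma
    return num / den

# agrees with the usual gamma function where both are defined:
assert abs(gamma_ext(2.5) - math.gamma(2.5)) < 1e-9
# and continues to negative (non-integer) arguments, e.g. Gamma(-1.5) = 4*sqrt(pi)/3:
assert abs(gamma_ext(-1.5) - math.gamma(-1.5)) < 1e-9
```

Trying `gamma_ext` at s = 0, -1, -2 raises a division by zero, which is the numerical shadow of the poles of the meromorphic continuation.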
So now we come to the Bernstein-Sato polynomial b(s). Here f is a polynomial in several variables x1 up to xn, and suppose we can find a differential operator P(s) such that P(s) f(x)^(s+1) = b(s) f(x)^s. So P(s) is some differential operator in x1 up to xn, d/dx1 up to d/dxn, and s, and b(s) is a polynomial in s. If you can do this, then you can extend the integral over R^n of φ(x) f(x)^s dx, where let's suppose f is everywhere greater than or equal to 0, and φ is some nice function, say smooth with compact support. Now if you look back at the gamma function, you'll notice that e^(-t) is not smooth, because we were only integrating from 0, so it's got a discontinuity at 0, and it doesn't have compact support either; but that doesn't really matter, because these conditions are rather stronger than we need. If φ is smooth and of compact support then you can write down this integral, but you can quite often do it even if φ is a bit more complicated. Anyway, this integral converges for the real part of s greater than zero, because the integrand is perfectly nice there, and then by integrating by parts in a rather complicated way, using the differential operator P(s), you can extend it to all complex values of s, except that as before you'll pick up poles, and the poles have something to do with the zeros of the Bernstein-Sato polynomial. For example, in the gamma function case, if you just take f(t) = t, then we find b(s) = s + 1, with P(s) = d/dt: that's a very simple example.
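Both the trivial example f(t) = t and the harder example f = x² + y³ discussed below can be machine-checked; here is a sketch in sympy. The operator P(s) used for x² + y³ is my own reconstruction (the lecture only states the answer b(s)), and the identity is checked exactly at several integer values of s, where both sides are honest polynomials:

```python
from sympy import symbols, diff, expand, simplify, Rational

t, x, y, s = symbols('t x y s')

# Example 1: f(t) = t.  P(s) = d/dt works, with b(s) = s + 1.
assert simplify(diff(t**(s + 1), t) - (s + 1)*t**s) == 0

# Example 2: f = x^2 + y^3, with b(s) = (s+1)(s+5/6)(s+7/6).
# The operator below is one choice that works (a reconstruction, not from
# the lecture):  P(s) = (1/27) d_y^3 + (s/2 + 3/8) d_x^2 - (1/8) x d_x^3
f = x**2 + y**3

def P_applied(sv):
    g = f**(sv + 1)
    return (Rational(1, 27)*diff(g, y, 3)
            + (Rational(sv, 2) + Rational(3, 8))*diff(g, x, 2)
            - Rational(1, 8)*x*diff(g, x, 3))

def b(sv):
    return (sv + 1)*(sv + Rational(5, 6))*(sv + Rational(7, 6))

# check P(s) f^(s+1) = b(s) f^s at several integer values of s
for sv in range(2, 7):
    assert expand(P_applied(sv) - b(sv)*f**sv) == 0
```

This is exactly the "mountains of linear algebra" flavour mentioned below: finding such a P by hand means solving a sizeable linear system, but verifying a candidate is mechanical.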
Let's look at a slightly more complicated example in several variables. Take f(x1, ..., xn) = x1² + ... + xn². Now you want to find some differential operator which takes (x1² + ... + xn²)^(s+1) to something times (x1² + ... + xn²)^s, so you want to fill in the missing operator and the missing polynomial. Well, that's not too difficult, because you can take d²/dx1² + ... + d²/dxn², the Laplace operator. If you apply the Laplace operator to f^(s+1), then what you get is 4(s+1)(s + n/2) times f^s. So here b(s) = (s+1)(s + n/2), and P(s) would be a quarter of the sum of the (d/dxi)². You might want to normalise the Bernstein-Sato polynomial so the leading coefficient is 1; in fact, the possible polynomials b(s) that can occur form an ideal, so you define the Bernstein-Sato polynomial to be the generator of that ideal with leading coefficient 1, and that identifies it uniquely. But once you've proved that some polynomial like that exists, that's generally all you need. The Bernstein-Sato polynomial is actually rather difficult to work out in general. For example, take f(x, y) = x² + y³, so it's only slightly more complicated than x² + y². In this case it's really rather hard to calculate the Bernstein-Sato polynomial. I'll give you the answer: it's (s+1)(s + 5/6)(s + 7/6). So you see that's already getting fairly complicated, and if you try to work it out, or even check it, you find yourself having to deal with mountains of linear algebra. Of course there are lots of computer algebra systems these
days that will compute the Bernstein-Sato polynomial for you, but it's really not at all trivial to compute even in quite easy examples like this. So the main theorem, proved by Bernstein (and, in a related setting, by Sato), is the following: every non-zero complex polynomial f has a Bernstein-Sato polynomial. That just means there's some differential operator P(s) satisfying the equation above with b(s) not identically zero. I'm going to explain how to prove this using ideas from commutative algebra in the next couple of lectures, but for this lecture I'll just show you that the Bernstein-Sato polynomial is really quite powerful by giving an application of it. Here's an easy corollary of the existence of a Bernstein-Sato polynomial: the Malgrange-Ehrenpreis theorem. This says that every differential operator with constant coefficients has a fundamental solution. Let's think about what this means. Call the differential operator D; what you need is a fundamental solution F, so that DF is the Dirac delta function. Of course the Dirac delta function is not really a function, it's a distribution. Here D is a differential operator in several variables, and if you can find a fundamental solution of it, then you can solve the equation Df = g for any reasonable function g by convolving: informally, you write g as a sort of infinite continuous linear combination of delta functions. So if you can find a fundamental solution, then you can pretty much solve this differential equation for any function g. For a long time it was a major open question in the theory of linear differential equations to show that any such differential operator has a fundamental solution. This was finally proved by Malgrange and Ehrenpreis, and at the time it was considered a really big, difficult theorem. To show you how powerful the Bernstein-Sato polynomial is, we will
just show that this follows almost trivially from its existence. So the problem we want to solve is DF = δ. What we're going to do is take Fourier transforms. The Fourier transform in several variables converts differentiation into multiplication by the coordinates, so it converts any differential operator with constant coefficients into a polynomial q, it converts F into its Fourier transform F̂, and it converts the Dirac delta function into the constant function 1. So what we're trying to solve is the equation q F̂ = 1, where q is a polynomial, F̂ is some distribution, and 1 is the constant function. Now you look at this and think, well, this is perfectly trivial to solve: let's try F̂ = 1/q. What's wrong with that? If q is non-zero everywhere, then this gives a perfectly good solution, because 1/q is a perfectly good function. The problem is what happens if q has zeros: near a zero of q, 1/q need not be locally integrable, and if it's not locally integrable, then it's not at all clear how to make it into a distribution in general. Now if the zeros of q are fairly simple, for instance if q has a simple zero along a non-singular variety, then it's very easy to make 1/q into a distribution, and you can even do this with more complicated things: for instance, if the zero set is several nice smooth varieties meeting transversely, so we say the singularities have normal crossings, then again it's quite easy to make sense of 1/q as a distribution. The problem is that the zeros of q might form some very complicated singularity. For instance, for a polynomial in two variables, we might have a singularity like y³ = x⁴, and it's not immediately obvious how to turn 1/q into a distribution; in higher dimensions things get even more complicated. Well, there's one
way to solve this using a really big sledgehammer, which is to quote Hironaka's theorem on resolution of singularities. What Hironaka's theorem says, roughly, is that given any singularity you can do this magical blow-up operation and turn it into a singularity with normal crossings, and then you can make sense of the distribution 1/q. But Hironaka's theorem was one of the hardest theorems in mathematics when it came out, and although it's been simplified a lot, it's still pretty hard going. I've seen claims that you can present it in the last couple of weeks of a graduate course, and I'm sure you can present it in the last couple of weeks of a graduate course, but how many of the graduate students will actually follow your presentation is not at all clear to me. Anyway, one way of solving this problem is by using Hironaka's theorem. We don't want to do that, because it's really a lot of work; it's much easier to use the Bernstein-Sato polynomial. First, we can assume q is greater than or equal to 0 everywhere, and that's very easy, because 1/q is just equal to q̄/(q q̄), and q q̄ is greater than or equal to 0; so if we can invert polynomials that are everywhere non-negative, then we can invert all of them. Now we notice that q^s is holomorphic for the real part of s greater than 0, absolutely no problem: we think of it as a holomorphic family of distributions. Using the Bernstein-Sato polynomial, we can continue q^s as a meromorphic function of s to all s in C. Now, when I say it's a meromorphic function, you have to be a little careful, because it's actually a function taking values not in the complex numbers but in distributions. What this means is that for any nice function φ, say smooth with compact support, the integral over R^n of φ(x) q(x)^s dx is meromorphic in s. So it's not a
meromorphic complex-valued function, it's something a little more complicated, but not really much more complicated: you just test it against a test function and then you get a meromorphic complex-valued function. It has poles related to the zeros of b(s + n) for n = 0, 1, 2, ..., where b is the Bernstein-Sato polynomial of q, so it might have complicated poles all over the place. Now what we want to do is define q^(-1). Well, q^(s-1) might have a pole at s = 0; if it didn't have a pole, we'd be done, because we could just set s = 0 and define q^(-1) nicely. So let's expand it as a Laurent series: q^(s-1) = q_(-m) s^(-m) + q_(1-m) s^(1-m) + ... + q_0 s^0 + .... The coefficients q_i are of course not complex numbers, they're distributions; they become complex numbers when you test them against a test function. Now let's multiply by q: we find q^s = q q_(-m) s^(-m) + ... + q q_0 s^0 + .... On the other hand, we know q^s is holomorphic at s = 0, and in fact at s = 0 it's just 1, so this series is 1 plus higher-order terms, and all the coefficients of negative powers of s are 0. So we find q q_(-m) = 0, up through q q_(-1) = 0, and q q_0 = 1. So q_0 is an inverse of q, and we can use q_0 to find a fundamental solution of our differential operator: q_0 is a distribution, and we just take its Fourier transform, so q̃_0 is going to be the fundamental solution. Now you might think there's a slight problem here, because we noticed that q q_(-m) = 0 all the way up to q q_(-1) = 0, so there are many inverses of q: we can take q_0 plus any linear combination of q_(-m) up to q_(-1), and in general there'll be lots of other stuff as well. And this seems to be a
bit of a problem, because an element of a ring has at most one inverse: if we've got a ring containing q, and two inverses, say qa = 1 and qb = 1, then a = (bq)a = b(qa) = b, using associativity. So what on earth am I going on about, claiming that I can find lots of inverses of this polynomial q? Well, the first answer is that the product of distributions is not always defined, so distributions don't actually form a ring. You can quite often multiply distributions; for example, you can always multiply a distribution by a smooth function. But if you think about it a bit, you'll see that this doesn't actually resolve the problem, because here q is a smooth function: all we're ever doing in these equations is multiplying a smooth function by a distribution. We take a smooth function times a distribution and get 1, and we then multiply that by another distribution. So although the product of distributions is not always defined, that isn't actually the problem here. The second point is that when the product is defined, it need not be associative, and this is the real reason why we don't get a contradiction. I want to emphasize that, because everybody knows products of distributions aren't always defined, but not so many people are aware that the product isn't associative even when it is defined. Let me give a really simple example of a product of distributions not being associative. Take the distribution x, and take 1/x; now 1/x is not locally integrable, but it's such a nice function that it's not very difficult to make it into a distribution, for example by taking a Cauchy principal value. And then take the Dirac delta function. We know that x times δ equals 0, and since we're only multiplying a
smooth function by distributions, there's no problem; and 1/x times x is equal to 1. So you see that (1/x times x) times δ is 1 times δ, which is δ, whereas 1/x times (x times δ) is 1/x times 0, which is just 0. So even for very easy distributions in one variable, where you're just multiplying distributions by smooth functions, the product is still not associative. Okay, so what we'll be doing in the next couple of lectures is using commutative algebra, in particular the Hilbert polynomial, to prove the existence of Bernstein-Sato polynomials.
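As a footnote, the associativity failure in the last example can even be seen numerically, in a toy model where each distribution is represented by its pairing with a test function (the test function, cutoff, and grid sizes below are arbitrary choices of this sketch):

```python
import numpy as np

phi = lambda t: np.exp(-t**2)            # a smooth, rapidly decaying test function

def pair_pv_inv_x(psi, L=50.0, n=2_000_000):
    """Pairing <p.v. 1/x, psi>, computed as the integral over (0, L)
    of (psi(x) - psi(-x)) / x, which folds the principal value symmetrically."""
    x = np.linspace(1e-8, L, n)
    vals = (psi(x) - psi(-x)) / x
    return float(np.sum(vals) * (x[1] - x[0]))

# First: pv(1/x) . x is the constant function 1, since
# <pv(1/x), x*phi> = integral of phi = sqrt(pi) = <1, phi>.
check = pair_pv_inv_x(lambda x: x * phi(x))
assert abs(check - np.sqrt(np.pi)) < 1e-3

# Bracketing one: ((1/x) . x) . delta = 1 . delta, so the pairing is phi(0) = 1.
left = phi(0.0)

# Bracketing two: (1/x) . (x . delta); x . delta is the zero distribution,
# since <x*delta, psi> = (x*psi)(0) = 0, so the pairing is 0.
right = 0.0

assert left == 1.0 and right == 0.0      # the two bracketings disagree
```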