I wanted to continue with what I was doing last time, but I have two digressions, and I thought I would do the digressions first, because one should always have fun first and then work. That's a good rule in life: if you have a job and you have fun, you should have the fun first, because if there's no time for both, you wouldn't want to miss that part. OK, so I'll start with the digressions. They're a little bit related to each other. One was about orthogonal polynomials. I actually asked people to raise their hand if they feel reasonably familiar with orthogonal polynomials, meaning they know the basic definitions and a few examples, and almost no hands went up. So maybe people are shy, but maybe many people don't know. And as Gelfand told a young mathematician who was teaching mathematics to a 16-year-old: "You haven't told him about orthogonal polynomials? He's already 16! Everybody should know orthogonal polynomials." So I don't apologize for making a digression, and I'll tell you in a second what they are. If you have some measure, you get a sequence of orthogonal polynomials, which are unique if you make them monic. The Bernoulli polynomials are certainly monic: for each n you have a unique polynomial B_n(x) which starts x^n. And if you have orthogonal polynomials, you also have one of each degree, and you typically normalize them to be monic. So the question was: are the Bernoulli polynomials orthogonal for some measure? I said last time that I had thought about something related once but couldn't remember what, that I would look at my notes, and that my feeling was the answer would be no. And indeed, the answer is no, and the proof of that is very short. But the proof uses wonderful properties of orthogonal polynomials, which, as Gelfand said, everybody should know. And actually, there are three of them.
I wrote this up in a paper with Kaneko, I think 1997; it's also on my home page, if you want to look at more details. The proposition has several parts. One is quite well known. One is less known; most people don't know it. And one, I think, was new, at least I've never seen it in the literature. So I will give all of them here, because even if you happen to be an expert on orthogonal polynomials, and we definitely have at least one in the room, you'll probably learn something, since as far as I know that last property is not known. But already the simplest property of orthogonal polynomials is not satisfied by the Bernoulli polynomials, and that answers the question: there is no such measure. Still, what would it mean for them to be orthogonal? It would mean (I'll call the indices r and s, because I have too many n's and m's later in sums) that for some measure on the real line, some w(x) dx, or if you want fancier notation some dv(x), the scalar product of B_r and B_s would be 0 for r different from s. But if you think about it, there's only one reasonable measure. The Bernoulli polynomials are really made to be periodic, because B(x+1) is almost the same as B(x), and therefore they really live on the circle. And there's only one possible measure on the circle: the Haar measure, because the circle is a group and the measure has to be translation-invariant. So the only reasonable question would be whether the integral over the circle of the product of two of these polynomials is 0 for r different from s. It certainly isn't; you could just check an example, and as I said, I'll show you in a second that they aren't orthogonal for any measure at all. So this is not true. But you can ask: what is that integral? And it turns out that it has a very, very pretty answer.
And that very pretty answer has a pretty application, which is not a well-known theorem. In fact, that's also in a paper of mine, an appendix to a book, like so many appendices I write: a book on Bernoulli numbers by three Japanese authors, all old friends of mine, one of whom died just as the book came out. I wrote a long appendix, and I discovered several things, one of which (the prettiest, I'll tell you) had actually been known since 1923, by Nielsen; I only found that out afterwards. There were a couple of related things that were quite fun that I won't talk about. So I'll make a digression on orthogonal polynomials and also say a little bit about Bernoulli polynomials, even though they're not orthogonal. So let me say what orthogonal polynomials are. First of all, suppose you have any vector space and some scalar product. So V is a vector space over a field k, which for me will certainly be of characteristic 0, and V may be infinite dimensional. And you have some scalar product: given any two vectors v and w in this vector space, you have a number (v, w). Then you can try to make an orthogonal basis, a basis of things which are mutually orthogonal. The case I'm going to care about is special in two ways. First, the vector space is not any old vector space: it's the space of polynomials in one variable with coefficients in k. Second, the scalar product is not arbitrary either. To give such a product, writing round brackets, (x^m, x^n) could a priori be any numbers g_{m,n}; the only conditions, really, for it to be a scalar product are that you extend by linearity, so it's automatically bilinear, and that it should be symmetric (one could also consider anti-symmetric ones, but let's think of everything as real). Then I would just have a collection of numbers, which are arbitrary. And then you could try to make orthogonal polynomials, which might or might not exist in the general case: p_n(x), suitably normalized.
Because if you rescale everything, nothing changes. And I want a basis, so I need something in every degree. So let p_n(x) be something of degree n, with the property that p_m and p_n are orthogonal for the scalar product if m is different from n. That's what I would mean by an orthogonal basis, specialized to polynomials; for an arbitrary vector space you won't get that. Let me first say it in fancy language. So phi is an element of V dual, or V star, depending on how old-fashioned you are, which means that phi is a k-linear map from V into k. And then g_n is simply phi of x^n. That's just a fancy way of writing it, because of course any linear map on the polynomials is uniquely determined by its values on the monomials, so giving phi is equivalent to giving a sequence of numbers; it doesn't matter whether I do the one or the other. But now let's assume more. I want the scalar product of x^m with x^n to be phi of x^{m+n}, so that it only depends on the sum m plus n. But that's a really clumsy way of saying what I really want to say, which is that if I have two polynomials, their scalar product only depends on their product: you multiply them, and then you apply some linear function. So that's a special kind of scalar product, one which factors through the product. You have the multiplication map from V tensor V to V, and the scalar product on V tensor V, because it's symmetric, should factor through this multiplication. Now the classical example, as I said, is when k is the reals or maybe a subfield of the real numbers, so often the rational numbers.
And the scalar product of f and g is defined, as I wrote before, as the integral over the real numbers (since k is contained in the real numbers) of f(x) g(x) w(x) dx, where w is a non-negative function. Not strictly positive, because it might have compact support, but positive somewhere. That covers all of the famous examples in the literature: Hermite and Laguerre and Chebyshev and Gegenbauer and Jacobi. For all of those polynomials there is some appropriate measure, usually compactly supported, say on the interval from minus 1 to 1 for the Gegenbauer polynomials. But we don't have to have that. OK, so how would you find these polynomials? Well, of course, you use Gram-Schmidt. You start with the basis 1, x, x squared, x cubed, and so on. p_0 has to be 1, because it's monic of degree 0. p_1 is x plus a constant c, and the scalar product of p_1 with p_0 should be 0, since 1 is different from 0. Now the scalar product of 1 with 1 is the number I call g_0, and the scalar product of x with 1 is the number I call g_1. So c times g_0 plus g_1 is 0, and c is uniquely determined: minus g_1 over g_0. In particular, you see that the first condition is that g_0 shouldn't be 0. But g_0 is the scalar product of 1 with itself. In the most typical case, and in particular here, (f, f) is always strictly positive, because you're integrating a square against a non-negative weight, and therefore that's automatic: the product is automatically non-degenerate. In general, if you take a random collection of g_n's, it's almost always non-degenerate, but if you choose them badly it fails, and you'll see the precise condition in a couple of minutes. So let's say k is Q and I have a collection of numbers g_n. What is the exact condition on the sequence for the system to be non-degenerate, so that we can find these p_n's? The very first condition is already that g_0 is non-zero.
But then you continue, and of course the process is very well known: it's called Gram-Schmidt orthogonalization. It's just basic linear algebra, nothing to do with polynomials. You take x^n, and then you subtract a polynomial of lower degree, because after all p_n has to be monic and x^n is monic. By induction, the p_m's for m less than n are already known, and they are monic of degree m. So p_n is x^n minus a combination of the earlier p_m's with some coefficients. To find the coefficient of p_m, you just take the scalar product with p_m: the scalar product of p_m with p_{m'} is 0 unless m equals m', so I only pick out one term. The coefficient is then the scalar product of x^n with p_m divided by the scalar product of p_m with p_m, and one of the conditions I need is that (p_m, p_m) is never 0. So that's the inductive formula. You can go to your computer if you know the g_m's: you know the coefficients of each previous p_m, and everything is a combination of the g_i's. But we want to see how these things actually look. So generically these polynomials exist, and in the case of real polynomials with a positive weight they certainly exist. Now, the proposition I wanted to tell you has four parts. The first two are certainly well known and easy. The third is also known, but not that well known; it's something everyone should know. And the fourth is a slight improvement on that, the one giving the corollary which, as I say, Kaneko and I found and don't even know whether it's in the literature. So the claim is that there exist numbers a_n and b_n in the field, determined by the given sequence of g_m's.
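The Gram-Schmidt construction from the moments can be sketched in a few lines of code. This is a minimal sketch in Python with exact rational arithmetic; the moments g_n = 1/(n+1) of Lebesgue measure on [0, 1] (which produce the shifted Legendre polynomials) are a toy example of my own choosing, not one from the lecture.

```python
from fractions import Fraction

def sp(f, g, moments):
    # scalar product (f, g) = sum over i, j of f_i * g_j * moments[i + j],
    # since the pairing only depends on the product of the two polynomials
    return sum(a * b * moments[i + j]
               for i, a in enumerate(f) for j, b in enumerate(g))

def monic_orthogonal(moments, N):
    # Gram-Schmidt on 1, x, x^2, ...; polynomials are coefficient lists
    ps = []
    for n in range(N):
        p = [Fraction(0)] * n + [Fraction(1)]       # start from monic x^n
        for pm in ps:
            c = sp(p, pm, moments) / sp(pm, pm, moments)
            for j, coeff in enumerate(pm):
                p[j] -= c * coeff                   # subtract the projection
        ps.append(p)
    return ps

# toy example: moments of Lebesgue measure on [0, 1], g_n = 1/(n+1)
moments = [Fraction(1, n + 1) for n in range(20)]
ps = monic_orthogonal(moments, 5)
print(ps[1])  # -> [Fraction(-1, 2), Fraction(1, 1)], i.e. p_1 = x - 1/2
```

Since everything is done in exact arithmetic, the orthogonality of the output can be checked exactly rather than up to rounding error.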
So the g_m's uniquely define a collection of numbers a_n and b_n (the last part will tell you exactly how to get them) such that you have the following inductive formula. Assume you know those numbers; somebody gives you the a_n's and b_n's. Then you define p_{n+1}. We already know p_n(x); it's monic of degree n, starts with x^n. We want x^{n+1}, so I'd better multiply by x, but with a constant: x minus a_n. (I should follow my notes so that I don't get the signs mixed up.) And the claim is that the induction always has a very, very simple form: p_{n+1} is, of course, some combination of all its predecessors, but it's just (x minus a_n) times the previous polynomial minus another constant times the pre-previous polynomial, and none of the others enter at all: p_{n+1}(x) = (x - a_n) p_n(x) - b_n p_{n-1}(x). So it's a three-term recursion, which, by the way, has length 2: a recursion of length d expresses the term of index n+1 in terms of the terms of index n down to n-d+1. That's the key invariant; it's like the order of a differential equation, so this is a second-order thing. And this already tells you something, if you're at all used to continued fractions: any sequence satisfying a three-term recursion, where the (n+1)st term is a combination of the nth and the (n-1)st with some coefficients, like the Fibonacci numbers, is always related to a continued fraction. So a second-order recursion is almost the same as saying there's a relation with continued fractions, and I'll write the rest of the proposition later in that language. So that's the first statement. Actually, it's so easy that why don't I prove it before I write the other parts? So we assume that these p_n's exist, and as I say, generically they will exist.
It's not that much work. If I multiply p_n(x) by x (we've just found p_n, and we want to find the next one), the result starts with x^{n+1}. So I can certainly expand it in the orthogonal basis, because it is a basis. The first coefficient is 1, because everything is monic. The next coefficient I'll call a_n, and then, dropping all the x's since these are just numbers: x p_n = p_{n+1} + a_n p_n + b_n p_{n-1} + c_n p_{n-2} + d_n p_{n-3}, et cetera. That's just by definition: it's a basis, so I can expand in it. And what I'm claiming is exactly that c_n, d_n, and all the later coefficients are 0, because that's exactly the recursion I wrote; and that's the reason for the minus signs in (x - a_n) p_n - b_n p_{n-1}. So let's do d_n as an example. If I take the scalar product of both sides with p_{n-3}, on the right I get d_n times (p_{n-3}, p_{n-3}); remember, I'm assuming that the scalar product of something with itself is never 0, and in the classical case it's strictly positive, so definitely not 0. The other terms drop out because p_{n-3} is orthogonal to every other polynomial in the expansion, p_{n+1}, p_n, p_{n-4}, all of them except itself; that's what it means to be an orthogonal basis. But the right-hand side equals the left-hand side, and the left-hand side is x times p_n. Now remember, my scalar product only depends on the product of the two polynomials: (p_{n-3}, x p_n) is the same as (x p_{n-3}, p_n). But now I'm done: that's certainly 0, because x p_{n-3} has degree n-2, so it's a combination of earlier polynomials, which are all orthogonal to p_n. End of proof. And you see where the argument stops: it works perfectly well for p_{n-2}, but it doesn't work for p_{n-1}, because (x p_{n-1}, p_n) is non-zero. So in fact we actually get more.
Because by that same argument, if I do the same thing not with p_{n-3} but with p_{n-1}, I get b_n times (p_{n-1}, p_{n-1}), and that number is non-zero, again because I'm assuming the scalar products of things with themselves are never 0. The scalar product is given, and we've already found p_{n-1}, so this is known. And b_n times (p_{n-1}, p_{n-1}), by exactly the same argument, equals the scalar product of x p_{n-1} with p_n. But x p_{n-1}, by the same expansion, starts with p_n plus lower polynomials, and those lower polynomials are orthogonal to p_n. So this is simply (p_n, p_n). So I've not only proved the first part of the proposition, I've proved the second part, which I hadn't written yet: b_n is the ratio (p_n, p_n) divided by (p_{n-1}, p_{n-1}). So the first part tells you how to get the polynomials recursively, but not really explicitly, because there are two unknown sequences, a_n and b_n, and this has now told you half of them: I've told you what b_n is. Now, here's an important remark. This is not enough in the general case, because we have two sequences of numbers, and I've only given you the b_n's, not the a_n's. But all of the classical cases I listed from memory, Hermite, Laguerre, Chebyshev of the first and second kind, Jacobi, Gegenbauer, and so on, have the following property: they have a well-defined parity. In the space of polynomials there's an involution, x goes to minus x; the even part consists of the even polynomials, the odd part of the odd ones. And all of these families typically have the property that the even-index polynomials are even and the odd-index ones are odd, which in our setup means that g_n is 0 if n is odd. Then two things of different parities are automatically orthogonal. And in that case we're done, because if n is, let's say, even, then x p_n is an odd polynomial, p_{n+1} is odd, and p_{n-1} is odd, while p_n itself is even.
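Both statements just proved can be checked mechanically. Below is a sketch of my own in Python, again using the toy moments g_n = 1/(n+1) rather than an example from the lecture: it builds the monic orthogonal polynomials by Gram-Schmidt and verifies that x p_n = p_{n+1} + a_n p_n + b_n p_{n-1} holds with b_n = (p_n, p_n)/(p_{n-1}, p_{n-1}); the analogous formula a_n = (x p_n, p_n)/(p_n, p_n) follows by pairing the expansion with p_n itself.

```python
from fractions import Fraction

def sp(f, g, mom):
    # scalar product determined by the moments: (x^i, x^j) = mom[i + j]
    return sum(a * b * mom[i + j] for i, a in enumerate(f) for j, b in enumerate(g))

def monic_ops(mom, N):
    # Gram-Schmidt; polynomials are coefficient lists over Q
    ps = []
    for n in range(N):
        p = [Fraction(0)] * n + [Fraction(1)]
        for q in ps:
            c = sp(p, q, mom) / sp(q, q, mom)
            for j, cj in enumerate(q):
                p[j] -= c * cj
        ps.append(p)
    return ps

mom = [Fraction(1, n + 1) for n in range(20)]   # Lebesgue measure on [0, 1]
ps = monic_ops(mom, 6)

for n in range(1, 5):
    xpn = [Fraction(0)] + ps[n]                               # x * p_n
    bn = sp(ps[n], ps[n], mom) / sp(ps[n - 1], ps[n - 1], mom)
    an = sp(xpn, ps[n], mom) / sp(ps[n], ps[n], mom)
    # check x p_n = p_{n+1} + a_n p_n + b_n p_{n-1}: no other terms appear
    rhs = [c for c in ps[n + 1]]
    for j, c in enumerate(ps[n]):
        rhs[j] += an * c
    for j, c in enumerate(ps[n - 1]):
        rhs[j] += bn * c
    assert xpn == rhs
print("three-term recursion verified")
```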
So the whole expression has to be odd, which means a_n has to be 0. So if p_n(-x) = (-1)^n p_n(x), which is simply equivalent to saying that among the numbers that determine my scalar product the only non-zero ones are the even-index ones, and this is very, very common, then trivially a_n is 0, and I don't owe you a second coefficient, because I've told you b_n. Well, I haven't really, really told you b_n either, because suppose you want n to be 1,000: you have to compute p_1, p_2, all the way up to p_1000 by this recursion before you can compute the scalar product of p_1000 with itself. So I'd like to give you the a's and b's directly in terms of the original numbers, and that's what I'll do in the fourth part, which is the part that seems to be new. But the third part is actually the most interesting, as the statement clearly shows. So define, and I can make this definition outside the proposition, Phi(X) to be the power series g_0 over X plus g_1 over X squared plus g_2 over X cubed, and so on, where g_0, remember, is the scalar product of 1 with itself, g_1 the scalar product of 1 with x, g_2 the scalar product of 1 with x squared. So I make a power series in 1 over capital X; I'll use little x for a small variable for now and big X for a big variable, and I expand in powers of 1 over X. Now define a second sequence of polynomials, q_n(x), by the same recursion. That's what you always do with continued fractions: you have two different solutions of the same recursion. If it's a three-term recursion, you can pick the first two terms arbitrarily, for instance 0, 1 and 1, 0, or something else, and that gives a basis of solutions. We always want two solutions if it's second order; if it were 10th order, we'd want 10 solutions. So I pick a second solution, but I have to give initial conditions.
So, exactly the same: q_{n+1} = (x - a_n) q_n - b_n q_{n-1}, with the same a_n's and b_n's, but with different initial conditions. I can already tell you the degrees: the degree of p_n, as we already know, is n, and I want the degree of the second sequence, and you'll see in a moment that this is crucial, to be n minus 1. Therefore the 0th one has negative degree, and by definition a polynomial of negative degree has no coefficients at all, so it's 0. So for that I have no choice: q_0(x) is the zero polynomial. And q_1(x) should have degree 0, a constant, and I'm going to choose g_0, which in the other notation is phi(1). So that's a long definition. Then (I won't get the error term right from memory, so I'd better check my notes so I don't get it wrong) the statement is this: q_n(X) divided by p_n(X), with capital X, the variable I've been using, is what's called the Pade approximation of Phi. That means I have this power series Phi(X), and I try to approximate it as accurately as I can by rational functions of a given degree, meaning numerator and denominator of degree at most n. Well, since Phi starts with 1 over X, if the denominator has degree at most n, the numerator has degree at most n minus 1, and so it will be this quotient. And just by counting the number of constants you have, you see what to hope for. If I try to approximate Phi as well as possible: p has degree n, so n plus 1 coefficients, but actually only n free ones, because I can multiply numerator and denominator by constants, and I've normalized p to be monic. And q has degree n minus 1, with no normalization, so n coefficients. That's 2n free coefficients in total, and so the best I can hope to achieve is to match the expansion of Phi through the term of index 2n minus 1. And I can, and these are the polynomials that do it.
And there's a standard algorithm to find the Pade approximation, which in fact comes down to exactly this recursion. So the p_n and q_n are those, and people who know Pade approximations already know that this means there's a continued fraction lurking. But part four I have to put somewhere else, because I've run out of space. So here now, and this is very pretty, is how you find the a_n's and the b_n's directly from the g_n's, without computing all the polynomials along the way. Part four says: take the same generating function of the g_n's, but now with little x, which you can think of as 1 over big X, except that I don't put in the overall factor 1 over capital X; I just take g_0 plus g_1 x plus g_2 x squared and so on. This is a power series, so I can develop it in a continued fraction, if nothing goes wrong. It starts with g_0, because it has to start with g_0, and then I write g_0 over 1 minus lambda_1 x over 1 minus lambda_2 x over 1 minus lambda_3 x over, well, dot, dot, dot. Now, you can't always do that: you might find at some point that the next coefficient would have to be infinite or something. You can get into trouble, and the condition that this works is exactly the non-degeneracy condition; for generic coefficients it will certainly work. So I develop the generating function of the g_n's as a continued fraction. There are two standard kinds of generating functions, sum of a_n x^n and sum of a_n x^n over n factorial, but a third kind of generating function is a continued fraction, and it's completely different. There's an equivalence between power series, say starting with 1, and sequences of continued-fraction coefficients; it's a different way of making a bijection between sequences of numbers and power series. And so if I pass from the g_n's, the coefficients of the ordinary generating function, to the lambda_n's, the coefficients of the continued fraction, I get some new numbers.
Then come the formulas; I can never remember offhand which one gets the plus and which the minus of b_n, but it'll come out in the corollary. First of all, lambda_i is non-zero for all i. That's the non-degeneracy condition: for generic g's, of course, why should any of them be zero? But if one is, the expansion breaks down and there are problems. And the reason is very simple. These numbers b_n you get by multiplying two adjacent lambdas, and of any two adjacent indices, one is even and one is odd. If the odd one comes first: lambda_{2n-1} times lambda_{2n}, I claim, is b_n. But we already saw that b_n is a quotient of scalar products, and since the scalar products of things with themselves are never zero (I said they'd better not be, for the basis to exist), this product indeed can't be zero. That's the first statement. And the a_n, and this is really very pretty, you get in exactly the same way: you take two adjacent lambdas, one of even and one of odd index. If the smaller index is odd, you multiply them; if the smaller index is even, you add them. So the sum of the lambdas of index 2n and 2n plus 1 gives you a_n. It's very cute. That I won't prove, because we don't need it for anything, and I actually don't know any important application of it. As I said, in the interesting cases the a_n's are simply zero, so the lambdas just alternate in sign, and you can rewrite the whole thing in terms of x squared: a continued fraction with x over 1 minus mu_1 x squared, and so on. So that case is sort of boring. But in the general case, I find it very pretty that you take the generating function of the numbers that determine your scalar product, expand it into a continued fraction, and then either the product of two adjacent coefficients or their sum gives you the b's and the a's. OK, so that's the general proposition. Now, to show that the Bernoulli polynomials are not orthogonal, I just switched on the computer.
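Part four can be illustrated on a classical example. The sketch below is my own code, with an example not taken from the lecture: it takes the moments g_n = n! of the measure e^{-x} dx on the positive axis, whose orthogonal polynomials are the Laguerre polynomials, and extracts the lambdas by repeatedly inverting truncated power series. Euler's classical continued fraction for the series sum of n! x^n gives lambdas 1, 1, 2, 2, 3, 3, ..., so the recipe yields b_n = lambda_{2n-1} lambda_{2n} = n^2 and a_n = lambda_{2n} + lambda_{2n+1} = 2n + 1, matching the known monic Laguerre recursion.

```python
from fractions import Fraction
from math import factorial

def inv(f):
    # reciprocal of a power series with f[0] == 1, truncated to len(f) terms
    g = [Fraction(1)] + [Fraction(0)] * (len(f) - 1)
    for k in range(1, len(f)):
        g[k] = -sum(f[j] * g[k - j] for j in range(1, k + 1))
    return g

def cf_coeffs(moments, count):
    # expand sum g_n x^n as g0 / (1 - l1 x / (1 - l2 x / (1 - ...)))
    f = [m / moments[0] for m in moments]     # normalize to start with 1
    lams = []
    for _ in range(count):
        one_over_f = inv(f)
        h = [-c for c in one_over_f]
        h[0] += 1                             # h = 1 - 1/f
        h = h[1:] + [Fraction(0)]             # divide by x (tail term is junk)
        lam = h[0]
        lams.append(lam)
        f = [c / lam for c in h]              # next tail, again starting with 1
    return lams

g = [Fraction(factorial(n)) for n in range(12)]  # moments of e^(-x) dx on [0, oo)
lams = cf_coeffs(g, 8)
print(lams)  # the lambdas come out 1, 1, 2, 2, 3, 3, 4, 4
```

The truncation is chosen so that each extracted coefficient only ever uses the still-valid leading terms of the series.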
If I take B_12(x), x times B_11(x), B_11(x), and B_10(x), those are four polynomials, all of degree at most 12, so 13 coefficients each. I computed, in a split second in Pari, that 4-by-13 matrix, and its rank was 4. So there is no linear dependence: these are linearly independent; you just check it. I could have taken any other degree. And therefore there is no such recursion for any a's and b's: it's simply not true that x times B_11 is a combination of B_12, B_11, and B_10. OK, so they're not orthogonal polynomials in the usual sense. But I took advantage of the question, because it's such a fun topic, and as Gelfand said, everyone should know orthogonal polynomials. And I think the fact that there's this connection with continued fractions, even if you don't remember this theorem (and the Pade business is essentially the same thing), is something one should be aware of. If you ever need it, at least you'll know to look it up and remember how it worked. Sorry? Was 12 the first degree where it fails? Oh, no, not at all. It's just that I like 12; you know, 691. No, I just took B_12. I'm sure it happens for every single n; I'm sure it happens already for B_2. We could even try to do it for B_2, since I know these by heart: B_2 is x squared minus x plus 1/6, B_1 is x minus 1/2. Oh, no, sorry, that's not going to work; that's too small, because these polynomials all have degree at most 2, and that's a three-dimensional space, so obviously any four things there are going to be linearly dependent. But if you take the next degree, the dependence will fail; it will always fail. Also because the Bernoulli polynomials, while not literally even or odd, are morally even or odd. Remember, I told you the most frequent situation, like Chebyshev of the first and second kind: the even-index ones are even, the odd-index ones are odd. Naively you might say, hey, the Bernoulli polynomials are not even or odd; after all, I just wrote a couple down.
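That rank computation is easy to reproduce without Pari. Here's a sketch of my own in Python with exact rationals: it generates the Bernoulli polynomials from the Bernoulli numbers and checks that the 4-by-13 coefficient matrix of B_12, x B_11, B_11, B_10 has full rank 4, so no three-term recursion can hold.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(N):
    # B_0 .. B_N from the recursion sum_{j<=m} C(m+1, j) B_j = 0
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli_numbers(12)

def bernoulli_poly(n):
    # coefficient list, index = power of x: B_n(x) = sum_k C(n, k) B_k x^(n-k)
    return [comb(n, n - j) * B[n - j] for j in range(n + 1)]

def matrix_rank(rows):
    # exact Gaussian elimination over Q
    rows = [r[:] for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

pad = lambda c: c + [Fraction(0)] * (13 - len(c))
x_b11 = [Fraction(0)] + bernoulli_poly(11)        # multiply B_11(x) by x
rows = [pad(bernoulli_poly(12)), pad(x_b11),
        pad(bernoulli_poly(11)), pad(bernoulli_poly(10))]
rk = matrix_rank(rows)
print(rk)  # -> 4
```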
But remember the first Bernoulli polynomials. B_1 is x minus 1/2; its graph is a line through the point one half. It's certainly not odd, but it is an odd function around 1/2, if you shift by a half. And similarly the next one: x squared minus x plus 1/6 is (x minus 1/2) squared minus 1/12, so it's an even function of x minus 1/2, and its graph is indeed symmetric around 1/2. And that's always true. I wrote it last time; it's one of the very easy properties that the Bernoulli polynomials have this reflection symmetry, and it says exactly that B_n is even or odd around 1/2. So if I made the change of variable of just shifting x to x plus 1/2, then they would be even or odd, and then I could probably even use n equals 2 to see that the recursion fails. OK, so those are not orthogonal. But then, as I already said, the only natural measure for the Bernoulli polynomials is the Haar measure on the circle; deep down, they really live on the circle. It's true that, of course, they're not literally periodic; no polynomial is periodic. And you remember, and I'll use this in a second, that we define B_r bar of x to be B_r of the fractional part of x. So this is the curly bracket, the fractional part, x minus the floor of x. The more modern notation that people like is the floor, because there's also the ceiling, and I use that too, but I still rather like the Gaussian bracket notation. So this holds for all r greater than or equal to 0 and for all real x, except that B_1 bar of x is defined to be 0 if x is an integer; since everything is periodic, saying x is 0 is the same as saying x is an integer. There you don't take B_1 of the fractional part, which would be B_1 of 0, that is minus 1/2, because, as I just drew in the picture: here's the polynomial, and if you make it periodic, it jumps; it's a sawtooth function. And as you always do in Fourier analysis, at a jump you take the middle value, which here is 0. So that's the definition of B_r bar of x.
And so what I really want is the scalar product of B_r bar of x with B_s bar of x. And now this is no longer on the real line; it's on R modulo Z. And there, there's really only one reasonable measure you can think of: it has to be the Haar measure, compatible with the translation group. (This bar, by the way, is not complex conjugation; it's the periodization.) And here it doesn't matter what you do at the endpoints, because it's an integral. So we want to compute this integral from 0 to 1 of B_r bar times B_s bar. So the question is, what is it? I'll just give the answer, but I'll sketch the proof in one line, because it makes an important point, and then I'll give an application of the formula. The formula is very nice, so let me erase all this and frame it. The scalar product of B_r bar with B_s bar is of course a rational number, because these are polynomials with rational coefficients. And what it is, is slightly amusing, because at first sight it doesn't look symmetric in r and s, but of course it had better be. It's a sign times a reciprocal binomial coefficient times a Bernoulli number: (-1)^(r-1) times r! s! over (r+s)! times the Bernoulli number of index r plus s, valid when r plus s is different from 1. So, remark: this is symmetric. That's obvious on the left, because it's just the integral of B_r bar times B_s bar. It's not quite obvious on the right: the binomial factor is symmetric and the Bernoulli number is symmetric, but (-1)^r is not (-1)^s, unless r and s have the same parity. And if they have opposite parities, then r plus s is odd, and the only odd index with a non-zero Bernoulli number is 1. So the only problem is at B_1, and for r plus s equal to 1 the formula would be false: if I take r to be 1 and s to be 0, the integral is 0, but the right-hand side isn't. So that's why I have to exclude r plus s equal to 1; in that case the scalar product is of course 0.
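The formula can be verified by exact integration for small indices. This is a sketch of my own in Python, taking the sign to be (-1)^(r-1), the standard form consistent with the symmetry discussion, and checking all r, s from 1 to 6 (so that r + s = 1 never occurs).

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_numbers(N):
    # B_0 .. B_N from the recursion sum_{j<=m} C(m+1, j) B_j = 0
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

Bnum = bernoulli_numbers(12)

def bpoly(n):
    # coefficient list of B_n(x), index = power of x
    return [comb(n, n - j) * Bnum[n - j] for j in range(n + 1)]

def integral01(p, q):
    # exact integral over [0, 1] of the product of two polynomials
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return sum(c / (k + 1) for k, c in enumerate(prod))

for r in range(1, 7):
    for s in range(1, 7):
        lhs = integral01(bpoly(r), bpoly(s))
        rhs = ((-1) ** (r - 1)
               * Fraction(factorial(r) * factorial(s), factorial(r + s))
               * Bnum[r + s])
        assert lhs == rhs
print("formula verified for 1 <= r, s <= 6")
```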
So the formula holds if r plus s is different from 1. If r plus s is 1, since r and s are non-negative, one of them is 0 and the other is 1, and B_1 bar has average 0. But that's a silly way of putting the condition, because the right-hand side is also automatically 0 if r and s are different modulo 2, which is how the symmetry comes back. On the right that's now obvious: if r plus s is not 1, the Bernoulli number is 0 unless r plus s is even. But it's also true on the left, for the reason I told you: viewed around the point one half rather than the point 0, B_r bar is even if r is even and odd if r is odd, and similarly B_s bar. So if they have opposite parities, you're integrating an odd function, and the integral is of course 0. Those are just trivial comments, but my formula wasn't quite right at first because I forgot to say that for r plus s equal to 1 you get 0; that's the trivial case. Okay, so let me tell you the proof of this. I told you already last time that the B_r bar of x are very important because we used them in the Euler-Maclaurin summation formula: by induction we had an integral of a Bernoulli polynomial from 0 to 1, then we had to shift the interval, 0, 1 to 1, 2, to 2, 3, and sum, and so I had to keep shifting the Bernoulli polynomial back to the origin, which is exactly what B_r bar does. So that was one motivation; they're important for two reasons, actually three reasons by now. First of all, they appear in the error term, which I did last time, of the Euler-Maclaurin summation formula. But the other reason is this. If you have a function which is periodic, then it has a Fourier expansion. And the Fourier expansion, assuming it converges, and I'll assume r is not 0 because B_0 bar is just 1, will always have the following form; there's a constant factor, so I'd better leave a little space for the constant.
Every Fourier expansion has the form of a sum of e^(2 pi i n x) times numerical coefficients, and of course it should converge. And the answer here is that, up to a constant, the coefficients are incredibly simple: simply 1 over n^r. So the easiest sequence there is as a function of n, the pure powers. The Bernoulli polynomials look much more complicated than pure powers, but their Fourier coefficients are simple. It's B_r bar because it's periodic, and the constant in front certainly involves (2 pi i)^r; what I can never remember is that it's minus r factorial over (2 pi i)^r:
  B_r bar(x) = -(r!/(2 pi i)^r) * sum over nonzero integers n of e^(2 pi i n x)/n^r.
(That's why I used r and s rather than i and j: I didn't want (2 pi i)^i.) And of course you shouldn't divide by 0, so n is different from 0. Now this includes Euler's famous evaluation of zeta(2) and its relatives. If I take x to be 0, then the sum is 0 if r is odd, because the terms for n and -n cancel. But if r is even, then for x = 0 we find that B_r bar(0), which is simply B_r, equals (-1)^(r/2 - 1) times r!/(2 pi)^r times twice zeta(r), the factor 2 because the terms for n and -n now agree; and that is exactly Euler's formula for zeta(r) at even r. This is an absolutely standard calculation, and there are a million proofs. One easy proof you can do in your head: there's a formula for the n-th Fourier coefficient of any function, you just integrate, and by integrating by parts and using the properties of the Bernoulli polynomials you get this. Or do it for B_1 directly, and then remember that the derivative of any B_n is a multiple of the previous one, so differentiating the right-hand side just reduces r by 1; both sides behave the same way under differentiation, and then you just need one more normalizing property. Anyway, there are many, many proofs; this is an exercise. So now I hope you can see, more or less in your head, that this Fourier expansion implies the proposition. Sorry, I can't hear you.
And the audience can't hear you. The question was: is there a constant term in the Fourier series? No, there is no constant, so the integral of B_r bar should be 0. And yes, that's certainly true: remember that B_n' is n times B_(n-1), but we also have that B_n(0) equals B_n(1) as soon as n is at least 2. So, answering your question: if I integrate, then by the fundamental theorem of calculus the integral of a derivative is the difference of the values at the endpoints; but B_n is the derivative of B_(n+1)/(n+1), which takes the same values at the endpoints. So yes, the average value of every Bernoulli polynomial except B_0 is 0, and indeed there is no constant term. Next question: how about the formula on the board when you plug in s = 0? The integral of B_r against B_s: if you put s = 0, then B_0 is just a constant, so the left side is the integral of B_r, which is 0, but the right-hand side isn't. Oh, sorry. I know; I was thinking there was some special case in our paper and something panicked me. Here, of course, no one is interested in B_0 times B_s. The statement is true if r and s are positive; the case where one of them is 0 is trivially 0 on the left, and I exclude it. And that's also why the symmetry is okay: if they're positive, then they have to have the same parity, because otherwise r+s would be odd and at least 3, and then B_(r+s) = 0. So thank you very much, Emmanuel, you caught it. I knew I was making a mistake when I wrote that, but I couldn't find it. Wow, not just knowledgeable, but paying a lot of attention. I completely missed that; it was an oversight, and the statement as first written was obviously not always true.
But this is true, and I gave the reason: the integral of B_n over 0 to 1 is 0, because B_n is the derivative of B_(n+1)/(n+1) and that has the same values at the endpoints, so the integral from 0 to 1 vanishes. So now you can do the proof in your head. This integral, since the integrand is periodic, doesn't require going from 0 to 1. Physicists would choose a fundamental domain anyway, because they always break symmetries, but we don't have to: it's simply the integral of this periodic function over the whole circle. But now I expand B_r bar as, up to a constant, the sum of e^(2 pi i n x)/n^r, and the other factor as the sum of e^(2 pi i m x)/m^s. When I integrate e^(2 pi i n x) against e^(2 pi i m x) over the circle, I only get something nonzero if n = -m. So I just pick up those terms, getting a sum of terms 1/n^(r+s) up to sign, which is essentially zeta(r+s). If r+s is odd, the terms for n and -n cancel and we get 0. If r+s is even, then, as I just reminded you, zeta(r+s) is that very pretty number given by a Bernoulli number, and if you work out the constants you get exactly the formula. It's nice to know this because there's a companion formula, with exactly the same proof, with shifts: take B_r bar(x + alpha) and B_s bar(x + beta). Obviously, if I shift both of them by the same amount, the integral is unchanged, because Haar measure is translation invariant; so the answer can only depend on alpha - beta. And indeed it's the only thing it reasonably could be: the same formula with B_(r+s) replaced by the periodized B_(r+s) bar(alpha - beta). That's kind of pretty; when alpha and beta were 0, this was just the Bernoulli number. Exactly the same proof. Okay, so that's a fun property of Bernoulli polynomials.
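The Fourier expansion behind this proof is easy to test numerically. A sketch (not from the lecture; the truncation bound N and the test points are arbitrary choices), using that on 0 <= x < 1 the periodized B_r bar agrees with B_r(x) itself:

```python
import cmath
import math

# B_r(x) from the standard table, for the comparison below
B_POLY = {
    2: lambda x: x**2 - x + 1/6,
    3: lambda x: x**3 - 1.5 * x**2 + 0.5 * x,
    4: lambda x: x**4 - 2 * x**3 + x**2 - 1/30,
}

def fourier_sum(r, x, N=4000):
    # Truncation of  -r!/(2*pi*i)^r * sum_{0 < |n| <= N} e^{2*pi*i*n*x} / n^r
    s = sum(cmath.exp(2j * math.pi * n * x) / n ** r
            for n in range(-N, N + 1) if n != 0)
    return (-math.factorial(r) / (2j * math.pi) ** r * s).real

# On [0, 1) the periodized polynomial is B_r(x) itself
for r in (2, 3, 4):
    for x in (0.0, 0.25, 0.7):
        assert abs(fourier_sum(r, x) - B_POLY[r](x)) < 1e-4

# x = 0, r = 4 is Euler's evaluation: B_4 = -2 * 4!/(2*pi)^4 * zeta(4)
print(round(fourier_sum(4, 0.0), 6))  # -0.033333, i.e. B_4 = -1/30
```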
And in particular, as I said, we already knew they weren't orthogonal, because they don't satisfy a three-term recursion. But now we know what the scalar products actually are: not 0, but Bernoulli numbers. Now for the last fun thing, which, as I said, was discovered already by Nielsen in 1923; I didn't know it, and it's not at all well known. Let p_n, lower case so they don't look orthogonal, be polynomials of degree exactly n. In particular, they form a basis of the space of polynomials, whether they're orthogonal, Bernoulli, or whatever they are. Then V = K[x], where K for us will usually be Q, is the direct sum of the lines K p_n(x); that's just restating that they form a basis. But K[x] is of course a ring; it's a K-algebra. So that means I can multiply. Now I can go back to i and j, because I no longer have any i's in the formulas: I take p_i times p_j, and it's going to be a combination of the p_k. You can do this in any ring with a basis, and you get numbers c_ij^k with k >= 0: the product p_i p_j is the sum over k of c_ij^k p_k. And because this factor has degree i and that one degree j, the product is a polynomial of degree i+j, so it's a finite sum with k at most i+j. These are the structure constants of the algebra; it's the basic thing you do with any algebra whatsoever: pick a basis, write everything in terms of the basis, and the multiplication becomes a collection of structure constants. Since k <= i+j, I can write the upper index as i+j minus something. So now let me not be so general, because there's no reason: these will be Bernoulli polynomials. I can certainly write the product of any two Bernoulli polynomials.
The product is a linear combination of Bernoulli polynomials, and those coefficients are the structure constants for this particular basis. Now, the index k has to be at most i+j minus something, and I claim that something has to be even. That's again the shift by one half: the identity is still true with x replaced by x + 1/2, and around that point the even-index B_i are even functions and the odd-index ones odd. So the product is even or odd depending on the parity of i+j, and k has to have the same parity: k congruent to i+j mod 2. And now the amazing thing. Compute a table of these, say for all i, j, k up to 20, and take a nice example, the drop by 12. At the beginning the coefficients are very simple rational numbers, with small numerators and denominators. But I claim the coefficient of B_(i+j-12) is always divisible by 691. (Of course, every rational number is divisible by every nonzero rational number; I mean that 691 divides the numerator.) The top coefficient c_ij^(i+j) is of course 1; it's got to be, because these are monic polynomials: x^i times x^j starts x^(i+j). The next one down, the coefficient of B_(i+j-2), you can do by hand. The next one too. You can even do the twelfth one by hand if you have a lot of hands and a lot of time; it would take you an afternoon to write it out. But you won't see why it works: it's a sum of a whole bunch of terms, and yet that sum is always divisible by 691. And remember that 691 is the famous prime that every number theorist knows well: the first prime that occurs non-trivially in a Bernoulli number, the numerator (up to sign) of B_12. This calls for a story. I told the one about Gelfand; here's one about Serre, and I'll have another one about myself and Sloane soon. I can't vouch for this one:
I know Serre very well, but he's never told me this himself; I've heard it from other people. Many years ago, some young mathematician came to Serre and said: number theory is so beautiful, I like it so much, could I become your PhD student? So Serre gave him a one-line test. It was very quick. He said: what is 691? The candidate completely blanked; the best he could offer was that it's a prime, not divisible by 2, 3, 7, or 13. And Serre said: go away, you're not actually interested in number theory. It's a beautiful answer. If you're not a number theorist, you'll think it's just a put-down. But if you are a number theorist, like Ken Ono, whose email address has a 691 in it (my passwords have other things, but there's always a 691 somewhere): number theorists just love 691. It's this really sexy prime. It's the famous irregular prime, not the smallest, but the first one that occurs in the Bernoulli numerators. Every number theorist knows it. And it's kind of true that if you've learned all the class field theory and so on and think you know number theory, but you've never heard of 691, then you aren't actually interested in numbers. Anyway, according to the story, Serre said: go away and study something else, you don't really like number theory. So here is the statement. In general, I'm claiming that c_ij^(i+j-2l) is divisible by B_(2l). Whatever that means: every nonzero rational number divides every other one; but it's naturally divisible by B_(2l). That's the cute discovery. And I will give the essence of the proof, because there's a cute property in it too; it is kind of fun. I'm digressing a lot; maybe I'll never even finish what I was supposed to do in this lecture, and then I'll do it next time, but nobody's keeping track. This one I didn't write down in my notes, because the formula is too long.
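The 691 claim is easy to test by machine. A sketch (not from the lecture): expand B_i(x) B_j(x) in the Bernoulli basis by repeatedly peeling off the leading term (possible because each B_d is monic), then inspect the coefficient of B_(i+j-12):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max):
    # Recursion with convention B_1 = -1/2
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(Fraction(-1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

def bernoulli_poly(n, B):
    # Coefficients of B_n(x), lowest degree first
    c = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        c[n - k] += comb(n, k) * B[k]
    return c

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def to_bernoulli_basis(p, B):
    # Write p(x) = sum_d c_d B_d(x); works since each B_d is monic of degree d
    p = p[:]
    coeffs = {}
    for d in range(len(p) - 1, -1, -1):
        lead = p[d]
        if lead:
            coeffs[d] = lead
            bd = bernoulli_poly(d, B)
            for i in range(d + 1):
                p[i] -= lead * bd[i]
    return coeffs

B = bernoulli_numbers(30)
# Claim: in B_i(x) * B_j(x), the coefficient of B_{i+j-12}(x) has numerator
# divisible by 691 (the numerator of B_12), for all i, j
for i in range(1, 16):
    for j in range(1, 16):
        if i + j >= 13:
            c = to_bernoulli_basis(
                poly_mul(bernoulli_poly(i, B), bernoulli_poly(j, B)), B
            ).get(i + j - 12, Fraction(0))
            assert c.numerator % 691 == 0, (i, j, c)
print("691 divides every B_{i+j-12} coefficient in the tested range")
```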
So here's the theorem. Let me introduce, just for convenience, yet a third notation. B_n bar was B_n made periodic; now, ignoring B_0, which is the rather silly constant polynomial 1, let me renormalize the Bernoulli polynomials by dividing by n: script B_n(x) = B_n(x)/n, which I'll write beta_n(x). It just makes every formula I'm about to write more attractive. Then the proposition, which I said is due to Nielsen, but which is also in that appendix of mine: I want the structure constants for the product beta_i(x) beta_j(x). As I already told you, by parity the polynomials that occur are the beta_(i+j-2l)(x), so 2l runs over the even numbers from 0 up to below i+j; the case i+j-2l = 0, the constant, I'll do separately in a second. (This is script B again, just not very well written on the board.) So the question is: what is the coefficient of beta_(i+j-2l)(x)? And the answer is rather nice. Not quite as nice as it might be, but nicer than it might not have been, so to speak. I write the index as 2l, so that it's visibly even. The coefficient is
  ( (1/i) binom(i, 2l) + (1/j) binom(j, 2l) ) * B_(2l).
That's the coefficient, except, as I warned you, you still have the Bernoulli number: the structure constant is a simple multiple, just some binomial coefficients, of B_(2l), and B_(2l) is a highly non-trivial number, with numerators like 691. Then there's still a constant term.
And the constant term is the Bernoulli number B_(i+j) multiplied by, basically, a reciprocal binomial coefficient again: (-1)^(i-1) (i-1)! (j-1)!/(i+j)! times B_(i+j). Okay, so that's the formula, and let me give the proof sketch. Let B_ij(x) be, by definition, that first part, the sum; I don't care about the constant, and you'll see why in a second: I'm going to determine everything up to a constant. So the proof has two parts. First I show that B_ij equals the product beta_i beta_j up to a constant; then I have to find the constant, which is a completely different matter, since the two expressions don't look alike at all. So let B_ij be that sum. Then use either the difference property of the Bernoulli polynomials, that B_n(x+1) - B_n(x) = n x^(n-1), which is even nicer after dividing by n: beta_n(x+1) - beta_n(x) = x^(n-1); or, equivalently, the derivative property, beta_n'(x) = B_(n-1)(x), again with no extra factor because I divided by n. I'll give the proof using the difference property, and as an exercise you can use the derivative property and prove the same thing. What I'm trying to prove is that this expression B_ij differs from the product beta_i beta_j by a constant. There are two ways to show that two polynomials agree up to a constant. The obvious one, true for any functions: differentiate both and check that the derivatives agree. The other: if the difference is periodic, then, since a periodic polynomial is constant, they differ by a constant. Here I'll use the periodicity line, because it's more fun. So let's compute; I won't do it in detail. In B_ij, replace x by x+1 and subtract. You get the sum over the same l as before of (1/i) binom(i, 2l) + (1/j) binom(j, 2l) times B_(2l), and what happens to the beta factor? When you replace x by x+1 in beta_(i+j-2l) and subtract, you just get x^(i+j-2l-1). Okay?
But that sum is just a regrouping of the corresponding difference of the product. Note that (1/i) times the sum over l of binom(i, 2l) B_(2l) x^(i-2l) equals beta_i(x) + (1/2) x^(i-1): keeping only the even-index terms of B_i(x) = sum binom(i, k) B_k x^(i-k) just drops the k = 1 term, which was -(i/2) x^(i-1). Here I'm being a little tricky: there's a term x^(i+j-2), and I'm splitting it into twice half of itself. So the difference computed above equals
  ( beta_i(x) + (1/2) x^(i-1) ) x^(j-1) + ( beta_j(x) + (1/2) x^(j-1) ) x^(i-1).
But I claim that is obviously the difference beta_i(x+1) beta_j(x+1) - beta_i(x) beta_j(x). (I hope you can still see this down here; if not, I'll copy it at the top of the board.) If you multiply two polynomials at x+1: beta_i(x+1) is beta_i(x) plus a monomial x^(i-1), and beta_j(x+1) is beta_j(x) plus a monomial x^(j-1). So when I multiply, there are four terms: beta_i(x) beta_j(x), which I'm subtracting off; then x^(j-1) times beta_i; the complementary power x^(i-1) times beta_j; and then the monomial x^(i+j-2), which I've split in this nice symmetric way. So what I've shown is that the difference of B_ij at x+1 and x is exactly the same as the difference of beta_i beta_j at x+1 and x, and therefore they differ by a constant. (You can also do it, as an exercise, by differentiating: compute beta_i' beta_j + beta_i beta_j'; it's an easy computation, and you get the same as the derivative of the left-hand side.) So now the identity is true up to a constant. But the constant is now easy, because I just integrate over the unit circle: the integrals of all the beta's give 0, and the integral of the product of the two Bernoulli polynomials is what we already computed. So the hard part, the constant term, is exactly the lemma we had before, the integral of a product of two Bernoulli polynomials. So this was an extremely long answer to a very interesting question about the Bernoulli polynomials whose real answer is one two-letter word: no. And I spent a lot of time on that no.
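The whole proposition, sum and constant term together, can be verified exactly for small indices. A sketch in Python (not from the lecture), with beta_n(x) = B_n(x)/n as above, and the formula as transcribed here: coefficient (binom(i,2l)/i + binom(j,2l)/j) B_(2l), constant (-1)^(i-1) (i-1)!(j-1)!/(i+j)! B_(i+j):

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_numbers(n_max):
    # Recursion with convention B_1 = -1/2
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(Fraction(-1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

def beta(n, B):
    # beta_n(x) = B_n(x)/n as a coefficient list, lowest degree first
    c = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        c[n - k] += Fraction(comb(n, k), n) * B[k]
    return c

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def nielsen_rhs(i, j, B):
    # sum_l [C(i,2l)/i + C(j,2l)/j] B_{2l} beta_{i+j-2l}(x)  +  the constant term
    out = [Fraction(0)] * (i + j + 1)
    for l in range((i + j) // 2 + 1):
        d = i + j - 2 * l
        if d < 1:
            continue  # the degree-0 part is the separate constant below
        coeff = (Fraction(comb(i, 2 * l), i) + Fraction(comb(j, 2 * l), j)) * B[2 * l]
        for m, c in enumerate(beta(d, B)):
            out[m] += coeff * c
    out[0] += Fraction((-1) ** (i - 1) * factorial(i - 1) * factorial(j - 1),
                       factorial(i + j)) * B[i + j]
    return out

B = bernoulli_numbers(20)
for i in range(1, 11):
    for j in range(1, 11):
        assert poly_mul(beta(i, B), beta(j, B)) == nielsen_rhs(i, j, B), (i, j)
print("Nielsen identity verified for 1 <= i, j <= 10")
```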
Like the famous put-down: what part of the word no is it that you don't understand? So at least I've done both digressions: one about orthogonal polynomials, and one about nice properties of Bernoulli polynomials: the Fourier expansion, which is really important; its corollary, the integral formula for B_r bar times B_s bar, even with shifts; and this very nice formula for the structure constants of the ring. Okay, now I've used up one hour out of my hour and a half without starting today's lecture, or at least what was supposed to be today's lecture. It had two halves: the part I didn't get to in last time's lecture, and the start of a new theme. So that's actually very convenient: now I can finish the story of last time, and next week, Tuesday and Thursday, I'll start with the extrapolation trick. That's something one can use all the time in practical problems too, and there are many variants that are very nice to know about. So now let me go back to my actual story; the course now resumes. Last time we showed the following. Suppose f(t) is some function which, first, is asymptotic at the origin to a power series, f(t) ~ sum of b_n t^n. There were many variants where you could have log singularities and non-integral powers, but let's say it's just a smooth function at 0. And suppose it's small enough at infinity that I can form the sum g(t) = sum over m from 1 to infinity of f(mt). So if f is a nice function with such an expansion as t goes to 0, then in the simplest version g(t) has an expansion which starts with what I called, just to make it easier to remember, the Riemann term: I_f/t, where I_f is the integral from 0 to infinity of f(x) dx. The integral divided by t is the Riemann-sum approximation to the infinite sum.
And the other part is the formal thing you would get just by substituting one series into the other and pretending you can do whatever you please: the coefficient b_n t^n turns into b_n zeta(-n) t^n. So
  g(t) ~ I_f/t + sum over n >= 0 of b_n zeta(-n) t^n.
This was the thing we proved using Euler-Maclaurin. Now I'll give four examples; the first two I essentially said already last time, but I'll do them very quickly again for completeness, just as a reality check that we haven't gone crazy. So if I take f(t) to be a pure exponential, e^(-lambda t), then of course g(t) is the sum of e^(-m lambda t), a geometric series: it's simply 1/(e^(lambda t) - 1). And this we already saw last time; indeed, that's essentially the definition of the Bernoulli numbers: 1/(e^(lambda t) - 1) = 1/(lambda t) + sum over n >= 0 of B_(n+1) (lambda t)^n/(n+1)!. And here I can remind you that zeta(-n) can be written as (-1)^n times B_(n+1)/(n+1). So you see it works: the expansion of e^(-lambda t) is just the exponential series, so b_n = (-lambda)^n/n!. The (-1)^n goes away against the sign in zeta(-n), the lambda^n is sitting there, and the B_(n+1)/(n+1) together with the n! gives the (n+1)!; and I_f, the integral of e^(-lambda x), is 1/lambda, giving the leading term 1/(lambda t). So it works just like it should. Of course, that's the trivial example that we used to make the whole thing work, but it's good to check that nothing has gotten lost. Then another example I mentioned briefly last time, but since the question has come up a couple of times: what happens if I take f(t) = e^(-lambda t^2)? So now, b_n... well, first of all, I didn't say what I_f was; of course, it's whatever it has to be to make this work.
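Example 1 can be checked numerically in a few lines. A sketch (lambda = 0.7 and ten expansion terms are arbitrary choices of mine): the theorem's right-hand side collapses to 1/(lambda t) + sum of B_(n+1) (lambda t)^n/(n+1)!, which should reproduce 1/(e^(lambda t) - 1) to high accuracy for small t:

```python
from fractions import Fraction
from math import comb, exp

def bernoulli_numbers(n_max):
    # Convention B_1 = -1/2, as in the lecture
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(Fraction(-1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

lam, t = 0.7, 0.01
exact = 1.0 / (exp(lam * t) - 1.0)

B = bernoulli_numbers(10)
approx = 1.0 / (lam * t)            # the "Riemann term" I_f/t with I_f = 1/lambda
fact = 1.0
for n in range(10):
    fact *= n + 1                   # fact == (n+1)!
    approx += float(B[n + 1]) * (lam * t) ** n / fact

print(exact - approx)               # tiny: only the series truncation error remains
```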
Here I_f = (1/2) sqrt(pi/lambda), by the famous Gaussian integral of Gauss. So f(t) = e^(-lambda t^2). That means b_n is 0 if n is odd, since f is an even function, and if n = 2k, then b_(2k) = (-lambda)^k/k!. So we know exactly what the b_n are. And what is g? Well, that's where we use the property of the theta function. Here g(t) is the sum over m from 1 to infinity of e^(-lambda m^2 t^2). The right way to write that: if m were 0, the term would be 1, so I add and subtract it and sum over all m in Z. Every m is positive, negative, or 0, and since the summand is even, the positive and negative ones pair up: the sum over all m in Z of e^(-lambda m^2 t^2) is 2 g(t) + 1. But now, by the Poisson summation formula, which I gave, and which basically you're supposed to know, but in case you don't, I wrote it down last time: if you have a nice function, and a Gaussian is a very nice function, and you sum it over a lattice, you get the same as the sum over the dual lattice of the Fourier transform. And the Fourier transform of a Gaussian, by the same integral of Gauss that gives the square root of pi, you can compute: a Gaussian again. So when you invert, roughly replacing t by 1 over t: if the exponent were -pi m^2 t, you'd replace t by 1/t exactly; here with lambda t^2 it comes out as e^(-pi^2 m^2/(lambda t^2)), I think. But it doesn't matter whether I got the constant exactly right, because whatever it is, it's e to the minus something positive over t^2, and such an exponential is of course small to all conceivable orders. So that's Poisson summation.
And therefore, writing it back at the top because I'm running out of space: the m = 0 term of the dual sum gives the main term, and everything else is exponentially small, so 2 g(t) + 1 = sqrt(pi/lambda) (1/t) (1 + exponentially small terms), that is,
  g(t) = (1/2) sqrt(pi/lambda) (1/t) - 1/2 + O(t^N) for every N.
So this is a very, very simple Laurent series; in fact a terminating Laurent series. (If you parametrize instead by e^(-lambda m^2 t), it's a terminating Laurent series in sqrt(t), with leading term t^(-1/2).) It has its leading term, it has a constant, and it has no more terms. That we knew because of Poisson summation. But now let's check it against the theorem. All I really care about is that b_n = 0 for n odd and b_0 = 1. In the sum of b_n zeta(-n) t^n, if n is odd, the product is 0 because b_n is 0. And if n is even and positive, the product is 0 anyway, because zeta(-2), zeta(-4), zeta(-6), and so on all vanish; equivalently, the odd-index Bernoulli numbers vanish except for B_1, which corresponds to zeta(0). So only n = 0 survives, and zeta(0) = -1/2, so the theorem gives I_f/t + zeta(0) b_0 + O(t^N) = I_f/t - 1/2 + O(t^N): just a confirmation of what we already knew from Poisson summation. And this you can see in general, without doing the special case, straight from Poisson summation: if the function is even as a power series, all odd terms vanishing, then extending it by reflection it is still C-infinity at the origin; maybe not analytic, but C-infinity. And it's small at infinity, so I apply Poisson summation to the reflected function, I get my expansion, and everything vanishes: the sum over half the lattice is essentially half the sum over the whole lattice.
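Again a quick numeric check, a sketch with lambda = 1 and an arbitrary truncation: summing e^(-m^2 t) and comparing with the predicted two-term expansion (1/2) sqrt(pi/t) - 1/2 (the sqrt(t)-parametrization mentioned above):

```python
import math

def g(t, M=4000):
    # g(t) = sum_{m >= 1} exp(-m^2 * t)  (lambda = 1), truncated at M terms
    return sum(math.exp(-m * m * t) for m in range(1, M + 1))

# Predicted: g(t) = (1/2) sqrt(pi/t) - 1/2 + (exponentially small in 1/t)
for t in (0.05, 0.01):
    assert abs(g(t) - (0.5 * math.sqrt(math.pi / t) - 0.5)) < 1e-9, t

print(g(0.01) - (0.5 * math.sqrt(math.pi / 0.01) - 0.5))  # essentially 0
```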
So that's the truly basic, trivial case of Euler-Maclaurin, the exponential function, which is why the Bernoulli numbers have to come in. And this trivial example also tells you: if you knew there was such a theorem with some coefficients in place of zeta(-n), but didn't know what they were, this example would force them to be (-1)^n B_(n+1)/(n+1). Okay, so those are the two boring examples, and now I'll give two non-trivial ones. I hope I can finish them both today and then start on the new theme next time. Okay, so the first of these is example 3. I'm going to avoid k = 2; you can do it for k = 2, but there's an extra argument. I'll say a word about that at the end and leave it as an exercise. So for k an even integer different from 2, I define a function g_k(q). Here q will be, let's say, a real number, though I don't even care that it's real; just a complex number of absolute value less than 1. And I put the sum of the (k-1)-st powers of the divisors of n as the coefficient of q^n:
  g_k(q) = sum over n >= 1 of sigma_(k-1)(n) q^n.
So this is a power series in q, a formal power series, but it converges of course for |q| < 1, because the coefficients have at most polynomial growth and q^n is exponentially small. That's the definition, and the question is: what is the asymptotic behavior as q tends to 1, that is, as t tends to 0? But let me first show you how it follows from modularity. If k is even and not equal to 2 (2 is a little different, and we'll do it at the end), then we have the following. I define the function G_k(tau), where tau is now in the upper half-plane, the complex numbers with strictly positive imaginary part, and q, as always in that world, is e^(2 pi i tau). I'm not going to use anything about modular forms except the one transformation formula I'm about to write, which I wrote last time.
G_k(tau) is defined as a certain multiple, a constant depending on k with powers of 2 pi i in it which I might get wrong, of the sum, as I wrote last time, over all integers m and n except (0, 0), so over all nonzero lattice points in Z^2, of 1/(m tau + n)^k. And this has the property of being a modular form; I wrote down last time what that means: G_k((a tau + b)/(c tau + d)) = (c tau + d)^k G_k(tau). I don't really care about anything except that basic property, which is absolutely standard. The Fourier expansion of G_k is also very standard: the constant term is minus the k-th Bernoulli number divided by 2k (remember k is even, so that's nonzero), and the rest is exactly this g_k(q). So in the Fourier expansion there's a constant term, which in general has a denominator, and then the other coefficients are integers, exactly the divisor sums; I gave that formula last time. But the point is that G_k is modular; I'll write out the transformation law again: (c tau + d)^k times G_k(tau). In particular, if I take the matrix (0, -1; 1, 0), and note that -1/tau is still in the upper half-plane, then G_k(-1/tau) = tau^k G_k(tau). Which means that if t is positive and I take tau = i t, on the imaginary axis, then -1/tau = i/t, and I get G_k(i/t) = (i t)^k G_k(i t): a relation between G_k near 0 and G_k high up the imaginary axis. So that's the fact I'm going to use to get the asymptotics from modularity; to prove the asymptotics later I won't use it, we'll use our general formula, but I'm showing that we reproduce something known. This works if k is 4, 6, 8, and so on; 2 more or less works, but odd k does not. So now I am using the modularity, and I have this.
But then that means I should arrange the powers of t properly; actually I wrote it on the wrong side just now. From G_k(i/t) = (i t)^k G_k(i t), and i^k = (-1)^(k/2), I get
  G_k(i t) = (-1)^(k/2) t^(-k) G_k(i/t);
the negative power of t goes with the argument i/t. So now what does that mean? On the left, G_k(i t) is the constant term, -B_k/2k, which, by the way, is simply (1/2) zeta(1-k), plus g_k(q) with q = e^(2 pi i tau) = e^(-2 pi t). On the right, I have (-1)^(k/2), which is plus or minus 1 since k/2 is an integer, over t^k, times G_k(i/t); and G_k(i/t) is its constant term -B_k/2k up to O(t^N) for every N, because the rest of its Fourier expansion is exponentially small: powers of e^(2 pi i tau) with tau = i/t, so e to the minus some constant over t, smaller than any power of t. So this tells me the asymptotic expansion. Let me write down what I've just proved. Using modularity, we know that
  g_k(e^(-2 pi t)) = (-1)^(k/2 - 1) (B_k/2k) t^(-k) + B_k/2k + O(t^N) for all N,
the +B_k/2k coming from moving the constant term of the left side over to the right. And of course, if we didn't know this thing was modular, we'd have no reason to single out q = e^(-2 pi t); I should really consider g_k(e^(-t)), and then there would be a (t/(2 pi))^k instead, which changes nothing essential. So now I want to show you that we can get this from our theorem.
And that's where, in my talk on Ramanujan Day on Monday, which many of you were at, I mentioned that Hardy somewhere has written that one couldn't expect to get such formulas if one didn't know modularity. But I'm going to show you, here and in later lectures, that in many cases this Euler-Maclaurin, with, where needed, the shifted version of Euler-Maclaurin, is good enough to get the asymptotics. Of course it's weaker than modularity, but it's much more elementary. So this is our goal. Now how do we do this? Well, I simply define f(t) = t^(k-1)/(e^t - 1). If this were t/(e^t - 1), it would simply be the usual generating function of the Bernoulli numbers, with coefficients B_r/r!; here I've multiplied that by t^(k-2). And all we need from that is an asymptotic expansion to all orders at 0. But look at what this has to do with g_k. Maybe I should have started in order: g_k(q) is the sum over n >= 1 of sigma_(k-1)(n) q^n. Look at the definition: sigma_(k-1)(n) is the sum of d^(k-1) over d dividing n, so writing n = d m, where d and m are now independent positive integers (every n together with a positive divisor of it is the same thing as a pair of positive integers d, m with n their product), we get g_k(q) as the double sum over d and m of d^(k-1) q^(d m). But the sum over m of q^(d m) is just a geometric series: it's simply q^d/(1 - q^d). Therefore I take g_k(e^(-t)); as I said, there's no reason now to put a 2 pi, t is just a name for the variable. And I multiply by t^(k-1), which looks crazy.
But you'll see very quickly why I have to do that, because now the d^{k-1} multiplied by t^{k-1} is (dt)^{k-1}. And then the next factor, e^{-dt}/(1 - e^{-dt}), is 1/(e^{dt} - 1). And so this is the g(t) of our general theorem: if f is this function, then the sum of f(dt), which I'm now writing with d instead of n, is indeed the sum of (dt)^{k-1}/(e^{dt} - 1). So now I'm exactly set up to use our theorem. So I have to compute I_f. I_f is the sum over n from 1 to infinity of the integral from 0 to infinity of t^{k-1} e^{-nt} dt. But the integral of t^{k-1} e^{-t} dt, by the definition of the gamma function, is (k-1)!, and with e^{-nt} it's (k-1)! over n^k. And so we know exactly what that integral is: it's just (k-1)! times the Riemann zeta value ζ(k). So now, by our general theorem, we know the expansion to all orders. The term I just wrote is (k-1)! ζ(k) times 1/t. Plus, and now I have to take the Taylor expansion of this. And now you see why I didn't want k to be 2 or 1 or something too small. k is at least 4. So if k is at least 4, then k - 2 is at least 2, r is at least 0, so the exponent r + k - 2 is at least 2. And Bernoulli numbers with index at least 2 are non-zero only for even index, for what it's worth. So I can apply the general algorithm. And the general algorithm says that the coefficient of t^{r+k-2} gets multiplied by ζ(2 - r - k). But now, if you look at this sum: k is even, because I chose k to be even. And r is always even, except if r is 1, because all the other odd Bernoulli numbers are 0. So for r even, r + k - 2 is even, the argument 2 - r - k is a strictly negative even integer, and zeta vanishes there. So in this whole thing, only one term survives, which is the term r equals 1.
r equals 1, with B_1 = -1/2. Here I get ζ(1-k), and here it's t^{k-1}. And so you see exactly the two terms: if you put in the Bernoulli number, those are the two terms we found from modularity, a 1/t^k term and a constant term, once you rescale and put in some 2π's. So here you see that we don't need modularity. What modularity tells us is that the expansion of the function, as τ goes to 0, or as q tends to 1 in the unit disc (q tending to 1 is the same as τ tending to 0), is the same as the expansion at infinity. And at infinity, everything except the constant term is exponentially small, so the constant is the leading term for any modular form. So for any modular form, you have the full expansion at 1. And what's more, you also have the full expansion at -1, and at e^{2πi/3}, and at i, and at all roots of unity, because they correspond to rational points here, like 1/3 and 1/2 and so on. So because of modularity, you know all of those things: if you have a modular form, you have the expansion as q tends to any cusp. But if you don't have modularity, and the function is given in a nice form, you may get that expansion anyway, using this Euler-Maclaurin, where you would use the untwisted version with Bernoulli numbers here, and the twisted one with Bernoulli polynomials at the other rational points. Of course, that's a complete waste of time if the function really is modular. But now let's first follow up what happens if k is 2. Well, if k is 2, it's well known that g_2 does not satisfy this. But it almost does. I will write it down: g_2(-1/τ) is τ² g_2(τ) plus an extra term, whose constant I won't try to say, because I'm used to a different normalization; in the normalization I know, it's 6 over π, and in this one, I don't know what it is. So there's a slight hiccup.
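For k = 2 the extra surviving term can be seen numerically. The explicit constants below are my own bookkeeping of the Euler-Maclaurin terms (consistent with the quasimodularity of g_2), not values read off the board; in the e^{-t} normalization the prediction comes out as follows:

```python
import math

# For k = 2 one extra term survives, matching the quasimodularity of g_2.
# My own evaluation of the constants (an assumption to check, not from the board):
#   sum_{n>=1} sigma_1(n) e^{-nt} = pi^2/(6 t^2) - 1/(2t) + 1/24 + O(t^n) for all n.

t = 0.4
q = math.exp(-t)
# Lambert series form of g_2: sum_d d * q^d / (1 - q^d).
g2 = sum(d * q ** d / (1 - q ** d) for d in range(1, 300))
prediction = math.pi ** 2 / (6 * t ** 2) - 1 / (2 * t) + 1 / 24
# The error should again be exponentially small (of order e^{-4*pi^2/t}).
```

The 1/(2t) term is precisely the "extra term" with no counterpart for even k at least 4; dropping it ruins the agreement.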
It's what's called a quasimodular form, which we just heard a whole lecture about a few minutes ago, if you were there. So the transformation isn't quite true, and in particular, when I do the asymptotics, again as t goes to 0, then 1/t goes to infinity, and this is dominated by just one term. So it's the same expansion as before, but there's one extra term. And here you see why: if k is 2, there's one extra surviving term. The r equals 1 term is still there. But I also have to look at r equals 0: that is even, and B_0 is not 0, but for k at least 4 the zeta factor vanished; if k is 2, I get t^0 and ζ(0), which is the one case where the product doesn't get killed. So you also see the quasimodularity in the expansion for g_2. But where it's really fun is when you take k to be odd. So if you take k odd, let's take g_3(e^{-t}). Well, that's not modular at all. You absolutely can't use the previous method. But you do the same thing. So this is an exercise, just a straightforward computation. The calculation of the integral is the same as before; it's always (k-1)!, so here 2! times ζ(3), over t³. And then if you just do it, you'll get the following expansion. There's a pole term; there's a t-cubed term; and another pole term. It's an odd expansion, and it's an infinite expansion, because there is no modularity. In general, what you'll find for any k, even or odd (in fact, I already wrote it, but I think I just erased it, so I'll write it again): g_k(e^{-t}) will always be, to all orders, (k-1)! ζ(k) divided by t^k, plus a sum over r from 0 to infinity. If I now put in the zeta values in terms of Bernoulli numbers, and if I don't get anything wrong, it's this formula. And it's the same proof I just gave; it absolutely doesn't care whether k is even or odd. If k is 1, it's not quite allowed; there's a log.
You have to use the thing I did last time about a log singularity in f. But now you see that if k is even, then k - 1 is odd, so the two Bernoulli numbers in the product have opposite parity. So the product is always 0, except when one of the factors is B_1, which is what gave that constant term, and which happens twice for g_2. But if k is odd, then the two indices have the same parity, and so you get a product of two non-zero Bernoulli numbers. Now the r factorial would help with convergence, but B_r blows up like r factorial, so that alone would just leave something exponential, and here there's another Bernoulli factor, so it's factorially divergent. So this is always a divergent asymptotic series, but it is the full asymptotic expansion of this function. And the function is completely well-defined; the q-series is absolutely convergent, because q, which is e^{-t}, is less than 1. So that's a nice example. If the function is modular, you get a confirmation, but you see almost all the terms disappear. And if it's not modular, you see that too, because the terms don't disappear, but they're still completely explicit; they're rational numbers, and you get the asymptotics to all orders. By the way, this asymptotics gets used. I invented a word, not really any theorems, about 10 years ago, for what are called quantum modular forms. And now I have a more refined version of that, called holomorphic quantum modular forms, that I've talked about once or twice in Trieste. They're functions which aren't modular, but which have the property that the difference between f((aτ + b)/(cτ + d)) and what it should be is not zero, but extends holomorphically over part of the real line. And the odd-weight Eisenstein series are exactly functions with that property, which is quite related to these asymptotic expansions. So there's more going on here than just asymptotics and products of Bernoulli numbers. OK, well, I have now two minutes left. And that's too little for the last example. But I'll do it anyway, half of it.
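For k = 3 the first few terms of this divergent expansion can be made explicit and tested numerically. The coefficients below are my own evaluation of the general formula (Bernoulli coefficients of f against zeta values), offered as an illustration rather than as values from the lecture:

```python
import math

# For odd k there is no modularity, yet Euler-Maclaurin still gives the full
# (divergent) asymptotic expansion. My evaluation of the first terms for k = 3:
#   g_3(e^{-t}) ~ 2*zeta(3)/t^3 - 1/(12 t) + t/1440 + t^3/181440 + ...
# (a pole term, another pole term, then an odd power series, as in the lecture).

ZETA3 = 1.2020569031595943  # zeta(3), Apery's constant

t = 0.2
q = math.exp(-t)
# Lambert series form of g_3: sum_d d^2 * q^d / (1 - q^d).
g3 = sum(d ** 2 * q ** d / (1 - q ** d) for d in range(1, 600))
truncated = 2 * ZETA3 / t ** 3 - 1 / (12 * t) + t / 1440 + t ** 3 / 181440
# The series diverges, but a truncation approximates g3 to roughly the size
# of the first omitted term (here of order t^5 / 7e6).
```

Unlike the even-weight case, the error here shrinks only like the next power of t, not exponentially: that is the visible difference between a modular and a non-modular q-series.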
And then I promise you that I'll come back to it in a later lecture; maybe I'll do it at the beginning of next time. So let me just do it very briefly. So that's the last example. Let me take f(t) this time to be -log(1 - e^{-t}). I gave the expansion last time, because if you differentiate this, you again get 1/(e^t - 1), and so we know the expansion. So this has a standard expansion, which, well, maybe I will write it out. It's log(1/t), I hope I'm not making any mistakes, minus the sum over n from 1 to infinity of B_n/(n times n!) times t^n. So that's the expansion, if I got the signs right, that we had last time. This is asymptotic to all orders. And so for g(t), by the general theorem, which I explained last time, you can also allow log singularities. And then the full formula: there's a 1/t term, which is the integral, and you get the integral by the same method I used before; it's ζ(2). Then there's the general contribution for anything with a log: you always get (1/2) log(t/2π). And then for the other terms you do what you always do: it's B_n/(n times n!) times B_{n+1}/(n+1), up to sign, times t^n. That's by the general theorem. But then, again, this whole sum collapses, and I'm not sure about the sign, it's either with or without the minus sign: I think the sum is simply t/24, up to that sign. Because if n is 1, I get B_1 and B_2, and they're both non-zero. But if n is anything else, n and n + 1 have opposite parities, so one of the two Bernoulli numbers always vanishes. So here I get a terminating expansion. So I know what g(t) is. And what is g(t)? Well, g(t) is the sum, that's how we always define it, of f(mt). But f is minus a log. So g(t) is the log of 1 over the product, over m from 1 to infinity, of 1 - e^{-mt}.
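The terminating expansion just derived can be checked numerically. The sign of the t/24 term was left open above; the minus sign below is what my own bookkeeping gives (and it agrees with the eta transformation discussed next), so treat it as a resolved assumption:

```python
import math

# The terminating expansion above, with the sign of the t/24 term fixed by my
# own bookkeeping (the lecture left it open; minus is what comes out):
#   sum_{m>=1} -log(1 - e^{-mt}) = pi^2/(6t) + (1/2)*log(t/(2*pi)) - t/24 + O(t^n)
# for all n, with only exponentially small corrections.

t = 0.25
g = sum(-math.log(1 - math.exp(-m * t)) for m in range(1, 500))
prediction = math.pi ** 2 / (6 * t) + 0.5 * math.log(t / (2 * math.pi)) - t / 24
```

The agreement is to machine precision at this t, since the omitted terms are beyond all orders.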
Here, if I took 2πt, then this would be exactly the log of, well, essentially the Dedekind eta function; it's q^{1/24} times the product, with the argument τ, or τ over i, I can't do this in my head. I give up; I'll go back to what I said, 2πt, so with a real number, and then the product is essentially η(it). So in this case, this is essentially the log of the eta function that I spoke about briefly during the previous lecture. And that has a modularity property, which here says that η(i/t) is the square root of t times η(it). And η(i/t) is, to all orders, just its first term: η starts with q^{1/24}, so here that's e^{-π/(12t)}. So again, because of the modularity, you know the expansion of this to all orders. There's a single exponential, and when I take the log, this exponential becomes a 1/t term: it's -π/(12t). So if I take the log, I'll get a 1/t term, a (1/2) log t, which you see here, and everything else. And so the modularity would imply this. But here you have a nice example where, again, we don't use modularity. And the final example I'll leave as an exercise, and next time people can say if anyone succeeded in doing it. We define the MacMahon function, the same MacMahon that came up in connection with Ramanujan, when they did the computation of p(200). The MacMahon function was later realized to have a beautiful property: it's a generating function, and it counts plane partitions rather than ordinary partitions. I'm not going to say now what they are. You take the product of (1 - q^n) to the power n in the denominator; that's not modular at all. But you can do the same thing. And so the question, the exercise, is: what is the asymptotics as q tends to 1? And the nice way to do it is to put q, as always, as e^{-t}. And the question is, what is the answer as t tends to 0?
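As a numerical starting point for the exercise (checking only the leading term, so as not to give away the full expansion), one can take logs of MacMahon's product and watch the 1/t² behaviour emerge. The constant ζ(3) below is my own evaluation of the relevant integral, an assumption the exercise should confirm:

```python
import math

# Starting point for the exercise. Taking logs of MacMahon's product,
#   log M(e^{-t}) = sum_{n>=1} -n * log(1 - e^{-nt}),
# the integral term of Euler-Maclaurin predicts a leading behaviour C/t^2;
# my own evaluation gives C = zeta(3), which is what we test here (an
# assumption to verify in the exercise, not a value from the lecture).

ZETA3 = 1.2020569031595943  # zeta(3)

def log_macmahon(t, terms=4000):
    return sum(-n * math.log(1 - math.exp(-n * t)) for n in range(1, terms + 1))

t = 0.05
scaled = t ** 2 * log_macmahon(t)
# scaled should approach zeta(3) as t -> 0; the next corrections are of
# size t^2 * log t, so the agreement at t = 0.05 is only to a few digits.
```

Carrying out the full Euler-Maclaurin computation, with the products of two Bernoulli numbers now shifted by 2, is exactly the exercise.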
So can you describe that using this asymptotic method? Of course, you take the log, because it's a product and we need a sum. And then you just do the same thing. And just as here, you'll find that you get two Bernoulli numbers, but they're no longer shifted by 1; they're shifted by 2. And so now you get an infinite power series expansion. It's non-trivial, I mean, it doesn't terminate, but it's completely explicit. It gives you the answer. So that's the connection. And this, in connection with the circle method that I'll come to in probably two weeks, is what will allow us to find information, just like they did for partitions, about the number of plane partitions from this method. I'll do that case, or a similar case. OK, so that's all for now. And I'm only three minutes over time; that's not so bad. If there are questions, feel free to ask. You can also come afterwards. What about the Eisenstein series of half-integral weight? Can one do a similar analysis? No, that has nothing to do with this at all. So, I don't know if you could all hear: he asked about Eisenstein series of half-integral weight. It sounds like they're very related to the Eisenstein series of integral weight, but actually, they're completely different. The Eisenstein series of integral weight is this thing, where k is an integer; it should be even and bigger than 2, let's say. And then this one, up to a constant and so on, has an expansion with coefficients σ_{k-1}(n) on q^n. Now, if k is a half-integer, n plus a half, the series still converges for weight big enough, but it doesn't quite converge for 3/2. But anyway, for bigger weights there is an Eisenstein series, and it's a perfectly good modular form, but its Fourier expansion is different. For 3/2, it doesn't quite converge, but you can do a convergence trick. I did that 30 years ago, and it was one of the first examples of what we now know as a mock modular form.
The n-th coefficient of that Eisenstein series of weight 3/2 (it has a Fourier expansion) is roughly a class number. And similarly, a class number is also the value of an L-series, a Dirichlet L-series, at s = 1, or at s = 0 by the functional equation. If you take weight 5/2, you get values of L-series at s = 2, or at s = -1. These are deep number-theoretic quantities; they aren't trivial sums, whereas this was just a divisor sum. So although it looks like the same object, because it's still a modular form, that part doesn't change, the Fourier expansion is of a completely different form. So this has absolutely no relevance to that. Those things don't fit this, and they're way harder. To know the asymptotics you have to use, again, the circle method, and you get information about the asymptotics of H(n), but it's of a different nature. The same thing, by the way, happens if you look at higher Eisenstein series, like Siegel Eisenstein series. If you look at them, it depends a little on whether the genus is even or odd. Sometimes you get something like class numbers: on Sp_4, the next case of Siegel modular forms, you get class numbers. But on the next one, Sp_6, which is very interesting, you again get divisor sums, connected with a quaternion algebra. So which kind of number you get, whether the coefficients are elementary arithmetic functions like this divisor sum or highly non-elementary functions like class numbers, depends on the parity, and here, therefore, on integral versus half-integral weight. So the answer is: they don't fit this story at all. But they're wonderful functions, and I love them; they just don't fit with Euler-Maclaurin alone. If you have more questions, ask me privately or at the beginning of next time. So actually, one or two more: you had a question. I couldn't find you, but you took the mic; you had to take the mic for something.