OK, thanks. So we have a nice example here. When I came in, the room was 3% full, and asymptotically, if we go to infinity, it will obviously be overfull. To explain even what I know about asymptotics would take a semester, and to explain the whole subject probably years. So I'm going to give a kind of impressionistic account of a few things that I like — nice techniques that, in my experience, neither mathematicians nor physicists necessarily know, and that would be extremely useful in many problems that they see. I'm going to tell them very informally; you can certainly interrupt, and should. And I'll essentially not give any proofs of anything, but if after the coffee somebody comes back and would like me to speak about one of the topics in more detail, I can do it ad hoc.

So asymptotics means a lot of things. For instance, something that's rather amusing to me is the way people talk about asymptotic expansions. You take, I don't know, something like the integral of e to the t over t, dt, from 0 to x. That has an asymptotic expansion you can compute very easily, because there's a differential equation. Typically it might be that, as x goes to infinity, it behaves like a_0 plus a_1 over x plus a_2 over x squared and so on, where the series, however, doesn't converge. The meaning of asymptotics, in the sense of asymptotic series, is that if you take two terms, it's more accurate; if you take three terms, it's more accurate still. The more terms you take, the smaller the error gets — in other words, at each stage, if you take n terms, the difference is asymptotically equal to the next term. What's amusing is that this is a word we all see in our lives, and it's always a little mysterious, because the series diverges. But there's another thing which is very familiar, called a C-infinity function: a function which has a Taylor series to all orders, which doesn't have to converge, because the function isn't analytic — it's just C-infinity. It's got a Taylor series, and of course that's exactly the same thing. So the same concept goes under two different names, one very familiar and one always shrouded in a bit of mystery.

Now, what I want to talk about first is a practical problem that comes up all of the time. And actually, my second example I was particularly — sorry, I can't hear you at all. Oh, well, then I'll go from 1 to x; OK, I don't care, good point. There is a problem with the integral at 0, but we're not going to let little things like that stop us. I could also subtract one here; I had hoped it was so small you couldn't see it.

So what I want to tell you first is, I'd say, a trick — a numerical trick — that I found many years ago and that I use, I'd say, twice a week. It's very useful, but I've never met anyone who knows it. I'm sure it's not new, but it's somehow not standard. And I was very delighted — well, I would be anyway — that Boris Dubrovin is here today, because one of my examples is a numerical example I took from an old paper of his. It was just in my files; I took it at random. I have many examples, but that's the one I had chosen. So the question is this. Let's say you have a sequence of numbers a_1, a_2, and so on — there might be an a_0, I don't care — and we're interested in what happens as n goes to infinity. And you believe, you think, that the a_n maybe have a limit. Let's just start with that. But you think that they approach the limit very regularly.
Now, believe it or not, there are mathematicians who will simply make a table. Let's say you can compute the a_n up to n = 500, but it's too expensive to compute up to a trillion — you can compute up to 100 or 1,000, one at a time, but you can't compute a lot. There are actually mathematicians who will write a paper and say, well, we computed up to 1,000; a_1000 was 3.62 and a_900 was 3.59, so just eyeballing it, the limit is probably about 3.7 or something — completely vague. Or there are mathematicians who will make a graph of the first values up to 100 and try to eyeball what the limit is; even to one decimal, you can't do it. At the very least, what you should do is graph a_n against 1 over n — so you plot the values at 1, 1/2, 1/3, and so on. Of course you'll stop at n = 1,000; you can't go all the way to zero, because you can't go to infinity. But then at least your numbers will have a very clear linear dependence at the end, and eyeballing will give you, let's say, two decimals.

But let's say you don't want two decimals but 2,000 decimals. So you really want to extrapolate rapidly and highly correctly. The numbers themselves you have precisely — they're either rational or you computed them to high accuracy — but you only have 500 of them, let's say, a few hundred. Of course it depends on the speed of convergence: if the error is exponentially small, like a plus e to the minus n, there's no problem. But let's say that a_n has an asymptotic expansion a_0 plus a_1 over n plus a_2 over n squared and so on, where a_0 is the limit a. You may not know that; it may only be a conjecture, but the method will test at the same time whether this hypothesis is true. And then you want to find the limit a_0, and actually you'd quite like to find a_1 and a_2 also.

Before I tell you the method, let me give you two numerical examples, just to show the kind of problem you can apply this to. But as I say, I use this every week; there are always numbers coming up. The first example is the famous Apéry numbers A_n. They are 1, 5, 73, 1445, and so on — those are A_0, A_1, A_2, A_3 — and they grow quite quickly, as you can see. A_n is the sum over k from 0 to n of the binomial coefficient n choose k, squared, times n plus k choose k, squared. So here, of course, they won't have a limit. In this case, if you look very roughly, you'll soon see — let me get the notation consistent with what I had — that it would seem to be a constant c, times an exponential a to the n, times maybe a power of n, and then 1 plus b_1 over n plus b_2 over n squared and so on. Of course, I could absorb the c into this and write c_0 plus c_1 over n plus c_2 over n squared. But a small remark: in practice, very often, when you have such an asymptotic expansion — and this is a very frequent form, a pure exponential times a power of n times a power series in 1 over n — the pre-factor is very transcendental, but the remaining coefficients are then rational or much simpler. So it's always a good idea to take out that factor.

So the question is, how can I find these numbers numerically? Let me just show you what the method is able to do, without yet saying how. I checked this last night on my computer, so that I wouldn't be lying about the numbers. I computed, in PARI — which for this kind of thing is really the only good program to use, GP/PARI — the values A_1 up to A_500. That took a hundredth of a second, so I have 500 values.
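Since the definition above is completely explicit, generating the data is the easy part. Here is a small sketch of how one might do it in Python — the talk uses GP/PARI, so this is just an illustration, not the speaker's code:

```python
from math import comb

def apery(n):
    """Apéry numbers A_n = sum_{k=0}^{n} C(n,k)^2 * C(n+k,k)^2, as exact integers."""
    return sum(comb(n, k) ** 2 * comb(n + k, k) ** 2 for k in range(n + 1))

A = [apery(n) for n in range(501)]   # A_0 .. A_500
print(A[:4])                         # [1, 5, 73, 1445], matching the values quoted above
```

These exact integers are exactly the kind of input the extrapolation trick described below wants.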
From these 500 values, I'd like to get c and a and lambda and b_1 and b_2 and b_3 to very, very high precision — and 500 isn't that big. So I'll just tell you the result of the calculation first. The calculation tells me first that a — sorry, not c, a — is 33.9705627484... This is by numerical extrapolation, by the method I'll show you in five minutes: just numerically, using only 500 values, you can predict a to about 20 digits. And then, if you're good at recognizing numbers — which is a good thing to be in this field — you'll find that 1 plus the square root of 2, to the fourth power, is 33.9705627484..., the same digits all the way along, the two displays differing only where they are cut off at the end. So it's pretty good: we have a numerical method that, using a mere 500 values you can compute in a tiny fraction of a second, gives you a.

Then, still on the same computer, I plug in this a and look for lambda, and I find minus 1.500 followed by a whole bunch of 0s, like thirty 0s. So we're already beginning to find that A_n looks like 1 plus the square root of 2, to the power 4n, divided by n to the 3 halves. Then if you look a little more at the next coefficient — I'm not going to write out the decimals — you can again recognize it, the c. The c is a little complicated: it's 1 plus the square root of 2, squared, divided by 2 to the 9/4 times pi to the 3 halves. I'm not going to tell you how you recognize numbers like that — that's another story, but it's easy. So altogether the main term is 1 plus the square root of 2, to the power 4n plus 2, divided by 2 to the 9/4 times pi to the 3 halves times n to the 3 halves. And then similarly b_1 and b_2 — I'll give a couple of values. b_1 is minus 0.4185..., and you still get 30 digits, or 20 or something, even for that; you recognize it as 48 minus 15 times the square root of 2, over 64. So, as I said, the numbers after the constant are simpler — in this case they're not rational, they're all in Q adjoined the square root of 2, but the c was much more complicated, so it was still intelligent to pull it out even though these numbers weren't rational. And similarly the next one you recognize numerically as 2057 minus 1200 times the square root of 2, over 2 to the 12. And so on: you can recognize maybe 10 or 15 coefficients, to high enough accuracy to write them down uniquely, just with 500 values.

So the second example, before I show you the method. As I said, I read this particular thing in a paper of Dubrovin from a certain number of years ago, on the quantum cohomology of the projective plane. There's a famous set of numbers: N_k is the number of rational curves of degree k — it doesn't matter if this doesn't interest you, it's just a number for the moment — through 3k minus 1 generic points in the plane. So you take 3k minus 1 random points; that's just the right number so that you can force a rational curve of degree k to go through them, but then you'll find many such curves. If you make a little table: for k equals 1 there's of course only one line through two points, by Euclid; if k is 2, there's only one conic through five points, so it's 1; for 3 it's 12; for 4 it's 620; and for 5 it's 87,304. So you can see these numbers grow very fast, and of course one wants to know how they grow. There's a long story about this, but in particular Kontsevich found a nonlinear differential equation for the generating function, which implies a nonlinear recursion for the N_k. So you can easily compute, let's say, 1,000 values, or 10,000.
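For what it's worth, here is a sketch of that computation in Python. The recursion written below is the standard form of Kontsevich's recursion as I remember it, not something taken from the talk, so treat the exact binomial indices as an assumption — the sanity check is that the printed values reproduce the 1, 12, 620, 87,304 quoted above.

```python
from math import comb

def curve_counts(kmax):
    """N_k = number of rational plane curves of degree k through 3k-1 general points,
    computed from (what I believe is) Kontsevich's recursion, starting from N_1 = 1."""
    N = {1: 1}
    for d in range(2, kmax + 1):
        N[d] = sum(N[a] * N[d - a] *
                   (a ** 2 * (d - a) ** 2 * comb(3 * d - 4, 3 * a - 2)
                    - a ** 3 * (d - a) * comb(3 * d - 4, 3 * a - 1))
                   for a in range(1, d))
    return N

N = curve_counts(500)                 # exact integers; only a few seconds
print([N[k] for k in range(1, 6)])    # [1, 1, 12, 620, 87304]
```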
But you can't compute for a trillion. You don't have anything like a closed form; you can only compute them inductively. So the question is: given this, how can you work out the asymptotics? In the paper of Dubrovin that I quoted, he quotes an earlier paper of Di Francesco and Itzykson — and those were also very good people; Itzykson was a very famous mathematical physicist. What they found — well, they did guess the power of k, it's k to the minus 7 halves — but they found roughly c times a to the k, and I'm quoting from the paper: a, the important one, the a I mentioned, is about 0.138, and c is about 6.1. Quoted verbatim from the paper. But with the method that I'm about to show you, you get that a is actually 0.138 0093 466 345 257... In this case I couldn't recognize it, so it didn't help, but you can never recognize three digits; at least here you have a chance. And c — I also have lots of digits, and it continues: I've managed to get several more coefficients to high accuracy.

So how does this work? Now, with all this preamble, the actual method is trivial and very amusing. First, let me reduce the general problem. The general problem was: I have an a_n with some behavior a to the n, times n to the lambda, times c_0 plus c_1 over n and so on, and I want to find first a — the most important — then lambda, then c_0, c_1, and so on. Let's take the more special problem that I started with: you simply have a sequence of numbers which has a limit, and you want to find the limit and so on. That's the special case where a is known to be 1 and lambda is known to be 0, and I only want c_0 — and then c_1 and c_2. The first trivial remark is that if you can solve this easier problem, you can solve them all. Because if you have the general behavior, you see immediately that a_n over a_{n minus 1} will start like a times 1 plus lambda over n plus smaller terms. So if you can do the special problem, you'll already get the a from the ratios. But then you can divide out the a to the n, and you'll get the lambda. And you can see that, in general, if you know how to get c_0, nothing will stop you, once you've found c_0, from looking at the new sequence a_n minus c_0, multiplied by n, which will now start c_1 plus c_2 over n and so on; and you apply the same method again. So if you can solve this problem to get c_0, you can also get c_1, c_2, c_3 — but you can also get a and lambda. That's just a slight remark. So the real problem is this one: how do we get the limit c_0, which was the original problem anyway? When you have a sequence of numbers converging very slowly to a limit, but in a very regular way, how do you find it?

I'll tell you the mnemonic first. That was the question, and the answer is: you multiply by n to the eighth. That's how you do it. Of course, I'm not entirely serious — sometimes you multiply by n to the ninth, or, for instance here, I multiplied by n to the seventh. You pick a smallish integer: one or two would be a little too boring, and 15 or 20 is usually too much, depending on how many numbers you have and what precision. But the point is, we have the numbers a_n, let's say for n from 1 to 500 — but actually I don't even need that; let's say I just know them from 475 to 500. I only need a segment, and sometimes it's much cheaper to calculate just a few of them. So I have these numbers.
Then nobody can stop me from multiplying them, numerically, by n to the eighth. That's a new set of numbers, and I know them numerically. Now I take the eighth difference. Recall that if you have a sequence of numbers b_i, then delta b_i is b_{i+1} minus b_i — or sometimes b_i minus b_{i-1}; it doesn't matter, it's a question of normalization — and delta squared b_i is just delta of delta, so it's b_i minus 2 b_{i+1} plus b_{i+2}, and so on, with binomial coefficients. So if you have these numbers, you can just take the difference of n to the eighth times a_n, eight times — that's the eighth difference — and I prefer to divide by 8 factorial.

Now let's put in our ansatz. The ansatz was that a_n has an asymptotic expansion in 1 over n — I'm sorry, I keep changing notations because I didn't plan this carefully enough. I'm just assuming that; I don't know the c_i's. Well then, when I multiply by n to the eighth, the first terms give a polynomial of degree 8, the next term is c_9 over n, the next is c_10 over n squared, and so on. Now, when I take the difference: the difference f of x minus f of x minus 1 is roughly the same as the derivative, and for a polynomial it's trivial to see that a polynomial starting with n to the eighth has a first difference that starts, just like its derivative, with 8 n to the seventh. So when you take the first difference, the polynomial part becomes a new polynomial starting with 8 n to the seventh; take the difference again and you get 8 times 7, and so on; at the end it's 8 factorial, but I'm dividing by 8 factorial, so it gives c_0. And the terms c_1 over n up to c_8 over n to the eighth will all give 0. Why is that? Because a polynomial of degree 7, when you differentiate it eight times, gives 0 — and if you take its eighth difference, it also gives 0. So the c_0 that we want has survived, but the next eight terms have been killed. Now let's look at c_9. When you take the eighth derivative of 1 over n, you get 8 factorial divided by n to the ninth; so that term, since I'm dividing by 8 factorial, becomes c_9 over n to the ninth. In other words, the new sequence has exactly the same kind of asymptotic behavior as the original one, with the same c_0, but c_1, c_2, up to c_8 have all been killed, and c_9 is unchanged. To be honest, the next coefficient is not just c_10, because the eighth derivative of 1 over n squared involves 9 factorial; so when I divide by 8 factorial I've picked up a factor of 9. But that's not very serious.

So now I just take this at n equals 500. That will be c_0 plus roughly c_9 over 500 to the ninth. With the original series, if I had just computed at n equals 500 and taken that as the approximation, I would have been off by 1 over 500, which is the third decimal. Now I'm off by 1 over 500 to the ninth, which is the 25th decimal. So I suddenly have 25 digits of accuracy, and it's clear that I can vary the number 8 and do what I want. It's very simple to remember, very simple to use, and it works. In this particular case — the Apéry numbers — this asymptotic form you can actually prove, with some work, by Stirling's formula; it's a lot more work than doing it numerically. In the other case, it's not known, and so it's nice to have a numerical method.
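Just so the mnemonic is completely concrete, here is a minimal sketch of the trick in Python (the talk itself uses GP/PARI). The test sequence (1 + 1/n)^n is my own choice of toy example; the one thing to keep in mind is that the eighth difference cancels a lot of digits, so you should feed it exact rationals or high-precision values, not doubles.

```python
from fractions import Fraction
from math import comb, factorial, e

def extrapolate(a, k=8):
    """Given a[0], a[1], ... with a[i] ~ c0 + c1/n + c2/n^2 + ... at n = i + 1,
    estimate c0: multiply by n^k, take the k-th forward difference at the top end,
    divide by k!.  The c1/n ... ck/n^k terms are killed; the error drops to O(1/N^(k+1))."""
    n_top = len(a)                                     # largest available n
    b = {n: Fraction(n) ** k * a[n - 1] for n in range(n_top - k, n_top + 1)}
    diff = sum((-1) ** (k - j) * comb(k, j) * b[n_top - k + j] for j in range(k + 1))
    return diff / factorial(k)

# toy test: a_n = (1 + 1/n)^n -> e, with error ~ e/(2n), i.e. barely 3 digits at n = 500
a = [Fraction(n + 1, n) ** n for n in range(1, 501)]
print(float(extrapolate(a, k=8)) - e)   # ~1e-16: limited by the final float, not the method
```

Varying k (7, 8, 9) and checking that the answer is stable is the practical test that the 1/n ansatz was right, which is exactly the point made above.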
So that was one of my chapters. Now, on the little blue advertisement I wrote two sample problems, and one of them actually came up in a research problem that I'm looking at with Vasily Golyshev. He had a conjecture which I thought we could prove — we don't quite have a proof, but we're very close. He pointed out that the Apéry sum makes sense even if n is not a positive integer. For instance, take n equal to minus one half: I can put infinity as the upper limit of the sum — when n is a positive integer that changes nothing, because n choose k vanishes anyway once k exceeds n — and now the sum makes sense for every n. It's an analytic function; it's convergent. It converges roughly like 1 over k squared, so very slowly, but it converges. And in particular, if n is minus one half, it's the sum of the binomial coefficient minus one half choose k, to the fourth power — which is one of the two things I put on the blue sheet.

So he had a conjecture that this is equal to a factor, which I think is 16 over pi squared, times L of f at 2. Just for fun — it has nothing to do with my story today — I'll say what f is. f is the modular form eta of 2 tau to the fourth times eta of 4 tau to the fourth. Most of you know a little bit about modular forms, or at least have seen this, but anybody can write it out: it's q times the product over n of 1 minus q to the 2n, to the fourth, times 1 minus q to the 4n, to the fourth. So it starts q minus 4 q cubed minus 2 q to the fifth plus 24 q to the seventh, and then some terms that I don't quite remember. This thing has an L-series: L of f at s is just the corresponding Dirichlet series. But of course that converges extremely badly — I'm not even sure it converges at all for s equals 2, and even if it did, it would be very slow. To compute it numerically you need some tricks which many people know, and that's not my subject today. But anyway: any sum is a limit, because a sum is the limit of its nth partial sums. So you apply the limiting method I told you before to this binomial sum, and on a simple laptop, in 0.01 seconds, you get 500 digits — well, actually quite a bit more — and then in another hundredth of a second 500 digits of the other side. And they agree. So although it's still maybe not proved, we at least know that this identity is true to 500 decimals. Both of these things are either extremely slowly convergent or divergent, so this isn't quite asymptotics — well, it's related to asymptotics; I'm using this extrapolation method to get at these numerical values.
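Since it is the same trick as before, here is roughly what "apply the limiting method to the partial sums" could look like in code — my own illustration, not the speaker's script, and without the number-recognition step or the modular L-value to compare against; the practical check here is just that the accelerated value is stable when you vary the order.

```python
from fractions import Fraction
from math import comb, factorial
from decimal import Decimal, getcontext

def accelerate(a, k):
    """Same n^k / k-th difference trick as before, applied to the tail of the list a."""
    n_top = len(a)
    b = {n: Fraction(n) ** k * a[n - 1] for n in range(n_top - k, n_top + 1)}
    d = sum((-1) ** (k - j) * comb(k, j) * b[n_top - k + j] for j in range(k + 1))
    return d / factorial(k)

# partial sums of  sum_{k>=0} binom(-1/2, k)^4  =  sum_k (C(2k,k)/4^k)^4 ;
# the tail behaves like const/n + ..., so the partial sums have a 1/n expansion
S, partial = Fraction(0), []
for k in range(400):
    S += Fraction(comb(2 * k, k), 4 ** k) ** 4
    partial.append(S)

getcontext().prec = 30
for order in (6, 8, 10):
    v = accelerate(partial, order)
    print(order, Decimal(v.numerator) / Decimal(v.denominator))   # stable digits are trustworthy
```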
So that's what I wanted to say on the numerical side; that was the first topic, numerical extrapolation. Maybe I should list the topics — I didn't at the beginning because I don't know how far I'll get. The first was numerical extrapolation: given the first 500 elements of a sequence, try to give the full asymptotic behavior as n goes to infinity. The next thing I want to talk about, as an application of asymptotics, is special values of L-functions. Here I'll just very briefly give the result, because it's a very useful thing that everyone should have seen, and if anybody's interested in the proof and comes back, it's a possibility for what we could discuss after the pause. So let me just write the theorem. L-series — or L-functions, it's the same, just a synonym — means Dirichlet series. A Dirichlet series means that instead of a power series, a sum of a_n x to the n, you have a sum of a_n times n to a power; but it's usual to call the variable s, and actually to put in a minus sign, so the classical notation for a simple Dirichlet series is the sum of a_n over n to the s. And I want to know when this thing — well, it should converge somewhere, and then it'll converge everywhere to the right of that point, eventually absolutely. So I have an absolutely convergent series in some right half-plane, and I want to know when I can continue it analytically and what the values are. It turns out the simple values — the only ones for which you can really have a general method — are the values at the negative integers and 0, so the non-positive integers. So the question is analytic continuation: first you have to make sense of L at minus n, and then you have to evaluate it.

Just to whet your appetite: of course, if a_n is 1 for all n, then L of s is the function that everyone calls zeta of s — and unfortunately everyone calls it the Riemann zeta function, I don't know why; it's Euler's function, which Euler studied 110 years earlier — which is just 1 plus 1 over 2 to the s and so on. Euler famously found that zeta of 2 is pi squared over 6, and zeta of 4, and so on. But he also found that zeta of minus 1 makes sense — actually even zeta of 0, which is minus one half — and zeta of minus 2 also; all the negative even ones are 0, while zeta of minus 1 is minus 1 over 12, zeta of minus 3 is 1 over 120, zeta of minus 5 is minus 1 over 252, and so on. And that's what I put on the blue sheet: what does it mean to say that the sum of n to the fifth — which is obviously divergent, a sum of positive integers — is equal to this number, which is finite, rational, and negative? It looks crazy. And that's what Euler wrote in his paper of 1749: he wrote that this looks crazy, and the reader will think he's lost his mind to write such nonsense. But it really makes sense, and he explains why, and of course he was completely right. So the question is how we can do this, not just for the Riemann zeta function, but for any L.

So there's a general theorem, which is very easy to prove and very easy to state, and I'll state it in case you haven't seen it, because it's a good thing to know. Theorem: let L of s be the sum of a_n over n to the s — by this I just mean the a_n are some complex numbers, and this should converge for at least one value of s; then it automatically converges for all s with sufficiently large real part. Now take the corresponding power series: the same coefficients, n from 1 to infinity, but you put x to the n — actually q to the n might look better. So you change the 1 over n to the s to x to the n, and it turns out to be a little more convenient to replace x by e to the minus t; x before would tend to 1 from below, so t will tend to 0 from above. If the Dirichlet series converges for some complex number s, then the a_n have at most polynomial growth, so there's no problem of convergence here: if t is positive, e to the minus nt is exponentially small, and this function — call it g of t — makes sense for all positive t. So it's a perfectly good function, and the assumption of the theorem is that g has an asymptotic expansion — and here again the word asymptotic, in the sense I started my talk with. Having an asymptotic expansion just means being C-infinity from the right: infinitely differentiable at 0 from the right. In other words, g of t has a limit b_0 as t goes to 0, so it's continuous there with that limiting value.
Then g of t minus b_0, divided by t, has a limit b_1 — that would be the derivative — then the second derivative, and so on. So I'm just saying: if g of t is C-infinity at the origin — equivalently, has an asymptotic expansion b_0 plus b_1 t plus b_2 t squared and so on — then two very good things happen (I'm now writing backwards, I'm sorry). First, L of s extends to an entire function of s: originally it was only convergent, hence holomorphic, in some half-plane far to the right, but now it automatically is a holomorphic function for all s, with no poles. And secondly, for all n greater than or equal to 0, the value of L at minus n has a universal form: it's minus 1 to the n, times n factorial, times b_n. I won't prove it, but I just want to have it written there; I'll give an example in a minute.

Before that, let me generalize slightly — if I can find some colored chalk, which I guess I can. Let's say, slightly more generally, that there's also a term here which I didn't leave room for: beta divided by t. There are many more general forms, but let's take the simplest: g of t has a Laurent-type expansion with one polar term, beta over t, and then b_0 plus b_1 t and so on. Then all that changes is that L has a pole at s equals 1 — a single simple pole, whose residue is beta; the difference is entire, and the formula for L of minus n is unchanged. So that's very, very simple.

And now, as an example, let's do the Riemann zeta function, since that's what I started with — so this is about special values of Dirichlet series. In the case of the Riemann zeta function, L of s is zeta of s, which means a_n is 1 for all n, and g of t is simply the sum from 1 to infinity of e to the minus nt. That's a geometric series, so you can sum it: it's 1 over e to the t minus 1, for t positive. And this, by the very definition of the Bernoulli numbers, is the sum over l from 0 to infinity of B_l — the l-th Bernoulli number — over l factorial, times t to the l minus 1. So that's the expansion, and therefore we immediately get that zeta of s is 1 over s minus 1 plus an entire function: it has a meromorphic extension to all s with a unique pole, a simple pole of residue 1 at s equals 1. And zeta of minus n: I need the coefficient of t to the n, so l is n plus 1; the coefficient is B_{n+1} over n plus 1 factorial, and I'm multiplying by minus 1 to the n times n factorial, so it's just minus 1 to the n times B_{n+1} over n plus 1. That's the famous formula, which in the case n equals 5 gives the minus 1 over 252 we just had. So that's another good thing to know. It's not really asymptotics, because these are exact formulas — but it is asymptotic in the sense that, while in this case the series actually converges (this particular series converges for t less than 2 pi), there are many cases where the b_n are such that the series diverges, and it makes no difference: as long as it's asymptotically correct, you still get the analytic continuation and the special values. And the proof is three lines; it's very easy.
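As a quick sanity check of that universal formula, here is a small sketch (my own, in Python) that generates the Bernoulli numbers from their standard recursion and reads off zeta at the non-positive integers; it should reproduce the minus one half, minus 1/12, 1/120, and minus 1/252 quoted above.

```python
from fractions import Fraction
from math import comb

def bernoulli(M):
    """B_0 .. B_M in the convention B_1 = -1/2, from the recursion
    sum_{j=0}^{m} C(m+1, j) * B_j = 0  for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, M + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli(8)
# L(-n) = (-1)^n n! b_n with g(t) = 1/(e^t - 1) = sum_l B_l t^(l-1) / l!,
# i.e. zeta(-n) = (-1)^n B_{n+1} / (n + 1)
for n in (0, 1, 2, 3, 5):
    print(n, (-1) ** n * B[n + 1] / (n + 1))   # -1/2, -1/12, 0, 1/120, -1/252
```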
OK, so now I want to come to my third topic. There's a very nice application of all of these things from physics, and if I do anything after the pause — if there is a continuation — maybe I'll give that, because it illustrates all of the techniques. But for the moment I'm going to skip it and go to the next general thing. Again, I'm trying to show you tricks that I've found over the years, more in my own work, that I haven't really ever seen elsewhere — very easy, but extremely useful in the everyday life of a mathematician or a physicist. You want to find values of L-series at negative integers; you want to extrapolate numerically; and the third thing, which you very much want, is the asymptotic behavior — I'm an American, so there's no U — the asymptotic behavior of a series of the following form. Take a function f: some nice function that you know, where nice means it should be smooth at zero, let's say C-infinity, something like that, and a little small at infinity — small might be O of t to the minus 1 minus epsilon, for some positive epsilon. So the picture of f is: it eventually goes to zero, but otherwise it can be an absolutely arbitrary function. And now the kind of series that you find very, very frequently in mathematics — a huge number of problems can be put into this form — is g of t equals f of t plus f of 2t plus f of 3t and so on. You want to sum up this whole thing. If t is large, there's no problem: the first term dominates, in some sense. But if t is very small, then each term is tending to a constant, f of 0, so formally you get a constant times infinity, and it's not really clear what to do. So I'm interested in the asymptotic behavior of this as t tends to zero.

I'm going to give you a theorem that answers this. Obviously I have to make some assumptions, so let's assume that f itself, as I said, is smooth — C-infinity — at zero. That means, again, that it has an asymptotic expansion, a Taylor series expansion, not necessarily convergent, as t goes to zero, in the same sense as before: f of t tends to a_0, f of t minus a_0 divided by t — which is the derivative — tends to a_1, and so on. Let's assume that, and that it's a bit small at infinity. Before I write down the theorem, let me solve the problem in two different ways: Riemann's way and Euler's way.

Riemann's way would be this. t is very small, so here's f of t — I mean, t is this distance — so f of t is here, f of 2t is there, f of 3t, and so on: you're just sampling f at spacing t, which is very small, all the way to infinity. So it's clear that t times this sum is an approximation to the Riemann integral, and therefore it's clear that you should have the asymptotics I over t as t tends to zero, where I is the integral of f from zero to infinity. That's Riemann's answer. It's very crude — it's only one term — but at least it's correct.

Now Euler's way, which is not crude at all, but also not correct: it will give lots of terms, but it's wrong, and certainly not proved, as you'll see in a minute. You simply do the following, ignoring all convergence problems: g of t is the sum over m from 1 to infinity of f of mt.
But f of mt is the sum over k of a_k times mt to the k. So, interchanging blithely — ignoring convergence completely; of course this was already false, and it becomes much falser — after I interchange I get the sum over k of a_k t to the k, times 1 to the k plus 2 to the k plus 3 to the k up to infinity. That inner sum is obviously divergent. But on the other hand, Euler himself computed it: it's what we now call zeta of minus k, and he gave the formula that I just told you, with a Bernoulli number in it. So although the method was horrendously illegal, the final result at least makes sense: it's a perfectly good formal power series — it may diverge, because the zeta of minus k blow up like factorials, but at least it's a formal power series. And it's the same as the original power series of f, the same a_k and the same t to the k, but with each term multiplied by a constant which is essentially a Bernoulli number — for instance, the coefficient of t squared gets divided by 6.

And so the theorem is very short: the answer is the sum of those two. The actual answer — it's not very beautiful, because I'm not much of an artist — is I over t, plus the Euler sum, the sum over k of a_k times zeta of minus k times t to the k. And that is correct, now, to all orders. So that also is a nice mnemonic; after you've seen it, you can't forget the answer. You do it the Riemann way, which gives the correct leading asymptotics but is very crude; you do the Euler way, which tells you what all the other coefficients should be; and indeed, together, it's exactly correct. And again, the proof isn't very hard. It's an incredibly useful thing.
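To make the Riemann-plus-Euler recipe concrete, here is a tiny numerical check — my own choice of test function, not one from the talk. Take f of x equal to 1 over (1 plus x) squared, for which the integral I is 1 and the Taylor coefficients are a_k equal to minus 1 to the k times (k plus 1); the exact sum of f of mt happens to have a closed form via the trigamma function, which is used here only to have something to compare against.

```python
from mpmath import mp, mpf, polygamma, zeta

mp.dps = 40
t = mpf('0.01')

# f(x) = 1/(1+x)^2:  I = integral_0^inf f = 1,  f(x) = sum_k (-1)^k (k+1) x^k
# exact value of g(t) = sum_{m>=1} f(m t), via  sum_{m>=1} 1/(1+mt)^2 = psi'(1/t + 1)/t^2
g_exact = polygamma(1, 1 / t + 1) / t ** 2

# the recipe: Riemann term I/t  plus  Euler term sum_k a_k * zeta(-k) * t^k
g_recipe = 1 / t + sum((-1) ** k * (k + 1) * zeta(-k) * t ** k for k in range(10))

print(g_exact)
print(g_recipe)   # the two agree to twenty-odd digits at this t; the error is O(t^11) here
```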
So, I spoke so fast that I actually have some time left. I gave several applications of the first thing, the numerical extrapolation — three examples: the Apéry numbers and their asymptotics; the asymptotics of the numbers coming from quantum cohomology; and the evaluation of the infinite sum defining A with index minus one half. For the second thing, special values of L-functions, I only gave you one application, zeta of minus n; there are many more, and that's the one I might do after the pause. But let me give you at least one example of how to apply this third thing. To use it, you first have to remember it — and I hope you'll find it easy to remember with the mnemonic: you take the sum of the Euler way and the Riemann way. But I also want to show you how to use it, because there are many, many cases where you can't see immediately that it's applicable; if you practice a bit, you find that it applies in many situations. So I've chosen one example I like. My favorite field is modular forms, so I'll take an example where, if you use the theory of modular forms, the answer is easy in certain cases and hopeless in others; but we're not going to use the theory of modular forms, and it'll be easy in all cases. So I'm going to reproduce a result which is well known and trivial using modular forms, but without using them, and also in much more generality.

Let me take the following functions g_k, where k is a natural number, at least 2 — so it might be g_2 of q, g_3 of q; those of you who know modular forms won't be surprised at the choice of letter. g_k of q is the infinite sum, a power series in q, whose coefficient of q to the n is sigma_{k minus 1} of n. I could use sigma_k and shift everything, but if you do know modular forms, k is what you'd expect: it's the weight. This sigma_{k minus 1} of n is simply, by definition, the sum of the divisors of n — which by convention means the positive divisors — each raised to the power k minus 1. These are highly convergent sums, because sigma_{k minus 1} of n is bounded by a small power of n, and q to the n is exponentially small if, as I'm going to assume, the absolute value of q is less than 1; so I'm in the complex unit disk, and the series converges exponentially fast — there's no problem. Just to write out a few terms: the sum of the divisors of 1, to any power, is just 1, so they all start with 1 times q. For g_2, the divisors of 2 are 1 and 2, so the next coefficient is 3; the divisors of 3 are 1 and 3, so I get 4; the divisors of 4 are 1, 2, and 4, so I get 7; and so on. Similarly for g_3 I take the sums of squares of divisors: 1, then 1 plus 4, then 1 plus 9, then 1 plus 4 plus 16, which is 21, and so on. So these are completely explicit power series, rapidly convergent for q less than 1 in absolute value, with integer coefficients of polynomial growth. And we want to know the asymptotics — the complete asymptotics, to very high precision — as q tends to 1. It's tending to 1 from inside the unit disk, of course — say along the real axis.

So how do you do that? First, just for those of you who do know a little about modular forms — one only needs the very, very first example. Suppose k is, first of all, not 2, and also not odd but even; so even and at least 4. Then — let me make sure I get the normalizations right — modular forms theory (if you don't know what modular forms are, it plays no role here, I'm just saying there is a theory; but most of you do know it) gives you an exact functional equation. It's the following. I take this function g_k; I change the variable q to e to the minus 2 pi t — that turns out to be better than e to the minus t, it makes the formula simpler — and I add a constant, which is minus B_k over 2k, the very same Bernoulli number we already had; this constant is actually one half times zeta of 1 minus k, for k even, which it is. So I look at this thing, and modular forms theory tells you that this exact function satisfies an exact functional equation: it equals minus 1 to the k over 2 — which makes sense because k is even — divided by t to the k, times the very self-same function, but with t inverted, t replaced by 1 over t. That's an exact formula coming from modular forms theory, and not particularly hard to prove.

Now, the function at 1 over t is exponentially small as t goes to 0. So as t goes to 0 — which means q goes to 1, since q is e to the minus 2 pi t — that side is exponentially small, in particular smaller than any power of t. And that means that in the asymptotic expansion of g_k of e to the minus 2 pi t there are only two terms: there's minus 1 to the k over 2, times minus B_k over 2k, times t to the minus k; and then there's plus B_k over 2k. And that asymptotic expansion is correct to all orders — in other words, it's a terminating expansion. A priori, there could have been infinitely many powers, but only two powers occur: it's a Laurent-type expansion with one negative power, one zeroth power, and that's all. So to all orders in t, we have that. But that's not what I'm talking about.
That's not asymptotics — this is exact, and comes from modular forms theory. You might say this looks hopeless if you didn't know the functional equation. But I want to show you that it's not hopeless; it's easy, and now we can do it even if k is 2, which was an exception here (it gives a so-called quasi-modular form), and also if k is odd, in which case you can't use modular forms at all, because the function isn't modular and there is no functional equation.

So what do you do? g_k of q was the sum over n from 1 to infinity of — sum over positive d dividing n of d to the k minus 1 — times q to the n. Turning that around, it's the sum over d from 1 to infinity of d to the k minus 1, times the sum of all q to the n where n is a multiple of d; and summing that geometric series, you get the sum over d of d to the k minus 1, times q to the d over 1 minus q to the d. That was just a very simple transformation. Now we make the substitution q equals e to the minus t — here there's no particular reason to put in the 2 pi, so I'll just use e to the minus t — and we do one slight trick: besides the change of variables, I also multiply by t to the k minus 1. That's harmless: if I know the asymptotics with that factor, I know them without it. If I do that, I see that this is the sum over d from 1 to infinity of dt, to the k minus 1, divided by e to the dt minus 1 — because q to the d over 1 minus q to the d is 1 over e to the dt minus 1, and d to the k minus 1 times t to the k minus 1 is dt to the k minus 1. So, changing d to m just to match my preceding notation, this is exactly a special case of my preceding thing: it's the sum of f_k of mt, where f_k of t is t to the k minus 1 over e to the t minus 1.

Well, now I'm done, because I take the formula — remember, it was the sum of the Riemann term and the Euler term. The Riemann term is I over t, where I is the integral from 0 to infinity of t to the k minus 1 over e to the t minus 1, dt. You should be able to do this integral in your head; if you can't, try tonight, and you'll find in five minutes that you can — it's very easy. This integral is simply k minus 1 factorial times Riemann's zeta of k. So that's the leading term, the I over t. For the other terms, we have to take the Taylor expansion of f_k. But remember that 1 over e to the t minus 1 has an expansion we know: B_l over l factorial times t to the l minus 1; and here I'm multiplying by t to the k minus 1, so the exponent becomes k plus l minus 2. And now I apply the Euler part of the recipe, which multiplies the coefficient of t to the k plus l minus 2 by zeta of minus that exponent — and remember, zeta at a negative integer is essentially a Bernoulli number divided by something, with perhaps a minus sign.

So that means that t to the k minus 1 times g_k of e to the minus t — the thing to which we can apply this trick — is, and now we're nearly done: the integral, which was k minus 1 factorial times zeta of k, divided by t; plus the sum over l greater than or equal to 0 of B_l over l factorial, times — if I got it right, which I may not have — B at k plus l minus 1, over k plus l minus 1, I think with a minus sign, it doesn't matter — times t to the k plus l minus 2. But then I might as well divide by t to the k minus 1 and take the thing I actually wanted. If there's a mistake anywhere, it's most likely here — when I did it at home, I think this exponent was l and not l minus 1.
But this is essentially the expansion; the details don't matter if that went too fast. The point is that I applied the recipe, and in three minutes I just wrote down the complete expansion. I didn't have to know any modularity properties, I didn't have to do anything exact: it's a purely asymptotic formula.

Now, if you look at this, you'll see it's very amusing. The Bernoulli numbers — as was already discovered by Seki and by Bernoulli 350 years ago — have the property that all the odd ones, except the first, are 0. So if k is even, which is what I first assumed — k was 4, 6, 8 — then k minus 1 is odd, so l and k plus l minus 1 have opposite parities: one of them is odd. But all odd Bernoulli numbers are 0 except B_1, so the only way a term can survive is if l is 1. So if k is in the nice modular set I originally had — even and at least 4 — the only surviving term is l equals 1, and then B_1 is minus one half; zeta of k is also B_k times some very simple factor; and since l is 1, the exponent l minus 1 is 0, so it's the constant term, again a simple multiple of B_k. So what you get in this case is a terminating series, just like we had before: an asymptotic series, written down to all orders. Of course, when I write equality, I mean asymptotically — the series doesn't necessarily converge, though here it does, because it terminates. As an asymptotic series, it means that no matter how many terms you take, the difference between the function and the partial sum is of the order of the next term; but here it stops. There are only two terms: the first is a simple multiple of B_k over t to the k, and the other is a simple multiple of B_k, the constant — every other term carries a Bernoulli number which is 0.

If k is 2, it's almost the same; you get roughly the same thing — it depends on whether you put in the 2 pi or not, and I don't have it in my notes, so I won't try to do it exactly correctly. In the case of g_2 — this time I'll put the 2 pi back in — if what I just did had applied, there would be two terms, 1 over 24 t squared and 1 over 24. But in that case you have one more possibility: if k is 2, you can also have l equal to 0, because then k plus l minus 1 is 1, and B_1 is also non-zero. So you get one extra term, which, when you work it out, is minus 1 over 4 pi t. So you immediately get — what quasi-modular forms theory also tells you, with more difficulty — that there's a similar asymptotic expansion, again terminating, true to all orders: the full expansion has only three terms, a 1 over t squared, a 1 over t, and a constant. So: if k is 2, three terms; if k is 4, 6, 8, and so on, only two terms, a 1 over t to the k and a constant. But if k is, for instance, 3, then it doesn't terminate. If k is 3, or any odd number, then l and k plus l minus 1 have the same parity, so you get contributions from all the even l. For instance, I'll just write down the answer for g_3: it's 2 times zeta of 3 over t cubed, minus 1 over 12 t, plus t over 1440, plus (or is it minus?) t cubed over 181440, et cetera — and now it doesn't terminate; there are infinitely many terms. Modular forms theory would not tell you that. The series doesn't converge anyway, so the right-hand side isn't really a function, but it's a perfectly good asymptotic expansion. And so, with no particular trouble, we've found the asymptotic expansion to all orders.
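Since the speaker hedged on the sign of the t-cubed term, here is a quick numerical check — my own, in Python with mpmath — of the g_3 expansion: sum the Lambert-type series derived above directly at a smallish t and compare with the four terms just written down, taking the t-cubed term with a plus sign. Under that assumption the two should agree to a dozen or so digits, since the next correction is of order t to the fifth.

```python
from mpmath import mp, mpf, exp, zeta

mp.dps = 30
t = mpf('0.05')

# g_3(e^{-t}) = sum_{d>=1} d^2 / (e^{d t} - 1)   (the Lambert-series form derived above)
g3 = sum(d ** 2 / (exp(d * t) - 1) for d in range(1, 2001))   # terms negligible beyond d ~ 2000

asym = 2 * zeta(3) / t ** 3 - 1 / (12 * t) + t / 1440 + t ** 3 / 181440
print(g3)
print(asym)    # agreement to roughly 13 digits at t = 0.05
```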
I intentionally went very fast; I just wanted to show you that, by applying the rule, you can in a few minutes handle a series to which it doesn't look like the rule applies — like this one with the sigma_{k minus 1} of n. Often, by transforming one series into another form, you can make a new thing to which the general principle applies. So let me just repeat the general principle: if f is a nice function, then the sum over n of f of nx is also a nice function, and you can find its asymptotics as x goes to 0. There are many variants. For instance, you might not sum over the integers but over half-integers, or over integers plus alpha, where alpha is some shift. Or you might have an f with some singularity at the origin — not just a power series, but a term 1 over t, or log t, or square root of t. All of those things can be done; you have to change the Euler and Riemann pieces very slightly, but essentially it's the same, and then you get a huge number of new applications. I can also give a reference for all of these things, so that if you want to see more slowly how it works, you can look, and there are worked examples. But I wanted to show you a variety of numerical methods that you can use to get at these three kinds of things: extrapolation of sequences, values at negative integers of Dirichlet series, and evaluation of sums of the form f of nx.

So maybe I'll even stop five minutes ahead of the hour, unusually. I had another example, but it's too much work, so I'll stop; you can ask questions now, and we'll have coffee. If anybody wants, I can say in a word what the application is that, if you would like, I could do after the pause — and if not, I can also give you the reference and you can just go and read it. It's an application to a thing that the physicists have and do in an extremely non-rigorous way that doesn't make much of any sense — and they also get only the leading term; if you do it using this kind of technique, you can get the full answer. The physical thing is what's called the Casimir effect, and the actual quantity the physicists want is this completely crazy sum. You take the triple sum, over three integers L, M, and N in Z of — well, there's a normalization in there that I've forgotten; it's just for convenience, mine is 2 pi. And there's a parameter lambda; the physicists are interested in lambda going to infinity, but you can also ask for fixed lambda, or for lambda tending to 0, and you can do all three. It's really a crazy thing that the physicists need — some electrostatic quantum effect between two metal plates that are very close to each other, where you assume there's some lattice of atoms — and you end up having to study the asymptotic behavior, for lambda large, of the triply infinite series: the sum over L, M, and N of the square root of L squared plus M squared plus lambda squared N squared. That's even worse than Euler's thing, because the individual terms all tend to infinity, and they're no longer integers, just some complicated numbers; the sum obviously doesn't make sense as it stands. So the problems here are: first, to make sense of it — to make a well-defined function that has this interpretation; then to show how to compute it, because there's a fairly easy way to do that, but it would be very, very slowly convergent — so how do you actually compute it?
Let's say lambda is 1: you want f of 1 to 100 digits — you can do that, very easily — and then also find the asymptotics when lambda is small or large. So even if I don't present it, that was the application, and I can tell you where to look if you want to read everything I've told you at a much slower pace, with examples and details. There's a very nice book, which will eventually have five volumes, by Eberhard Zeidler: an introduction to quantum field theory for mathematicians — well, an "introduction" of five 1,000-page volumes, of which the first three have now appeared. At the end of volume one — or somewhere, it may not be at the end — there's an appendix by me, because Zeidler asked me if I could give a rigorous explanation of how to make sense of and compute this Casimir function. That was a nice example, and I took advantage of the opportunity to write down all of these asymptotic tricks about Dirichlet series and about asymptotics. So it's published; it's also on the website — if you just go to Google and type my name and then Zeidler, you'll find it instantly — and all of the formulas are there.

So I'll stop with that, and I say questions are more than welcome. OK — yeah, short questions, because we're short on time. No questions? That's too few. Does anybody have a favorite series that they would like to sum, or a sequence of numbers whose asymptotics they want to know? It would actually surprise me if some of you didn't, because, as I say, it happens to me easily once a week that I have a sequence of numbers that came up in some problem, and either I do know how to handle it theoretically, but it would be a lot of work — it's much quicker to use the asymptotic method and just have the computer tell you — or, very often, like in the quantum cohomology problem, it's not known theoretically, and then it's very nice to be able to at least convince yourself that the asymptotics are as they are, or that the sum is what it is. If anyone has one... Well, I've actually only ever heard that from you — but I have heard it from you. Anyway, thanks.