If I don't speak up, tell me. I can talk to people here. So apparently we aren't yet switched on. So just to say thank you to those of you who came physically; it's easier to sit in your office or at home and watch it on the screen, but it's much, much nicer when you come, and also you can ask questions, and I feel I'm talking to actual people. It's a big difference. So I hope I don't put everyone off so that by the second lecture you never come again. Professor, we're ready whenever you want. OK, you're ready, I'm ready. So I don't know how many people are participating in this lecture from other places, and some will be participating in a few hours, because this course has been announced jointly from the two places I work for in Trieste, which are the ICTP and SISSA, which combine in IGAP, the Institute for Geometry and Physics, which I'm somehow affiliated with; whether it exists in any actual form is not too clear to me, but it's slowly coming into existence. But I also have a visiting position, usually a month a year, in China at the Southern University of Science and Technology in Shenzhen, and this was meant to be a joint course. We were hoping to do it at 11 in the morning or 12 here, so people there could see it, but because of the restrictions on the Budinich Lecture Hall it has to be from 4 to 5.30, which takes it to practically midnight in China. So some people will be watching it tomorrow, or tomorrow morning; I say hi to my friends in Shenzhen. So you've presumably all seen the announcement on both the website and in emails, and as of today there's also a poster, which I've only seen on my screen, not yet on the wall; very nice. And it gives all the details, which are that the course is meant to be for six weeks, twice a week, each time an hour and a half.
And maybe I'll write that down, although I'm sure you've all seen it because it was in all of the announcements. For the first four weeks, again because of the availability of the Budinich Lecture Hall, it's Tuesdays and Thursdays, as today, from 4 to 5.30. And for the last two weeks you can work it out yourself, or it's written in the announcement; it's not very difficult. And obviously when it changes I'll announce it during the course. Not to forget, now I've forgotten: is it 2 to 3.30 or 2.30 to 4? I'll put 2 to 3.30 with a question mark. Maybe it's 3 to 4.30; maybe I won't put anything. Anyway, by that time you'll know, or you can look at the announcement. So the main thing to know about this course is that it has only one objective: it's not to teach anybody anything, it's to have fun. I want to have fun, I want you to have fun. These are methods which are very useful in one's everyday life as a mathematician or a mathematical physicist, but a lot of them are simply fun and sometimes even funny. And asymptotics is a very, very human thing. It's not a very abstract part of mathematics: a second-year undergraduate can understand essentially everything, but the most sophisticated researcher needs it all the time. Since I started falling in love with this subject, which was many years ago, I've used it more and more in more and more different ways, and I would say every day I use at least one or two of the methods I'll talk about in this course, most of which nobody except the friends I've shown them to knows. Some may be in books, some aren't in books, but most people, if they have a sequence of numbers and have to figure out how they grow, use Mathematica or whatever and make a graph. So let's say you have a sequence of numbers a_n, and you've calculated the first 1000 of them.
a_1 is, I'm just going to invent some numbers, and then a_1000 is 6.72, but they're quite expensive to calculate. You could calculate another 1000, but it would take a day; you can't calculate a billion of these numbers. And so what many people do, and frankly this is basically so dumb that you would think no one would do it, but I know lots of people do it, including very, very smart people: they take Mathematica, they make a graph. Here's a_1, and then by the end it's 6.72, and then they eyeball it, and it's growing, let's say it reaches 6.9. So you get like one decimal, which may well be wrong. If you do it correctly, and assuming you have these numbers to high precision, with a thousand numbers typically you can get 200 digits of that extrapolation. You can do it extremely accurately, and it's a very, very simple trick. That's one that I use at least once a week, if not once a day, but that's one of many topics I'll talk about. So asymptotics are fun. The course is meant to be fun, and that also includes the people who are listening by Zoom and have to raise their hand or do something. Everybody should ask questions if anything is unclear, which will happen all the time, because it's me, or if I speak too fast, which will happen all the time for the same reason. Somebody could just raise their hand and say, could you go through that again, or give an example, or just speak more slowly, or take off your mask so we can hear you, for instance. There's no danger of infection when I'm standing here; I completely forgot. So feel absolutely free. It's meant to be informal, and especially the people who are physically here should feel completely free, but also, if you're watching online and want to ask a question: I just talked to Mark, who's the technical person, and he said nobody is muted, so you can just switch on your microphone, or it is switched on, and you can just ask.
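The extrapolation trick alluded to here can be illustrated with a toy version (this is my sketch, not necessarily the exact method from the course: I assume a sequence whose error has an expansion in powers of 1/n, and apply iterated Richardson extrapolation in closed form, using the fact that the K-th finite difference of a polynomial of degree below K vanishes):

```python
import math

def a(n):
    # toy sequence: partial sums of 1/k^2, converging to pi^2/6,
    # with an error expansion  a_n ~ A - 1/n + 1/(2n^2) - ...
    return sum(1.0 / k**2 for k in range(1, n + 1))

def extrapolate(a, N, K):
    # closed form of K-fold Richardson extrapolation: the K-th finite
    # difference of (N+k)^K * a_{N+k}, divided by K!, kills the
    # 1/n, ..., 1/n^K terms of the error expansion
    return sum((-1) ** (K + k) * math.comb(K, k) * (N + k) ** K * a(N + k)
               for k in range(K + 1)) / math.factorial(K)

limit = math.pi ** 2 / 6
raw = a(10)                      # plain partial sum: error about 0.1
extrap = extrapolate(a, 10, 6)   # same data through a_16: many more digits
```

In exact high-precision arithmetic with more terms, this is how a thousand values can yield dozens of digits; in double precision the large binomial weights limit the gain to a handful of extra digits.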
You can also, if you're embarrassed or not sure, ask a question by chat, but I won't see it; I'm obviously not reading the chat window. So then you still have to say loudly into the microphone, I have a question in the chat window, and then I can look. So really feel free to interrupt. So as I said, there'll be 12 lectures, six weeks times two, each time an hour and a half. It's meant to be two hours, but with a 15-minute break, arranged so that the technical people don't have to stay over time. So this is meant to be two times a short hour, and I'll try to make a brief break in the middle; if I forget, maybe somebody could remind me and say, could you please shut up for a few minutes. So roughly it's six weeks or 12 lectures, and I have a number of topics; some of them will take two or three lectures because they're more interesting or more useful, some might be only half a lecture, and for some we'll probably run out of time anyway. So for today, at least for the first half, maybe for the whole double hour, I want to give a kind of overview of the course: what are some of the topics that are going to come, but also some of the aspects that should make you aware of why you should care about asymptotics and what there is to look forward to. So let me start with a philosophical remark. As I get older, I get more philosophical and make general observations about the sociology of mathematics. I have several examples, but please don't ask me for the others because this morning I couldn't think of any of them. But I've thought of three or four examples in my life at least, where the same concept occurs in mathematics in two completely different contexts, and the people in the two contexts naturally have different names for it. They see it completely differently, but it's mathematically the same.
And sometimes there are people, even many people, even old people, who know both, but they don't notice that it's the same because the two live in different compartments of the mind; they have different names. And so the only one I can think of (I have other examples that I couldn't think of this morning, so please don't ask; maybe by next time) is formal power series. A large part of this course, almost all of it, will be about power series. So power series are two things in mathematics. First of all, they're functions which have an expansion, the Taylor expansion of a function. So you think of this as a function, and I remind you that a function is not something abstract: it's an operation which assigns to a number another number. So then the series should actually converge if it's actually a function: if x is a number, I want this to be a number. We all know the criterion, more or less; well, sometimes there are series where it's hard to decide, but roughly if the terms blow up, it's not going to converge, and if they go exponentially to zero, it will converge, and essentially any power series except for one limiting value is in one of those two classes. So this is either a function or a formal power series. So let's say that the coefficients are in C; usually they'll be in Q, or anyway in R, but contained in C. So it's just an element of C[[x]]. Now I know from experience, and from courses even here at SISSA and at ICTP and in other places, that there are many people, graduate students but also more advanced researchers, who are frightened of asymptotic power series. When you write a series like the sum of n factorial divided by x to the n, and you say you have some function which has this asymptotic expansion at infinity, to me that's a very simple statement, but there are people who get frightened; they say, what does that really mean?
Since after all this is supposed to describe a function, and there should be another function such that the ratio of the two tends to one, but the series itself is not a function. And they kind of know this, but they don't feel comfortable with divergent series. So that's one thing we all learn: the words divergent asymptotic series, or formal power series, and we learn it is a purely formal thing; but if you think of it numerically, with x being a number, it makes you very edgy. On the other hand, there's also something else, which is the word smooth. Now I doubt that any mathematician beyond, you know, second or third year undergraduate was ever even remotely terrified by the word smooth. Some functions are analytic, which means that at every point on the real axis they have a power series: the Taylor expansion converges in some neighborhood and gives the function back. And some functions aren't analytic, but they're still smooth, they're C-infinity. Those are synonyms, everyone knows that: C^3 would be three times differentiable, smooth usually means infinitely differentiable, and nobody's worried about that.
And so you have a function, here's a graph of the function, and at some point it has a value which is well-defined, it has a slope which is well-defined, it has an osculating parabola which is well-defined, and so on: a Taylor expansion to the 0th order, to the first, to the second, to the third, to every order. And it's simply this: if f(x), now an actual function, not something formal, is analytic at x = x_0, that means, as we all know, that the Taylor series, I'll just write it out, the sum of f^(n)(x_0)/n! times (x - x_0)^n, converges and equals the function for x sufficiently near x_0, in some neighborhood of x_0. On the other hand, it's smooth, or C-infinity, if f(x) tends to some limit a_0 as x tends to x_0; then, since there's a limit, I can try to differentiate by subtracting that value at the one point and dividing by x - x_0, and this should tend to a_1, et cetera, and this gives me the same Taylor series. Simply, there's no requirement that that series converges, or, if it does, that it converges to the function. So C-infinity means the function has a Taylor series at every point, or at the point you're looking at, and that Taylor series doesn't have to converge; it's simply asymptotic. And the meaning of asymptotic is: if you take N terms, then the remaining error is O of the next term. Well, that's exactly what it means here, but for some reason nobody finds that scary and lots of people find this scary. I've never known why, and most people haven't made the connection that the word smooth, which is very familiar, is the same as an asymptotic expansion. So that's just a word about not being frightened, but I also find it an amusing remark. Most people will probably say, I would never make that mistake, of course it's the same; but I assure you many people worry about divergent series, what does it mean if it diverges, and nobody
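A concrete instance of smooth-but-not-analytic is the standard textbook example f(x) = e^{-1/x} for x > 0, f(x) = 0 for x <= 0: every Taylor coefficient at 0 vanishes, so the Taylor series converges (to zero) but not to the function. A minimal numerical check, which I add here as an illustration:

```python
import math

def f(x):
    # smooth everywhere, but not analytic at 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

# e^{-1/h} goes to 0 faster than any power of h, so every derivative
# of f at 0 is 0: the Taylor series at 0 is identically zero, yet
# f(h) > 0 for h > 0, so the series does not converge to f.
h = 1e-2
for k in (1, 5, 10):
    assert f(h) / h**k < 1e-20   # f(h) = e^{-100}, astronomically small
```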
worries about a function being smooth at a point where the power series doesn't converge and the function isn't analytic; that's as familiar as pi. So that's one comment. Another comment is that, for everything I do, some of the techniques I'll describe I found numerically, because I do numerical computations more or less every day, and so should most of you, if you're practically any sort of mathematician or mathematical physicist. No matter how abstract the things you're doing, algebraic geometry, moduli spaces, there are always sequences of numbers that you want to understand. And if you do calculate, then I highly recommend, and many of you know it, that you use the program called GP or PARI or PARI/GP, with small letters or big letters: the program developed originally by Henri Cohen, whom I'll mention a few times, and later collaborators like Karim Belabas and Bill Allombert. It's an absolutely wonderful program, and there's a fantastic book written by our colleague, who actually I don't see, he was going to come: Fernando Rodriguez Villegas wrote a beautiful book, Experimental Number Theory, where he explains many things in number theory, elementary and deep, and each one is illustrated by a program in PARI/GP. So if you don't know it, I highly recommend it: you can download it for Mac, Windows, Unix, even for a smartphone, anything you want; it's free and it's extremely easy to learn how to use. So as I explain methods here, if you enjoy them at all, try to work out an example numerically, because everything's about numerics and you can't do it in your head; I mean, nobody can do 20-digit calculations with a thousand numbers in their head.
So the other general comment is this: there's a numerical side to all of this, and you should take it seriously. There's nothing wrong with numerics; it's not the deepest mathematics, but mathematics doesn't work without numbers, or at least a lot of mathematics doesn't. So that's two general things. As I said, I want to present various topics during the course, and I have eight main ones, of which a couple are less central and I might not get to them at the end; some are central, and some I'll give and then come back to later with more detailed examples, depending on time and on how things evolve. So there are going to be eight main topics, but if you count you'll very soon see that I start the numbering with zero: the numbers will go from one to eight, but there's also a zero, and the zero is one reason, one of many reasons, why one should care about asymptotics at all. So asymptotics have an application. The rest will be about how to do asymptotics, but first I want to give an application which is very well known to many number theorists, but usually not to non-number theorists, and not even to all number theorists, and this is special values of Dirichlet series, or L-series, or zeta functions. So the generic word is Dirichlet series, and you can also call them L-series or L-functions or various kinds of zeta functions; all of those are more or less synonymous, maybe in slightly different contexts. So let me start here, before I get to the main topics one to eight, by making a comment. This is another thing that I find amusing, about people being afraid of one thing and not afraid of another, thinking one is easy and one is harder, but often getting wrong which is the easy one and which is the hard one. So Euler found, actually this was in 1749.
So 15 years after the previous discovery, which I'll come to in a second, Euler found that if you take what's now called, for reasons that escape me entirely since it's Euler's function, the Riemann zeta function, then in particular zeta of minus one is equal to minus one twelfth. So here's Euler's mysterious formula, and in his paper of 1749, which is in French and a lot of fun to read, he writes that my readers will probably think I've gone crazy. Because zeta of minus one (remember, zeta of s, if you're not a number theorist, but I think everyone has seen the Riemann zeta function even if they don't realize it's the Euler zeta function, is this sum of one over n to the s) is simply the sum of all the natural numbers. And so Euler says the reader will probably think I'm crazy, since there are three obvious things about this sum. The first is that it diverges, that it's infinite; that's pretty obvious. The second is that it's positive: we're adding up positive numbers. The third is that it's integral: we're adding up integers. But no, the answer is finite, it's negative, and it has a denominator. But then he said, I hope in the course of the paper I'll convince my readers that this is not nonsense, and he does. And some of you may have heard, again I can't see, is our boss here? Atish? I can't recognize people between the masks and my nearsightedness. I was going to say he was going to come but he's not here, but he is here. Hi! So Atish is a wonderful lecturer as well as being our big boss. He gave a course of lectures a couple of years ago, three years ago, I don't remember, on quantum field theory, a wonderful introduction. I wish I could remember everything he said, or even half of it, but he motivated the whole thing by showing how this crazy formula of Euler's has a meaning in quantum field theory and how it can lead you to understand things.
So if you can find notes of those lectures, and I'm sure they're recorded somewhere, they're really worth listening to again. So that's the most mysterious formula. But then there's Euler's earlier formula: the same Euler who, as the joke goes, discovered the Riemann zeta function, showed 15 years earlier that zeta of two is pi squared over six. I've told this story many times, probably sometimes here, and some of you have heard me tell it, but it bears repeating: how did Euler find this formula? Yes, I'll come back to that after. Well, okay, I'll come back to that after, but now I want to first tell the story. Anyway, I'm going back in time to this first paper. So the story of this formula: Euler was a young man, he was born in 1707, he was 28, so not super young, but this made him famous all over Europe. And the reason was that this was a famous problem. It was called the Basel problem; it had been posed 80 years earlier by a mathematician with a very Italian name, Pietro Mengoli, and it became attached to Basel through the Bernoullis there. He had seen that the sum of one over n squared, so one plus a quarter plus a ninth, converges; that's very easy, because you can compare it with another series that obviously converges. And you could easily approximate it just by eyeballing it and say it's about 1.6. If you were really careful and took a hundred terms and didn't make mistakes, you might get around 1.65. But nobody had any idea what the number was, and this problem had attracted the attention in particular of the Bernoullis, one of whom was Euler's teacher, so it was a very well-known problem. And Euler solved it, and the way he solved it is a wonderful illustration, first of all, of how really top mathematicians think and work: it's a mixture of intuition and knowing how to proceed.
But it also illustrates several of the topics of this course, because the first thing he did was to compute the number to 20 digits so that he could recognize it. But if you take this sum and just stop at capital N, then the error is easily seen to be about one over N. So if you just take a hundred terms, you get it within one percent; even if you don't make a mistake, and it's not easy when you're working by hand, you get around 1.64 plus or minus a hundredth. Euler invented the method, which I'll talk about today or next time, and which you've certainly all heard of and most of you know, called the Euler-Maclaurin summation formula; I think it was Euler originally. He invented it more or less for this purpose, and using that, using only something like 20 terms of the series, he could calculate 20 digits of this. And that's typical of the extrapolation methods I'll be talking about: you have a small amount of information, but you extrapolate more intelligently than just stopping and saying the last value is very close. So for Euler there were three steps. One: calculate numerically to 20 digits. That was not at all obvious and nobody knew how to do it. And so you get, I don't remember the number, it starts I think 1.6449, lots of digits. Then second, and I'll talk about that too, it'll be in my list of topics, topic 3A I think: recognize it. There are many mathematicians, probably including most of you, who could look at the number 1.6449 dot dot dot and say, well, that's a nice number, and would not say, oh, I know that number, that's pi squared over six. But Euler was not one of those. He said, oh, that's pi squared over six. And then, third, he proved it; that took him several years. I mean, the big thing was the discovery, to realize this is the answer to the Basel problem. So now coming back to what people think is easy and hard.
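A sketch of Euler's computation in modern dress (my illustration, with the first Bernoulli numbers hardcoded): Euler-Maclaurin turns the tail of the sum, the terms from N on, into 1/N + 1/(2N^2) plus correction terms B_{2k}/N^{2k+1}, so a handful of terms at N = 10 already gives about twelve digits of zeta(2):

```python
import math

def zeta2(N):
    # sum_{n<N} 1/n^2 plus the Euler-Maclaurin estimate of the tail:
    #   sum_{n>=N} 1/n^2 = 1/N + 1/(2N^2) + sum_k B_{2k} / N^{2k+1}
    B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}   # Bernoulli numbers B_2..B_8
    s = sum(1.0 / n**2 for n in range(1, N))
    s += 1.0 / N + 1.0 / (2 * N**2)
    for k, Bk in B.items():
        s += Bk / N ** (k + 1)
    return s

# N = 10: nine terms of the series plus four correction terms agree
# with pi^2/6 to about twelve digits, versus roughly two digits for
# the plain partial sum of ten terms.
```

Note the asymptotic flavor: the correction series in the B_{2k} diverges if carried on forever; one stops near the smallest term, which is exactly the divergent-series game of this course.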
I know many mathematicians who, when they see this formula for zeta of minus one, say, oh, Euler couldn't have known about that, because you need analytic continuation, and that requires complex function theory; but they don't know their history. Actually Euler looked at the function only on the real axis, but he did it completely correctly for all s, positive and negative. He found, by the way, the functional equation. He discovered it as a conjecture; he said, I'm sure it's true. And he checked numerically, for s equal to 1.5 and minus 0.5, that the relation he predicted, and which he had proved for integers, was true. And he said, someday somebody will prove this; it will be important. And that Riemann did prove. So people now think: ah, to get this mysterious formula, I start with pi squared over six, we all know that Euler proved that a long time ago; then I use the functional equation; then I deduce this. That's completely backwards; it's triply backwards. Of those three statements, this one, that one, and the functional equation, this one is by far the easiest: you can prove it in two lines. That one is much harder. This one, by the way, Euler did just by hand. You have to explain, of course, what it means, and (I'd rather not answer the question here, I can tell you after) he gave two completely different approaches to making sense of it. And both of them led to an answer, not just for minus one but also for minus two, minus three; he calculated up to minus 40 and found the Bernoulli numbers, and both methods gave the same numbers. And he said, I hope you're convinced that even though what we're doing is not standard analysis of 1749, it is in fact correct mathematics. And later, with complex analysis, it made sense. But this is by far the easiest formula; that one is considerably harder and took Euler several years to prove.
And the functional equation: Euler discovered it, tried very hard to prove it, stated it as a conjecture with a numerical example, and it stayed open. It was proved by Riemann in 1859, so it took 110 years. So the functional equation is by far the hardest of the three, and deducing the really easy statement from the much harder one via the even harder functional equation is not good thinking. But it's because we all learn this formula first, since a high school student can understand it, and then we learn about the functional equation, and then we learn about the negative values. So that's a side comment. But why do I say this is connected to asymptotics? Because of a theorem; I'll give the proof later, it's very, very easy. Let L of s be any Dirichlet series. So that means it has an expansion, the sum over n from one to infinity of c_n over n to the s, where c_n grows at most like some power of n, so this converges in some half-plane; it converges somewhere, okay? And you want to know: does it have an analytic continuation, and does it have interesting values at negative integers? I'm talking about special values. And by the way, this can be more general: it can be a generalized Dirichlet series, which means the sum of c_n over lambda_n to the s, where the lambda_n have to go to infinity at least like a power of n, and the c_n satisfy some growth condition to make it converge somewhere. But let's not worry; it's the same method. Okay, so what do you do? You define an associated power series. Well, I'll call it a power series: you take the same coefficients and you form the sum of c_n x to the n. But it's convenient that x should be less than one, and so it's convenient to write x as e to the minus t. So let me call it e to the minus t; then it doesn't look like a power series, but it is a power series in e to the minus t.
Okay, this of course converges for t positive, because the c_n grow at most polynomially and e to the minus n t decays exponentially, so this will always converge absolutely. And now the statement; unfortunately I'm too short to write at the top, but I can try. This is a theorem, an easy theorem. Consider phi of t, which is defined for all positive t, you make a graph, and let's say that it's smooth, remember, C-infinity, at the origin. It'll be analytic for t strictly positive because of the locally uniform absolute convergence, but at zero it need not be analytic. If phi of t has an asymptotic expansion a_0 plus a_1 t plus a_2 t squared and so on as t goes to zero from above, not necessarily convergent, it can diverge, so phi is simply a smooth function, then two statements. One: L of s continues analytically to all of C. So however the function looks originally, of course you have to know this expansion, and you may or may not, but if you know it, then you get the continuation for free. Let me keep the definition of L of s here. And secondly, the values at negative integers, including zero in the French sense: if n is a non-negative integer, then L of minus n is, up to a sign, n factorial times a_n; precisely, L of minus n equals minus one to the n times n factorial times a_n, for all n. That's simply always true, and it's a very simple statement. And that already tells you one reason to care, especially if you're a number theorist or an algebraic geometer: in modern number theory and algebraic geometry everything is about zeta functions and their special values. That's the key to understanding practically everything. You can't always evaluate them, but when you can, you're happy, and this is a case when you can, if all these nice things happen.
More generally, let me give the more general statement in a second: if phi of t has the same expansion but also a polar term a_{-1} over t, then one-prime is that L of s has a simple pole at s equals one whose residue is just this number a_{-1}, and what's left is an entire function; it has no poles anywhere else. So now L of s extends meromorphically to the entire complex plane, it has a unique pole, the pole is at s equals one, it's simple, and its residue is a_{-1}; that's four statements right there. And two-prime is simply equal to two: you don't have to subtract off the polar term; the values of L at non-positive integers are given by the same formula. So as an example, let's come back to zeta of minus one. If zeta of s is the sum of one over n to the s, then phi of t is the sum of one times e to the minus n t, which is a geometric series; you can sum it immediately to get one over e to the t minus one, and of course we know the expansion: it's one over t, minus a half, plus t over twelve, minus dot dot dot, and you get Bernoulli numbers. So if you apply this, you immediately see that zeta of s has an analytic continuation to the whole plane except for a simple pole of residue one at s equals one, and the values at negative integers, such as zeta of minus one, are essentially Bernoulli numbers; here it would be B_2, which is one sixth, with an extra factor of minus one half. So it's essentially the Bernoulli numbers, which are defined by the expansion of that function around t equals zero. This is a very, very simple thing to prove, but it gives you one motivation to care about asymptotics: if you know the asymptotics of such series, and for this particular series you're lucky and you do, then you get for free special values of Dirichlet series, which is, as I say, a key theme in number theory.
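The zeta example can be checked by machine. In this sketch (my code, with the Bernoulli numbers computed from the usual recurrence), the coefficients a_n of the smooth part of phi(t) = 1/(e^t - 1) = 1/t + sum a_n t^n are a_n = B_{n+1}/(n+1)!, and the theorem L(-n) = (-1)^n n! a_n then gives zeta(-n) = (-1)^n B_{n+1}/(n+1):

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(m):
    # B_0, ..., B_m from the recurrence sum_{j<=n} C(n+1, j) B_j = 0 for n >= 1
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

B = bernoulli(6)

def zeta_neg(n):
    # phi(t) = 1/(e^t - 1) = 1/t + sum_{m>=0} a_m t^m, a_m = B_{m+1}/(m+1)!
    # theorem: zeta(-n) = (-1)^n * n! * a_n = (-1)^n * B_{n+1}/(n+1)
    a_n = B[n + 1] / factorial(n + 1)
    return (-1) ** n * factorial(n) * a_n

# zeta(0) = -1/2, zeta(-1) = -1/12, zeta(-2) = 0, zeta(-3) = 1/120
```

Exact rational arithmetic via Fraction keeps the special values as honest rational numbers rather than floats.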
Okay, so that was topic zero. At this rate I'll need an infinite amount of time to get through eight topics, but there won't be any more unnumbered topics, I promise; no more fake numberings. So let me say some of the things, and I should also keep an eye on the clock. When did I start? Four, right? So it's been half an hour. I'll try to make a break after around three quarters of an hour, but if I'm in the middle of a thought I might be a little later, and if I forget, please remind me. So the first topic; I have to look at the order because I kept changing my mind. This is something that's extremely useful to know, and it comes up all the time: asymptotics of functions of a very special form. Let me just fix my notation. So let's say g of x is the sum over n from one to infinity of f of n x. Most functions that you see in life are not given by a closed form like sine x over e to the x minus three; they're given by an infinite sum. But let's say that in the infinite sum there is a fixed function f, so f doesn't change, but you evaluate it at x, 2x, 3x, and so on. So that's an extremely frequent sort of infinite series: the nth term is simply the first term with the variable replaced by n times that variable. So that's a big class, and it's very useful.
So this has many, many applications, so many that I probably don't want to start listing them today; when I come to this topic, which will be very soon, maybe next time, since it's topic one, I'll give some of them. But maybe I'll give one now: this thing is very connected with the topic zero that I just erased, because for instance if f of t is e to the minus t, then g of x is exactly the sum of e to the minus n x that we just used, which is one over e to the x minus one; it's the geometric sum that we used to find the analytic continuation of the Riemann zeta function, and in that case the general formula for the asymptotics basically just uses Bernoulli numbers. So I might still come back to this today, at the end, or at least begin with it, and the beginning statement is a rough statement about the asymptotics of g, here as x goes to zero. So what are the assumptions on f? f should be reasonably small at infinity; let's say it's O of one over x to the c with c bigger than one, just as an example, so that the sum will at least converge, just to get the ball rolling. But often f is very small at infinity, like e to the minus x; that's not really the problem. The problem is at the origin: f might have some smooth behavior, or it might even have a pole; the method has many variants, and you can allow logarithmic singularities, all kinds of singularities at the origin. And what we're doing, this is f, and of course I want this as x goes to zero: since x is small, you're taking the values f of x, f of 2x, f of 3x, so you're taking a Riemann sum, and the sum will converge because f is small enough that the integral converges.
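The Riemann-sum picture, g(x) roughly (1/x) times the integral of f, can be seen numerically in the model case f(t) = e^{-t}, where the integral is 1 and the full expansion of g(x) = sum of e^{-nx} = 1/(e^x - 1) is 1/x - 1/2 + x/12 - ... (a sketch I wrote to illustrate, not code from the lecture):

```python
import math

def g(x, N=100000):
    # g(x) = sum_{n>=1} f(n x) with f(t) = exp(-t); the tail past N is negligible
    return sum(math.exp(-n * x) for n in range(1, N + 1))

x = 0.01
leading = 1 / x                    # I(f)/x with I(f) = integral_0^inf e^{-t} dt = 1
full = 1 / x - 0.5 + x / 12        # next terms of the Euler-Maclaurin expansion

# g(0.01) is about 99.5008: the leading term 100 is off by about 0.5,
# while the three-term expansion matches to roughly 1e-9.
```

The same one-line experiment is a good sanity check for the shifted variants mentioned below, where the leading term stays I(f)/x but the corrections change.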
And so this is: the width of these strips is x, which is going to zero, and so you can see in your head the rough asymptotics. Just by Riemann's definition of the integral (it's a Riemann sum), it's roughly I/x, where I is a functional: I(f) is simply the integral from 0 to infinity of f(t) dt. So indeed that's true, but you would like to have the entire asymptotics, and it's very easy and I'll explain it. As I said, there are many variants, like when f has other singularities. Basically the answer is given by the same formula that I already mentioned (except that there it was for a convergent sum and here it's typically not): by the Euler-Maclaurin formula, the Euler-Maclaurin summation formula, in a slightly non-standard form. It's not very different from the usual one, but it's not quite the way one usually sees it. So it's not a very deep thing, but as I said there are variants. One of the variants is: what if f has a singularity at the origin, as already mentioned, like 1/sqrt(x) or log(x)/sqrt(x)? Another variant is: what if you don't sum over f(nx) but over f((n + 1/3)x), a shifted sum? Then the leading asymptotics will be the same (the Riemann integral is the same, so the leading term will still be I/x), but the remaining terms will be different. So there are small things like that, and this has many applications in both physics and mathematics. Let me mention one briefly, an application in mathematical physics, or just in physics (in this case it's basically just ordinary physics), but I won't give any details now; I'll give them when I come to it. This is the thing called the Casimir effect, which the physicists in the audience know. I'm not one of them, and I only know a kind of mathematical statement. It was discovered by the Dutch physicist Casimir, who discovered many beautiful things. The problem was asked, by the way, by Eberhard Zeidler, who was a good friend; he died a few years ago. He was the founder of the second Max Planck Institute in mathematics. The first one was founded by my teacher Hirzebruch in Bonn, and I was there from the beginning and I'm still there; the second one was founded about 20 years later, it's called the Max Planck Institute for Mathematics in the Sciences, and it's in Leipzig rather than in Bonn. The founding director was Zeidler, who was a wonderful mathematician but also a very good mathematical physicist, and he was working on a six-volume introduction to quantum field theory for mathematicians; but he wrote in so much detail that each volume is 1,000 pages, and he only finished three completely, and the fourth was half finished when he died. So he asked me once. He said: you know, the physicists talk about a function which (well, I'll leave in the normalization, because why not; there's a minus 2 pi in front which doesn't matter at all for what I'm going to say) is the sum over all triples of integers (l, m, n) of the square root of l^2 + m^2 + lambda^2 n^2. So you have a lattice, and you take a quadratic form l^2 + m^2, and the last variable has a coefficient, which traditionally would be lambda^2, and you take the square root of this expression. It's obviously horribly divergent: there are infinitely many terms and they all go to infinity, so it's not a convergent series. So the question was, first, how to make sense of it, and secondly, how to compute it. Computing a function, to me, can have two meanings, and here they both work. First numerically: you give me a value of lambda, let's say 1 or 1.2, and you ask me to compute it. After all, this series is divergent; you can do some trick, dimensional regularization, to make it converge, but typically very slowly, and you'll still only get two digits. But I always want, you know, 20, 50, 100 digits; I want essentially unlimited precision with a
reasonable amount of work. So first of all numerically, with lambda given; and secondly, in tune with the topic of this course, asymptotically. And there are two ways you can take the asymptotics: lambda is here a positive number, so lambda might tend to zero or lambda might go to infinity, and in both cases you can ask for the asymptotics. I forget which one the physicists need when their plates come very close, whether it's small or large lambda. But anyway, he said: what you read in physics textbooks, even for a physicist, is hair-raising, it's so incorrect. I mean, it's not just that the things diverge; every single statement, in every detail, is wrong and makes no sense. But then they do their thing, and at the end they get an answer that every physicist learns, in every textbook. So he asked me once, when we were at a Max Planck meeting: can you look at this and make sense of it? In the end I wrote a long appendix for his book, 25 pages, where I explain some of the things I'll explain in this course. You can find it on my web page, but it's better to read it after the course, or you'll be bored. And the answer is: you can make sense of it, as I'll explain later. But I gave a little table: here's lambda and f(lambda). I did it for 0.1, 0.5, 1 and 10, and I computed it with this method. I'm not going to write it all out, but I have the numbers right here; one of them is 220.762... But each of these numbers I give to 50 digits. So that's a proof that the method is working. Except you might say: how do I know the 50 digits are right? Anybody can have their computer spit out 50 digits between zero and nine. Like the wonderful repartee (this is not part of the course): my friend Hendrik Lenstra once gave a popular lecture, and afterwards somebody came up: "Professor Lenstra, you're a mathematician; how many digits of pi do you know?" He looked at him and said, "All of them." "What?" "Yes," said Lenstra, "zero, one, two, three, four, five, six, seven, eight, nine." So it's a little like that: anybody can make the computer print out 50 digits, but how do we know they're the right ones? In this method, which I hope to show later, there's a free parameter t, and when you compute, you can set that t to be a half, one, two; you get a very rapidly convergent infinite sum, you get 50 digits, and the terms are completely different when you do it with different values of t, but the sum, to 50 digits, is the same: all 50 digits agree. So these numbers are essentially guaranteed to be correct. That's the question of computing it numerically, and it's kind of amazing: this thing doesn't even make sense, and while you could imagine roughly subtracting some leading term and making it converge very, very slowly, how do you compute it to high precision? And then the other part: as lambda tends to zero, f(lambda) is (2c/3) lambda^(-1) plus (pi/3) lambda, and that to all orders; the method actually shows the error is exponentially small as lambda tends to zero, in other words the error is less than any power of lambda, no matter how big. And as lambda tends to infinity, it's pi^2/(45 lambda^3) plus some other constant c', again to all orders, actually with an exponentially small error. Both c and c' are given explicitly, and one of them is the one you find in the physics textbooks. So here you have a problem that at first doesn't even make sense, and you can answer it to extremely high precision. This is the kind of thing I feel should be in the arsenal of every mathematician: you never know when you'll have a problem like that, some series that maybe doesn't even make sense, or converges extremely badly, and you want to make sense of it, and then there are tricks that will help you do it efficiently. Okay, so I won't go into it; so that's one application that I just
described. Well, it's in the same paper; it's actually not a direct application, because you can see this is not quite of that form, but it's very much in the same ballpark. When I get to it you'll see that the two belong together, and in that appendix on my web page both of them are explained in the same context. Okay, so then the second main topic; maybe I'll give the first three, which are the most important, and then make a brief pause, or at least not talk for five minutes. The second is Cauchy's formula, which we all know, and its applications. And the main application (well, these are kind of synonymous words, but in different contexts) is what's called, ever since Hardy and Ramanujan, and later Hardy and Littlewood, and Rademacher, and many other people, the circle method. So the basic method: maybe I'll take another board to write what you do. I think I didn't quite finish a thought from the philosophical part at the beginning. There were several philosophical points, but one was that a power series can be both a function and a formal power series, and when it's a formal power series, that's just equivalent to a sequence of coefficients; there's no difference. And the wonderful idea, one of the most important ideas in all of mathematics, was found by Euler when he invented the partition function: if, conversely, you have a sequence of numbers (in his case the partition numbers) and you want to understand them, you form the generating series, the sum of a_n x^n. It may converge, it may not (in his case it does, unless it doesn't), but you study that function, and the properties of that function will tell you the asymptotics of the a_n, and vice versa. Now, it may not work; no method always works; but it should be your absolute research reflex. So some good advice if you're a younger mathematician (I'm an older one, but then you probably know it already): if you encounter a sequence of numbers anywhere in mathematics, counting, you know, the number of polytopes, counting whatever they're counting, integers, say, or just some numbers, you always write down the generating series and spend at least an hour thinking about it. Can you see anything? It may not work; one time in five maybe it won't, but four times in five you'll make a lot of progress. You should always do it; it costs nothing; you just think about the generating series. So sequences and generating functions are the same. But now I subtly replace x by z, which, as we all know, is a complex number (whereas x is probably a real number, though you're allowed to have complex x and real z). If you do this, then of course we all know: if the series is convergent in a neighborhood of the closed disk |z| <= R (I could take any closed curve, but I'm going to take a circle of radius R), then Cauchy's formula tells us that a_n = (1/(2 pi i)) times the integral of f(z)/z^(n+1) around the circle |z| = R. Why? If you simply divide f(z) by z^(n+1), then all of the terms except the nth one will not be 1/z, and those powers z^i with i not equal to -1 have an antiderivative which is again a power of z, so when you go around a circle you just get zero; only the 1/z term gives a logarithm, which gives you 2 pi i. So you have the formula that we all know very well. But now if I go to polar coordinates, I get an equally obvious formula: a_n = (1/(2 pi R^n)) times the integral from -pi to pi of f(R e^(i theta)) e^(-i n theta) d theta. And that's equally obvious for the same reason: if you substitute the series into this integral, then every term a_m z^m with m different from n will give the power e^(i(m-n) theta), with m - n not
zero, and the integral of a non-trivial power of e^(i theta) around the circle is zero; so only the nth term contributes, and that carries the factor R^n, which is why I divide by it. So both versions are completely familiar, and you don't even need complex analysis to see them: you just write down the series and integrate term by term. Now, the idea is that you can choose this R; nobody tells you what R has to be. There are two main cases. Case one: f is entire (I remind you, entire means holomorphic in the entire complex plane). Then you typically let R go to infinity, because if f is entire, the a_n go to zero more rapidly than any exponential (otherwise there'd be some z where the series diverges). So the a_n are very small, and if you take R reasonably small you learn nothing, because the series converges too fast; you have to take a circle of large radius. So let R be well chosen; that's the whole art of the thing, and that's why it's the circle method rather than just Cauchy's formula. The formula is true for any R, but now you choose an intelligent R. And what typically happens is this: when you think of the integrand as a function of theta, it'll have a maximum somewhere, it'll be very big in the neighborhood of some theta_0, which is typically 0; you look near that point and approximate by a Gaussian, and for the Gaussian you know the integral; then you approximate by the Gaussian times a power series in theta, and so on, and you get a whole power series expansion. That's called the method of stationary phase. You see, if you do it for some other R, not the best one, you won't have this picture: instead there'll be an exponential with a non-zero imaginary linear term, the integrand will oscillate, and there won't be a unique maximum that you can estimate. Okay. The other case is that f has finite radius of convergence, which you can then assume equal to one. That's the typical case: the a_n might grow like at most a power of n, or go to zero like a power of n, and then you let R tend to one from below, because now we have a function that only converges strictly inside the unit circle. In the circle method there are typically infinitely many singularities all around the circle, at points of rational angle, but usually the big singularity is at 1; let's assume it is. Then you take a circle very, very near that point; if you try to make your life easier by staying well inside, you only get the first few coefficients, which is boring. So here I typically take R = e^(-epsilon), and epsilon you choose. In both cases we choose R to depend on n: each time you have an n, you choose the right R, or here the right epsilon. I was going to give two examples of that, but I've already used up my time, so I'll just state them. Example one, which is a lot of fun and I was hoping to do today, is Stirling's formula. Of course there are many ways to get Stirling's formula, and we've all seen some of them, but this is not the usual integral that you approximate; it's a different way. Example two is my favorite; this was my discovery, and there's a trap: it's a proof of the amazing theorem, which I discovered this way, that e is equal to the square root of 2 pi. You might be very surprised that you've never seen that formula. Indeed, it's not true: e is 2.72... and the other is 2.51...; but if you apply the method blindly, just following the rules, you will get e = sqrt(2 pi). So there's a small trap, and I want to give those two examples to show how the method works in practice. Here I can say what the function is. Stirling's formula, we all know it: n! is roughly n^n e^(-n) times the square root of 2 pi n, and if you want, times (1 + 1/(12n) + 1/(288 n^2) + ...). Now, using the extrapolation method that's going to be topic three: if you just take PARI or another program, compute n! for n going from, let's say, 1000 to 1030 (you only need thirty values, but big ones), divide by this main term, and use that method, you get the first twenty coefficients of this series numerically; and in this case you can even prove them, because of course there's a recursion, it's easy. So here you don't take n! itself; we're going to get an asymptotic formula for 1/n!, and of course then you can invert it. So I take the sum of z^n/n!, which is of course a well-known function: it's simply e^z. That's entire, so you take R very large; it turns out that for the nth coefficient you take R exactly equal to n. If you want an instructive exercise sometime between now and Thursday, when I'll do it, take a few minutes to work out how you would get Stirling's formula, and you can actually get as many terms of the expansion as you want, just by applying Cauchy's method to this with R = n. For the other example you take an even easier function. This is a question that probably nobody except me would ever ask: what are the asymptotics of the sequence 1? So the sequence is 1, 1, 1, 1, ...; the asymptotics are pretty easy, it's just 1. But let's say we try to do it using this method; it's supposed to work. The generating series is of course 1/(1 - z), and here you take R equal to, I think, if I remember correctly, n/(n+1), so it tends to one. And it works beautifully, but if you do it naively, what you'll get is a constant times a whole power series in 1/n, and the constant term is e/sqrt(2 pi), but
it's supposed to be one, so something is wrong. It's just a small warning: one should apply the method judiciously and not completely blindly. Okay, I've gone well over; maybe I'll make a two or three minute break, just stop talking and erase the board, and then I hope I can quickly list the others: I have eight topics and I've listed two, but all the philosophy was at the beginning, so I hope it will work. We're supposed to go until 5:30, which is another 35 minutes. Or we can use the time for a couple of quick questions. Yeah. Well, I'm not explaining these methods now; this is a survey of what I'll do in the course. Here I told you the answer: R is n, and there R is n/(n+1). But the general method I did explain: you look at the integrand, which is a function of both R and theta, and typically the big bump will be at theta = 0. I mean typically; I'm not saying it has to be; but in most applications, as things go to infinity, it'll be centered there. You just expand: typically f(R e^(i theta)) will be f(R), a constant which of course we know, times e to the something. Taking logarithms: log f(R e^(i theta)) - log f(R) = a(R) i theta - b(R) theta^2 + ..., where a(R) comes from the derivative of f, and because of the i the quadratic term is really an (i theta)^2, so b(R) had better come out with the right sign or I'm in big trouble. Now this a(R) depends on R, and it typically goes to infinity as R tends to infinity in the entire case, or as R tends to one in the other case. And now (this was a very good question, and I'll try to answer this much): a(R) is typically an increasing function of R, so you choose R such that a(R) is exactly equal to n, because remember I'm multiplying by e^(-i n theta), and then there's your stationary phase: there's no oscillation at theta = 0. Otherwise, if the coefficient of i theta doesn't vanish, the integrand oscillates and you can't estimate it at all. But if you make a(R) equal to n, then that linear term goes away (the later terms may still be influenced by e^(-i n theta), but that's manageable), and what you're left with, to first approximation, is a constant times a Gaussian e^(-b(R) theta^2), whose integral is the square root of pi over that constant. So that's how it works, and that's how you choose R. So actually I urge you (in fact I'm glad that I didn't get around to doing it today, although I prepared it): those of you who like to play with a fun problem, look at these two cases. You don't have to get two or three terms, just the very beginning. Try to see how, if you apply this here, you must choose R to be n, and you will get at least the beginning of Stirling's formula, at least that much; and if you're brave you can get the next term, or the next two or three terms. And similarly here, see that if you just apply it without thinking, you will get that e is the square root of 2 pi; and then the question is: find the mistake. Why isn't that a legitimate proof? Because of course e is not the square root of 2 pi. So thank you, great question; I first tried to avoid it, but in fact I think I answered it. You really see it if you look at one of these examples, and you can take lots of other functions of your own: take some simple sequence where you happen to know both the a_n (like here n!, or the sequence 1) and the function in closed form; then you can apply the method, you know what you're supposed to get, and you can see how it works. It's much better to first take such examples, and of course later you use it for examples where you don't know the answer. You also had a question about Euler, but as I said, it's a little hard to
explain. I can give you wonderful advice: if you read French, which you certainly do, look at this paper of Euler's on interpolation; it's absolutely wonderful. I think it's my favorite paper in all of mathematics. Because Euler, unlike Gauss (who tried to keep everything secret: he didn't want the reader to see how the great Gauss came up with his ideas, so he would wait years and then publish just the theorems and the proofs, in a polished style), loved to explain everything. So he will say: you might think this is the right way; and then he takes five pages where he says, I thought so too, and I did this; and he gives a numerical example, and in the end it doesn't work, like here; and then he says, you see, that's not how you should do it, here's a better way; and maybe that's another paper. That's one reason why his collected works took, I think, over a hundred years to finish: it's 80 huge volumes. He wrote an immense amount, but he explains everything. And the result is that until about 1900, essentially all good mathematicians, all the famous ones you've heard of, learned their mathematics by reading Euler, not from teachers. They all said: read Euler, he is the master of us all. That was the statement. Because he really explains why things go wrong, and why you might think the method is crazy, and why it nevertheless works; his explanations are just wonderful, and then he shows you how it works. Of course it's very naive, because it's 100 years before we knew complex function theory, and he just worked with real variables; but it still works very well. Any other question? Otherwise I'll go on. "Don, hi, can you hear me?" Oh, you're a voice from the heavens. "Yes, I'm from SISSA, ciao Don." Yeah, so what's your question?
"The question is: I think at the bottom of all this method there is just analytic continuation, right?" No, absolutely not, and, Giuseppe, I don't want to get into anything philosophical now. First of all, I've only given two of the eight topics; let me give all eight before you say that they have a common theme. But I don't think it's about analytic continuation; that's one of many, many themes, maybe one of twenty. So basically I don't think that's the right way to look at it. Of course it's very important, and this part is indeed about analytic continuation, but many of our functions will be divergent, and then you can't use Cauchy at all, because the a_n make sense but the power series diverges. (Yes, I'll come to that in a second.) It's not about analytic continuation; that's certainly a theme, but it's definitely not what's at the bottom of everything. Is that enough for now? I have to call you tonight or tomorrow anyway about other subjects, so then we can discuss at leisure. Thanks for the question; your questions are always good, but here I think you'll see that this is only one aspect of what should be this whole course. So let me continue. I didn't really make a pause, but I let other people speak, so at least you got a pause from my voice. So, three, the one that's in the announcement, where I gave two of the examples: how to recognize the asymptotics of a given sequence of numbers. This is super, super useful in real life; I use it, I would say, once a week, and have for many years, for problems from every field of mathematics that you can imagine. I already mentioned the setup: you have a machine that produces numbers a_1, a_2, a_3, up to, say, a_1000, and they're growing roughly, and you want to see how they grow: do they have a limit, or do they grow like some power? So typically you know a_n numerically, to high precision; maybe it's even exact, an algebraic number, a rational number, an integer. You know a_n for, let's say, n = 1, 2, 3 up to 1000, maybe 2000, but not up to trillions, and certainly not up to 10 to the one million. And because you eyeball the graph, you expect that a_n looks like: maybe a power of n factorial; whether or not that's there, maybe an exponentially big term lambda^n; whether or not that's there, possibly a pure power term n^alpha; and then some power series C_0 + C_1/n + C_2/n^2 + ... So you think that might be the case, and the problem is: using just, let's say, 1000 coefficients, find lambda, alpha, beta, and C_0 up to, say, C_20, to high precision; really find them, so you can say: this is the exact value. The method is very easy. It's a method I invented years ago, but it's certainly not new: it's mathematically equivalent to what's called Lagrange extrapolation (so that's Lagrange), it's equivalent to something physicists use called, I think, Richardson extrapolation, and it's equivalent to other things. But those are formulated, at least when I've looked them up in Wikipedia or wherever, in a way that to me is harder to remember and harder to understand. The way I look at it I'll explain when I get to this topic, which will be very soon; this is the most fun part of the course. It's a very simple method, and I can already tell you the mnemonic (some of the people here, including people who were in my course last year, have seen it), though it will make no sense until I explain it: you multiply by n^8 (it doesn't have to be eight; you might multiply by n^10) and then you do something. So you take the sequence, you multiply by some moderate power of n, and you do something. It's a lot of fun, and I'll show lots of examples, not today, because that would take too much time; I could easily spend the entire six weeks giving you examples
of this. In my own work it comes up, you know, once a week, and many of my friends have had a sequence and I told them the answer, so I have beautiful examples from other people as well; I have endless examples. I actually made a list, and maybe a couple of them I will give when I get there, because all of them are fun. So the first one; it's listed on the web page, on the announcement, and on the poster. You take the sum that Kontsevich wrote down, the sum over all n of the products (1 - q)(1 - q^2)...(1 - q^n), what are called Pochhammer symbols, the product of (1 - q^i) for i from 1 to n. As a series in q it's meaningless: it diverges for every q except when q is a root of unity. But it's a perfectly good formal power series in 1 - q, because if q is very near to one, if q is 1 - epsilon, then the nth term is divisible by epsilon^n, so the series converges, and we can write this sum as the sum over d of xi_d (1 - q)^d; for some historical reason the coefficients are called xi_d. You can easily compute the first few values, and they start 1, 1, 2, 5, 15, 53 (I had them written down on a piece of paper that I just lost). And there's a fun story about this that I'll tell briefly. You know, when you see a sequence of numbers and you want to recognize it, to know how it grows, there are two methods (I've made this joke before): one is to look it up in the On-Line Encyclopedia of Integer Sequences of Neil Sloane, and the other is to ask me. Both are good ways: if you ask me, I'll think about it, and a week later I'll usually be able to say, I think your series grows like this, even if I don't recognize it. So this series, which I'd found in something to do with quantum topology (it was actually Maxim Kontsevich who had written these things down and looked at them with me), I typed into the online encyclopedia, and it said: I'm sorry, I don't know your sequence. So I thought, that's normal, it's a new sequence. Then two nights later I got an email from Neil Sloane, whom I happen to know, and he wrote: Dear Don, sometimes when I can't sleep at night, I take a cup of hot chocolate, and that helps me fall asleep; but sometimes I don't take a cup of hot chocolate, and instead I go to my computer and look at who has been searching my online sequences. Last night I looked, and I saw you had asked about this sequence. He was happy because he knew me, and he said: what you obviously didn't know is that the program didn't allow two initial ones (which, I told him, is silly: if you put in the Fibonacci numbers, it would say it never heard of them; he later changed it so that it strips initial ones and still finds the sequence). So he said: I took away the first one of your series, and it found it. But to my great surprise, the definition it found was not my definition; it was something wildly different, coming from knot invariants, and I ended up writing a paper, several pages, showing that the series defined there and the series I'd found were the same. But in the meantime, the person in knot theory, Stoimenow, had needed the asymptotics of this, and he had said that xi_d, which it was important for him to bound, seemed numerically, from the first 30 values (I did the same in a simpler form, with my 200 values), to grow like maybe d factorial over 1.5^d. With the asymptotic method I could say that this 1.5 is actually 1.6449..., which I recognized, because I knew from Euler that it's zeta(2) = pi^2/6. So it turned out to be d factorial times the square root of d over (pi^2/6)^d, times a power series whose first several coefficients I gave to many digits: a constant C_0, plus another constant over d, and so on; I found the constants to high precision. And C_0 I'll write out completely (I've lost track of where I am): 0.295952... whatever number of digits that is. And later in the paper I
found a closed formula, by theory, and it was this number: 12 root 3 over pi to the pi times e to the pi^2/12. When I calculated that number numerically, every single digit was correct. So the asymptotic method works. But notice this is a different problem: how to recognize not a sequence of numbers but a single number. In this case I didn't actually do it that way, though now I think I would know how: I had this number to lots of digits, and I should have been able to say, aha, like Euler, I know that number. After all, Euler only used his head; I've got high-speed computers and I can try 100,000 combinations. I didn't in this case, but in many cases I have, and I'll give you other examples. So that's a fun side remark: when I show people numbers, they say, that's magic, only you could do it. But there's no magic; there are a couple of tricks, and it's actually quite easy, if you know roughly where to look, to recognize the number. If you have any idea which constants might be involved, then the right combination you can find by computer; of course, if you have no idea, you'll never find it. So that was the first example. I had six examples prepared for today, just to list them without details, but I won't finish them all. I have several beautiful examples that come from combinatorial problems and from algebraic geometry. The second one is the number of lines on a hypersurface in projective n-space: if the degree is exactly 2n - 3, then it contains a finite number of lines, which you can compute by some kind of Schubert calculus. That number grows very quickly, and there's a formula, but it's a very horrible formula, and the question is to find its asymptotics. I found them first with the asymptotic method; in that case I then had to prove it, and everything was correct. But there are many, many other examples, including the two that I put on the website.
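The mnemonic from topic three, "multiply by a moderate power of n and then do something", can be sketched if one assumes that the unnamed "something" is taking k-th finite differences (the Richardson-style variant; the lecture has not spelled the step out yet, so treat this as a guess at one standard form). If a_n = C_0 + C_1/n + ..., then n^k * a_n is a degree-k polynomial in n plus O(1/n), and the k-th difference operator kills everything except k! * C_0:

```python
import math

def extrapolate_limit(a, n0=10, k=8):
    """Estimate C_0 for a sequence a(n) ~ C_0 + C_1/n + C_2/n^2 + ...

    Multiply by n^k, apply the k-th finite difference, divide by k!.
    """
    s = sum((-1) ** (k - i) * math.comb(k, i) * (n0 + i) ** k * a(n0 + i)
            for i in range(k + 1))
    return s / math.factorial(k)

# Toy example: a_n = (1 + 1/n)^n tends to e, but slowly, like e*(1 - 1/(2n) + ...).
approx = extrapolate_limit(lambda n: (1 + 1 / n) ** n)
print(approx, math.e)
```

With only the nine values a_10 through a_18 this already beats the raw a_1000 (which is still off by about 1.4e-3) by many digits.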
Including this rather amusing example, the sum of (j + 4/3) over j to the power, I think it was, −4/3. A friend of mine asked me about a year ago: this series had come up, and he had a guess that maybe it was something specific. He had calculated it to five decimal digits using 30,000 terms, which took a day, and his guess matched to four or five digits; he asked, is that the exact answer? With the asymptotic method, using not the 30,000 terms he used but, I think, 250 terms, I could get this number to 300 digits, and it wasn't the number he guessed. There was no reason it should have been; it was very close, but that was just an accident. So this comes up all the time, and there are many fun examples, but I'll do that when I get to it. This is in a sense the high point of the course, because it's the most fun and the most useful in everyday life: you have a sequence of numbers that comes up, it might be in the theory of moduli spaces, some intersection numbers, some volumes, the things Mirzakhani studied, or Siegel–Veech volumes, and you want to know how they grow as a function of n. You first find the asymptotics numerically to high precision, and then sometimes you can recognize the constants. So those are the three sample topics; the others are much smaller, but they're fun, and they all fit the theme of asymptotics, so I hope to get to all of them, or at least some of them, and I'll describe them more briefly. The next is called sumalt, and it's very easy to understand: it's incredibly fast, very accurate, it gives a huge number of digits, and, I don't know if this is a word, memory-light: if you program it, no matter how far you go, it uses O(1) memory. You only have to store about eight numbers; you don't keep anything as you go, you throw terms away, so you compute on the fly. It's a method for numerically computing alternating sums, that means sums of real numbers where the terms go, maybe very slowly, to zero but alternate in sign. We all learned as undergraduates that this means the series converges to something, and you want to evaluate it numerically, but, as I said, to very high precision. This is joint work of five people. The first is Euler. The second is a Dutch mathematician I certainly never met called van Wijngaarden. The third is Henri Cohen, whom I mentioned before as the developer of PARI. The fourth is Fernando Rodriguez Villegas, who is here at ICTP, and the last is me. It's of course not joint work in the usual sense, since some of us lived in different epochs. Euler invented the method, but in a form that you could never implement, and of course there were no computers anyway. Van Wijngaarden, sometime in the 20th century, had a version that you could implement; it's very inefficient, but it does work. Then Henri Cohen found a tremendous improvement, and when he was visiting Villegas, or Villegas was visiting him, he showed him, and Villegas said, oh, but you can improve that even more. And then, since they're both good friends of mine, they showed me, and I found another trick. So there were four successive improvements of what was basically Euler's idea, and we have a little paper, it's on my website, and it's now programmed in PARI. It's a lot of fun and it works very, very fast; if I don't get to it, you can just look at the joint paper with Cohen and Villegas on my website and see how it works, with lots of examples. It's an incredible method: the program in PARI is one line, and it has absolutely nothing to do with what the function is. You just have some series Σ (−1)^k a_k where the a_k go to zero, and you plug that into the computer. Say you only know 200 terms, because that's all you've computed. Since the method has to be linear, it just tells you which linear combination of the first 200 terms is the optimal one to predict the value of the sum, and it gives an incredibly good approximation.
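A minimal sketch of that acceleration, following Algorithm 1 of the Cohen–Rodriguez Villegas–Zagier paper "Convergence acceleration of alternating series" mentioned above, in Python rather than PARI and in plain double precision, so it bottoms out around 15 digits rather than hundreds:

```python
import math

def sumalt(a, n):
    """Accelerated value of sum_{k>=0} (-1)^k a(k) from only n terms.
    Sketch of Cohen-Rodriguez Villegas-Zagier Algorithm 1; the error is
    roughly (3+sqrt(8))^(-n), and only a handful of numbers are stored."""
    d = (3 + math.sqrt(8)) ** n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1))
    return s / d
```

For example, `sumalt(lambda k: 1/(k+1), 20)` gives log 2 to essentially machine precision, whereas the plain partial sum of 20 terms is good to only a couple of digits.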
The resulting approximation is very, very close to the truth, and of course if you have more terms it will be better. That's kind of fun because it's a universal formula; it doesn't always work, of course, but it works for a large class of functions. Then I have three final topics. In fact, now I can start speaking more slowly, because I still have 15 minutes, so I can take five minutes for each; but it's also nice if at the end we have five or eight minutes for questions and answers. You can also always ask the next time, before I start; it doesn't have to be at the end. So that was number four, which I nearly skipped. OK, number five. This is something some of you will know, because I gave a course last year, also a joint course like this one, which quite a few of you attended or heard of, and there I presented this method because it was needed; part of it we developed in the joint work with Garoufalidis on quantum knot invariants that I presented in that course. But it comes up in many different places. This is making numerical sense of factorially divergent series. What does that mean? Now you're doing exactly what I said you shouldn't do: you have a divergent series Σ a_n x^n. Say you know the a_n, or you have some algorithm to generate lots of them, and you know, experimentally or by the extrapolation method or theoretically, that a_n grows roughly like (n!)^λ with λ = 1; there may be other factors as well. And you want to know: what is f of, for instance, 0.001? I mean, if x is large it doesn't make any kind of sense, because the terms go very rapidly to infinity. But what happens if x is a thousandth? Well, look at the first few coefficients: let's say a_n is exactly n!, so you have 1, 1, 2, 6, 24, which are not that big, though they're very big if n is very large. If f is this series, then the terms a_n x^n with x = 10^(−3) are: this is 1, this is 10^(−3),
this is 2 × 10^(−6), this is 6 × 10^(−9), still very small, this is 24 × 10^(−12), still very small. So at the beginning the terms are getting smaller; no matter how quickly the a_n increase, the first twenty or so are fine if x is small enough. At some point, though, if you look at the terms, they get smaller and smaller and then they start going to infinity. The obvious thing to do is just to stop at the optimal stopping point, where the term is minimal. This has been done by mathematicians through the ages and by mathematical physicists; it's very important in quantum field theory, already in quantum electrodynamics. For instance, the magnetic moment of the electron is a constant that's been computed to, I don't know, 12, maybe 18 decimal digits, and it agrees with theory: there's the theory, and the experiment you do in the lab, and both can get, I think, about 17 or 18 digits, a huge accuracy, and they agree. The theoretical method is that you have a sequence of Feynman diagrams giving a power series in the fine structure constant, which is about 1/137, so at the beginning the series looks convergent, and if you take about six terms, that's roughly when it gets to the minimum; that's all you can compute anyway, because the Feynman diagrams with more and more loops become intractable. But when you do that, you get a very high precision number. So, where am I, here: the naive thing is optimal truncation. You just replace the sum by the sum up to the optimal n, the one where the term is smallest. Say a_n is exactly n!: if you increase n by 1, then n! gets multiplied by n and x^n gets multiplied by x, so you want nx to be about 1, and you take n to be 1 over x. In my case x was a thousandth, so you would go to the thousandth term and stop. The error will then be exponentially small in 1/x, rather than merely a fixed power of x. But there's an improvement, actually several improvements.
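Here is a small self-contained illustration of optimal truncation, under one assumption the lecture doesn't make: I take the sign-alternating model series f(x) = Σ (−1)^n n! x^n, whose "true" value can be pinned down by its Borel integral ∫₀^∞ e^(−t)/(1+xt) dt, so that there is something to compare against. Truncating at the smallest term, near n = 1/x, then reproduces that value to roughly e^(−1/x):

```python
import math

def borel_value(x, tmax=50.0, steps=200_000):
    """'True' value of sum (-1)^n n! x^n for x > 0, via the Borel integral
    int_0^inf e^(-t) / (1 + x t) dt (trapezoid rule; tail beyond tmax is
    smaller than e^(-50))."""
    h = tmax / steps
    total = 0.5 * (1.0 + math.exp(-tmax) / (1 + x * tmax))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t) / (1 + x * t)
    return total * h

def optimal_truncation(x):
    """Sum the divergent series sum (-1)^n n! x^n, stopping just before
    the terms start to grow, i.e. near n = 1/x."""
    s, term, n = 0.0, 1.0, 0
    while (n + 1) * x <= 1.0:       # the next term would be term*(n+1)*x
        s += (-1) ** n * term
        term *= (n + 1) * x
        n += 1
    return s
```

With x = 0.05 the two values agree to about seven digits, even though the full series diverges; the error of the truncated sum is about the size of the first omitted term, 20!·0.05²⁰ ≈ 2·10⁻⁸.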
We called these smooth optimal truncation and improved smooth optimal truncation. You still get exponentially good convergence, e to the minus a constant times 1/x, but with a bigger and bigger constant; you can't make the constant go to infinity, but you can beat plain optimal truncation by quite a lot. That's quite a practical thing: in real life this comes up often. All series coming from quantum field theory in any form, from Feynman diagrams, and ours, which were not physics but topology, though they are given by Feynman-type integrals, always have this factorial divergence; that's typical. So if you want to evaluate them numerically, it makes sense that you have to do this. Then another topic, which sounds similar: what if you have a very divergent series, a superdivergent series? E.g., you have some series Σ a_n x^n, and a_n is roughly, I mean there may be further factors, powers of n and so on, but roughly, for instance, (n!)² or some power of n! bigger than the first. For the people who know that terminology, this is called Gevrey class 1, class 2, and so on. This is something I did in joint work, sorry, with Chen and Möller, two papers with Dawei Chen and Martin Möller, and we found that actually much better things happen. This is now not about numerical evaluation; this is about formal power series. It turns out these actually behave better than simply factorial series. For instance, you have a series like this, with known asymptotics of a_n, and let's say it starts x + ···. Then of course you can take the inverse series, some other series with some other coefficients, Σ b_n x^n, and it turns out you can give the exact asymptotics of b_n from the exact asymptotics of a_n. You can't do that if the series is convergent or even simply factorial, because then it's a sum of many terms that interfere with each other; here it works because of the rapid growth. So it's a fun thing, a small topic.
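The operation in question, passing from a series a(x) = x + a₂x² + ··· to its compositional inverse b with b(a(x)) = x, can be done coefficient by coefficient. This little sketch (plain triangular solving, nothing to do with the Chen–Möller asymptotic results themselves) shows where the b_n come from:

```python
def mul(p, q, N):
    """Product of two power series given as coefficient lists, mod x^(N+1)."""
    r = [0.0] * (N + 1)
    for i in range(min(len(p), N + 1)):
        if p[i]:
            for j in range(min(len(q), N + 1 - i)):
                r[i + j] += p[i] * q[j]
    return r

def reversion(a, N):
    """Coefficients b[0..N] of the compositional inverse of
    a(x) = a[1] x + a[2] x^2 + ..., i.e. b(a(x)) = x + O(x^(N+1))."""
    a = list(a) + [0.0] * max(0, N + 1 - len(a))
    b = [0.0] * (N + 1)
    b[1] = 1.0 / a[1]
    for n in range(2, N + 1):
        pw, coeff = a, 0.0              # pw runs through a(x)^k, truncated
        for k in range(1, n):
            coeff += b[k] * pw[n]       # contribution of b[k] a(x)^k to x^n
            pw = mul(pw, a, N)
        b[n] = -coeff / a[1] ** n       # choose b[n] to cancel [x^n] of b(a(x))
    return b
```

For example, `reversion([0, 1, 1], 5)` inverts a(x) = x + x² and returns the signed Catalan numbers 0, 1, −1, 2, −5, 14.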
But I may say something about how these superdivergent series in some ways behave better. They don't make any kind of sense numerically, at least typically; there's no function, though sometimes there is, that has such a series as its asymptotics. But as formal power series they have these beautiful properties, and in our cases the coefficients were all numbers coming from what I mentioned already, Siegel–Veech volumes and Masur–Veech volumes of moduli space, some fancy invariants in the theory of the moduli space of curves, which also have to do with dynamical systems and so on. So that's very rapidly divergent series. Then, in the opposite direction, you have convergent series. You might say, what's there to ask? If it converges, you just sum it. But what if it converges super slowly? There the basic example was suggested, you know, a century ago, by Hardy and Littlewood, I believe it's their example, and it's mentioned in the announcement on both the poster and the website. It's the sum over n from one to infinity, let's call it f(x), of one over n times the sine of nx... sorry, sine of nx, that wouldn't work at all, excuse me, sine of x over n; I wrote nonsense. Well, there's absolutely no problem with the convergence, whatever x is: if n is very big, x/n is very small, so sin(x/n) is roughly x/n, the nth term is roughly x/n², and we all know that Σ 1/n² converges. But the problem is: what if x is 10^30? Of course you could let n run from one to, say, 10^31 on a computer, and approximate the rest by what I just said. But no computer in the world can do 10^31 terms: the age of the universe is about 10^10 years, a year is about 10^7 seconds, and a computer does, say, 10^9 operations per second; you're not even close.
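For moderately large x the brute-force approach does still work, if one sums enough terms directly and then approximates the tail by the expansion sin(x/n) ≈ x/n − x³/(6n³). This hypothetical sketch does that; it is emphatically not one of the four methods alluded to for huge x, which have to cope with the random-looking early terms:

```python
import math

def f_hardy_littlewood(x, N):
    """sum_{n>=1} sin(x/n)/n: add N terms directly (needs N >> x), then
    estimate the tail using sin(x/n) ~ x/n - x^3/(6 n^3) together with
    sum_{n>N} 1/n^2 ~ 1/N - 1/(2N^2) + 1/(6N^3)   (Euler-Maclaurin)
    and  sum_{n>N} 1/n^4 ~ 1/(3N^3)."""
    s = sum(math.sin(x / n) / n for n in range(1, N + 1))
    tail2 = 1.0 / N - 1.0 / (2 * N * N) + 1.0 / (6 * N ** 3)
    tail4 = 1.0 / (3 * N ** 3)
    return s + x * tail2 - x ** 3 / 6.0 * tail4
```

For x = 100 the results with N = 10⁵ and N = 2·10⁵ agree to many digits; for x = 10³⁰ no feasible N is remotely enough, which is the whole point of the example.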
So you can't just do it directly, and the problem is the small terms: for the first 10^10 terms or so, x/n is essentially random, because x is a huge real number, and when you divide by n and reduce mod 2π you get an essentially random number. Now it's true that a sum of random numbers divided by n converges; Σ 1/n doesn't, but with random signs it does, by the drunken man's walk, though incredibly slowly, like one over the square root of the number of terms, so you'll never get anywhere that way. A colleague of mine in Bonn, a physicist, Monin, a mathematically very astute physicist, asked me about this, and we spent some time thinking, and I spent more time, and I came up with four different methods, for x fairly big, much bigger, very big, and even huge, and it does work. Problem one: I'll probably never get to it in the course. Problem two: I can't find my notes. I looked for them in Bonn; then I hoped I'd taken them with me to China in my luggage, but they aren't there, and they're not in Trieste, so I don't know where I put them. I would have to reconstruct it from my computer programs, which would take time; anyway, I'll never get that far in the course. And I'll tell you, this never comes up in real life: it's a beautiful theoretical example due to two great analysts, but it is not a practical problem; you will never see a series like that in your practical mathematical life. The last topic I'll just mention by name. It was supposed to be the title of a joint paper with Cohen and Belabas, who are the authors of PARI; in the end they just included it in some numerical thing, and we didn't write it up carefully, but we were going to call it magic constants. It's a fun thing, but I probably won't get to it, and it has no special application in this course. The title referred to Lagrange extrapolation, which, as already mentioned, is a method that is mathematically equivalent to the one I'm going to describe in the course, but they formulated it very differently, and when you do
it, things converge rather unstably, and there are various constants in the asymptotic expansions, so you can try to study what they look like and get nice formulas; they call those magic constants. So that might be an eighth topic, which I'll probably not get to, and the Hardy–Littlewood one I very likely won't get to either, or only very briefly. It's 17:24, which means I'm finished with my overview, and there's no point starting on one of the eight topics now. So if anybody has questions, now would be great; otherwise we can just go home, if everything was super clear. [Audience question.] Oh no, well, for this one I said there is a paper, but I don't know where it is, and I don't know why you would want to read it unless you've studied the Lagrange extrapolation method. They have a paper somewhere; actually they wrote a joint book, but I don't know if it's already finished, on numerical methods, and there they have a section where they include this and thank me for helping to work it out, but in the end we didn't write a joint paper. It's not a major thing; I'll never write a paper on that. And for this one, I told you I can't even find my notes; I don't even have it handwritten, and it's certainly not published, so there's no reference. I spent two hours looking in Bonn; then I called the secretary in Shenzhen, thinking maybe I'd left the notes there to send to Trieste, but she checked, and over WeChat she photographed and showed me all the papers I'd left; it was lots of math, but it wasn't the calculation with Monin. I have no idea where those notes have gone, so I don't even have my own reference. Some of these things are written up; one or two I'll mention when I get to them, and you can always ask if there are convenient references. I usually prefer to give no reference when I tell the story and then at the end say where you can read more, because otherwise people read first, and then you've seen a different point of view; it's better to hear it with a fresh mind and then
see it worked out formally; but of course in some cases I've already revealed where things are published. Anyone else want to ask? Yes, please. [Audience question.] Oh no, of course not, because for that the physicists would have to tell me the next twenty terms of the series, and they can't. Actually it's even worse: as far as I know, and I asked a physicist friend, for the actual optimal truncation you would have to take, let's say, seven terms, but they can only get up to five-loop diagrams. So they don't even optimally truncate; they just stop when they don't know the next term. They only have about five terms, and no matter how smart you are, you cannot extrapolate asymptotics from five terms of a series; it's just not enough information, and with five terms you can continue it any way you want. You need fifty terms. In many applications, in the work with Garoufalidis, we typically had cases where we could compute, let's say, for one number, with a lot of work, 67 terms; for one we could get 150; for one we could get 35, and that was after days of computation. So typically the terms are hard to get, but still you have thirty, forty, fifty, a hundred, not four or five; and then we could get numerical values, and when things made sense we could check that they were giving us true results. So there are numerical cases, but not for that one, because for those Feynman diagrams no one knows even the next two terms, I think. So it's sub-optimal truncation: you go until the terms start getting bigger, but they have to stop earlier, because they don't know the next terms. A physicist should check this, I may be maligning them, but I'm pretty sure that, at least at the time this famous computation was done, that was the number of terms they had. In principle, if they could do more of the theory, using some huge supercomputer to compute the next Feynman diagrams, they could get maybe two more terms and predict theoretically
another three or four decimal digits. But there's not too much point if you're a physicist, because how do you know they're correct unless you can do the experiment, and at that time this was the absolute limit of what experimental technique could do. So it was beautiful, because both approaches gave the same number, to, I think it was, thirteen or fifteen digits, and they agreed to that precision; it was considered one of the triumphs of quantum electrodynamics, and it is one reason people believe that quantum theory and the Standard Model are working, because this was a wonderful numerical check. Of course, twenty years later you would like to get two more decimal digits on both sides, but I don't know if that's ever happened; a physicist could tell you, I don't know. But usually our situation is that these are well-defined numbers, and you can in principle compute as many as you want, where "as many as you want" might be a hundred or two hundred; it's not thousands, but it's also not just four or five. The Feynman diagrams you can't compute: they get hopeless. Seven-loop Feynman diagrams, I mean, there is already a huge number of graphs, so it's a sum of many, many contributions, and each one is horrendous to calculate, a many-dimensional integral, so you just can't do it. Now we only have one minute, so we could reasonably call it a day, but if somebody still has a question, feel free. You can also ask me privately: I have an office here, as many of you know, if you're at ICTP, and I'll be in essentially every day, so you can always come and knock, and I'm always happy if I'm there; and if I'm not there, you won't notice it. OK, so then, till Thursday, same time, same place. On Thursday I'll actually start, and there'll be some mathematics and some examples, not just talk. I'll also do Stirling's formula and the "proof" that e is the square root of 2π, just so that you see it; but I hope by that time some of you will have thought about why it isn't.
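Stirling's formula, which the lecture promises to prove on Thursday, says n! ∼ √(2πn)·(n/e)ⁿ; the √(2π) in it is the constant behind the joke about e. A quick numerical check, using lgamma to avoid overflow:

```python
import math

def stirling_ratio(n):
    """n! / (n^n e^(-n) sqrt(n)), computed via lgamma so that large n
    doesn't overflow; by Stirling's formula this tends to sqrt(2*pi)."""
    return math.exp(math.lgamma(n + 1) - n * math.log(n) + n - 0.5 * math.log(n))
```

Already `stirling_ratio(10**6)` matches √(2π) ≈ 2.5066 to about seven digits; the relative error is roughly 1/(12n), the first correction term of the asymptotic series.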