I won't notice it. It weighs about one gram. So does anyone have any questions? OK. So today I'm going to... well, last time I ended with something that I didn't quite finish. I'll just remind you very briefly: it was a question to you, kind of homework, if you want to think about it. I explained how you could find the asymptotics of the product of 1 minus q to the n, which of course we know is a modular function, as q goes to 1, using the Euler-Maclaurin method. That has applications to the circle method, and later I'll come back to this in the course. And also as q goes to minus 1, or to a root of unity, using the shifted Euler-Maclaurin, and that will have an application too. That part I carried out briefly. But the exercise was to do the same for what's called the MacMahon function, which is known to be the generating function for plane partitions, as they're called. And so the question was whether anybody wanted to amuse themselves trying to find the exact asymptotics of that. I don't want to ask now if anybody's done it, because I'll come back to such questions in a week or so, when I return to the circle method and some of the things that I mentioned in my Ramanujan talk last week, which many of you heard. And there's another function, call it P2-star, that I can add to the list of questions you might want to think about, to test whether you really have learned how to apply that method. Because, as I told you, that method is useful once a month in your mathematical life: you'll have a function of the form sum of f of n t and want to understand it. These were nice, not completely trivial applications of that method. So if you want to know whether you understood it well, these two variants of the original product of 1 minus q to the n, where you either raise 1 minus q to the n to the power n, or put q to the n squared inside the brackets, are both very nice examples.
But I don't want to discuss them now, and I don't want to ask who tried the exercise, because I'll be coming back to this in a week or two. But I thought I would add one more question and remind you that it was meant as a homework problem. You're under no compulsion; nobody will check. But if you like the method and want to be sure you can use it in a non-trivial situation that you might encounter, these are very good test problems, because they're not entirely trivial: they don't obviously have the right form, and you have to fiddle a little to make it work. So, as I said, I'll be coming back to that. In particular, both of these have interesting combinatorial interpretations. This one is known, although I don't think Major MacMahon knew it when he wrote it down, to be the generating function for the number of plane partitions of n, or solid partitions as they're sometimes called; it's really more solid. If you think of the usual partition, say 8 could be 4 plus 2 plus 2, you make a Young diagram: in the corner of the quadrant you put a line of square bricks, then another line of bricks above it, and a line can't stick out beyond the one below; and then you count the diagrams with n squares. You can do exactly the same in three dimensions: you put one cube in the corner, so on the bottom layer you have a Young diagram, and on the next layer above it another Young diagram that doesn't stick out, and so on, and you count with n cubes. And of course you can do it in any dimension, but nobody in the world has any idea how to write down the generating function in dimension 4. Strangely enough, in dimension 3, and I don't know who proved it, I think physicists, it turns out to be the same as MacMahon's function. And this other one has an even more obvious interpretation; this was already in the paper of Hardy and Littlewood.
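As a concrete aside (my own sketch, not from the lecture): MacMahon's generating function for plane partitions is the product over k of (1 minus q to the k) to the power minus k, and its first coefficients can be checked in a few lines.

```python
def plane_partition_counts(N):
    """Coefficients up to q^N of MacMahon's product  prod_{k>=1} (1-q^k)^(-k).

    Multiplying a truncated series by 1/(1-q^k) is the in-place prefix
    sum c[j] += c[j-k]; we apply that k times for the exponent -k.
    """
    c = [0] * (N + 1)
    c[0] = 1
    for k in range(1, N + 1):
        for _ in range(k):                 # exponent -k means k copies of the factor
            for j in range(k, N + 1):
                c[j] += c[j - k]
    return c

# plane_partition_counts(7) gives 1, 1, 3, 6, 13, 24, 48, 86; for example,
# 3 counts the plane partitions of 2: a second cube stacked on top of,
# to the right of, or behind the corner cube.
```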
This is the sum of q to the n times the number of partitions of n into squares, and it was already treated in the famous 1918 paper of Hardy and Littlewood... Hardy and Ramanujan, I mean. As I mentioned in the talk last Monday that some of you heard, it's much more subtle than the original, and it has many surprising aspects. So I would like to talk about it in this course, because it is asymptotics and it's recent research, so I'm not only telling you things from 30 years ago. I will come back to both of these. And eventually the object will be not to understand the asymptotics of these functions, but to understand the asymptotics of these numbers, their coefficients. For that you use the Hardy-Ramanujan, and later Rademacher and Littlewood, circle method. There are two stages: you have to know exactly how this function behaves near 1 and near every root of unity, and then you use that. It's a two-step process. For the first part you don't need modularity, even though Hardy thought you did need it; these functions are both not modular at all, but it works perfectly well. For the second part, well, if you haven't been able to do the first part, you'd be stuck. You can do it, but it has a lot of new subtleties. So that's just an announcement, advertising for a future lecture. OK, so now I want to come to the subject for today. Let me state the general problem. I mentioned this in the introductory lecture of the course. There's going to be a general problem, and then a less general problem that I'll reduce it to, a special problem, and then there'll be a more general problem, but I'll talk about those later. I want to state the general problem very briefly; I stated it already in the opening lecture. And then I don't want to talk about the solution yet, I want to give several examples. Because the thing with the, not Euler-Maclaurin, sorry, the generalized Euler-Maclaurin, was sums of the form f of n, t.
As I said (and of course these frequencies are meaningless guesses), you will encounter sums like that once a month in your mathematical life. Of course, it depends what kind of mathematics you do; let's say I encounter such sums quite often, even if they may be a little hidden. So that is useful. But the problem I'll talk about here, depending again on what mathematics you do, but let's say in my case, is one where I can use the trick, I would say, three times a week. And I found this trick like 20 years ago, so you can work out how many times I've used it. I've shown it to a lot of friends, and many of them could use it immediately on some numbers that they had. It's not new; I said that before. It's essentially equivalent to various extrapolation methods that were known. But there, at least the way they're presented in the books, it's more complicated than the way I see it, so most people don't actually know them. This one, I can guarantee, once I've shown you the mnemonic, you can't forget. It's so obvious that you think: well, of course, that's what you do. And it's super simple. The program, let's say in PARI, to do this extrapolation is typically one line long, and not one of my lines that go on forever, but a real line. So it's very easy. So the problem is this, and I'll try to be fairly specific. Given a sequence of numbers, real or complex (they might as well be real; you could take the real and imaginary parts), let's say a_n for n greater than or equal to one. But I have to explain what I mean by given. Is it given in closed form? For instance, if it's given as an asymptotic formula, then the question I'm about to ask is empty, because the question is: can you recognize the asymptotics? Given in the sense that you can calculate any particular a_n, by an algorithm or maybe a closed formula, in a reasonable amount of time.
So let's say you can compute a_n for n less than maybe 100, or maybe 1,000 in another case. But I don't mean that you can compute a_n when n is 10 to the one million. Then you wouldn't have to do anything; you could just see the asymptotics, because n is practically equal to infinity. Typically we'll have a problem where it costs money, meaning time on the computer, to compute. If you let the computer run for half an hour, you get 100 values. If you let it run for a day, you might get 1,000 values. But if you want a million values, you'd have to let it run for the rest of your life, and you don't want to do that. So you have a limited number of values. That limited number is very important for the application: you don't need a lot. (There's a small improvement on that which I'll mention later.) But on the other hand, you know those numbers to high precision. In other words, I may only use the first 100 numbers. Remember, I'm doing asymptotics, so you can't take three numbers a_1, a_2, a_3 and make asymptotics out of them; that makes no sense. Let's say I have 100, and you might draw a graph and it seems to have some behavior. But I do need to know these numbers to very high precision. Whether I need 100 digits, or only 20, or 50, or 1,000, depends on how many numbers I have and how much accuracy I want for the final asymptotics; these numbers should be calculable to high precision. Now, sometimes there's no problem of precision: they're integers, so they're exact numbers, or they're rational numbers or algebraic numbers. But sometimes they aren't, and still you can compute each one, if you want, to 50 digits or 100 digits. So that's the input; that's what I mean, for this problem, by given. It's not necessarily given by a formula, or there may be a formula that doesn't make the answer to this question clear. I just had such a question from Emmanuel, who's sitting there.
Numbers where we have a closed formula, but it doesn't make the asymptotics clear at all. So one wants to do some kind of extrapolation. OK, so: a sequence for which you expect growth of a certain shape, and now I'll put in several parameters. As I said, this is the general problem; I'll have a special case in a second and reduce the general problem to the special one, and then later in the lecture, or maybe next lecture, I'll have generalizations with more parameters. But the simplest version is this, as n tends to infinity. First of all, the sequence might grow factorially, or like the square of a factorial. That's actually got a name, it's called the Gevrey type, but, sorry, I was calling it alpha, and I don't want to change the notation from my notes. So: n factorial to the alpha. Now, once we've removed that, it should have less than factorial growth. The next most common growth after factorial is exponential, so there might be some number beta (positive, or actually it could even be negative, giving alternating signs; an algebraic number, anything) raised to the n: beta to the n. Then, once you've removed that, it no longer has exponential growth. What's the next most familiar growth, less than exponential? Of course, it's power growth. So then we might have n to the gamma. So alpha, beta, gamma are constants. But then, once I've removed all of that, it should have a limit, called C0. And if you subtract that limit, what remains should be C1 over n plus C2 over n squared, and so on. So you expect a_n to be asymptotic to n factorial to the alpha, times beta to the n, times n to the gamma, times C0 plus C1 over n plus C2 over n squared plus dot dot dot. This is a very, very common situation. Not every nice function has quite this behavior; even very smooth, analytic functions may not, and as I say, the generalization will allow more parameters and more things. And you can also have fewer: for instance, a_n might simply have a limit and you want to find the limit, which will be the special case. So here C0, C1, et cetera, are constants. So that's what you expect.
And the problem is: find them. By the way, these are in order of importance. Alpha is dominant; after alpha, beta is the next dominant; after beta, gamma; after gamma, C0; after C0, C1, and so on. So you have an infinite sequence of unknowns, numbered, if you wish, from minus three to infinity, OK? And the question is: find first alpha, then beta, then gamma, then C0, then C1. Of course not all of them, but let's say the first 10 of the C_i's, or maybe the first 50 if I have enough numbers and enough precision; find them to high precision, and find them quickly. Quickly means not just that the method is fast (the method is always very fast), but that you don't need a lot of a_n's. The a_n's may be very slow to compute. In the work I did with Garoufalidis that I lectured about last year, we had cases where we could get up to n equals 67 with a day or two of computer time; we could have gone up to 75, but they were very expensive to compute. Once you have them, the algorithm is very quick; that's not the point. So when I say quickly, I mean quickly in the number of a_n's that you need: you want to use a small number of coefficients and nevertheless have very high confidence and get these numbers. And remember, it's not obvious. Suppose I have 100 numbers and I just say, well, it looks like the limit is 1.3; you just draw a graph. Then you're estimating the sum by C0 alone, but the error is of size 1 over 100, so you might be off by a percent, and I want the number to 30 digits. So the question is, how do you really pin it down? If I left you to think about it, many of you would find the answer; it's not really hard, you just have to think through an algorithm, and there's one that's really easy to think of. I'll come to it afterwards; you'll see it's very easy. But what I decided to do in the actual lecture is to start with many examples.
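Since the algorithm itself is deferred to later in the lecture, here is a sketch of the kind of one-line extrapolation being advertised (my illustration in Python rather than PARI, and hedged accordingly: what is shown is the classical Richardson-style trick; if a_n = C0 + C1/n + ... + Ck/n^k + smaller, then n^k times a_n is nearly a degree-k polynomial in n with leading coefficient C0, so its k-th finite difference divided by k factorial pins down C0).

```python
from math import comb, factorial, log

def extrapolate_limit(a, n0, k):
    """Estimate lim a(n), assuming a(n) ~ C0 + C1/n + ... + Ck/n^k + smaller.

    n^k * a(n) is then a degree-k polynomial in n (leading coefficient C0)
    plus a small error, and the k-th finite difference of a degree-k
    polynomial is exactly k! times its leading coefficient.
    Uses only the k+1 values a(n0), ..., a(n0+k)."""
    b = [(n0 + j) ** k * a(n0 + j) for j in range(k + 1)]
    diff = sum((-1) ** (k - j) * comb(k, j) * b[j] for j in range(k + 1))
    return diff / factorial(k)

# Example: H_n - log n tends to Euler's constant 0.5772156649...
def a(n):
    return sum(1.0 / j for j in range(1, n + 1)) - log(n)

naive = a(20)                          # off by about 1/(2*20), i.e. a few percent
better = extrapolate_limit(a, 20, 6)   # vastly more accurate from the same data
```

With exact input the cancellation is exact: feeding it a_n = 3 + 5/n + 7/n^2 as Fractions with k = 2 returns exactly 3.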
I have like four or five prepared, and like 50 others that I could have given; I just made a selection, all from my own work. Just things that came up in various projects. Some were from other people's papers, but they asked me: can you find the asymptotics? So things that I was confronted with, and they come from combinatorics, algebraic geometry; they can come from anywhere. It's just numbers. As I said, this is a situation you see all the time. You find some numbers, just a sequence of numbers, maybe even integers, and they're growing very quickly, and you would like to know exactly how quickly they're growing. Not something vague, like it's roughly between n to the 4 and n to the 5. I would like: it's n to the 4.229 blah blah times something, the exact asymptotics. And surprisingly enough, it's extremely easy. OK, so example one. Let V_D (this is meant to be some kind of script V, and D is an integer) be the vector space of Vassiliev knot invariants of degree D. So D is some integer: zero, one, two, three, and so on. Now, if you don't know what a Vassiliev knot invariant is, then just to put my conscience at rest I'll tell you very briefly, but it's completely irrelevant: these are just some dimensions that we're going to bound in terms of some other numbers, by a result of somebody else, Stoimenow, and those other numbers I will define. But this is the motivation. So let me tell you very briefly. Vassiliev, maybe 30 years ago, I don't remember how long ago, discovered a very nice type of knot invariant, which covers a couple of the known knot invariants; maybe the Alexander polynomial, I forget which were examples. And the idea is: if you have a knot, you can always draw it by making a picture in the plane with crossings, and at each crossing it either goes over or goes under.
So you take a generic plane projection, and here would be a typical case. Well, this would be a kind of really stupid one, because it has only three crossings, so it can't even be alternating; this is probably, no, maybe that's the trefoil knot, I'm not even sure. The first interesting one is the figure eight; I can't draw it anymore, I used to know how to draw it by heart, and you make alternating crossings. It doesn't matter at all. Anyway, you have a knot. Now, a knot invariant associates to any knot a number, the invariant of K. A knot invariant is something you've invented that is well-defined: by definition, when you draw a knot, you can simplify it, you can do so-called Reidemeister moves, and then compute this invariant and it doesn't change. So suppose this invariant had the property that if you have, locally in your picture, a crossing this way and you replace it by the crossing the other way, it doesn't change. But that would be a really stupid invariant, because you could immediately unknot your knot, you just pull, and so it would be the constant. That's D equals zero. But what if it does change? Take the difference, the invariant of this minus the invariant where you change one crossing into the other, and let's call that I_1 of the knot; of course, it depends where you do this. And now you can do it again: you take I_1 and you make another difference. Now let's assume that that is zero. Then the original one wasn't necessarily constant, and so that's a bigger vector space, but you can easily see it's finite-dimensional, so don't worry. The number of times that you have to take this difference before you kill the invariant is called the degree, and if you fix the degree, you get a finite-dimensional vector space. And let V_D be that space; we want to know the dimension, and we'd want to know what it is. So there are four bounds that I know. Well, the work I did is 20 years gone.
For all I know there are many new bounds, but I'm talking about this method, and that was work from some number of years ago. So the first bound, and I don't even know who proved it, is that the dimension of V_D is at most the number of linear chord diagrams of size D. A linear chord diagram: let's say D is anything, one, two, three, and so on. You put 2D points on the real axis, number them 1 up to 2D, and then you connect them in pairs by semicircles. The size of the semicircles doesn't matter at all; it's just such a picture, and we have to know how many ways there are of connecting them, OK? One does things with these pictures, but I won't. So such a diagram corresponds to an element of the symmetric group on 2D points which is a free involution. In other words, each i has a partner, which is then called tau of i, and of course the partner of tau of i is i. So tau squared is the identity and there are no fixed points; it's just free involutions. And, this is somebody's theorem which I should have looked up but have forgotten, V_D is the space spanned by linear chord diagrams on 2D points modulo a relation called the four-term relation. So there's a specific relation: if you have four of these chord diagrams, related in some simple way by crossing and uncrossing, call them A, B, C and D, then A minus B plus C minus D is zero; you divide by that and you get a vector space. In fact, the theorem is even an identification: V_D can be written as the vector space of linear chord diagrams modulo the linear relation called the four-term relation. So that immediately shows that the dimension of V_D is less than or equal to the number of free involutions, with no restriction.
Now, free involution means there are no fixed points and it's an involution, so it's a product of D two-cycles, and this number is of course trivial to compute and very well known. It's 2D factorial over 2 to the D times D factorial, which you can also write as 2D minus 1 double factorial. The reason: you start with the first point; you have to connect it to a point which is not itself, so there are 2D minus 1 choices. Once you've chosen that, you take the next free point, and there are 2D minus 3 possible partners. So it's just 2D minus 1 times 2D minus 3, down to 3 times 1. So that's an immediate upper bound, and this number grows roughly like D factorial times 2 to the D. More precisely, it is asymptotically equal, by Stirling's formula, to 2 to the D times D factorial over the square root of pi D. So it's roughly like D factorial, but it has this 2 to the D as well. OK, so that's the trivial upper bound, once you know this theorem. So that's the starting point; this implies the first upper bound, the trivial one. Then, and I'll just give the names of the authors, Chmutov and Duzhin, in 1994, proved that it's less than or equal to D minus 1 factorial. That's much better, because we've gained a factor of 2 to the D; we still have the factorial, OK? Then, and I don't know how to pronounce the first author's name, my Vietnamese is non-existent, he and Stanford, in 1999, improved it, not by very much, but by a factor of roughly 2D. So that's not a huge improvement, but it shows that people took this problem very seriously, and those were the improvements. But then Stoimenow showed that it's less than or equal to a number xi_D, which I'll tell you about in one second. This was Stoimenow, in actually 1998. So you could say: well, why isn't that listed as the strongest? Because it wasn't known what the asymptotics of xi_D was. He showed the dimension is at most this number xi_D, which I'm about to show you, but he didn't know anything about its asymptotics. So among these results, per se, neither one overtakes the other.
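A quick sanity check of this count and of the Stirling estimate (my own sketch, not from the lecture):

```python
from math import factorial, lgamma, log, pi, exp

def free_involutions(D):
    """Number of free involutions of {1,...,2D}: (2D)! / (2^D * D!)."""
    return factorial(2 * D) // (2 ** D * factorial(D))

def double_factorial_odd(D):
    """(2D-1)!! = (2D-1)*(2D-3)*...*3*1: the choices when pairing points greedily."""
    r = 1
    for m in range(2 * D - 1, 0, -2):
        r *= m
    return r

def stirling_ratio(D):
    """(2D-1)!! divided by its approximation 2^D * D! / sqrt(pi*D), via log-gamma."""
    exact = lgamma(2 * D + 1) - D * log(2) - lgamma(D + 1)
    approx = D * log(2) + lgamma(D + 1) - 0.5 * log(pi * D)
    return exp(exact - approx)

# free_involutions(4) == double_factorial_odd(4) == 105, and the ratio to the
# Stirling approximation tends to 1 as D grows (it behaves like 1 - 1/(8D)).
```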
And xi_D: what Stoimenow showed is that every linear chord diagram is congruent, modulo this four-term relation which I haven't written out, to a linear combination (LC always means linear combination) of regular chord diagrams. For my lecture today it makes absolutely no difference what the definition is, but I will tell you because it's fun. There are two ways to say it. So tau is this free involution. Tau is regular if, whenever tau reverses two adjacent points, meaning tau of i plus 1 is less than tau of i (they have to be different because it's a permutation, while i plus 1 is bigger than i, so the question is whether tau reverses the order), then the chords starting at i and at i plus 1, those two particular chords, cross or coincide. Coincide can happen: i could be joined to i plus 1, so that i plus 1 is joined to i, and we allow that as well. If I write that in formulas, it says: if tau of i plus 1 is strictly less than tau of i, then tau of i plus 1 is less than or equal to i, which is of course less than i plus 1, which is in turn less than or equal to tau of i. So the interval from i to i plus 1 is inside the interval from tau of i plus 1 to tau of i, and you can easily see that that's the same as saying the two chords cross or coincide. Those are called regular. As I said, the definition doesn't matter except for fun, but the point is you get this upper bound: the dimension of V_D is at most xi_D, the number of regular linear chord diagrams. And there's an algorithm to compute it for any D, but it was a triple loop, quite complicated; if you write a computer program, it's roughly O of D cubed to do it for a given D, so it takes some time, and he obviously used a computer. Stoimenow went up to about D equals 30, I think exactly 30.
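For small D one can bypass the clever O(D^3) algorithm and count regular diagrams by brute force, straight from the condition just stated (my own sketch; tau is stored as a dict with tau[tau[i]] == i):

```python
def free_involutions(points):
    """Yield all free involutions of the list `points` as dicts i -> tau(i)."""
    if not points:
        yield {}
        return
    first, rest = points[0], points[1:]
    for j, partner in enumerate(rest):
        for tau in free_involutions(rest[:j] + rest[j + 1:]):
            tau[first] = partner
            tau[partner] = first
            yield tau

def is_regular(tau, D):
    """Whenever tau(i+1) < tau(i), the chords at i and i+1 must cross or
    coincide, i.e. tau(i+1) <= i and i+1 <= tau(i)."""
    for i in range(1, 2 * D):
        if tau[i + 1] < tau[i] and not (tau[i + 1] <= i and i + 1 <= tau[i]):
            return False
    return True

def xi(D):
    """Number of regular linear chord diagrams on 2D points."""
    return sum(is_regular(tau, D)
               for tau in free_involutions(list(range(1, 2 * D + 1))))
```

For D = 1 through 5 this brute-force count gives 1, 2, 5, 15, 53.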
And he found, and this is in his own words from his paper, that this xi_D seems to be... but he didn't have the asymptotic method; he was just eyeballing the numbers. What he put, so this is from his paper, is: something like D factorial over 1.5 to the D. That's a lot better, because originally we had D factorial times an exponential, 2 to the D. There we gained an exponential factor, then the next improvement gained only a linear factor, but now we're gaining another exponential factor, 1.5 to the D. So compared to the original number, we've gained a factor of 3 to the D; that's already good. But then comes my hot chocolate story, which I think I've only told Emmanuel; I haven't yet told it here. I tell lots of anecdotes. I'm not sure if it was hot chocolate or cocoa; it's almost the same drink, I'm not quite sure what the difference is. So I was studying some completely different sequence of numbers, which I'll define in a second; I forget what year, I could look it up. I had some other sequence, with a totally different definition. And I wanted to understand it, and it had a much simpler description, so I knew 200 values. Well, actually I can tell you the definition. This was in a paper on a crazy function invented by Kontsevich. It's now called the Kontsevich-Zagier function, because he invented it, showed it to me, and we played with it for an hour, and that's all he did with it. Well, he had done some numerical computation, and I worked on it for several months, found the thing I'm about to show, and it connects with modular forms. So there's a paper written by me, but he invented the function. Except it turned out he didn't invent the function: I was told later, by several different topologists, that this was a known function. It was the Kashaev invariant of the trefoil knot, which is that very simple knot I drew first. And his function is this; the variable is q, and it's a very crazy function, because it makes no sense almost anywhere.
You take the infinite sum, from n equals zero to infinity, of the product 1 minus q, times 1 minus q squared, up to 1 minus q to the n. That product is called the nth Pochhammer symbol, or q-factorial. Now, if q is bigger than 1 in absolute value, then the nth term goes rapidly to infinity in absolute value, so the series certainly diverges. But if q is less than 1 in absolute value, then the nth term has a limit, namely the infinite product, and it's a non-zero limit, so the series still diverges. And if q is on the unit circle, then it oscillates all over the place. So this series essentially never converges, but it does sometimes. It converges if q is a root of unity. For instance, if q is i, then as soon as n is at least 4, one of the factors is 1 minus q to the fourth, which is zero; so the series terminates, and you get an algebraic integer. However, it also converges somewhere else. This is an example of what's called the Habiro ring, which is a wonderful ring; I won't talk about it in this course. This series also converges not just at roots of unity, but infinitesimally near roots of unity. So let me write q as 1 minus epsilon, where epsilon is infinitesimal. You might say there aren't any infinitesimal numbers, but epsilon is a formal variable. Well, now you see that each factor, 1 minus q to the k, is a polynomial in epsilon divisible by epsilon; the first one is epsilon itself. So the nth term is a polynomial, hence a power series, divisible by epsilon to the n. So now we're in the epsilon-adic topology, and of course any series whose nth term is divisible by epsilon to the n is a well-defined power series, because each coefficient is just a finite sum. So therefore you can expand this as a sum of some numbers, and to fit with what I said at the beginning, let's call them a_n (I could call them a_d), so the sum of a_n epsilon to the n.
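This epsilon-adic expansion is easy to compute exactly (my own sketch, not from the lecture): truncate all polynomials beyond epsilon^N, and note that only the terms with n at most N of the outer sum can contribute to the coefficients kept.

```python
from math import comb

def kz_coefficients(N):
    """Coefficients a_0..a_N of  sum_{n>=0} (q;q)_n  at q = 1 - eps.

    Work with integer polynomials in eps, truncated past eps^N. The nth
    term of the sum is divisible by eps^n, so terms with n > N cannot
    touch the coefficients we keep, and the computation is finite.
    """
    def mul(p, q):
        r = [0] * (N + 1)
        for i, c in enumerate(p):
            if c:
                for j, d in enumerate(q):
                    if i + j <= N:
                        r[i + j] += c * d
        return r

    total = [1] + [0] * N              # the n = 0 term (empty product) is 1
    term = total[:]
    for n in range(1, N + 1):
        factor = [0] * (N + 1)         # 1 - (1 - eps)^n, by the binomial theorem
        for k in range(1, min(n, N) + 1):
            factor[k] = (-1) ** (k + 1) * comb(n, k)
        term = mul(term, factor)       # the truncated Pochhammer symbol (q;q)_n
        total = [x + y for x, y in zip(total, term)]
    return total
```

For instance, kz_coefficients(5) returns [1, 1, 2, 5, 15, 53], the beginning of the little table that follows.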
That makes perfectly good sense, and I can even give you a little table; but if you're taking notes, keep a little space, because there will be a second table a bit later. Well, n could also be called d. It starts at n equals zero, which we don't really care about very much, but the story won't work without it: the coefficients begin 1, 1, 2, 5, 15, 53. So you can see these numbers are growing quite a bit, and I can already let the cat out of the bag, though I'm sure you've guessed: the story is going to be that Stoimenow's numbers, from his upper bound on the Vassiliev invariants, were mine. But how did I find that out? I'd never heard of Stoimenow; I'd heard of Vassiliev invariants but didn't know any of these bounds. So, if you have a sequence of numbers and you want to recognize it, I like to joke that there are two algorithms. One is: you look it up in Sloane's. It used to be a book, called A Handbook of Integer Sequences. It was a wonderful book, it was in every library, and I used it many years ago; but of course, for many years now it's been online. So you go to the online version, now the On-Line Encyclopedia of Integer Sequences, and you type in the beginning of your sequence, and it immediately tells you: this is a known sequence, we have one that starts that way, and it gives you references, the definition, what is known about it. It's very, very good for identification. To finish that part of the joke, there are two algorithms: one is that, and the other is ask me. Because quite often I recognize numbers, I can figure out something, because I love numbers. I'm not the only person, but I'm the only person I know who loves sequences of numbers as much as I do. In this case, I couldn't ask myself, because I was me. So I went to the Encyclopedia, I typed in 1, 1, 2, 5, 15, 53, and the computer immediately said: sorry, it's not in our database. We don't have this sequence.
Well, there was no reason it should be; nobody, as far as I know, had ever looked at this thing. So that was all. But then the next day, I got an email from Neil Sloane, the author, who happens to be, well, not a close friend of mine (I haven't seen him in many years), but a friend. And he wrote a very nice email; I still remember it. He said: Dear Don, late at night, if I cannot fall asleep, I sometimes get up and make myself a cup of hot chocolate. Or maybe it was a cup of hot cocoa. But sometimes I get up and go to the computer and see who has been using my Handbook. So last night I looked at the computer and I saw that you had looked up this sequence. As I say, he knew me. So he said: but you obviously didn't know (I had typed 1, 1, 2, 5, 15) that the computer program has been told to leave off initial ones, because there's no point. Which is actually idiotic: it means that if you put in the most famous sequence in mathematics, the Fibonacci numbers, 1, 1, 2, 3, 5, it will say unknown sequence. But I told him, by the way, this is idiotic; surely you can tell your computer, if somebody types in a sequence, to remove any unnecessary initial ones. And he immediately did, so now you can put in Fibonacci. So he said: that's why it didn't work. So I typed in your sequence again, 2, 5, 15, 53, without the initial ones. And he said: here it is, it's sequence number such-and-such. And he said: I assume it's the same definition you had. But it wasn't at all, and that led to a research paper, because then I found this whole story. He, Stoimenow, had computed the first 30; I could get 200, because this thing is very easy to compute. You could compute a thousand, or a few thousand, if you wanted. And then, OK, so that's the story of the hot cocoa. But then the question was: how do these numbers actually grow? And it was actually for this paper that I worked out the method, which I didn't know at the time. As I said, it's definitely known in some form, but I've never seen it in this form.
So I found numerically, and you can see just by looking at this table that they're growing quite fast (it's already 1014 for n equals 7), that indeed it was n factorial, as we already knew, because remember these upper bounds were n factorial times an exponential, and Stoimenow said roughly (his D is my n) n factorial over something. So, just like what I told you before: n factorial to a certain power, which here is one. Then some purely exponential thing, like his guess of 1.5 to the D. Well, this number came out as 1.6449 dot dot dot to the n in the denominator, and essentially anyone recognizes that number: it is zeta of 2, which is pi squared over 6 according to Euler. And indeed that's essentially how Euler found it, by computing 1.6449 and recognizing it as pi squared over 6. So you see, it's already a much better bound than the 1.5 to the D, because 1.6449 is bigger. But then you can continue. If you divide by that, there's still a square root of n, which makes it a little bigger again, which is why his number was smaller than 1.6449: remember, his n was only 30, and the difference between 1.6449 to the 30 and 1.5 to the 30 isn't huge, and part of it is eaten by the square root of n, which is about five and a half. Of course, if you had a thousand numbers you could distinguish them easily, but they were expensive for him; even fairly expensive for me, though less so. But then comes the point: that part is easy to recognize, but the other numbers I could only get by a numerical procedure. And I actually told you this in the opening lecture, so I've ruined the surprise, but I'll write it again. The first number, C0, to high precision, to however many digits that is (I think 19), was supposed to be this. And for the next ones, in the nature of the method, you lose precision as you go up; you could get more here by taking more terms, but I was using only 200 terms, and I didn't know the method then as well as I know it today.
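The recognition step can be replayed mechanically (my sketch, assuming the expansion shape described here: a_n roughly n factorial times beta^n times sqrt(n) times a power series in 1/n): compute exact a_n, form the ratios a_{n+1}/((n+1) a_n), whose corrections are a series in 1/n, and extrapolate those corrections away; the limit that emerges is 6 over pi squared, i.e. 1 over the 1.6449... = zeta(2) of the story.

```python
from fractions import Fraction
from math import comb, factorial, pi

def kz_coefficients(N):
    """Exact coefficients a_0..a_N of sum_{n>=0} (q;q)_n at q = 1 - eps."""
    def mul(p, q):
        r = [0] * (N + 1)
        for i, c in enumerate(p):
            if c:
                for j, d in enumerate(q):
                    if i + j <= N:
                        r[i + j] += c * d
        return r
    total = [1] + [0] * N
    term = total[:]
    for n in range(1, N + 1):
        factor = [0] * (N + 1)          # 1 - (1 - eps)^n
        for k in range(1, min(n, N) + 1):
            factor[k] = (-1) ** (k + 1) * comb(n, k)
        term = mul(term, factor)
        total = [x + y for x, y in zip(total, term)]
    return total

a = kz_coefficients(40)

def r(n):
    """a_{n+1} / ((n+1) a_n): under the ansatz this tends to beta = 6/pi^2,
    with corrections forming a power series in 1/n."""
    return float(Fraction(a[n + 1], (n + 1) * a[n]))

def extrapolate(f, n0, k):
    """Kill the 1/n, ..., 1/n^k corrections by a k-th finite difference."""
    b = [(n0 + j) ** k * f(n0 + j) for j in range(k + 1)]
    return sum((-1) ** (k - j) * comb(k, j) * b[j] for j in range(k + 1)) / factorial(k)

est = extrapolate(r, 33, 5)   # uses r(33..38), i.e. a_n up to n = 39
# est is close to 6/pi^2 = 0.607927..., the reciprocal of zeta(2) = 1.6449...
```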
I think now I could get many more digits with the same 200 terms. These were the first three coefficients, but purely numerically. The fact that they exist and have very well-defined values also confirms the guess: after all, we don't yet have any proof that this has an asymptotic expansion at all, but the fact that, if it does have one, you get very well-defined numbers makes you believe it's true. In many problems like this you have no idea, even after the numerical analysis, whether it's true, but if the numbers converge to very high precision, you're fairly sure. But then, after staring at the numbers for many weeks, I finally wrote a paper relating them to modular forms, and it turned out that I could give a closed form for C0, and in principle for any coefficient, but they get so complicated that I don't think I ever even wrote down the next one. And I think I even wrote this number in the opening lecture: C0 is exactly, I mean, that's a theorem, 12 square root of 3 over pi to the five halves, times e to the pi squared over 12. So it's a theorem, first of all, that there is such an expansion, with this zeta of 2, and that C0 has this value. And when you now calculate that number on the computer, you find that all 19 digits are correct. So the asymptotic method really works; that's the message I'm trying to convey, and it's super easy to apply. But let me actually tell you more about these numbers, because it's very pretty mathematics anyway. Some of this I told in my course last year, because this is a Kashaev invariant, and that course was on knot invariants and Kashaev invariants; not the Vassiliev connection, but the Kashaev invariant, which is the other interpretation, coming from the shifted factorials. So here's another fact. If I take the same series, so I take the same series, (1 minus q) times dot dot dot times (1 minus q to the n), then we see that what we did before was actually very unnatural mathematically.
Because anybody who looks at that says, wait a second: I already said that if q is less than one in absolute value, the limiting value of this summand is the infinite product. And that infinite product, we all know, is the Dedekind eta function, except for two things. First of all, it's off by a factor q to the 1 over 24; so one should put in the q to the 1 over 24, so that the limiting value is truly modular, and then everything has much better behavior. And secondly, q is not the natural independent variable; it's e to the 2 pi i tau. So if tau is on the imaginary axis, q would be e to the minus 2 pi t, and I can rescale, so let me just write q as e to the minus t; we don't want q near 1 as 1 minus epsilon, but as e to the minus t. So it's the same thing: if I substitute q equals e to the minus t, each term is no longer a polynomial in t but a power series, and each factor 1 minus e to the minus nt is divisible by t; there are n of them, so the nth term is divisible by t to the n. So this converges as a power series in t. And now I can illustrate my story from before, that there are two methods. You find that you get some integers, T_n, up to a factor 24 to the n. And here are the T_n. They grow a lot faster: 1, 23, 1681, 257543; I'll only give one more, I only wrote one more down and they're huge: 67637281. So these are mathematically much more natural, because we're taking the right variable t, which is essentially tau in the upper half plane, and we've included the q to the 1 over 24. Now again, there are two methods. You can ask me, well, on this one I'm proud to say I didn't look it up in Sloane's book of sequences; I thought about it and found the answer by myself, and then I looked it up in Sloane's book of sequences, and of course it was there: they're called Glaisher's T numbers, and they were already found by Glaisher, I think in 1903, so well before me. And they're defined by, well, again: to remember any sequence of numbers, you use a generating function.
But a generating function, there are many kinds. We could take just the sum of T_n x to the n, or we could divide by n factorial; here we make an odd series and again make it exponential, the sum of T_n t to the 2n plus 1 over (2n plus 1) factorial. Now, these numbers grow so fast that the ordinary generating function diverges, but this one converges. So it's an actual function, and it's a very nice function: sine of 2t divided by twice cosine of 3t. So that's the definition of the Glaisher numbers, and that's how they're listed: these are Glaisher's T numbers, defined by this generating function. And we take the same sequence with a different factorial and get something completely different. Now, from this you also get a closed formula, which I'll give because in the first lectures of this course I explained about asymptotics and values of L-series at negative integers and about Bernoulli polynomials, and this is a bit of all of them. Let me write L12 of s for the Dirichlet series, sum of (12 over n) times n to the minus s. If you don't know what (12 over n) is, I can tell you the first few values: it's zero if n has a factor in common with 12, so divisible by 2 or 3, and for n congruent to 1, 5, 7, 11 the coefficients are plus, minus, minus, plus, and then it repeats with period 12. Okay, so that's the L-series, and then T_n is exactly equal to (2n plus 1) factorial over 2 square root of 3, times this L-series at 2n plus 2, times (6 over pi) to the 2n plus 2. And that gives you the exact asymptotics, because if n is large, then the L-value is just 1 plus O of 1 over 25 to the n, so it's essentially one. So this tells you the asymptotics. Of course, you can also use the functional equation, which I won't write down: you could also write T_n as a simple multiple, now rational, of the value of this L-series at a negative integer, which in turn you can evaluate by the method I explained in the first lecture of the course.
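Glaisher's T numbers are easy to regenerate from the stated generating function, the sum of T_n t^(2n+1)/(2n+1)! equals sin(2t)/(2 cos 3t), by exact power-series division; this sketch is my own check, not code from the lecture:

```python
from fractions import Fraction
from math import factorial

def glaisher_T(count):
    """T_n defined by: sum_n T_n t^(2n+1)/(2n+1)! = sin(2t) / (2 cos(3t))."""
    N = 2 * count + 1                     # truncation order in t
    sin2 = [Fraction(0)] * (N + 1)        # Taylor coefficients of sin(2t)
    cos3 = [Fraction(0)] * (N + 1)        # Taylor coefficients of 2*cos(3t)
    for k in range(0, N + 1, 2):
        cos3[k] = Fraction(2 * (-1) ** (k // 2) * 3 ** k, factorial(k))
        if k + 1 <= N:
            sin2[k + 1] = Fraction((-1) ** (k // 2) * 2 ** (k + 1), factorial(k + 1))
    # power-series division f = sin2 / cos3 (constant term cos3[0] = 2)
    f = [Fraction(0)] * (N + 1)
    for n in range(N + 1):
        f[n] = (sin2[n] - sum(cos3[n - j] * f[j] for j in range(n))) / cos3[0]
    return [f[2 * n + 1] * factorial(2 * n + 1) for n in range(count)]

print(glaisher_T(4))   # the fractions come out as the integers 1, 23, 1681, 257543
```

With more terms one can also watch the L-series formula kick in: the ratio of T_n to (2n+1)!/(2 sqrt 3) times (6/pi)^(2n+2) tends to 1 very rapidly.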
So this is another nice application: you can write any Dirichlet series, if chi is periodic, in terms of Bernoulli polynomials, where the denominator is the period. So here the denominator is at most 12, well, it's exactly 12 in the exact formula, for what it's worth. Up to a factor, it's just a difference of Bernoulli polynomials: the same Bernoulli polynomial evaluated at 1 over 12, plus, at 5 over 12, minus, at 7 over 12, minus, at 11 over 12, plus. And remember, the Bernoulli polynomials have a symmetry, so that combination would be zero if the index were odd; it has to be even, and then you only need two of the four terms, by the symmetry. Okay, so that's a very nice sequence of numbers, and that's my first story, including the hot chocolate story. I'm purposely digressing, because that's the whole point of the course: to tell lots of fun mathematics that is in some way related to asymptotics. It's not a concentrated course proving a bunch of theorems, so I want to apologize for all the digressions; it's that kind of a course. Okay, so that was my first example, and I hope it's a nice one, with all of these surprises: that it has this deep interpretation in terms of regular linearized chord diagrams and, completely differently, in terms of the so-called Ohtsuki expansion of a function in the Habiro ring. So it's an example of a huge class of functions of very great interest in three-dimensional quantum topology; my whole course last year was about that, and some of you also heard it. Okay, well, at this rate I won't even finish the second example, and then I wanted to present the actual method; but if I don't get to the method, I don't care, because then we have something for next time.
Well, we'll have something for next time anyway, I'm sure, but the method is fun, and the examples are somehow even more fun, and the less you know, the more fun it is, because once you know, you say, sure, I could have done that too. But when you see this, well, certainly when I did this and found it, I was completely amazed that it was possible, just out of 200 numbers, not only to see the pi squared over 6 but to get this constant to all those digits. And then I thought, this can't be right; but when the analysis two or three months later gave the exact number, it really was right, so I kind of fell in love with this method. Which, again, I emphasize: I'm not presenting it as a new piece of mathematics. It's probably equivalent in some sense to Lagrange interpolation, and Lagrange was a very long time ago, and several friends have told me it's almost certainly equivalent to something called the Richardson extrapolation method; when I looked that up in some physics book, the explanation was so hard for me that I couldn't quite figure out what it was. I'm sure one could, and I'm sure my friends are right that it's equivalent, but the way I'll show you, with the mnemonic, makes it easier to recognize. So, my second example is from algebraic geometry; specifically, it's from a paper of two friends of mine, Daniel Grünberg and Pieter Moree. I won't write the title, because what they did is not relevant here; I'll just write a definition. These are numbers that had been studied for 50 years or so, by people like van der Waerden. So n is again the index: let v_n be the number of lines, so this is what's called enumerative geometry, a whole field of algebraic geometry. It really started with Schubert, not the musician, and that's the famous Schubert calculus; you do it in terms of Chern classes of various bundles and various properties of Grassmannians and things like that. But the point is to count, in some problem of algebraic geometry, how many objects of a certain sort there are,
but you have to adjust all the dimensions so that it's finite; if you know about moduli spaces, you want the virtual dimension of the moduli space to be zero, and then it's just a collection of points, and you want to know how many points. So here it's the number of lines in a hypersurface. Here I'm not changing the degree of the objects, it's always lines, though there are problems like this with curves of degree n; here it's degree one, lines in a generic hypersurface of some degree in P n, let's say P n of C, it doesn't matter, so in projective space. And the degree has to be 2n minus 3; that's what makes the number finite, what makes it interesting. If the degree is too big or too small, then in one direction there will generically be none, and in the other there will be infinitely many; to get a finite number you have to have a zero-dimensional moduli space. When you compute, it's this. So let's take a couple of cases. Well, v_1: even I, and I like to be very pedantic and very degenerate, even I don't want to think about things of degree minus one in P 1. I mean, these are lines in P 1, and P 1 is already a line, so is that one line, or is it not a line, just the line, of degree minus one? Hmm, but a hypersurface in P 1 is a point, and the number of lines in a point is surely zero; so I guess v_1 is zero, but it's really kind of weird, so let's just ignore it. But how about v_2? A hypersurface in P 2 of degree 2n minus 3, that's degree one, is a hyperplane, but a hyperplane in P 2 is a line, and the number of lines in a line is one; so v_2 is one, and this case is trivial. Now v_3: 2n minus 3 is three, we're in P 3, and we have a hypersurface of degree three; that's called a cubic hypersurface, so it's a cubic surface. The number of lines on the cubic surface, that's the most
famous example of an enumerative challenge that there is: there are 27 lines on the cubic surface. Okay, so there's a formula that I'll write in a second, but let me give you a couple more numbers, n and v_n. So I already told you it starts 1, that wasn't very hard, then 27, but they grow rather fast: the next one is 2,875, the next is 698,005, and the next, if I can read my handwriting, is 305,093,061. So there's a formula, and I'll give you the formula just for fun, and even a slight improvement, the formula that I found, which I was proud of; maybe it was known, maybe it wasn't, but I found it for their paper. And then I found the asymptotics, in the end theoretically, but first by the asymptotic method. These two authors, that is, Daniel Grünberg and Pieter Moree, who is still at the Max Planck Institute, liked it so much that I wrote an appendix proving the asymptotics of this sequence, and they wrote a section explaining my method; they called it Don's asymptotic trick. So that's one place where it's written down, because they thought it was a lot of fun, though in fact we didn't strictly need it here, because in this case you can prove everything. So here is the closed formula. It comes from something that I don't know if anyone in this room has used, maybe Pavel Putrov, and certainly Lothar Göttsche would know it well: the Atiyah-Bott localization formula, in some equivariant version. It implies, after some work, the following explicit formula, which was known in the old literature. You sum over all pairs of integers i less than j between 0 and n inclusive, and then you take the product over pairs a, b of non-negative integers, I won't keep writing that they're non-negative, with a plus b equal to the degree we want, 2n minus 3, so a goes from 0 to 2n minus 3 and b is the complementary number, of a times a free variable w_i plus b times w_j. So each factor is a linear form in the n plus 1 variables, and we have a product of linear forms, so a form of degree 2n minus 2, and then we divide by the
product over k from 0 to n, k different from i and j, of (w_i minus w_k) times (w_j minus w_k). So that's the answer. And the crazy thing about this answer: the a and b vary in the product, but the w_i are fixed, and surprisingly, you can't see this by looking at the formula, the whole expression is independent of these n plus 1 complex numbers, which is certainly not obvious at all. You've written an expression in a bunch of variables and it's simply a constant; it only depends on n, and for small n it's given by that table. So they showed me this; that formula was known. First I found a much more elementary proof of the fact that it's constant, and that then gave a closed formula which is very much easier to work with, which I have here: v_n is the coefficient of x to the n minus 1 in (1 minus x) times the product, again over non-negative a and b with sum 2n minus 3, of (a plus b x). That's a much simpler formula: obviously there aren't any parameters, you just expand and you get some integer, and those are the first values. And now the method: if you apply the method using this formula, it's very easy to compute; the other formula would be very unpleasant, you could plug in some random numbers like 0, 1, 2, 3 up to n, I guess, and just do it, but anyway this one is extremely easy. So you can compute a bunch of numbers, and if you do that, you will find, well, this is now anticlimactic, because you'll find something whose exact shape doesn't really matter, but just for completeness I'll say what it is. It's again growing; here it's natural to write it as a power of 2n minus 3, and the power is 2n minus 7 halves, and then you get a power series which starts with the square root of 27 over pi. But all the coefficients contain the square root of 27 over pi, so I took it out, because then they're rational, and the series starts 1 minus 9 over 8n, minus 111 over 640 n squared; and the next numerator is full of nines and ones, it's
like calling the police, over 25,600 n cubed, and so on. So there is an asymptotic expansion, and you can find it very easily using the trick, once I've shown you the trick. The only problem when you do it is recognizing the constants. This coefficient will come out as 1.125000 dot dot dot, and anybody can recognize that; once you know it's rational, you either just multiply by something small or you use continued fractions, and you find it immediately. So these happen to be easy. But in practice, recognizing a number like the leading constant is already not so easy: you might think of squaring it, maybe you think of multiplying by pi, but actually nobody really knows how to do it unless you have a guess. If you think it might be, say, the square root of an integer over some power of pi, you can test that; but given just a decimal, it would be very hard to guess. So the leading number here is simply the square root of 27 over pi, which I believe you can recognize easily, even if you didn't know it; and in this case you can prove it theoretically using these formulas. But that was a nice example where this got used very early, very early for me, I mean; it was one of the first problems. Maybe I'll leave that. So then the next example is really an infinite collection of examples. Example three: let's have some numbers b_n, let's say real, again known explicitly, and decaying in some way, and I want to compute the infinite sum. This is certainly a problem we have a lot in mathematics: you have an infinite sum and you want to compute its value numerically to high precision, after which, if you're lucky, you might be able to recognize it exactly, like one could do here with this formula; you might not, but at least you'll have 20 digits. And of course, it should be fairly slowly convergent, or it's no fun: if it converges like 1 over n factorial, you just take 15 or 20 terms and you already have 20 digits. But if it converges like 1 over n squared and you want 100 digits, then you need 10 to the 100 terms, and no computer will do that. So that's the
question. Well, now it's kind of clear what you have to do. You take A_N, capital N, to be the partial sum, and if this is nicely behaved, it will be c0 plus c1 over N plus dot dot dot, and you use the asymptotic series; so you apply, to the general problem, the asymptotic trick, as I'll call it, because that's what Grünberg and Moree called it. What you get is the limiting value, which is what you want: the infinite sum is of course just c0, to as many digits as you want. You now don't care about the way it converges; you only care about c0. Okay, so obviously that's a whole class of examples, any convergent sum, though it won't always work, so let me make a couple of comments about that. First I'll give an example; I think I'll erase all of this, example two is finished. So here is an example of this in practice. The example I'm going to give is actually a little trickier; it's not quite of that form, and yet the method applies very beautifully. The example is one of my three examples on the course announcement, which is on the wall here at the ICTP as advertising for the course. So this is a problem a colleague of mine, a friend, asked me about a year ago. He wanted to believe that the infinite sum had a certain value, just because he had computed a lot of terms; it didn't have that value at all. But anyway, he asked me if I could compute the following sum, so it's written on the poster, and you've already seen it if you looked at that. Take the binomial coefficient, j plus 4 thirds over j; I assume you all know that the binomial coefficient is defined even if the top entry is not an integer, as long as the one below is. This grows roughly like j to the 4 thirds, so even if I took just the inverse, the sum would converge, like 1 over j to the 4 thirds, well, just barely. But the one he needed was the minus 4 thirds power of this; don't ask me why
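The partial-sum acceleration just described can be sketched on a toy case; this is my own illustration, using zeta of 2 rather than the binomial sum (which, as noted, is trickier because its expansion involves non-integer powers of 1/N). The partial sums A_N of 1/n^2 satisfy A_N = pi^2/6 minus 1/N plus 1/(2N^2) minus dot dot dot, an expansion in 1/N, so the finite-difference projection recovers the limit:

```python
from fractions import Fraction
from math import comb, factorial, pi

def accelerate_sum(b, n0, K):
    """Estimate sum_{n>=1} b(n), assuming the partial sums A_N ~ c0 + c1/N + ...

    The K-th finite difference of N^K * A_N projects out the limit c0.
    """
    A, partial = {}, Fraction(0)
    for n in range(1, n0 + K + 1):
        partial += b(n)
        A[n] = partial
    s = sum((-1) ** (K - k) * comb(K, k) * Fraction(n0 + k) ** K * A[n0 + k]
            for k in range(K + 1))
    return s / factorial(K)

b = lambda n: Fraction(1, n * n)               # the sum is zeta(2) = pi^2 / 6
est = float(accelerate_sum(b, 120, 12))
print(est - pi ** 2 / 6)                       # tiny, versus ~1e-2 for A_120 itself
```

The same few lines, with b replaced by the actual summand computed to high working precision, are essentially the "three-line program" mentioned below; the assumption is only that the partial sums really have an expansion in integer powers of 1/N.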
; actually, I never asked him why. So this converges like j to the minus 16 over 9; 16 over 9 is bigger than 1 but less than 2, so it's even worse than the sum of 1 over n squared. There, the error after N terms would be O of 1 over N; here, the error after N terms is of order N to the minus 7 ninths, so even if N is a thousand, you take a thousand terms, you'd only have two digits or something. So the question is, what is this number? And I can already tell you, if I find it in my notes, what the number is: it is in fact 2.0756 7266 591 dot dot dot. In the course announcement, the exercise was to compute this number, with a reasonable number of terms of the series, not too many terms, to, let's say, 250 digits, just as a benchmark; of course I could have asked for 100 or 500. So, my friend used Mathematica, which is a mistake anyway, and brute force. I did this today but forgot to write down the answer, sorry; if you just take the first 30,000 terms, and I didn't even write down the timing, I especially ran it on my computer, in Pari, which is faster than Mathematica, and forgot to write it down, but I remember it finished, so let's say five minutes, I don't know. If you just do the sum, then you'll get S plus an error, I don't remember exactly, but of the order of 10 to the minus 4, and you can estimate that easily: remember, the error is N to the minus 7 ninths, and 30,000 to the 7 ninths is a few thousand, so you'll get maybe even three digits. And those three digits were very close to a particular number, and my friend asked me, is it that number? Well, it wasn't at all, once you have more digits. So the question is, can you do better? Now, somebody who is in the course, but is not here now because he's away, came to me a few days ago and said, you asked that as an exercise; I said it wasn't meant as an exercise, it was meant as advertising. But he had done the exercise, and he did use Mathematica, and he got the 250 digits, considerably more slowly than this, but still, it was like half
an hour or something. I mean, this thing, I'll tell you in a second: I get 250 digits in a twentieth of a second, using only, I think, 300 values. He didn't use 30,000 values either; he used Euler-Maclaurin, and he thought it was a nice application. So let me say a word about that. Suppose, which is not quite the case here, but it's very close, that the numbers, what did I call them, b_n, themselves have an asymptotic expansion; well, the sum should converge, so let's say b_n is e2 over n squared, plus e3 over n cubed, plus e4 over n to the fourth, plus O of 1 over n to the fifth. Then what you can always do on the computer is compute up to capital N, just take the first N terms, N let's say 300 or 500. Well, of course the partial sum will be exactly e2 times the sum from 1 to N of 1 over n squared, plus e3 times the sum from 1 to N of 1 over n cubed, plus e4 times the sum from 1 to N of 1 over n to the fourth, plus the sum from 1 to N of the difference, b_n minus e2 over n squared minus e3 over n cubed minus e4 over n to the fourth; I'm sorry this is illegible, you just subtract. Now, the first pieces you can do by Euler-Maclaurin to any number of terms: the first one is a constant, which happens to be pi squared over 6, minus a power series in 1 over N, with as many terms as you want; the next is simply zeta of 3 minus as many terms as you want; all of those you know, they have complete asymptotic expansions. And the last piece: since I've subtracted the terms in 1 over n squared, cubed and fourth, the summand is O of 1 over n to the fifth, and the tail of that, summed from N to infinity, is O of 1 over N to the fourth. So you will get the answer up to 1 over N to the fourth, which means that even if N is only 300, you're doing a lot better than 1 over N. The problem is, to do that you have to know what e2, e3 and e4 are. So how do you find the asymptotics? Well, if the summand has a simple closed form, fine; in this case, by the way, remember our thing actually started with exponents 16 ninths, 16 ninths plus 1, and so on, so
here it's shifted; but remember, Euler-Maclaurin works equally well in the shifted case, as I explained. So here it's easy to give the asymptotic expansion of the summand, and if you use Euler-Maclaurin for each piece and subtract them out, that's what he did, very good thinking. But in order to do that you have to get these coefficients; here you can just get them from Stirling's formula. But if you have numbers where all you see are the numbers themselves, then to find the asymptotics of the sum this way, you have to apply the asymptotic trick to the individual terms, and not just for the first coefficient: once you get the first one, you have to get all of these. But then, why do that? Why not just apply the trick to the partial sums right away? So that's why this method, although it gives no more than Euler-Maclaurin in the favorable cases where Euler-Maclaurin applies, often applies when Euler-Maclaurin doesn't; and even when it does, the trick is much easier, because with Euler-Maclaurin you have to do the work twice. So now, I actually haven't used up my time, so I want to go back to the general problem, and as I said, I want to both specialize it and generalize it, and then solve it; I think I can do that in the remaining time, and next time I'll talk about more applications of the more general problem. So here's the special problem, which is a special case. You are given a_n, in the same sense as before; and you know, the question is also, what does "given" mean? In the same sense as before, which remember means that you can compute any given a_n in a reasonable amount of time: if n is not too big, you can compute it for n a hundred, or maybe n a thousand, in a minute or an hour or whatever you want to spend, but not in the lifetime of the universe. But, very important, you can compute to arbitrary precision. In some of my examples, in fact in most of my examples, these numbers were integers, so there was no question of precision: if I had the first 300, they were integers, there was no loss of precision. But sometimes they're real numbers, and then you have to
compute them to high accuracy, because the method, although it's very quickly convergent, will magnify any instabilities; it's a very unstable method in that sense, so you do need high accuracy. So, for instance, for the problem I just wrote down and then stupidly erased, I didn't even finish the story: remember, it was (j plus 4 over 3 over j) to the minus 4 thirds. Brute force with the sum up to 30,000 gives you an error of about 10 to the minus 4, about three digits; the method, with a sum of just 300 terms, gives you more than 250 digits. Usually I would stop at a rounder number, like 250, and that's what I put on the poster. And by the way, the brute-force computation takes, even in Pari, many minutes, maybe 10 minutes, I mean, not hours, but it takes some time, because it takes a certain amount of time to compute even one of these binomial coefficients to the minus 4 thirds power; they're kind of horrible numbers. But the method took less than 0.05 seconds in Pari. It took me a little more than that to write the program, but the program is three lines, so that took probably five minutes; it ran in 0.05 seconds, and if you needed 3,000 digits it would take longer, but a few seconds. So the method really works. As I said, this is in a sense a stupid case of the method, because, as that person pointed out to me, you could just use Euler-Maclaurin here, since you know the asymptotics of the summand by Stirling's formula; but in most of the cases we had, we had no idea how the numbers grew, it was just a sequence of numbers. So that was the general problem, and now I want to state the simpler problem. The simpler problem is, of course, more special: now the a_n are bounded, or rather, they look bounded; you compute a few hundred and they look bounded. Like here: these numbers certainly don't look bounded; if you look at this sequence, 1, 1, 2, 5, up to 1014, of course you can't prove they're not bounded, maybe they never exceed a million, but it sure as hell looks like they're growing, even fast. So if
they look bounded, and you want the limit, like we had with the infinite sum: you want the limit. Oh, I changed notation: there it was capital A, here it was little a; I'll go back to little a for the terms and prefer capital N, and you want the limit, a_infinity. And often that's all you want: in that example, I just wanted the infinite sum; I didn't care how fast it converges once I got it. So specifically, the ansatz, your working assumption, is that a_n looks like simply c0 plus c1 over n plus c2 over n squared plus dot dot dot. It's the same ansatz as before, but now we're assuming that alpha, beta and gamma are 0; so I'm assuming more and I'm asking for less, it's easier on both sides: we want only c0. I'm not asking any more for c1, c2 and all the others, just the limit. But we do want it to high precision, not exactly, but to whatever precision you need, and of course quickly. Quickly means, first, that the algorithm itself shouldn't be complicated, and it's super easy; but also that you don't need too many terms: like in my example, we needed 300 terms instead of 30,000 to get 250 digits instead of 3. So we want very high precision, with a small number of terms as input. And of course you can't put in 3 terms, or 5; as I said, you can't expect to do asymptotics with 3 numbers. You need on the order of 50 or 100, at least, to make it work. So before I explain how to do that, so that's the simpler problem, let me show how to reduce, and it's a very simple reduction, the general problem to the simpler problem. I'll do it in four steps. First of all, suppose alpha equals beta equals gamma equals 0; of course we don't know what they are, but let's imagine that in this ansatz I had no factorial growth, no exponential growth and no power growth, so simply a_n equals c0 plus c1 over n plus dot dot dot. That's the ansatz, that's what we expect, and you want to find all the coefficients. But remember, we're doing the more
complicated general problem, so even though alpha, beta and gamma are 0, we're now asking for c0, c1, c2, all of them. But then it's very easy, because the simpler algorithm gives me c0 to high precision; that's exactly what it does. So now I define, and it's pretty obvious, a_n star to be n times (a_n minus c0): I just subtract c0 from the nth term in my list, I have a list of 200 numbers, I subtract that constant from all of them and multiply by n. Well, now that's supposed to be c1 plus c2 over n plus dot dot dot, and so the simple algorithm now gives me c1, because I have a method that applies exactly to that problem. And now it's pretty clear that I do the next stage: I take a_n star minus c1, which is the same as the original a_n minus the first two terms of its expansion, multiply by n squared, and that's c2 plus dot dot dot, and that will give me c2, et cetera, and I do it as many times as I like. Now, if I do it 100 times, I'll start getting complete nonsense if I only had 100 coefficients at the beginning; it won't work. If I had a thousand; I have done one case where I got more than 200 terms of the expansion, and later there was an exact formula; they were rational numbers, and all 200 were correct. And by the way, you know they're correct when you find them. How would you know that a sequence of rational numbers is correct? If you aren't used to working with numbers, you'd say, how can I know, they come out of the computer. It's very easy: rational numbers that come up in real-life practice have denominators that are highly factored and grow slowly, because however they arise, you're adding and multiplying simple things, so the denominator will only have small prime factors; maybe it divides n factorial, or n factorial squared, or something like that. But your computer doesn't know that. So when you compute the 100th coefficient and you approximate it, you get it as a real number to 100 digits; use
continued fractions, write it as a quotient of two integers, and factor the denominator. Of course, a random number would have huge prime factors, but this one will be small: you find it's 2 to the 135 times 3 to the 87 times 5 to the something. That can't be random; it's 100 percent convincing. If the denominators are very highly factored and don't have any large prime factors, then you're okay, and you see it when you run the computer program: at a certain point one of them does have a big prime factor, and you stop, and either give up or go back with more numbers and more precision and redo it. So in practice, you always know. So this is the first case: if we are in the simple case of a bounded sequence, so no factorial growth, no exponential growth, no polynomial growth, but there is a power series and you want all of it, the simple method gives you c0, and then you just subtract and multiply, repeatedly. That's pretty obvious; actually, everything I'm saying is completely obvious, and the only thing that's fun is the solution of the simple problem. So the next case is alpha equals beta equals 0. So now the ansatz is that a_n equals n to the gamma times (c0 plus c1 over n plus dot dot dot), and you have these numbers, say the first 200. Well, now I can use the sequence that I have numerically: I just divide a_n by a_(n minus 1); I mean, if there's no a_0, never mind, we only care about n bigger than 5 or so, we're interested in the asymptotics. Then, of course, n to the gamma over (n minus 1) to the gamma is (1 minus 1 over n) to the minus gamma, and the quotient is that times (c0 plus c1 over n plus dot dot dot) over (c0 plus c1 over (n minus 1) plus dot dot dot), so it will start 1 plus gamma over n. So if we use the previous case, we get gamma; and if you want the next term, it's not very hard, you expand by the binomial theorem: the next coefficient will be gamma times (gamma plus 1) over 2, and from this other part the c0 cancels, but you'll be left with minus c1 over c0. So if you want to use a bit more of
But it's silly, it's way too complicated. You get gamma by the previous method: you subtract 1, multiply by n, take the limit via the simple problem, and you get gamma. And then you take your new an and divide by n to the gamma, and of course you're back to the previous case. So in each of these steps you peel off one more factor: once you've found it, you just divide by it and you're in the previous case. So the next case is alpha equal to 0 but beta anything. Now the ansatz is that an is beta to the n times n to the gamma times the series, so it's exponentially big. Now it's even easier, because again I just take the quotient: it's simply beta times (1 plus gamma over n plus etcetera), and so the simple problem, with no modification, gives you beta immediately. Then I divide by beta to the n, and then alpha and beta are both 0 and I'm back to this. And finally, the most general case: you again take an over an minus 1. Well, n factorial over (n minus 1) factorial is just n, and the alpha-th power of that is n to the alpha, so the quotient is n to the alpha times beta times (1 plus gamma over n plus etcetera), and now the previous case, the one we already did, where there's a power but no factorial, this will give me gamma, sorry, alpha, I always forget what I'm calling what, this will give me alpha to arbitrary precision. So you proceed successively. In practice, of course, you don't start that way, you start the other way: you first take the quotient and find alpha; then you divide by n factorial to the alpha; then you take the quotient and find beta; then you divide by beta to the n, take the quotient, and find gamma; and then you divide by n to the gamma, you have the power series, and you find the limit. So it's kind of clear that we only have to solve this simpler-sounding problem, where you simply have a single power series as an ansatz for the nth term: you have the numbers up to n equals 200, to high precision.
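A numerical sketch of this cascade, with an invented black box a_n = n! * 2^n * n^(3/2) * (1 + 1/n), worked in logarithms to avoid overflow. The order is slightly rearranged from the spoken description (gamma is read off before beta), because in floating point a small error in beta would otherwise swamp gamma; everything else follows the quotient idea above.

```python
from math import lgamma, log, exp

# hypothetical black-box sequence, given only through its logarithm:
#   a_n = n!^alpha * beta^n * n^gamma * (1 + 1/n)  with alpha=1, beta=2, gamma=3/2
def log_a(n):
    return lgamma(n + 1) + n * log(2) + 1.5 * log(n) + log(1 + 1 / n)

N = 1000
d  = log_a(N) - log_a(N - 1)                   # ~ alpha*log N + log beta + ...
d2 = d - (log_a(N - 1) - log_a(N - 2))         # ~ alpha / N
alpha = round(N * d2)                          # here alpha happens to be an integer

# strip the factorial, then read gamma off a second difference ...
def log_b(n):                                  # log of a_n / n!^alpha
    return log_a(n) - alpha * lgamma(n + 1)
e2 = (log_b(N) - log_b(N - 1)) - (log_b(N - 1) - log_b(N - 2))  # ~ -gamma/N^2
gamma = -N * N * e2

# ... and beta from the first difference with the n^gamma part removed
beta = exp(log_b(N) - log_b(N - 1) - gamma * (log(N) - log(N - 1)))
print(alpha, round(gamma, 2), round(beta, 2))
```

With N = 1000 the recovered gamma and beta are only good to two or three digits; as in the lecture, the point is to identify the gross factors, divide them out, and leave the fine work to the power-series extrapolation.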
Then you can get all of these other numbers. Okay, so now I still have 14 minutes; see, that's the advantage of speaking so fast, I can say more. You can't understand, but I can say more. So I'm sorry if I speak too fast, as I always do; you should stop me, and if you don't stop me I'll continue speaking too fast. So now at last we can give the method. Well, I think I told you the method on the first day, because it's impossible to forget with this mnemonic; it's intentionally slightly provocative, it's not meant to be completely clear what I mean. The method is: you multiply by n to the 8th. If some of you have seen this before, then you know what I mean; if you don't, you can say "I don't know what he's talking about", but you have to admit it's simple; multiplying by n to the 8th is not very complicated. Okay, so what do we actually do? We take an, which, remember, as an ansatz I now assume has a power series in 1 over n as its expansion, and at the risk of seeming pedantic (I don't mind that risk, because I am pedantic) I'm going to write it out all the way to the 10th coefficient. Now you multiply by n to the 8th; call that an tilde, or star, whatever you want. If our ansatz is correct, this will be a polynomial of degree 8. Of course 8 is just the number I chose; it doesn't have to be 8, you could take 11, but you can't take a million or 1, you have to take something that has a little pizzazz in it. So this is a polynomial of degree 8, plus c9 over n to the 1, plus c10 over n squared, and so on. So we now have a polynomial plus a power series in 1 over n, but the first non-trivial coefficient of the power series is c9, much further out than we had originally. Now, if you have any sequence of numbers bn, let me define delta bn to be bn plus 1 minus bn, the first difference; or, if you prefer, bn minus bn minus 1, it makes no difference, it's the same sequence just shifted by 1.
And similarly you can do it again: delta squared of bn is delta of delta bn, so if I do it upwards it'll be bn plus 2, minus 2 bn plus 1, plus bn. You've all seen this: the kth difference is just the alternating sum of bn, bn plus 1, up to bn plus k, with binomial coefficients as coefficients. By the way, when you do this on the computer (I'm not sure it's faster, but the way I programmed it long ago) it's two lines: I just have a function called dif of a sequence, and I can even write the PARI program, you'll see it's not very hard. It takes a vector v and returns a vector of length one less, the length of v minus 1, indexed by n, whose nth entry is v of n plus 1 minus v of n. So that, in PARI, is the definition of the difference. And then, instead of working out the binomial coefficients (I don't compute all the 8 choose i for i from 0 to 8; it's not hard, and PARI knows them by heart, but it takes a little time) I just take the difference and do it again and again, 8 times. So now let me take an tilde, the one I called an star, so I don't have too many stars, and take its 8th difference, and then divide by 8 factorial. Now, when you take the difference of a polynomial that starts c0 n to the 8, then to leading order, just like the derivative, it starts 8 c0 n to the 7; the next difference starts 8 times 7 times c0 n to the 6; and the 8th difference is 8 factorial times c0 times n to the 0. You divide by 8 factorial and you get c0. And the 8th difference of a polynomial of degree 7 is identically 0, just like the first difference of a constant is 0, the second difference of a linear function is 0, and so on. So we now get a very nice expansion: c0, plus 0 over n, plus 0 over n squared, all the way up to and including 0 over n to the 8, because the difference operator kills all those terms. Now we come to c9. Well, for c9 you have to take the 8th difference of 1 over n, but as I told you, to leading order the nth difference is the same as the nth derivative.
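The two-line difference routine he describes, rendered in Python rather than PARI (the GP original is roughly `dif(v) = vector(#v-1, n, v[n+1]-v[n])`), together with a check of the two facts just used: the 8th difference kills any polynomial of degree 7, and applied to n^8 it gives the constant 8 factorial.

```python
from math import factorial

def dif(v):
    # first difference: one entry shorter, entry n is v[n+1] - v[n]
    return [v[i + 1] - v[i] for i in range(len(v) - 1)]

def dif_k(v, k):
    # k-th difference by iterating; no binomial coefficients needed
    for _ in range(k):
        v = dif(v)
    return v

poly7 = [3 * n ** 7 - 5 * n ** 3 + 1 for n in range(1, 30)]
print(dif_k(poly7, 8))    # identically zero: degree 7 is below 8

n8 = [n ** 8 for n in range(1, 30)]
print(set(dif_k(n8, 8)))  # the single value 8! = 40320
```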
So when I do this: the term is c9 over n, so it's c9 times 1 over x; the derivative of 1 over x is minus 1 over x squared, then plus 2 factorial over x cubed, and so on, and since 8 is even I'll get plus 8 factorial over x to the 9th. So if I divide by 8 factorial, as I'm doing, this is exactly 1 over x to the 9th, and the next term of my new expansion is c9 over n to the 9th. And you see what I've done: it's exactly my original series, the c0 is still there, the c9 is still there, but I've killed all the intermediate coefficients; they've simply left. Now, I don't like to be dishonest. I could have just written that down; you would have already thought I'm a little nuts to write out 9 terms after I wrote 2, surely you know how it goes on, but I had a reason to write it out. I don't want to be dishonest: when you take the 8th derivative of 1 over x squared, which is what we're doing for the next term, you get 2 the first time, times 3, and so on up to 9, so you get 9 factorial over x to the 10th; divided by 8 factorial, that's 9. So the next term is not exactly what it was before: you don't quite take the original series and just remove the intermediate terms; this term gets multiplied by 9, the next by 45, which is 9 times 10 over 2, and so on. But n is large; n is going to be 1000 if we had 1000 coefficients. So if n is 1000, the error of simply the original sequence minus the limit is of order 1 over n, that is, of order 10 to the minus 3. But this an tilde minus a infinity (well, a infinity is c0) is going to be O of 1 over n to the 9, which is certainly 10 to the minus 27. And I don't really care about those factors of 9 and 45, because they multiply terms of order 10 to the minus 30 and 10 to the minus 33; n is much bigger than that. Of course the series diverges eventually, but we're truncating.
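Putting the whole recipe together (multiply by n^8, difference 8 times, divide by 8 factorial), here is a sketch in Python rather than his PARI one-liner. The sequence a_n = (1 + 1/n)^n, with limit e and a power series in 1/n, is a stand-in example of my own choosing, and exact rational arithmetic plays the role of his high-precision reals.

```python
from fractions import Fraction
from math import factorial, e

N = 100
a = [Fraction(n + 1, n) ** n for n in range(1, N + 1)]   # a_n -> e

t = [Fraction(n) ** 8 * an for n, an in enumerate(a, start=1)]  # times n^8
for _ in range(8):                                              # 8th difference
    t = [t[i + 1] - t[i] for i in range(len(t) - 1)]
est = t[-1] / factorial(8)                                      # divide by 8!

print(float(est) - e)   # the last entry alone is already extremely close to e
```

The raw sequence at n = 100 is still about 1 percent away from e; after the trick the last entry agrees with e to machine precision, exactly the O(1/n^9) gain described above.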
So what you see is that this completely trivial operation, subtracting the nth term from the (n plus 1)st and repeating that 8 times, takes milliseconds on the computer, and what I get needs no extrapolation at all: I just take the entry at n equals 1000, which, as I already said, is c0 plus O of 10 to the minus 27, and so I get 27 digits. So you just print out the numbers, you see that they agree to 27 digits, they've converged, and you take those digits. And that's the whole method. It's totally trivial to program in any programming language under the sun; it's one line in PARI. Also, I showed this to Henri Cohen, the originator of PARI, many years ago, and he has since built it in, so you can even just call it as a ready-made extrapolation. And it's incredibly useful. I gave you three or four different examples already; actually, since I still have six minutes, I quickly copied down from my home page several other things that I've encountered recently, so let me mention three more pieces of fun, interesting mathematics, from completely different fields, where one had a sequence of numbers to which this could be applied. In all of them I'm either the author or a co-author; that's how I know these examples. So the first is an old paper, from maybe 20 years ago, before I knew the method, by Ralph Kaufmann, Yuri Manin and myself. The title is "Higher Weil-Petersson volumes of moduli spaces of stable n-pointed curves"; I'll just give you the words "stable n-pointed curves". These are rational curves (that's not in the title), so you're looking at the moduli space of genus 0 curves with n marked points. There are moduli spaces Mgn, and a compactification, and there's a volume form called the Weil-Petersson volume form, and you want the volume of this space; it's some number Vn. And there, by the asymptotic method, you would find first of all, for some reason, a (2n) factorial times a constant to the n, exactly like the example earlier, remember, with the d factorial, where the base numerically looked like 1.5 to the d but was really zeta of 2. Here you find with the method (2n) factorial to the power 1.000..., so you remove it; then you get a constant, which you can compute numerically by the method, and the further terms you can also get. So here I can write a constant, minus something which doesn't matter anymore, dot dot dot, plus 0.019 over n squared, and so on; this is what you would get with the numerical method, to as many digits as you want. This is not what happened, because many years ago I hadn't yet thought of the method, and at the time I wasn't thinking about asymptotics. But through some work, in particular by Zograf, who had given a formula for these numbers, a complicated recursive formula leading to a non-linear differential equation, I noticed that the first coefficients Vn were very close to things called the Euler numbers. They weren't equal, but using that I made some change of variables, and it turned out that with a strange non-linear change of variables you could linearize the differential equation and solve it exactly. That was my main contribution: Kaufmann and Manin did the geometric side, which I couldn't have done, and my contribution was to solve the recursion and understand the numerics. So in the paper there is an explicit expression; we actually find this constant, and I've never checked that the numerical method would give that number to many digits, I just looked at the paper. I don't remember everything, but it's in terms of Bessel functions and their derivatives; of course you only need the first derivative, because the second derivative is a combination of the 0th and the 1st by the differential equation. Actually I think it's in terms of zeros, if I remember correctly: you take a zero of a Bessel function and some combination of values of things like the derivative of J1 there, and so on. So that's the classical side of this story. So this is a non-example, in a sense, but it shows where such numbers can come from. This number is not at all easy to recognize; I don't know anybody alive who could take this number and say: aha, that's a combination of the zeros of J0 and J1, or something. That's impossible. But you can find it numerically, and hope that someday somebody will have a theory and can check the theory against it. So that was from my own work of many years ago. As for the others: I have easily 10 papers from recent years where the asymptotics could be used like this, some with moduli spaces, so I'll just mention two, with the co-authors; they're all fun papers, but I mention them only for this purpose. So this paper is from 2018; this is large genus limits. When people do moduli spaces... here it wasn't large genus, the genus was 0 but n was growing, so it was stable pointed curves with a large number of marked points; but this title is "large genus limits", and it's about Siegel-Veech constants. That's something which comes up in dynamics, specifically in the theory of flat surfaces, of which I know a little bit because my co-authors explained it to me; I knew nothing before, it's not my field. The title tells you it's an asymptotic statement, and there were various numbers; this n that I'm going to write is probably 2g minus 3, but n is even and large, and here we actually wanted to know the numbers, and finally we have a proof, which is now written in this paper by Dawei Chen, Martin Möller and myself. The formula is: well, you quickly notice that the numbers alternate, so you just take out the sign, that's not very hard; then you find an n factorial; then again you find a power to the n, which is 2 over pi; but when you work out the whole thing it's convenient to have another 2 over pi (I don't know why there are two), and there's an n to the 5 halves... oh sorry, this isn't a pi, this is an n. So there's a power, which is 2 over pi to the n; then there's a square root of n; and because there's another 2 over pi there's a square root of pi, so it's best to write it like that. And now it starts with 1. When you find such sequences, you have the power series c0 plus c1 over n, and c0 is some very complicated number, like it was there; but if you pull it out, so that it's a constant c0 times a power series starting with 1, then very often those numbers are rational, or at least much simpler. Here too the original number would have had a square root of pi; now there's no square root of pi, but it's still harder to recognize. This part is theoretical, but it would be very easy to recognize the first two coefficients. As I told you, you expect the denominators always to be highly factored, so 24 and 1152, only 2s and 3s in them; but the coefficients are not just rational numbers. Although that is the denominator, it's the denominator in the ring Q of pi, or actually Q of pi squared, because the actual numerators, for instance the second one, is 4 pi to the fourth minus 36 pi squared plus 9. That's a little harder to recognize, but I think we would have recognized it, because you'd have a lot of digits, and one of the things you try is a polynomial in pi; and there's an easy way, which I'll talk about in another course, to recognize things once you have a conjecture about where they might be lying. But here in the end we proved it, so this is a true theorem, and the asymptotic method would give as many terms as you want. And for the last example I won't even give any of the results.
It's several papers. There was a colleague, Boris Dubrovin, at SISSA; he died two years ago; he was one of the great leading mathematicians. And Di Yang, who was at SISSA and then at ICTP; he visited here a lot, and I got to know him here. We wrote, I think, already five papers together; two of them appeared posthumously, after Boris had died, but he was a co-author. I'll just say: many papers on things like Hurwitz numbers, and also moduli spaces and volumes, and large genus limits, and each of them has some asymptotic section, because that's what people really want to know, what happens for very large genus. And each time we proved it; these were theoretical papers. But having run the numerical method and knowing exactly how the answer had to look is very, very helpful in doing the mathematics: you reject certain approaches that couldn't possibly give this answer, and you say it can't be quite like that, it has to be trickier. So I wanted to give you a spectrum of examples, from different parts of mathematics, for this method. You see, the final method is incredibly simple: you multiply by n to the 8th, you take the 8th difference, divide by 8 factorial, and you just read off the first coefficient. If 8 isn't enough, take a larger number; and if it's still not good, you use more terms of the sequence and higher precision. You increase all the parameters, and you get as many digits as you want. And that, I think you'll agree, is so easy you can't really forget it: you multiply by a power of n, difference down to the constant, the series appears before your eyes, and that's it; you just see it, to more and more digits. So that's the asymptotic trick. Actually that was the main point of this whole course; it's so useful that the whole course started from wanting to teach the asymptotic trick, which is itself kind of a triviality, and then there are many other fun things, which I thought of afterwards, that I could work into the course. So I hope you're still
having fun; that, once again, is the only point of this course, it's not for any other purpose. Okay, so if there are questions, you have to ask in the next four minutes, or next time; and if you want to leave, of course you're free to. Yes? Okay; I've been asked, when people in the audience ask a question, to repeat it, because the audience has no microphone. The question is: is there some theory of how to choose these numbers, like the 8? Certainly I don't know of any, and frankly it's not worth bothering, but let me try to answer. I've done this in hundreds of cases, so I've had a lot of experience. You try 8, you try 12; you look at the sequence of numbers, how quickly they converge, whether you gain a lot of digits. If the number is too big, of course, you lose digits each time and it doesn't converge to anything, so you see it. And you don't have to try 8, 9, 10, 11; you try with a jump: you try 8, then you try 18. So typically when I do this, I extrapolate some sequence v, and I put in a parameter h; typically, in PARI language, I write a loop where h goes from 5 to 50 in steps of 5, and then I just print h, a little space, and the extrapolation using that h instead of 8. And the output tells you how well it has converged, because you've used as many terms as your vector has, and at the end you see how big the difference is. Roughly: if the last two values differ in the 30th digit, then they can't both be right, so you have at most 29 digits, and typically you're almost there. So you can see it just from the numerical output. Typically I'll write this, and you see the differences: the first time the error is 10 to the minus 6, then 10 to the minus 12, then 10 to the minus 18; at some point you'll have 10 to the minus 113 (these are not invented numbers, this is quite typical), and the next time you'll only have 10 to the minus 110; it starts getting worse. And you say: well, I want h to be around 40. And then you replace the loop from 5 to 50: you look at that interval and you take the very best h. Your sequence has finite length, that's all you've computed, and given that, you just optimize by hand, and it takes a few seconds. So I've never thought about a theory, and in a sense there can be no theory, because we don't know anything about these numbers; they come from a black box. Remember, that's the assumption that I wrote: you have an oracle, and you can say "dear oracle, what is A83?", and it will tell you A83, but it won't give you a formula. So there is no theory; we have no idea what these numbers are doing, we don't know what the next one will be like, so you can't expect a theory. This is really experimental mathematics. But in practice I write this program with the stupid for-loop over h (I should include it in my standard routines, because I type it about once a week) just to see what the optimal h is. And typically, if it's high precision, it'll be like this: you can go up to maybe 40, and then it starts getting worse; and if I'm happy with 113 digits I stop there, and if I'm not happy I take more. So basically the answer is: I don't know, and I don't worry about it, because, A, I don't know a method, and, B, I think it's impossible; if these are unknown numbers you can't expect one. Of course, if you know that the ansatz really holds, and you know the growth of the Cn (and that you can sometimes do: you compute, say, C2 up to C50 or C30, do their asymptotics, and make an ansatz that they grow like n factorial times something specific), then you can make a theoretical estimate of how many terms you should take. But that takes much more time than just doing it experimentally. So yes and no: if you know a lot about the growth of the coefficients of this asymptotic series, assuming it exists, you can; but in practice you don't care, because the numbers themselves tell you how well they're converging.
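The "stupid for-loop over h" he describes might be sketched like this, in Python with exact rationals instead of PARI reals; the sequence (1 + 1/n)^n with limit e is a stand-in of my own. Comparing successive answers shows how many digits have stabilized; with floating point, or a sequence with faster-growing coefficients, you would also see the turnaround at large h that he uses to pick the best one.

```python
from fractions import Fraction
from math import factorial

def extrapolate(a, h):
    # multiply a_n by n^h, take the h-th difference, divide by h!
    t = [Fraction(n) ** h * an for n, an in enumerate(a, start=1)]
    for _ in range(h):
        t = [t[i + 1] - t[i] for i in range(len(t) - 1)]
    return t[-1] / factorial(h)

a = [Fraction(n + 1, n) ** n for n in range(1, 61)]   # toy black box, limit e

# scan h with a jump and watch how the answers stabilize
ests = {h: extrapolate(a, h) for h in range(4, 21, 4)}
hs = sorted(ests)
for h_prev, h in zip(hs, hs[1:]):
    gap = abs(float(ests[h] - ests[h_prev]))
    print(h_prev, "->", h, f"agree to ~{gap:.0e}")
```

The printed gaps shrink by many orders of magnitude per step here; in practice you would pick the h just before the gaps start growing again.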
Okay, one last thing, which is very important, which I want to say. Suppose it's expensive to compute the numbers, but the formula is not recursive. Like the numbers we had today, the akn: I can compute the one with index a thousand without needing the values from one to a thousand. If I compute all of them up to a thousand, and say the last ones take ten minutes each, then maybe only the last five hundred take ten minutes each, the earlier ones were faster, but we're still talking thousands of minutes; that's days. But if you can compute a single a1000 in a reasonable amount of time, like ten minutes, then what you do is this: you don't compute an for n from one up to a thousand; you compute a1000, a1001, up to, say, a1030, so you compute only thirty or so values, not a thousand. At ten minutes each that's about three hundred minutes, five hours, not a month. And now you do the extrapolation with those. The only difference is that you have to remember where you started: you take 1000 to the 8th times a1000, up to 1030 to the 8th times a1030 (you don't need the others), and you take the difference eight times; you're left with some twenty-two numbers, because each time you lose one. So this is delta to the 8th; but actually I would probably take delta to the 28th, and then I'd be left with just two or three numbers, and if those numbers agree to sixty digits, you just assume that's probably about right. In practice we use this all the time, because very often the definition is not recursive. If it's recursive, you'd have to do the first n; but if there's a closed formula that is very slow to compute (like the thing we had today; that happens all the time), then you just compute some reasonable number of values, not three, but say thirty or fifty.
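The windowed version he describes, computing only a1000 through a1030 and differencing within the window, might look like this; the closed formula a(n) = (1 + 1/n)^n is a cheap stand-in for a genuinely expensive one.

```python
from fractions import Fraction
from math import factorial

def a(n):
    # stand-in for an expensive, non-recursive closed formula
    return Fraction(n + 1, n) ** n

N0 = 1000
window = [Fraction(n) ** 8 * a(n) for n in range(N0, N0 + 31)]  # 31 values only

t = window
for _ in range(8):
    t = [t[i + 1] - t[i] for i in range(len(t) - 1)]
t = [x / factorial(8) for x in t]   # 23 numbers left after 8 differences

# if the last two agree to many digits, that's (probably) the limit
print(float(t[-1]), float(t[-2] - t[-1]))
```

Starting the window at n = 1000 makes the remaining error roughly 1/n^9, around 10^-27, so the last two entries agree to far more digits than double precision can even display.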
Or a hundred; sometimes I use two hundred successive values, but starting at five thousand, so from five thousand to five thousand two hundred. Then with two hundred values you get many, many digits, but you don't have to compute nearly as long as if you had to do all the numbers from the beginning. So that's the other thing I wanted to say: it's very useful to realize that you don't need the entire sequence, only the last stretch of it. Maybe we should stop, because it's very late for the people who want to go home, and the people running the audio should also go home. So, no more questions; we'll stop here, I'll turn off the mic, and I say goodbye.