Very few people here; a couple said they couldn't come, and maybe a couple are coming late. One advantage: there can't really be too many questions, because there aren't so many people. Also, if anybody on Zoom wants to ask questions about anything earlier in the course, now's a good time. I don't think I'll fill up the whole hour and a half today anyway; I expect to end a bit early, so there will be time at the end if you have questions, or favorite problems of your own, if you want to ask whether they'd be suitable subjects for any of the methods I've been talking about.

So this is, as I said and as you know, the last of the twelve lectures of this course. The last three were on a very specific and quite complicated application of the circle method, to the number of representations of an integer as a sum of squares or cubes or higher powers. Today's is a much simpler problem, which was mentioned in the advertisement of the course and on the poster. I had said I didn't know if I would get to it; it depended on time, and on whether I could find my notes, which were either in China or in Germany or in Italy. In fact I never did find the notes, but I found all the computer calculations and spent a couple of days trying to sort them out. I'm still not entirely sure of everything, the software has changed a little and the computers are faster, and I redid many calculations, so the numerical aspects may be a little vague. But it's a question of principle, and the question is: how do you sum numerically?

We have already looked at how to sum several kinds of series numerically, using Euler-Maclaurin, or by taking the partial sums and extrapolating by the method that you now know: multiply by a power of n, take a difference, and extrapolate. Today I'm interested in a very slowly convergent sum. I'll call it H(x); actually I've already made a mistake in my notes, because this time H stands for Hardy and Littlewood, not Hardy and Ramanujan. I don't know the reference where they mention this problem; it was told to me by my friend Hartmut Monien. We actually worked a little on this and on a related problem, which I might mention at the end, but in the end we didn't finish anything jointly; he wrote a paper with his method, and I'll say something about mine today. Hardy and Littlewood gave this as an example of a function that is very easy to define, where it is very easy to see that the sum converges, but which is very hard to compute numerically; the question is how you actually compute it.

I've written it on the board before, and it's extremely simple:

    H(x) = sum over k >= 1 of sin(x/k)/k.

So far as I know, this didn't come up in any application; it's a kind of test function, or at least that's how I understood it. Can one take such a series and actually compute it? Obviously if x is 0.1, or even 1, there's no problem: sin(x/k) = O(x/k) for every x, with a universal constant, so once k is bigger than x this series converges like 1/k^2. In that sense there's no problem. The problem is when x is very large: summing directly, you have to wait for the terms to start getting small, and the terms only get small when k is much bigger than x.
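To make the difficulty concrete, here is a minimal sketch (mine, not from the lecture; the lecture's own experiments were done in PARI/GP) of the naive truncation, in Python with mpmath. The function name H_naive and the sample values are my choices.

```python
from mpmath import mp, mpf, sin

mp.dps = 30  # working precision, in decimal digits

def H_naive(x, K):
    """Truncate H(x) = sum_{k>=1} sin(x/k)/k after K terms."""
    x = mpf(x)
    return sum(sin(x / k) / k for k in range(1, K + 1))

# For x = 10**6 the summands behave like random numbers of size 1/k until
# k is well past x, so these partial sums are nowhere near converged:
for K in (10**3, 10**4, 10**5):
    print(K, H_naive(10**6, K))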
So if x is 1, it's not a big problem; even if K is 20, it's not a big problem. But if x is a million, you already have to take many more than a million terms before the terms even start to get small, and then you still have to extrapolate. And if x is very large (and we'll see very large numbers later), then it's completely hopeless: you can't begin to sum up to the order of x.

So what we want to do is go up to the order of x^lambda terms, plus some kind of a correction. We want to compute a certain number of terms by brute force, "by hand", and then do something to predict how much is missing. And at the very end we would like an accuracy which is at the very least asymptotic to all orders, which means that the error, as x goes to infinity, is smaller than any fixed negative power of x. That's the desideratum, and in theory one of these plays off against the other: if you want more accuracy, you might need a lot more time. What we care about is the time, maybe also the amount of memory, but especially the time. If x is very large and lambda is 1, say, then this will be completely hopeless on the computer; if lambda is small, the direct part is cheap, and it will turn out the correction term can be made very small.

I'm going to describe four different methods of attacking this. It's more about giving ideas of how one solves such a problem; nobody really cares about this particular function. And it turns out in each case that accuracy is not the hard part: once a method works at all and you can get 100 digits, you can also get 500 digits, and it only takes a little bit longer. What's important is the time, that is, how many terms you have to take at the beginning; that's the key parameter. So I'm going to show you, time permitting, four methods. In the first, lambda is approximately 1 (one plus epsilon, really), so that's very bad if x is big. In the second, lambda will be about a half: still slow, but you can go much, much further. In the third, lambda will be about a third. And in the fourth, which is due to Monien (the one he showed me when we worked on the other problem), lambda is again more like a half. So the most efficient numerically is the third way, but Monien's method is very interesting, and by no means an obvious idea, so I want to show you that one too; even the others have a little bit of punch.

So let's start. The first method is Taylor expansion. In principle this would work if I were summing not this particular function sin(x/k)/k but some other function with a nice Taylor expansion at the origin, as sine has; but I'll just stick to this example. Here I have to look at my notes, which are now in TeX rather than handwritten, because there are so many formulas that I didn't want to copy them all. The expansion of sine, as you know, is the alternating sum over odd n of t^n/n!, but here t is x/k, and then there's an extra 1/k. So, interchanging the sums,

    H(x) = sum over n >= 1 of chi_{-4}(n) * x^n/n! * (sum over k >= 1 of 1/k^(n+1)),

where chi_{-4} is the character mod 4: remember chi_{-4}(n) is 0 if n is even, and (-1)^((n-1)/2) if n is odd.
So that's the same sum. The x^n doesn't change, and the inner sum over k is zeta(n+1). Now, because n is odd, n+1 is even, so this is a Bernoulli number times a power of pi; and luckily the sum starts at n = 1, so the first zeta value is zeta(2) (zeta(0) or zeta(1) wouldn't be such a good idea). Writing n = 2m-1,

    H(x) = sum over m >= 1 of (-1)^(m-1) * zeta(2m) * x^(2m-1)/(2m-1)!,

and since zeta(2m) = |B_{2m}| (2 pi)^(2m) / (2 (2m)!), the alternating sign of the Bernoulli numbers conveniently uses up that stupid sign, and everything can be written with B_{2m} and powers of 2 pi x. This is an exact formula; at every stage everything was absolutely convergent. And it's obviously rapidly convergent, because zeta(2m) is O(1), so the terms are like x^(2m-1)/(2m-1)!. But, as you know and as I've discussed before, if you want to compute, for instance, e^x for x = -1000 by the alternating sum of (-1000)^n/n!, you need very, very high working precision: you need a lot of terms, and the biggest terms are huge and they cancel; the final number is very small compared to the biggest terms. It's a bit the same here. H(x) is not very small, but it's much smaller than the biggest terms, so if you use this directly when x is large, it's not a good idea. However, it is always correct. I'll say a few words about the numerics in a minute, but first I want to tell you a general idea that's extremely useful.

Imagine that you have some sum of a_n and you want to know what it is, but each a_n is individually defined as a sum over m of b_{nm}. In other words, you have a table of numbers b_{nm}, and you happen to know how to sum each row in closed form, getting a_n. And perhaps you can also sum each column in closed form, getting c_m, so the double sum is also the sum of the c_m. So sometimes, if you're lucky, you can sum both the rows and the columns of the infinite table, and then you get a non-trivial identity: the original sum you want equals the new sum. That's exactly what I did here: I expanded the sine as an infinite sum, interchanged, and did the inner sum, but really I have a double sum. And here is a thing I realized because I do so many numerical calculations (it's obvious once you've seen it, and I've noticed other people often do what I used to do): for a long time I would just say, use this sum, or use that sum. If one is better you've gained something; if it's worse you've lost something. But you shouldn't do either one alone, because all of these numbers b_{nm} are computable; you just don't want to have to compute infinitely many of them.
So one way: the row sums a_1, a_2, a_3, ... are known in closed form. The column sums c_1, c_2, c_3, ... are also known, if you sum all the way to infinity. You could say: if I want the whole sum of everything, I take all the c's, or I take all the a's. But what you should actually do is take all b_{nm} with n <= N or m <= M: the first N rows and the first M columns, both, with N and M chosen conveniently. If I just took the first 10 rows and stopped, I'd be throwing away all the terms to the right in every later row, and some of them, way down, may not be insignificant; I don't want to do that. So I don't make a choice between rows and columns, I take both, say these three rows and these four columns, and now I say: what I've got is the whole sum, except that I'm missing the part beyond both cutoffs, and I've counted the overlap doubly. So the price you pay is that you have to remove the sum where both things are true: if n <= N and m <= M, that term has been counted twice. Say you've kept a hundred rows and a hundred columns: a hundred terms each is nothing on the computer, but a hundred times a hundred is ten thousand, which is already a hundred times slower. So the price is that instead of single sums you've also got a double sum. What you gain is that the actual error now consists only of the terms that are beyond both N and M, and that's often very, very much smaller. It's often the case that each row converges fairly slowly, but the main contribution of each discarded row sits in its early entries, and those you are keeping anyway through the column sums. And sometimes the two numbers need not both be big: one might be only five and the other two hundred, and then the rectangle doesn't matter. But if you have two big numbers, then, as I say, you pay a price.
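In symbols, the bookkeeping just described is the following identity (my transcription of what was on the board):

```latex
\sum_{n,m\ge 1} b_{nm}
= \sum_{n\le N} a_n \;+\; \sum_{m\le M} c_m
\;-\; \sum_{n\le N}\,\sum_{m\le M} b_{nm}
\;+\; \sum_{n>N}\,\sum_{m>M} b_{nm},
```

where the first two sums are the closed-form row and column sums, the third removes the doubly counted N-by-M rectangle, and the last is the error you accept.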
Now back to our case. Here, as I already said, the zeta series is an exact formula; there's nothing wrong with it, and it converges for any x. But so does the original series, so in that sense we've gained nothing. You could of course look at the speeds of convergence, which are completely different: the original is always like 1/k^2, but the implied constant is horrible, it involves x; the new one has a factorial, like 1/n!. But now we can combine them, and that's exactly the trick. If I stop the zeta series at N, I could call that the Nth approximation; and with two cutoffs K and N I make the approximation where I take the original sum, truncated:

    H_K(x) = sum from k = 1 to K of sin(x/k)/k.

This is the piece that will be used in every method: you simply truncate the sum after K terms. If K is extremely large you of course get the right answer, but it has to be so large that it's useless on its own. So we take H_K(x) and then, just as before, we add the Taylor terms, with the index doubled so as not to waste the terms that are zero. The n! has become (2n-1)!, and I take zeta(2n), which you can compute either with the Bernoulli number or as a zeta value, but with the beginning of the zeta sum taken away. Concretely,

    H(x) ~ H_K(x) + sum from n = 1 to N of (-1)^(n-1) * x^(2n-1)/(2n-1)! * (zeta(2n) - sum from k = 1 to K of 1/k^(2n)),

which would be exact if N went to infinity. In the picture of the table, we're taking the first K columns and the first N rows and subtracting the N times K terms counted doubly.

This is much better in principle, and now you can make an actual estimate. I'll do it quickly, because who cares, but still. The error is the sum over n > N and k > K; drop all the signs and estimate absolute values. The sum over k > K of 1/k^(2n) is very easy, essentially K^(1-2n)/(2n-1), like a geometric series; it's a little worse, you lose the factor 2n-1, but that's nothing, the big factor is the (2n-1)!. Together with the x^(2n-1) already there, the nth dropped term is about (x/K)^(2n-1) / ((2n-1) * (2n-1)!). Then you can estimate the remaining sum over n > N rather easily, with harmless extra factors; the details don't matter at all, and I only wrote them out because it's not very long. Now use Stirling: (2n-1)! behaves like (2n/e)^(2n), and you want 2N/e to be comparable with x/K, which means roughly that K*N is about e*x/2. So that's the bad news; well, it's not really bad news, it's just the count. K*N is the number of terms in the rectangle you must compute, so that's going to be O(x) terms, independently of what K and N individually are, whether K is very small and N very big or vice versa: this method always contains a rectangle with K*N terms, and K*N, as we just saw from Stirling, has to be roughly bigger than a constant times x. That's why I said this method is O(x), and if x is large it'll be rather hopeless. If you do assume that bound, then in this domain what you find for the error is roughly (e*x/(2*K*N))^(2N+1), times a stupid power N^(3/2) and some junk constant, if I did it correctly. So if that base is less than one, the error is exponentially small, which is great: if N is big, it's exponentially small. But that's the error, and we don't want just a small error; I told you a small error is easy. We want fast time, and the time here is very bad because it's O(x).

So let me give you just a little bit of numerics on how this works. This is the worst method, but it's the naive method, it works perfectly well, and you get an exact formula. And I did want to make this comment about double sums, because it's useful lots and lots of times as a general thing. Once you've seen it,
it's obvious, and many people of course know it, I'm not saying it's new; but many people don't think of it, and just take either this sum or that sum, rather than taking both and subtracting the overlap. So, here are numerical values. If x = 1 (remember, the function is H(x) = sum from k = 1 to infinity of sin(x/k)/k; I should write the definition once again) and I want H(1) to 500 decimals, then there's a playoff, as I already said. You can take K = 0, the original zeta-series method, or K = 1, which is already quite good: when you subtract just the single term 1 from zeta(2n), what remains is already of size 2^(-2n), a lot better, and that costs very little. So I took a couple of sample timings with very small K, each time with the number N of terms needed to get 500 digits. With K = 0 I needed N = 105; with K = 2 I needed fewer, but as you see the product K*N is actually getting bigger, so that part is worse. On the other hand, the calculation times are essentially identical, about a tenth of a second in all three cases. So that's of course easy: with any method you'll be able to compute H(1).

But if you take x to be a thousand, that's already a little less stupid. Here are K and N, asking first for 50 digits, then a hundred, then a thousand, so I have different desiderata. For 50 digits, if my table is right: with K = 0 I now need N = 1415 terms, a lot more; with K = 1, only about half; with K = 2, 506; with K = 8, 200; and with K = 23, only 100. So indeed a bigger K gives a smaller N, but in fact the time is always about the same, roughly 0.4 seconds whichever of these you take. For a hundred decimals the corresponding numbers are similar (a hundred isn't all that much bigger than fifty; I don't know why I took two numbers that are so close): N = 1786 at K = 0, products K*N around 6400, and it now takes 0.5 seconds, almost the same. And when you want a thousand digits it now takes three seconds, so about eight times as long; the choices that worked, again just to give a feeling: K = 0 needs more than 2000 terms; K = 1 needs 1479; K = 2 needs 1190; K = 7 needs 800; and K = 60 needs only 400, but to make up for it the 60 is a lot bigger. And all of these take about the same time.

So the moral of the story is that there's not much in it. You don't really gain anything by fooling around with K and N, and since everything takes the same time, you can just take K = 0, or maybe 1 or 2, a very small K, just to remove the leading term. That was slightly instructive, not very exciting. We could do a thousand digits in three seconds without trifling, and a slightly bigger number, say ten thousand, would still work, but you wouldn't want to go to a very big number. So that, as I said, is the lambda = 1 method: we saw that the number of terms K*N has to be at least a constant times x, so it's really O(x) calculating time; even assuming the individual calculations are instantaneous, there are still O(x) of them, so it takes at least O(x) time.
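A minimal sketch of this combined method, under my reading of the board formula; H_method1, the precision, and the sample (K, N) pairs are my choices. Note the very high working precision: the Taylor terms reach roughly 10^437 at x = 1000 before cancelling.

```python
from mpmath import mp, mpf, sin, zeta, factorial

mp.dps = 520  # enough to survive the ~437 digits of cancellation at x = 1000

def H_method1(x, K, N):
    """Method 1 (lambda ~ 1): H_K(x) plus N Taylor terms with depleted zeta values,
       H(x) ~ sum_{k<=K} sin(x/k)/k
            + sum_{n<=N} (-1)^(n-1) x^(2n-1)/(2n-1)! (zeta(2n) - sum_{k<=K} k^(-2n))."""
    x = mpf(x)
    head = sum(sin(x / k) / k for k in range(1, K + 1))
    tail = mpf(0)
    for n in range(1, N + 1):
        zt = zeta(2 * n) - sum(mpf(k) ** (-2 * n) for k in range(1, K + 1))
        tail += (-1) ** (n - 1) * x ** (2 * n - 1) / factorial(2 * n - 1) * zt
    return head + tail

# Two different (K, N) splittings should agree (to ~50 digits here):
print(H_method1(1000, 2, 520))
print(H_method1(1000, 8, 210))
```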
Now, the second method is the one that I discussed at the beginning of the course in detail, Euler-Maclaurin, though it looks a little different here from the way it looks in general. What we're going to do is write

    H(x) - H_K(x) = sum from k = K+1 to infinity of sin(x/k)/k

(remember the abbreviation: H_K(x) is just the sum up to K), and by Euler-Maclaurin that will be, to a very high degree of approximation, the integral of that function plus derivative corrections. The integral is a sine integral: substituting u = x/t,

    integral from K to infinity of sin(x/t) dt/t = integral from 0 to x/K of sin(u)/u du = Si(x/K),

where Si(t), the sine integral, is a standard function. I'll say in a minute how you calculate it; again, if t is very large you don't want to do the integral directly, because that's inefficient, but there are ways, so take Si to be known, something you can compute very quickly. For the corrections, the sine is the imaginary part of e^(ix/k), and it's easy to work with that. Remember what Euler-Maclaurin tells you: for any reasonable function f,

    sum over k > K of f(k) ~ integral from K to infinity of f(t) dt - f(K)/2 - sum over j >= 1 of B_{2j}/(2j)! * f^(2j-1)(K)

(I won't get the signs right if I don't look, but that's the shape). Here we know the expansions of everything, and when you work it out you get the following mess: a double sum, but each term is elementary, with coefficients like (r+s)!/(r! * (s!)^2 * (r+s+1)) times the Bernoulli number of index r+s+1, times e^(ix/K), which gives you sines or cosines, because depending on whether s is even or odd the power of i contributes plus or minus 1 or plus or minus i. This is an infinite sum, and it in fact diverges; you don't want to go all the way to infinity. It is an asymptotic series.

So first of all, how do you compute Si(t)? That's not so interesting; I'll just say it quickly. The value at infinity is pi/2, and what's left comes by successive integration by parts, the standard asymptotic series:

    Si(t) ~ pi/2 - cos(t) * (1/t - 2!/t^3 + 4!/t^5 - ...) - sin(t) * (1/t^2 - 3!/t^4 + ...).

It's divergent, of course, because the coefficients are 0!, 1!, 2!, 3!, ..., nearly alternating with period 4; the coefficient is a factorial, as you can see. But it's an asymptotic series, and if t is large, you stop at the optimal truncation, as I discussed earlier, and
you get a very good value. But there's another trick: if you're using PARI, as I do, then this tail is an incomplete gamma function, and PARI (and presumably also Mathematica and Maple and lots of programs) has that preprogrammed. You just call it, and it's intelligently programmed, at least in PARI: even if t is huge, it does it the right way and gives you the right answer to whatever precision you're working at. So we can think of Si as simply a known number; the Euler-Maclaurin series is asymptotic, and each term is easy. If I take five terms of the correction, that's a very simple correction; if I take 500 terms, it takes a little bit of time. But of course the main thing is how far I have to go in k to make this good, because the slow part is the direct sum; the rest is fast, and the Si part is very fast.

Now the numerics; I have a lot of numerical examples, and I'll skip most of the details. In my example I took x = 10^10, much, much bigger than before, where we were looking at x = 10^3. In my various calculations I took K to be 10^5, 10^6, 10^7, and all the way up to 10^8. At 150 digits, summing the terms up to 10^5 took 0.7 seconds; then 7 seconds, 70, and 700, that is, 12 minutes. It's just proportional to K, because each calculation is the same, a sine of some essentially random number. But then I wanted to go to much higher precision: for a thousand digits it already takes more time, 14 seconds for K = 10^5 and 140 for 10^6. Of course I could have done one more, but it turns out you don't need to: you only have to go to K slightly bigger than the square root of x, and in fact it turns out that even 10^5 works fine. I didn't believe that when I started; I used to take a much bigger K, because originally I was taking only five or eight terms of this correction, and then I was getting hundreds of digits with a very large K, by computing, for instance, a hundred million terms of the original sum. But here
I'm only computing a hundred thousand terms, and that costs almost nothing, the 14 seconds at high precision. And now take enough correction terms: it turns out that with K = 10^5, if I took 400 extra terms (400 pairs (r, s); I just ordered them in some way and took the first 400), that took another 12 seconds, and 500 extra terms took 26 seconds. So adding that to the 14 seconds, we're talking less than a minute, and this already gave 320 digits correctly; the 500-term version gave more than 400 digits, and you can get more if you want. So even with this small K, a hundred thousand, it works. If you take K to be a million, then the direct part takes 140 seconds, but the correction part takes only a few seconds, like eight, and you get the same accuracy. So there's a certain interplay, but it's actually more efficient in total time to take a smaller K and do the correction more carefully. You're perfectly happy here with K roughly the square root of x, and here we easily got 400 digits; a thousand digits would also work, and one could take x bigger than 10^10. But you couldn't take it very much bigger, because this method still costs the square root of x; the first method was like x itself, and so there x had to be very much smaller.
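The exact (r, s) coefficients went by too quickly for me to transcribe with confidence, so this sketch applies the textbook Euler-Maclaurin correction directly, with mpmath's numerical differentiation standing in for the worked-out derivative formulas; H_method2 and all parameter choices are mine, a sketch rather than the lecture's implementation.

```python
from mpmath import mp, mpf, sin, si, diff, bernoulli, factorial

mp.dps = 60

def f(t, x):
    return sin(x / t) / t

def H_method2(x, K, M=6):
    """Method 2 (lambda ~ 1/2): direct sum to K, Euler-Maclaurin for the tail,
       sum_{k>K} f(k) = Si(x/K) - f(K)/2 - sum_{j<=M} B_{2j}/(2j)! f^(2j-1)(K),
       using that the integral of sin(x/t)/t from K to infinity is Si(x/K)."""
    x = mpf(x)
    head = sum(f(k, x) for k in range(1, K + 1))
    tail = si(x / K) - f(K, x) / 2
    for j in range(1, M + 1):
        deriv = diff(lambda t: f(t, x), K, 2 * j - 1)  # numerical (2j-1)st derivative
        tail -= bernoulli(2 * j) / factorial(2 * j) * deriv
    return head + tail

# x = 10**6 with K near sqrt(x); two K values as a consistency check:
print(H_method2(10**6, 2000))
print(H_method2(10**6, 3000))
```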
Okay, so those are the first two methods, and a little bit about the numerics. Now comes the third method, which again would be generic for any problem of this sort, and it's about x to the one third: that's roughly how large you have to take K in the third method. So, the third method (I'll start a new column) is sums over short intervals.

Remember what we're dealing with. At the beginning, the summand is essentially a random number over k: the numerator is sin(x/k), and if x is 10^10 and you divide by 1, 2, 3 and reduce modulo 2 pi, you get completely random numbers. So there's no kind of regular behavior at the beginning; the small-k values are just jumping around, of order 1/k, and that's all you can say about them. But if you go very, very, very far out, the summand eventually becomes a smooth function of k. What we did before was stop at K and estimate the whole sum from K to infinity in one fell swoop ("one swell foop"? I don't know what the correct English for that is; it's kind of slang), in one step, using Euler-Maclaurin. But what if we did a somewhat more intelligent Euler-Maclaurin-type step on much shorter intervals, and broke up the remaining piece? So we're going to go up to some big K, but hoping for only about x to the one third, much smaller than before. That beginning part has to be done by hand, because the terms are completely random, and you can't predict random numbers. But then we divide the rest into lots of shorter intervals, on which the summand is already reasonably smooth, make an approximation of the terms there, and take enough intervals to get out far enough that it works.

Again, I'm only going to sketch this; the details are in no case very interesting, and if somebody really cares and has a problem they want this for, they can of course ask me by email. I'm just trying to illustrate different methods and give a little bit of the numerical results. The idea is this. In each short interval (of course it's not really short, because I can already tell you x is going to be 10^60 in this example, so even x^(1/3) is 10^20... sorry, that can't be right, I can't have gone up to 10^20 by hand, so I don't know quite what I did; now I'm a little worried about the one third, because I know the target was 10^60, but I think I only did the short-interval part there, with the direct part for a somewhat smaller x; I could certainly do something like 10^30 in a reasonable time on the computer). Anyway: in a short interval you estimate the sum by some kind of expansion, and it becomes a combination of pure powers times a geometric term, terms like n^j * a^n over the interval, small perturbations of a geometric sequence, and each of those can be summed in closed form. That gives you the exact contribution of that piece to the whole interval: you don't have to sum anything term by term, you essentially just evaluate at the beginning and the end. So you estimate the sum as a combination of generalized geometric series (sorry, not arithmetic progressions, excuse me, geometric series).

So let me define: for j >= 0,

    G_j(a, N) = sum from n = 0 to N-1 of n^j * a^n.

In practice, a is going to have absolute value 1, or be very near 1. Let's start with G_0(a, N) = 1 + a + ... + a^(N-1), which of course we all learned in school: assuming a isn't exactly 1, it's (a^N - 1)/(a - 1). So this is an exact formula; no matter how big your interval, you can evaluate it
in closed form. Similarly G_1(a, N) = 0 + a + 2a^2 + ... + (N-1)a^(N-1), which we also learned in school how to do. I'll write out the first three, this one, that one, and the next, because it's not quite obvious how to proceed. Up to constants depending on a, each is a linear combination of a^N (times powers of N) and 1. So

    G_1(a, N) = N a^N/(a - 1) - a (a^N - 1)/(a - 1)^2,

with a term with N and a term without any N; and similarly

    G_2(a, N) = N^2 a^N/(a - 1) - 2 N a^(N+1)/(a - 1)^2 + a(a + 1)(a^N - 1)/(a - 1)^3,

with a term with N^2, then (binomial coefficient 2) a term with N, and a last term with no N any more, which is more complicated, a(a+1)/(a-1)^3 times the expected a^N - 1. If you look at these three formulas, it's clear that you could find the formula for the third or the fourth power, each individual one, but it's a pain in the neck. So when I did this, I worked out (I'm sure this is standard, but it's kind of cute) how to get these all at once. On the computer we're going to want, say, the first 20 of these, the sums with n^0, n^1, up to n^19; I'm always going to want the whole vector, and it turns out to be more efficient to compute them all together rather than one at a time. So I go up to some index J and use the following matrix; I'll write the 4-by-4 case, because the general one is a little confusing on the board. It's a triangular matrix: the first row is (a-1, 0, 0, 0); the next is (a, a-1, 0, 0); the next (a, 2a, a-1, 0); the next (a, 3a, 3a, a-1). It's pretty obvious: the row for j has entries binomial(j, i) times a for i < j, and in the last, diagonal entry we subtract one from a; so the row for J is (a, C(J,1) a, C(J,2) a, ..., a-1). Oh, sorry, and I have to say what this matrix gets multiplied by: a vector.
The matrix multiplies the column vector (G_0, G_1, ..., G_J), and the product is the column (a^N - 1, N a^N, N^2 a^N, ..., N^J a^N). (To see this, telescope: the sum over n < N of ((n+1)^j a^(n+1) - n^j a^n) is N^j a^N, or a^N - 1 for j = 0, and expanding (n+1)^j by the binomial theorem gives exactly those rows.) Now, if you look at the bottom row you might say: that's ridiculous, why write the whole thing, it's just a simple sum. But of course it's not: there's an inverse. You have to take that matrix and invert it, which is not very hard because it's triangular, but when you do it you get more and more complicated terms. Of course, your computer will trivially invert this matrix. And as I said, when you actually want to do it, you decide in advance how big your J is, like 20, and since you're going to have a lot of different a's, you don't solve the system numerically each time: you compute the inverse once, in closed form, for your given J, as a combination of these a^N's, and once you've done that you can instantly substitute any particular a into the formula.

Okay, so that's how you get the G_j, and now we have to come to the point: how does that help us? As I said, we're talking about a short interval (which of course means a long interval, but relatively short). Here's the lemma. For any K > N > 0 (I'm not even sure I need K bigger than N, but probably I do) and any x, take the sum of N terms starting at K, where K will be very large and N large but not nearly as large. Then, exactly, and with the signs as I reconstruct them,

    sum from n = 0 to N-1 of e^(ix/(K+n)) / (K+n)
        = e^(ix/K) * sum over p, l >= 0 of (-1)^l * C(p+l, l) * (ix)^p / (p! * K^(3p+l+1)) * G_{2p+l}(a, N),

with a = e^(-ix/K^2); at the end you take the imaginary part to get our sum with the sines. The index of G is j = 2p + l, so if p and l run up to ten or so, the indices stay reasonably small. And the beauty is that I won't care how large N is, even if it's a trillion, because each G is given in closed form: whether N is one or a thousand, you compute it instantly, you aren't actually adding up N terms. The series in p and l converges, if everything I did is correct, so this works very rapidly; once p and l and N are fixed, each term takes a small amount of time. (In fact I'm not quite convinced by the numbers in what I did; those were days of computation, I didn't have time this week to redo them, so I have to trust them and hope everything's correct. In the examples I'll give later, I don't know why I took quite such a small N and quite such a big K; it seems like not such a good idea, but maybe it is.)
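Here is a small sketch of the G_j computation just described. Rather than forming the inverse matrix, it does forward substitution through the same triangular system, which is equivalent; G_vector is my name, and the brute-force comparison at the end just validates the recurrence.

```python
from mpmath import mp, mpf, mpc, exp, binomial

mp.dps = 50

def G_vector(a, N, J):
    """G_j(a, N) = sum_{n<N} n^j a^n for j = 0..J, via the triangular system
       (a-1) G_j + sum_{i<j} C(j,i) a G_i = N^j a^N   (a^N - 1 for j = 0)."""
    aN = a ** N
    G = []
    for j in range(J + 1):
        rhs = (aN - 1) if j == 0 else mpf(N) ** j * aN
        rhs -= sum(binomial(j, i) * a * G[i] for i in range(j))
        G.append(rhs / (a - 1))
    return G

# Brute-force check at modest N, with a on the unit circle:
a = exp(mpc(0, 1) * mpf('0.123'))
brute = [sum(mpf(n) ** j * a ** n for n in range(1000)) for j in range(4)]
print([abs(b - g) for b, g in zip(brute, G_vector(a, 1000, 3))])
```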
So you take this relatively small number of (p, l) terms, evaluate them with the exact closed forms that you get by inverting that matrix, and that's the answer. Now let me quickly prove it, since it's not terribly hard and not entirely obvious; the method would work once again for any other function, it has nothing much to do with the sine. The proof: for each n in the range 0 <= n <= N-1, I take e^(ix/(K+n))/(K+n); at the end one takes the imaginary part, in either order, and sums over n between 0 and N-1. The idea is that n is going to be much smaller than K (remember K is much bigger than capital N, and therefore much bigger than little n), so to leading order 1/(K+n) is 1/K, and we peel off corrections. The exact statement is

    1/(K+n) = 1/K - n/K^2 + n^2/(K^2 (K+n)),

which you check directly. Using this in the exponent, x/(K+n) = x/K - x n/K^2 + x n^2/(K^2 (K+n)), so

    e^(ix/(K+n)) = e^(ix/K) * a^n * e^(i x n^2/(K^2 (K+n))), with a = e^(-ix/K^2).

The first factor is a constant; it just comes out. The second is an nth power, a geometric term: that's where I'm going to use my lemma, because you can sum it in closed form. The third is the correction, and the problem is that I can't do the trick again: the next step would want to sum something to the n^2, and nobody knows how to do that in closed form. But notice the correction only involves roughly n^2 over K^3: it only matters once n^2 becomes comparable with K^3, and that's where the three comes in. So that's the little trick; this is the key idea of the proof (and I don't want to remove the lemma from the board, so I'd better not cross anything out). Now expand: the e^(ix/K) comes out, the 1/K comes out, the a^n stays, and the final exponential I write out as an exponential series:

    e^(i x n^2/(K^2 (K+n))) = sum over p >= 0 of (1/p!) * (i x n^2)^p / (K^2 (K+n))^p,

and since K^2 (K+n) is K^3 times (1 + n/K), I get my K^3 (that's why K^3 gets compared with x), together with a factor (1 + n/K)^(-p). There's also the prefactor 1/(K+n) = (1/K)(1 + n/K)^(-1), so all together (1 + n/K) to the power minus p minus one.
But that's no more expensive: you just expand

    (1 + n/K)^(-(p+1)) = sum over l >= 0 of C(p+l, l) * (-n/K)^l

(I hope I've got the signs right), and now we're nearly there. When you put it all together you get the sum over pairs (p, l) with the coefficient I claimed, the power K^(-(3p+l+1)), and what's left is a pure sum of n^(2p+l) * a^n, which is exactly G_{2p+l}(a, N). That's what gives these interesting G's, and that's the method.

There's some discussion in my notes about how to choose things optimally, and I'm not even sure I really believe it, so I'll skip it; roughly it will come out x to the one third, I hope. In my numerical example (and here, as I said, I'm genuinely puzzled about exactly what I did), the target was x = 10^60, and a sample interval was the one of length 10^6 starting at 10^30: the interval from 10^30 up to 10^30 + 10^6, which is very, very small compared to 10^30. The numbers are hardly changing across it, which is why I can approximate a smooth function by its constant term, linear term, and quadratic term without losing too much.

So, for these values: the sum over this interval is approximately 2.6 * 10^(-31). The total contribution of the interval is very small anyway; remember, each term is already of size about 1/k = 10^(-30), with essentially random signs, so on the average they tend to cancel. The individual terms are actually of the same order as the total, not a millionth of it. Now compare the two ways of computing it. Directly: this particular case is only a million terms, so PARI can do it perfectly easily (this was several years ago, so it would probably be much faster now), and simply adding up the terms took two and a half minutes. That's not huge, but remember we're only taking an interval of length a million, and we'd need a zillion of those to get out to 10^60. But if you take the terms of the lemma, just the ones with 2p + l up to 10, which is something like 50 terms, then it took (and this was ten years ago, so now it would be faster) eight milliseconds instead of two and a half minutes. That's a hell of a gain, and for this number, which is of size 10^(-31), it gave 110 digits of accuracy. So we aren't losing anything: in other words, this method over short intervals is extremely efficient and gives very, very accurate answers in a very, very short time. But the price, once again, is that you need a lot of short intervals; and of course you still need your original K to be large, because all of this depended on K being bigger than N, and preferably quite a bit bigger.
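A sketch of the lemma in action, reusing G_vector from the sketch above. The signs and the value a = e^(-ix/K^2) are my reconstruction from the proof, so the brute-force comparison doubles as a check; the scale (K = 10^5, N = 500) is chosen small enough that the direct sum is feasible, far below the 10^30 of the lecture's example.

```python
from mpmath import mp, mpf, mpc, exp, sin, binomial, factorial, im

mp.dps = 50

def interval_sum_lemma(x, K, N, pmax=10, lmax=10):
    """sum_{n=0}^{N-1} sin(x/(K+n))/(K+n) via the short-interval lemma:
       Im[ e^{ix/K} sum_{p,l} (-1)^l C(p+l,l) (ix)^p / (p! K^{3p+l+1}) G_{2p+l}(a,N) ],
       with a = e^{-ix/K^2}."""
    x = mpf(x)
    a = exp(mpc(0, -1) * x / K ** 2)
    G = G_vector(a, N, 2 * pmax + lmax)
    s = mpc(0)
    for p in range(pmax + 1):
        for l in range(lmax + 1):
            s += ((-1) ** l * binomial(p + l, l) * (mpc(0, 1) * x) ** p
                  / factorial(p) / mpf(K) ** (3 * p + l + 1) * G[2 * p + l])
    return im(exp(mpc(0, 1) * x / K) * s)

x, K, N = mpf(10) ** 9, 10 ** 5, 500
print(sum(sin(x / (K + n)) / (K + n) for n in range(N)))  # direct
print(interval_sum_lemma(x, K, N))                        # should agree closely
```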
Well, the lemma itself is exact; the interval choice doesn't matter for that. And hold on, I'm about to say something I haven't said yet, so I can't "remind" you of it. All I've said so far is: the formula is exact so long as N is less than K, and I gave the example of a single interval, with x = 10^60, from 10^30 to 10^30 + 10^6, where the sum by direct computation is about 10^(-31) and takes about three minutes, while the first 50 or so terms of the lemma take less than a hundredth of a second and give more than a hundred digits, and I could take a few more terms to get more. So computing a single interval is very fast, and the demonstration is convincing. But now I want the whole sum, so I have to say how I choose the intervals; that's the last part, and indeed I still absolutely owe you that.

So, in practice, if I can trust my notes: the 10^60 was only an example of how you would do a single interval; I cannot pretend that I actually computed H(10^60), because the direct part up to even 10^20, the cube root, would be a year or two on the computer, out of the question, though you probably could do a target like 10^30. Still, the example correctly shows how a single interval works. To calculate H(x) where x is large: we first calculate directly

    H_{K_0}(x) = sum from k = 1 to K_0 of sin(x/k)/k,

where K_0 is, let's say, the integer part of x^lambda; you pick lambda equal to one third, or slightly bigger than a third, which is what I decided at the end. (Whether I was correct I tried to check carefully at the time, but I wasn't able to reconstruct the calculation; I don't have my handwritten notes, only the computations, and rechecking is several days of work I didn't have this week, so let's hope I did it correctly.) This direct piece is the one that already tells you the cost is x^(1/3), better than the previous x^(1/2) and x^1, though it still means x = 10^60 is too big; 10^30 or 10^25 you might do, still much better than before. Then you cover the tail with intervals chosen multiplicatively: not of fixed length, but growing exponentially the further out you go. You take the ratio 1 + eta, where the small number (I called it eta rather than epsilon) is roughly log x over x^lambda; it obviously doesn't matter if you take 2 log x over x^lambda instead, but it should be a little bigger than 1/x^lambda and not too much bigger. And the interval index j goes from 1 up to roughly c * x^lambda, with the same lambda of about a third and c of order 1. So x^lambda appears twice: we already spent x^lambda terms on the direct part, and there's no point having many fewer or many more intervals than that, because the bottleneck is whichever count is bigger; both are elementary computations, so you might as well take them of the same general order. That's what forces the one third, to make it all work.

So now what we have is: one sum, exact, up to K_0; then a lot of these short intervals, reaching out to K_0 times (1 + eta)^j; and then the tail.
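Putting the pieces together, the schedule as I transcribed it from the board (the constant c and the exact form of eta are loose):

```latex
H(x) \;\approx\;
\underbrace{\sum_{k \le K_0} \frac{\sin(x/k)}{k}}_{\text{direct, } K_0 \approx x^{\lambda}}
\;+\; \sum_{j=1}^{J}\;
\underbrace{\sum_{K_{j-1} < k \le K_j} \frac{\sin(x/k)}{k}}_{\text{lemma, closed form}}
\;+\; \text{tail},
\qquad
K_j = K_0 (1+\eta)^j,\quad
\eta \approx \frac{\log x}{x^{\lambda}},\quad
J \approx c\, x^{\lambda},\quad
\lambda \approx \tfrac{1}{3}.
```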
For the tail you have two choices. Either you use Euler-Maclaurin, as we did the first time (remember, once you're very, very far out you can do that), or, depending how far you went, you simply ignore it. It actually turned out numerically that this method is fast enough that you can run it out so far that by then the tail is smaller than the error you're making anyway, and you can just drop it; in other words, use the short intervals essentially all the way to infinity and stop when the contributions get too small. But you can also stop earlier and do the tail by Euler-Maclaurin; that's more work, because you have to work out all the formulas, not make mistakes, and check with several variants.

Which brings up something good about all of these methods, and I've said this before in this course: the most important thing when you're computing is knowing that you aren't making mistakes. You get numbers to a thousand digits, great; your computer will always print out a thousand digits, but if they aren't the correct thousand digits, it's not much use, and nobody can just look at them and tell; if we knew the answer, we wouldn't be computing it. So it's very important, in every problem of this kind, that there are free parameters. Here there are several: how do I choose eta (log x, 2 log x, 1.3 log x?), how do I choose lambda? You change various things, independently, and if the two runs agree to 500 decimals, that's not a proof of anything, but it's moral certainty that you've got 500 decimals. If there's a unique method, "do exactly this", it might be proven, but you still might make a mistake; it's worrying. So it's always better to have choices.

And as I said, the experiments with PARI showed that the accuracy is what I promised at the beginning, some negative power x^(-a) with a as large as you like, and the a hardly plays any role in the cost; and the time, meaning the number of steps, is roughly x^(1/3 + epsilon), as opposed to the x^(1 + epsilon) and x^(1/2 + epsilon) that we had before. So this is the most efficient method. It's also the ugliest, because we've broken things up into different pieces. But still, the idea is one we all know: a smooth function on a short interval is roughly its constant value, then more accurately its linear part, then to higher accuracy the oscillating parabola, and so on. The twist here is that if you have something like e^(ix/t), the leading geometric part, a constant times a^n, can be summed in closed form, and the rest can be expanded as a power series; and each time you do that you gain one power, which is how one half became one third.

Okay, so those are the three methods that I was using. The fourth method, as I said at the beginning, is actually a little less efficient for this particular problem, back to lambda about a half, but it's very interesting. It's the method Monien was using for a different problem that we both looked at; in the end he had a much better approach than mine, I sort of dropped out of the project, and his paper came out twelve years ago or so. Maybe I'll mention that problem too; I still have lots of time for the fourth method. So the fourth method is, I already said the name, Monien's method. Monien is a physicist, but, well, I mean, he's a physicist, but
he's a very good physicist, and he's really a mathematician by now. He does really beautiful things in mathematics, both numerical ideas and theoretical ideas; he's a very surprising person, and he's certainly now as much a mathematician by anybody's standards as any straight mathematician, but he is a physicist by training and by many, many papers. His method is completely different. From my quick analysis, which I didn't do very carefully (for one thing because he did it carefully in his case, with an error analysis, but for a different problem from this one), it should come out at lambda about one half.

Let me state the problem as a functional. Let f be some nice function on R, real- or complex-valued (you could split it into real and imaginary parts), let's say analytic; and I'll assume it's even (there would be a variant if it's odd) and that it vanishes at zero. If I draw a rough picture, it starts at zero at the origin and does something smooth; I don't care at all what it does at infinity. And set

    S(f) = sum from k = 1 to infinity of f(1/k).

If you think about what I've already said, you realize this looks like an Euler-Maclaurin kind of situation, a sum over the half-lattice of positive k; and since f is even, instead of summing over positive k I could also write one half the sum over all k different from zero. So I'm basically summing an analytic function over a whole lattice, except that I've said nothing about what happens at infinity: the function f(1/x) near x = 0 can have an arbitrarily horrible singularity, so you can't simply say "okay, I'll use Poisson summation". But still, morally, the fact that f is even means this sum is much easier than if f were odd.

In our case, you can see from the original function (I've written it so many times now and erased it again), H(x) = sum of sin(x/k)/k, that, just as in things we did earlier in the course, the trick is to multiply by x: then x times each term is of the shape f evaluated at x/k, with f(t) = t sin t, which is indeed an even function vanishing at zero. More precisely, to fit the definition of S, take f(t) = x t sin(x t); then S(f) = x H(x). So that'll be the special case to which I'll apply the method, but the method is quite general, and the question is: compute S(f) intelligently, quickly, and accurately, for a function with the same problems as before. When k is extremely large there's no problem, because 1/k is in the region where f is tame; the problem is small k, where f(1/k) jumps around at random if f is oscillatory. If f were something really easy, like a polynomial, maybe you'd be okay.

The method is very surprising and very cute, and I'm glad I have time to present it in this course; it certainly fits in with asymptotic methods, and it's a nice thing that is not very well known (I assume, since I wouldn't otherwise know it). For every n >= 0 I define two polynomials, which I'll call C_n(x) and S_n(x); the names are supposed to remind you, as you'll see in a second, of cosine and sine. They have degree at most n; C_n is an even polynomial, as cosine would be, and S_n is odd,
so one of them has degree exactly n and the other exactly n - 1. I'll give a little table in a second, but here's the exact definition. With coefficients (depending on n)

    c_j = 2^j * C(n, j) / (j! * C(2n, j))

(obviously you can cancel some factorials and simplify, but that's the shortest way to write it), set

    C_n(x) = sum over even j, 0 <= j <= n, of (-1)^(j/2) * c_j * x^j,
    S_n(x) = sum over odd j, 0 <= j <= n, of (-1)^((j-1)/2) * c_j * x^j.

(On the board I first wrote these as sums over d with j = 2d or 2d + 1 and a Pochhammer symbol, but there's no point writing that down; this uniform version is exactly the same thing.) So these are some explicit polynomials, and they're extremely nice, extremely pretty, with some really nice properties. Let me take a minute to show how they look; here's a little table, two of them for each n. For n = 0, C_0 is even of degree at most zero, so a constant, and S_0 is odd, so it's zero:

    n = 0:  C_0 = 1,                          S_0 = 0
    n = 1:  C_1 = 1,                          S_1 = x
    n = 2:  C_2 = 1 - x^2/3,                  S_2 = x
    n = 3:  C_3 = 1 - 2x^2/5,                 S_3 = x - x^3/15
    n = 4:  C_4 = 1 - (3/7)x^2 + x^4/105,     S_4 = x - 2x^3/21
    n = 5:  C_5 = 1 - (4/9)x^2 + x^4/63,      S_5 = x - x^3/9 + x^5/945

I'll do just this many, while there still aren't so many terms. And you can compute the coefficients extremely easily using the recursion

    c_0 = 1,    c_{j+1} = c_j * (n - j) / ((j + 1) * (n - j/2)),

which is kind of trivial. So when you're actually computing these, you don't compute a bunch of binomial coefficients (we've had several variants of that earlier in the course); you get each c_j from its predecessor by two or three multiplications and divisions.

Okay, so that's what we have. Now, what do you do with these polynomials? What you do was very surprising to me. It's a kind of interpolation, like Lagrange, Newton, all the various schemes people know and we've discussed, but it's somehow different, because we're using these strange polynomials. So let me erase all of this up to here, so I have some space.
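A small sketch of the coefficient computation, using the recursion just stated rather than the binomial formula; monien_polys is my name for it.

```python
from mpmath import mp, mpf

mp.dps = 60

def monien_polys(n):
    """Ascending coefficient lists of C_n and S_n, built from c_0 = 1 and
       c_{j+1} = c_j (n - j) / ((j + 1)(n - j/2))."""
    c = [mpf(1)]
    for j in range(n):
        c.append(c[j] * (n - j) / ((j + 1) * (n - mpf(j) / 2)))
    C = [(-1) ** (j // 2) * c[j] if j % 2 == 0 else mpf(0) for j in range(n + 1)]
    S = [(-1) ** ((j - 1) // 2) * c[j] if j % 2 == 1 else mpf(0) for j in range(n + 1)]
    return C, S

# Reproduces the n = 5 row of the table:
# C_5 = 1 - (4/9)x^2 + x^4/63,  S_5 = x - x^3/9 + x^5/945
print(monien_polys(5))
```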
So I'll call the zeros πx_k — I renormalize them by π. Remember S_n is an odd function (and C_n an even one), so the zeros are symmetric: one in the middle and the rest in pairs. I number them x_k with |k| strictly less than n/2, where x_0 = 0 is the middle one, x_{−k} = −x_k, and they're in increasing order. So if there are, say, seven of them, I number them −3, −2, −1, 0, 1, 2, 3.

Then I define weights, with n still fixed:

w_k = C_n(πx_k) / S_n′(πx_k).

Since πx_k is a root of S_n, this is like a pole term — it's a residue, and you'll see in a second that it really is one.

And then Monien's approximation to S(f), which is really a pretty thing, is this. Because f is even and f(0) = 0, the k = 0 term drops out and the negative k pair up with the positive ones, so I only take 0 < k < n/2:

S_n(f) = Σ_{0<k<n/2} w_k f(1/x_k)  (+ a small correction).

Except that, as stated, it's only almost right. If n is odd, it is correct as it stands: you don't have to add anything. If n is even there's a small correction, but it's very easy: you add π²c₂/(n(n+1)). Careful — this c₂ is not the c₂ from that table; the correction has to depend linearly on f. Here c₂ is the coefficient of x² in the Taylor expansion of f at the origin: remember f is even, analytic, and vanishes at 0, so f(x) = c₂x² + c₄x⁴ + ···. So apart from this very trivial correction, the recipe is simply: take the values of f at the points 1/x_k, with the weights w_k. A somewhat non-obvious formula.

Let's first look at how this works and why it works, and then I'll tell you a little about where these zeros are, because that's very amusing. Roughly: for small fixed k, x_k is going to be very close to k, and w_k very close to 1, so the first few terms are just 1·f(1/k), the same terms as in the original sum. But then you take only about n/2 terms in all, with clever weights — and these are somehow the best weights you can choose. That's his idea. So the claim — and I don't really want to erase my table, in case I need it again — is that for n large, S_n(f) is an approximation to S(f) = Σ_{k=1}^∞ f(1/k). Let me explain why this works, and then a little about where these numbers lie. It's very nice.
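Putting the pieces together, here is a sketch of the whole recipe. The test function f(t) = t²/(1 + t²) is my choice, not the lecture's, picked because the answer is known in closed form: S(f) = Σ_{k≥1} 1/(k²+1) = (π coth π − 1)/2.

```python
from mpmath import mp, mpf, pi, coth, polyroots, polyval

mp.dps = 60
n = 40                                  # even, so the c2-correction enters

c = [mpf(1)]                            # the recursion from above
for j in range(n):
    c.append((n - j) / (n - mpf(j) / 2) * c[j] / (j + 1))
C = [(-1) ** (j // 2) * c[j] if j % 2 == 0 else mpf(0) for j in range(n + 1)]
S = [(-1) ** ((j - 1) // 2) * c[j] if j % 2 == 1 else mpf(0) for j in range(n + 1)]
dS = [(i + 1) * S[i + 1] for i in range(n)]          # coefficients of S_n'

desc = S[::-1]                          # highest power first, for polyroots
while desc[0] == 0:
    desc = desc[1:]
desc = desc[:-1]                        # factor out the root at x = 0
xs = sorted(r.real / pi for r in polyroots(desc, maxsteps=500, extraprec=200)
            if r.real > 0)              # the x_k with k > 0

f = lambda t: t**2 / (1 + t**2)         # even, analytic, f(0) = 0, c2 = 1
approx = sum(polyval(C[::-1], pi * xk) / polyval(dS[::-1], pi * xk) * f(1 / xk)
             for xk in xs)
approx += pi**2 / (n * (n + 1))         # the c2-correction (n even, c2 = 1)
print(approx - (pi * coth(pi) - 1) / 2) # tiny: far beyond ~20 naive terms
```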
Remember that C_n(x) and S_n(x) were defined with the same sequence of coefficients c_j — the simple binomial expression — taking either the even-index or the odd-index ones, with alternating signs. So, with n fixed throughout this lemma, consider the generating function of the c_j, where j runs simply from 0 to n, even and odd together; that's a polynomial. The identity, which is very easy to prove — it's an exercise I won't do — is

Σ_{j=0}^n c_j x^j = binom(−1/2, n)^{−1} · Σ_{p=0}^∞ binom(p − 1/2, n) x^p/p!.

Now replace x by ix in this and take the imaginary part. Remembering how C_n and S_n are built out of the c_j — with the signs (−1)^{j/2} for j even and (−1)^{(j−1)/2} for j odd — and that there's still an e^{ix} floating around, what you get looks like the addition law: cosine times sine minus sine times cosine. The result is the following exact formula:

C_n(x) sin x − S_n(x) cos x = [x^{2n+1}/(2n−1)!!] · Σ_{p=0}^∞ (−x²/2)^p / [p! (2n+2p+1)!!],

where, remember, the double factorial of an odd number is 1·3·5 ⋯ up to that number. These are elementary exercises. But the right-hand side is incredibly small when n is big, even if x is a bit big, because of the (2n−1)!! — and the first term, p = 0, carries yet another double factorial, so the leading term is roughly x^{2n+1}/[(2n−1)!!(2n+1)!!]. Incredibly small.

This means these two polynomials are best approximations, and in fact they literally are: this calculation implies that the ratio C_n(x)/S_n(x) is the Padé approximant — if you know the theory — to cosine over sine, that is, to cot x. Padé approximation: given any power series (I think I used this once in the course, when I talked about orthogonal polynomials), you approximate it, for each n, by a quotient of polynomials of degree at most n; there is a best choice, and you can get it from a continued fraction, which you would have here too, but I won't go into that. The proof that ours is the best is precisely that the error starts at order x^{2n+1}: you can't do better than that, since you have about n coefficients in the numerator and n in the denominator to play with. So once you've found something that achieves this order, you know you did it correctly.

And you see from this that, for x fixed and n large, the difference C_n(x)/S_n(x) − cot x behaves like x^{2n−1}/[(2n−1)!!(2n+1)!!] times a correction series in x².
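The exact formula is easy to check numerically — this is my own verification sketch, truncating the rapidly convergent p-sum:

```python
from mpmath import mp, mpf, sin, cos, polyval, fac2, factorial

mp.dps = 50
n, x = 6, mpf('0.7')

c = [mpf(1)]
for j in range(n):
    c.append((n - j) / (n - mpf(j) / 2) * c[j] / (j + 1))
C = [(-1) ** (j // 2) * c[j] if j % 2 == 0 else mpf(0) for j in range(n + 1)]
S = [(-1) ** ((j - 1) // 2) * c[j] if j % 2 == 1 else mpf(0) for j in range(n + 1)]

lhs = polyval(C[::-1], x) * sin(x) - polyval(S[::-1], x) * cos(x)
rhs = x**(2*n + 1) / fac2(2*n - 1) * sum(
    (-x**2 / 2)**p / (factorial(p) * fac2(2*n + 2*p + 1)) for p in range(40))
print(lhs, rhs)  # both ~ x^(2n+1)/((2n-1)!!(2n+1)!!), about 7e-12 here
```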
I'll just write one term of that correction series:

C_n(x)/S_n(x) − cot x = [x^{2n−1}/((2n−1)!!(2n+1)!!)] · [1 + (2(n+1)(2n−3))/(3(2n−1)(2n+3)) x² + ···],

and this series eventually converges. So if x is of order 1, the bracket is of order 1 — its terms stay of polynomial size — while the prefactor is extremely small because of the two double factorials. So that's the approximation, and now we take Monien's approximation. Let me go over here; we don't need those calculations any more, and in fact I no longer really need the exact definitions — only the polynomials C_n and S_n themselves.

Take the sum Σ_{0<k<n/2} w_k x_k^{−2m} — there's a slight clash with the letter k from before, never mind — with those weights w_k, dividing by a power of x_k. Remember (I hope it's still on the board) that the πx_k are the roots of the polynomial S_n, with k running from −n/2 to n/2, and w_k = C_n(πx_k)/S_n′(πx_k). Now consider the rational function

C_n(x)/S_n(x) · (π/x)^{2m}.

It has only simple poles, and the residue at x = πx_k is exactly C_n(πx_k)/S_n′(πx_k) · x_k^{−2m} = w_k x_k^{−2m} — that's why I normalized by π. Since x_{−k} = −x_k and the power is even, the sum over all k ≠ 0 is twice the sum over positive k; that's where a factor 1/2 comes in. The residue theorem says the sum of all the residues of a rational function is zero, so

Σ_{0<k<n/2} w_k x_k^{−2m} = −(1/2) [Res_{x=0} + Res_{x=∞}] of C_n(x)/S_n(x) · (π/x)^{2m},

and this is now an exact formula.

Now compute those two residues. The residue at infinity is very easy: C_n/S_n at infinity is of size x or 1/x depending on the parity of n, and (π/x)^{2m} is a negative power, so the residue is non-zero only when m = 1 and n is even, and then — from the leading coefficients — it equals 2π²/(n(n+1)). That is exactly what produces the correction term: the correction is nothing but the residue at infinity.

As for the residue at x = 0, we can use the fact that C_n(x)/S_n(x) is very close to cot x, whose Laurent expansion at 0 is the generating function of the zeta values:

cot x = 1/x − 2 Σ_{m≥1} ζ(2m) x^{2m−1}/π^{2m}.

The residue just picks out one coefficient: if we could replace the rational function C_n/S_n by cot x, the residue of cot x·(π/x)^{2m} at 0 would be exactly −2ζ(2m), and with the −1/2 in front we'd get simply ζ(2m) — modulo mistakes. But that's exactly what we want! Remember we're summing f(1/k) over k from 1 to infinity, so if f(x) is the monomial x^{2m}, the answer should be exactly ζ(2m); and the powers of π come out right precisely because I normalized the roots by a factor of π. So for an exact even power, the formula is on the nose, up to this one remaining piece: the difference between the residue at 0 of C_n/S_n·(π/x)^{2m} and that of cot x·(π/x)^{2m}.
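Again a sketch of my own, checking that the weighted sum reproduces ζ(2m) essentially exactly for small m, with the residue-at-infinity correction when n is even and m = 1:

```python
from mpmath import mp, mpf, pi, zeta, polyroots, polyval

mp.dps = 50
n = 30                                   # even on purpose

c = [mpf(1)]
for j in range(n):
    c.append((n - j) / (n - mpf(j) / 2) * c[j] / (j + 1))
C = [(-1) ** (j // 2) * c[j] if j % 2 == 0 else mpf(0) for j in range(n + 1)]
S = [(-1) ** ((j - 1) // 2) * c[j] if j % 2 == 1 else mpf(0) for j in range(n + 1)]
dS = [(i + 1) * S[i + 1] for i in range(n)]

desc = S[::-1]
while desc[0] == 0:
    desc = desc[1:]
desc = desc[:-1]                         # remove the root at x = 0
xs = sorted(r.real / pi for r in polyroots(desc, maxsteps=500, extraprec=200)
            if r.real > 0)
w = [polyval(C[::-1], pi * xk) / polyval(dS[::-1], pi * xk) for xk in xs]

for m in (1, 2, 5):
    s = sum(wk * xk**(-2 * m) for wk, xk in zip(w, xs))
    if m == 1:
        s += pi**2 / (n * (n + 1))       # residue at infinity (n even)
    print(m, s - zeta(2 * m))            # essentially zero while m < n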
And that difference is extremely close to zero. Now for the error; the result is this. Think of f as the monomial x^{2m}, m = 1, 2, 3, … — an even analytic function vanishing at 0. As long as m < n, you get an exact formula: S_n(f) equals S(f) on the nose, because we haven't thrown anything away — the difference between C_n/S_n and cot x starts only at order x^{2n−1}, so the coefficient the residue picks out is exactly the right one, and there is no error term at all. But once m is of size n — for instance m = n — the error is essentially the same product of two double factorials we had before, 1/[(2n−1)!!(2n+1)!!] (up to powers of π). One should do a careful error analysis, which I haven't, but you can see it's incredibly accurate.

So you get this very surprising formula: the infinite sum S(f) — well, apart from the stupid correction term coming from the x² coefficient, which you can almost ignore — is obtained by taking the same kind of sum as before, but instead of f(1/k) with weight 1 you take f(1/x_k) with weight w_k.

The last thing I have to tell you — and I still have a few minutes, maybe five, and I won't need them all — is how these zeros look. Where are they, and what are the weights? The weights tend rapidly to 1; I don't even have them written down. But where are the zeros? Remember they are x_0 = 0, then ±x_1 up to ±x_{k_max} with k_max < n/2, the πx_k being the roots of S_n. What you find — and I really did test this experimentally — is that for small k, up to around n/3, x_k is very close to k. Of course it has to be, or the method couldn't possibly work: we just showed that it works, and since f can be anything it wants to be, in the limit the individual terms had better be f(1/k) with weight 1 — so the x_k had better be close to k and the weights close to 1. But you'll see in a second that they're extremely close, and they stay close all the way up to about n/3, which I also find quite surprising.

So here's a table. I have lots of digits in my notes, but I'll only give you a bit of it. This is for S_100, so k goes from 1 to 49 (remember it ends at 49, because |k| is strictly less than n/2). Of course x_0 is exactly 0 — the polynomial is odd. Then, keeping just the main terms:

x_1  = 1 + 3·10^(−288)
x_2  = 2 + 8·10^(−217)
x_3  = 3 + 2·10^(−181)
x_10 = 10 + 2·10^(−78)
x_20 = 20 + 9·10^(−25)
x_30 = 30.007…
x_31 = 31.0507…
x_32 = 32.2…
x_33 = 33.55…
x_34 > 35
…
x_45 ≈ 103, x_46 ≈ 128, x_47 ≈ 171, x_48 ≈ 256, x_49 ≈ 511.

That first one is kind of nifty: when I say x_1 is close to 1, it's essentially equal to 1. By k = 2 we've already lost seventy digits, but even at k = 20 it's still extremely close. At k = 30, suddenly, it's no longer incredibly close; by k = 33 the deviation is no longer little-o of 1, and at k = 34 the zero is already bigger than 35. So, as I said, the zeros track the integers roughly up to n/3 — and the last one, as you can see, is huge. But the weights will be such that we don't care about these last ones; at least, I assume they behave like that.

The amusing thing is how strictly the small zeros are glued to where they should be — which of course has to do with the Padé approximation to cosine over sine, and sine vanishes exactly at the integer multiples of π. In fact, if k is much smaller than n — I don't know exactly how much smaller it has to be — there are exact asymptotics. I did this numerically, by the method I've shown many times in this course, but in this case one could certainly prove it. For a fixed small zero — say x_3 — one has

x_k = k + (πk/2)^{2n+1}/(n!)² · [1 − (K+1)/(4n) + (K² + 6K + 5)/(32n²) − ···],   where K = 2π²k²

(I have more terms; I'll leave it at these). So roughly, as long as k² is less than n, you're extremely close. And in the opposite direction, at the other end, k near n/2 — the biggest zero, x_{k_max}, where k_max = n/2 − 1 for n even — grows quadratically, as the table suggests; again this is experimental, but one could certainly prove it.
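The table can be reproduced along these lines — my sketch, with n = 60 instead of 100 to keep the runtime tame; the root-finding needs a lot of working precision and still takes a while:

```python
from mpmath import mp, mpf, pi, polyroots, nstr

mp.dps = 250
n = 60

c = [mpf(1)]
for j in range(n):
    c.append((n - j) / (n - mpf(j) / 2) * c[j] / (j + 1))
S = [(-1) ** ((j - 1) // 2) * c[j] if j % 2 == 1 else mpf(0) for j in range(n + 1)]

desc = S[::-1]
while desc[0] == 0:
    desc = desc[1:]
desc = desc[:-1]                          # remove the root at x = 0
roots = polyroots(desc, maxsteps=2000, extraprec=2000)
xs = sorted(r.real / pi for r in roots if r.real > 0)

for k in (1, 2, 3, 10, 15, 20, 25, 29):   # k_max = n/2 - 1 = 29
    print(k, nstr(xs[k - 1] - k, 5))
# x_k - k is absurdly small for small k, peels away around k ~ n/3,
# and the last zero is large (roughly quadratic in n).
```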
For n even it is about n(n+1)/(2π²) + 1/12 + O(1/n²) — remember, for n = 100 the largest zero was about 511, and 100·101/(2π²) is just about that — and for n odd it's similar, but no, not quite the same formula. So the last values are much bigger than n. And if you look near the end — at the j-th zero from the top, x_{(n/2)−j} — there is an exact asymptotic formula there as well.

Well, that's the end of this method, the end of today's lecture, and therefore the end of the course. I gave four very different methods for studying the same problem. None of them is terribly important, and the problem isn't terribly important either; the point was to show a variety of methods you can use to study awkward sums, and each of these methods could be used in other contexts.

Maybe I'll take a minute, since I still have a few, to say what the problem was that Monien was originally studying, the one he wrote a paper on using this method. I'll state it quickly, because it also makes a nice exercise if you want to test your knowledge of asymptotic methods. We define some so-called Hankel determinants Δ_n — determinants of matrices of a very special form, which I'll define slightly differently for even and odd cases. You take the zeta values: in the one case the matrix has first row ζ(2), ζ(3), …, ζ(n), and so on down to the last row ζ(n), …, ζ(2n−2) — it's symmetric, constant along the antidiagonals; in the other case you start one step later, with first row ζ(3), ζ(4), …, ζ(n+1), down to ζ(n+1), …, ζ(2n−1).

Not very surprisingly, these numbers are very small: after all, the zeta values are all exponentially close to 1, so the matrix is exponentially close to the all-ones matrix, whose determinant is certainly zero. But it's extremely small. I have all the numbers here, but I'll give you just one: Δ_100, for instance, is 3.4 times 10 to the minus — anybody want to guess how small? — 3371.

Which means: Pari will happily compute zeta values, and it will happily compute a 100-by-100 determinant, but the problem is that you have to compute the individual entries to more than 3400 digits. You have n! products, with signs, each product of order 1, and the sum of all of them is of order 10^{−3400} — so it's already numerically a bit tricky. Maybe I'll just leave it as an exercise, with only a hint. Exercise: Δ_n is asymptotic to — what? The only hint is that the answer has a different form for n even and for n odd. So when you do this on the computer and try to interpolate, using the interpolation method I showed you, comparing n with n+1 gets you into a mess: you'd better compare n with n+2. That's a small hint, but of course it tells you something. And certainly Δ_n is more than exponentially small, and more than factorially small — factorially small would be like n^{−n}, and this goes to zero much faster even than that. So the question is to prove the asymptotics.

And what Monien actually wanted is not just the determinant, which is the product of the eigenvalues — the matrix is real symmetric, so the eigenvalues are real — he wanted information about all the eigenvalues. I worked on that quite a bit; I have ten pages of notes.
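If you want to try the exercise, here is a starting point — my sketch, under my reading of the definition (entries ζ(i+j) for 1 ≤ i, j ≤ n−1), including the hint of comparing n with n+2 rather than with n+1:

```python
from mpmath import mp, zeta, matrix, det, log

mp.dps = 400          # the entries must carry far more digits than
                      # the (absurdly small) size of the answer

def delta(n):
    """Hankel determinant of zeta values: det(zeta(i+j)), 1 <= i,j <= n-1."""
    M = matrix(n - 1, n - 1)
    for i in range(1, n):
        for j in range(1, n):
            M[i - 1, j - 1] = zeta(i + j)
    return det(M)

print(delta(20))      # already astonishingly small

# Stay inside one parity class when extrapolating: the log-ratios
# n -> n+2 vary smoothly, while mixing parities (n -> n+1) makes a mess.
for n in (10, 12, 14, 16):
    print(n, log(delta(n + 2) / delta(n)))
```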
It was all experimental — I couldn't prove much of anything — but there were all kinds of beautiful things about the behavior of these eigenvalues, and it's another fun problem if anybody is looking for one, although it was basically solved by Monien in that paper a dozen years ago. Still, it's a very nice problem, and I want to end with this one beautiful arithmetic thing: the fact that these determinants are so incredibly small. If you want the correct asymptotics, I can happily write them down — let's say up to relative order, a multiplicative factor 1 + O(1/n⁴), so several terms of the asymptotic expansion; actually no, I went further, to O(1/n¹¹). In fact, after you take out the prefactors, what remains is an even power series, a series in 1/n², so that's only five or six terms. So if anybody wants to test their skills now on some very tricky numerical analysis, that's a good one to do, and it's a good exercise in Pari.

Okay, so I'm all finished — it's exactly three thirty — if anybody has a question, including the people still left on Zoom.